Chapter 4. Management of users on the Ceph dashboard
Chapter 4. Management of users on the Ceph dashboard As a storage administrator, you can create, edit, and delete users with specific roles on the Red Hat Ceph Storage dashboard. Role-based access control is granted to each user according to their assigned roles and requirements. This section covers the following administrative tasks: Creating users on the Ceph dashboard . Editing users on the Ceph dashboard . Deleting users on the Ceph dashboard . 4.1. Creating users on the Ceph dashboard You can create users on the Red Hat Ceph Storage dashboard with the roles and permissions appropriate to their responsibilities. For example, if you want a user to manage Ceph Object Gateway operations, you can assign the rgw-manager role to that user. Prerequisites A running Red Hat Ceph Storage cluster. Dashboard is installed. Admin level of access to the Dashboard. Note The Red Hat Ceph Storage Dashboard does not support any email verification when changing a user's password. This behavior is intentional, because the Dashboard supports Single Sign-On (SSO) and this feature can be delegated to the SSO provider. Procedure Log in to the Dashboard. Click the Dashboard Settings icon and then click User management . On the Users tab, click Create . In the Create User window, set the Username and other parameters, including the roles, and then click Create User . You get a notification that the user was created successfully. Additional Resources See the Creating roles on the Ceph dashboard section in the Red Hat Ceph Storage Dashboard Guide for more details. See the User roles and permissions on the Ceph dashboard section in the Red Hat Ceph Storage Dashboard Guide for more details. 4.2. Editing users on the Ceph dashboard You can edit users on the Red Hat Ceph Storage dashboard, modifying a user's password and roles as required. Prerequisites A running Red Hat Ceph Storage cluster. Dashboard is installed. Admin level of access to the Dashboard. User created on the dashboard. Procedure Log in to the Dashboard. Click the Dashboard Settings icon and then click User management . On the Users tab, click the row of the user that you want to edit, and then select Edit from the Edit drop-down menu. In the Edit User window, edit parameters such as the password and roles, and then click Edit User . Note If you want to disable a user's access to the Ceph dashboard, you can clear the Enabled option in the Edit User window. You get a notification that the user was updated successfully. Additional Resources See the Creating users on the Ceph dashboard section in the Red Hat Ceph Storage Dashboard Guide for more details. 4.3. Deleting users on the Ceph dashboard You can delete users on the Ceph dashboard. For example, when a user is removed from the organization, you can delete that user's access from the Ceph dashboard. Prerequisites A running Red Hat Ceph Storage cluster. Dashboard is installed. Admin level of access to the Dashboard. User created on the dashboard. Procedure Log in to the Dashboard. Click the Dashboard Settings icon and then click User management . On the Users tab, click the row of the user that you want to delete, and then select Delete from the Edit drop-down menu. In the Delete User dialog box, select the Yes, I am sure box, and then click Delete User to save the settings. Additional Resources See the Creating users on the Ceph dashboard section in the Red Hat Ceph Storage Dashboard Guide for more details.
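Although this chapter describes the dashboard UI workflow, the same user records can also be managed from the command line through the Ceph Manager's dashboard access-control commands. The following is a minimal sketch, not part of this chapter's procedure: the user name cephuser, the rgw-manager role, and the password file path are illustrative values, and exact flags can vary between Ceph releases, so check ceph dashboard ac-user-create --help on your cluster first.

# Create a password file, then a dashboard user with the rgw-manager role
echo 'Str0ngP@ssw0rd' > /tmp/dashboard-password.txt
ceph dashboard ac-user-create cephuser -i /tmp/dashboard-password.txt rgw-manager

# Inspect, change roles for, and delete the same user
ceph dashboard ac-user-show cephuser
ceph dashboard ac-user-set-roles cephuser read-only
ceph dashboard ac-user-delete cephuser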
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/5/html/dashboard_guide/management-of-users-on-the-ceph-dashboard
Chapter 16. Provisioning real-time and low latency workloads
Chapter 16. Provisioning real-time and low latency workloads Many organizations need high performance computing and low, predictable latency, especially in the financial and telecommunications industries. OpenShift Container Platform provides the Node Tuning Operator to implement automatic tuning to achieve low latency performance and consistent response time for OpenShift Container Platform applications. You use the performance profile configuration to make these changes. You can update the kernel to kernel-rt, reserve CPUs for cluster and operating system housekeeping duties, including pod infra containers, isolate CPUs for application containers to run the workloads, and disable unused CPUs to reduce power consumption. Note When writing your applications, follow the general recommendations described in RHEL for Real Time processes and threads . Additional resources Tuning nodes for low latency with the performance profile 16.1. Scheduling a low latency workload onto a worker with real-time capabilities You can schedule low latency workloads onto a worker node where a performance profile that configures real-time capabilities is applied. Note To schedule the workload on specific nodes, use label selectors in the Pod custom resource (CR). The label selectors must match the nodes that are attached to the machine config pool that was configured for low latency by the Node Tuning Operator. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in as a user with cluster-admin privileges. You have applied a performance profile in the cluster that tunes worker nodes for low latency workloads. Procedure Create a Pod CR for the low latency workload and apply it in the cluster, for example: Example Pod spec configured to use real-time processing apiVersion: v1 kind: Pod metadata: name: dynamic-low-latency-pod annotations: cpu-quota.crio.io: "disable" 1 cpu-load-balancing.crio.io: "disable" 2 irq-load-balancing.crio.io: "disable" 3 spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: dynamic-low-latency-pod image: "registry.redhat.io/openshift4/cnf-tests-rhel8:v4.15" command: ["sleep", "10h"] resources: requests: cpu: 2 memory: "200M" limits: cpu: 2 memory: "200M" securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] nodeSelector: node-role.kubernetes.io/worker-cnf: "" 4 runtimeClassName: performance-dynamic-low-latency-profile 5 # ... 1 Disables the CPU completely fair scheduler (CFS) quota at the pod run time. 2 Disables CPU load balancing. 3 Opts the pod out of interrupt handling on the node. 4 The nodeSelector label must match the label that you specify in the Node CR. 5 runtimeClassName must match the name of the performance profile configured in the cluster. Enter the pod runtimeClassName in the form performance-<profile_name>, where <profile_name> is the name from the PerformanceProfile YAML. In the example, the name is performance-dynamic-low-latency-profile . Ensure the pod is running correctly. 
Status should be Running , and the correct cnf-worker node should be set: $ oc get pod -o wide Expected output NAME READY STATUS RESTARTS AGE IP NODE dynamic-low-latency-pod 1/1 Running 0 5h33m 10.131.0.10 cnf-worker.example.com Get the CPUs that the pod configured for IRQ dynamic load balancing runs on: $ oc exec -it dynamic-low-latency-pod -- /bin/bash -c "grep Cpus_allowed_list /proc/self/status | awk '{print $2}'" Expected output Cpus_allowed_list: 2-3 Verification Ensure the node configuration is applied correctly. Log in to the node to verify the configuration. $ oc debug node/<node-name> Verify that you can use the node file system: sh-4.4# chroot /host Expected output sh-4.4# Ensure the default system CPU affinity mask does not include the dynamic-low-latency-pod CPUs, for example, CPUs 2 and 3. sh-4.4# cat /proc/irq/default_smp_affinity Example output 33 Ensure the system IRQs are not configured to run on the dynamic-low-latency-pod CPUs: sh-4.4# find /proc/irq/ -name smp_affinity_list -exec sh -c 'i="$1"; mask=$(cat $i); file=$(echo $i); echo $file: $mask' _ {} \; Example output /proc/irq/0/smp_affinity_list: 0-5 /proc/irq/1/smp_affinity_list: 5 /proc/irq/2/smp_affinity_list: 0-5 /proc/irq/3/smp_affinity_list: 0-5 /proc/irq/4/smp_affinity_list: 0 /proc/irq/5/smp_affinity_list: 0-5 /proc/irq/6/smp_affinity_list: 0-5 /proc/irq/7/smp_affinity_list: 0-5 /proc/irq/8/smp_affinity_list: 4 /proc/irq/9/smp_affinity_list: 4 /proc/irq/10/smp_affinity_list: 0-5 /proc/irq/11/smp_affinity_list: 0 /proc/irq/12/smp_affinity_list: 1 /proc/irq/13/smp_affinity_list: 0-5 /proc/irq/14/smp_affinity_list: 1 /proc/irq/15/smp_affinity_list: 0 /proc/irq/24/smp_affinity_list: 1 /proc/irq/25/smp_affinity_list: 1 /proc/irq/26/smp_affinity_list: 1 /proc/irq/27/smp_affinity_list: 5 /proc/irq/28/smp_affinity_list: 1 /proc/irq/29/smp_affinity_list: 0 /proc/irq/30/smp_affinity_list: 0-5 Warning When you tune nodes for low latency, the usage of execution probes in conjunction with applications that require guaranteed CPUs can cause latency spikes. Use other probes, such as a properly configured set of network probes, as an alternative. Additional resources Placing pods on specific nodes using node selectors Assigning pods to nodes 16.2. Creating a pod with a guaranteed QoS class Keep the following in mind when you create a pod that is given a QoS class of Guaranteed : Every container in the pod must have a memory limit and a memory request, and they must be the same. Every container in the pod must have a CPU limit and a CPU request, and they must be the same. The following example shows the configuration file for a pod that has one container. The container has a memory limit and a memory request, both equal to 200 MiB. The container has a CPU limit and a CPU request, both equal to 1 CPU. apiVersion: v1 kind: Pod metadata: name: qos-demo namespace: qos-example spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: qos-demo-ctr image: <image-pull-spec> resources: limits: memory: "200Mi" cpu: "1" requests: memory: "200Mi" cpu: "1" securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] Create the pod: $ oc apply -f qos-pod.yaml --namespace=qos-example View detailed information about the pod: $ oc get pod qos-demo --namespace=qos-example --output=yaml Example output spec: containers: ...
status: qosClass: Guaranteed Note If you specify a memory limit for a container, but do not specify a memory request, OpenShift Container Platform automatically assigns a memory request that matches the limit. Similarly, if you specify a CPU limit for a container, but do not specify a CPU request, OpenShift Container Platform automatically assigns a CPU request that matches the limit. 16.3. Disabling CPU load balancing in a Pod Functionality to disable or enable CPU load balancing is implemented on the CRI-O level. The code in CRI-O disables or enables CPU load balancing only when the following requirements are met. The pod must use the performance-<profile-name> runtime class. You can get the proper name by looking at the status of the performance profile, as shown here: apiVersion: performance.openshift.io/v2 kind: PerformanceProfile ... status: ... runtimeClass: performance-manual Note Currently, disabling CPU load balancing is not supported with cgroup v2. The Node Tuning Operator is responsible for the creation of the high-performance runtime handler config snippet under relevant nodes and for creation of the high-performance runtime class under the cluster. It will have the same content as the default runtime handler except that it enables the CPU load balancing configuration functionality. To disable the CPU load balancing for the pod, the Pod specification must include the following fields: apiVersion: v1 kind: Pod metadata: #... annotations: #... cpu-load-balancing.crio.io: "disable" #... #... spec: #... runtimeClassName: performance-<profile_name> #... Note Only disable CPU load balancing when the CPU manager static policy is enabled and for pods with guaranteed QoS that use whole CPUs. Otherwise, disabling CPU load balancing can affect the performance of other containers in the cluster. 16.4. Disabling power saving mode for high priority pods You can configure pods to ensure that high priority workloads are unaffected when you configure power saving for the node that the workloads run on. When you configure a node with a power saving configuration, you must configure high priority workloads with performance configuration at the pod level, which means that the configuration applies to all the cores used by the pod. By disabling P-states and C-states at the pod level, you can configure high priority workloads for best performance and lowest latency. Table 16.1. Configuration for high priority workloads Annotation Possible Values Description cpu-c-states.crio.io: "enable" "disable" "max_latency:microseconds" This annotation allows you to enable or disable C-states for each CPU. Alternatively, you can also specify a maximum latency in microseconds for the C-states. For example, enable C-states with a maximum latency of 10 microseconds with the setting cpu-c-states.crio.io: "max_latency:10" . Set the value to "disable" to provide the best performance for a pod. cpu-freq-governor.crio.io: Any supported cpufreq governor . Sets the cpufreq governor for each CPU. The "performance" governor is recommended for high priority workloads. Prerequisites You have configured power saving in the performance profile for the node where the high priority workload pods are scheduled. Procedure Add the required annotations to your high priority workload pods. The annotations override the default settings. Example high priority workload annotation apiVersion: v1 kind: Pod metadata: #... annotations: #... cpu-c-states.crio.io: "disable" cpu-freq-governor.crio.io: "performance" #... #...
spec: #... runtimeClassName: performance-<profile_name> #... Restart the pods to apply the annotation. Additional resources Configuring power saving for nodes that run colocated high and low priority workloads 16.5. Disabling CPU CFS quota To eliminate CPU throttling for pinned pods, create a pod with the cpu-quota.crio.io: "disable" annotation. This annotation disables the CPU completely fair scheduler (CFS) quota when the pod runs. Example pod specification with cpu-quota.crio.io disabled apiVersion: v1 kind: Pod metadata: annotations: cpu-quota.crio.io: "disable" spec: runtimeClassName: performance-<profile_name> #... Note Only disable CPU CFS quota when the CPU manager static policy is enabled and for pods with guaranteed QoS that use whole CPUs. For example, pods that contain CPU-pinned containers. Otherwise, disabling CPU CFS quota can affect the performance of other containers in the cluster. Additional resources Recommended firmware configuration for vDU cluster hosts 16.6. Disabling interrupt processing for CPUs where pinned containers are running To achieve low latency for workloads, some containers require that the CPUs they are pinned to do not process device interrupts. A pod annotation, irq-load-balancing.crio.io , is used to define whether device interrupts are processed or not on the CPUs where the pinned containers are running. When configured, CRI-O disables device interrupts where the pod containers are running. To disable interrupt processing for CPUs where containers belonging to individual pods are pinned, ensure that globallyDisableIrqLoadBalancing is set to false in the performance profile. Then, in the pod specification, set the irq-load-balancing.crio.io pod annotation to disable . The following pod specification contains this annotation: apiVersion: v1 kind: Pod metadata: annotations: irq-load-balancing.crio.io: "disable" spec: runtimeClassName: performance-<profile_name> ... Additional resources Managing device interrupt processing for guaranteed pod isolated CPUs
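All of the per-pod annotations in this chapter depend on two values that are easy to get wrong: the pod's runtimeClassName must match the runtime class published in the PerformanceProfile status (section 16.3), and the pod must hold the Guaranteed QoS class (section 16.2). The following sketch cross-checks both, assuming the dynamic-low-latency-profile and dynamic-low-latency-pod names used in the examples above:

# Read the runtime class name published by the Node Tuning Operator
$ oc get performanceprofile dynamic-low-latency-profile -o jsonpath='{.status.runtimeClass}'

# Confirm that the pod was granted the Guaranteed QoS class
$ oc get pod dynamic-low-latency-pod -o jsonpath='{.status.qosClass}'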
[ "apiVersion: v1 kind: Pod metadata: name: dynamic-low-latency-pod annotations: cpu-quota.crio.io: \"disable\" 1 cpu-load-balancing.crio.io: \"disable\" 2 irq-load-balancing.crio.io: \"disable\" 3 spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: dynamic-low-latency-pod image: \"registry.redhat.io/openshift4/cnf-tests-rhel8:v4.15\" command: [\"sleep\", \"10h\"] resources: requests: cpu: 2 memory: \"200M\" limits: cpu: 2 memory: \"200M\" securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] nodeSelector: node-role.kubernetes.io/worker-cnf: \"\" 4 runtimeClassName: performance-dynamic-low-latency-profile 5", "oc get pod -o wide", "NAME READY STATUS RESTARTS AGE IP NODE dynamic-low-latency-pod 1/1 Running 0 5h33m 10.131.0.10 cnf-worker.example.com", "oc exec -it dynamic-low-latency-pod -- /bin/bash -c \"grep Cpus_allowed_list /proc/self/status | awk '{print USD2}'\"", "Cpus_allowed_list: 2-3", "oc debug node/<node-name>", "sh-4.4# chroot /host", "sh-4.4#", "sh-4.4# cat /proc/irq/default_smp_affinity", "33", "sh-4.4# find /proc/irq/ -name smp_affinity_list -exec sh -c 'i=\"USD1\"; mask=USD(cat USDi); file=USD(echo USDi); echo USDfile: USDmask' _ {} \\;", "/proc/irq/0/smp_affinity_list: 0-5 /proc/irq/1/smp_affinity_list: 5 /proc/irq/2/smp_affinity_list: 0-5 /proc/irq/3/smp_affinity_list: 0-5 /proc/irq/4/smp_affinity_list: 0 /proc/irq/5/smp_affinity_list: 0-5 /proc/irq/6/smp_affinity_list: 0-5 /proc/irq/7/smp_affinity_list: 0-5 /proc/irq/8/smp_affinity_list: 4 /proc/irq/9/smp_affinity_list: 4 /proc/irq/10/smp_affinity_list: 0-5 /proc/irq/11/smp_affinity_list: 0 /proc/irq/12/smp_affinity_list: 1 /proc/irq/13/smp_affinity_list: 0-5 /proc/irq/14/smp_affinity_list: 1 /proc/irq/15/smp_affinity_list: 0 /proc/irq/24/smp_affinity_list: 1 /proc/irq/25/smp_affinity_list: 1 /proc/irq/26/smp_affinity_list: 1 /proc/irq/27/smp_affinity_list: 5 /proc/irq/28/smp_affinity_list: 1 /proc/irq/29/smp_affinity_list: 0 /proc/irq/30/smp_affinity_list: 0-5", "apiVersion: v1 kind: Pod metadata: name: qos-demo namespace: qos-example spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: qos-demo-ctr image: <image-pull-spec> resources: limits: memory: \"200Mi\" cpu: \"1\" requests: memory: \"200Mi\" cpu: \"1\" securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]", "oc apply -f qos-pod.yaml --namespace=qos-example", "oc get pod qos-demo --namespace=qos-example --output=yaml", "spec: containers: status: qosClass: Guaranteed", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile status: runtimeClass: performance-manual", "apiVersion: v1 kind: Pod metadata: # annotations: # cpu-load-balancing.crio.io: \"disable\" # # spec: # runtimeClassName: performance-<profile_name> #", "apiVersion: v1 kind: Pod metadata: # annotations: # cpu-c-states.crio.io: \"disable\" cpu-freq-governor.crio.io: \"performance\" # # spec: # runtimeClassName: performance-<profile_name> #", "apiVersion: v1 kind: Pod metadata: annotations: cpu-quota.crio.io: \"disable\" spec: runtimeClassName: performance-<profile_name> #", "apiVersion: performance.openshift.io/v2 kind: Pod metadata: annotations: irq-load-balancing.crio.io: \"disable\" spec: runtimeClassName: performance-<profile_name>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/scalability_and_performance/cnf-provisioning-low-latency-workloads
Preface
Preface Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.9/html/getting_started_with_camel_k/pr01
Chapter 12. File systems and storage
Chapter 12. File systems and storage 12.1. File systems 12.1.1. Btrfs has been removed The Btrfs file system has been removed in Red Hat Enterprise Linux 8. This includes the following components: The btrfs.ko kernel module The btrfs-progs package The snapper package You can no longer create, mount, or install on Btrfs file systems in Red Hat Enterprise Linux 8. The Anaconda installer and the Kickstart commands no longer support Btrfs. 12.1.2. XFS now supports shared copy-on-write data extents The XFS file system supports shared copy-on-write data extent functionality. This feature enables two or more files to share a common set of data blocks. When either of the files sharing common blocks changes, XFS breaks the link to common blocks and creates a new file. This is similar to the copy-on-write (COW) functionality found in other file systems. Shared copy-on-write data extents are: Fast Creating shared copies does not utilize disk I/O. Space-efficient Shared blocks do not consume additional disk space. Transparent Files sharing common blocks act like regular files. Userspace utilities can use shared copy-on-write data extents for: Efficient file cloning, such as with the cp --reflink command Per-file snapshots This functionality is also used by kernel subsystems such as Overlayfs and NFS for more efficient operation. Shared copy-on-write data extents are now enabled by default when creating an XFS file system, starting with the xfsprogs package version 4.17.0-2.el8 . Note that Direct Access (DAX) devices currently do not support XFS with shared copy-on-write data extents. To create an XFS file system without this feature, use the following command: mkfs.xfs -m reflink=0 block-device Red Hat Enterprise Linux 7 can mount XFS file systems with shared copy-on-write data extents only in the read-only mode. 12.1.3. The ext4 file system now supports metadata checksums With this update, ext4 metadata is protected by checksums. This enables the file system to recognize corrupt metadata, which avoids damage and increases file system resilience. 12.1.4. The /etc/sysconfig/nfs file and legacy NFS service names are no longer available In Red Hat Enterprise Linux 8.0, the NFS configuration has moved from the /etc/sysconfig/nfs configuration file, which was used in Red Hat Enterprise Linux 7, to /etc/nfs.conf . The /etc/nfs.conf file uses a different syntax. Red Hat Enterprise Linux 8 attempts to automatically convert all options from /etc/sysconfig/nfs to /etc/nfs.conf when upgrading from Red Hat Enterprise Linux 7. Both configuration files are supported in Red Hat Enterprise Linux 7. Red Hat recommends that you use the new /etc/nfs.conf file to make NFS configuration in all versions of Red Hat Enterprise Linux compatible with automated configuration systems. Additionally, the following NFS service aliases have been removed and replaced by their upstream names: nfs.service , replaced by nfs-server.service nfs-secure.service , replaced by rpc-gssd.service rpcgssd.service , replaced by rpc-gssd.service nfs-idmap.service , replaced by nfs-idmapd.service rpcidmapd.service , replaced by nfs-idmapd.service nfs-lock.service , replaced by rpc-statd.service nfslock.service , replaced by rpc-statd.service 12.2. Storage 12.2.1. The BOOM boot manager simplifies the process of creating boot entries BOOM is a boot manager for Linux systems that use boot loaders supporting the BootLoader Specification for boot entry configuration. 
It enables flexible boot configuration and simplifies the creation of new or modified boot entries: for example, to boot snapshot images of the system created using LVM. BOOM does not modify the existing boot loader configuration, and only inserts additional entries. The existing configuration is maintained, and any distribution integration, such as kernel installation and update scripts, continue to function as before. BOOM has a simplified command-line interface (CLI) and API that ease the task of creating boot entries. 12.2.2. Stratis is now available Stratis is a new local storage manager. It provides managed file systems on top of pools of storage with additional features to the user. Stratis enables you to more easily perform storage tasks such as: Manage snapshots and thin provisioning Automatically grow file system sizes as needed Maintain file systems To administer Stratis storage, use the stratis utility, which communicates with the stratisd background service. Stratis is provided as a Technology Preview. For more information, see the Stratis documentation: Setting up Stratis file systems . 12.2.3. LUKS2 is now the default format for encrypting volumes In RHEL 8, the LUKS version 2 (LUKS2) format replaces the legacy LUKS (LUKS1) format. The dm-crypt subsystem and the cryptsetup tool now use LUKS2 as the default format for encrypted volumes. LUKS2 provides encrypted volumes with metadata redundancy and auto-recovery in case of a partial metadata corruption event. Due to the internal flexible layout, LUKS2 is also an enabler of future features. It supports auto-unlocking through the generic kernel-keyring token built into libcryptsetup, which allows users to unlock LUKS2 volumes using a passphrase stored in the kernel-keyring retention service. Other notable enhancements include: The protected key setup using the wrapped key cipher scheme. Easier integration with Policy-Based Decryption (Clevis). Up to 32 key slots - LUKS1 provides only 8 key slots. For more details, see the cryptsetup(8) and cryptsetup-reencrypt(8) man pages. 12.2.4. Multiqueue scheduling on block devices Block devices now use multiqueue scheduling in Red Hat Enterprise Linux 8. This enables the block layer performance to scale well with fast solid-state drives (SSDs) and multi-core systems. The SCSI Multiqueue ( scsi-mq ) driver is now enabled by default, and the kernel boots with the scsi_mod.use_blk_mq=Y option. This change is consistent with the upstream Linux kernel. Device Mapper Multipath (DM Multipath) requires the scsi-mq driver to be active. 12.2.5. VDO now supports all architectures Virtual Data Optimizer (VDO) is now available on all of the architectures supported by RHEL 8. 12.2.6. VDO no longer supports read cache The read cache functionality has been removed from Virtual Data Optimizer (VDO). The read cache is always disabled on VDO volumes, and you can no longer enable it using the --readCache option of the vdo utility. Red Hat might reintroduce the VDO read cache in a later Red Hat Enterprise Linux release, using a different implementation. 12.2.7. The dmraid package has been removed The dmraid package has been removed from Red Hat Enterprise Linux 8. Users requiring support for combined hardware and software RAID host bus adapters (HBA) should use the mdadm utility, which supports native MD software RAID, the SNIA RAID Common Disk Data Format (DDF), and the Intel(R) Matrix Storage Manager (IMSM) formats. 12.2.8. 
Software FCoE and Fibre Channel no longer support the target mode Software FCoE: NIC Software FCoE target functionality is removed in Red Hat Enterprise Linux 8.0. Fibre Channel no longer supports the target mode. Target mode is disabled for the qla2xxx QLogic Fibre Channel driver in Red Hat Enterprise Linux 8.0. For more information, see FCoE software removal . 12.2.9. The detection of marginal paths in DM Multipath has been improved The multipathd service now supports improved detection of marginal paths. This helps multipath devices avoid paths that are likely to fail repeatedly, and improves performance. Marginal paths are paths with persistent but intermittent I/O errors. The following options in the /etc/multipath.conf file control marginal paths behavior: marginal_path_double_failed_time marginal_path_err_sample_time marginal_path_err_rate_threshold marginal_path_err_recheck_gap_time DM Multipath disables a path and tests it with repeated I/O for the configured sample time if: the listed multipath.conf options are set, a path fails twice in the configured time, and other paths are available. If the path has more than the configured error rate during this testing, DM Multipath ignores it for the configured gap time, and then retests it to see if it is working well enough to be reinstated. For more information, see the multipath.conf man page on your system. 12.2.10. New overrides section of the DM Multipath configuration file The /etc/multipath.conf file now includes an overrides section that allows you to set a configuration value for all of your devices. These attributes are used by DM Multipath for all devices unless they are overwritten by the attributes specified in the multipaths section of the /etc/multipath.conf file for paths that contain the device. This functionality replaces the all_devs parameter of the devices section of the configuration file, which is no longer supported. 12.2.11. NVMe/FC is fully supported on Broadcom Emulex and Marvell Qlogic Fibre Channel adapters The NVMe over Fibre Channel (NVMe/FC) transport type is now fully supported in Initiator mode when used with Broadcom Emulex and Marvell Qlogic Fibre Channel 32Gbit adapters that feature NVMe support. NVMe over Fibre Channel is an additional fabric transport type for the Nonvolatile Memory Express (NVMe) protocol, in addition to the Remote Direct Memory Access (RDMA) protocol that was previously introduced in Red Hat Enterprise Linux. Enabling NVMe/FC: To enable NVMe/FC in the lpfc driver, edit the /etc/modprobe.d/lpfc.conf file and add the following option: lpfc_enable_fc4_type=3 To enable NVMe/FC in the qla2xxx driver, edit the /etc/modprobe.d/qla2xxx.conf file and add the following option: qla2xxx.ql2xnvmeenable=1 Additional restrictions: NVMe clustering is not supported with NVMe/FC. kdump is not supported with NVMe/FC. Booting from Storage Area Network (SAN) NVMe/FC is not supported. 12.2.12. Support for Data Integrity Field/Data Integrity Extension (DIF/DIX) DIF/DIX is an addition to the SCSI Standard. It remains in Technology Preview for all HBAs and storage arrays, except for those specifically listed as supported. DIF/DIX increases the size of the commonly used 512 byte disk block from 512 to 520 bytes, adding the Data Integrity Field (DIF). The DIF stores a checksum value for the data block that is calculated by the Host Bus Adapter (HBA) when a write occurs. The storage device then confirms the checksum on receipt, and stores both the data and the checksum. 
Conversely, when a read occurs, the checksum can be verified by the storage device, and by the receiving HBA. 12.2.13. libstoragemgmt-netapp-plugin has been removed The libstoragemgmt-netapp-plugin package used by the libStorageMgmt library has been removed. It is no longer supported because: The package requires the NetApp 7-mode API, which is being phased out by NetApp. Because RHEL 8 removed default support for the TLSv1.0 protocol with the TLS_RSA_WITH_3DES_EDE_CBC_SHA cipher, using this plug-in with TLS does not work. 12.2.14. Removal of Cylinder-Head-Sector addressing from sfdisk and cfdisk Cylinder-Head-Sector (CHS) addressing is no longer useful for modern storage devices. It has been removed as an option from the sfdisk and cfdisk commands. Since RHEL 8, you cannot use the following options: -C, --cylinders number -H, --heads number -S, --sectors number For more information, see the sfdisk(8) and cfdisk(8) man pages. 12.3. LVM 12.3.1. Removal of clvmd for managing shared storage devices LVM no longer uses clvmd (cluster lvm daemon) for managing shared storage devices. Instead, LVM now uses lvmlockd (lvm lock daemon). For details about using lvmlockd , see the lvmlockd(8) man page on your system. For details about using shared storage in general, see the lvmsystemid(7) man page on your system. For information about using LVM in a Pacemaker cluster, see the help screen for the LVM-activate resource agent. For an example of a procedure to configure a shared logical volume in a Red Hat High Availability cluster, see Configuring a GFS2 file system in a cluster . 12.3.2. Removal of lvmetad daemon LVM no longer uses the lvmetad daemon for caching metadata, and will always read metadata from disk. LVM disk reading has been reduced, which reduces the benefits of caching. Previously, autoactivation of logical volumes was indirectly tied to the use_lvmetad setting in the lvm.conf configuration file. The correct way to disable autoactivation continues to be setting auto_activation_volume_list in the lvm.conf file. 12.3.3. LVM can no longer manage devices formatted with the GFS pool volume manager or the lvm1 metadata format If you created your logical volume before Red Hat Enterprise Linux 4 was introduced, this change may affect you. Volume groups using the lvm1 format should be converted to the lvm2 format using the vgconvert command. 12.3.4. LVM libraries and LVM Python bindings have been removed The lvm2app library and LVM Python bindings, which were provided by the lvm2-python-libs package, have been removed. Red Hat recommends the following solutions instead: The LVM D-Bus API in combination with the lvm2-dbusd service. This requires using Python version 3. The LVM command-line utilities with JSON formatting; this formatting has been available since the lvm2 package version 2.02.158. The libblockdev library, included in AppStream, for C/C++ You must port any applications using the removed libraries and bindings to the D-Bus API before upgrading to Red Hat Enterprise Linux 8. 12.3.5. The ability to mirror the log for LVM mirrors has been removed The mirrored log feature of mirrored LVM volumes has been removed. Red Hat Enterprise Linux (RHEL) 8 no longer supports creating or activating LVM volumes with a mirrored mirror log. The recommended replacements are: RAID1 LVM volumes. 
The main advantage of RAID1 volumes is their ability to work even in degraded mode and to recover after a transient failure. Disk mirror log. To convert a mirrored mirror log to a disk mirror log, use the following command: lvconvert --mirrorlog disk my_vg/my_lv .
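Because shared copy-on-write data extents are now the XFS default (section 12.1.2), a quick way to see the feature in action is to reflink-clone a file and confirm that disk usage barely changes. This is a minimal sketch; the device /dev/sdb1 and mount point /mnt are hypothetical placeholders:

# Reflink support is on by default with xfsprogs 4.17.0-2.el8 or later
mkfs.xfs /dev/sdb1
mount /dev/sdb1 /mnt

# Write 512 MiB of data, then clone it without copying the data blocks
dd if=/dev/urandom of=/mnt/original bs=1M count=512
cp --reflink=always /mnt/original /mnt/clone

# Usage grows only by metadata; blocks stay shared until one copy is modified
df -h /mnt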
[ "mkfs.xfs -m reflink=0 block-device", "lpfc_enable_fc4_type=3", "qla2xxx.ql2xnvmeenable=1" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/considerations_in_adopting_rhel_8/file-systems-and-storage_considerations-in-adopting-RHEL-8
Integrating Red Hat Process Automation Manager with other products and components
Integrating Red Hat Process Automation Manager with other products and components Red Hat Process Automation Manager 7.13
null
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/integrating_red_hat_process_automation_manager_with_other_products_and_components/index
Chapter 2. Basic Red Hat build of Keycloak deployment
Chapter 2. Basic Red Hat build of Keycloak deployment 2.1. Performing a basic Red Hat build of Keycloak deployment This chapter describes how to perform a basic Red Hat build of Keycloak deployment on OpenShift using the Operator. 2.1.1. Preparing for deployment Once the Red Hat build of Keycloak Operator is installed and running in the cluster namespace, you can set up the other deployment prerequisites. Database Hostname TLS Certificate and associated keys 2.1.1.1. Database A database should be available and accessible from the cluster namespace where Red Hat build of Keycloak is installed. For a list of supported databases, see Configuring the database . The Red Hat build of Keycloak Operator does not manage the database and you need to provision it yourself. Consider verifying your cloud provider offering or using a database operator. For development purposes, you can use an ephemeral PostgreSQL pod installation. To provision it, follow the approach below: Create YAML file example-postgres.yaml : apiVersion: apps/v1 kind: StatefulSet metadata: name: postgresql-db spec: serviceName: postgresql-db-service selector: matchLabels: app: postgresql-db replicas: 1 template: metadata: labels: app: postgresql-db spec: containers: - name: postgresql-db image: postgres:latest volumeMounts: - mountPath: /data name: cache-volume env: - name: POSTGRES_PASSWORD value: testpassword - name: PGDATA value: /data/pgdata - name: POSTGRES_DB value: keycloak volumes: - name: cache-volume emptyDir: {} --- apiVersion: v1 kind: Service metadata: name: postgres-db spec: selector: app: postgresql-db type: LoadBalancer ports: - port: 5432 targetPort: 5432 Apply the changes: oc apply -f example-postgres.yaml 2.1.1.2. Hostname For a production ready installation, you need a hostname that can be used to contact Red Hat build of Keycloak. See Configuring the hostname for the available configurations. For development purposes, this chapter will use test.keycloak.org . 2.1.1.3. TLS Certificate and key Contact your Certificate Authority to obtain the certificate and the key. For development purposes, you can enter this command to obtain a self-signed certificate: openssl req -subj '/CN=test.keycloak.org/O=Test Keycloak./C=US' -newkey rsa:2048 -nodes -keyout key.pem -x509 -days 365 -out certificate.pem You should install it in the cluster namespace as a Secret by entering this command: oc create secret tls example-tls-secret --cert certificate.pem --key key.pem 2.1.2. Deploying Red Hat build of Keycloak To deploy Red Hat build of Keycloak, you create a Custom Resource (CR) based on the Keycloak Custom Resource Definition (CRD). Consider storing the Database credentials in a separate Secret. Enter the following commands: oc create secret generic keycloak-db-secret \ --from-literal=username=[your_database_username] \ --from-literal=password=[your_database_password] You can customize several fields using the Keycloak CRD. 
For a basic deployment, you can stick to the following approach: Create YAML file example-kc.yaml : apiVersion: k8s.keycloak.org/v2alpha1 kind: Keycloak metadata: name: example-kc spec: instances: 1 db: vendor: postgres host: postgres-db usernameSecret: name: keycloak-db-secret key: username passwordSecret: name: keycloak-db-secret key: password http: tlsSecret: example-tls-secret hostname: hostname: test.keycloak.org Apply the changes: oc apply -f example-kc.yaml To check that the Red Hat build of Keycloak instance has been provisioned in the cluster, check the status of the created CR by entering the following command: oc get keycloaks/example-kc -o go-template='{{range .status.conditions}}CONDITION: {{.type}}{{"\n"}} STATUS: {{.status}}{{"\n"}} MESSAGE: {{.message}}{{"\n"}}{{end}}' When the deployment is ready, look for output similar to the following: CONDITION: Ready STATUS: true MESSAGE: CONDITION: HasErrors STATUS: false MESSAGE: CONDITION: RollingUpdate STATUS: false MESSAGE: 2.1.3. Accessing the Red Hat build of Keycloak deployment The Red Hat build of Keycloak deployment is exposed through a basic Ingress and is accessible through the provided hostname. On installations with multiple default IngressClass instances, or when running on OpenShift 4.12+, provide an ingressClassName by setting the className property in the ingress spec to the desired class name: Edit YAML file example-kc.yaml : apiVersion: k8s.keycloak.org/v2alpha1 kind: Keycloak metadata: name: example-kc spec: ... ingress: className: openshift-default If the default ingress does not fit your use case, disable it by setting the enabled property in the ingress spec to false: Edit YAML file example-kc.yaml : apiVersion: k8s.keycloak.org/v2alpha1 kind: Keycloak metadata: name: example-kc spec: ... ingress: enabled: false Apply the changes: oc apply -f example-kc.yaml You can provide an alternative ingress resource pointing to the service <keycloak-cr-name>-service . For debugging and development purposes, consider directly connecting to the Red Hat build of Keycloak service using a port forward. For example, enter this command: oc port-forward service/example-kc-service 8443:8443 2.1.4. Accessing the Admin Console When deploying Red Hat build of Keycloak, the operator generates an arbitrary initial admin username and password and stores those credentials as a basic-auth Secret object in the same namespace as the CR. Warning Change the default admin credentials and enable MFA in Red Hat build of Keycloak before going to production. To fetch the initial admin credentials, you have to read and decode the Secret. The Secret name is derived from the Keycloak CR name plus the fixed suffix -initial-admin . To get the username and password for the example-kc CR, enter the following commands: oc get secret example-kc-initial-admin -o jsonpath='{.data.username}' | base64 --decode oc get secret example-kc-initial-admin -o jsonpath='{.data.password}' | base64 --decode You can use those credentials to access the Admin Console or the Admin REST API.
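To verify end to end that the deployment is usable, you can combine the credential lookup above with a token request against the standard Keycloak token endpoint. This is a hedged sketch: it assumes the example-kc CR and the test.keycloak.org hostname used throughout this chapter, and the -k flag is only acceptable with the self-signed development certificate:

# Read the generated initial admin credentials
ADMIN_USER=$(oc get secret example-kc-initial-admin -o jsonpath='{.data.username}' | base64 --decode)
ADMIN_PASS=$(oc get secret example-kc-initial-admin -o jsonpath='{.data.password}' | base64 --decode)

# Request an access token from the master realm using the built-in admin-cli client
curl -sk \
  -d "grant_type=password" -d "client_id=admin-cli" \
  -d "username=${ADMIN_USER}" -d "password=${ADMIN_PASS}" \
  "https://test.keycloak.org/realms/master/protocol/openid-connect/token"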
[ "apiVersion: apps/v1 kind: StatefulSet metadata: name: postgresql-db spec: serviceName: postgresql-db-service selector: matchLabels: app: postgresql-db replicas: 1 template: metadata: labels: app: postgresql-db spec: containers: - name: postgresql-db image: postgres:latest volumeMounts: - mountPath: /data name: cache-volume env: - name: POSTGRES_PASSWORD value: testpassword - name: PGDATA value: /data/pgdata - name: POSTGRES_DB value: keycloak volumes: - name: cache-volume emptyDir: {} --- apiVersion: v1 kind: Service metadata: name: postgres-db spec: selector: app: postgresql-db type: LoadBalancer ports: - port: 5432 targetPort: 5432", "apply -f example-postgres.yaml", "openssl req -subj '/CN=test.keycloak.org/O=Test Keycloak./C=US' -newkey rsa:2048 -nodes -keyout key.pem -x509 -days 365 -out certificate.pem", "create secret tls example-tls-secret --cert certificate.pem --key key.pem", "create secret generic keycloak-db-secret --from-literal=username=[your_database_username] --from-literal=password=[your_database_password]", "apiVersion: k8s.keycloak.org/v2alpha1 kind: Keycloak metadata: name: example-kc spec: instances: 1 db: vendor: postgres host: postgres-db usernameSecret: name: keycloak-db-secret key: username passwordSecret: name: keycloak-db-secret key: password http: tlsSecret: example-tls-secret hostname: hostname: test.keycloak.org", "apply -f example-kc.yaml", "get keycloaks/example-kc -o go-template='{{range .status.conditions}}CONDITION: {{.type}}{{\"\\n\"}} STATUS: {{.status}}{{\"\\n\"}} MESSAGE: {{.message}}{{\"\\n\"}}{{end}}'", "CONDITION: Ready STATUS: true MESSAGE: CONDITION: HasErrors STATUS: false MESSAGE: CONDITION: RollingUpdate STATUS: false MESSAGE:", "apiVersion: k8s.keycloak.org/v2alpha1 kind: Keycloak metadata: name: example-kc spec: ingress: className: openshift-default", "apiVersion: k8s.keycloak.org/v2alpha1 kind: Keycloak metadata: name: example-kc spec: ingress: enabled: false", "apply -f example-kc.yaml", "port-forward service/example-kc-service 8443:8443", "get secret example-kc-initial-admin -o jsonpath='{.data.username}' | base64 --decode get secret example-kc-initial-admin -o jsonpath='{.data.password}' | base64 --decode" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/22.0/html/operator_guide/basic-deployment-
Chapter 2. Red Hat Developer Toolset 12.1 Release
Chapter 2. Red Hat Developer Toolset 12.1 Release 2.1. Features 2.1.1. List of Components Red Hat Developer Toolset 12.1 provides the following components: Development Tools GNU Compiler Collection (GCC) binutils elfutils dwz make annobin Debugging Tools GNU Debugger (GDB) strace ltrace memstomp Performance Monitoring Tools SystemTap Valgrind OProfile Dyninst For details, see the Red Hat Developer Toolset Components table in the Red Hat Developer Toolset User Guide . 2.1.2. Changes in Red Hat Developer Toolset 12.1 All components in Red Hat Developer Toolset 12.1 are distributed with the devtoolset-12- prefix and only for Red Hat Enterprise Linux 7. The following components have been upgraded in Red Hat Developer Toolset 12.1 compared to the previous release of Red Hat Developer Toolset: GCC to version 12.2.1 annobin to version 11.08 In addition, a security update is available for binutils . For detailed information on changes in Red Hat Developer Toolset 12.1, see Red Hat Developer Toolset User Guide . 2.1.3. Container Images The following container images have been updated with Red Hat Developer Toolset: rhscl/devtoolset-12-perftools-rhel7 rhscl/devtoolset-12-toolchain-rhel7 For more information, see the Red Hat Developer Toolset Images chapter in Using Red Hat Software Collections Container Images . Note that only the latest version of each container image is supported. 2.2. Known Issues dyninst component, BZ# 1763157 Dyninst 12 is provided only for the AMD64 and Intel 64 architectures. gcc component, BZ# 1731555 Executable files created with Red Hat Developer Toolset are dynamically linked in a nonstandard way. As a consequence, Fortran code cannot handle input/output (I/O) operations asynchronously even if this functionality is requested. To work around this problem, link the libgfortran library statically with the -static-libgfortran option to enable asynchronous I/O operations in Fortran code. Note that Red Hat discourages static linking for security reasons. gcc component, BZ# 1570853 In Red Hat Developer Toolset, libraries are linked via linker scripts which might specify some symbols through static archives. This is required to ensure compatibility with multiple versions of Red Hat Enterprise Linux. However, the linker scripts use names of the respective shared object files. As a consequence, the linker uses different symbol handling rules than expected, and does not recognize symbols required by object files when the option adding the library is specified before options specifying the object files, such as: gcc -lsomelib objfile.o Such use of a library from Red Hat Developer Toolset results in linker error messages undefined reference to symbol . To enable successful symbol resolution and linking, follow the standard linking practice and specify the option adding the library after the options specifying the object files: gcc objfile.o -lsomelib Note that this recommendation applies when using the version of GCC available as a part of Red Hat Enterprise Linux, too. gcc component, BZ# 1433946 GCC in Red Hat Developer Toolset 3.x contained the libasan package, which might have conflicted with the system version of libasan . As a consequence, depending on which libasan was present in the system, the -fsanitize=address tool worked only either with the system GCC or with the Red Hat Developer Toolset version of GCC , but not with both at the same time. To prevent the described conflict, in Red Hat Developer Toolset 4.x and later versions, the package was renamed to libasanN , where N is a number. 
However, if the Red Hat Software Collections repository is enabled, the problem can occur after the system update because the system version of libasan is available in an earlier version than the Red Hat Developer Toolset 3.x version, which is still available in the repository. To work around this problem, exclude this package while updating: ~]$ yum update --exclude=libasan oprofile component OProfile 1.3.0 and OProfile 1.2.0 shipped in Red Hat Developer Toolset work on all supported architectures, with the exception of IBM Z, where only the ocount tool works on the following models: z196, zEC12, and z13. operf and the other tools, such as oparchive or opannotate , do not work on IBM Z. For profiling purposes, users are recommended to use the Red Hat Enterprise Linux 7 system OProfile 0.9.9 version, which supports opcontrol with TIMER software interrupts. Note that for correct reporting of data collected by OProfile 0.9.9 , the corresponding opreport utility is necessary. Thus, opcontrol -based profiling should be performed with Red Hat Developer Toolset disabled because the reporting tools from Red Hat Developer Toolset cannot process data collected within opcontrol legacy mode correctly. valgrind component, BZ# 869184 The default Valgrind gdbserver support ( --vgdb=yes ) can cause certain register and flag values to be out of date due to optimizations done by the Valgrind core. The GDB utility is therefore unable to show certain parameters or variables of programs running under Valgrind . To work around this problem, use the --vgdb=full parameter. Note that programs might run slower under Valgrind when this parameter is used. multiple components The devtoolset- version - package_name -debuginfo packages can conflict with the corresponding packages from the base Red Hat Enterprise Linux system or from other versions of Red Hat Developer Toolset. This namely applies to devtoolset- version -gcc-debuginfo , devtoolset- version -ltrace-debuginfo , devtoolset- version -valgrind-debuginfo , and might apply to other debuginfo packages, too. A similar conflict can also occur in a multilib environment, where 64-bit debuginfo packages conflict with 32-bit debuginfo packages. For example, on Red Hat Enterprise Linux 7, devtoolset-7-gcc-debuginfo conflicts with three packages: gcc-base-debuginfo , gcc-debuginfo , and gcc-libraries-debuginfo . On Red Hat Enterprise Linux 6, devtoolset-7-gcc-debuginfo conflicts with one package: gcc-libraries-debuginfo . As a consequence, if conflicting debuginfo packages are installed, attempts to install Red Hat Developer Toolset can fail with a transaction check error message similar to the following examples: file /usr/lib/debug/usr/lib64/libitm.so.1.0.0.debug from install of gcc-base-debuginfo-4.8.5-16.el7.x86_64 conflicts with file from package devtoolset-7-gcc-debuginfo-7.2.1-1.el7.x86_64 file /usr/lib/debug/usr/lib64/libtsan.so.0.0.0.debug from install of gcc-debuginfo-4.8.5-16.el7.x86_64 conflicts with file from package devtoolset-7-gcc-debuginfo-7.2.1-1.el7.x86_64 file /usr/lib/debug/usr/lib64/libitm.so.1.0.0.debug from install of devtoolset-7-gcc-debuginfo-7.2.1-1.el6.x86_64 conflicts with file from package gcc-libraries-debuginfo-7.1.1-2.3.1.el6_9.x86_64 To work around the problem, manually uninstall the conflicting debuginfo packages prior to installing Red Hat Developer Toolset 12.1. It is advisable to install only the relevant debuginfo packages when necessary, and to expect such conflicts. Other Notes Red Hat Developer Toolset primarily aims to provide a compiler for development of user applications for deployment on multiple versions of Red Hat Enterprise Linux. Operating system components, kernel modules and device drivers generally correspond to a specific version of Red Hat Enterprise Linux, for which the supplied base OS compiler is recommended. Red Hat Developer Toolset 12.1 supports only C, C++ and Fortran development. For other languages, invoke the system version of GCC available on Red Hat Enterprise Linux. 
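Because Red Hat Developer Toolset compilers are enabled per shell through scl, it is easy to lose track of which GCC a build is actually using. The following is a quick, hedged sanity check; the installation path shown in the comment reflects the standard software collection layout rather than a value quoted from this guide:

~]$ gcc --version                              # system GCC from the base RHEL release
~]$ scl enable devtoolset-12 'gcc --version'   # Red Hat Developer Toolset 12.1 GCC
~]$ scl enable devtoolset-12 'which gcc'       # expected: /opt/rh/devtoolset-12/root/usr/bin/gcc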
Building an application with Red Hat Developer Toolset 12.1 on Red Hat Enterprise Linux (for example, Red Hat Enterprise Linux 7) and then executing that application on an earlier release (such as Red Hat Enterprise Linux 6.7.z) may result in runtime errors due to differences in non-toolchain components between Red Hat Enterprise Linux releases. Users are advised to check compatibility carefully. Red Hat supports only execution of an application built with Red Hat Developer Toolset on the same, or a later, supported release of Red Hat Enterprise Linux than the version used to build that application. Valgrind must be rebuilt without Red Hat Developer Toolset's GCC installed, or it will be used in preference to Red Hat Enterprise Linux system GCC . The binary files shipped by Red Hat are built using the system GCC . For any testing, Red Hat Developer Toolset's GDB should be used. All code in the non-shared library libstdc++_nonshared.a in Red Hat Developer Toolset 12.1 is licensed under the GNU General Public License v3 with additional permissions granted under Section 7, described in the GCC Runtime Library Exception version 3.1, as published by the Free Software Foundation. The compiler included in Red Hat Developer Toolset emits newer DWARF debugging records than compilers available on Red Hat Enterprise Linux. These new debugging records improve the debugging experience in a variety of ways, particularly for C++ and optimized code. However, certain tools are not yet capable of handling the newer DWARF debug records. To generate the older style debugging records, use the options -gdwarf-2 -gstrict-dwarf or -gdwarf-3 -gstrict-dwarf . Some newer library features are statically linked into applications built with Red Hat Developer Toolset to support execution on multiple versions of Red Hat Enterprise Linux. This adds a small additional security risk because regular Red Hat Enterprise Linux errata would not change this code. If the need for developers to rebuild their applications due to such an issue arises, Red Hat will signal this in a security erratum. Developers are strongly advised not to statically link their entire application for the same reasons. Note that error messages related to a missing libitm library when using the -fgnu-tm option require the libitm package to be installed. You can install the package with the following command: yum install libitm To use the ccache utility with GCC included in Red Hat Developer Toolset, set your environment correctly. For example: ~]$ scl enable devtoolset-12 '/usr/lib64/ccache/gcc -c foo.c' Alternatively, you can create a shell with the Red Hat Developer Toolset version of GCC as the default compiler: ~]$ scl enable devtoolset-12 'bash' After you have created the shell, run the following two commands: ~]$ export PATH=/usr/lib64/ccache${PATH:+:${PATH}} ~]$ gcc -c foo.c Because the elfutils libraries contained in Red Hat Developer Toolset 12.1 are linked to a client application statically, caution is advised when passing handles to libelf , libdw , and libasm data structures to external code and when passing handles received from external code to libelf , libdw , and libasm . Be especially careful when an external library, which is linked dynamically against the system version of elfutils , is passed a pointer to a structure that comes from the Red Hat Developer Toolset 12.1 version of elfutils (or vice versa). Generally, data structures used in the Red Hat Developer Toolset 12.1 version of elfutils are not compatible with the Red Hat Enterprise Linux system versions, and structures coming from one should never be touched by the other. 
In applications that use the Red Hat Developer Toolset 12.1 libraries, all code that was linked against the system version of the libraries should be recompiled against the libraries included in Red Hat Developer Toolset 12.1. The elfutils EBL library, which is used internally by libdw , was amended not to open back ends dynamically. Instead, a selection of back ends is compiled in the library itself: the 32-bit AMD and Intel architecture, AMD64 and Intel 64 systems, Intel Itanium, IBM Z, 32-bit IBM Power Systems, 64-bit IBM Power Systems, IBM POWER, big endian, and the 64-bit ARM architecture. Some functionality may not be available if the client wishes to work with ELF files from architectures other than those mentioned above. Some packages managed by the scl utility include privileged services that require sudo . The system sudo clears environment variables and so Red Hat Developer Toolset includes its own sudo shell script, wrapping scl enable . This script does not currently parse or pass normal sudo options, only sudo COMMAND ARGS ... . In order to use the system version of sudo from within a Red Hat Developer Toolset-enabled shell, use the /usr/bin/sudo binary. Intel has issued erratum HSW136 concerning TSX (Transactional Synchronization Extensions) instructions. Under certain circumstances, software using the Intel TSX instructions may result in unpredictable behavior. TSX instructions may be executed by applications built with Red Hat Developer Toolset GCC under certain conditions. These include use of GCC 's experimental Transactional Memory support (using the -fgnu-tm option) when executed on hardware with TSX instructions enabled. The users of Red Hat Developer Toolset are advised to exercise further caution when experimenting with Transactional Memory at this time, or to disable TSX instructions by applying an appropriate hardware or firmware update. To use the Memory Protection Extensions (MPX) feature in GCC , the Red Hat Developer Toolset version of the libmpx library is required, otherwise the application might not link properly. The two binutils linkers, gold and ld , have different ways of handling hidden symbols, which leads to incompatibilities in their behavior. Previously, the gold and ld linkers had inconsistent and incorrect behavior with regard to shared libraries and hidden symbols. There were two scenarios: If a shared library referenced a symbol that existed elsewhere in both hidden and non-hidden versions, the gold linker produced a bogus warning message about the hidden version. If a shared library referenced a symbol that existed elsewhere only as a hidden symbol, the gold linker created an executable, even though it could not work. The gold linker has been updated so that it no longer issues bogus warning messages about hidden symbols that also exist in a non-hidden version. The second scenario cannot be solved in the linker. It is up to the programmer to ensure that a non-hidden version of the symbol is available when the application is run. As a result, the two linkers' behavior is closer, but they still differ in case of a reference to a hidden symbol that cannot be found elsewhere in a non-hidden version. Unfortunately, there is not a single correct behavior for this situation, so the linkers are allowed to differ. The valgrind-openmpi subpackage is no longer provided with Valgrind in Red Hat Developer Toolset. 
The devtoolset-<version>-valgrind-openmpi subpackages previously caused incompatibility issues with various Red Hat Enterprise Linux minor releases and problems with rebuilding. Users are recommended to use the latest Red Hat Enterprise Linux system version of the valgrind and valgrind-openmpi packages if they need to run Valgrind against their programs that are built against the openmpi-devel libraries. The stap-server binary has not been provided with SystemTap since Red Hat Developer Toolset 12. BZ# 2099259
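Two of the compiler workarounds called out above, older-style DWARF records and static libgfortran linking, both reduce to extra compiler flags. The following is a minimal sketch, with foo.c and async.f90 as hypothetical source files:

# Emit older-style DWARF debug records for tools that cannot read the newer ones
~]$ scl enable devtoolset-12 'gcc -gdwarf-2 -gstrict-dwarf -c foo.c'

# Work around BZ# 1731555: link libgfortran statically for asynchronous Fortran I/O
~]$ scl enable devtoolset-12 'gfortran -static-libgfortran -o async async.f90'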
[ "gcc -lsomelib objfile.o", "gcc objfile.o -lsomelib", "~]USD yum update --exclude=libasan", "file /usr/lib/debug/usr/lib64/libitm.so.1.0.0.debug from install of gcc-base-debuginfo-4.8.5-16.el7.x86_64 conflicts with file from package devtoolset-7-gcc-debuginfo-7.2.1-1.el7.x86_64", "file /usr/lib/debug/usr/lib64/libtsan.so.0.0.0.debug from install of gcc-debuginfo-4.8.5-16.el7.x86_64 conflicts with file from package devtoolset-7-gcc-debuginfo-7.2.1-1.el7.x86_64", "file /usr/lib/debug/usr/lib64/libitm.so.1.0.0.debug from install of devtoolset-7-gcc-debuginfo-7.2.1-1.el6.x86_64 conflicts with file from package gcc-libraries-debuginfo-7.1.1-2.3.1.el6_9.x86_64", "install libitm", "~]USD scl enable devtoolset-12 '/usr/lib64/ccache/gcc -c foo.c '", "~]USD scl enable devtoolset-12 'bash'", "~]USD export PATH=/usr/lib64/ccacheUSD{PATH:+:USD{PATH}}", "~]USD gcc -c foo.c" ]
https://docs.redhat.com/en/documentation/red_hat_developer_toolset/12/html/12.1_release_notes/dts12.1_release
Deploying the Shared File Systems service with native CephFS
Deploying the Shared File Systems service with native CephFS Red Hat OpenStack Platform 16.2 Understanding, using, and managing the Shared File Systems service with native CephFS in Red Hat OpenStack Platform OpenStack Documentation Team [email protected]
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/deploying_the_shared_file_systems_service_with_native_cephfs/index
Chapter 19. NetworkPolicy [networking.k8s.io/v1]
Chapter 19. NetworkPolicy [networking.k8s.io/v1] Description NetworkPolicy describes what network traffic is allowed for a set of Pods Type object 19.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object NetworkPolicySpec provides the specification of a NetworkPolicy 19.1.1. .spec Description NetworkPolicySpec provides the specification of a NetworkPolicy Type object Required podSelector Property Type Description egress array egress is a list of egress rules to be applied to the selected pods. Outgoing traffic is allowed if there are no NetworkPolicies selecting the pod (and cluster policy otherwise allows the traffic), OR if the traffic matches at least one egress rule across all of the NetworkPolicy objects whose podSelector matches the pod. If this field is empty then this NetworkPolicy limits all outgoing traffic (and serves solely to ensure that the pods it selects are isolated by default). This field is beta-level in 1.8 egress[] object NetworkPolicyEgressRule describes a particular set of traffic that is allowed out of pods matched by a NetworkPolicySpec's podSelector. The traffic must match both ports and to. This type is beta-level in 1.8 ingress array ingress is a list of ingress rules to be applied to the selected pods. Traffic is allowed to a pod if there are no NetworkPolicies selecting the pod (and cluster policy otherwise allows the traffic), OR if the traffic source is the pod's local node, OR if the traffic matches at least one ingress rule across all of the NetworkPolicy objects whose podSelector matches the pod. If this field is empty then this NetworkPolicy does not allow any traffic (and serves solely to ensure that the pods it selects are isolated by default) ingress[] object NetworkPolicyIngressRule describes a particular set of traffic that is allowed to the pods matched by a NetworkPolicySpec's podSelector. The traffic must match both ports and from. podSelector LabelSelector podSelector selects the pods to which this NetworkPolicy object applies. The array of ingress rules is applied to any pods selected by this field. Multiple network policies can select the same set of pods. In this case, the ingress rules for each are combined additively. This field is NOT optional and follows standard label selector semantics. An empty podSelector matches all pods in this namespace. policyTypes array (string) policyTypes is a list of rule types that the NetworkPolicy relates to. Valid options are ["Ingress"], ["Egress"], or ["Ingress", "Egress"]. If this field is not specified, it will default based on the existence of ingress or egress rules; policies that contain an egress section are assumed to affect egress, and all policies (whether or not they contain an ingress section) are assumed to affect ingress. 
If you want to write an egress-only policy, you must explicitly specify policyTypes [ "Egress" ]. Likewise, if you want to write a policy that specifies that no egress is allowed, you must specify a policyTypes value that includes "Egress" (since such a policy would not include an egress section and would otherwise default to just [ "Ingress" ]). This field is beta-level in 1.8 19.1.2. .spec.egress Description egress is a list of egress rules to be applied to the selected pods. Outgoing traffic is allowed if there are no NetworkPolicies selecting the pod (and cluster policy otherwise allows the traffic), OR if the traffic matches at least one egress rule across all of the NetworkPolicy objects whose podSelector matches the pod. If this field is empty then this NetworkPolicy limits all outgoing traffic (and serves solely to ensure that the pods it selects are isolated by default). This field is beta-level in 1.8 Type array 19.1.3. .spec.egress[] Description NetworkPolicyEgressRule describes a particular set of traffic that is allowed out of pods matched by a NetworkPolicySpec's podSelector. The traffic must match both ports and to. This type is beta-level in 1.8 Type object Property Type Description ports array ports is a list of destination ports for outgoing traffic. Each item in this list is combined using a logical OR. If this field is empty or missing, this rule matches all ports (traffic not restricted by port). If this field is present and contains at least one item, then this rule allows traffic only if the traffic matches at least one port in the list. ports[] object NetworkPolicyPort describes a port to allow traffic on to array to is a list of destinations for outgoing traffic of pods selected for this rule. Items in this list are combined using a logical OR operation. If this field is empty or missing, this rule matches all destinations (traffic not restricted by destination). If this field is present and contains at least one item, this rule allows traffic only if the traffic matches at least one item in the to list. to[] object NetworkPolicyPeer describes a peer to allow traffic to/from. Only certain combinations of fields are allowed 19.1.4. .spec.egress[].ports Description ports is a list of destination ports for outgoing traffic. Each item in this list is combined using a logical OR. If this field is empty or missing, this rule matches all ports (traffic not restricted by port). If this field is present and contains at least one item, then this rule allows traffic only if the traffic matches at least one port in the list. Type array 19.1.5. .spec.egress[].ports[] Description NetworkPolicyPort describes a port to allow traffic on Type object Property Type Description endPort integer endPort indicates that the range of ports from port to endPort if set, inclusive, should be allowed by the policy. This field cannot be defined if the port field is not defined or if the port field is defined as a named (string) port. The endPort must be equal to or greater than port. port IntOrString port represents the port on the given protocol. This can either be a numerical or named port on a pod. If this field is not provided, this matches all port names and numbers. If present, only traffic on the specified protocol AND port will be matched. protocol string protocol represents the protocol (TCP, UDP, or SCTP) which traffic must match. If not specified, this field defaults to TCP. Possible enum values: - "SCTP" is the SCTP protocol. - "TCP" is the TCP protocol. - "UDP" is the UDP protocol. 
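As an illustration of the policyTypes behavior described above, the following is a minimal sketch of an egress-only policy; the namespace, labels, CIDR, and port are hypothetical values chosen for the example:

$ cat <<'EOF' | oc apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-https
  namespace: my-ns
spec:
  podSelector:
    matchLabels:
      app: web
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 192.168.1.0/24
    ports:
    - protocol: TCP
      port: 443
EOF

Because policyTypes is set explicitly to [ "Egress" ], the selected pods are not isolated for ingress; if the field were omitted here, the default would include "Ingress" as well, since defaulting is based on which rule sections are present.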
19.1.6. .spec.egress[].to Description to is a list of destinations for outgoing traffic of pods selected for this rule. Items in this list are combined using a logical OR operation. If this field is empty or missing, this rule matches all destinations (traffic not restricted by destination). If this field is present and contains at least one item, this rule allows traffic only if the traffic matches at least one item in the to list. Type array 19.1.7. .spec.egress[].to[] Description NetworkPolicyPeer describes a peer to allow traffic to/from. Only certain combinations of fields are allowed Type object Property Type Description ipBlock object IPBlock describes a particular CIDR (Ex. "192.168.1.0/24","2001:db8::/64") that is allowed to the pods matched by a NetworkPolicySpec's podSelector. The except entry describes CIDRs that should not be included within this rule. namespaceSelector LabelSelector namespaceSelector selects namespaces using cluster-scoped labels. This field follows standard label selector semantics; if present but empty, it selects all namespaces. If podSelector is also set, then the NetworkPolicyPeer as a whole selects the pods matching podSelector in the namespaces selected by namespaceSelector. Otherwise it selects all pods in the namespaces selected by namespaceSelector. podSelector LabelSelector podSelector is a label selector which selects pods. This field follows standard label selector semantics; if present but empty, it selects all pods. If namespaceSelector is also set, then the NetworkPolicyPeer as a whole selects the pods matching podSelector in the Namespaces selected by NamespaceSelector. Otherwise it selects the pods matching podSelector in the policy's own namespace. 19.1.8. .spec.egress[].to[].ipBlock Description IPBlock describes a particular CIDR (Ex. "192.168.1.0/24","2001:db8::/64") that is allowed to the pods matched by a NetworkPolicySpec's podSelector. The except entry describes CIDRs that should not be included within this rule. Type object Required cidr Property Type Description cidr string cidr is a string representing the IPBlock Valid examples are "192.168.1.0/24" or "2001:db8::/64" except array (string) except is a slice of CIDRs that should not be included within an IPBlock Valid examples are "192.168.1.0/24" or "2001:db8::/64" Except values will be rejected if they are outside the cidr range 19.1.9. .spec.ingress Description ingress is a list of ingress rules to be applied to the selected pods. Traffic is allowed to a pod if there are no NetworkPolicies selecting the pod (and cluster policy otherwise allows the traffic), OR if the traffic source is the pod's local node, OR if the traffic matches at least one ingress rule across all of the NetworkPolicy objects whose podSelector matches the pod. If this field is empty then this NetworkPolicy does not allow any traffic (and serves solely to ensure that the pods it selects are isolated by default) Type array 19.1.10. .spec.ingress[] Description NetworkPolicyIngressRule describes a particular set of traffic that is allowed to the pods matched by a NetworkPolicySpec's podSelector. The traffic must match both ports and from. Type object Property Type Description from array from is a list of sources which should be able to access the pods selected for this rule. Items in this list are combined using a logical OR operation. If this field is empty or missing, this rule matches all sources (traffic not restricted by source). 
If this field is present and contains at least one item, this rule allows traffic only if the traffic matches at least one item in the from list. from[] object NetworkPolicyPeer describes a peer to allow traffic to/from. Only certain combinations of fields are allowed ports array ports is a list of ports which should be made accessible on the pods selected for this rule. Each item in this list is combined using a logical OR. If this field is empty or missing, this rule matches all ports (traffic not restricted by port). If this field is present and contains at least one item, then this rule allows traffic only if the traffic matches at least one port in the list. ports[] object NetworkPolicyPort describes a port to allow traffic on 19.1.11. .spec.ingress[].from Description from is a list of sources which should be able to access the pods selected for this rule. Items in this list are combined using a logical OR operation. If this field is empty or missing, this rule matches all sources (traffic not restricted by source). If this field is present and contains at least one item, this rule allows traffic only if the traffic matches at least one item in the from list. Type array 19.1.12. .spec.ingress[].from[] Description NetworkPolicyPeer describes a peer to allow traffic to/from. Only certain combinations of fields are allowed Type object Property Type Description ipBlock object IPBlock describes a particular CIDR (Ex. "192.168.1.0/24","2001:db8::/64") that is allowed to the pods matched by a NetworkPolicySpec's podSelector. The except entry describes CIDRs that should not be included within this rule. namespaceSelector LabelSelector namespaceSelector selects namespaces using cluster-scoped labels. This field follows standard label selector semantics; if present but empty, it selects all namespaces. If podSelector is also set, then the NetworkPolicyPeer as a whole selects the pods matching podSelector in the namespaces selected by namespaceSelector. Otherwise it selects all pods in the namespaces selected by namespaceSelector. podSelector LabelSelector podSelector is a label selector which selects pods. This field follows standard label selector semantics; if present but empty, it selects all pods. If namespaceSelector is also set, then the NetworkPolicyPeer as a whole selects the pods matching podSelector in the Namespaces selected by NamespaceSelector. Otherwise it selects the pods matching podSelector in the policy's own namespace. 19.1.13. .spec.ingress[].from[].ipBlock Description IPBlock describes a particular CIDR (Ex. "192.168.1.0/24","2001:db8::/64") that is allowed to the pods matched by a NetworkPolicySpec's podSelector. The except entry describes CIDRs that should not be included within this rule. Type object Required cidr Property Type Description cidr string cidr is a string representing the IPBlock Valid examples are "192.168.1.0/24" or "2001:db8::/64" except array (string) except is a slice of CIDRs that should not be included within an IPBlock Valid examples are "192.168.1.0/24" or "2001:db8::/64" Except values will be rejected if they are outside the cidr range 19.1.14. .spec.ingress[].ports Description ports is a list of ports which should be made accessible on the pods selected for this rule. Each item in this list is combined using a logical OR. If this field is empty or missing, this rule matches all ports (traffic not restricted by port). 
If this field is present and contains at least one item, then this rule allows traffic only if the traffic matches at least one port in the list. Type array 19.1.15. .spec.ingress[].ports[] Description NetworkPolicyPort describes a port to allow traffic on Type object Property Type Description endPort integer endPort indicates that the range of ports from port to endPort if set, inclusive, should be allowed by the policy. This field cannot be defined if the port field is not defined or if the port field is defined as a named (string) port. The endPort must be equal to or greater than port. port IntOrString port represents the port on the given protocol. This can either be a numerical or named port on a pod. If this field is not provided, this matches all port names and numbers. If present, only traffic on the specified protocol AND port will be matched. protocol string protocol represents the protocol (TCP, UDP, or SCTP) which traffic must match. If not specified, this field defaults to TCP. Possible enum values: - "SCTP" is the SCTP protocol. - "TCP" is the TCP protocol. - "UDP" is the UDP protocol. 19.2. API endpoints The following API endpoints are available: /apis/networking.k8s.io/v1/networkpolicies GET : list or watch objects of kind NetworkPolicy /apis/networking.k8s.io/v1/watch/networkpolicies GET : watch individual changes to a list of NetworkPolicy. deprecated: use the 'watch' parameter with a list operation instead. /apis/networking.k8s.io/v1/namespaces/{namespace}/networkpolicies DELETE : delete collection of NetworkPolicy GET : list or watch objects of kind NetworkPolicy POST : create a NetworkPolicy /apis/networking.k8s.io/v1/watch/namespaces/{namespace}/networkpolicies GET : watch individual changes to a list of NetworkPolicy. deprecated: use the 'watch' parameter with a list operation instead. /apis/networking.k8s.io/v1/namespaces/{namespace}/networkpolicies/{name} DELETE : delete a NetworkPolicy GET : read the specified NetworkPolicy PATCH : partially update the specified NetworkPolicy PUT : replace the specified NetworkPolicy /apis/networking.k8s.io/v1/watch/namespaces/{namespace}/networkpolicies/{name} GET : watch changes to an object of kind NetworkPolicy. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 19.2.1. /apis/networking.k8s.io/v1/networkpolicies HTTP method GET Description list or watch objects of kind NetworkPolicy Table 19.1. HTTP responses HTTP code Response body 200 - OK NetworkPolicyList schema 401 - Unauthorized Empty 19.2.2. /apis/networking.k8s.io/v1/watch/networkpolicies HTTP method GET Description watch individual changes to a list of NetworkPolicy. deprecated: use the 'watch' parameter with a list operation instead. Table 19.2. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty 19.2.3. /apis/networking.k8s.io/v1/namespaces/{namespace}/networkpolicies HTTP method DELETE Description delete collection of NetworkPolicy Table 19.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 19.4. HTTP responses HTTP code Response body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind NetworkPolicy Table 19.5. 
HTTP responses HTTP code Response body 200 - OK NetworkPolicyList schema 401 - Unauthorized Empty HTTP method POST Description create a NetworkPolicy Table 19.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 19.7. Body parameters Parameter Type Description body NetworkPolicy schema Table 19.8. HTTP responses HTTP code Response body 200 - OK NetworkPolicy schema 201 - Created NetworkPolicy schema 202 - Accepted NetworkPolicy schema 401 - Unauthorized Empty 19.2.4. /apis/networking.k8s.io/v1/watch/namespaces/{namespace}/networkpolicies HTTP method GET Description watch individual changes to a list of NetworkPolicy. deprecated: use the 'watch' parameter with a list operation instead. Table 19.9. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty 19.2.5. /apis/networking.k8s.io/v1/namespaces/{namespace}/networkpolicies/{name} Table 19.10. Global path parameters Parameter Type Description name string name of the NetworkPolicy HTTP method DELETE Description delete a NetworkPolicy Table 19.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 19.12. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified NetworkPolicy Table 19.13. HTTP responses HTTP code Response body 200 - OK NetworkPolicy schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified NetworkPolicy Table 19.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 19.15. HTTP responses HTTP code Response body 200 - OK NetworkPolicy schema 201 - Created NetworkPolicy schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified NetworkPolicy Table 19.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 19.17. Body parameters Parameter Type Description body NetworkPolicy schema Table 19.18. HTTP responses HTTP code Response body 200 - OK NetworkPolicy schema 201 - Created NetworkPolicy schema 401 - Unauthorized Empty 19.2.6. /apis/networking.k8s.io/v1/watch/namespaces/{namespace}/networkpolicies/{name} Table 19.19. Global path parameters Parameter Type Description name string name of the NetworkPolicy HTTP method GET Description watch changes to an object of kind NetworkPolicy. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 19.20. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty
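Since the /watch endpoints above are deprecated, a client would typically set the watch parameter on the list endpoint instead; a sketch of both the CLI and raw API forms (the namespace, token, and API server URL are placeholders for this example):

~]$ oc get networkpolicies -n my-ns --watch
~]$ curl -k -H "Authorization: Bearer $TOKEN" "https://api.example.com:6443/apis/networking.k8s.io/v1/namespaces/my-ns/networkpolicies?watch=true"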
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/network_apis/networkpolicy-networking-k8s-io-v1
20.20. Security Label
20.20. Security Label The <seclabel> element allows control over the operation of the security drivers. There are three basic modes of operation: 'dynamic', where libvirt automatically generates a unique security label; 'static', where the application or administrator chooses the labels; or 'none', where confinement is disabled. With dynamic label generation, libvirt will always automatically relabel any resources associated with the virtual machine. With static label assignment, by default, the administrator or application must ensure labels are set correctly on any resources. However, automatic relabeling can be enabled if desired. If more than one security driver is used by libvirt, multiple seclabel tags can be used, one for each driver, and the security driver referenced by each tag can be defined using the attribute model. Valid input XML configurations for the top-level security label are: <seclabel type='dynamic' model='selinux'/> <seclabel type='dynamic' model='selinux'> <baselabel>system_u:system_r:my_svirt_t:s0</baselabel> </seclabel> <seclabel type='static' model='selinux' relabel='no'> <label>system_u:system_r:svirt_t:s0:c392,c662</label> </seclabel> <seclabel type='static' model='selinux' relabel='yes'> <label>system_u:system_r:svirt_t:s0:c392,c662</label> </seclabel> <seclabel type='none'/> Figure 20.69. Security label If no 'type' attribute is provided in the input XML, then the security driver default setting will be used, which may be either 'none' or 'dynamic' . If a <baselabel> is set but no 'type' is set, then the type is presumed to be 'dynamic' . When viewing the XML for a running guest virtual machine with automatic resource relabeling active, an additional XML element, imagelabel, will be included. This is an output-only element, so it will be ignored in user-supplied XML documents. The following elements can be manipulated with the following values: type - Either static , dynamic , or none to determine whether libvirt automatically generates a unique security label or not. model - A valid security model name, matching the currently activated security model. relabel - Either yes or no . This must always be yes if dynamic label assignment is used. With static label assignment it will default to no . <label> - If static labeling is used, this must specify the full security label to assign to the virtual domain. The format of the content depends on the security driver in use: SELinux : an SELinux context. AppArmor : an AppArmor profile. DAC : owner and group separated by a colon. They can be defined both as user/group names or as uid/gid. The driver will first try to parse these values as names, but a leading plus sign can be used to force the driver to parse them as uid or gid. <baselabel> - If dynamic labeling is used, this can optionally be used to specify the base security label. The format of the content depends on the security driver in use. <imagelabel> - This is an output-only element, which shows the security label used on resources associated with the virtual domain. The format of the content depends on the security driver in use. When relabeling is in effect, it is also possible to fine-tune the labeling done for specific source file names, by either disabling the labeling (useful if the file lives on NFS or another file system that lacks security labeling) or requesting an alternate label (useful when a management application creates a special label to allow sharing of some, but not all, resources between domains). 
When a seclabel element is attached to a specific path rather than to the top-level domain assignment, only the relabel attribute or the label sub-element is supported.
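As a sketch of such a per-path override, a seclabel can be placed on an individual disk source to disable relabeling, for example for an image file that lives on NFS (the paths and device names below are hypothetical):

<disk type='file' device='disk'>
  <driver name='qemu' type='raw'/>
  <source file='/mnt/nfs/images/guest1.img'>
    <seclabel relabel='no'/>
  </source>
  <target dev='vda' bus='virtio'/>
</disk>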
[ "<seclabel type='dynamic' model='selinux'/> <seclabel type='dynamic' model='selinux'> <baselabel>system_u:system_r:my_svirt_t:s0</baselabel> </seclabel> <seclabel type='static' model='selinux' relabel='no'> <label>system_u:system_r:svirt_t:s0:c392,c662</label> </seclabel> <seclabel type='static' model='selinux' relabel='yes'> <label>system_u:system_r:svirt_t:s0:c392,c662</label> </seclabel> <seclabel type='none'/>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/section-libvirt-dom-xml-security-label
Object Gateway Configuration and Administration Guide
Object Gateway Configuration and Administration Guide Red Hat Ceph Storage 4 Configuring and administering the Ceph Storage Object Gateway Red Hat Ceph Storage Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/4/html/object_gateway_configuration_and_administration_guide/index
Chapter 1. Overview of machine management
Chapter 1. Overview of machine management You can use machine management to flexibly work with underlying infrastructure like Amazon Web Services (AWS), Azure, Google Cloud Platform (GCP), OpenStack, Red Hat Virtualization (RHV), and vSphere to manage the OpenShift Container Platform cluster. You can control the cluster and perform auto-scaling, such as scaling up and down the cluster based on specific workload policies. The OpenShift Container Platform cluster can horizontally scale up and down when the load increases or decreases. It is important to have a cluster that adapts to changing workloads. Machine management is implemented as a Custom Resource Definition (CRD). A CRD object defines a new unique object Kind in the cluster and enables the Kubernetes API server to handle the object's entire lifecycle. The Machine API Operator provisions the following resources: MachineSet Machine Cluster Autoscaler Machine Autoscaler Machine Health Checks What you can do with machine sets As a cluster administrator you can: Create a machine set on: AWS Azure GCP OpenStack RHV vSphere Manually scale a machine set by adding or removing a machine from the machine set. Modify a machine set through the MachineSet YAML configuration file. Delete a machine. Create infrastructure machine sets. Configure and deploy a machine health check to automatically fix damaged machines in a machine pool. Autoscaler Autoscale your cluster to ensure flexibility for changing workloads. To autoscale your OpenShift Container Platform cluster, you must first deploy a cluster autoscaler, and then deploy a machine autoscaler for each machine set. The cluster autoscaler increases and decreases the size of the cluster based on deployment needs. The machine autoscaler adjusts the number of machines in the machine sets that you deploy in your OpenShift Container Platform cluster. User-provisioned infrastructure User-provisioned infrastructure is an environment where you can deploy infrastructure such as compute, network, and storage resources that host the OpenShift Container Platform. You can add compute machines to a cluster on user-provisioned infrastructure either as part of or after the installation process. What you can do with RHEL compute machines As a cluster administrator, you can: Add Red Hat Enterprise Linux (RHEL) compute machines, also known as worker machines, to a user-provisioned infrastructure cluster or an installer-provisioned infrastructure cluster. Add more Red Hat Enterprise Linux (RHEL) compute machines to an existing cluster.
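For example, manually scaling a machine set as listed above is a single command; the machine set name shown is a hypothetical example, and the actual names can be listed first:

~]$ oc get machinesets -n openshift-machine-api
~]$ oc scale machineset worker-us-east-1a --replicas=3 -n openshift-machine-api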
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/machine_management/overview-of-machine-management
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your input on our documentation. Do let us know how we can make it better. To give feedback, create a Bugzilla ticket: Go to the Bugzilla website. In the Component section, choose documentation . Fill in the Description field with your suggestion for improvement. Include a link to the relevant part(s) of documentation. Click Submit Bug .
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/deploying_openshift_data_foundation_using_microsoft_azure/providing-feedback-on-red-hat-documentation_azure
20.13. Working with Snapshots
20.13. Working with Snapshots 20.13.1. Shortening a Backing Chain by Copying the Data This section demonstrates how to use the virsh blockcommit domain <path> [<bandwidth>] [<base>] [--shallow] [<top>] [--active] [--delete] [--wait] [--verbose] [--timeout <number>] [--pivot] [--keep-overlay] [--async] [--keep-relative] command to shorten a backing chain. The command has many options, which are listed in the help menu or man page. The virsh blockcommit command copies data from one part of the chain down into a backing file, allowing you to pivot the rest of the chain in order to bypass the committed portions. For example, suppose this is the current state: base <- snap1 <- snap2 <- active. Using virsh blockcommit moves the contents of snap2 into snap1, allowing you to delete snap2 from the chain, making backups much quicker. Procedure 20.1. How to shorten a backing chain Enter the following command, replacing guest1 with the name of your guest virtual machine and disk1 with the name of your disk. # virsh blockcommit guest1 disk1 --base snap1 --top snap2 --wait --verbose The contents of snap2 are moved into snap1, resulting in base <- snap1 <- active. snap2 is no longer valid and can be deleted. Warning virsh blockcommit will corrupt any file that depends on the --base argument (other than files that depended on the --top argument, as those files now point to the base). To prevent this, do not commit changes into files shared by more than one guest. The --verbose option allows the progress to be printed on the screen. 20.13.2. Shortening a Backing Chain by Flattening the Image virsh blockpull can be used in the following applications: Flattens an image by populating it with data from its backing image chain. This makes the image file self-contained so that it no longer depends on backing images and looks like this: Before: base.img <- active After: base.img is no longer used by the guest and active contains all of the data. Flattens part of the backing image chain. This can be used to flatten snapshots into the top-level image and looks like this: Before: base <- sn1 <- sn2 <- active After: base.img <- active. Note that active now contains all data from sn1 and sn2, and neither sn1 nor sn2 is used by the guest. Moves the disk image to a new file system on the host. This allows image files to be moved while the guest is running and looks like this: Before (The original image file): /fs1/base.vm.img After: /fs2/active.vm.qcow2 is now the new file system and /fs1/base.vm.img is no longer used. Useful in live migration with post-copy storage migration. The disk image is copied from the source host to the destination host after live migration completes. In short, this is what happens: Before: /source-host/base.vm.img After: /destination-host/active.vm.qcow2. /source-host/base.vm.img is no longer used. Procedure 20.2. How to shorten a backing chain by flattening the data It may be helpful to create a snapshot prior to running virsh blockpull. To do so, use the virsh snapshot-create-as command. In the following example, replace guest1 with the name of your guest virtual machine, and snap1 with the name of your snapshot. # virsh snapshot-create-as guest1 snap1 --disk-only If the chain looks like this: base <- snap1 <- snap2 <- active , enter the following command, replacing guest1 with the name of your guest virtual machine and path1 with the source path to your disk (/home/username/VirtualMachines/*, for example). 
# virsh blockpull guest1 path1 This command makes snap1 the backing file of active, by pulling data from snap2 into active, resulting in base <- snap1 <- active. Once the virsh blockpull operation is complete, the libvirt tracking of the snapshot that created the extra image in the chain is no longer useful. Delete the tracking on the outdated snapshot with this command, replacing guest1 with the name of your guest virtual machine and snap1 with the name of your snapshot. # virsh snapshot-delete guest1 snap1 --metadata Additional applications of virsh blockpull can be performed as follows: Example 20.31. How to flatten a single image and populate it with data from its backing image chain The following example flattens the vda virtual disk on guest guest1 and populates the image with data from its backing image chain, waiting for the populate action to be complete. # virsh blockpull guest1 vda --wait Example 20.32. How to flatten part of the backing image chain The following example flattens the vda virtual disk on guest guest1 based on the /path/to/base.img disk image. # virsh blockpull guest1 vda --base /path/to/base.img --wait Example 20.33. How to move the disk image to a new file system on the host To move the disk image to a new file system on the host, run the following two commands. In each command, replace guest1 with the name of your guest virtual machine and disk1 with the name of your virtual disk. Also change the XML file name and path to the location and name of the snapshot: # virsh snapshot-create guest1 --xmlfile /path/to/snap1.xml --disk-only # virsh blockpull guest1 disk1 --wait Example 20.34. How to use live migration with post-copy storage migration To use live migration with post-copy storage migration, enter the following commands: On the destination, enter the following command, replacing the backing file with the name and location of the backing file on the host. # qemu-img create -f qcow2 -o backing_file=/source-host/vm.img /destination-host/vm.qcow2 On the source, enter the following command, replacing guest1 with the name of your guest virtual machine: # virsh migrate guest1 On the destination, enter the following command, replacing guest1 with the name of your guest virtual machine and disk1 with the name of your virtual disk: # virsh blockpull guest1 disk1 --wait 20.13.3. Changing the Size of a Guest Virtual Machine's Block Device The virsh blockresize command can be used to resize a block device of a guest virtual machine while the guest virtual machine is running, using the absolute path of the block device, which also corresponds to a unique target name ( <target dev="name"/> ) or source file ( <source file="name"/> ). This can be applied to one of the disk devices attached to the guest virtual machine (you can use the command virsh domblklist to print a table showing the brief information of all block devices associated with a given guest virtual machine). Note Live image resizing will always resize the image, but may not immediately be picked up by guests. With recent guest kernels, the size of virtio-blk devices is automatically updated (older kernels require a guest reboot). With SCSI devices, it is required to manually trigger a re-scan in the guest with the command echo > /sys/class/scsi_device/0:0:0:0/device/rescan . In addition, with IDE it is required to reboot the guest before it picks up the new size. Example 20.35. 
How to resize the guest virtual machine block device The following example resizes the disk1 block device of the guest1 virtual machine to 90 bytes: # virsh blockresize guest1 disk1 90B
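To identify the target name or source path to pass to virsh blockresize, the virsh domblklist command mentioned above can be used; the output below is an illustrative sketch with a hypothetical image path:

# virsh domblklist guest1
Target     Source
------------------------------------------------
vda        /home/username/VirtualMachines/guest1.img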
[ "base <- snap1 <- snap2 <- active ." ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-backing-chain
Installing on IBM Cloud
Installing on IBM Cloud OpenShift Container Platform 4.16 Installing OpenShift Container Platform IBM Cloud Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.16/html/installing_on_ibm_cloud/index
Managing and allocating storage resources
Managing and allocating storage resources Red Hat OpenShift Data Foundation 4.9 Instructions on how to allocate storage to core services and hosted applications in OpenShift Data Foundation, including snapshot and clone. Red Hat Storage Documentation Team Abstract This document explains how to allocate storage to core services and hosted applications in Red Hat OpenShift Data Foundation.
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/managing_and_allocating_storage_resources/index
Chapter 7. Virtualization
Chapter 7. Virtualization Increased Maximum Number of vCPUs in KVM The maximum number of supported virtual CPUs (vCPUs) in a KVM guest has been increased to 240. This increases the number of virtual processing units that a user can assign to the guest, and therefore improves its performance potential. 5th Generation Intel Core New Instructions Support in QEMU, KVM, and libvirt API In Red Hat Enterprise Linux 7.1, the support for 5th Generation Intel Core processors has been added to the QEMU hypervisor, the KVM kernel code, and the libvirt API. This allows KVM guests to use the following instructions and features: ADCX, ADOX, RDSEED, PREFETCHW, and supervisor mode access prevention (SMAP). USB 3.0 Support for KVM Guests Red Hat Enterprise Linux 7.1 features improved USB support by adding USB 3.0 host adapter (xHCI) emulation as a Technology Preview. Compression for the dump-guest-memory Command Since Red Hat Enterprise Linux 7.1, the dump-guest-memory command supports crash dump compression. This makes it possible for users who cannot use the virsh dump command to require less hard disk space for guest crash dumps. In addition, saving a compressed guest crash dump usually takes less time than saving a non-compressed one. Open Virtual Machine Firmware The Open Virtual Machine Firmware (OVMF) is available as a Technology Preview in Red Hat Enterprise Linux 7.1. OVMF is a UEFI secure boot environment for AMD64 and Intel 64 guests. Improved Network Performance on Hyper-V Several new features of the Hyper-V network driver have been introduced to improve network performance. For example, Receive-Side Scaling, Large Send Offload, and Scatter/Gather I/O are now supported, and network throughput is increased. hypervfcopyd in hyperv-daemons The hypervfcopyd daemon has been added to the hyperv-daemons packages. hypervfcopyd is an implementation of file copy service functionality for a Linux guest running on a Hyper-V 2012 R2 host. It enables the host to copy a file (over VMBUS) into the Linux guest. New Features in libguestfs Red Hat Enterprise Linux 7.1 introduces a number of new features in libguestfs , a set of tools for accessing and modifying virtual machine disk images. Namely: virt-builder - a new tool for building virtual machine images. Use virt-builder to rapidly and securely create guests and customize them. virt-customize - a new tool for customizing virtual machine disk images. Use virt-customize to install packages, edit configuration files, run scripts, and set passwords. virt-diff - a new tool for showing differences between the file systems of two virtual machines. Use virt-diff to easily discover what files have been changed between snapshots. virt-log - a new tool for listing log files from guests. The virt-log tool supports a variety of guests, including traditional Linux, Linux using the journal, and the Windows event log. virt-v2v - a new tool for converting guests from a foreign hypervisor to run on KVM, managed by libvirt, OpenStack, oVirt, Red Hat Enterprise Virtualization (RHEV), and several other targets. Currently, virt-v2v can convert Red Hat Enterprise Linux and Windows guests running on Xen and VMware ESX. Flight Recorder Tracing Support for flight recorder tracing has been introduced in Red Hat Enterprise Linux 7.1. Flight recorder tracing uses SystemTap to automatically capture qemu-kvm data as long as the guest machine is running. This provides an additional avenue for investigating qemu-kvm problems, more flexible than qemu-kvm core dumps. 
For detailed instructions on how to configure and use flight recorder tracing, see the Virtualization Deployment and Administration Guide. LPAR Watchdog for IBM System z As a Technology Preview, Red Hat Enterprise Linux 7.1 introduces a new watchdog driver for IBM System z. This enhanced watchdog supports Linux logical partitions (LPAR) as well as Linux guests in the z/VM hypervisor, and provides automatic reboot and automatic dump capabilities if a Linux system becomes unresponsive. RDMA-based Migration of Live Guests The support for Remote Direct Memory Access (RDMA)-based migration has been added to libvirt. As a result, it is now possible to use the new rdma:// migration URI to request migration over RDMA, which allows for significantly shorter live migration of large guests. Note that prior to using RDMA-based migration, RDMA has to be configured and libvirt has to be set up to use it. Removal of Q35 Chipset, PCI Express Bus, and AHCI Bus Emulation Red Hat Enterprise Linux 7.1 removes the emulation of the Q35 machine type, which is also required for supporting the PCI Express (PCIe) bus and the Advanced Host Controller Interface (AHCI) bus in KVM guest virtual machines. These features were previously available on Red Hat Enterprise Linux as Technology Previews. However, they are still being actively developed and might become available in the future as part of Red Hat products.
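As an illustration of the virt-builder and virt-customize tools introduced above, the following sketch builds a guest image and installs a package into it; the template name, output path, package, and password are hypothetical example values:

~]$ virt-builder rhel-7.1 -o /var/tmp/guest1.img --size 10G
~]$ virt-customize -a /var/tmp/guest1.img --install httpd --root-password password:example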
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.1_release_notes/chap-Red_Hat_Enterprise_Linux-7.1_Release_Notes-Virtualization
Chapter 6. ControllerRevision [apps/v1]
Chapter 6. ControllerRevision [apps/v1] Description ControllerRevision implements an immutable snapshot of state data. Clients are responsible for serializing and deserializing the objects that contain their internal state. Once a ControllerRevision has been successfully created, it cannot be updated. The API Server will fail validation of all requests that attempt to mutate the Data field. ControllerRevisions may, however, be deleted. Note that, due to its use by both the DaemonSet and StatefulSet controllers for update and rollback, this object is beta. However, it may be subject to name and representation changes in future releases, and clients should not depend on its stability. It is primarily for internal use by controllers. Type object Required revision 6.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources data RawExtension Data is the serialized representation of the state. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata revision integer Revision indicates the revision of the state represented by Data. 6.2. API endpoints The following API endpoints are available: /apis/apps/v1/controllerrevisions GET : list or watch objects of kind ControllerRevision /apis/apps/v1/watch/controllerrevisions GET : watch individual changes to a list of ControllerRevision. deprecated: use the 'watch' parameter with a list operation instead. /apis/apps/v1/namespaces/{namespace}/controllerrevisions DELETE : delete collection of ControllerRevision GET : list or watch objects of kind ControllerRevision POST : create a ControllerRevision /apis/apps/v1/watch/namespaces/{namespace}/controllerrevisions GET : watch individual changes to a list of ControllerRevision. deprecated: use the 'watch' parameter with a list operation instead. /apis/apps/v1/namespaces/{namespace}/controllerrevisions/{name} DELETE : delete a ControllerRevision GET : read the specified ControllerRevision PATCH : partially update the specified ControllerRevision PUT : replace the specified ControllerRevision /apis/apps/v1/watch/namespaces/{namespace}/controllerrevisions/{name} GET : watch changes to an object of kind ControllerRevision. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 6.2.1. /apis/apps/v1/controllerrevisions HTTP method GET Description list or watch objects of kind ControllerRevision Table 6.1. HTTP responses HTTP code Response body 200 - OK ControllerRevisionList schema 401 - Unauthorized Empty 6.2.2. /apis/apps/v1/watch/controllerrevisions HTTP method GET Description watch individual changes to a list of ControllerRevision. deprecated: use the 'watch' parameter with a list operation instead. Table 6.2. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty 6.2.3. 
/apis/apps/v1/namespaces/{namespace}/controllerrevisions HTTP method DELETE Description delete collection of ControllerRevision Table 6.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 6.4. HTTP responses HTTP code Response body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind ControllerRevision Table 6.5. HTTP responses HTTP code Response body 200 - OK ControllerRevisionList schema 401 - Unauthorized Empty HTTP method POST Description create a ControllerRevision Table 6.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.7. Body parameters Parameter Type Description body ControllerRevision schema Table 6.8. HTTP responses HTTP code Response body 200 - OK ControllerRevision schema 201 - Created ControllerRevision schema 202 - Accepted ControllerRevision schema 401 - Unauthorized Empty 6.2.4. /apis/apps/v1/watch/namespaces/{namespace}/controllerrevisions HTTP method GET Description watch individual changes to a list of ControllerRevision. deprecated: use the 'watch' parameter with a list operation instead. Table 6.9. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty 6.2.5. /apis/apps/v1/namespaces/{namespace}/controllerrevisions/{name} Table 6.10. Global path parameters Parameter Type Description name string name of the ControllerRevision HTTP method DELETE Description delete a ControllerRevision Table 6.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 6.12. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ControllerRevision Table 6.13. 
HTTP responses HTTP code Response body 200 - OK ControllerRevision schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ControllerRevision Table 6.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.15. HTTP responses HTTP code Response body 200 - OK ControllerRevision schema 201 - Created ControllerRevision schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ControllerRevision Table 6.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.17. Body parameters Parameter Type Description body ControllerRevision schema Table 6.18. HTTP responses HTTP code Response body 200 - OK ControllerRevision schema 201 - Created ControllerRevision schema 401 - Unauthorized Empty 6.2.6. /apis/apps/v1/watch/namespaces/{namespace}/controllerrevisions/{name} Table 6.19. Global path parameters Parameter Type Description name string name of the ControllerRevision HTTP method GET Description watch changes to an object of kind ControllerRevision. 
deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 6.20. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty
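As a quick illustration, the endpoints above can be exercised with the oc client; this is a sketch only, and the namespace and object name used here ( my-namespace , my-revision ) are placeholders rather than values from this reference:
$ oc get controllerrevisions -n my-namespace
$ oc get controllerrevision my-revision -n my-namespace -o yaml
$ oc get --raw "/apis/apps/v1/namespaces/my-namespace/controllerrevisions"
The first command uses the list operation, the second uses the read operation, and the third calls the list endpoint directly and returns the ControllerRevisionList object as JSON.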
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/metadata_apis/controllerrevision-apps-v1
Chapter 2. Getting Started
Chapter 2. Getting Started For certain applications, you can look at the following resources to quickly get started with Red Hat build of Keycloak Authorization Services: Securing a JakartaEE Application in Wildfly Securing a Spring Boot Application Securing Quarkus Applications Securing Node.js Applications
null
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/24.0/html/authorization_services_guide/getting_started_overview
21.4. The guestfish Shell
21.4. The guestfish Shell guestfish is an interactive shell that you can use from the command line or from shell scripts to access guest virtual machine file systems. All of the functionality of the libguestfs API is available from the shell. To begin viewing or editing a virtual machine disk image, enter the following command, substituting the path to your intended disk image: --ro means that the disk image is opened read-only. This mode is always safe but does not allow write access. Only omit this option when you are certain that the guest virtual machine is not running, or the disk image is not attached to a live guest virtual machine. It is not possible to use libguestfs to edit a live guest virtual machine, and attempting to do so will result in irreversible disk corruption. /path/to/disk/image is the path to the disk. This can be a file, a host physical machine logical volume (such as /dev/VG/LV), or a SAN LUN (/dev/sdf3). Note libguestfs and guestfish do not require root privileges. You only need to run them as root if the disk image being accessed needs root to read or write or both. When you start guestfish interactively, it will display this prompt: At the prompt, type run to initiate the library and attach the disk image. This can take up to 30 seconds the first time it is done. Subsequent starts will complete much faster. Note libguestfs will use hardware virtualization acceleration such as KVM (if available) to speed up this process. Once the run command has been entered, other commands can be used, as the following section demonstrates. 21.4.1. Viewing File Systems with guestfish This section provides information on viewing file systems with guestfish. 21.4.1.1. Manual Listing and Viewing The list-filesystems command will list file systems found by libguestfs. This output shows a Red Hat Enterprise Linux 4 disk image: Other useful commands are list-devices , list-partitions , lvs , pvs , vfs-type and file . You can get more information and help on any command by typing help command , as shown in the following output: To view the actual contents of a file system, it must first be mounted. You can use guestfish commands such as ls , ll , cat , more , download and tar-out to view and download files and directories. Note There is no concept of a current working directory in this shell. Unlike ordinary shells, you cannot, for example, use the cd command to change directories. All paths must be fully qualified starting at the top with a forward slash ( / ) character. Use the Tab key to complete paths. To exit from the guestfish shell, type exit or enter Ctrl+d . 21.4.1.2. Via guestfish inspection Instead of listing and mounting file systems by hand, it is possible to let guestfish itself inspect the image and mount the file systems as they would be in the guest virtual machine. To do this, add the -i option on the command line: Because guestfish needs to start up the libguestfs back end in order to perform the inspection and mounting, the run command is not necessary when using the -i option. The -i option works for many common Linux guest virtual machines. 21.4.1.3. Accessing a guest virtual machine by name A guest virtual machine can be accessed from the command line when you specify its name as known to libvirt (in other words, as it appears in virsh list --all ). Use the -d option to access a guest virtual machine by its name, with or without the -i option: 21.4.2. Adding Files with guestfish To add a file with guestfish, you need to have the complete URI. 
The file can be a local file or a file located on a network block device (NBD) or a remote block device (RBD). The format used for the URI should be like any of these examples. For local files, use ///: guestfish -a disk.img guestfish -a file:///directory/disk.img guestfish -a nbd://example.com[:port] guestfish -a nbd://example.com[:port]/exportname guestfish -a nbd://?socket=/socket guestfish -a nbd:///exportname?socket=/socket guestfish -a rbd:///pool/disk guestfish -a rbd://example.com[:port]/pool/disk 21.4.3. Modifying Files with guestfish To modify files, create directories or make other changes to a guest virtual machine, first heed the warning at the beginning of this section: your guest virtual machine must be shut down. Editing or changing a running disk with guestfish will result in disk corruption. This section gives an example of editing the /boot/grub/grub.conf file. When you are sure the guest virtual machine is shut down you can omit the --ro flag in order to get write access using a command such as: Commands to edit files include edit , vi and emacs . Many commands also exist for creating files and directories, such as write , mkdir , upload and tar-in . 21.4.4. Other Actions with guestfish You can also format file systems, create partitions, create and resize LVM logical volumes and much more, with commands such as mkfs , part-add , lvresize , lvcreate , vgcreate and pvcreate . 21.4.5. Shell Scripting with guestfish Once you are familiar with using guestfish interactively, according to your needs, writing shell scripts with it may be useful. The following is a simple shell script to add a new MOTD (message of the day) to a guest: 21.4.6. Augeas and libguestfs Scripting Combining libguestfs with Augeas can help when writing scripts to manipulate Linux guest virtual machine configuration. For example, the following script uses Augeas to parse the keyboard configuration of a guest virtual machine, and to print out the layout. Note that this example only works with guest virtual machines running Red Hat Enterprise Linux: Augeas can also be used to modify configuration files. You can modify the above script to change the keyboard layout: Note the three changes between the two scripts: The --ro option has been removed in the second example, giving the ability to write to the guest virtual machine. The aug-get command has been changed to aug-set to modify the value instead of fetching it. The new value will be "gb" (including the quotes). The aug-save command is used here so Augeas will write the changes out to disk. Note More information about Augeas can be found on the website http://augeas.net . guestfish can do much more than we can cover in this introductory document. For example, creating disk images from scratch: Or copying out whole directories from a disk image: For more information see the man page guestfish(1).
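To tie the scripting sections together, the following minimal sketch wraps guestfish in a script that prints the /etc/fstab of an arbitrary image; the image path is supplied by the caller and is a placeholder here:
#!/bin/bash -
# Print /etc/fstab from a disk image, opened read-only for safety.
# Usage: ./show-fstab.sh /path/to/disk/image
set -e
guestfish --ro -a "$1" -i <<'EOF'
cat /etc/fstab
EOF
Because -i inspects and mounts the file systems automatically, no run command is needed, and --ro keeps the image safe even if the guest is running.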
[ "guestfish --ro -a /path/to/disk/image", "guestfish --ro -a /path/to/disk/image Welcome to guestfish, the guest filesystem shell for editing virtual machine filesystems and disk images. Type: 'help' for help on commands 'man' to read the manual 'quit' to quit the shell ><fs>", "><fs> run ><fs> list-filesystems /dev/vda1: ext3 /dev/VolGroup00/LogVol00: ext3 /dev/VolGroup00/LogVol01: swap", "><fs> help vfs-type NAME vfs-type - get the Linux VFS type corresponding to a mounted device SYNOPSIS vfs-type mountable DESCRIPTION This command gets the filesystem type corresponding to the filesystem on \"device\". For most filesystems, the result is the name of the Linux VFS module which would be used to mount this filesystem if you mounted it without specifying the filesystem type. For example a string such as \"ext3\" or \"ntfs\".", "guestfish --ro -a /path/to/disk/image -i Welcome to guestfish, the guest filesystem shell for editing virtual machine filesystems and disk images. Type: 'help' for help on commands 'man' to read the manual 'quit' to quit the shell Operating system: Red Hat Enterprise Linux AS release 4 (Nahant Update 8) /dev/VolGroup00/LogVol00 mounted on / /dev/vda1 mounted on /boot ><fs> ll / total 210 drwxr-xr-x. 24 root root 4096 Oct 28 09:09 . drwxr-xr-x 21 root root 4096 Nov 17 15:10 .. drwxr-xr-x. 2 root root 4096 Oct 27 22:37 bin drwxr-xr-x. 4 root root 1024 Oct 27 21:52 boot drwxr-xr-x. 4 root root 4096 Oct 27 21:21 dev drwxr-xr-x. 86 root root 12288 Oct 28 09:09 etc", "guestfish --ro -d GuestName -i", "guestfish -d RHEL3 -i Welcome to guestfish, the guest filesystem shell for editing virtual machine filesystems and disk images. Type: 'help' for help on commands 'man' to read the manual 'quit' to quit the shell Operating system: Red Hat Enterprise Linux AS release 3 (Taroon Update 9) /dev/vda2 mounted on / /dev/vda1 mounted on /boot ><fs> edit /boot/grub/grub.conf", "#!/bin/bash - set -e guestname=\"USD1\" guestfish -d \"USDguestname\" -i <<'EOF' write /etc/motd \"Welcome to Acme Incorporated.\" chmod 0644 /etc/motd EOF", "#!/bin/bash - set -e guestname=\"USD1\" guestfish -d \"USD1\" -i --ro <<'EOF' aug-init / 0 aug-get /files/etc/sysconfig/keyboard/LAYOUT EOF", "#!/bin/bash - set -e guestname=\"USD1\" guestfish -d \"USD1\" -i <<'EOF' aug-init / 0 aug-set /files/etc/sysconfig/keyboard/LAYOUT '\"gb\"' aug-save EOF", "guestfish -N fs", "><fs> copy-out /home /tmp/home" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-guest_virtual_machine_disk_access_with_offline_tools-the_guestfish_shell
Chapter 1. Understanding OpenShift Dedicated
Chapter 1. Understanding OpenShift Dedicated With its foundation in Kubernetes, OpenShift Dedicated is a complete OpenShift Container Platform cluster provided as a cloud service, configured for high availability, and dedicated to a single customer. 1.1. An overview of OpenShift Dedicated OpenShift Dedicated is professionally managed by Red Hat and hosted on Amazon Web Services (AWS) or Google Cloud Platform (GCP). Each OpenShift Dedicated cluster comes with a fully managed control plane (Control and Infrastructure nodes), application nodes, installation and management by Red Hat Site Reliability Engineers (SRE), premium Red Hat Support, and cluster services such as logging, metrics, monitoring, notifications portal, and a cluster portal. OpenShift Dedicated provides enterprise-ready enhancements to Kubernetes, including the following enhancements: OpenShift Dedicated clusters are deployed on AWS or GCP environments and can be used as part of a hybrid approach for application management. Integrated Red Hat technology. Major components in OpenShift Dedicated come from Red Hat Enterprise Linux and related Red Hat technologies. OpenShift Dedicated benefits from the intense testing and certification initiatives for Red Hat's enterprise quality software. Open source development model. Development is completed in the open, and the source code is available from public software repositories. This open collaboration fosters rapid innovation and development. To learn about options for assets you can create when you build and deploy containerized Kubernetes applications in OpenShift Container Platform, see Understanding OpenShift Container Platform development . 1.1.1. Custom operating system OpenShift Dedicated uses Red Hat Enterprise Linux CoreOS (RHCOS), a container-oriented operating system that combines some of the best features and functions of the CoreOS and Red Hat Atomic Host operating systems. RHCOS is specifically designed for running containerized applications from OpenShift Dedicated and works with new tools to provide fast installation, Operator-based management, and simplified upgrades. RHCOS includes: Ignition, which OpenShift Dedicated uses as a firstboot system configuration for initially bringing up and configuring machines. CRI-O, a Kubernetes native container runtime implementation that integrates closely with the operating system to deliver an efficient and optimized Kubernetes experience. CRI-O provides facilities for running, stopping, and restarting containers. Kubelet, the primary node agent for Kubernetes that is responsible for launching and monitoring containers. 1.1.2. Other key features Operators are both the fundamental unit of the OpenShift Dedicated code base and a convenient way to deploy applications and software components for your applications to use. In OpenShift Dedicated, Operators serve as the platform foundation and remove the need for manual upgrades of operating systems and control plane applications. OpenShift Dedicated Operators such as the Cluster Version Operator and Machine Config Operator allow simplified, cluster-wide management of those critical components. Operator Lifecycle Manager (OLM) and the OperatorHub provide facilities for storing and distributing Operators to people developing and deploying applications. The Red Hat Quay Container Registry is a Quay.io container registry that serves most of the container images and Operators to OpenShift Dedicated clusters. 
Quay.io is a public registry version of Red Hat Quay that stores millions of images and tags. Other enhancements to Kubernetes in OpenShift Dedicated include improvements in software defined networking (SDN), authentication, log aggregation, monitoring, and routing. OpenShift Dedicated also offers a comprehensive web console and the custom OpenShift CLI ( oc ) interface. 1.1.3. Internet and Telemetry access for OpenShift Dedicated In OpenShift Dedicated, you require access to the internet to install and upgrade your cluster. Through the Telemetry service, information is sent to Red Hat from OpenShift Dedicated clusters to enable subscription management automation, monitor the health of clusters, assist with support, and improve customer experience. The Telemetry service runs automatically and your cluster is registered to Red Hat OpenShift Cluster Manager. In OpenShift Dedicated, remote health reporting is always enabled and you cannot opt out. The Red Hat Site Reliability Engineering (SRE) team requires the information to provide effective support for your OpenShift Dedicated cluster. Additional resources For more information about Telemetry and remote health monitoring for OpenShift Dedicated clusters, see About remote health monitoring
null
https://docs.redhat.com/en/documentation/openshift_dedicated/4/html/introduction_to_openshift_dedicated/osd-understanding
13.2.11. Creating Domains: LDAP
13.2.11. Creating Domains: LDAP An LDAP domain means that SSSD uses an LDAP directory as the identity provider (and, optionally, also as an authentication provider). SSSD supports several major directory services: Red Hat Directory Server OpenLDAP Identity Management (IdM or IPA) Microsoft Active Directory 2008 R2 Note All of the parameters available to a general LDAP identity provider are also available to Identity Management and Active Directory identity providers, which are subsets of the LDAP provider. Parameters for Configuring an LDAP Domain An LDAP directory can function as both an identity provider and an authentication provider. The configuration requires enough information to identify and connect to the user directory in the LDAP server, but the way that those connection parameters are defined is flexible. Other options are available to provide more fine-grained control, like specifying a user account to use to connect to the LDAP server or using different LDAP servers for password operations. The most common options are listed in Table 13.8, "LDAP Domain Configuration Parameters" . Note Server-side password policies always take precedence over the policy enabled from the client side. For example, when setting the ldap_pwd_policy=shadow option, the policies defined with the shadow LDAP attributes for a user have no effect on whether the password policy is enabled on the OpenLDAP server. Note Many other options are listed in the man page for LDAP domain configuration, sssd-ldap(5) . Table 13.8. LDAP Domain Configuration Parameters Parameter Description ldap_uri Gives a comma-separated list of the URIs of the LDAP servers to which SSSD will connect. The list is given in order of preference, so the first server in the list is tried first. Listing additional servers provides failover protection. This can be detected from the DNS SRV records if it is not given. ldap_search_base Gives the base DN to use for performing LDAP user operations. Important If used incorrectly, ldap_search_base might cause SSSD lookups to fail. With an AD provider, setting ldap_search_base is not required. The AD provider automatically discovers all the necessary information. Red Hat recommends not setting the parameter in this situation and instead relying on what the AD provider discovers. ldap_tls_reqcert Specifies how to check for SSL server certificates in a TLS session. There are four options: never disables requests for certificates. allow requests a certificate, but proceeds normally even if no certificate is given or a bad certificate is given. try requests a certificate and proceeds normally if no certificate is given. If a bad certificate is given, the session terminates. demand and hard are the same option. This requires a valid certificate or the session is terminated. The default is hard . ldap_tls_cacert Gives the full path and file name to the file that contains the CA certificates for all of the CAs that SSSD recognizes. SSSD will accept any certificate issued by these CAs. This uses the OpenLDAP system defaults if it is not given explicitly. ldap_referrals Sets whether SSSD will use LDAP referrals, meaning forwarding queries from one LDAP database to another. SSSD supports database-level and subtree referrals. For referrals within the same LDAP server, SSSD will adjust the DN of the entry being queried. For referrals that go to different LDAP servers, SSSD does an exact match on the DN. Setting this value to true enables referrals; this is the default. 
Referrals can negatively impact overall performance because of the time spent attempting to trace referrals. Disabling referral checking can significantly improve performance. ldap_schema Sets what version of schema to use when searching for user entries. This can be rfc2307 , rfc2307bis , ad , or ipa . The default is rfc2307 . In RFC 2307, group objects use a multi-valued attribute, memberuid , which lists the names of the users that belong to that group. In RFC 2307bis, group objects use the member attribute, which contains the full distinguished name (DN) of a user or group entry. RFC 2307bis allows nested groups using the member attribute. Because these different schema use different definitions for group membership, using the wrong LDAP schema with SSSD can affect both viewing and managing network resources, even if the appropriate permissions are in place. For example, with RFC 2307bis, all groups are returned when using nested groups or primary/secondary groups. If SSSD is using RFC 2307 schema, only the primary group is returned. This setting only affects how SSSD determines the group members. It does not change the actual user data. ldap_search_timeout Sets the time, in seconds, that LDAP searches are allowed to run before they are canceled and cached results are returned. When an LDAP search times out, SSSD automatically switches to offline mode. ldap_network_timeout Sets the time, in seconds, SSSD attempts to poll an LDAP server after a connection attempt fails. The default is six seconds. ldap_opt_timeout Sets the time, in seconds, to wait before aborting synchronous LDAP operations if no response is received from the server. This option also controls the timeout when communicating with the KDC in case of a SASL bind. The default is five seconds. LDAP Domain Example The LDAP configuration is very flexible, depending on your specific environment and the SSSD behavior. These are some common examples of an LDAP domain, but the SSSD configuration is not limited to these examples. Note Along with creating the domain entry, add the new domain to the list of domains for SSSD to query in the sssd.conf file. For example: Example 13.2. A Basic LDAP Domain Configuration An LDAP domain requires three things: An LDAP server The search base A way to establish a secure connection The last item depends on the LDAP environment. SSSD requires a secure connection since it handles sensitive information. This connection can be a dedicated TLS/SSL connection or it can use Start TLS. Using a dedicated TLS/SSL connection uses an LDAPS connection to connect to the server and is therefore set as part of the ldap_uri option: Using Start TLS requires a way to input the certificate information to establish a secure connection dynamically over an insecure port. This is done using the ldap_id_use_start_tls option to use Start TLS and then ldap_tls_cacert to identify the CA certificate which issued the SSL server certificates.
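Pulling the parameters from Table 13.8 together, a complete domain section might look like the following sketch; the server name, search base, and timeout values are illustrative assumptions, not recommended settings:
[domain/LDAP]
# Illustrative values; adjust for your environment
cache_credentials = true
id_provider = ldap
auth_provider = ldap
ldap_uri = ldaps://ldap.example.com:636
ldap_search_base = dc=example,dc=com
ldap_tls_reqcert = demand
ldap_tls_cacert = /etc/pki/tls/certs/ca-bundle.crt
ldap_referrals = false
ldap_search_timeout = 10
ldap_network_timeout = 6
Remember to also add the new domain to the domains line in the [sssd] section, as noted above.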
[ "id uid=500(myserver) gid=500(myserver) groups=500(myserver),510(myothergroup)", "domains = LOCAL,LDAP1,AD,PROXYNIS", "An LDAP domain [domain/LDAP] cache_credentials = true id_provider = ldap auth_provider = ldap ldap_uri = ldaps://ldap.example.com:636 ldap_search_base = dc=example,dc=com", "An LDAP domain [domain/LDAP] cache_credentials = true id_provider = ldap auth_provider = ldap ldap_uri = ldap://ldap.example.com ldap_search_base = dc=example,dc=com ldap_id_use_start_tls = true ldap_tls_reqcert = demand ldap_tls_cacert = /etc/pki/tls/certs/ca-bundle.crt" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/Configuring_Domains-Configuring_a_Native_LDAP_Domain
Chapter 3. CloudCredential [operator.openshift.io/v1]
Chapter 3. CloudCredential [operator.openshift.io/v1] Description CloudCredential provides a means to configure an operator to manage CredentialsRequests. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 3.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object CloudCredentialSpec is the specification of the desired behavior of the cloud-credential-operator. status object CloudCredentialStatus defines the observed status of the cloud-credential-operator. 3.1.1. .spec Description CloudCredentialSpec is the specification of the desired behavior of the cloud-credential-operator. Type object Property Type Description credentialsMode string CredentialsMode allows informing CCO that it should not attempt to dynamically determine the root cloud credentials capabilities, and it should just run in the specified mode. It also allows putting the operator into "manual" mode if desired. Leaving the field in default mode runs CCO so that the cluster's cloud credentials will be dynamically probed for capabilities (on supported clouds/platforms). Supported modes: AWS/Azure/GCP: "" (Default), "Mint", "Passthrough", "Manual" Others: Do not set value as other platforms only support running in "Passthrough" logLevel string logLevel is an intent based logging for an overall component. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for their operands. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". managementState string managementState indicates whether and how the operator should manage the component observedConfig `` observedConfig holds a sparse config that controller has observed from the cluster state. It exists in spec because it is an input to the level for the operator operatorLogLevel string operatorLogLevel is an intent based logging for the operator itself. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for themselves. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". unsupportedConfigOverrides `` unsupportedConfigOverrides holds a sparse config that will override any previously set options. It only needs to contain the fields to override; it will end up overlaying in the following order: 1. hardcoded defaults 2. observedConfig 3. unsupportedConfigOverrides 3.1.2. .status Description CloudCredentialStatus defines the observed status of the cloud-credential-operator. 
Type object Property Type Description conditions array conditions is a list of conditions and their status conditions[] object OperatorCondition is just the standard condition fields. generations array generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. generations[] object GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. observedGeneration integer observedGeneration is the last generation change you've dealt with readyReplicas integer readyReplicas indicates how many replicas are ready and at the desired state version string version is the level this availability applies to 3.1.3. .status.conditions Description conditions is a list of conditions and their status Type array 3.1.4. .status.conditions[] Description OperatorCondition is just the standard condition fields. Type object Property Type Description lastTransitionTime string message string reason string status string type string 3.1.5. .status.generations Description generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. Type array 3.1.6. .status.generations[] Description GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. Type object Property Type Description group string group is the group of the thing you're tracking hash string hash is an optional field set for resources without generation that are content sensitive like secrets and configmaps lastGeneration integer lastGeneration is the last generation of the workload controller involved name string name is the name of the thing you're tracking namespace string namespace is where the thing you're tracking is resource string resource is the resource type of the thing you're tracking 3.2. API endpoints The following API endpoints are available: /apis/operator.openshift.io/v1/cloudcredentials DELETE : delete collection of CloudCredential GET : list objects of kind CloudCredential POST : create a CloudCredential /apis/operator.openshift.io/v1/cloudcredentials/{name} DELETE : delete a CloudCredential GET : read the specified CloudCredential PATCH : partially update the specified CloudCredential PUT : replace the specified CloudCredential /apis/operator.openshift.io/v1/cloudcredentials/{name}/status GET : read status of the specified CloudCredential PATCH : partially update status of the specified CloudCredential PUT : replace status of the specified CloudCredential 3.2.1. /apis/operator.openshift.io/v1/cloudcredentials Table 3.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of CloudCredential Table 3.2. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. 
Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the "next key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. 
Table 3.3. HTTP responses HTTP code Response body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind CloudCredential Table 3.4. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the "next key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. 
See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 3.5. HTTP responses HTTP code Response body 200 - OK CloudCredentialList schema 401 - Unauthorized Empty HTTP method POST Description create a CloudCredential Table 3.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.7. Body parameters Parameter Type Description body CloudCredential schema Table 3.8. HTTP responses HTTP code Response body 200 - OK CloudCredential schema 201 - Created CloudCredential schema 202 - Accepted CloudCredential schema 401 - Unauthorized Empty 3.2.2. /apis/operator.openshift.io/v1/cloudcredentials/{name} Table 3.9. Global path parameters Parameter Type Description name string name of the CloudCredential Table 3.10. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a CloudCredential Table 3.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be a non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. Zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 3.12. Body parameters Parameter Type Description body DeleteOptions schema Table 3.13. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified CloudCredential Table 3.14. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 3.15. HTTP responses HTTP code Response body 200 - OK CloudCredential schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified CloudCredential Table 3.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. 
- Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.17. Body parameters Parameter Type Description body Patch schema Table 3.18. HTTP responses HTTP code Response body 200 - OK CloudCredential schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified CloudCredential Table 3.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.20. Body parameters Parameter Type Description body CloudCredential schema Table 3.21. HTTP responses HTTP code Response body 200 - OK CloudCredential schema 201 - Created CloudCredential schema 401 - Unauthorized Empty 3.2.3. /apis/operator.openshift.io/v1/cloudcredentials/{name}/status Table 3.22. Global path parameters Parameter Type Description name string name of the CloudCredential Table 3.23. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified CloudCredential Table 3.24. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 3.25. HTTP responses HTTP code Response body 200 - OK CloudCredential schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified CloudCredential Table 3.26. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.27. Body parameters Parameter Type Description body Patch schema Table 3.28. HTTP responses HTTP code Response body 200 - OK CloudCredential schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified CloudCredential Table 3.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. 
The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.30. Body parameters Parameter Type Description body CloudCredential schema Table 3.31. HTTP responses HTTP code Response body 200 - OK CloudCredential schema 201 - Created CloudCredential schema 401 - Unauthorized Empty
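To make the specification concrete, the following sketch shows what a CloudCredential manifest might look like; the resource name cluster and the Mint mode are illustrative assumptions, not requirements stated in this reference:
apiVersion: operator.openshift.io/v1
kind: CloudCredential
metadata:
  name: cluster   # assumed name for illustration
spec:
  credentialsMode: Mint
  logLevel: Normal
Applying such a manifest with oc apply -f <file> exercises the create or replace endpoints described above, and oc get cloudcredential cluster -o yaml uses the read endpoint.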
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/operator_apis/cloudcredential-operator-openshift-io-v1
8.87. hwdata
8.87. hwdata 8.87.1. RHEA-2014:1553 - hwdata enhancement update An updated hwdata package that adds one enhancement is now available for Red Hat Enterprise Linux 6. The hwdata package contains tools for accessing and displaying hardware identification and configuration data. Enhancement BZ# 1064381 The PCI, USB, and vendor ID files have been updated with information about recently released hardware. Hardware utility tools that use these ID files are now able to correctly identify recently released hardware. Users of hwdata are advised to upgrade to this updated package, which adds this enhancement.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/hwdata
Chapter 4. Understanding upgrade channels and releases
Chapter 4. Understanding upgrade channels and releases In OpenShift Container Platform 4.1, Red Hat introduced the concept of channels for recommending the appropriate release versions for cluster updates. By controlling the pace of updates, these upgrade channels allow you to choose an update strategy. Upgrade channels are tied to a minor version of OpenShift Container Platform. For instance, OpenShift Container Platform 4.7 upgrade channels recommend updates to 4.7 and updates within 4.7. They also recommend updates within 4.6 and from 4.6 to 4.7, to allow clusters on 4.6 to eventually update to 4.7. They do not recommend updates to 4.8 or later releases. This strategy ensures that administrators explicitly decide to update to the next minor version of OpenShift Container Platform. Upgrade channels control only release selection and do not impact the version of the cluster that you install; the openshift-install binary file for a specific version of OpenShift Container Platform always installs that version. OpenShift Container Platform 4.7 offers the following upgrade channels: candidate-4.7 fast-4.7 stable-4.7 eus-4.y (only when running an even-numbered 4.y cluster release, like 4.6) Warning Red Hat recommends upgrading only to versions suggested by the OpenShift Update Service. For minor version updates, versions must be contiguous. Red Hat does not test updates to noncontiguous versions and cannot guarantee compatibility with earlier versions. 4.1. Upgrade channels and release paths Cluster administrators can configure the upgrade channel from the web console. 4.1.1. candidate-4.7 channel The candidate-4.7 channel contains candidate builds for a z-stream (4.7.z) and minor version releases. Release candidates contain all the features of the product but are not supported. Use release candidate versions to test feature acceptance and assist in qualifying the next version of OpenShift Container Platform. A release candidate is any build that is available in the candidate channel, including ones that do not contain a pre-release version such as -rc in their names. After a version is available in the candidate channel, it goes through more quality checks. If it meets the quality standard, it is promoted to the fast-4.7 or stable-4.7 channels. Because of this strategy, if a specific release is available in both the candidate-4.7 channel and in the fast-4.7 or stable-4.7 channels, it is a Red Hat-supported version. The candidate-4.7 channel can include release versions from which there are no recommended updates in any channel. You can use the candidate-4.7 channel to update from a previous minor version of OpenShift Container Platform. 4.1.2. fast-4.7 channel The fast-4.7 channel is updated with new and minor versions of 4.7 as soon as Red Hat declares the given version as a general availability release. As such, these releases are fully supported, are production quality, and have performed well while available as a release candidate in the candidate-4.7 channel from where they were promoted. Some time after a release appears in the fast-4.7 channel, it is added to the stable-4.7 channel. Releases never appear in the stable-4.7 channel before they appear in the fast-4.7 channel. You can use the fast-4.7 channel to update from a previous minor version of OpenShift Container Platform. 4.1.3. stable-4.7 channel While the fast-4.7 channel contains releases as soon as their errata are published, releases are added to the stable-4.7 channel after a delay. 
During this delay, data is collected from Red Hat SRE teams, Red Hat support services, and pre-production and production environments that participate in the connected customer program about the stability of the release. You can use the stable-4.7 channel to update from a previous minor version of OpenShift Container Platform. 4.1.4. eus-4.y channel In addition to the stable channel, all even-numbered minor versions of OpenShift Container Platform offer an Extended Update Support (EUS). These EUS versions extend the Full and Maintenance support phases for customers with Standard and Premium Subscriptions to 18 months. Although there is no difference between stable-4.y and eus-4.y channels until OpenShift Container Platform 4.y transitions to the EUS phase, you can switch to the eus-4.y channel as soon as it becomes available. When updates to the next EUS channel are offered, you can switch to the next EUS channel and update until you have reached the next EUS version. This update process does not apply for the eus-4.6 channel. Note Both standard and non-EUS subscribers can access all EUS repositories and necessary RPMs ( rhel-*-eus-rpms ) to be able to support critical purposes such as debugging and building drivers. 4.1.5. Upgrade version paths OpenShift Container Platform maintains an upgrade recommendation service that understands the version of OpenShift Container Platform you have installed as well as the path to take within the channel you choose to get you to the next release. You can imagine seeing the following in the fast-4.7 channel: 4.7.0 4.7.1 4.7.3 4.7.4 The service recommends only updates that have been tested and have no serious issues. It will not suggest updating to a version of OpenShift Container Platform that contains known vulnerabilities. For example, if your cluster is on 4.7.1 and OpenShift Container Platform suggests 4.7.4, then it is safe for you to update from 4.7.1 to 4.7.4. Do not rely on consecutive patch numbers. In this example, 4.7.2 is not and never was available in the channel. Update stability depends on your channel. The presence of an update recommendation in the candidate-4.7 channel does not imply that the update is supported. It means that no serious issues have been found with the update yet, but there might not be significant traffic through the update to suggest stability. The presence of an update recommendation in the fast-4.7 or stable-4.7 channels at any point is a declaration that the update is supported. While releases will never be removed from a channel, update recommendations that exhibit serious issues will be removed from all channels. Updates initiated after the update recommendation has been removed are still supported. Red Hat will eventually provide supported update paths from any supported release in the fast-4.7 or stable-4.7 channels to the latest release in 4.7.z, although there can be delays while safe paths away from troubled releases are constructed and verified. 4.1.6. Fast and stable channel use and strategies The fast-4.7 and stable-4.7 channels present a choice between receiving general availability releases as soon as they are available or allowing Red Hat to control the rollout of those updates. If issues are detected during rollout or at a later time, updates to that version might be blocked in both the fast-4.7 and stable-4.7 channels, and a new version might be introduced that becomes the new preferred update target. 
Customers can improve this process by configuring pre-production systems on the fast-4.7 channel, configuring production systems on the stable-4.7 channel, and participating in the Red Hat connected customer program. Red Hat uses this program to observe the impact of updates on your specific hardware and software configurations. Future releases might improve or alter the pace at which updates move from the fast-4.7 to the stable-4.7 channel. 4.1.7. Restricted network clusters If you manage the container images for your OpenShift Container Platform clusters yourself, you must consult the Red Hat errata that is associated with product releases and note any comments that impact updates. During an update, the user interface might warn you about switching between these versions, so you must ensure that you selected an appropriate version before you bypass those warnings. 4.1.8. Switching between channels You can switch a channel from the web console or through the following oc patch command: $ oc patch clusterversion version --type json -p '[{"op": "add", "path": "/spec/channel", "value": "<channel>"}]' The web console displays an alert if you switch to a channel that does not include the current release. The web console does not recommend any updates while on a channel without the current release. You can return to the original channel at any point, however. Changing your channel might impact the supportability of your cluster. The following conditions might apply: Your cluster is still supported if you change from the stable-4.7 channel to the fast-4.7 channel. You can switch to the candidate-4.7 channel, but some releases in this channel might be unsupported. You can switch from the candidate-4.7 channel to the fast-4.7 channel if your current release is a general availability release. You can always switch from the fast-4.7 channel to the stable-4.7 channel. There is a possible delay of up to a day for the release to be promoted to stable-4.7 if the current release was recently promoted.
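After switching channels, you can verify the configured channel and review which updates the recommendation service currently offers. The following commands are a minimal sketch that assumes the default ClusterVersion object name, version: $ oc get clusterversion version -o jsonpath='{.spec.channel}' $ oc adm upgrade The oc adm upgrade command prints the cluster's current version, the configured channel, and the recommended updates that are currently available in that channel.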
[ "oc patch clusterversion version --type json -p '[{\"op\": \"add\", \"path\": \"/spec/channel\", \"value\": \"<channel>\"}]'" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/updating_clusters/understanding-upgrade-channels-releases
2.3. Services
2.3. Services 2.3.1. Retrieving Services The API provides a set of services, each associated with a server path. For example, the service that manages the collection of virtual machines in the system is located in /vms , and the service that manages the virtual machine with identifier 123 is located in /vms/123 . In the Ruby software development kit, the root of that tree of services is implemented by the system service . It is retrieved by calling the system_service method of the connection: Retrieving System Service system_service = connection.system_service Once you have the reference to the system service , you can use it to retrieve references to other services, using the *_service methods (called service locators ). For example, to retrieve a reference to the service that manages the collection of virtual machines in the system, you can use the vms_service service locator: Retrieving Other Services vms_service = system_service.vms_service To retrieve a reference to the service that manages the virtual machine with identifier 123 , use the vm_service service locator, which takes the virtual machine identifier as a parameter: Retrieving Virtual Machine Service Using Identifier vm_service = vms_service.vm_service('123') Important The objects returned by the service locator calls are pure services, and do not contain data. For example, the vm_service Ruby object retrieved in the example is not the representation of a virtual machine. It is the service that is used to retrieve, update, delete, start, and stop a virtual machine. 2.3.2. Service Methods After you have located the service you want, you can call its service methods. These methods send requests to the server and do the real work. Services that manage collections of objects usually have the list and add methods. Services that manage a single object usually have the get , update , and remove methods. Services may have additional action methods, which perform actions other than retrieving, creating, updating, or removing. These methods are most commonly found in services that manage a single object. 2.3.2.1. Get The get method retrieves the representation of a single object. The following example locates and retrieves the representation of the virtual machine with identifier 123 : # Find the service that manages the virtual machine: vms_service = system_service.vms_service vm_service = vms_service.vm_service('123') # Retrieve the representation of the virtual machine: vm = vm_service.get The result will be an instance of the corresponding type. In this case, the result is an instance of the Ruby class Vm . The get method of some services supports additional parameters that control how to retrieve the representation of the object, or which representation to retrieve, if there is more than one. For example, you may want to retrieve the state that a virtual machine will have after the next boot. The get method of the service that manages a virtual machine supports a next_run Boolean parameter: Retrieving a Virtual Machine's next_run State # Retrieve the representation of the virtual machine; not the # current one, but the one that will be used after the # next boot: vm = vm_service.get(next_run: true) See the reference documentation of the software development kit for details. If the object cannot be retrieved, the software development kit will raise an Error exception, containing details of the failure. This will occur if you try to retrieve a non-existent object. 
Note The call to the service locator method never fails, even if the object does not exist, because the service locator method does not send a request to the server. In the following examples, the service locator method will succeed, while the get method will raise an exception: Locating the Service of Non-existent Virtual Machine: No Error # Find the service that manages a virtual machine that does # not exist. This will succeed. vm_service = vms_service.vm_service('non_existent_VM') Retrieving a Non-existent Virtual Machine Service: Error # Retrieve the virtual machine. This will raise an exception. vm = vm_service.get 2.3.2.2. List The list method retrieves the representations of multiple objects in a collection. Listing a Collection of Virtual Machines # Find the service that manages the collection of virtual # machines: vms_service = system_service.vms_service vms = vms_service.list The result is a Ruby array containing the instances of the corresponding types. In the above example, the response is a list of instances of the Ruby class Vm . The list method of some services supports additional parameters. For example, almost all of the top-level collections support a search parameter to filter the results, and a max parameter to limit the number of results returned by the server. Listing Ten Virtual Machines Called "my*" vms = vms_service.list(search: 'name=my*', max: 10) Note Not all the list methods support the search or max parameters. Some list methods may support other parameters. See the reference documentation for details. If the list of results is empty, the returned value will be an empty Ruby array. It will never be nil . If the list of results cannot be retrieved, the SDK will raise an Error exception containing the details of the failure. 2.3.2.3. Add Add methods add new elements to collections. They receive an instance of the relevant type describing the object to add, send the request to add it, and return an instance of the type describing the added object. Adding a New Virtual Machine # Add the virtual machine: vm = vms_service.add( OvirtSDK4::Vm.new( name: 'myvm', cluster: { name: 'mycluster' }, template: { name: 'mytemplate' } ) ) Important The Ruby object returned by the add method is an instance of the relevant type. It is not a service, just a container of data. In the above example, the returned object is an instance of the Vm class. If you need to perform an action on the virtual machine you just added, you must locate the service that manages it and call the service locator: Starting a New Virtual Machine # Add the virtual machine: vm = vms_service.add( ... ) # Find the service that manages the virtual machine: vm_service = vms_service.vm_service(vm.id) # Start the virtual machine: vm_service.start The creation of most objects is an asynchronous task. For example, if you create a new virtual machine, the add method will return the virtual machine before the virtual machine is completely created and ready to be used. You should poll the status of the object until it is completely created. For a virtual machine that means checking until the status is DOWN . The recommended approach is to create a virtual machine, locate the service that manages the new virtual machine, and retrieve the status repeatedly until the virtual machine status is DOWN , indicating that all the disks have been created. Adding a Virtual Machine, Locating Its Service, and Retrieving Its Status # Add the virtual machine: vm = vms_service.add( ... 
) # Find the service that manages the virtual machine: vm_service = vms_service.vm_service(vm.id) # Wait until the virtual machine is DOWN, indicating that all the # disks have been created: loop do sleep(5) vm = vm_service.get break if vm.status == OvirtSDK4::VmStatus::DOWN end If the object cannot be created, the SDK will raise an Error exception containing the details of the failure. It will never return nil . 2.3.2.4. Update Update methods update existing objects. They receive an instance of the relevant type describing the update to perform, send the request to update it, and return an instance of the type describing the updated object. Note The Ruby object returned by this update method is an instance of the relevant type. It is not a service, just a container of data. In this particular example the returned object will be an instance of the Vm class. In the following example, the service locator method locates the service managing the virtual machine and the update method updates its name: Updating a Virtual Machine Name # Find the virtual machine and the service that # manages it: vm = vms_service.list(search: 'name=myvm').first vm_service = vms_service.vm_service(vm.id) # Update the name: updated_vm = vm_service.update( OvirtSDK4::Vm.new( name: 'newvm' ) ) When you update an object, update only the attributes you want to update: Updating a Selected Attribute of a Virtual Machine (Recommended) vm_service.update( OvirtSDK4::Vm.new( name: 'newvm' ) ) Do not update the entire object: Updating All Attributes of a Virtual Machine (Not Recommended) # Retrieve the current representation: vm = vm_service.get # Modify it in memory: vm.name = 'newvm' # Send the complete object as the update: vm_service.update(vm) Updating all attributes of the virtual machine is a waste of resources and can introduce unexpected bugs on the server side. Update methods of some services support additional parameters that can be used to control how or what to update. For example, you may want to update the memory of a virtual machine, not in its current state, but the next time it is started. The update method of the service that manages a virtual machine supports a next_run Boolean parameter: Updating the Memory of a Virtual Machine at Next Run vm = vm_service.update( OvirtSDK4::Vm.new( memory: 1073741824 ), next_run: true ) If the update cannot be performed, the SDK will raise an Error exception containing the details of the failure. It will never return nil . 2.3.2.5. Remove Remove methods remove existing objects. They normally do not support parameters because they are methods of services that manage single objects, and the service already knows what object to remove. Removing a Virtual Machine with Identifier 123 vm_service = vms_service.vm_service('123') vm_service.remove Some remove methods support parameters that control how or what to remove. For example, it is possible to remove a virtual machine while preserving its disks, using the detach_only Boolean parameter: Removing a Virtual Machine while Preserving Disks vm_service.remove(detach_only: true) The remove method returns nil if the object is removed successfully. It does not return the removed object. If the object cannot be removed, the SDK will raise an Error exception containing the details of the failure. 2.3.2.6. Additional Actions There are additional action methods, apart from the methods described above. The service that manages a virtual machine has methods to start and stop it. Starting a Virtual Machine vm_service.start Some action methods include parameters that modify the operation. For example, the start method supports a use_cloud_init parameter. 
Starting a Virtual Machine with Cloud-Init vm_service.start(use_cloud_init: true) Most action methods return nil when they succeed, and raise an Error when they fail. Some action methods, however, return values. For example, the service that manages storage domains has an is_attached action method that checks whether the storage domain is already attached to a data center. The is_attached action method returns a Boolean value: Checking for Attached Storage Domain sds_service = system_service.storage_domains_service sd_service = sds_service.storage_domain_service('123') if sd_service.is_attached ... end See the reference documentation of the software development kit to see the action methods supported by each service, their parameters, and return values.
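Because failed service methods raise exceptions rather than returning nil, you can wrap calls in a begin/rescue block when a missing object is an expected condition. The following is a minimal sketch, assuming the vms_service variable from the earlier examples: Handling a Missing Virtual Machine # Locating the service does not contact the server: vm_service = vms_service.vm_service('123') begin # The request is sent here; an exception is raised if the # virtual machine does not exist: vm = vm_service.get puts "Found virtual machine: #{vm.name}" rescue OvirtSDK4::Error => e puts "Could not retrieve the virtual machine: #{e.message}" end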
[ "system_service = connection.system_service", "vms_service = system_service.vms_service", "vm_service = vms_service.vms_service('123')", "Find the service that manages the virtual machine: vms_service = system_service.vms_service vm_service = vms_service.vm_service('123') Retrieve the representation of the virtual machine: vm = vm_service.get", "Retrieve the representation of the virtual machine; not the current one, but the one that will be used after the next boot: vm = vm_service.get(next_run: true)", "Find the service that manages a virtual machine that does not exist. This will succeed. vm_service = vms_service.vm_service('non_existent_VM')", "Retrieve the virtual machine. This will raise an exception. vm = vm_service.get", "Find the service that manages the collection of virtual machines: vms_service = system_service.vms_service vms = vms_service.list", "vms = vms_service.list(search: 'name=my*', max: 10)", "Add the virtual machine: vm = vms_service.add( OvirtSDK4::Vm.new( name: 'myvm', cluster: { name: 'mycluster' }, template: { name: 'mytemplate' } ) )", "Add the virtual machine: vm = vms_service.add( ) Find the service that manages the virtual machine: vm_service = vms_service.vm_service(vm.id) Start the virtual machine: vm_service.start", "Add the virtual machine: vm = vms_service.add( ) Find the service that manages the virtual machine: vm_service = vms_service.vm_service(vm.id) Wait until the virtual machine is DOWN, indicating that all the disks have been created: loop do sleep(5) vm = vm_service.get break if vm.status == OvirtSDK4::VmStatus::DOWN end", "Find the virtual machine and the service that manages it: vm = vms_service.list(search: 'name=myvm').first vm_service = vms_service.vm_service(vm.id) Update the name: updated_vm = vms_service.update( OvirtSDK4::Vm.new( name: 'newvm' ) )", "vm = vm_service.get vm.name = 'newvm'", "Retrieve the current representation: vms_service.update(vm)", "vm = vm_service.update( OvirtSDK4::Vm.new( memory: 1073741824 ), next_run: true )", "vm_service = vms_service.vm_service('123') vms_service.remove", "vm_service.remove(detach_only: true)", "vm_service.start", "vm_service.start(use_cloud_init: true)", "sds_service = system_service.storage_domains_service sd_service = sds_service.storage_domain_service('123') if sd_service.is_attached end" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/ruby_sdk_guide/Services
Chapter 2. Working with model registries
Chapter 2. Working with model registries 2.1. Registering a model As a data scientist, you can register a model from the OpenShift AI dashboard. Prerequisites You have logged in to Red Hat OpenShift AI. If you are using OpenShift AI groups, you are part of the user group or admin group (for example, rhoai-users or rhoai-admins ) in OpenShift. You have access to an available model registry in your deployment. Procedure From the OpenShift AI dashboard, click Model Registry . From the Model registry drop-down menu, select the model registry that you want to register a model in. Click Register model . The Register model dialog opens. In the Model details section, configure details to apply to all versions of the model: In the Model name field, enter a name for the model. Optional: In the Model description field, enter a description for the model. In the Version details section, enter details to apply to the first version of the model: In the Version name field, enter a name for the model version. Optional: In the Version description field, enter a description for the first version of the model. In the Source model format field, enter the name of the model format, for example, onnx . In the Source model format version field, enter the version of the model format. In the Model location section, specify the location of the model by providing either object storage details, or a URI. To provide object storage details, ensure that the Object storage radio button is selected. To autofill the details of an existing connection: Click Autofill from connection . In the Autofill from connection dialog that opens, from the Project drop-down list, select the data science project that contains the connection. From the Connection name drop-down list, select the connection that you want to use. This list contains only object storage types which contain a bucket. Click Autofill . Alternatively, manually fill out your object storage details: In the Endpoint field, enter the endpoint of your S3-compatible object storage bucket. In the Bucket field, enter the name of your S3-compatible object storage bucket. In the Region field, enter the region of your S3-compatible object storage account. In the Path field, enter a path to a model or folder. This path cannot point to a root folder. To provide a URI, ensure that the URI radio button is selected. In the URI field, enter the URI for the model. Important Deployment of models that are registered by using a URI is not currently supported for this feature. Click Register model . Verification The new model appears on the Model Registry page. 2.2. Registering a model version You can register a new version of an existing model. Prerequisites You have logged in to Red Hat OpenShift AI. If you are using OpenShift AI groups, you are part of the user group or admin group (for example, rhoai-users or rhoai-admins ) in OpenShift. You have access to an available model registry in your deployment. Procedure From the OpenShift AI dashboard, click Model Registry . From the Model registry drop-down menu, select the model registry that you want to register a model in. In the Model name column, click the name of the model that you want to register a new version of. The details page for the model opens. Click Register new version . In the Version details section, enter details to apply to the new version of the model: In the Version name field, enter a name for the model version. Optional: In the Version description field, enter a description for the new version of the model. 
In the Source model format field, enter the name of the model format, for example, onnx . In the Source model format version field, enter the version of the model format. In the Model location section, specify the location of the model by providing either object storage details, or a URI. To provide object storage details, ensure that the Object storage radio button is selected. To autofill the details of an existing connection: Click Autofill from connection . In the Autofill from connection dialog that opens, from the Project drop-down list, select the data science project that contains the connection. From the Connection name drop-down list, select the connection that you want to use. This list contains only object storage types which contain a bucket. Click Autofill . Alternatively, manually fill out your object storage details: In the Endpoint field, enter the endpoint of your S3-compatible object storage bucket. In the Bucket field, enter the name of your S3-compatible object storage bucket. In the Region field, enter the region of your S3-compatible object storage account. In the Path field, enter a path to a model or folder. This path cannot point to a root folder. To provide a URI, ensure that the URI radio button is selected. In the URI field, enter the URI for the model. Important Deployment of models that are registered by using a URI is not currently supported for this feature. Click Register new version . Verification The new model version appears on the details page for the model. 2.3. Viewing registered models You can view the details of models registered in OpenShift AI, such as registered versions, deployments, and metadata associated with the model. Prerequisites You have logged in to Red Hat OpenShift AI. If you are using OpenShift AI groups, you are part of the user group or admin group (for example, rhoai-users or rhoai-admins ) in OpenShift. An available model registry exists in your deployment, and contains at least 1 registered model. You have access to the model registry that contains the model that you want to view. Procedure From the OpenShift AI dashboard, click Model Registry . From the Model registry drop-down menu, select the model registry that contains the model that you want to view. The Model Registry page provides a high-level view of registered models, including the model name, labels, last modified timestamp, and owner of each model. Models are sorted by their Last modified timestamp by default. Use the search bar to find a model in the list. You can search with a keyword by default, or click the search bar drop-down menu and select Owner to search by model owner. Searching by keyword will perform a search across the name, description, and labels of registered models and their versions. Click the name of a model to view more details. The details page for the model opens. On the Versions tab, you can view registered versions of the model. On the Details tab, you can view the description, labels, custom properties, model ID, owner, and last modification and creation timestamps for the model. Verification You can view information about the selected model on the details page for the model. 2.4. Viewing registered model versions You can view the details of model versions that are registered in OpenShift AI, such as the version metadata and deployment information. Prerequisites You have logged in to Red Hat OpenShift AI. 
If you are using OpenShift AI groups, you are part of the user group or admin group (for example, rhoai-users or rhoai-admins ) in OpenShift. An available model registry exists in your deployment, and contains at least 1 registered model. You have access to the model registry that contains the model version that you want to view. Procedure From the OpenShift AI dashboard, click Model Registry . From the Model registry drop-down menu, select the model registry that contains the model version that you want to view. Click the name of a model to view its versions. The details page for the model opens. On the Versions tab, you can view registered versions of the model. Versions are sorted by their Last modified timestamp by default. Use the search bar to find a version in the list. You can search with a keyword by default, or click the search bar drop-down menu and select Author to search by model author. Searching by keyword will perform a search across the name, description, and labels of registered models and their versions. Click the name of a version to view more details. The details page for the version opens. On the Details tab, you can view the description, labels, custom properties, version ID, author, and last modification and registration timestamps for the model. You can also view the source model format and location information for the model. On the Deployments tab, you can view deployments initiated from the model registry for this version. Click the name of a deployment to open its metrics page. For information about model metrics on the single-model serving platform, see Serving large models: Monitoring model performance . For information about model metrics on the multi-model serving platform, see Serving small and medium sized models: Monitoring model performance . Verification You can view the details of registered model versions on the Model Registry page. 2.5. Editing model metadata in a model registry You can edit the metadata of models registered in OpenShift AI, such as the model's description, labels, and custom properties. Prerequisites You have logged in to Red Hat OpenShift AI. If you are using OpenShift AI groups, you are part of the user group or admin group (for example, rhoai-users or rhoai-admins ) in OpenShift. An available model registry exists in your deployment, and contains at least 1 registered model. You have access to the model registry that contains the model that you want to edit. Procedure From the OpenShift AI dashboard, click Model Registry . From the Model registry drop-down menu, select the model registry that contains the model that you want to edit. The Model Registry page provides a high-level view of registered models, including the model name, labels, last modified timestamp, and owner of each model. Click the name of a model to view more details. The details page for the model opens. On the Details tab, you can edit metadata for the model. In the Description section, click Edit to edit the description of the model. In the Labels section, click Edit to edit the labels of the model. In the Properties section, click Add property to add a new property to the model. To edit an existing property, click the action menu ( ... ) beside the property, and then click Edit . To delete a property, click the action menu ( ... ) beside the property, and then click Delete . Verification You can view the updated metadata on the details page for the model. 2.6. 
Editing model version metadata in a model registry You can edit the metadata of model versions that are registered in OpenShift AI, such as the version's description, labels, and custom properties. Prerequisites You have logged in to Red Hat OpenShift AI. If you are using OpenShift AI groups, you are part of the user group or admin group (for example, rhoai-users or rhoai-admins ) in OpenShift. An available model registry exists in your deployment, and contains at least 1 registered model. You have access to the model registry that contains the model version that you want to edit. Procedure From the OpenShift AI dashboard, click Model Registry . From the Model registry drop-down menu, select the model registry that contains the model version that you want to edit. Click the name of a model to view more details. The details page for the model opens. Click the name of a version to view more details. The details page for the version opens. On the Details tab, you can edit the version metadata. In the Description section, click Edit to edit the description of the version. In the Labels section, click Edit to edit the labels of the version. In the Properties section, click Add property to add a new property to the version. To edit an existing property, click the action menu ( ... ) beside the property, and then click Edit . To delete a property, click the action menu ( ... ) beside the property, and then click Delete . Verification You can view the updated metadata on the details page for the model version. 2.7. Deploying a model version from a model registry You can deploy a version of a registered model directly from a model registry. Prerequisites An available model registry exists in your deployment, and contains at least 1 registered model. To deploy a model version by using the single-model serving platform, you have fulfilled the prerequisites described in Deploying a model by using the single-model serving platform . To deploy a model version by using the multi-model serving platform, you have fulfilled the prerequisites described in Deploying a model by using the multi-model serving platform . Procedure From the OpenShift AI dashboard, click Model Registry . From the Model registry drop-down menu, select the model registry from which you want to deploy a model version. In the Model name column, click the name of the model that contains the version that you want to deploy. The details page for the model opens. Click the action menu ( ... ) beside the model version that you want to deploy. Click Deploy . In the Deploy model dialog, configure the following properties for deploying your model: From the Project drop-down list, select a project in which to deploy your model. Optional: In the Model deployment name field, enter a name for the model deployment. Each deployment is named in the following format by default: <model name> <model version name> <deployment creation timestamp> This will be the name of the inference service that is created when the model is deployed. Configure the remaining properties for deploying your model, as described in Deploying a model by using the multi-model serving platform or Deploying a model by using the single-model serving platform . Click Deploy . Verification The new deployment appears on the Deployments tab for the model version. You can edit the model version deployment by clicking the action menu ( ... 
) beside it, and then clicking Edit . You can delete the model version deployment by clicking the action menu ( ... ) beside it, and then clicking Delete . 2.8. Editing the deployment properties of a deployed model version from a model registry You can edit model version deployment properties from a model registry for models that were deployed from the registry. For example, you can change the deployment name, model framework, and source model location details. 2.8.1. Editing the deployment properties of a model deployed by using the multi-model serving platform You can edit the deployment properties of a deployed model version from a model registry. For example, you can change the deployment name, model framework, and source model location details. Prerequisites You have logged in to Red Hat OpenShift AI. If you are using OpenShift AI groups, you are part of the user group or admin group (for example, rhoai-users or rhoai-admins ) in OpenShift. An available model registry exists in your deployment, and contains at least 1 registered and deployed model version. You have access to the model registry that contains the model version deployment that you want to edit. Procedure From the OpenShift AI dashboard, click Model Registry . From the Model registry drop-down menu, select the model registry that contains the model deployment that you want to edit. In the Model name column, click the name of the model that contains the deployment that you want to edit. The details page for the model opens. Click the name of the model version with the deployment that you want to edit. Click Deployments . Click the action menu ( ... ) beside the model deployment that you want to edit. Click Edit . In the Edit model dialog, edit the model deployment properties: In the Model deployment name field, enter a new, unique name for your model deployment. From the Model framework list, select a different framework for your model. Note The Model framework list shows only the frameworks that are supported by the model serving runtime that you specified when you configured your model server. Edit the connection by specifying an existing connection, or by creating a new connection. Click Redeploy . Verification The model redeploys and appears with updated details on the Deployments tab for the model version. 2.8.2. Editing the deployment properties of a model deployed by using the single-model serving platform You can edit the deployment properties of a deployed model version from a model registry. For example, you can change the deployment name, model framework, number of model server replicas, model server size, and source model location details. Prerequisites You have logged in to Red Hat OpenShift AI. If you are using OpenShift AI groups, you are part of the user group or admin group (for example, rhoai-users or rhoai-admins ) in OpenShift. An available model registry exists in your deployment, and contains at least 1 registered and deployed model version. You have access to the model registry that contains the model version deployment that you want to edit. Procedure From the OpenShift AI dashboard, click Model Registry . From the Model registry drop-down menu, select the model registry that contains the model deployment that you want to edit. In the Model name column, click the name of the model that contains the deployment that you want to edit. The details page for the model opens. Click the name of the model version with the deployment that you want to edit. Click Deployments . Click the action menu ( ... 
) beside the model deployment that you want to edit. Click Edit . In the Edit model dialog, edit the model deployment properties: In the Model deployment name field, enter a new, unique name for your model deployment. From the Model framework list, select a different framework for your model. Note The Model framework list shows only the frameworks that are supported by the model serving runtime that you specified when you deployed your model. In the Number of model server replicas to deploy field, specify a value. From the Model server size list, select a value. In the Model route section, select the Make deployed models available through an external route checkbox to make your deployed models available to external clients. In the Token authentication section, select the Require token authentication checkbox to require token authentication for your model server. To finish configuring token authentication, perform the following actions: In the Service account name field, enter a service account name for which the token will be generated. The generated token is created and displayed in the Token secret field when the model server is configured. To add an additional service account, click Add a service account and enter another service account name. Edit the connection by specifying an existing connection, or by creating a new connection. Customize the runtime parameters in the Configuration parameters section: Modify the values in Additional serving runtime arguments to define how the deployed model behaves. Modify the values in Additional environment variables to define variables in the model's environment. The Configuration parameters section shows predefined serving runtime parameters, if any are available. Note Do not modify the port or model serving runtime arguments, because they require specific values to be set. Overwriting these parameters can cause the deployment to fail. Click Redeploy . Verification The model redeploys and appears with updated details on the Deployments tab for the model version. 2.9. Deleting a deployed model version from a model registry You can delete the deployments of model versions from a model registry. Prerequisites You have logged in to Red Hat OpenShift AI. If you are using OpenShift AI groups, you are part of the user group or admin group (for example, rhoai-users or rhoai-admins ) in OpenShift. An available model registry exists in your deployment, and contains at least 1 registered model with a deployed model version. You have access to the model registry that contains the model version deployment that you want to delete. Procedure From the OpenShift AI dashboard, click Model Registry . From the Model registry drop-down menu, select the model registry that contains the deployment that you want to delete. Click the name of a model to view more details. The details page for the model opens. Click the name of the model version with the deployment that you want to delete. The details page for the model version opens. Click Deployments . To delete a deployment, click the action menu ( ... ) beside the deployment, and then click Delete . The Delete deployed model? dialog opens. Enter the name of the model deployment in the text field to confirm that you intend to delete it. Click Delete deployed model . Verification The model deployment no longer appears on the Deployments tab for the model version. 2.10. Archiving a model You can archive a model that you no longer require. 
The model and all of its versions will be archived and unavailable for use unless it is restored. Important Models with deployed versions cannot be archived. To archive a model, you must first delete all deployments of its registered versions from the Model Serving Deployed models page. Prerequisites You have logged in to Red Hat OpenShift AI. If you are using OpenShift AI groups, you are part of the user group or admin group (for example, rhoai-users or rhoai-admins ) in OpenShift. An available model registry exists in your deployment, and contains at least 1 registered model. You have access to the model registry that contains the model that you want to archive. Procedure From the OpenShift AI dashboard, click Model Registry . From the Model registry drop-down menu, select the model registry that you want to archive a model in. Click the action menu ( ... ) beside the model that you want to archive. Click Archive model . In the Archive model? dialog that appears, enter the name of the model in the text field to confirm that you intend to archive it. Click Archive . Verification The model no longer appears on the Model Registry page. The model now appears on the archived models page for the model registry. 2.11. Archiving a model version You can archive a model version that you no longer require. The model version will be archived and unavailable for use unless it is restored. Important Deployed model versions cannot be archived. To archive a model version, you must first delete all deployments of the version from the Model Serving Deployed models page. Prerequisites You have logged in to Red Hat OpenShift AI. If you are using OpenShift AI groups, you are part of the user group or admin group (for example, rhoai-users or rhoai-admins ) in OpenShift. An available model registry exists in your deployment, and contains at least 1 registered model. You have access to the model registry that contains the model version that you want to archive. Procedure From the OpenShift AI dashboard, click Model Registry . From the Model registry drop-down menu, select the model registry that you want to archive a model version in. In the Model name column, click the name of the model that contains the version that you want to archive. The details page for the model opens. Click the action menu ( ... ) beside the version that you want to archive. Click Archive model version . In the Archive version? dialog that appears, enter the name of the model version in the text field to confirm that you intend to archive it. Click Archive . Verification The model version no longer appears on the details page for the model. The model version now appears on the archived versions page for the model. 2.12. Restoring a model You can restore an archived model. The model and all of its versions will be restored and returned to the registered models list. Prerequisites You have logged in to Red Hat OpenShift AI. If you are using OpenShift AI groups, you are part of the user group or admin group (for example, rhoai-users or rhoai-admins ) in OpenShift. An available model registry exists in your deployment, and contains at least one archived model. You have access to the model registry that contains the model that you want to restore. Procedure From the OpenShift AI dashboard, click Model Registry . From the Model registry drop-down menu, select the model registry that you want to restore a model in. Click the action menu ( ... ) beside the Register model drop-down menu, and then click View archived models . 
The archived models page for the model registry opens. Click the action menu ( ... ) beside the model that you want to restore. Click Restore model . In the Restore model? dialog that appears, click Restore . Verification The model appears on the Model Registry page. The model no longer appears on the archived models page for the model registry. 2.13. Restoring a model version You can restore an archived model version. The model version will be restored and returned to the versions list for the model. Prerequisites You have logged in to Red Hat OpenShift AI. If you are using OpenShift AI groups, you are part of the user group or admin group (for example, rhoai-users or rhoai-admins ) in OpenShift. An available model registry exists in your deployment, and contains at least one archived model version. You have access to the model registry that contains the model version that you want to restore. Procedure From the OpenShift AI dashboard, click Model Registry . From the Model registry drop-down menu, select the model registry that you want to restore a model version in. In the Model name column, click the name of the model that contains the version that you want to restore. The details page for the model opens. Click the action menu ( ... ) beside the Register new version drop-down menu, and then click View archived versions . The archived versions page for the model opens. Click the action menu ( ... ) beside the version that you want to restore. Click Restore version . In the Restore version? dialog that appears, click Restore . The details page for the version opens. Verification The model version appears on the details page for the model. The model version no longer appears on the archived versions page for the model.
null
https://docs.redhat.com/en/documentation/red_hat_openshift_ai_cloud_service/1/html/working_with_model_registries/working-with-model-registriesmodel-registry
Chapter 8. Copying files between buckets
Chapter 8. Copying files between buckets To copy files between buckets in your object store from your workbench, use the copy() method. Prerequisites You have cloned the odh-doc-examples repository to your workbench. You have opened the s3client_examples.ipynb file in your workbench. You have installed Boto3 and configured an S3 client. You know the key of the source file that you want to copy, and the bucket that the file is stored in. Procedure In the notebook, locate the following instructions to copy files between buckets: Within the copy_source block, replace <bucket_name> with the name of the source bucket and <key> with the key of the source file, as shown in the example. Replace the <destination_bucket> with the name of the bucket to copy to, and <destination_key> with the name of the key to copy to, as shown in the example. Execute the code cell. Verification Locate the following instructions to list objects in a bucket. Replace <bucket_name> with the name of the destination bucket, as shown in the example, and run the code cell. The file that you copied is displayed in the output.
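Note that list_objects_v2 returns at most 1,000 objects per call, and the response contains no Contents key when the bucket is empty, so the verification loop shown in the instructions can raise a KeyError for an empty destination bucket. The following is a minimal sketch of a more robust verification that uses a paginator; it assumes the same s3_client object, and the bucket name placeholder works as in the earlier examples: #Copy Verification with a paginator bucket_name = '<bucket_name>' paginator = s3_client.get_paginator('list_objects_v2') for page in paginator.paginate(Bucket=bucket_name): # 'Contents' is absent when a page holds no objects for obj in page.get('Contents', []): print(obj['Key'])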
[ "#Copying files between buckets #Replace the placeholder values with your own. copy_source = { 'Bucket': '<bucket_name>', 'Key': '<key>' } s3_client.copy(copy_source, '<destination bucket>', '<destination_key>')", "copy_source = { 'Bucket': 'aqs086-image-registry', 'Key': 'series43-image12-086.csv' }", "s3_client.copy(copy_source, 'aqs971-image-registry', '/tmp/series43-image12-086.csv')", "#Copy Verification bucket_name = '<bucket_name>' for key in s3_client.list_objects_v2(Bucket=bucket_name)['Contents']: print(key['Key'])", "#Copy Verification bucket_name = 'aqs971-image-registry' for key in s3_client.list_objects_v2(Bucket=bucket_name)['Contents']: print(key['Key'])." ]
https://docs.redhat.com/en/documentation/red_hat_openshift_ai_cloud_service/1/html/working_with_data_in_an_s3-compatible_object_store/copying-files-to-between-buckets_s3
Chapter 6. Using a password vault with Red Hat JBoss Web Server
Chapter 6. Using a password vault with Red Hat JBoss Web Server The JBoss Web Server password vault, which is named tomcat-vault , is a PicketLink vault extension for Apache Tomcat. You can use the password vault to mask passwords and other sensitive strings, and to store sensitive information in an encrypted Java keystore. When you use the password vault, you can stop storing clear-text passwords in your Tomcat configuration files. Tomcat can use the password vault to search for passwords and other sensitive strings from a keystore. Important For more information about using the CRYPT feature with the password vault, see Using CRYPT . Note The Federal Information Processing Standard (FIPS) 140-2 does not support the password-based encryption that is provided by tomcat-vault . If you want to use password-based encryption on the JBoss Web Server host, you must ensure that FIPS is disabled. If you attempt to use tomcat-vault when FIPS mode is enabled, the following error message is displayed: Security Vault can't be used in FIPS mode 6.1. Password vault installation from an archive file When you install JBoss Web Server from an archive file, the password vault is installed automatically when you install the jws-6.0.0-application-server.zip file. The password vault is located in the JWS_HOME /tomcat/lib/tomcat-vault.jar file. 6.2. Installing the password vault on RHEL by using the DNF package manager When you install JBoss Web Server on Red Hat Enterprise Linux from RPM packages, you can use the DNF package manager to install the password vault. Procedure Enter the following command as the root user: 6.3. Enabling the password vault in JBoss Web Server You can enable the password vault by adding a configuration property in the catalina.properties file. Prerequisites You have installed the password vault from an archive file or by using the DNF package manager . Procedure Stop Tomcat if it is already running. Open the JWS_HOME /tomcat/conf/catalina.properties file. In the catalina.properties file, enter the following line: org.apache.tomcat.util.digester.PROPERTY_SOURCE=org.apache.tomcat.vault.util.PropertySourceVault Note In the preceding example, replace JWS_HOME with the path to your JBoss Web Server installation. The paths shown in this example use a forward slash ( / ) for directory separators. 6.4. Creating a Java keystore in JBoss Web Server Before you use the password vault, you must first create a Java keystore by using the keytool -genseckey command. Procedure Enter the following command: Note In the preceding example, replace the parameter settings with values that are appropriate for your environment. For more information about each parameter, use the keytool -genseckey -help command. Important The password vault does not currently support the PKCS12 keystore type. The password vault supports the JCEKS keystore type only. Depending on the keystore algorithm that you are using, you must specify one of the following keysize values: If you are using AES, specify -keysize 128 . If you are using DES, specify -keysize 56 . If you are using DESede, specify -keysize 168 . 6.5. Password vault initialization for Apache Tomcat You can use the tomcat-vault.sh script to initialize the password vault for Apache Tomcat. 
The tomcat-vault.sh script supports either of the following mechanisms to initialize the password vault: Interactive setup Noninteractive setup Note Depending on how you installed the password vault, the location of the tomcat-vault script varies: If you installed the password vault from an archive file, the tomcat-vault.sh script is located in the JWS_HOME /tomcat/bin directory. If you installed the password vault by using the DNF package manager, the tomcat-vault.sh script is located in the /opt/rh/jws6/root/usr/bin directory. 6.5.1. Initializing password vault for Apache Tomcat interactively You can initialize the password vault for Tomcat interactively. In this situation, the tomcat-vault.sh script prompts you to enter values while the script is running. Procedure Go to the directory that contains the tomcat-vault.sh script: If you installed the password vault from an archive file, go to the JWS_HOME /tomcat/bin directory. If you installed the password vault from an RPM package, go to the /opt/rh/jws6/root/usr/bin directory. Run the tomcat-vault.sh script: Follow the on-screen prompts. For example: In the preceding example, replace the specified settings with values that are appropriate for your environment. Note the output for the Tomcat properties file. You need this information when configuring Tomcat to use the password vault. 6.5.2. Initializing password vault for Apache Tomcat by using a noninteractive setup You can initialize the password vault for Tomcat by using a noninteractive setup. In this situation, you must provide the required input as arguments to the tomcat-vault.sh script when you run the script. Procedure Go to the directory that contains the tomcat-vault.sh script: If you installed the password vault from an archive file, go to the JWS_HOME /tomcat/bin directory. If you installed the password vault from an RPM package, go to the /opt/rh/jws6/root/usr/bin directory. Run the tomcat-vault.sh script and provide the required arguments: For example: In the preceding example, replace the specified settings with values that are appropriate for your environment. Note When you specify the -g, --generate-config option, the tomcat-vault.sh script also creates a vault.properties file that contains the specified properties. 6.6. Configuring Tomcat to use the password vault You can configure Apache Tomcat to use the password vault by updating configuration settings in the vault.properties file. Prerequisites You have initialized the password vault for Tomcat . Procedure Go to the JWS_HOME /tomcat/conf/ directory. Create a file named vault.properties . In the vault.properties file, enter the vault configuration properties that you specified when you initialized the password vault for Tomcat. For example: Note The preceding example is based on the example vault settings in Initializing password vault for Apache Tomcat interactively . For the KEYSTORE_PASSWORD setting, ensure that you use the masked value that was generated when you initialized the password vault. 6.7. External password vault configuration You can store the vault.properties file for the password vault outside of the JWS_HOME /tomcat/conf/ directory. If you have already set a CATALINA_BASE /conf/ directory, you can store the vault.properties file in the CATALINA_BASE /conf/ directory. For more information about setting the CATALINA_BASE directory, see the "Advanced Configuration - Multiple Tomcat Instances" section in Running The Apache Tomcat 10.1 Servlet/JSP Container on the Apache Tomcat website. 
Note The default location for CATALINA_BASE is JWS_HOME /tomcat/ . This is also known as the CATALINA_HOME directory. Additional Resources Apache Tomcat 10: Introduction - Directories and Files Running The Apache Tomcat 10.1 Servlet/JSP Container : "Advanced Configuration - Multiple Tomcat Instances" 6.8. Storing a sensitive string in the password vault You can use the tomcat-vault.sh script to store sensitive strings in the password vault. You can run the tomcat-vault.sh script interactively or in a noninteractive mode. When you add a sensitive string to the password vault, you must specify a name for the string. In this situation, the name of the string is called an attribute name , and the string itself is called a secured attribute . Procedure Go to the directory that contains the tomcat-vault.sh script: If you installed the password vault from an archive file, go to the JWS_HOME /tomcat/bin directory. If you installed the password vault from an RPM package, go to the /opt/rh/jws6/root/usr/bin directory. To use the tomcat-vault.sh script in noninteractive mode, enter the following command: Note The preceding example is based on the example vault settings in Initializing password vault for Apache Tomcat interactively . The preceding example stores the sensitive string, P@SSW0#D , with the attribute name, manager_password . When you run the tomcat-vault.sh script, you can optionally specify a vault block to store the password in. If you do not specify a block, the tomcat-vault.sh script creates a block automatically. The preceding example specifies a vault block named my_block . 6.9. Using a stored sensitive string in your Tomcat configuration When you store a sensitive string in the password vault, you can refer to the attribute name rather than specify the actual string in your configuration files. By replacing a secured string with the attribute name for the string, you can ensure that the Tomcat configuration file contains only a reference to the password. In this situation, the actual password is stored in the password vault only. Procedure Open the Tomcat configuration file that contains the sensitive string. Replace the sensitive string with the attribute name for the string, and ensure that you enter the attribute name in the following format: ${VAULT:: block_name :: attribute_name ::} For example: Consider the following example file entry for the secured string, P@SSW0#D : <user username="manager" password="P@SSW0#D" roles="manager-gui"/> If the secured string, P@SSW0#D , has the attribute name, manager_password , replace the secured string with the following value: <user username="manager" password="${VAULT::my_block::manager_password::}" roles="manager-gui"/> Note The preceding example is based on the example settings in Storing a sensitive string in the password vault . The preceding example replaces a sensitive string, P@SSW0#D , with an attribute name, manager_password , that is in a block called my_block .
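The same ${VAULT::...} substitution can be used in any Tomcat configuration file that is processed with the configured property source, such as server.xml. As an illustration only, if you had also stored a keystore password under a hypothetical attribute name, keystore_password , in the my_block block, an HTTPS connector could reference it as follows; the file path and attribute name are illustrative assumptions, not values created by the earlier examples: <Connector port="8443" protocol="org.apache.coyote.http11.Http11NioProtocol" SSLEnabled="true"> <SSLHostConfig> <Certificate certificateKeystoreFile="${catalina.base}/conf/keystore.jks" certificateKeystorePassword="${VAULT::my_block::keystore_password::}" /> </SSLHostConfig> </Connector>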
[ "dnf install jws6-tomcat-vault", "org.apache.tomcat.util.digester.PROPERTY_SOURCE=org.apache.tomcat.vault.util.PropertySourceVault", "keytool -genseckey -keystore JWS_HOME/tomcat/vault.keystore -alias my_vault -storetype jceks -keyalg AES -keysize 128 -storepass <vault_password> -keypass <vault_password> -validity 730", "./tomcat-vault.sh", "WARNING JBOSS_HOME may be pointing to a different installation - unpredictable results may occur. ========================================================================= JBoss Vault JBOSS_HOME: JWS_HOME/tomcat JAVA: java ========================================================================= ********************************** **** JBoss Vault *************** ********************************** Please enter a Digit:: 0: Start Interactive Session 1: Remove Interactive Session 2: Exit 0 Starting an interactive session Enter directory to store encrypted files: JWS_HOME /tomcat/ Enter Keystore URL: JWS_HOME /tomcat/vault.keystore Enter Keystore password: <vault_password> Enter Keystore password again: <vault_password> Values match Enter 8 character salt: 1234abcd Enter iteration count as a number (Eg: 44): 120 Enter Keystore Alias: my_vault Initializing Vault Jun 16, 2018 10:24:27 AM org.apache.tomcat.vault.security.vault.PicketBoxSecurityVault init INFO: PBOX000361: Default Security Vault Implementation Initialized and Ready Vault Configuration in tomcat properties file: ******************************************** KEYSTORE_URL=JWS_HOME/tomcat/vault.keystore KEYSTORE_PASSWORD=MASK-3CuP21KMHn7G6iH/A3YpM/ KEYSTORE_ALIAS=my_vault SALT=1234abcd ITERATION_COUNT=120 ENC_FILE_DIR=JWS_HOME/tomcat/ ******************************************** Vault is initialized and ready for use Handshake with Vault complete Please enter a Digit:: 0: Store a secured attribute 1: Check whether a secured attribute exists 2: Exit 2", "./tomcat-vault.sh --keystore JWS_HOME /tomcat/vault.keystore --keystore-password <vault_password> --alias my_vault --enc-dir JWS_HOME /tomcat/ --iteration 120 --salt 1234abcd --generate-config JWS_HOME /tomcat/conf/vault.properties", "KEYSTORE_URL= JWS_HOME /tomcat/vault.keystore KEYSTORE_PASSWORD=MASK-3CuP21KMHn7G6iH/A3YpM/ KEYSTORE_ALIAS=my_vault SALT=1234abcd ITERATION_COUNT=120 ENC_FILE_DIR= JWS_HOME /tomcat/", "./tomcat-vault.sh --keystore JWS_HOME/tomcat/vault.keystore --keystore-password <vault_password> --alias my_vault --enc-dir JWS_HOME/tomcat --iteration 120 --salt 1234abcd --vault-block my_block --attribute manager_password --sec-attr P@SSW0#D", "<user username=\"manager\" password=*\"P@SSW0#D\"* roles=\"manager-gui\"/>", "<user username=\"manager\" password=*\"USD{VAULT::my_block::manager_password::}\"* roles=\"manager-gui\"/>" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_web_server/6.0/html/installation_guide/vault_for_jws
C.2.2. Desktop Environments
C.2.2. Desktop Environments A desktop environment integrates various X clients to create a common graphical user environment and a development platform. Desktop environments have advanced features allowing X clients and other running processes to communicate with one another, while also allowing all applications written to work in that environment to perform advanced tasks, such as drag-and-drop operations. Red Hat Enterprise Linux provides two desktop environments: GNOME - The default desktop environment for Red Hat Enterprise Linux based on the GTK+ 2 graphical toolkit. KDE - An alternative desktop environment based on the Qt 4 graphical toolkit. Both GNOME and KDE have advanced productivity applications, such as word processors, spreadsheets, and Web browsers; both also provide tools to customize the look and feel of the GUI. Additionally, if both the GTK+ 2 and the Qt libraries are present, KDE applications can run in GNOME and vice versa.
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s2-x-clients-desktop
Chapter 12. Appendix: Managing users, groups, SSH keys, and secrets in image mode for RHEL
Chapter 12. Appendix: Managing users, groups, SSH keys, and secrets in image mode for RHEL Learn more about users, groups, SSH keys, and secrets management in image mode for RHEL. 12.1. Users and groups configuration Image mode for RHEL is a generic operating system update and configuration mechanism. You cannot use it to configure users or groups. The only exception is the bootc install command that has the --root-ssh-authorized-keys option. Users and groups configuration for generic base images Usually, the distribution base images do not have any configuration. Do not encrypt passwords or SSH keys with publicly available private keys in generic images, because of the security risks. Injecting SSH keys through systemd credentials You can use systemd to inject a root password or SSH authorized_keys file in some environments. For example, use System Management BIOS (SMBIOS) to inject SSH keys through the system firmware. You can configure this in local virtualization environments, such as qemu . Injecting users and SSH keys by using cloud-init Many Infrastructure as a Service (IaaS) and virtualization systems use metadata servers that are commonly processed by software such as cloud-init or ignition . See AWS instance metadata . The base image you are using might include cloud-init or Ignition, or you can install it in your own derived images. In this model, the SSH configuration is managed outside of the bootc image. Adding users and credentials by using container or unit custom logic Systems such as cloud-init are not privileged. You can inject any logic you want to manage credentials in the way you want to launch a container image, for example, by using a systemd unit. To manage the credentials, you can use a custom network-hosted source, for example, FreeIPA . Adding users and credentials statically in the container build In package-oriented systems, you can use the derived build to inject users and credentials by using the following command: You can find issues in the default shadow-utils implementation of useradd : User and group IDs are allocated dynamically, and this can cause drift. User and group home directories and /var directory For systems configured with persistent /home (that is, /var/home ), any changes to /var made in the container image after initial installation will not be applied on subsequent updates. For example, if you inject /var/home/someuser/.ssh/authorized_keys into a container build, existing systems do not get the updated authorized_keys file. Using DynamicUser=yes for systemd units Use the systemd DynamicUser=yes option where possible for system users. This is significantly better than the pattern of allocating users or groups at package install time, because it avoids potential UID or GID drift. Using systemd-sysusers Use systemd-sysusers, for example, in your derived build. For more information, see the systemd-sysusers documentation. The sysusers tool makes changes to the traditional /etc/passwd file as necessary during boot time. If /etc is persistent, this can avoid UID or GID drift. It means that the UID or GID allocation depends on how a specific machine was upgraded over time. Using systemd JSON user records See the systemd documentation on JSON user records. Unlike sysusers , the canonical state for these users lives in /usr . If a subsequent image drops a user record, then it also vanishes from the system. Using nss-altfiles With nss-altfiles , you can remove the systemd JSON user records. 
It splits system users into /usr/lib/passwd and /usr/lib/group , aligning with the way the OSTree project handles the 3-way merge for /etc as it relates to /etc/passwd . Currently, if the /etc/passwd file is modified in any way on the local system, then subsequent changes to /etc/passwd in the container image are not applied. Base images built by rpm-ostree have nss-altfiles enabled by default. Also, base images have system users pre-allocated and managed by the NSS file to avoid UID or GID drift. In a derived container build, you can also append users to /usr/lib/passwd , for example. Use sysusers.d or DynamicUser=yes . Machine-local state for users The filesystem layout depends on the base image. By default, user data is stored in both /etc ( /etc/passwd , /etc/shadow , and group files) and /home , depending on the base image. However, in generic base images, both locations are machine-local persistent state. In this model, /home is a symlink to /var/home/ user . Injecting users and SSH keys at system provisioning time For base images where /etc and /var are configured to persist by default, you can inject users by using installers such as Anaconda or Kickstart. Typically, generic installers are designed for one-time bootstrap. Then, the configuration becomes mutable machine-local state that you can change in Day 2 operations, by using some other mechanism. You can use the Anaconda installer to set the initial password. However, changing this initial password requires a different in-system tool, such as passwd . These flows work equivalently in a bootc-compatible system, to support users directly installing generic base images, without requiring changes to the different in-system tool. Transient home directories Many operating system deployments minimize persistent, mutable, and executable state. This goal conflicts with keeping mutable, persistent user home directories. The /home directory can be set as tmpfs , to ensure that user data is cleared across reboots. This approach works especially well when combined with a transient /etc directory. To set up the user's home directory to, for example, inject SSH authorized_keys or other files, use the systemd tmpfiles.d snippets: The SSH key is embedded in the image as /usr/lib/tmpfiles.d/<username>-keys.conf . Another example is a service embedded in the image that can fetch keys from the network and write them. This is the pattern used by cloud-init . UID and GID drift The /etc/passwd and similar files are a mapping between names and numeric identifiers. When the mapping is dynamic and mixed with "stateless" container image builds, it can cause issues. Each container image build might result in the UID changing due to RPM installation ordering or other reasons. This can be a problem if that user maintains persistent state. To handle such cases, convert the user to use sysusers.d or DynamicUser=yes . 12.2. Injecting secrets in image mode for RHEL Image mode for RHEL does not have an opinionated mechanism for secrets. You can inject container pull secrets in your system for some cases, for example: For bootc to fetch updates from a registry that requires authentication, you must include a pull secret in a file. In the following example, the creds secret contains the registry pull secret. To build it, run podman build --secret id=creds,src=$HOME/.docker/config.json . Use a single pull secret for bootc and Podman by symlinking both locations to a common persistent file embedded in the container image, for example /usr/lib/container-auth.json . 
For Podman to fetch container images, include a pull secret in /etc/containers/auth.json . With this configuration, the two stacks share the /usr/lib/container-auth.json file. Injecting secrets by embedding them in a container build You can include secrets in the container image if the registry server is suitably protected. In some cases, embedding only bootstrap secrets into the container image is a viable pattern, especially alongside a mechanism for having a machine authenticate to a cluster. In this pattern, a provisioning tool, whether run as part of the host system or a container image, uses the bootstrap secret to inject or update other secrets, such as SSH keys and certificates. Injecting secrets by using cloud metadata Most production Infrastructure as a Service (IaaS) systems support a metadata server or equivalent which can securely host secrets, particularly bootstrap secrets. Your container image can include tools such as cloud-init or ignition to fetch these secrets. Injecting secrets by embedding them in disk images You can embed bootstrap secrets only in disk images. For example, when you generate a cloud disk image from an input container image, such as AMI or OpenStack, the disk image can contain secrets that are effectively machine-local state. Rotating them requires an additional management tool or refreshing the disk images. Injecting secrets by using bare metal installers Installer tools usually support injecting configuration through secrets. Injecting secrets through systemd credentials The systemd project has a credential concept for securely acquiring and passing credential data to systems and services, which applies in some deployment methodologies. See the systemd credentials documentation for more details. Additional resources Example bootc images 12.3. Configuring container pull secrets To be able to fetch container images, you must configure a host system with a "pull secret"; this includes the host updating itself. See the appendix Injecting secrets in image mode for RHEL for more details. You can configure container pull secrets for an image that is already built. If you use an external installer such as Anaconda for bare metal, or bootc-image-builder , you must configure the systems with any applicable pull secrets. The host bootc updates write the configuration to the /etc/ostree/auth.json file, which is shared with rpm-ostree . Podman does not have system-wide credentials. Podman accepts the containers-auth locations that are underneath the following directories: /run : The content of this directory vanishes on reboot, which is not desired. /root : Part of root's home directory, which is local mutable state by default. To unify bootc and Podman credentials, use a single default global pull secret for both bootc and Podman. The following container build shows how to unify the bootc and Podman credentials. The example expects a secret named creds , containing the registry pull secret, to be available at build time. Procedure Create a symbolic link between bootc and Podman to use a single pull secret. By creating the symbolic link, you ensure that both locations point to a common persistent file embedded in the container image. Create the /usr/lib/container-auth.json file. When you run the Containerfile, the following actions happen: The Containerfile makes /run/containers/0/auth.json a transient runtime file. It creates a symbolic link to /usr/lib/container-auth.json . 
It also creates a persistent file, which is also symbolically linked from /etc/ostree/auth.json . 12.4. Injecting pull secrets for registries and disabling TLS You can configure pull secrets for container registries and disable TLS for a registry within a system. These actions enable containerized environments to pull images from private or insecure registries. You can include container pull secrets and other configuration to access a registry inside the base image. However, when installing by using Anaconda, the installation environment might need a duplicate copy of "bootstrap" configuration to access the targeted registry when fetching over the network. To perform arbitrary changes to the installation environment before the target bootc container image is fetched, you can use the Anaconda %pre command. See containers-auth.json(5) for more detailed information about the format and configuration of the auth.json file. Procedure Configure a pull secret: With this configuration, the system pulls images from quay.io using the provided authentication credentials, which are stored in /etc/ostree/auth.json . Disable TLS for an insecure registry: With this configuration, the system pulls container images from a registry that is not secured with TLS. You can use it in development or internal networks. You can also use %pre to: Fetch data from the network by using binaries included in the installation environment, such as curl . Inject trusted certificate authorities into the installation environment's /etc/pki/ca-trust/source/anchors directory by using the update-ca-trust command. You can configure insecure registries similarly by modifying the /etc/containers directory. Additional resources Working with container registries
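On a system provisioned from such an image, you can quickly confirm that the symlink wiring described in this appendix is in place. The following is a verification sketch, assuming the unified /usr/lib/container-auth.json layout from the preceding sections:

# Verify that bootc and Podman resolve to the same pull secret.
ls -l /etc/ostree/auth.json          # expected: symlink to /usr/lib/container-auth.json
ls -l /run/containers/0/auth.json    # expected: transient symlink created by tmpfiles.d
# The persistent file itself should be mode 0600, per the build above:
stat -c '%a %U %n' /usr/lib/container-auth.json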
[ "RUN useradd someuser", "COPY mycustom-user.conf /usr/lib/sysusers.d", "f~ /home/user/.ssh/authorized_keys 600 user user - <base64 encoded data>", "FROM registry.redhat.io/rhel9/bootc-image-builder:latest COPY containers-auth.conf /usr/lib/tmpfiles.d/link-podman-credentials.conf RUN --mount=type=secret,id=creds,required=true cp /run/secrets/creds /usr/lib/container-auth.json && chmod 0600 /usr/lib/container-auth.json && ln -sr /usr/lib/container-auth.json /etc/ostree/auth.json", "FROM quay.io/<namespace>/<image>:<tag> COPY containers-auth.conf /usr/lib/tmpfiles.d/link-podman-credentials.conf RUN --mount=type=secret,id=creds,required=true cp /run/secrets/creds /usr/lib/container-auth.json && chmod 0600 /usr/lib/container-auth.json && ln -sr /usr/lib/container-auth.json /etc/ostree/auth.json", "%pre mkdir -p /etc/ostree cat > /etc/ostree/auth.json << 'EOF' { \"auths\": { \"quay.io\": { \"auth\": \"<your secret here>\" } } } EOF %end", "%pre mkdir -p /etc/containers/registries.conf.d/ cat > /etc/containers/registries.conf.d/local-registry.conf << 'EOF' [[registry]] location=\"[IP_Address]:5000\" insecure=true EOF %end" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/using_image_mode_for_rhel_to_build_deploy_and_manage_operating_systems/managing-users-groups-ssh-key-and-secrets-in-image-mode-for-rhel
Appendix A. Revision History
Appendix A. Revision History Note that revision numbers relate to the edition of this manual, not to version numbers of Red Hat Enterprise Linux. Revision History Revision 7.0-51 Thu Mar 4 2021 Florian Delehaye 7.9 GA version of the guide. Added a new section about adjusting DNA ID ranges manually. Revision 7.0-50 Wed May 27 2020 Florian Delehaye Several fixes and updates. Revision 7.0-49 Tue Aug 06 2019 Marc Muehlfeld Document version for 7.7 GA publication. Revision 7.0-48 Wed Jun 05 2019 Marc Muehlfeld Updated Configuring Trust Agents , added How the AD Provider Handles Trusted Domains and Changing the Format of User Names Displayed by SSSD . Revision 7.0-47 Tue Apr 08 2019 Marc Muehlfeld Several minor fixes and updates. Revision 7.0-46 Mon Oct 29 2018 Filip Hanzelka Preparing document for 7.6 GA publication. Revision 7.0-45 Mon Jun 25 2018 Filip Hanzelka Added Switching Between SSSD and Winbind for SMB Share Access . Revision 7.0-44 Thu Apr 5 2018 Filip Hanzelka Preparing document for 7.5 GA publication. Revision 7.0-43 Wed Feb 28 2018 Filip Hanzelka Updated GPO Settings Supported by SSSD. Revision 7.0-42 Mon Feb 12 2018 Aneta Steflova Petrova Updated Creating a Two-Way Trust with a Shared Secret . Revision 7.0-41 Mon Jan 29 2018 Aneta Steflova Petrova Minor fixes. Revision 7.0-40 Fri Dec 15 2017 Aneta Steflova Petrova Minor fixes. Revision 7.0-39 Mon Dec 6 2017 Aneta Steflova Petrova Updated Using Samba for Active Directory Integration . Revision 7.0-38 Mon Dec 4 2017 Aneta Steflova Petrova Updated DNS and Realm Settings for trusts. Revision 7.0-37 Mon Nov 20 2017 Aneta Steflova Petrova Updated Creating a Two-Way Trust with a Shared Secret . Revision 7.0-36 Mon Nov 6 2017 Aneta Steflova Petrova Minor fixes. Revision 7.0-35 Mon Oct 23 2017 Aneta Steflova Petrova Updated Active Directory Entries and POSIX Attributes and Configuring an AD Domain with ID Mapping as a Provider for SSSD . Revision 7.0-34 Mon Oct 9 2017 Aneta Steflova Petrova Added Configuration Options for Using Short Names . Updated Trust Controllers and Trust Agents . Revision 7.0-33 Tue Sep 26 2017 Aneta Steflova Petrova Updated the autodiscovery section in the SSSD chapter. Added two sections on configuring trusted domains. Revision 7.0-32 Tue Jul 18 2017 Aneta Steflova Petrova Document version for 7.4 GA publication. Revision 7.0-31 Tue May 23 2017 Aneta Steflova Petrova A minor fix for About Security ID Mapping. Revision 7.0-30 Mon Apr 24 2017 Aneta Steflova Petrova Minor fixes for Defining Windows Integration. Revision 7.0-29 Mon Apr 10 2017 Aneta Steflova Petrova Updated Direct Integration. Revision 7.0-28 Mon Mar 27 2017 Aneta Steflova Petrova Moved Allowing Users to Change Other Users' Passwords Cleanly to the Linux Domain Identity guide as Enabling Password Reset. Updated Supported Windows Platforms for trusts. Fixed broken links. Other minor updates. Revision 7.0-27 Mon Feb 27 2017 Aneta Steflova Petrova Updated port requirements for trusts. Minor restructuring for trust and sync. Other minor updates. Revision 7.0-26 Wed Nov 23 2016 Aneta Steflova Petrova Added ipa-winsync-migrate. Minor fixes for the trust, SSSD, and synchronization chapters. Revision 7.0-25 Tue Oct 18 2016 Aneta Steflova Petrova Version for 7.3 GA publication. Revision 7.0-24 Thu Jul 28 2016 Marc Muehlfeld Updated diagrams, added Kerberos flags for services and hosts, other minor fixes. Revision 7.0-23 Thu Jun 09 2016 Marc Muehlfeld Updated the synchronization chapter. Removed the Kerberos chapter. Other minor fixes. 
Revision 7.0-22 Tue Feb 09 2016 Aneta Petrova Updated realmd, removed index, moved a part of ID views to the Linux Domain Identity guide, other minor updates. Revision 7.0-21 Fri Nov 13 2015 Aneta Petrova Version for 7.2 GA release with minor updates. Revision 7.0-20 Thu Nov 12 2015 Aneta Petrova Version for 7.2 GA release. Revision 7.0-19 Fri Sep 18 2015 Tomas Capek Updated the splash page sort order. Revision 7.0-18 Thu Sep 10 2015 Aneta Petrova Updated the output format. Revision 7.0-17 Mon Jul 27 2015 Aneta Petrova Added GPO-based access control, a number of other minor changes. Revision 7.0-16 Thu Apr 02 2015 Tomas Capek Added ipa-advise, extended CIFS share with SSSD, admonition for the Identity Management for UNIX extension. Revision 7.0-15 Fri Mar 13 2015 Tomas Capek Async update with last-minute edits for 7.1. Revision 7.0-13 Wed Feb 25 2015 Tomas Capek Version for 7.1 GA release. Revision 7.0-11 Fri Dec 05 2014 Tomas Capek Rebuild to update the sort order on the splash page. Revision 7.0-7 Mon Sep 15 2014 Tomas Capek Section 5.3 Creating Trusts temporarily removed for content updates. Revision 7.0-5 June 27, 2014 Ella Deon Ballard Improving Samba+Kerberos+Winbind chapters. Revision 7.0-4 June 13, 2014 Ella Deon Ballard Adding Kerberos realm chapter. Revision 7.0-3 June 11, 2014 Ella Deon Ballard Initial release.
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/windows_integration_guide/doc-history
Chapter 44. Kafka Sink
Chapter 44. Kafka Sink Send data to Kafka topics. The Kamelet understands the following headers: key / ce-key : as message key partition-key / ce-partitionkey : as message partition key Both headers are optional. 44.1. Configuration Options The following table summarizes the configuration options available for the kafka-sink Kamelet: Property Name Description Type Default Example bootstrapServers * Brokers Comma separated list of Kafka Broker URLs string password * Password Password to authenticate to Kafka string topic * Topic Names Comma separated list of Kafka topic names string user * Username Username to authenticate to Kafka string saslMechanism SASL Mechanism The Simple Authentication and Security Layer (SASL) Mechanism used. string "PLAIN" securityProtocol Security Protocol Protocol used to communicate with brokers. SASL_PLAINTEXT, PLAINTEXT, SASL_SSL and SSL are supported string "SASL_SSL" Note Fields marked with an asterisk (*) are mandatory. 44.2. Dependencies At runtime, the kafka-sink Kamelet relies upon the presence of the following dependencies: camel:kafka camel:kamelet 44.3. Usage This section describes how you can use the kafka-sink . 44.3.1. Knative Sink You can use the kafka-sink Kamelet as a Knative sink by binding it to a Knative object. kafka-sink-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: kafka-sink-binding spec: source: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: kafka-sink properties: bootstrapServers: "The Brokers" password: "The Password" topic: "The Topic Names" user: "The Username" 44.3.1.1. Prerequisite Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 44.3.1.2. Procedure for using the cluster CLI Save the kafka-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command: oc apply -f kafka-sink-binding.yaml 44.3.1.3. Procedure for using the Kamel CLI Configure and run the sink by using the following command: kamel bind channel:mychannel kafka-sink -p "sink.bootstrapServers=The Brokers" -p "sink.password=The Password" -p "sink.topic=The Topic Names" -p "sink.user=The Username" This command creates the KameletBinding in the current namespace on the cluster. 44.3.2. Kafka Sink You can use the kafka-sink Kamelet as a Kafka sink by binding it to a Kafka topic. kafka-sink-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: kafka-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: kafka-sink properties: bootstrapServers: "The Brokers" password: "The Password" topic: "The Topic Names" user: "The Username" 44.3.2.1. Prerequisites Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure that you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 44.3.2.2. Procedure for using the cluster CLI Save the kafka-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command: oc apply -f kafka-sink-binding.yaml 44.3.2.3. 
Procedure for using the Kamel CLI Configure and run the sink by using the following command: kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic kafka-sink -p "sink.bootstrapServers=The Brokers" -p "sink.password=The Password" -p "sink.topic=The Topic Names" -p "sink.user=The Username" This command creates the KameletBinding in the current namespace on the cluster. 44.4. Kamelet source file https://github.com/openshift-integration/kamelet-catalog/kafka-sink.kamelet.yaml
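If your Kafka cluster requires SASL authentication, you can also pass the optional saslMechanism and securityProtocol properties from the configuration table above. The following sketch is illustrative only; the broker address, topic name, and credentials are placeholder values, not defaults:

kamel bind channel:mychannel kafka-sink \
  -p "sink.bootstrapServers=my-cluster-kafka-bootstrap:9092" \
  -p "sink.topic=my-topic" \
  -p "sink.user=my-user" \
  -p "sink.password=my-password" \
  -p "sink.securityProtocol=SASL_SSL" \
  -p "sink.saslMechanism=PLAIN"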
[ "apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: kafka-sink-binding spec: source: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: kafka-sink properties: bootstrapServers: \"The Brokers\" password: \"The Password\" topic: \"The Topic Names\" user: \"The Username\"", "oc apply -f kafka-sink-binding.yaml", "kamel bind channel:mychannel kafka-sink -p \"sink.bootstrapServers=The Brokers\" -p \"sink.password=The Password\" -p \"sink.topic=The Topic Names\" -p \"sink.user=The Username\"", "apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: kafka-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: kafka-sink properties: bootstrapServers: \"The Brokers\" password: \"The Password\" topic: \"The Topic Names\" user: \"The Username\"", "oc apply -f kafka-sink-binding.yaml", "kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic kafka-sink -p \"sink.bootstrapServers=The Brokers\" -p \"sink.password=The Password\" -p \"sink.topic=The Topic Names\" -p \"sink.user=The Username\"" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.5/html/kamelets_reference/kafka-sink
Chapter 2. Understanding build configurations
Chapter 2. Understanding build configurations The following sections define the concept of a build, build configuration, and outline the primary build strategies available. 2.1. BuildConfigs A build configuration describes a single build definition and a set of triggers for when a new build is created. Build configurations are defined by a BuildConfig , which is a REST object that can be used in a POST to the API server to create a new instance. A build configuration, or BuildConfig , is characterized by a build strategy and one or more sources. The strategy determines the process, while the sources provide its input. Depending on how you choose to create your application using OpenShift Container Platform, a BuildConfig is typically generated automatically for you if you use the web console or CLI, and it can be edited at any time. Understanding the parts that make up a BuildConfig and their available options can help if you choose to manually change your configuration later. The following example BuildConfig results in a new build every time a container image tag or the source code changes: BuildConfig object definition kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: "ruby-sample-build" 1 spec: runPolicy: "Serial" 2 triggers: 3 - type: "GitHub" github: secret: "secret101" - type: "Generic" generic: secret: "secret101" - type: "ImageChange" source: 4 git: uri: "https://github.com/openshift/ruby-hello-world" strategy: 5 sourceStrategy: from: kind: "ImageStreamTag" name: "ruby-20-centos7:latest" output: 6 to: kind: "ImageStreamTag" name: "origin-ruby-sample:latest" postCommit: 7 script: "bundle exec rake test" 1 This specification creates a new BuildConfig named ruby-sample-build . 2 The runPolicy field controls whether builds created from this build configuration can be run simultaneously. The default value is Serial , which means new builds run sequentially, not simultaneously. 3 You can specify a list of triggers, which cause a new build to be created. 4 The source section defines the source of the build. The source type determines the primary source of input, and can be either Git , to point to a code repository location, Dockerfile , to build from an inline Dockerfile, or Binary , to accept binary payloads. It is possible to have multiple sources at once. See the documentation for each source type for details. 5 The strategy section describes the build strategy used to execute the build. You can specify a Source , Docker , or Custom strategy here. This example uses the ruby-20-centos7 container image that Source-to-image (S2I) uses for the application build. 6 After the container image is successfully built, it is pushed into the repository described in the output section. 7 The postCommit section defines an optional build hook.
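As a brief illustration of working with this definition from the command line, assuming the example above is saved locally as ruby-sample-build.yaml , you could create the BuildConfig and trigger a build manually. These are standard oc commands, shown here only as a sketch:

oc apply -f ruby-sample-build.yaml          # create or update the BuildConfig
oc describe bc/ruby-sample-build            # inspect triggers, strategy, and output
oc start-build ruby-sample-build --follow   # trigger a build and stream its logs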
[ "kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: \"ruby-sample-build\" 1 spec: runPolicy: \"Serial\" 2 triggers: 3 - type: \"GitHub\" github: secret: \"secret101\" - type: \"Generic\" generic: secret: \"secret101\" - type: \"ImageChange\" source: 4 git: uri: \"https://github.com/openshift/ruby-hello-world\" strategy: 5 sourceStrategy: from: kind: \"ImageStreamTag\" name: \"ruby-20-centos7:latest\" output: 6 to: kind: \"ImageStreamTag\" name: \"origin-ruby-sample:latest\" postCommit: 7 script: \"bundle exec rake test\"" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/builds_using_buildconfig/understanding-buildconfigs
5.3. Updating Model Count
5.3. Updating Model Count The term "updating model count" refers to the number of times any model is updated during the execution of a command. It is used to determine whether a transaction, of any scope, is required to safely execute the command. Table 5.3. Updating Model Count Settings Count Description 0 No updates are performed by this command. 1 Indicates that only one model is updated by this command (and its subcommands). Also the success or failure of that update corresponds to the success or failure of the command. It should not be possible for the update to succeed while the command fails. Execution is not considered transactionally unsafe. * Any number greater than 1 indicates that execution is transactionally unsafe and an XA transaction will be required.
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_3_reference_material/updating_model_count
Chapter 6. Applying patches with kernel live patching
Chapter 6. Applying patches with kernel live patching You can use the Red Hat Enterprise Linux kernel live patching solution to patch a running kernel without rebooting or restarting any processes. With this solution, system administrators: Can immediately apply critical security patches to the kernel. Do not have to wait for long-running tasks to complete, for users to log off, or for scheduled downtime. Gain more control over the system's uptime without sacrificing security or stability. Note that not every critical or important CVE will be resolved using the kernel live patching solution. Our goal is to reduce the required reboots for security-related patches, not to eliminate them entirely. For more details about the scope of live patching, see the Customer Portal Solutions article . Warning Some incompatibilities exist between kernel live patching and other kernel subcomponents. Read Section 6.1, "Limitations of kpatch" carefully before using kernel live patching. Note For details about the support cadence of kernel live patching updates, see: Kernel Live Patch Support Cadence Update Kernel Live Patch life cycles 6.1. Limitations of kpatch The kpatch feature is not a general-purpose kernel upgrade mechanism. It is used for applying simple security and bug fix updates when rebooting the system is not immediately possible. Do not use the SystemTap or kprobe tools during or after loading a patch. The patch could fail to take effect until after such probes have been removed. 6.2. Support for third-party live patching The kpatch utility is the only kernel live patching utility supported by Red Hat with the RPM modules provided by Red Hat repositories. Red Hat will not support any live patches which were not provided by Red Hat itself. For support of a third-party live patch, contact the vendor that provided the patch. For any system running with third-party live patches, Red Hat reserves the right to ask for reproduction with Red Hat shipped and supported software. In the event that this is not possible, we require a similar system and workload be deployed on your test environment without live patches applied, to confirm if the same behavior is observed. For more information about third-party software support policies, see How does Red Hat Global Support Services handle third-party software, drivers, and/or uncertified hardware/hypervisors or guest operating systems? 6.3. Access to kernel live patches Kernel live patching capability is implemented as a kernel module ( .ko file) that is delivered as an RPM package. All customers have access to kernel live patches, which are delivered through the usual channels. However, customers who do not subscribe to an extended support offering will lose access to new patches for the current minor release once the next minor release becomes available. For example, customers with standard subscriptions will only be able to live patch RHEL 8.2 kernels until RHEL 8.3 is released. 6.4. Components of kernel live patching The components of kernel live patching are as follows: Kernel patch module The delivery mechanism for kernel live patches. A kernel module which is built specifically for the kernel being patched. The patch module contains the code of the desired fixes for the kernel. The patch modules register with the livepatch kernel subsystem and provide information about original functions to be replaced, with corresponding pointers to the replacement functions. Kernel patch modules are delivered as RPMs. 
The naming convention is kpatch_<kernel version>_<kpatch version>_<kpatch release> . The "kernel version" part of the name has dots and dashes replaced with underscores . The kpatch utility A command-line utility for managing patch modules. The kpatch service A systemd service required by multi-user.target . This target loads the kernel patch module at boot time. 6.5. How kernel live patching works The kpatch kernel patching solution uses the livepatch kernel subsystem to redirect old functions to new ones. When a live kernel patch is applied to a system, the following things happen: The kernel patch module is copied to the /var/lib/kpatch/ directory and registered for re-application to the kernel by systemd on boot. The kpatch module is loaded into the running kernel and the patched functions are registered to the ftrace mechanism with a pointer to the location in memory of the new code. When the kernel accesses the patched function, it is redirected by the ftrace mechanism, which bypasses the original functions and redirects the kernel to the patched version of the function. Figure 6.1. How kernel live patching works 6.6. Enabling kernel live patching A kernel patch module is delivered in an RPM package, specific to the version of the kernel being patched. Each RPM package will be cumulatively updated over time. The following subsections describe how to ensure you receive all future cumulative live patching updates for a given kernel. Warning Red Hat does not support any third-party live patches applied to a Red Hat supported system. 6.6.1. Subscribing to the live patching stream This procedure describes installing a particular live patching package. By doing so, you subscribe to the live patching stream for a given kernel and ensure that you receive all future cumulative live patching updates for that kernel. Warning Because live patches are cumulative, you cannot select which individual patches are deployed for a given kernel. Prerequisites Root permissions Procedure Optionally, check your kernel version: Search for a live patching package that corresponds to the version of your kernel: Install the live patching package: The command above installs and applies the latest cumulative live patches for that specific kernel only. The live patching package contains a patch module, if the package's version is 1-1 or higher. In that case, the kernel will be automatically patched during the installation of the live patching package. The kernel patch module is also installed into the /var/lib/kpatch/ directory to be loaded by the systemd system and service manager during future reboots. Note If there are not yet any live patches available for the given kernel, an empty live patching package will be installed. An empty live patching package will have a kpatch_version-kpatch_release of 0-0, for example kpatch-patch-3_10_0-1062-0-0.el7.x86_64.rpm . The installation of the empty RPM subscribes the system to all future live patches for the given kernel. Optionally, verify that the kernel is patched: The output shows that the kernel patch module has been loaded into the kernel, which is now patched with the latest fixes from the kpatch-patch-3_10_0-1062-1-1.el7.x86_64.rpm package. Additional resources For more information about the kpatch command-line utility, see the kpatch(1) manual page. Refer to the relevant sections of the System Administrator's Guide for further information about software packages in RHEL 7. 6.7. 
Updating kernel patch modules Since kernel patch modules are delivered and applied through RPM packages, updating a cumulative kernel patch module is like updating any other RPM package. Prerequisites Root permissions The system is subscribed to the live patching stream, as described in Section 6.6.1, "Subscribing to the live patching stream" . Procedure Update to a new cumulative version for the current kernel: The command above automatically installs and applies any updates that are available for the currently running kernel, including any future released cumulative live patches. Alternatively, update all installed kernel patch modules: Note When the system reboots into the same kernel, the kernel is automatically live patched again by the kpatch.service service. Additional resources For further information about updating software packages, see the relevant sections of System Administrator's Guide . 6.8. Disabling kernel live patching If system administrators encounter unanticipated negative effects connected with the Red Hat Enterprise Linux kernel live patching solution, they can disable the mechanism. The following sections describe how to disable the live patching solution. Important Currently, Red Hat does not support reverting live patches without rebooting your system. In case of any issues, contact our support team. 6.8.1. Removing the live patching package The following procedure describes how to disable the Red Hat Enterprise Linux kernel live patching solution by removing the live patching package. Prerequisites Root permissions The live patching package is installed. Procedure Select the live patching package: The example output above lists live patching packages that you installed. Remove the live patching package: When a live patching package is removed, the kernel remains patched until the reboot, but the kernel patch module is removed from disk. After the reboot, the corresponding kernel will no longer be patched. Reboot your system. Verify that the live patching package has been removed: The command displays no output if the package has been successfully removed. Optionally, verify that the kernel live patching solution is disabled: The example output shows that the kernel is not patched and the live patching solution is not active because there are no patch modules that are currently loaded. Additional resources For more information about the kpatch command-line utility, see the kpatch(1) manual page. For further information about working with software packages, see the relevant sections of System Administrator's Guide . 6.8.2. Uninstalling the kernel patch module The following procedure describes how to prevent the Red Hat Enterprise Linux kernel live patching solution from applying a kernel patch module on subsequent boots. Prerequisites Root permissions A live patching package is installed. A kernel patch module is installed and loaded. Procedure Select a kernel patch module: Uninstall the selected kernel patch module: Note that the uninstalled kernel patch module is still loaded: When the selected module is uninstalled, the kernel remains patched until the reboot, but the kernel patch module is removed from disk. Reboot your system. Optionally, verify that the kernel patch module has been uninstalled: The example output above shows no loaded or installed kernel patch modules, therefore the kernel is not patched and the kernel live patching solution is not active. 
Additional resources For more information about the kpatch command-line utility, refer to the kpatch(1) manual page. 6.8.3. Disabling kpatch.service The following procedure describes how to prevent the Red Hat Enterprise Linux kernel live patching solution from applying all kernel patch modules globally on subsequent boots. Prerequisites Root permissions A live patching package is installed. A kernel patch module is installed and loaded. Procedure Verify that kpatch.service is enabled: Disable kpatch.service : Note that the applied kernel patch module is still loaded: Reboot your system. Optionally, verify the status of kpatch.service : The example output shows that kpatch.service has been disabled and is not running. As a result, the kernel live patching solution is not active. Verify that the kernel patch module has been unloaded: The example output above shows that the kernel patch module is still installed but the kernel is not patched. Additional resources For more information about the kpatch command-line utility, see the kpatch(1) manual page. For more information about the systemd system and service manager, unit configuration files, their locations, as well as a complete list of systemd unit types, see the relevant sections in System Administrator's Guide .
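As a recap, the subscription workflow from this chapter condenses into a short shell session. This sketch only strings together commands already shown above; the exact package version installed depends on your running kernel:

uname -r                                   # note the running kernel version
yum install "kpatch-patch = $(uname -r)"   # subscribe and apply the latest live patches
kpatch list                                # confirm the patch module is loaded
systemctl is-enabled kpatch.service        # confirm re-application on future boots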
[ "uname -r 3.10.0-1062.el7.x86_64", "yum search $(uname -r)", "yum install \"kpatch-patch = $(uname -r)\"", "kpatch list Loaded patch modules: kpatch_3_10_0_1062_1_1 [enabled] Installed patch modules: kpatch_3_10_0_1062_1_1 (3.10.0-1062.el7.x86_64) ...", "yum update \"kpatch-patch = $(uname -r)\"", "yum update \"kpatch-patch*\"", "yum list installed | grep kpatch-patch kpatch-patch-3_10_0-1062.x86_64 1-1.el7 @@commandline ...", "yum remove kpatch-patch-3_10_0-1062.x86_64", "yum list installed | grep kpatch-patch", "kpatch list Loaded patch modules:", "kpatch list Loaded patch modules: kpatch_3_10_0_1062_1_1 [enabled] Installed patch modules: kpatch_3_10_0_1062_1_1 (3.10.0-1062.el7.x86_64) ...", "kpatch uninstall kpatch_3_10_0_1062_1_1 uninstalling kpatch_3_10_0_1062_1_1 (3.10.0-1062.el7.x86_64)", "kpatch list Loaded patch modules: kpatch_3_10_0_1062_1_1 [enabled] Installed patch modules: < NO_RESULT >", "kpatch list Loaded patch modules:", "systemctl is-enabled kpatch.service enabled", "systemctl disable kpatch.service Removed /etc/systemd/system/multi-user.target.wants/kpatch.service.", "kpatch list Loaded patch modules: kpatch_3_10_0_1062_1_1 [enabled] Installed patch modules: kpatch_3_10_0_1062_1_1 (3.10.0-1062.el7.x86_64)", "systemctl status kpatch.service ● kpatch.service - \"Apply kpatch kernel patches\" Loaded: loaded (/usr/lib/systemd/system/kpatch.service; disabled; vendor preset: disabled) Active: inactive (dead)", "kpatch list Loaded patch modules: Installed patch modules: kpatch_3_10_0_1062_1_1 (3.10.0-1062.el7.x86_64)" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/kernel_administration_guide/applying_patches_with_kernel_live_patching
Chapter 3. alt-java and java uses
Chapter 3. alt-java and java uses Depending on your needs, you can use either the alt-java binary or the java binary to run your application's code. 3.1. alt-java usage Use alt-java for any applications that run untrusted code. Be aware that using alt-java is not a solution to all speculative execution vulnerabilities. 3.2. java usage Use the java binary for performance-critical tasks in a secure environment. Additional resources See Java and Speculative Execution Vulnerabilities .
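Because alt-java is invoked in the same way as the java launcher, choosing between the two is typically a matter of which binary you run. A minimal sketch follows, with hypothetical JAR names used only for illustration:

alt-java -jar untrusted-plugin.jar    # untrusted code: use the mitigated launcher
java -jar trusted-service.jar         # trusted, performance-critical code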
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/using_alt-java/using-java-and-altjava
11.4.2. Procmail Recipes
11.4.2. Procmail Recipes New users often find the construction of recipes the most difficult part of learning to use Procmail. To some extent, this is understandable, as recipes do their message matching using regular expressions , which are a particular format used to specify qualifications for a matching string. However, regular expressions are not very difficult to construct and even less difficult to understand when read. Additionally, the consistency of the way Procmail recipes are written, regardless of regular expressions, makes it easy to learn by example. To see example Procmail recipes, refer to Section 11.4.2.5, "Recipe Examples" . Procmail recipes take the following form: The first two characters in a Procmail recipe are a colon and a zero. Various flags can be placed after the zero to control how Procmail processes the recipe. A colon after the <flags> section specifies that a lockfile is created for this message. If a lockfile is created, the name can be specified by replacing <lockfile-name> . A recipe can contain several conditions to match against the message. If it has no conditions, every message matches the recipe. Regular expressions are placed in some conditions to facilitate message matching. If multiple conditions are used, they must all match for the action to be performed. Conditions are checked based on the flags set in the recipe's first line. Optional special characters placed after the * character can further control the condition. The <action-to-perform> specifies the action taken when the message matches the conditions. There can only be one action per recipe. In many cases, the name of a mailbox is used here to direct matching messages into that file, effectively sorting the email. Special action characters may also be used before the action is specified. Refer to Section 11.4.2.4, "Special Conditions and Actions" for more information. 11.4.2.1. Delivering vs. Non-Delivering Recipes The action used if the recipe matches a particular message determines whether it is considered a delivering or non-delivering recipe. A delivering recipe contains an action that writes the message to a file, sends the message to another program, or forwards the message to another email address. A non-delivering recipe covers any other actions, such as a nesting block . A nesting block is a set of actions, contained in braces { } , that are performed on messages which match the recipe's conditions. Nesting blocks can be nested inside one another, providing greater control for identifying and performing actions on messages. When messages match a delivering recipe, Procmail performs the specified action and stops comparing the message against any other recipes. Messages that match non-delivering recipes continue to be compared against other recipes.
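As a small illustration of the form above, the following delivering recipe files messages from a hypothetical mailing-list domain into a mailbox named mailing-lists ; the sender pattern and mailbox name are placeholders, not values from this guide:

:0:
* ^From:.*lists\.example\.com
mailing-lists

The trailing colon on the first line requests a lockfile, the single condition matches the From: header with a regular expression, and the final line names the destination mailbox, which makes this a delivering recipe.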
[ ":0 <flags>: <lockfile-name> * <special-condition-character> <condition-1> * <special-condition-character> <condition-2> * <special-condition-character> <condition-N> <special-action-character> <action-to-perform>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-email-procmail-recipes
E.2.3. /proc/cpuinfo
E.2.3. /proc/cpuinfo This virtual file identifies the type of processor used by your system. The following is an example of the output typical of /proc/cpuinfo : processor - Provides each processor with an identifying number. On systems that have one processor, only a 0 is present. cpu family - Authoritatively identifies the type of processor in the system. For an Intel-based system, place the number in front of "86" to determine the value. This is particularly helpful for those attempting to identify the architecture of an older system such as a 586, 486, or 386. Because some RPM packages are compiled for each of these particular architectures, this value also helps users determine which packages to install. model name - Displays the common name of the processor, including its project name. cpu MHz - Shows the precise speed in megahertz for the processor to the thousandths decimal place. cache size - Displays the amount of level 2 memory cache available to the processor. siblings - Displays the total number of sibling CPUs on the same physical CPU for architectures which use hyper-threading. flags - Defines a number of different qualities about the processor, such as the presence of a floating point unit (FPU) and the ability to process MMX instructions.
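Because /proc/cpuinfo is plain text, the fields described above are easy to extract with standard tools. For example:

grep -c '^processor' /proc/cpuinfo          # count the logical processors
grep 'model name' /proc/cpuinfo | sort -u   # show the processor's common name
grep 'cache size' /proc/cpuinfo | sort -u   # show the level 2 cache size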
[ "processor : 0 vendor_id : GenuineIntel cpu family : 15 model : 2 model name : Intel(R) Xeon(TM) CPU 2.40GHz stepping : 7 cpu MHz : 2392.371 cache size : 512 KB physical id : 0 siblings : 2 runqueue : 0 fdiv_bug : no hlt_bug : no f00f_bug : no coma_bug : no fpu : yes fpu_exception : yes cpuid level : 2 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm bogomips : 4771.02" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s2-proc-cpuinfo
Chapter 2. Release-specific changes
Chapter 2. Release-specific changes 2.1. Server configuration changes 2.1.1. New Hostname options Hostname v2 options are supported by default, as the old hostname options were removed. List of necessary migrations: Old options New options hostname <hostname> hostname-url <url> hostname-path <path> hostname-port <port> hostname <hostname/url> hostname-admin <hostname> hostname-admin-url <url> hostname-admin <url> hostname-strict-backchannel <true/false> hostname-backchannel-dynamic <true/false> As you can see, the *-url suffixes were removed for hostname and hostname-admin options. Option hostname accepts both a hostname and a URL, but hostname-admin now accepts only a full URL. Additionally, there is no way to set path or port separately. You can achieve it by providing the full URL for the hostname and hostname-admin options. If the port is not part of the URL, it is dynamically resolved from the incoming request headers. HTTPS is no longer enforced unless it is part of hostname and hostname-admin URLs. If not specified, the used protocol ( http/https ) is dynamically resolved from the incoming request. The hostname-strict-https option is removed. Removed options hostname-url hostname-admin-url hostname-path hostname-port hostname-strict-backchannel hostname-strict-https 2.1.1.1. Examples Simplified notation # Hostname v1 bin/kc.[sh|bat] start --hostname=mykeycloak.org --https-port=8543 --hostname-path=/auth --hostname-strict-https=true # Hostname v2 bin/kc.[sh|bat] start --hostname=https://mykeycloak.org:8543/auth As you can see in the example, all the parts of a URL can now be specified by using a single hostname option, which simplifies the hostname setup process. Notice that HTTPS is not enforced by the hostname-strict-https option, but by specifying it in the hostname URL. Backchannel setting # Hostname v1 bin/kc.[sh|bat] start --hostname=mykeycloak.org --hostname-strict-backchannel=true # Hostname v2 bin/kc.[sh|bat] start --hostname=mykeycloak.org --hostname-backchannel-dynamic=false Be aware that there is a change in behavior if the same URL is to be used for both backend and frontend endpoints. Previously, in hostname v1, the backchannel URL was dynamically resolved from request headers. Therefore, to achieve the required results, you had to specify hostname-strict-backchannel=true . For hostname v2, the backchannel URLs are already the same as the frontend ones. In order to dynamically resolve it from request headers, you need to set hostname-backchannel-dynamic=true and provide a full URL for the hostname option. For more details and more comprehensive scenarios, see Configuring the hostname (v2) . 2.1.2. kcadm and kcreg changes How kcadm and kcreg parse and handle options and parameters has changed. Error messages from usage errors, the wrong option or parameter, may be slightly different than in previous versions. Also, usage errors will have an exit code of 2 instead of 1. 2.1.3. Escaping slashes in group paths Red Hat build of Keycloak has never escaped slashes in the group paths. Because of that, a group named group/slash that is a child of top uses the full path /top/group/slash , which is clearly misleading. Starting with this version, the server can be started to perform escaping of those slashes in the name: bin/kc.[sh|bat] start --spi-group-jpa-escape-slashes-in-group-path=true The escape char is the tilde character ~ . The example results in the path /top/group~/slash . The escape marks the last slash as part of the name and not a hierarchy separator. 
The escaping is currently disabled by default because it represents a change in behavior. Nevertheless, enabling escaping is recommended, and it might become the default in future versions. 2.1.4. --import-realm option can import the master realm When running a start or start-dev command with the --import-realm option before the master realm exists, it will be imported if it exists in the import material. Previously, the master realm was created first and its import was then skipped. 2.1.5. Additional validations on the --optimized startup option The --optimized startup option now requires the optimized server image to be built first. This can be achieved either by running kc.sh|bat build first or by any other server commands (such as start , export , import ) without the --optimized flag. 2.1.6. Specify cache options at runtime Options cache , cache-stack , and cache-config-file are no longer build options, and they can be specified only during runtime. This eliminates the need to execute the build phase and rebuild your image because of them. Be aware that they will not be recognized during the build phase, so you need to remove them from the build phase and add them to the runtime phase. If you do not add your current caching options to the runtime phase, Red Hat build of Keycloak will fall back to the default caching settings. 2.1.7. Limiting memory usage when consuming HTTP responses In some scenarios, such as brokering, Red Hat build of Keycloak uses HTTP to talk to external servers. To avoid a denial of service when those providers send too much data, Red Hat build of Keycloak now restricts responses to 10 MB by default. Users can configure this limit by setting the provider configuration option spi-connections-http-client-default-max-consumed-response-size : Restricting the consumed responses to 1 MB bin/kc.[sh|bat] --spi-connections-http-client-default-max-consumed-response-size=1000000 2.1.8. kc.sh/bat import placeholder replacement The kc.[sh|bat] import command now has placeholder replacement enabled. Previously, placeholder replacement was only enabled for realm import at startup. If you wish to disable placeholder replacement for the import command, add the system property -Dkeycloak.migration.replace-placeholders=false 2.2. Hostname Verification Policy The default for spi-truststore-file-hostname-verification-policy and the new tls-hostname-verifier option is now DEFAULT, rather than WILDCARD. The WILDCARD and STRICT option values have been deprecated. You should simply rely upon DEFAULT instead. Behavior supported by WILDCARD, which is not supported by DEFAULT: * allows wildcards in subdomain names (for example, *.foo.com) to match anything, including multiple levels (for example, a.b.foo.com). * allows matching against well-known public suffixes - for example, foo.co.gl may match *.co.gl Behavior supported by STRICT, which is not supported by DEFAULT: * STRICT uses a small exclusion list for 2 or 3 letter domain names ending in a 2 letter top level (*.XXX.YY) when determining if a wildcard matches. Instead, DEFAULT uses a more complete list of public suffix rules and exclusions from https://publicsuffix.org/list/ It is not expected that you should be relying upon these behaviors from the WILDCARD or STRICT options. 2.3. Persistent user sessions The new feature, persistent-user-sessions , stores online user sessions and online client sessions in the database. This change allows a user to stay logged in even if all instances of Red Hat build of Keycloak are restarted or upgraded. 
Previous versions of Red Hat build of Keycloak stored only offline user and offline client sessions in the database. Disabling the new feature restores this previous behavior. Note When migrating to this version, all existing online user sessions and online client sessions are cleared and the users are logged out. Offline user sessions and offline client sessions are not affected. 2.3.1. Enabling persistent user sessions In Red Hat build of Keycloak 26, all user sessions are persisted in the database by default. It is possible to revert this behavior to the previous state by disabling the feature. Use the Volatile user sessions procedure in the Configuring distributed caches guide. With persistent sessions enabled, the in-memory caches for online user sessions, offline user sessions, online client sessions and offline client sessions are limited to 10000 entries per node by default, which will reduce the overall memory usage of Keycloak for larger installations. Items which are evicted from memory will be loaded on-demand from the database when needed. Once this feature is enabled, expect a reduced memory usage and an increased database utilization on each login, logout and refresh token request. To configure the cache size in an external Data Grid in a Red Hat build of Keycloak multi-site setup, see Deploy Data Grid for HA with the Data Grid Operator . With this feature enabled, the options spi-user-sessions-infinispan-offline-session-cache-entry-lifespan-override and spi-user-sessions-infinispan-offline-client-session-cache-entry-lifespan-override are no longer available, as they were previously used to override the time offline sessions were kept in-memory. 2.3.2. Signing out existing users To sign out all online user sessions of a realm when persistent-user-sessions is enabled, perform these steps: Log in to the Admin Console. Select the menu entry Sessions . Select the action Sign out all active sessions . 2.3.3. Restricting the size of session caches Since the database is now the source of truth for user sessions, it is possible to restrict the size of the session caches to reduce memory usage. If you use the default conf/cache-ispn.xml file, the caches for storing user and client sessions are by default configured to store only 10000 sessions and one owner for each entry. Update the size of the caches using the options cache-embedded-sessions-max-count , cache-embedded-client-sessions-max-count , cache-embedded-offline-sessions-max-count and cache-embedded-offline-client-sessions-max-count . For details about the updated resource requirements, see Concepts for sizing CPU and memory resources . 2.4. Metrics and health endpoints 2.4.1. Metrics for embedded caches enabled by default Metrics for the embedded caches are now enabled by default. To enable histograms for latencies, set the option cache-metrics-histograms-enabled to true . 2.4.2. Metrics for HTTP endpoints enabled by default The metrics provided by Red Hat build of Keycloak now include HTTP server metrics starting with http_server . Use the new options http-metrics-histograms-enabled and http-metrics-slos to enable default histogram buckets or specific buckets for service level objectives (SLOs). Read more in the Prometheus documentation about histograms to learn how to use the additional metrics series provided in http_server_requests_seconds_bucket . 2.4.3. 
 Management port for metrics and health endpoints The /health and /metrics endpoints are accessible on the management port 9000 , which is enabled by default. That means these endpoints are no longer exposed to the standard Red Hat build of Keycloak ports 8080 and 8443 . In order to reflect the old behavior, use the property --legacy-observability-interface=true , which will not expose these endpoints on the management port. However, this property is deprecated and will be removed in future releases, so it is recommended not to use it. The management interface uses a different HTTP server than the default Red Hat build of Keycloak HTTP server, and it is possible to configure them separately. Be aware that if no values are supplied for the management interface properties, they are inherited from the default Red Hat build of Keycloak HTTP server. For more details, see Configuring the Management Interface . 2.5. XA changes 2.5.1. XA Transaction Changes The option transaction-xa-enabled now defaults to false, rather than true. If you want XA transaction support, you now need to explicitly set this option to true. XA Transaction recovery support is enabled by default if transaction-xa-enabled is true. Transaction logs will be stored at KEYCLOAK_HOME/data/transaction-logs. 2.5.2. Additional datasources now require using XA Red Hat build of Keycloak by default does not use XA datasources. However, this is considered unsafe if more than one datasource is used. Starting with this release, you need to use XA datasources if you are adding additional datasources to Red Hat build of Keycloak. If the default datasource supports XA, you can do this by setting the --transaction-xa-enabled=true option. For additional datasources, you need to use the quarkus.datasource.<your-datasource-name>.jdbc.transactions=xa option in your quarkus.properties file. At most one datasource can be non-XA. Recovery is not supported when you do not have persistent storage for the transaction store. 2.6. Operator changes 2.6.1. Operator no longer defaults to proxy=passthrough The proxy option has been removed from the server. 2.6.2. Operator scheduling defaults Red Hat build of Keycloak Pods will now have default affinities to prevent multiple instances from the same CR from being deployed on the same node, and all Pods from the same CR will prefer to be in the same zone to prevent stretch cache clusters. 2.6.3. Operator's default CPU and memory limits/requests In order to follow best practices, default CPU and memory limits/requests for the Operator were introduced. This affects both non-OLM and OLM installs. To override the default values for the OLM install, edit the resources section in the operator's subscription . 2.7. API changes 2.7.1. New method in ClusterProvider API The following method was added to org.keycloak.cluster.ClusterProvider : void notify(String taskKey, Collection<? extends ClusterEvent> events, boolean ignoreSender, DCNotify dcNotify) When multiple events are sent to the same taskKey , this method batches events and performs just a single network call. This is an optimization to reduce traffic and network-related resources. In Red Hat build of Keycloak 26, the new method has a default implementation to keep backward compatibility with custom implementations. The default implementation performs a single network call per event, and it will be removed in a future version of Red Hat build of Keycloak. 2.7.2. 
 New Java API to search realms by name The RealmProvider Java API now contains a new method Stream<RealmModel> getRealmsStream(String search) which allows searching for a realm by name. While there is a default implementation which filters the stream after loading it from the provider, implementations are encouraged to provide a more efficient implementation. 2.8. Event changes 2.8.1. Group-related events no longer fired when removing a realm With the goal of improving the scalability of groups, they are now removed directly from the database when removing a realm. As a consequence, group-related events such as the GroupRemovedEvent are no longer fired when removing a realm. If you have extensions handling any group-related event when a realm is removed, make sure to use the RealmRemovedEvent instead to perform any cleanup or custom processing when a realm and its groups are removed. The GroupProvider interface is also updated with a new preRemove(RealmModel) method to force implementations to properly handle the removal of groups when a realm is removed. 2.8.2. Changed userId for events related to refresh token The userId in the REFRESH_TOKEN event is now always taken from the user session instead of the sub claim in the refresh token. The userId in the REFRESH_TOKEN_ERROR event is now always null. The reason for this change is that the value of the sub claim in the refresh token may be null with the introduction of the optional sub claim or even different from the real user id when using pairwise subject identifiers or other ways to override the sub claim. However, a refresh_token_sub detail is now added for backwards compatibility to provide information about the user in the case of a missing userId in the REFRESH_TOKEN_ERROR event. 2.9. Keycloak JS This release includes several changes to the Keycloak JS library that should be taken into account. The main motivation for these changes is to de-couple the library from the Red Hat build of Keycloak server, so that it can be refactored independently, simplifying the code and making it easier to maintain in the future. The changes are as follows: 2.9.1. The library is no longer served statically from the server The Keycloak JS library is no longer served statically from the Red Hat build of Keycloak server. This means that the following URLs are no longer available: /js/keycloak-authz.js /js/keycloak-authz.min.js /js/keycloak.js /js/keycloak.min.js /js/{version}/keycloak-authz.js /js/{version}/keycloak-authz.min.js /js/{version}/keycloak.js /js/{version}/keycloak.min.js Additionally, the keycloakJsUrl property that linked to the library on these URLs has been removed from the Admin Console theme. If your custom theme was using this property to include the library, you should update your theme to include the library using a different method. You should now include the library in your project using a package manager such as NPM . The library is available on the NPM registry as keycloak-js . You can install it using the following command: npm install keycloak-js Alternatively, the distribution of the server includes a copy of the library in the keycloak-js-26.0.0.tgz archive. You can copy the library from there into your project. If you are using the library directly in the browser without a build, you'll need to host the library yourself. A package manager is still the recommended way to include the library in your project, as it will make it easier to update the library in the future. 2.9.2. 
 Keycloak instance configuration is now required Previously, it was possible to construct a Keycloak instance without passing any configuration. The configuration would then automatically be loaded from the server from a keycloak.json file based on the path of the included keycloak.js script. Since the library is no longer statically served from the server, this feature has been removed. You now need to pass the configuration explicitly when constructing a Keycloak instance: // Before const keycloak = new Keycloak(); // After const keycloak = new Keycloak({ url: "http://keycloak-server", realm: "my-realm", clientId: "my-app" }); // Alternatively, you can pass a URL to a `keycloak.json` file. // Note this is not recommended as it creates additional network requests, and is prone to change in the future. const keycloak = new Keycloak('http://keycloak-server/path/to/keycloak.json'); 2.9.3. Methods for login are now async Keycloak JS now utilizes the Web Crypto API to calculate the SHA-256 digests needed to support PKCE. Due to the asynchronous nature of this API, the following public methods will now always return a Promise : login() createLoginUrl() createRegisterUrl() Make sure to update your code to await these methods: // Before keycloak.login(); const loginUrl = keycloak.createLoginUrl(); const registerUrl = keycloak.createRegisterUrl(); // After await keycloak.login(); const loginUrl = await keycloak.createLoginUrl(); const registerUrl = await keycloak.createRegisterUrl(); 2.9.4. Stricter startup behavior for build-time options When the provided build-time options differ at startup from the values persisted in the server image during the last optimized Red Hat build of Keycloak build, Red Hat build of Keycloak will now fail to start. Previously, a warning message was displayed in such cases. 2.9.5. New default client scope basic The new client scope named basic is added as a realm "default" client scope and hence will be added to all newly created clients. The client scope is also automatically added to all existing clients during migration. This scope contains preconfigured protocol mappers for the following claims: sub (See the details below in the dedicated section) auth_time This provides additional help to reduce the number of claims in a lightweight access token, but also gives the chance to configure claims that were always added automatically. 2.9.5.1. sub claim is added to access token via protocol mapper The sub claim, which was always added to the access token, is now added by default but using a new Subject (sub) protocol mapper. The Subject (sub) mapper is configured by default in the basic client scope. Therefore, no extra configuration is required after upgrading to this version. If you are using the Pairwise subject identifier mapper to map a sub claim for an access token, you can consider disabling or removing the Subject (sub) mapper; however, it is not strictly needed, as the Subject (sub) protocol mapper is executed before the Pairwise subject identifier mapper and hence the pairwise value will override the value added by the Subject (sub) mapper. This may also apply to other custom protocol mapper implementations, which override the sub claim, as the Subject (sub) mapper is currently executed as the first protocol mapper. You can use the Subject (sub) mapper to configure the sub claim only for the access token, lightweight access token, and introspection response. IDToken and Userinfo always contain the sub claim. 
 The mapper has no effect for service accounts, because no user session exists, and the sub claim is always added to the access token. 2.9.5.2. Nonce claim is only added to the ID token The nonce claim is now only added to the ID token strictly following the OpenID Connect Core 1.0 specification. As indicated in the specification, the claim is compulsory inside the ID token when the same parameter was sent in the authorization request. The specification also recommends against adding the nonce after a refresh request . Previously, the claim was set to all the tokens (Access, Refresh and ID) in all the responses (refresh included). A new Nonce backwards compatible mapper is also included in the software; it can be assigned to client scopes to revert to the old behavior. For example, the JS adapter checked the returned nonce claim in all the tokens before fixing issue #26651 in version 24.0.0. Therefore, if an old version of the JS adapter is used, the mapper should be added to the required clients by using client scopes. 2.9.5.3. Using older javascript adapter If you use the latest Red Hat build of Keycloak server with older versions of the javascript adapter in your applications, you may be affected by the token changes mentioned above, as previous versions of the javascript adapter rely on claims that were added by Red Hat build of Keycloak but are not supported by the OIDC specification. This includes: Adding the Session State (session_state) mapper in case of using the Red Hat build of Keycloak Javascript adapter 24.0.3 or older Adding the Nonce backwards compatible mapper in case of using a Red Hat build of Keycloak Javascript adapter that is older than Red Hat build of Keycloak 24 You can add the protocol mappers directly to the corresponding client or to some client scope, which can be used by your client applications relying on older versions of the Red Hat build of Keycloak Javascript adapter. Some more details are in the sections dedicated to session_state and nonce claims. 2.10. Identity Providers changes 2.10.1. Identity Providers no longer available from the realm representation As part of the improvements around the scalability of realms and organizations when they have many identity providers, the realm representation no longer holds the list of identity providers. However, they are still available from the realm representation when exporting a realm. To query the identity providers in a realm, prefer using the /realms/{realm}/identity-provider/instances endpoint. This endpoint supports filters and pagination. 2.10.2. Improving performance for selection of identity providers New indexes were added to the IDENTITY_PROVIDER table to improve the performance of queries that fetch the IDPs associated with an organization, and fetch IDPs that are available for login (those that are enabled , not link_only , not marked as hide_on_login ). If the table currently contains more than 300,000 entries, Red Hat build of Keycloak will skip the creation of the indexes by default during the automatic schema migration, and will instead log the SQL statements on the console during migration. In this case, the statements must be run manually in the DB after Red Hat build of Keycloak's startup. Also, the kc.org and hideOnLoginPage configuration attributes were migrated to the identity provider itself, to allow for more efficient queries when searching for providers. 
 As such, API clients should use the getOrganizationId/setOrganizationId and isHideOnLogin/setHideOnLogin methods in the IdentityProviderRepresentation , and avoid setting these properties using the legacy config attributes that are now deprecated. 2.11. Other changes 2.11.1. Argon2 password hashing Argon2 is now the default password hashing algorithm used by Red Hat build of Keycloak in a non-FIPS environment. Argon2 was the winner of the 2015 password hashing competition and is the recommended hashing algorithm by OWASP . In Red Hat build of Keycloak 24 the default hashing iterations for PBKDF2 were increased from 27.5K to 210K, resulting in a more than 10 times increase in the amount of CPU time required to generate a password hash. With Argon2, you can achieve better security, with almost the same CPU time as previous releases of Red Hat build of Keycloak. One downside is that Argon2 requires more memory, which is a requirement for resistance against GPU attacks. The defaults for Argon2 in Red Hat build of Keycloak require 7 MB per hashing request. To prevent excessive memory and CPU usage, the parallel computation of hashes by Argon2 is by default limited to the number of cores available to the JVM. To support the memory intensive nature of Argon2, we have updated the default GC from ParallelGC to G1GC for better heap utilization. Note that Argon2 is not compliant with FIPS 140-2. So if you are in a FIPS environment, the default algorithm will still be PBKDF2. Also note that if you are on a non-FIPS environment and you plan to migrate to a FIPS environment, consider changing the password policy to a FIPS compliant algorithm such as pbkdf2-sha512 at the outset. Otherwise, users will not be able to log in after they switch to the FIPS environment. 2.11.2. Default http-pool-max-threads reduced If left unset, http-pool-max-threads now defaults to the greater of 50 or 4 x (available processors). Previously, it defaulted to the greater of 200 or 8 x (available processors). Reducing the number of task threads will, for most usage scenarios, result in slightly higher performance due to less context switching among active threads. 2.11.3. Improved performance of findGrantedResources and findGrantedOwnerResources queries These queries performed poorly when the RESOURCE_SERVER_RESOURCE and RESOURCE_SERVER_PERM_TICKET tables had over 100k entries and users were granted access to over 1k resources. The queries were simplified and new indexes for the requester and owner columns were introduced. The new indexes are both applied to the RESOURCE_SERVER_PERM_TICKET table. If the table currently contains more than 300,000 entries, Red Hat build of Keycloak will skip the creation of the indexes by default during the automatic schema migration, and will instead log the SQL statements on the console during migration. In this case, the statements must be run manually in the DB after Red Hat build of Keycloak's startup. 2.11.4. Method getExp added to SingleUseObjectKeyModel As a consequence of the removal of deprecated methods from AccessToken , IDToken , and JsonWebToken , the SingleUseObjectKeyModel also changed to keep consistency with the method names related to expiration values. The getExpiration method is now deprecated and you should prefer using the newly introduced getExp method to avoid overflow after 2038. 2.11.5. 
 Concurrent login requests are blocked by default when brute force is enabled If an attacker launched many login attempts in parallel, the attacker could have more guesses at a password than the brute force protection configuration permits. This was due to the brute force check occurring before the brute force protector had locked the user. To prevent this race, the Brute Force Protector now rejects all login attempts that occur while another login is in progress in the same server. If you prefer to disable this feature, use this command: bin/kc.[sh|bat] start --spi-brute-force-protector-default-brute-force-detector-allow-concurrent-requests=true 2.11.6. Changes in redirect URI verification when using wildcards Because of security concerns, the redirect URI verification now performs an exact string matching (no wildcard involved) if the passed redirect uri contains a userinfo part or its path accesses the parent directory ( /../ ). The full wildcard * can still be used as a valid redirect in development for http(s) URIs with those characteristics. In production environments, configure an exact valid redirect URI without wildcards for any URI of that type. Note that wildcard valid redirect URIs are not recommended for production and not covered by the OAuth 2.0 specification. 2.11.7. Infinispan marshalling changes Marshalling is the process of converting Java objects into bytes to send them across the network between Red Hat build of Keycloak servers. With Red Hat build of Keycloak 26, the marshalling library has changed from JBoss Marshalling to Infinispan Protostream. The libraries are not compatible with each other, and some steps are required to ensure that session data is not lost. Warning JBoss Marshalling and Infinispan Protostream are not compatible with each other and incorrect usage may lead to data loss. Consequently, all caches are cleared when upgrading to this version. All existing online user and client sessions are cleared. Offline user and client sessions are not affected. 2.11.8. Automatic redirect from root to relative path The user is automatically redirected to the path where Red Hat build of Keycloak is hosted when the http-relative-path property is specified. It means when the relative path is set to /auth , and the user accesses localhost:8080/ , the page is redirected to localhost:8080/auth . The same change applies to the management interface when the http-management-relative-path or http-relative-path property is specified. This change improves user experience. Users no longer need to append the relative path to the URL explicitly. 2.11.9. Consistent usage of UTF-8 charset for URL encoding org.keycloak.common.util.Encode now always uses the UTF-8 charset for URL encoding instead of relying implicitly on the file.encoding system property. 2.11.10. Configuring the LDAP Connection Pool In this release, the LDAP connection pool configuration relies solely on system properties. The main reason is that the LDAP connection pool configuration is a JVM-level configuration rather than specific to an individual realm or LDAP provider instance. Compared to previous releases, any realm configuration related to the LDAP connection pool will be ignored. 
 If you are migrating from previous versions where any of the following settings are set on your LDAP provider(s), consider using system properties instead: connectionPoolingAuthentication connectionPoolingInitSize connectionPoolingMaxSize connectionPoolingPrefSize connectionPoolingTimeout connectionPoolingProtocol connectionPoolingDebug For more details, see Configuring the connection pool . 2.11.11. Persisting revoked access tokens across restarts In this release, when using the embedded caches, revoked access tokens are by default written to the database and reloaded when the cluster is restarted. To disable this behavior, use the SPI option spi-single-use-object-infinispan-persist-revoked-tokens as outlined in All provider configuration . The SPI behavior of SingleUseObjectProvider has changed so that, for revoked tokens, only the methods put and contains must be used. This is enforced by default, and can be disabled using the SPI option spi-single-use-object-infinispan-persist-revoked-tokens . 2.11.12. Highly available multi-site deployments Red Hat build of Keycloak 26 introduces significant improvements to the recommended high availability multi-site architecture, most notably: Red Hat build of Keycloak deployments are now able to handle user requests simultaneously in both sites. Previous load balancer configurations handling requests only in one site at a time will continue to work. Active monitoring of the connectivity between the sites is now required to re-configure the replication between the sites in case of a failure. The blueprints describe a setup with Alertmanager and AWS Lambda. The load balancer blueprint has been updated to use the AWS Global Accelerator as this avoids prolonged fail-over times caused by DNS caching by clients. Persistent user sessions are now a requirement of the architecture. Consequently, user sessions will be kept on Red Hat build of Keycloak or Data Grid upgrades. External Data Grid request handling has been improved to reduce memory usage and request latency. As a consequence of the above changes, the following changes are required to your existing Red Hat build of Keycloak deployments. distributed-cache definitions provided by a cache configuration file are ignored when the multi-site feature is enabled, so you must configure the connection to the external Data Grid deployment via the cache-remote-* command line arguments or Keycloak CR as outlined in the blueprints. All remote-store configurations must be removed from the cache configuration file. Review your current cache configurations in the external Data Grid and update them with those outlined in the latest version of the Red Hat build of Keycloak's documentation. While previous versions of the cache configurations only logged warnings when the backup replication between sites failed, the new configurations ensure that the state in both sites stays in sync: When the transfer between the two sites fails, the caller will see an error. Due to that, you need to set up monitoring to disconnect the two sites in case of a site failure. The High Availability Guide contains a blueprint on how to set this up. While previous load balancer configurations will continue to work with Red Hat build of Keycloak, consider upgrading an existing Route53 configuration to avoid prolonged failover times due to client side DNS caching. If you have updated your cache configuration XML file with remote-store configurations, those will no longer work. Instead, enable the multi-site feature and use the cache-remote-* options. 2.11.13. 
 Required actions improvements The required action provider name is now returned via the kc_action parameter when redirecting back from an application-initiated required action execution. This eases the detection of which required action was executed for a client. The outcome of the execution can be determined via the kc_action_status parameter. Note: This feature required changes to the Keycloak JS adapter; therefore, it is recommended to upgrade to the latest version of the adapter if you want to make use of this feature. 2.11.14. Keystore and trust store default format change Red Hat build of Keycloak now determines the format of the keystore and trust store based on the file extension. If the file extension is .p12 , .pkcs12 or .pfx , the format is PKCS12. If the file extension is .jks , .keystore or .truststore , the format is JKS. If the file extension is .pem , .crt or .key , the format is PEM. You can still override automatic detection by specifying the https-key-store-type and https-trust-store-type explicitly. The same applies to the management interface and its https-management-key-store-type . Restrictions for the FIPS strict mode stay unchanged. Note The spi-truststore-file-* options and the truststore-related options https-trust-store-* are deprecated; we strongly recommend using the System Truststore. For more details refer to the relevant guide . 2.11.15. Paths for common theme resources have changed Some of the paths for the common resources of the keycloak theme have changed, specifically the resources for third-party libraries. Make sure to update your custom themes accordingly: node_modules/patternfly/dist is now vendor/patternfly-v3 node_modules/@patternfly/patternfly is now vendor/patternfly-v4 node_modules/@patternfly-v5/patternfly is now vendor/patternfly-v5 node_modules/rfc4648/lib is now vendor/rfc4648 Additionally, the following resources have been removed from the common theme: node_modules/alpinejs node_modules/jquery If you previously used any of the removed resources in your theme, make sure to add them to your own theme resources instead. 2.11.16. BouncyCastle FIPS updated Our FIPS 140-2 integration is now tested and supported with version 2 of the BouncyCastle FIPS libraries. This version is certified with Java 21. If you use the FIPS 140-2 integration, it is recommended to upgrade the BouncyCastle FIPS library to the versions mentioned in the latest documentation. BouncyCastle FIPS version 2 is certified with FIPS 140-3, so Red Hat build of Keycloak can be FIPS 140-3 compliant as long as it is used on a FIPS 140-3 compliant system. This might be a RHEL 9 based system, which itself is compliant with FIPS 140-3. Note that a RHEL 8 based system is only certified for FIPS 140-2.
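As a brief illustration of the keystore format change in section 2.11.14, a server whose keystore file extension does not match its actual format can still set the type explicitly; the file path below is hypothetical, not taken from this guide: bin/kc.[sh|bat] start --https-key-store-file=/path/to/server.keystore --https-key-store-type=PKCS12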
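Similarly, for the runtime cache options described in section 2.1.6, a sketch of the adjusted workflow is to build without any cache options and pass them at startup instead; the values ispn and kubernetes below are illustrative, not a recommendation for your environment: bin/kc.[sh|bat] build followed by bin/kc.[sh|bat] start --optimized --cache=ispn --cache-stack=kubernetes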
[ "Hostname v1 bin/kc.[sh|bat] start --hostname=mykeycloak.org --https-port=8543 --hostname-path=/auth --hostname-strict-https=true Hostname v2 bin/kc.[sh|bat] start --hostname=https://mykeycloak.org:8543/auth", "Hostname v1 bin/kc.[sh|bat] start --hostname=mykeycloak.org --hostname-strict-backchannel=true Hostname v2 bin/kc.[sh|bat] start --hostname=mykeycloak.org --hostname-backchannel-dynamic=false", "bin/kc.[sh|bat] start --spi-group-jpa-escape-slashes-in-group-path=true", "bin/kc.[sh|bat] --spi-connections-http-client-default-max-consumed-response-size=1000000", "http_server_active_requests 1.0 http_server_requests_seconds_count{method=\"GET\",outcome=\"SUCCESS\",status=\"200\",uri=\"/realms/{realm}/protocol/{protocol}/auth\"} 1.0 http_server_requests_seconds_sum{method=\"GET\",outcome=\"SUCCESS\",status=\"200\",uri=\"/realms/{realm}/protocol/{protocol}/auth\"} 0.048717142", "npm install keycloak-js", "// Before const keycloak = new Keycloak(); // After const keycloak = new Keycloak({ url: \"http://keycloak-server\", realm: \"my-realm\", clientId: \"my-app\" }); // Alternatively, you can pass a URL to a `keycloak.json` file. // Note this is not reccomended as it creates additional network requests, and is prone to change in the future. const keycloak = new Keycloak('http://keycloak-server/path/to/keycloak.json');", "// Before keycloak.login(); const loginUrl = keycloak.createLoginUrl(); const registerUrl = keycloak.createRegisterUrl(); // After await keycloak.login(); const loginUrl = await keycloak.createLoginUrl(); const registerUrl = await keycloak.createRegisterUrl();", "bin/kc.[sh|bat] start --spi-brute-force-protector-default-brute-force-detector-allow-concurrent-requests=true" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/26.0/html/upgrading_guide/migration-changes
Chapter 3. Important update on odo
Chapter 3. Important update on odo Red Hat does not provide information about odo on the OpenShift Container Platform documentation site. See the documentation maintained by Red Hat and the upstream community for information related to odo . Important For the materials maintained by the upstream community, Red Hat provides support under Cooperative Community Support .
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/cli_tools/developer-cli-odo
8.90. libibverbs-rocee
8.90. libibverbs-rocee 8.90.1. RHEA-2013:1740 - libibverbs-rocee and libmlx4-rocee bug fix and enhancement update Updated libibverbs-rocee and libmlx4-rocee packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. Red Hat Enterprise Linux includes a collection of InfiniBand and iWARP utilities, libraries, and development packages for writing applications that use Remote Direct Memory Access (RDMA) technology. Note The libibverbs-rocee packages have been upgraded to upstream version 1.1.7 and the libmlx4-rocee packages to upstream version 1.0.5, which provide a number of bug fixes and enhancements over the previous versions and keep the HPN channel synchronized with the base Red Hat Enterprise Linux channel, where the sister versions of these packages (libibverbs and libmlx4) were also updated to the latest upstream release. All users of Remote Direct Memory Access (RDMA) technology are advised to upgrade to these updated packages, which fix these bugs and add these enhancements.
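As a sketch of the recommended upgrade on a system subscribed to the HPN channel, a standard yum transaction is typically sufficient; verify the exact package set for your system before running it: yum update libibverbs-rocee libmlx4-rocee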
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/libibverbs-rocee
Chapter 9. Premigration checklists
Chapter 9. Premigration checklists Before you migrate your application workloads with the Migration Toolkit for Containers (MTC), review the following checklists. 9.1. Resources ❏ If your application uses an internal service network or an external route for communicating with services, the relevant route exists. ❏ If your application uses cluster-level resources, you have re-created them on the target cluster. ❏ You have excluded persistent volumes (PVs), image streams, and other resources that you do not want to migrate. ❏ PV data has been backed up in case an application displays unexpected behavior after migration and corrupts the data. 9.2. Source cluster ❏ The cluster meets the minimum hardware requirements . ❏ You have installed the correct legacy Migration Toolkit for Containers Operator version: operator-3.7.yml on OpenShift Container Platform version 3.7. operator.yml on OpenShift Container Platform versions 3.9 to 4.5. ❏ All nodes have an active OpenShift Container Platform subscription. ❏ You have performed all the run-once tasks . ❏ You have performed all the environment health checks . ❏ You have checked for PVs with abnormal configurations stuck in a Terminating state by running the following command: $ oc get pv ❏ You have checked for pods whose status is other than Running or Completed by running the following command: $ oc get pods --all-namespaces | egrep -v 'Running | Completed' ❏ You have checked for pods with a high restart count by running the following command: $ oc get pods --all-namespaces --field-selector=status.phase=Running \ -o json | jq '.items[]|select(any( .status.containerStatuses[]; \ .restartCount > 3))|.metadata.name' Even if the pods are in a Running state, a high restart count might indicate underlying problems. ❏ You have removed old builds, deployments, and images from each namespace to be migrated by pruning . ❏ The OpenShift image registry uses a supported storage type . ❏ Direct image migration only: The OpenShift image registry is exposed to external traffic. ❏ You can read and write images to the registry. ❏ The etcd cluster is healthy. ❏ The average API server response time on the source cluster is less than 50 ms. ❏ The cluster certificates are valid for the duration of the migration process. ❏ You have checked for pending certificate-signing requests by running the following command: $ oc get csr -A | grep pending -i ❏ The identity provider is working. ❏ You have set the value of the openshift.io/host.generated annotation parameter to true for each OpenShift Container Platform route, which updates the host name of the route for the target cluster. Otherwise, the migrated routes retain the source cluster host name. 9.3. Target cluster ❏ You have installed Migration Toolkit for Containers Operator version 1.5.1. ❏ All MTC prerequisites are met. ❏ The cluster meets the minimum hardware requirements for the specific platform and installation method, for example, on bare metal . ❏ The cluster has storage classes defined for the storage types used by the source cluster, for example, block volume, file system, or object storage. Note NFS does not require a defined storage class. ❏ The cluster has the correct network configuration and permissions to access external services, for example, databases, source code repositories, container image registries, and CI/CD tools. ❏ External applications and services that use services provided by the cluster have the correct network configuration and permissions to access the cluster. 
❏ Internal container image dependencies are met. If an application uses an internal image in the openshift namespace that is not supported by OpenShift Container Platform 4.11, you can manually update the OpenShift Container Platform 3 image stream tag with podman . ❏ The target cluster and the replication repository have sufficient storage space. ❏ The identity provider is working. ❏ DNS records for your application exist on the target cluster. ❏ Certificates that your application uses exist on the target cluster. ❏ You have configured appropriate firewall rules on the target cluster. ❏ You have correctly configured load balancing on the target cluster. ❏ If you migrate objects to an existing namespace on the target cluster that has the same name as the namespace being migrated from the source, the target namespace contains no objects of the same name and type as the objects being migrated. Note Do not create namespaces for your application on the target cluster before migration because this might cause quotas to change. 9.4. Performance ❏ The migration network has a minimum throughput of 10 Gbps. ❏ The clusters have sufficient resources for migration. Note Clusters require additional memory, CPUs, and storage in order to run a migration on top of normal workloads. Actual resource requirements depend on the number of Kubernetes resources being migrated in a single migration plan. You must test migrations in a non-production environment in order to estimate the resource requirements. ❏ The memory and CPU usage of the nodes are healthy. ❏ The etcd disk performance of the clusters has been checked with fio .
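As a sketch of the final check above, etcd disk performance is commonly measured with an fio job that syncs after each write; the directory, size, and block size shown here are illustrative values only, not required parameters: $ fio --rw=write --ioengine=sync --fdatasync=1 --directory=/var/lib/etcd-test --size=22m --bs=2300 --name=etcd-perf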
[ "oc get pv", "oc get pods --all-namespaces | egrep -v 'Running | Completed'", "oc get pods --all-namespaces --field-selector=status.phase=Running -o json | jq '.items[]|select(any( .status.containerStatuses[]; .restartCount > 3))|.metadata.name'", "oc get csr -A | grep pending -i" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/migrating_from_version_3_to_4/premigration-checklists-3-4
Installing on Apache Karaf
Installing on Apache Karaf Red Hat Fuse 7.13 Install Red Hat Fuse on the Apache Karaf container Red Hat Fuse Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/installing_on_apache_karaf/index
Chapter 5. Deploying Red Hat Quay
Chapter 5. Deploying Red Hat Quay To deploy the Red Hat Quay service on the nodes in your cluster, you use the same Quay container you used to create the configuration file. The differences here are that you: Identify directories where the configuration files and data are stored Run the command with --sysctl net.core.somaxconn=4096 Don't use the config option or password For a basic setup, you can deploy on a single node; for high availability, you probably want three or more nodes (for example, quay01, quay02, and quay03). Note The resulting Red Hat Quay service will listen on regular port 8080 and SSL port 8443. This is different from previous releases of Red Hat Quay, which listened on standard ports 80 and 443, respectively. In this document, we map 8080 and 8443 to standard ports 80 and 443 on the host, respectively. Throughout the rest of this document, we assume you have mapped the ports in this way. Here is what you do: Create directories : Create two directories to store configuration information and data on the host. For example: Copy config files : Copy the tarball ( quay-config.tar.gz ) to the configuration directory and unpack it. For example: Deploy Red Hat Quay : Having already authenticated to Quay.io (see Accessing Red Hat Quay ) run Red Hat Quay as a container, as follows: Note Add -e DEBUGLOG=true to the podman run command line for the Quay container to enable debug level logging. Add -e IGNORE_VALIDATION=true to bypass validation during the startup process. Open browser to UI : Once the Quay container has started, go to your web browser and open the URL of the node running the Quay container. Log into Red Hat Quay : Using the superuser account you created during configuration, log in and make sure Red Hat Quay is working properly. Add more Red Hat Quay nodes : At this point, you have the option of adding more nodes to this Red Hat Quay cluster by simply going to each node, then adding the tarball and starting the Quay container as just shown. Add optional features : To add more features to your Red Hat Quay cluster, such as Clair image scanning and repository mirroring, continue on to the next section. 5.1. Add Clair image scanning to Red Hat Quay Setting up and deploying Clair image scanning for your Red Hat Quay deployment is described in Clair Security Scanning . 5.2. Add repository mirroring to Red Hat Quay Enabling repository mirroring allows you to create container image repositories on your Red Hat Quay cluster that exactly match the content of a selected external registry, then sync the contents of those repositories on a regular schedule and on demand. To add the repository mirroring feature to your Red Hat Quay cluster: Run the repository mirroring worker. To do this, you start a quay pod with the repomirror option. Select "Enable Repository Mirroring" in the Red Hat Quay Setup tool. Log into your Red Hat Quay Web UI and begin creating mirrored repositories as described in Repository Mirroring in Red Hat Quay . The following procedure assumes you already have a running Red Hat Quay cluster on an OpenShift platform, with the Red Hat Quay Setup container running in your browser: Start the repo mirroring worker : Start the Quay container in repomirror mode. This example assumes you have configured TLS communications using a certificate that is currently stored in /root/ca.crt . If not, then remove the line that adds /root/ca.crt to the container: Log into config tool : Log into the Red Hat Quay Setup Web UI (config tool). 
 Enable repository mirroring : Scroll down to the Repository Mirroring section and select the Enable Repository Mirroring check box. Select HTTPS and cert verification : If you want to require HTTPS communications and verify certificates during mirroring, select this check box. Save configuration : Select the Save Configuration Changes button. Repository mirroring should now be enabled on your Red Hat Quay cluster. Refer to Repository Mirroring in Red Hat Quay for details on setting up your own mirrored container image repositories.
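To confirm that the Quay and mirroring containers described above are running, standard podman commands can be used; the container name mirroring-worker matches the name assigned in the example command, and the output you see will depend on your deployment: sudo podman ps followed by sudo podman logs mirroring-worker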
[ "mkdir -p /mnt/quay/config #optional: if you don't choose to install an Object Store mkdir -p /mnt/quay/storage", "cp quay-config.tar.gz /mnt/quay/config/ tar xvf quay-config.tar.gz config.yaml ssl.cert ssl.key", "sudo podman run --restart=always -p 443:8443 -p 80:8080 --sysctl net.core.somaxconn=4096 --privileged=true -v /mnt/quay/config:/conf/stack:Z -v /mnt/quay/storage:/datastorage:Z -d registry.redhat.io/quay/quay-rhel8:v3.13.3", "sudo podman run -d --name mirroring-worker -v /mnt/quay/config:/conf/stack:Z -v /root/ca.crt:/etc/pki/ca-trust/source/anchors/ca.crt registry.redhat.io/quay/quay-rhel8:v3.13.3 repomirror" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3/html/deploy_red_hat_quay_-_high_availability/deploying_red_hat_quay
Chapter 64. System and Subscription Management
Chapter 64. System and Subscription Management System upgrade may cause Yum to install unneeded 32-bit packages if rdma-core is installed In Red Hat Enterprise Linux 7.4, the rdma-core.noarch packages are obsoleted by rdma-core.i686 and rdma-core.x86_64 . During a system upgrade, Yum replaces the original package with both of the new packages, and installs any required dependencies. This means that the 32-bit package, as well as a potentially large number of its 32-bit dependencies, is installed by default, even if not required. To work around this problem, you can either use the yum update command with the --exclude=\*.i686 option, or you can use yum remove rdma-core.i686 after the upgrade to remove the 32-bit package. (BZ#1458338)
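For illustration, the two workarounds described above correspond to the following commands; choose one depending on whether you run it before or after the upgrade: yum update --exclude=\*.i686 or, after upgrading, yum remove rdma-core.i686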
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.4_release_notes/known_issues_system_and_subscription_management
Chapter 7. Managing organizations
Chapter 7. Managing organizations Organizations divide Red Hat Satellite resources into logical groups based on ownership, purpose, content, security level, or other divisions. You can create and manage multiple organizations through Red Hat Satellite, then divide and assign your Red Hat subscriptions to each individual organization. This provides a method of managing the content of several individual organizations under one management system. 7.1. Examples of using organizations in Satellite Single Organization Using a single organization is well suited for a small business with a simple system administration chain. In this case, you create a single organization for the business and assign content to it. You can also use the Default Organization for this purpose. Multiple Organizations Using multiple organizations is well suited for a large company that owns several smaller business units. For example, a company with separate system administration and software development groups. In this case, you create one organization for the company and then an organization for each of the business units it owns. You then assign content to each organization based on its needs. External Organizations Using external organizations is well suited for a company that manages external systems for other organizations. For example, a company offering cloud computing and web hosting resources to customers. In this case, you create an organization for the company's own system infrastructure and then an organization for each external business. You then assign content to each organization where necessary. 7.2. Creating an organization Use this procedure to create an organization. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Administer > Organizations . Click New Organization . In the Name field, enter a name for the organization. In the Label field, enter a unique identifier for the organization. This is used for creating and mapping certain assets, such as directories for content storage. Use letters, numbers, underscores, and dashes, but no spaces. Optional: If you do not wish to enable Simple Content Access (SCA), uncheck the Simple Content Access checkbox. For more information on SCA, see Simple Content Access . Note Red Hat does not recommend disabling SCA as entitlement mode is deprecated. Optional: In the Description field, enter a description for the organization. Click Submit . If you have hosts with no organization assigned, select the hosts that you want to add to the organization, then click Proceed to Edit . In the Edit page, assign the infrastructure resources that you want to add to the organization. This includes networking resources, installation media, kickstart templates, and other parameters. You can return to this page at any time by navigating to Administer > Organizations and then selecting an organization to edit. Click Submit . CLI procedure To create an organization, enter the following command: Note Organizations created this way have Simple Content Access (SCA) enabled by default. If you wish to disable SCA, add the --simple-content-access false parameter to the command. Red Hat does not advise you to disable SCA because entitlement mode (not using SCA) is deprecated. Optional: To edit an organization, enter the hammer organization update command. For example, the following command assigns a compute resource to the organization: 7.3. 
 Creating an organization debug certificate If you require a debug certificate for your organization, use the following procedure. Procedure In the Satellite web UI, navigate to Administer > Organizations . Select an organization that you want to generate a debug certificate for. Click Generate and Download . Save the certificate file in a secure location. Debug certificates for provisioning templates Debug Certificates are automatically generated for provisioning template downloads if they do not already exist in the organization for which they are being downloaded. 7.4. Browsing repository content using an organization debug certificate You can view an organization's repository content using a web browser or using the API if you have a debug certificate for that organization. Prerequisites You created and downloaded an organization certificate. For more information, see Section 7.3, "Creating an organization debug certificate" . Procedure Split the private and public keys from the certificate into two files. Open the X.509 certificate, for example, for the default organization: Copy the contents of the file from -----BEGIN RSA PRIVATE KEY----- to -----END RSA PRIVATE KEY----- , into a key.pem file. Copy the contents of the file from -----BEGIN CERTIFICATE----- to -----END CERTIFICATE----- , into a cert.pem file. To use a browser, you must first convert the X.509 certificate to a format your browser supports and then import the certificate. For Firefox users Convert the certificate into the PKCS12 format using the following command: In the Firefox browser, navigate to Edit > Preferences > Advanced Tab . Select View Certificates and click the Your Certificates tab. Click Import and select the .pfx file to load. Enter the following URL in the address bar to browse the accessible paths for all the repositories and check their contents: For CURL users To use the organization debug certificate with CURL, enter the following command: Ensure that the paths to cert.pem and key.pem are the correct absolute paths; otherwise, the command fails silently. Pulp uses the organization label; therefore, you must enter the organization label into the URL. 7.5. Deleting an organization You can delete an organization if the organization is not associated with any lifecycle environments or host groups. If there are any lifecycle environments or host groups associated with the organization you are about to delete, remove them by navigating to Administer > Organizations and clicking the relevant organization. Important Do not delete the Default Organization created during installation because the default organization is a placeholder for any unassociated hosts in your Satellite environment. There must be at least one organization in the environment at any given time. Procedure In the Satellite web UI, navigate to Administer > Organizations . From the list to the right of the name of the organization you want to delete, select Delete . Click OK to delete the organization. CLI procedure Enter the following command to retrieve the ID of the organization that you want to delete: From the output, note the ID of the organization that you want to delete. Enter the following command to delete an organization:
[ "hammer organization create --name \" My_Organization \" --label \" My_Organization_Label \" --description \" My_Organization_Description \"", "hammer organization update --name \" My_Organization \" --compute-resource-ids 1", "vi 'Default Organization-key-cert.pem'", "openssl pkcs12 -keypbe PBE-SHA1-3DES -certpbe PBE-SHA1-3DES -export -in cert.pem -inkey key.pem -out My_Organization_Label .pfx -name My_Organization", "https:// satellite.example.com /pulp/content/", "curl -k --cert cert.pem --key key.pem https:// satellite.example.com /pulp/content/ My_Organization_Label /Library/content/dist/rhel/server/7/7Server/x86_64/os/", "hammer organization list", "hammer organization delete --id Organization_ID" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/administering_red_hat_satellite/managing_organizations_admin
Chapter 13. Component details
Chapter 13. Component details The following table shows the component versions for each Streams for Apache Kafka release. Note Components like the operators, console, and proxy only apply to using Streams for Apache Kafka on OpenShift.
Streams for Apache Kafka | Apache Kafka | Strimzi Operators | Kafka Bridge | Oauth | Cruise Control | Console | Proxy
2.9.0 | 3.9.0 | 0.45.0 | 0.31 | 0.15.0 | 2.5.141 | 0.6 | 0.9.0
2.8.0 | 3.8.0 | 0.43.0 | 0.30 | 0.15.0 | 2.5.138 | 0.1 | 0.8.0
2.7.0 | 3.7.0 | 0.40.0 | 0.28 | 0.15.0 | 2.5.137 | 0.1 | 0.5.1
2.6.0 | 3.6.0 | 0.38.0 | 0.27 | 0.14.0 | 2.5.128 | - | -
2.5.2 | 3.5.0 (+3.5.2) | 0.36.0 | 0.26 | 0.13.0 | 2.5.123 | - | -
2.5.1 | 3.5.0 | 0.36.0 | 0.26 | 0.13.0 | 2.5.123 | - | -
2.5.0 | 3.5.0 | 0.36.0 | 0.26 | 0.13.0 | 2.5.123 | - | -
2.4.0 | 3.4.0 | 0.34.0 | 0.25.0 | 0.12.0 | 2.5.112 | - | -
2.3.0 | 3.3.1 | 0.32.0 | 0.22.3 | 0.11.0 | 2.5.103 | - | -
2.2.2 | 3.2.3 | 0.29.0 | 0.21.5 | 0.10.0 | 2.5.103 | - | -
2.2.1 | 3.2.3 | 0.29.0 | 0.21.5 | 0.10.0 | 2.5.103 | - | -
2.2.0 | 3.2.3 | 0.29.0 | 0.21.5 | 0.10.0 | 2.5.89 | - | -
2.1.0 | 3.1.0 | 0.28.0 | 0.21.4 | 0.10.0 | 2.5.82 | - | -
2.0.1 | 3.0.0 | 0.26.0 | 0.20.3 | 0.9.0 | 2.5.73 | - | -
2.0.0 | 3.0.0 | 0.26.0 | 0.20.3 | 0.9.0 | 2.5.73 | - | -
1.8.4 | 2.8.0 | 0.24.0 | 0.20.1 | 0.8.1 | 2.5.59 | - | -
1.8.0 | 2.8.0 | 0.24.0 | 0.20.1 | 0.8.1 | 2.5.59 | - | -
1.7.0 | 2.7.0 | 0.22.1 | 0.19.0 | 0.7.1 | 2.5.37 | - | -
1.6.7 | 2.6.3 | 0.20.1 | 0.19.0 | 0.6.1 | 2.5.11 | - | -
1.6.6 | 2.6.3 | 0.20.1 | 0.19.0 | 0.6.1 | 2.5.11 | - | -
1.6.5 | 2.6.2 | 0.20.1 | 0.19.0 | 0.6.1 | 2.5.11 | - | -
1.6.4 | 2.6.2 | 0.20.1 | 0.19.0 | 0.6.1 | 2.5.11 | - | -
1.6.0 | 2.6.0 | 0.20.0 | 0.19.0 | 0.6.1 | 2.5.11 | - | -
1.5.0 | 2.5.0 | 0.18.0 | 0.16.0 | 0.5.0 | - | - | -
1.4.1 | 2.4.0 | 0.17.0 | 0.15.2 | 0.3.0 | - | - | -
1.4.0 | 2.4.0 | 0.17.0 | 0.15.2 | 0.3.0 | - | - | -
1.3.0 | 2.3.0 | 0.14.0 | 0.14.0 | 0.1.0 | - | - | -
1.2.0 | 2.2.1 | 0.12.1 | 0.12.2 | - | - | - | -
1.1.1 | 2.1.1 | 0.11.4 | - | - | - | - | -
1.1.0 | 2.1.1 | 0.11.1 | - | - | - | - | -
1.0 | 2.0.0 | 0.8.1 | - | - | - | - | -
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/release_notes_for_streams_for_apache_kafka_2.9_on_openshift/ref-component-details-str
Chapter 6. Creating a product
Chapter 6. Creating a product The product listing provides marketing and technical information, showcasing your product's features and advantages to potential customers. It lays the foundation for adding all necessary components to your product for certification. Prerequisites Verify the functionality of your product on the target Red Hat platform, in addition to the specific certification testing requirements. If running your product on the targeted Red Hat platform results in a substandard experience, you must resolve the issues before certification. Red Hat recommends completing all optional fields in the listing tabs for a comprehensive product listing. More information helps mutual customers make informed choices. Red Hat encourages collaboration with your product manager, marketing representative, or other product experts when entering information for your product listing. Fields marked with an asterisk (*) are mandatory. Procedure Log in to the Red Hat Partner Connect Portal . Go to the Certified technology portal tab and click Visit the portal . On the header bar, click Product management . From the Listing and certification tab, click Manage products . From the My Products page, click Create Product . A Create New Product dialog opens. Enter the Product name . From the What kind of product would you like to certify? drop-down, select the required product category and click Create product . For example, select Standalone Application for creating a non-containerized product listing. A new page with your Product name opens. It comprises the following tabs: Section 6.1, "Overview" Section 6.2, "Product Information" Section 6.3, "Components" Section 6.4, "Support" Along with the following tabs, the page header provides the Product Score details. Product Score evaluates your product information and displays a score. It can be: Fair Good Excellent Best Click How do I improve my score? to improve your product score. After providing the product listing details, click Save before moving to the next section. 6.1. Overview This tab consists of a series of tasks that you must complete to publish your product: Section 6.1.1, "Complete product listing details" Section 6.1.2, "Complete company profile information" Section 6.1.3, "Certify or validate your product" Section 6.1.4, "Validate your product" Section 6.1.5, "Add at least one product component" Section 6.1.6, "Certify components for your listing" 6.1.1. Complete product listing details To complete your product listing details, click Start . The Product Information tab opens. Enter all the essential product details and click Save . 6.1.2. Complete company profile information To complete your company profile information, click Start . After entering all the details, click Submit . To modify the existing details, click Review . The Account Details page opens. Review and modify the Company profile information and click Submit . 6.1.3. Certify or validate your product It is not possible to validate a product that already has a certified component. Certifying a component is not required in order to validate a product. To select validation or certification for your product, click Validate or Certify product . Read the Publication and testing guidelines . To certify, click Add Component and then go to Section 6.1.5, "Add at least one product component" . To validate, click Start validation . 
 After submitting your application for validation, the Red Hat certification team will review and verify the entered details of the Partner validation questionnaire. If at a later date you want to certify your Partner Validated application, complete the certification details. 6.1.4. Validate your product Select What Red Hat products are you validating for? Red Hat OpenShift or Red Hat Enterprise Linux. Select which Red Hat OpenShift or Red Hat Enterprise Linux versions and subversions you want to validate your products for. Click Start Validation . Enter and complete all the information requested in the Partner validation questionnaire , including documentation, product testing and which Red Hat OpenShift or Red Hat cluster it has been tested on. The entered details in the questionnaire will be used by Red Hat to determine whether to validate the product and if it can be published. 6.1.5. Add at least one product component Click Start . You are redirected to the Components tab. To add a new or existing product component, click Add component . For adding a new component: In the Component Name text box, enter the component name. For What kind of standalone component are you creating? select the component that you wish to certify. For example, for certifying a non-containerized component, select Non-containerized Application . For Red Hat Enterprise Linux Version , select the major RHEL version for which you are certifying your component. Note You can't modify the version after creating the product listing. Click Create new component . For adding an existing component, from the Add Component dialog, select Existing Component . From the Available components list, search and select the components that you wish to certify and click the forward arrow. The selected components are added to the Chosen components list. Click Attach existing component . 6.1.6. Certify components for your listing To certify the components for your listing, click Start . If you have existing product components, you can view the list of Attached Components and their details: Name Certification Security Type Created Click more options to archive or remove the components Select the components for certification. After completing all the above tasks, you will see a green tick mark corresponding to all the options. The Overview tab also provides the following information: Product contacts - Provides Product marketing and Technical contact information. Click Add contacts to product to provide the contact information Click Edit to update the information. Components in product - Provides the list of the components attached to the product along with their last updated information. Click Add components to product to add new or existing components to your product. Click Edit components to update the existing component information. After publishing the product listing, you can view your Product Readiness Score and Ways to raise your score on the Overview tab. 6.2. Product Information Through this tab you can provide all the essential information about your product. The product details are published along with your product on the Red Hat Ecosystem catalog. General tab: Provide basic details of the product, including product name and description. Enter the Product Name . Optional: Upload the Product Logo according to the defined guidelines. Enter a Brief description and a Long description . Click Save . Features & Benefits tab: Provide important features of your product. Optional: Enter the Title and Description . 
Optional: To add additional features for your product, click + Add new feature . Click Save . Quick start & Config tab: Add links to any quick start guide or configuration document to help customers deploy and start using your product. Optional: Enter Quick start & configuration instructions . Click Save . Select the Hide default instructions check box if you do not want to display them. Linked resources tab: Add links to supporting documentation to help our customers use your product. The information is mapped to, and displayed in, the Documentation section on the product's catalog page. Note It is mandatory to add a minimum of three resources. Red Hat encourages you to add more resources, if available. Select the Type drop-down menu, and enter the Title and Description of the resource. Enter the Resource URL . Optional: To add additional resources for your product, click + Add new Resource . Click Save . FAQs tab: Add frequently asked questions and answers about the product's purpose, operation, installation, or other attributes. You can include common customer queries about your product and services. Enter Question and Answer . Optional: To add additional FAQs for your product, click + Add new FAQ . Click Save . Support tab: This tab lets you provide contact information for your support team. Enter the Support description , Support web site , Support phone number , and Support email address . Click Save . Contacts tab: Provide contact information for your marketing and technical teams. Enter the Marketing contact email address and Technical contact email address . Optional: To add additional contacts, click + Add another . Click Save . Legal tab: Provide the product-related license and policy information. Enter the License Agreement URL for the product and Privacy Policy URL . Click Save . SEO tab: Use this tab to improve the discoverability of your product for our mutual customers, enhancing visibility both within the Red Hat Ecosystem Catalog search and on internet search engines. Providing a higher number of search aliases (key and value pairs) will increase the discoverability of your product. Select the Product Category . Enter the Key and Value to set up Search aliases. Click Save . Optional: To add an additional key-value pair, click + Add new key-value pair . Note Add at least one Search alias for your product. Red Hat encourages you to add more aliases, if available. 6.3. Components Use this tab to add components to your product listing. Through this tab you can also view a list of attached components linked to your Product Listing. Alternatively, to attach a component to the Product Listing, you can complete the Add at least one product component option available on the Overview tab of a Container, Operator, or Helm Chart product listing. To add a new or existing product component, click Add component . For adding a new component, in the Component Name text box, enter the component name. For What kind of standalone component are you creating? select the component that you wish to certify. For example, for certifying a non-containerized component, select Non-containerized Application . For Red Hat Enterprise Linux Version , select the RHEL version on which you are certifying your non-containerized component. Note You cannot modify the RHEL version after creating the product listing. Click Create new component . For adding an existing component, from the Add Component dialog, select Existing Component . 
From the Available components list, search and select the components that you wish to certify and click the forward arrow. The selected components are added to the Chosen components list. Click Attach existing component . Note You can add the same component to multiple product listings. All attached components must be published before the product listing can be published. After attaching components, you can view the list of Attached Components and their details: Name Certification Security Type Created Click more options to archive or remove the attached components. Alternatively, to search for specific components, type the component's name in the Search by component Name text box. 6.4. Support The Red Hat Partner Acceleration Desk (PAD) is a Products and Technologies level partner help desk service that gives current and prospective partners a central location to ask non-technical questions pertaining to Red Hat offerings, partner programs, product certification, engagement process, and so on. You can also contact the Red Hat Partner Acceleration Desk for any technical questions you may have regarding the Certification. Technical help requests will be redirected to the Certification Operations team. Through the Partner Subscriptions program, Red Hat offers free, not-for-resale software subscriptions that you can use to validate your product on the target Red Hat platform. To request access to the program, follow the instructions on the Partner Subscriptions site. To request support, click Open a support case . See PAD - How to open & manage PAD cases to open a PAD ticket. To view the list of existing support cases, click View support cases . 6.5. Removing a product After creating a product listing, if you wish to remove it, go to the Overview tab and click Delete . A published product must first be unpublished before it can be deleted. Red Hat retains information related to deleted products even after you delete the product.
null
https://docs.redhat.com/en/documentation/red_hat_software_certification/2025/html/red_hat_software_certification_workflow_guide/proc_creating-a-product-listing_openshift-sw-cert-workflow-onboarding-certification-partners
Chapter 5. Monitoring and managing upgrade of the storage cluster
Chapter 5. Monitoring and managing upgrade of the storage cluster After running the ceph orch upgrade start command to upgrade the Red Hat Ceph Storage cluster, you can check the status, pause, resume, or stop the upgrade process. The health of the cluster changes to HEALTH_WARN during an upgrade. If a host in the cluster is offline, the upgrade is paused. Note Daemons are upgraded one type at a time. If a daemon cannot be upgraded, the upgrade is paused. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to all the nodes. At least two Ceph Manager nodes in the storage cluster: one active and one standby. Upgrade for the storage cluster initiated. Procedure Determine whether an upgrade is in process and the version to which the cluster is upgrading: Example Note No explicit message is displayed when the upgrade completes successfully. Run the ceph versions and ceph orch ps commands to verify the new image ID and the version of the storage cluster. Optional: Pause the upgrade process: Example Optional: Resume a paused upgrade process: Example Optional: Stop the upgrade process: Example
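A typical check-and-verify session with the Ceph CLI might look like the following sketch. The commands are the ones named in this chapter; their exact output format varies by release and is omitted here:

# Check whether an upgrade is in progress and which version the cluster is upgrading to
ceph orch upgrade status

# After the upgrade finishes, confirm that all daemons report the new version and image ID
ceph versions
ceph orch ps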
[ "ceph orch upgrade status", "ceph orch upgrade pause", "ceph orch upgrade resume", "ceph orch upgrade stop" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html/upgrade_guide/monitoring-and-managing-upgrade-of-the-storage-cluster_upgrade
Chapter 20. Configuring network settings by using RHEL system roles
Chapter 20. Configuring network settings by using RHEL system roles By using the network RHEL system role, you can automate network-related configuration and management tasks. 20.1. Configuring an Ethernet connection with a static IP address by using the network RHEL system role with an interface name To connect a Red Hat Enterprise Linux host to an Ethernet network, create a NetworkManager connection profile for the network device. By using Ansible and the network RHEL system role, you can automate this process and remotely configure connection profiles on the hosts defined in a playbook. You can use the network RHEL system role to configure an Ethernet connection with static IP addresses, gateways, and DNS settings, and assign them to a specified interface name. Typically, administrators want to reuse a playbook and not maintain individual playbooks for each host to which Ansible should assign static IP addresses. In this case, you can use variables in the playbook and maintain the settings in the inventory. As a result, you need only one playbook to dynamically assign individual settings to multiple hosts. Prerequisites You have prepared the control node and the managed nodes . You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. A physical or virtual Ethernet device exists in the server configuration. The managed nodes use NetworkManager to configure the network. Procedure Edit the ~/inventory file, and append the host-specific settings to the host entries: managed-node-01.example.com interface=enp1s0 ip_v4=192.0.2.1/24 ip_v6=2001:db8:1::1/64 gateway_v4=192.0.2.254 gateway_v6=2001:db8:1::fffe managed-node-02.example.com interface=enp1s0 ip_v4=192.0.2.2/24 ip_v6=2001:db8:1::2/64 gateway_v4=192.0.2.254 gateway_v6=2001:db8:1::fffe Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Configure the network hosts: managed-node-01.example.com,managed-node-02.example.com tasks: - name: Ethernet connection profile with static IP address settings ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: - name: "{{ interface }}" interface_name: "{{ interface }}" type: ethernet autoconnect: yes ip: address: - "{{ ip_v4 }}" - "{{ ip_v6 }}" gateway4: "{{ gateway_v4 }}" gateway6: "{{ gateway_v6 }}" dns: - 192.0.2.200 - 2001:db8:1::ffbb dns_search: - example.com state: up This playbook reads certain values dynamically for each host from the inventory file and uses static values in the playbook for settings which are the same for all hosts. For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.network/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Verification Query the Ansible facts of the managed node and verify the active network settings: Additional resources /usr/share/ansible/roles/rhel-system-roles.network/README.md file /usr/share/doc/rhel-system-roles/network/ directory 20.2. Configuring an Ethernet connection with a static IP address by using the network RHEL system role with a device path To connect a Red Hat Enterprise Linux host to an Ethernet network, create a NetworkManager connection profile for the network device. 
By using Ansible and the network RHEL system role, you can automate this process and remotely configure connection profiles on the hosts defined in a playbook. You can use the network RHEL system role to configure an Ethernet connection with static IP addresses, gateways, and DNS settings, and assign them to a device based on its path instead of its name. Prerequisites You have prepared the control node and the managed nodes . You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. A physical or virtual Ethernet device exists in the server's configuration. The managed nodes use NetworkManager to configure the network. You know the path of the device. You can display the device path by using the udevadm info /sys/class/net/ <device_name> | grep ID_PATH= command. Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Configure the network hosts: managed-node-01.example.com tasks: - name: Ethernet connection profile with static IP address settings ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: - name: example match: path: - pci-0000:00:0[1-3].0 - &!pci-0000:00:02.0 type: ethernet autoconnect: yes ip: address: - 192.0.2.1/24 - 2001:db8:1::1/64 gateway4: 192.0.2.254 gateway6: 2001:db8:1::fffe dns: - 192.0.2.200 - 2001:db8:1::ffbb dns_search: - example.com state: up The settings specified in the example playbook include the following: match Defines that a condition must be met in order to apply the settings. You can only use this variable with the path option. path Defines the persistent path of a device. You can set it as a fixed path or an expression. Its value can contain modifiers and wildcards. The example applies the settings to devices that match PCI ID 0000:00:0[1-3].0 , but not 0000:00:02.0 . For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.network/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Verification Query the Ansible facts of the managed node and verify the active network settings: Additional resources /usr/share/ansible/roles/rhel-system-roles.network/README.md file /usr/share/doc/rhel-system-roles/network/ directory 20.3. Configuring an Ethernet connection with a dynamic IP address by using the network RHEL system role with an interface name To connect a Red Hat Enterprise Linux host to an Ethernet network, create a NetworkManager connection profile for the network device. By using Ansible and the network RHEL system role, you can automate this process and remotely configure connection profiles on the hosts defined in a playbook. You can use the network RHEL system role to configure an Ethernet connection that retrieves its IP addresses, gateways, and DNS settings from a DHCP server and IPv6 stateless address autoconfiguration (SLAAC). With this role you can assign the connection profile to the specified interface name. Prerequisites You have prepared the control node and the managed nodes . You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. A physical or virtual Ethernet device exists in the servers' configuration. 
A DHCP server and SLAAC are available in the network. The managed nodes use the NetworkManager service to configure the network. Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Configure the network hosts: managed-node-01.example.com tasks: - name: Ethernet connection profile with dynamic IP address settings ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: - name: enp1s0 interface_name: enp1s0 type: ethernet autoconnect: yes ip: dhcp4: yes auto6: yes state: up The settings specified in the example playbook include the following: dhcp4: yes Enables automatic IPv4 address assignment from DHCP, PPP, or similar services. auto6: yes Enables IPv6 auto-configuration. By default, NetworkManager uses Router Advertisements. If the router announces the managed flag, NetworkManager requests an IPv6 address and prefix from a DHCPv6 server. For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.network/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Verification Query the Ansible facts of the managed node and verify that the interface received IP addresses and DNS settings: Additional resources /usr/share/ansible/roles/rhel-system-roles.network/README.md file /usr/share/doc/rhel-system-roles/network/ directory 20.4. Configuring an Ethernet connection with a dynamic IP address by using the network RHEL system role with a device path To connect a Red Hat Enterprise Linux host to an Ethernet network, create a NetworkManager connection profile for the network device. By using Ansible and the network RHEL system role, you can automate this process and remotely configure connection profiles on the hosts defined in a playbook. You can use the network RHEL system role to configure an Ethernet connection that retrieves its IP addresses, gateways, and DNS settings from a DHCP server and IPv6 stateless address autoconfiguration (SLAAC). The role can assign the connection profile to a device based on its path instead of an interface name. Prerequisites You have prepared the control node and the managed nodes . You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. A physical or virtual Ethernet device exists in the server's configuration. A DHCP server and SLAAC are available in the network. The managed hosts use NetworkManager to configure the network. You know the path of the device. You can display the device path by using the udevadm info /sys/class/net/ <device_name> | grep ID_PATH= command. Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Configure the network hosts: managed-node-01.example.com tasks: - name: Ethernet connection profile with dynamic IP address settings ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: - name: example match: path: - pci-0000:00:0[1-3].0 - &!pci-0000:00:02.0 type: ethernet autoconnect: yes ip: dhcp4: yes auto6: yes state: up The settings specified in the example playbook include the following: match: path Defines that a condition must be met in order to apply the settings. You can only use this variable with the path option. 
path: <path_and_expressions> Defines the persistent path of a device. You can set it as a fixed path or an expression. Its value can contain modifiers and wildcards. The example applies the settings to devices that match PCI ID 0000:00:0[1-3].0 , but not 0000:00:02.0 . dhcp4: yes Enables automatic IPv4 address assignment from DHCP, PPP, or similar services. auto6: yes Enables IPv6 auto-configuration. By default, NetworkManager uses Router Advertisements. If the router announces the managed flag, NetworkManager requests an IPv6 address and prefix from a DHCPv6 server. For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.network/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Verification Query the Ansible facts of the managed node and verify that the interface received IP addresses and DNS settings: Additional resources /usr/share/ansible/roles/rhel-system-roles.network/README.md file /usr/share/doc/rhel-system-roles/network/ directory 20.5. Configuring a static Ethernet connection with 802.1X network authentication by using the network RHEL system role Network Access Control (NAC) protects a network from unauthorized clients. You can specify the details that are required for the authentication in NetworkManager connection profiles to enable clients to access the network. By using Ansible and the network RHEL system role, you can automate this process and remotely configure connection profiles on the hosts defined in a playbook. You can use an Ansible playbook to copy a private key, a certificate, and the CA certificate to the client, and then use the network RHEL system role to configure a connection profile with 802.1X network authentication. Prerequisites You have prepared the control node and the managed nodes . You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. The network supports 802.1X network authentication. The managed nodes use NetworkManager. The following files required for the TLS authentication exist on the control node: The client key is stored in the /srv/data/client.key file. The client certificate is stored in the /srv/data/client.crt file. The Certificate Authority (CA) certificate is stored in the /srv/data/ca.crt file. Procedure Store your sensitive variables in an encrypted file: Create the vault: After the ansible-vault create command opens an editor, enter the sensitive data in the <key> : <value> format: pwd: <password> Save the changes, and close the editor. Ansible encrypts the data in the vault. 
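If you need to review or change the stored password later, the standard ansible-vault subcommands cover this. A brief illustrative aside; vault.yml is the file created in the previous step:

# Display the decrypted contents of the vault
ansible-vault view vault.yml

# Open the vault in an editor to update the stored value
ansible-vault edit vault.yml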
Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Configure an Ethernet connection with 802.1X authentication hosts: managed-node-01.example.com vars_files: - vault.yml tasks: - name: Copy client key for 802.1X authentication ansible.builtin.copy: src: "/srv/data/client.key" dest: "/etc/pki/tls/private/client.key" mode: 0600 - name: Copy client certificate for 802.1X authentication ansible.builtin.copy: src: "/srv/data/client.crt" dest: "/etc/pki/tls/certs/client.crt" - name: Copy CA certificate for 802.1X authentication ansible.builtin.copy: src: "/srv/data/ca.crt" dest: "/etc/pki/ca-trust/source/anchors/ca.crt" - name: Ethernet connection profile with static IP address settings and 802.1X ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: - name: enp1s0 type: ethernet autoconnect: yes ip: address: - 192.0.2.1/24 - 2001:db8:1::1/64 gateway4: 192.0.2.254 gateway6: 2001:db8:1::fffe dns: - 192.0.2.200 - 2001:db8:1::ffbb dns_search: - example.com ieee802_1x: identity: <user_name> eap: tls private_key: "/etc/pki/tls/private/client.key" private_key_password: "{{ pwd }}" client_cert: "/etc/pki/tls/certs/client.crt" ca_cert: "/etc/pki/ca-trust/source/anchors/ca.crt" domain_suffix_match: example.com state: up The settings specified in the example playbook include the following: ieee802_1x This variable contains the 802.1X-related settings. eap: tls Configures the profile to use the certificate-based TLS authentication method for the Extensible Authentication Protocol (EAP). For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.network/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Verification Access resources on the network that require network authentication. Additional resources /usr/share/ansible/roles/rhel-system-roles.network/README.md file /usr/share/doc/rhel-system-roles/network/ directory Ansible vault 20.6. Configuring a network bond by using the network RHEL system role You can combine network interfaces in a bond to provide a logical interface with higher throughput or redundancy. To configure a bond, create a NetworkManager connection profile. By using Ansible and the network RHEL system role, you can automate this process and remotely configure connection profiles on the hosts defined in a playbook. You can use the network RHEL system role to configure a network bond and, if a connection profile for the bond's parent device does not exist, the role can create it as well. Prerequisites You have prepared the control node and the managed nodes . You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. Two or more physical or virtual network devices are installed on the server. 
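You can confirm this prerequisite before you run the role, for example with an ad-hoc Ansible command. This is an illustrative check rather than part of the official procedure; nmcli is available because the managed nodes use NetworkManager, and ~/inventory is the inventory file used earlier in this chapter:

# List the network devices and their state on the managed node
ansible -i ~/inventory managed-node-01.example.com -m ansible.builtin.command -a 'nmcli device status'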
Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Configure the network hosts: managed-node-01.example.com tasks: - name: Bond connection profile with two Ethernet ports ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: # Bond profile - name: bond0 type: bond interface_name: bond0 ip: dhcp4: yes auto6: yes bond: mode: active-backup state: up # Port profile for the 1st Ethernet device - name: bond0-port1 interface_name: enp7s0 type: ethernet controller: bond0 state: up # Port profile for the 2nd Ethernet device - name: bond0-port2 interface_name: enp8s0 type: ethernet controller: bond0 state: up The settings specified in the example playbook include the following: type: <profile_type> Sets the type of the profile to create. The example playbook creates three connection profiles: One for the bond and two for the Ethernet devices. dhcp4: yes Enables automatic IPv4 address assignment from DHCP, PPP, or similar services. auto6: yes Enables IPv6 auto-configuration. By default, NetworkManager uses Router Advertisements. If the router announces the managed flag, NetworkManager requests an IPv6 address and prefix from a DHCPv6 server. mode: <bond_mode> Sets the bonding mode. Possible values are: balance-rr (default) active-backup balance-xor broadcast 802.3ad balance-tlb balance-alb . Depending on the mode you set, you need to set additional variables in the playbook. For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.network/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Verification Temporarily remove the network cable from one of the network devices and check if the other device in the bond is handling the traffic. Note that there is no method to properly test link failure events using software utilities. Tools that deactivate connections, such as nmcli , show only the bonding driver's ability to handle port configuration changes and not actual link failure events. Additional resources /usr/share/ansible/roles/rhel-system-roles.network/README.md file /usr/share/doc/rhel-system-roles/network/ directory 20.7. Configuring VLAN tagging by using the network RHEL system role If your network uses Virtual Local Area Networks (VLANs) to separate network traffic into logical networks, create a NetworkManager connection profile to configure VLAN tagging. By using Ansible and the network RHEL system role, you can automate this process and remotely configure connection profiles on the hosts defined in a playbook. You can use the network RHEL system role to configure VLAN tagging and, if a connection profile for the VLAN's parent device does not exist, the role can create it as well. Note If the VLAN device requires an IP address, default gateway, and DNS settings, configure them on the VLAN device and not on the parent device. Prerequisites You have prepared the control node and the managed nodes . You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. 
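After the role has run, you can inspect the resulting VLAN interface directly on the managed node. An illustrative check using standard iproute2 tooling; enp1s0.10 is the profile name used in the procedure that follows:

# Show the VLAN details (-d) of the tagged interface
ip -d link show enp1s0.10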
Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Configure the network hosts: managed-node-01.example.com tasks: - name: VLAN connection profile with Ethernet port ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: # Ethernet profile - name: enp1s0 type: ethernet interface_name: enp1s0 autoconnect: yes state: up ip: dhcp4: no auto6: no # VLAN profile - name: enp1s0.10 type: vlan vlan: id: 10 ip: dhcp4: yes auto6: yes parent: enp1s0 state: up The settings specified in the example playbook include the following: type: <profile_type> Sets the type of the profile to create. The example playbook creates two connection profiles: One for the parent Ethernet device and one for the VLAN device. dhcp4: <value> If set to yes , automatic IPv4 address assignment from DHCP, PPP, or similar services is enabled. Disable the IP address configuration on the parent device. auto6: <value> If set to yes , IPv6 auto-configuration is enabled. In this case, by default, NetworkManager uses Router Advertisements and, if the router announces the managed flag, NetworkManager requests an IPv6 address and prefix from a DHCPv6 server. Disable the IP address configuration on the parent device. parent: <parent_device> Sets the parent device of the VLAN connection profile. In the example, the parent is the Ethernet interface. For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.network/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Verification Verify the VLAN settings: Additional resources /usr/share/ansible/roles/rhel-system-roles.network/README.md file /usr/share/doc/rhel-system-roles/network/ directory 20.8. Configuring a network bridge by using the network RHEL system role You can connect multiple networks on layer 2 of the Open Systems Interconnection (OSI) model by creating a network bridge. To configure a bridge, create a connection profile in NetworkManager. By using Ansible and the network RHEL system role, you can automate this process and remotely configure connection profiles on the hosts defined in a playbook. You can use the network RHEL system role to configure a bridge and, if a connection profile for the bridge's parent device does not exist, the role can create it as well. Note If you want to assign IP addresses, gateways, and DNS settings to a bridge, configure them on the bridge and not on its ports. Prerequisites You have prepared the control node and the managed nodes . You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. Two or more physical or virtual network devices are installed on the server. 
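When you verify the configuration later, the iproute2 master filter is a convenient way to list the ports of the bridge on the managed node. An illustrative sketch; bridge0 is the profile name used in the procedure that follows:

# List the devices that are enslaved to the bridge
ip link show master bridge0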
Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Configure the network hosts: managed-node-01.example.com tasks: - name: Bridge connection profile with two Ethernet ports ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: # Bridge profile - name: bridge0 type: bridge interface_name: bridge0 ip: dhcp4: yes auto6: yes state: up # Port profile for the 1st Ethernet device - name: bridge0-port1 interface_name: enp7s0 type: ethernet controller: bridge0 port_type: bridge state: up # Port profile for the 2nd Ethernet device - name: bridge0-port2 interface_name: enp8s0 type: ethernet controller: bridge0 port_type: bridge state: up The settings specified in the example playbook include the following: type: <profile_type> Sets the type of the profile to create. The example playbook creates three connection profiles: One for the bridge and two for the Ethernet devices. dhcp4: yes Enables automatic IPv4 address assignment from DHCP, PPP, or similar services. auto6: yes Enables IPv6 auto-configuration. By default, NetworkManager uses Router Advertisements. If the router announces the managed flag, NetworkManager requests an IPv6 address and prefix from a DHCPv6 server. For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.network/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Verification Display the link status of Ethernet devices that are ports of a specific bridge: Display the status of Ethernet devices that are ports of any bridge device: Additional resources /usr/share/ansible/roles/rhel-system-roles.network/README.md file /usr/share/doc/rhel-system-roles/network/ directory 20.9. Setting the default gateway on an existing connection by using the network RHEL system role A host forwards a network packet to its default gateway if the packet's destination can neither be reached through the directly-connected networks nor through any of the routes configured on the host. To configure the default gateway of a host, set it in the NetworkManager connection profile of the interface that is connected to the same network as the default gateway. By using Ansible and the network RHEL system role, you can automate this process and remotely configure connection profiles on the hosts defined in a playbook. In most situations, administrators set the default gateway when they create a connection. However, you can also set or update the default gateway setting on a previously-created connection. Warning You cannot use the network RHEL system role to update only specific values in an existing connection profile. The role ensures that a connection profile exactly matches the settings in a playbook. If a connection profile with the same name already exists, the role applies the settings from the playbook and resets all other settings in the profile to their defaults. To prevent resetting values, always specify the whole configuration of the network connection profile in the playbook, including the settings that you do not want to change. Prerequisites You have prepared the control node and the managed nodes . You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. 
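Before you change the default gateway, you can record the gateway that a managed node currently uses. An illustrative check with standard iproute2 tooling:

# Display the current IPv4 and IPv6 default routes
ip route show default
ip -6 route show default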
Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Configure the network hosts: managed-node-01.example.com tasks: - name: Ethernet connection profile with static IP address settings ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: - name: enp1s0 type: ethernet autoconnect: yes ip: address: - 198.51.100.20/24 - 2001:db8:1::1/64 gateway4: 198.51.100.254 gateway6: 2001:db8:1::fffe dns: - 198.51.100.200 - 2001:db8:1::ffbb dns_search: - example.com state: up For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.network/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Verification Query the Ansible facts of the managed node and verify the active network settings: Additional resources /usr/share/ansible/roles/rhel-system-roles.network/README.md file /usr/share/doc/rhel-system-roles/network/ directory 20.10. Configuring a static route by using the network RHEL system role A static route ensures that you can send traffic to a destination that cannot be reached through the default gateway. You configure static routes in the NetworkManager connection profile of the interface that is connected to the same network as the next hop. By using Ansible and the network RHEL system role, you can automate this process and remotely configure connection profiles on the hosts defined in a playbook. Warning You cannot use the network RHEL system role to update only specific values in an existing connection profile. The role ensures that a connection profile exactly matches the settings in a playbook. If a connection profile with the same name already exists, the role applies the settings from the playbook and resets all other settings in the profile to their defaults. To prevent resetting values, always specify the whole configuration of the network connection profile in the playbook, including the settings that you do not want to change. Prerequisites You have prepared the control node and the managed nodes . You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Configure the network hosts: managed-node-01.example.com tasks: - name: Ethernet connection profile with static IP address settings ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: - name: enp7s0 type: ethernet autoconnect: yes ip: address: - 192.0.2.1/24 - 2001:db8:1::1/64 gateway4: 192.0.2.254 gateway6: 2001:db8:1::fffe dns: - 192.0.2.200 - 2001:db8:1::ffbb dns_search: - example.com route: - network: 198.51.100.0 prefix: 24 gateway: 192.0.2.10 - network: 2001:db8:2:: prefix: 64 gateway: 2001:db8:1::10 state: up For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.network/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. 
Run the playbook: Verification Display the IPv4 routes: Display the IPv6 routes: Additional resources /usr/share/ansible/roles/rhel-system-roles.network/README.md file /usr/share/doc/rhel-system-roles/network/ directory 20.11. Routing traffic from a specific subnet to a different default gateway by using the network RHEL system role You can use policy-based routing to configure a different default gateway for traffic from certain subnets. For example, you can configure RHEL as a router that, by default, routes all traffic to internet provider A using the default route. However, traffic received from the internal workstations subnet is routed to provider B. By using Ansible and the network RHEL system role, you can automate this process and remotely configure connection profiles on the hosts defined in a playbook. You can use the network RHEL system role to configure the connection profiles, including routing tables and rules. This procedure assumes the network topology that is described in the prerequisites. Prerequisites You have prepared the control node and the managed nodes . You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. The managed nodes use NetworkManager and the firewalld service. The managed nodes that you want to configure have four network interfaces: The enp7s0 interface is connected to the network of provider A. The gateway IP in the provider's network is 198.51.100.2 , and the network uses a /30 network mask. The enp1s0 interface is connected to the network of provider B. The gateway IP in the provider's network is 192.0.2.2 , and the network uses a /30 network mask. The enp8s0 interface is connected to the 10.0.0.0/24 subnet with internal workstations. The enp9s0 interface is connected to the 203.0.113.0/24 subnet with the company's servers. Hosts in the internal workstations subnet use 10.0.0.1 as the default gateway. In the procedure, you assign this IP address to the enp8s0 network interface of the router. Hosts in the server subnet use 203.0.113.1 as the default gateway. In the procedure, you assign this IP address to the enp9s0 network interface of the router. Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Configuring policy-based routing hosts: managed-node-01.example.com tasks: - name: Routing traffic from a specific subnet to a different default gateway ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: - name: Provider-A interface_name: enp7s0 type: ethernet autoconnect: True ip: address: - 198.51.100.1/30 gateway4: 198.51.100.2 dns: - 198.51.100.200 state: up zone: external - name: Provider-B interface_name: enp1s0 type: ethernet autoconnect: True ip: address: - 192.0.2.1/30 route: - network: 0.0.0.0 prefix: 0 gateway: 192.0.2.2 table: 5000 state: up zone: external - name: Internal-Workstations interface_name: enp8s0 type: ethernet autoconnect: True ip: address: - 10.0.0.1/24 route: - network: 10.0.0.0 prefix: 24 table: 5000 routing_rule: - priority: 5 from: 10.0.0.0/24 table: 5000 state: up zone: trusted - name: Servers interface_name: enp9s0 type: ethernet autoconnect: True ip: address: - 203.0.113.1/24 state: up zone: trusted The settings specified in the example playbook include the following: table: <value> Assigns the route from the same list entry as the table variable to the specified routing table. 
routing_rule: <list> Defines the priority of the specified routing rule and from a connection profile to which routing table the rule is assigned. zone: <zone_name> Assigns the network interface from a connection profile to the specified firewalld zone. For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.network/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Verification On a RHEL host in the internal workstation subnet: Install the traceroute package: Use the traceroute utility to display the route to a host on the internet: The output of the command displays that the router sends packets over 192.0.2.1 , which is the network of provider B. On a RHEL host in the server subnet: Install the traceroute package: Use the traceroute utility to display the route to a host on the internet: The output of the command displays that the router sends packets over 198.51.100.2 , which is the network of provider A. On the RHEL router that you configured using the RHEL system role: Display the rule list: By default, RHEL contains rules for the tables local , main , and default . Display the routes in table 5000 : Display the interfaces and firewall zones: Verify that the external zone has masquerading enabled: Additional resources /usr/share/ansible/roles/rhel-system-roles.network/README.md file /usr/share/doc/rhel-system-roles/network/ directory 20.12. Configuring an ethtool offload feature by using the network RHEL system role Network interface controllers can use the TCP offload engine (TOE) to offload processing certain operations to the network controller. This improves the network throughput. You configure offload features in the connection profile of the network interface. By using Ansible and the network RHEL system role, you can automate this process and remotely configure connection profiles on the hosts defined in a playbook. Warning You cannot use the network RHEL system role to update only specific values in an existing connection profile. The role ensures that a connection profile exactly matches the settings in a playbook. If a connection profile with the same name already exists, the role applies the settings from the playbook and resets all other settings in the profile to their defaults. To prevent resetting values, always specify the whole configuration of the network connection profile in the playbook, including the settings that you do not want to change. Prerequisites You have prepared the control node and the managed nodes . You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Configure the network hosts: managed-node-01.example.com tasks: - name: Ethernet connection profile with dynamic IP address settings and offload features ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: - name: enp1s0 type: ethernet autoconnect: yes ip: dhcp4: yes auto6: yes ethtool: features: gro: no gso: yes tx_sctp_segmentation: no state: up The settings specified in the example playbook include the following: gro: no Disables Generic receive offload (GRO). gso: yes Enables Generic segmentation offload (GSO). 
tx_sctp_segmentation: no Disables TX Stream Control Transmission Protocol (SCTP) segmentation. For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.network/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Verification Query the Ansible facts of the managed node and verify the offload settings: Additional resources /usr/share/ansible/roles/rhel-system-roles.network/README.md file /usr/share/doc/rhel-system-roles/network/ directory 20.13. Configuring ethtool coalesce settings by using the network RHEL system role By using interrupt coalescing, the system collects network packets and generates a single interrupt for multiple packets. This increases the amount of data sent to the kernel with one hardware interrupt, which reduces the interrupt load, and maximizes the throughput. You configure coalesce settings in the connection profile of the network interface. By using Ansible and the network RHEL system role, you can automate this process and remotely configure connection profiles on the hosts defined in a playbook. Warning You cannot use the network RHEL system role to update only specific values in an existing connection profile. The role ensures that a connection profile exactly matches the settings in a playbook. If a connection profile with the same name already exists, the role applies the settings from the playbook and resets all other settings in the profile to their defaults. To prevent resetting values, always specify the whole configuration of the network connection profile in the playbook, including the settings that you do not want to change. Prerequisites You have prepared the control node and the managed nodes . You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Configure the network hosts: managed-node-01.example.com tasks: - name: Ethernet connection profile with dynamic IP address settings and coalesce settings ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: - name: enp1s0 type: ethernet autoconnect: yes ip: dhcp4: yes auto6: yes ethtool: coalesce: rx_frames: 128 tx_frames: 128 state: up The settings specified in the example playbook include the following: rx_frames: <value> Sets the number of RX frames. tx_frames: <value> Sets the number of TX frames. For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.network/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Verification Display the current coalesce settings of the network device: Additional resources /usr/share/ansible/roles/rhel-system-roles.network/README.md file /usr/share/doc/rhel-system-roles/network/ directory 20.14. Increasing the ring buffer size to reduce a high packet drop rate by using the network RHEL system role Increase the size of an Ethernet device's ring buffers if the packet drop rate causes applications to report a loss of data, timeouts, or other issues. Ring buffers are circular buffers where an overflow overwrites existing data. 
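If you do not yet know the maximum ring buffer sizes that the device supports (a prerequisite noted later in this section), you can query them on the managed node first. A brief illustrative check with the standard ethtool utility, assuming the device is named enp1s0:

# Show the current and maximum supported ring buffer sizes
ethtool -g enp1s0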
The network card assigns a transmit (TX) and receive (RX) ring buffer. Receive ring buffers are shared between the device driver and the network interface controller (NIC). Data can move from the NIC to the kernel through either hardware interrupts or software interrupts, also called SoftIRQs. The kernel uses the RX ring buffer to store incoming packets until the device driver can process them. The device driver drains the RX ring, typically by using SoftIRQs, which puts the incoming packets into a kernel data structure called an sk_buff or skb to begin their journey through the kernel and up to the application that owns the relevant socket. The kernel uses the TX ring buffer to hold outgoing packets which should be sent to the network. These ring buffers reside at the bottom of the stack and are a crucial point at which packet drop can occur, which in turn will adversely affect network performance. You configure ring buffer settings in the NetworkManager connection profiles. By using Ansible and the network RHEL system role, you can automate this process and remotely configure connection profiles on the hosts defined in a playbook. Warning You cannot use the network RHEL system role to update only specific values in an existing connection profile. The role ensures that a connection profile exactly matches the settings in a playbook. If a connection profile with the same name already exists, the role applies the settings from the playbook and resets all other settings in the profile to their defaults. To prevent resetting values, always specify the whole configuration of the network connection profile in the playbook, including the settings that you do not want to change. Prerequisites You have prepared the control node and the managed nodes . You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. You know the maximum ring buffer sizes that the device supports. Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Configure the network hosts: managed-node-01.example.com tasks: - name: Ethernet connection profile with dynamic IP address setting and increased ring buffer sizes ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: - name: enp1s0 type: ethernet autoconnect: yes ip: dhcp4: yes auto6: yes ethtool: ring: rx: 4096 tx: 4096 state: up The settings specified in the example playbook include the following: rx: <value> Sets the maximum number of received ring buffer entries. tx: <value> Sets the maximum number of transmitted ring buffer entries. For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.network/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Verification Display the maximum ring buffer sizes: Additional resources /usr/share/ansible/roles/rhel-system-roles.network/README.md file /usr/share/doc/rhel-system-roles/network/ directory 20.15. Configuring an IPoIB connection by using the network RHEL system role You can use IP over InfiniBand (IPoIB) to send IP packets over an InfiniBand interface. To configure IPoIB, create a NetworkManager connection profile. 
By using Ansible and the network RHEL system role, you can automate this process and remotely configure connection profiles on the hosts defined in a playbook. You can use the network RHEL system role to configure IPoIB and, if a connection profile for the InfiniBand's parent device does not exist, the role can create it as well. Prerequisites You have prepared the control node and the managed nodes . You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. An InfiniBand device named mlx4_ib0 is installed in the managed nodes. The managed nodes use NetworkManager to configure the network. Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Configure the network hosts: managed-node-01.example.com tasks: - name: IPoIB connection profile with static IP address settings ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: # InfiniBand connection mlx4_ib0 - name: mlx4_ib0 interface_name: mlx4_ib0 type: infiniband # IPoIB device mlx4_ib0.8002 on top of mlx4_ib0 - name: mlx4_ib0.8002 type: infiniband autoconnect: yes infiniband: p_key: 0x8002 transport_mode: datagram parent: mlx4_ib0 ip: address: - 192.0.2.1/24 - 2001:db8:1::1/64 state: up The settings specified in the example playbook include the following: type: <profile_type> Sets the type of the profile to create. The example playbook creates two connection profiles: One for the InfiniBand connection and one for the IPoIB device. parent: <parent_device> Sets the parent device of the IPoIB connection profile. p_key: <value> Sets the InfiniBand partition key. If you set this variable, do not set interface_name on the IPoIB device. transport_mode: <mode> Sets the IPoIB connection operation mode. You can set this variable to datagram (default) or connected . For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.network/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Verification Display the IP settings of the mlx4_ib0.8002 device: Display the partition key (P_Key) of the mlx4_ib0.8002 device: Display the mode of the mlx4_ib0.8002 device: Additional resources /usr/share/ansible/roles/rhel-system-roles.network/README.md file /usr/share/doc/rhel-system-roles/network/ directory 20.16. Network states for the network RHEL system role The network RHEL system role supports state configurations in playbooks to configure the devices. For this, use the network_state variable followed by the state configurations. Benefits of using the network_state variable in a playbook: Using the declarative method with the state configurations, you can configure interfaces, and NetworkManager creates a profile for these interfaces in the background. With the network_state variable, you can specify only the options that you want to change, and all the other options remain unchanged. However, with the network_connections variable, you must specify all settings to change the network connection profile. Important You can set only Nmstate YAML instructions in network_state . These instructions differ from the variables you can set in network_connections . 
For example, to create an Ethernet connection with dynamic IP address settings, use the following vars block in your playbook:

Playbook with state configurations:

vars:
  network_state:
    interfaces:
      - name: enp7s0
        type: ethernet
        state: up
        ipv4:
          enabled: true
          auto-dns: true
          auto-gateway: true
          auto-routes: true
          dhcp: true
        ipv6:
          enabled: true
          auto-dns: true
          auto-gateway: true
          auto-routes: true
          autoconf: true
          dhcp: true

Regular playbook:

vars:
  network_connections:
    - name: enp7s0
      interface_name: enp7s0
      type: ethernet
      autoconnect: yes
      ip:
        dhcp4: yes
        auto6: yes
      state: up

For example, to change only the connection state of the dynamic IP address profile that you created above, use the following vars block in your playbook:

Playbook with state configurations:

vars:
  network_state:
    interfaces:
      - name: enp7s0
        type: ethernet
        state: down

Regular playbook:

vars:
  network_connections:
    - name: enp7s0
      interface_name: enp7s0
      type: ethernet
      autoconnect: yes
      ip:
        dhcp4: yes
        auto6: yes
      state: down

Additional resources /usr/share/ansible/roles/rhel-system-roles.network/README.md file /usr/share/doc/rhel-system-roles/network/ directory
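As a final worked example, the first network_state vars block above fits into a complete playbook in the same way as network_connections does elsewhere in this chapter. A minimal sketch, assuming the same placeholder host and interface names used throughout:

---
- name: Configure the network
  hosts: managed-node-01.example.com
  tasks:
    - name: Ethernet connection profile with dynamic IP address settings
      ansible.builtin.include_role:
        name: rhel-system-roles.network
      vars:
        # Declarative Nmstate-style configuration; options that are not
        # listed here keep their current values on the managed node
        network_state:
          interfaces:
            - name: enp7s0
              type: ethernet
              state: up
              ipv4:
                enabled: true
                dhcp: true
              ipv6:
                enabled: true
                autoconf: true
                dhcp: true

You can validate and run this playbook with the same ansible-playbook commands shown in the earlier sections of this chapter.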
[ "managed-node-01.example.com interface=enp1s0 ip_v4=192.0.2.1/24 ip_v6=2001:db8:1::1/64 gateway_v4=192.0.2.254 gateway_v6=2001:db8:1::fffe managed-node-02.example.com interface=enp1s0 ip_v4=192.0.2.2/24 ip_v6=2001:db8:1::2/64 gateway_v4=192.0.2.254 gateway_v6=2001:db8:1::fffe", "--- - name: Configure the network hosts: managed-node-01.example.com,managed-node-02.example.com tasks: - name: Ethernet connection profile with static IP address settings ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: - name: \"{{ interface }}\" interface_name: \"{{ interface }}\" type: ethernet autoconnect: yes ip: address: - \"{{ ip_v4 }}\" - \"{{ ip_v6 }}\" gateway4: \"{{ gateway_v4 }}\" gateway6: \"{{ gateway_v6 }}\" dns: - 192.0.2.200 - 2001:db8:1::ffbb dns_search: - example.com state: up", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "ansible managed-node-01.example.com -m ansible.builtin.setup \"ansible_default_ipv4\": { \"address\": \"192.0.2.1\", \"alias\": \"enp1s0\", \"broadcast\": \"192.0.2.255\", \"gateway\": \"192.0.2.254\", \"interface\": \"enp1s0\", \"macaddress\": \"52:54:00:17:b8:b6\", \"mtu\": 1500, \"netmask\": \"255.255.255.0\", \"network\": \"192.0.2.0\", \"prefix\": \"24\", \"type\": \"ether\" }, \"ansible_default_ipv6\": { \"address\": \"2001:db8:1::1\", \"gateway\": \"2001:db8:1::fffe\", \"interface\": \"enp1s0\", \"macaddress\": \"52:54:00:17:b8:b6\", \"mtu\": 1500, \"prefix\": \"64\", \"scope\": \"global\", \"type\": \"ether\" }, \"ansible_dns\": { \"nameservers\": [ \"192.0.2.1\", \"2001:db8:1::ffbb\" ], \"search\": [ \"example.com\" ] },", "--- - name: Configure the network hosts: managed-node-01.example.com tasks: - name: Ethernet connection profile with static IP address settings ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: - name: example match: path: - pci-0000:00:0[1-3].0 - &!pci-0000:00:02.0 type: ethernet autoconnect: yes ip: address: - 192.0.2.1/24 - 2001:db8:1::1/64 gateway4: 192.0.2.254 gateway6: 2001:db8:1::fffe dns: - 192.0.2.200 - 2001:db8:1::ffbb dns_search: - example.com state: up", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "ansible managed-node-01.example.com -m ansible.builtin.setup \"ansible_default_ipv4\": { \"address\": \"192.0.2.1\", \"alias\": \"enp1s0\", \"broadcast\": \"192.0.2.255\", \"gateway\": \"192.0.2.254\", \"interface\": \"enp1s0\", \"macaddress\": \"52:54:00:17:b8:b6\", \"mtu\": 1500, \"netmask\": \"255.255.255.0\", \"network\": \"192.0.2.0\", \"prefix\": \"24\", \"type\": \"ether\" }, \"ansible_default_ipv6\": { \"address\": \"2001:db8:1::1\", \"gateway\": \"2001:db8:1::fffe\", \"interface\": \"enp1s0\", \"macaddress\": \"52:54:00:17:b8:b6\", \"mtu\": 1500, \"prefix\": \"64\", \"scope\": \"global\", \"type\": \"ether\" }, \"ansible_dns\": { \"nameservers\": [ \"192.0.2.1\", \"2001:db8:1::ffbb\" ], \"search\": [ \"example.com\" ] },", "--- - name: Configure the network hosts: managed-node-01.example.com tasks: - name: Ethernet connection profile with dynamic IP address settings ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: - name: enp1s0 interface_name: enp1s0 type: ethernet autoconnect: yes ip: dhcp4: yes auto6: yes state: up", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "ansible managed-node-01.example.com -m ansible.builtin.setup \"ansible_default_ipv4\": { \"address\": \"192.0.2.1\", \"alias\": 
\"enp1s0\", \"broadcast\": \"192.0.2.255\", \"gateway\": \"192.0.2.254\", \"interface\": \"enp1s0\", \"macaddress\": \"52:54:00:17:b8:b6\", \"mtu\": 1500, \"netmask\": \"255.255.255.0\", \"network\": \"192.0.2.0\", \"prefix\": \"24\", \"type\": \"ether\" }, \"ansible_default_ipv6\": { \"address\": \"2001:db8:1::1\", \"gateway\": \"2001:db8:1::fffe\", \"interface\": \"enp1s0\", \"macaddress\": \"52:54:00:17:b8:b6\", \"mtu\": 1500, \"prefix\": \"64\", \"scope\": \"global\", \"type\": \"ether\" }, \"ansible_dns\": { \"nameservers\": [ \"192.0.2.1\", \"2001:db8:1::ffbb\" ], \"search\": [ \"example.com\" ] },", "--- - name: Configure the network hosts: managed-node-01.example.com tasks: - name: Ethernet connection profile with dynamic IP address settings ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: - name: example match: path: - pci-0000:00:0[1-3].0 - &!pci-0000:00:02.0 type: ethernet autoconnect: yes ip: dhcp4: yes auto6: yes state: up", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "ansible managed-node-01.example.com -m ansible.builtin.setup \"ansible_default_ipv4\": { \"address\": \"192.0.2.1\", \"alias\": \"enp1s0\", \"broadcast\": \"192.0.2.255\", \"gateway\": \"192.0.2.254\", \"interface\": \"enp1s0\", \"macaddress\": \"52:54:00:17:b8:b6\", \"mtu\": 1500, \"netmask\": \"255.255.255.0\", \"network\": \"192.0.2.0\", \"prefix\": \"24\", \"type\": \"ether\" }, \"ansible_default_ipv6\": { \"address\": \"2001:db8:1::1\", \"gateway\": \"2001:db8:1::fffe\", \"interface\": \"enp1s0\", \"macaddress\": \"52:54:00:17:b8:b6\", \"mtu\": 1500, \"prefix\": \"64\", \"scope\": \"global\", \"type\": \"ether\" }, \"ansible_dns\": { \"nameservers\": [ \"192.0.2.1\", \"2001:db8:1::ffbb\" ], \"search\": [ \"example.com\" ] },", "ansible-vault create vault.yml New Vault password: <vault_password> Confirm New Vault password: <vault_password>", "pwd: <password>", "--- - name: Configure an Ethernet connection with 802.1X authentication hosts: managed-node-01.example.com vars_files: - vault.yml tasks: - name: Copy client key for 802.1X authentication ansible.builtin.copy: src: \"/srv/data/client.key\" dest: \"/etc/pki/tls/private/client.key\" mode: 0600 - name: Copy client certificate for 802.1X authentication ansible.builtin.copy: src: \"/srv/data/client.crt\" dest: \"/etc/pki/tls/certs/client.crt\" - name: Copy CA certificate for 802.1X authentication ansible.builtin.copy: src: \"/srv/data/ca.crt\" dest: \"/etc/pki/ca-trust/source/anchors/ca.crt\" - name: Ethernet connection profile with static IP address settings and 802.1X ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: - name: enp1s0 type: ethernet autoconnect: yes ip: address: - 192.0.2.1/24 - 2001:db8:1::1/64 gateway4: 192.0.2.254 gateway6: 2001:db8:1::fffe dns: - 192.0.2.200 - 2001:db8:1::ffbb dns_search: - example.com ieee802_1x: identity: <user_name> eap: tls private_key: \"/etc/pki/tls/private/client.key\" private_key_password: \"{{ pwd }}\" client_cert: \"/etc/pki/tls/certs/client.crt\" ca_cert: \"/etc/pki/ca-trust/source/anchors/ca.crt\" domain_suffix_match: example.com state: up", "ansible-playbook --ask-vault-pass --syntax-check ~/playbook.yml", "ansible-playbook --ask-vault-pass ~/playbook.yml", "--- - name: Configure the network hosts: managed-node-01.example.com tasks: - name: Bond connection profile with two Ethernet ports ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: # Bond profile 
- name: bond0 type: bond interface_name: bond0 ip: dhcp4: yes auto6: yes bond: mode: active-backup state: up # Port profile for the 1st Ethernet device - name: bond0-port1 interface_name: enp7s0 type: ethernet controller: bond0 state: up # Port profile for the 2nd Ethernet device - name: bond0-port2 interface_name: enp8s0 type: ethernet controller: bond0 state: up", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "--- - name: Configure the network hosts: managed-node-01.example.com tasks: - name: VLAN connection profile with Ethernet port ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: # Ethernet profile - name: enp1s0 type: ethernet interface_name: enp1s0 autoconnect: yes state: up ip: dhcp4: no auto6: no # VLAN profile - name: enp1s0.10 type: vlan vlan: id: 10 ip: dhcp4: yes auto6: yes parent: enp1s0 state: up", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "ansible managed-node-01.example.com -m command -a 'ip -d addr show enp1s0.10' managed-node-01.example.com | CHANGED | rc=0 >> 4: vlan10@enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000 link/ether 52:54:00:72:2f:6e brd ff:ff:ff:ff:ff:ff promiscuity 0 vlan protocol 802.1Q id 10 <REORDER_HDR> numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535", "--- - name: Configure the network hosts: managed-node-01.example.com tasks: - name: Bridge connection profile with two Ethernet ports ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: # Bridge profile - name: bridge0 type: bridge interface_name: bridge0 ip: dhcp4: yes auto6: yes state: up # Port profile for the 1st Ethernet device - name: bridge0-port1 interface_name: enp7s0 type: ethernet controller: bridge0 port_type: bridge state: up # Port profile for the 2nd Ethernet device - name: bridge0-port2 interface_name: enp8s0 type: ethernet controller: bridge0 port_type: bridge state: up", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "ansible managed-node-01.example.com -m command -a 'ip link show master bridge0' managed-node-01.example.com | CHANGED | rc=0 >> 3: enp7s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bridge0 state UP mode DEFAULT group default qlen 1000 link/ether 52:54:00:62:61:0e brd ff:ff:ff:ff:ff:ff 4: enp8s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bridge0 state UP mode DEFAULT group default qlen 1000 link/ether 52:54:00:9e:f1:ce brd ff:ff:ff:ff:ff:ff", "ansible managed-node-01.example.com -m command -a 'bridge link show' managed-node-01.example.com | CHANGED | rc=0 >> 3: enp7s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master bridge0 state forwarding priority 32 cost 100 4: enp8s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master bridge0 state listening priority 32 cost 100", "--- - name: Configure the network hosts: managed-node-01.example.com tasks: - name: Ethernet connection profile with static IP address settings ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: - name: enp1s0 type: ethernet autoconnect: yes ip: address: - 198.51.100.20/24 - 2001:db8:1::1/64 gateway4: 198.51.100.254 gateway6: 2001:db8:1::fffe dns: - 198.51.100.200 - 2001:db8:1::ffbb dns_search: - example.com state: up", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "ansible managed-node-01.example.com -m ansible.builtin.setup 
\"ansible_default_ipv4\": { \"gateway\": \"198.51.100.254\", \"interface\": \"enp1s0\", }, \"ansible_default_ipv6\": { \"gateway\": \"2001:db8:1::fffe\", \"interface\": \"enp1s0\", }", "--- - name: Configure the network hosts: managed-node-01.example.com tasks: - name: Ethernet connection profile with static IP address settings ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: - name: enp7s0 type: ethernet autoconnect: yes ip: address: - 192.0.2.1/24 - 2001:db8:1::1/64 gateway4: 192.0.2.254 gateway6: 2001:db8:1::fffe dns: - 192.0.2.200 - 2001:db8:1::ffbb dns_search: - example.com route: - network: 198.51.100.0 prefix: 24 gateway: 192.0.2.10 - network: 2001:db8:2:: prefix: 64 gateway: 2001:db8:1::10 state: up", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "ansible managed-node-01.example.com -m command -a 'ip -4 route' managed-node-01.example.com | CHANGED | rc=0 >> 198.51.100.0/24 via 192.0.2.10 dev enp7s0", "ansible managed-node-01.example.com -m command -a 'ip -6 route' managed-node-01.example.com | CHANGED | rc=0 >> 2001:db8:2::/64 via 2001:db8:1::10 dev enp7s0 metric 1024 pref medium", "--- - name: Configuring policy-based routing hosts: managed-node-01.example.com tasks: - name: Routing traffic from a specific subnet to a different default gateway ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: - name: Provider-A interface_name: enp7s0 type: ethernet autoconnect: True ip: address: - 198.51.100.1/30 gateway4: 198.51.100.2 dns: - 198.51.100.200 state: up zone: external - name: Provider-B interface_name: enp1s0 type: ethernet autoconnect: True ip: address: - 192.0.2.1/30 route: - network: 0.0.0.0 prefix: 0 gateway: 192.0.2.2 table: 5000 state: up zone: external - name: Internal-Workstations interface_name: enp8s0 type: ethernet autoconnect: True ip: address: - 10.0.0.1/24 route: - network: 10.0.0.0 prefix: 24 table: 5000 routing_rule: - priority: 5 from: 10.0.0.0/24 table: 5000 state: up zone: trusted - name: Servers interface_name: enp9s0 type: ethernet autoconnect: True ip: address: - 203.0.113.1/24 state: up zone: trusted", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "yum install traceroute", "traceroute redhat.com traceroute to redhat.com (209.132.183.105), 30 hops max, 60 byte packets 1 10.0.0.1 (10.0.0.1) 0.337 ms 0.260 ms 0.223 ms 2 192.0.2.1 (192.0.2.1) 0.884 ms 1.066 ms 1.248 ms", "yum install traceroute", "traceroute redhat.com traceroute to redhat.com (209.132.183.105), 30 hops max, 60 byte packets 1 203.0.113.1 (203.0.113.1) 2.179 ms 2.073 ms 1.944 ms 2 198.51.100.2 (198.51.100.2) 1.868 ms 1.798 ms 1.549 ms", "ip rule list 0: from all lookup local 5 : from 10.0.0.0/24 lookup 5000 32766: from all lookup main 32767: from all lookup default", "ip route list table 5000 0.0.0.0/0 via 192.0.2.2 dev enp1s0 proto static metric 100 10.0.0.0/24 dev enp8s0 proto static scope link src 192.0.2.1 metric 102", "firewall-cmd --get-active-zones external interfaces: enp1s0 enp7s0 trusted interfaces: enp8s0 enp9s0", "firewall-cmd --info-zone=external external (active) target: default icmp-block-inversion: no interfaces: enp1s0 enp7s0 sources: services: ssh ports: protocols: masquerade: yes", "--- - name: Configure the network hosts: managed-node-01.example.com tasks: - name: Ethernet connection profile with dynamic IP address settings and offload features ansible.builtin.include_role: name: rhel-system-roles.network vars: 
network_connections: - name: enp1s0 type: ethernet autoconnect: yes ip: dhcp4: yes auto6: yes ethtool: features: gro: no gso: yes tx_sctp_segmentation: no state: up", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "ansible managed-node-01.example.com -m ansible.builtin.setup \"ansible_enp1s0\": { \"active\": true, \"device\": \"enp1s0\", \"features\": { \"rx_gro_hw\": \"off, \"tx_gso_list\": \"on, \"tx_sctp_segmentation\": \"off\", }", "--- - name: Configure the network hosts: managed-node-01.example.com tasks: - name: Ethernet connection profile with dynamic IP address settings and coalesce settings ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: - name: enp1s0 type: ethernet autoconnect: yes ip: dhcp4: yes auto6: yes ethtool: coalesce: rx_frames: 128 tx_frames: 128 state: up", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "ansible managed-node-01.example.com -m command -a 'ethtool -c enp1s0' managed-node-01.example.com | CHANGED | rc=0 >> rx-frames: 128 tx-frames: 128", "--- - name: Configure the network hosts: managed-node-01.example.com tasks: - name: Ethernet connection profile with dynamic IP address setting and increased ring buffer sizes ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: - name: enp1s0 type: ethernet autoconnect: yes ip: dhcp4: yes auto6: yes ethtool: ring: rx: 4096 tx: 4096 state: up", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "ansible managed-node-01.example.com -m command -a 'ethtool -g enp1s0' managed-node-01.example.com | CHANGED | rc=0 >> Current hardware settings: RX: 4096 RX Mini: 0 RX Jumbo: 0 TX: 4096", "--- - name: Configure the network hosts: managed-node-01.example.com tasks: - name: IPoIB connection profile with static IP address settings ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: # InfiniBand connection mlx4_ib0 - name: mlx4_ib0 interface_name: mlx4_ib0 type: infiniband # IPoIB device mlx4_ib0.8002 on top of mlx4_ib0 - name: mlx4_ib0.8002 type: infiniband autoconnect: yes infiniband: p_key: 0x8002 transport_mode: datagram parent: mlx4_ib0 ip: address: - 192.0.2.1/24 - 2001:db8:1::1/64 state: up", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "ansible managed-node-01.example.com -m command -a 'ip address show mlx4_ib0.8002' managed-node-01.example.com | CHANGED | rc=0 >> inet 192.0.2.1/24 brd 192.0.2.255 scope global noprefixroute ib0.8002 valid_lft forever preferred_lft forever inet6 2001:db8:1::1/64 scope link tentative noprefixroute valid_lft forever preferred_lft forever", "ansible managed-node-01.example.com -m command -a 'cat /sys/class/net/mlx4_ib0.8002/pkey' managed-node-01.example.com | CHANGED | rc=0 >> 0x8002", "ansible managed-node-01.example.com -m command -a 'cat /sys/class/net/mlx4_ib0.8002/mode' managed-node-01.example.com | CHANGED | rc=0 >> datagram", "vars: network_state: interfaces: - name: enp7s0 type: ethernet state: up ipv4: enabled: true auto-dns: true auto-gateway: true auto-routes: true dhcp: true ipv6: enabled: true auto-dns: true auto-gateway: true auto-routes: true autoconf: true dhcp: true", "vars: network_connections: - name: enp7s0 interface_name: enp7s0 type: ethernet autoconnect: yes ip: dhcp4: yes auto6: yes state: up", "vars: network_state: interfaces: - name: enp7s0 type: ethernet state: down", "vars: network_connections: - name: enp7s0 
interface_name: enp7s0 type: ethernet autoconnect: yes ip: dhcp4: yes auto6: yes state: down" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/automating_system_administration_by_using_rhel_system_roles/assembly_configuring-network-settings-by-using-rhel-system-roles_automating-system-administration-by-using-rhel-system-roles
Chapter 3. Customizing the Home page in Red Hat Developer Hub
Chapter 3. Customizing the Home page in Red Hat Developer Hub When using the app-config , you can do the following: Customize and extend the default Home page layout with additional cards that appear based on the plugins you have installed and enabled. Change quick access links. Add, reorganize, and remove the following available cards: Search bar Quick access Headline Markdown Placeholder Catalog starred entities Featured docs 3.1. Customizing the Home page cards Administrators can change the fixed height of cards that are in a 12-column grid. The default Home page is as shown in the following app-config file configuration: dynamicPlugins: frontend: janus-idp.backstage-plugin-dynamic-home-page: dynamicRoutes: - path: / importName: DynamicHomePage mountPoints: - mountPoint: home.page/cards importName: SearchBar config: layouts: xl: { w: 10, h: 1, x: 1 } lg: { w: 10, h: 1, x: 1 } md: { w: 10, h: 1, x: 1 } sm: { w: 10, h: 1, x: 1 } xs: { w: 12, h: 1 } xxs: { w: 12, h: 1 } - mountPoint: home.page/cards importName: QuickAccessCard config: layouts: xl: { w: 7, h: 8 } lg: { w: 7, h: 8 } md: { w: 7, h: 8 } sm: { w: 12, h: 8 } xs: { w: 12, h: 8 } xxs: { w: 12, h: 8 } - mountPoint: home.page/cards importName: CatalogStarredEntitiesCard config: layouts: xl: { w: 5, h: 4, x: 7 } lg: { w: 5, h: 4, x: 7 } md: { w: 5, h: 4, x: 7 } sm: { w: 12, h: 4 } xs: { w: 12, h: 4 } xxs: { w: 12, h: 4 } Prerequisites You have administrative access and can modify the app-config.yaml file for dynamic plugin configurations. Procedure Configure different cards for your Home page in Red Hat Developer Hub as follows: Search dynamicPlugins: frontend: janus-idp.backstage-plugin-dynamic-home-page: mountPoints: - mountPoint: home.page/cards importName: SearchBar config: layouts: xl: { w: 10, h: 1, x: 1 } lg: { w: 10, h: 1, x: 1 } md: { w: 10, h: 1, x: 1 } sm: { w: 10, h: 1, x: 1 } xs: { w: 12, h: 1 } xxs: { w: 12, h: 1 } Table 3.1. Available props Prop Default Description path /search Override the linked search path if needed queryParam query Override the search query parameter name if needed Quick access dynamicPlugins: frontend: janus-idp.backstage-plugin-dynamic-home-page: mountPoints: - mountPoint: home.page/cards importName: QuickAccessCard config: layouts: xl: { h: 8 } lg: { h: 8 } md: { h: 8 } sm: { h: 8 } xs: { h: 8 } xxs: { h: 8 } Table 3.2. Available props Prop Default Description title Quick Access Override the card title if needed path none Override the path to the data source that provides the quick access links if needed Headline dynamicPlugins: frontend: janus-idp.backstage-plugin-dynamic-home-page: mountPoints: - mountPoint: home.page/cards importName: Headline config: layouts: xl: { h: 1 } lg: { h: 1 } md: { h: 1 } sm: { h: 1 } xs: { h: 1 } xxs: { h: 1 } props: title: Important info Table 3.3.
Available props Prop Default Description title none Title Markdown dynamicPlugins: frontend: janus-idp.backstage-plugin-dynamic-home-page: mountPoints: - mountPoint: home.page/cards importName: MarkdownCard config: layouts: xl: { w: 6, h: 4 } lg: { w: 6, h: 4 } md: { w: 6, h: 4 } sm: { w: 6, h: 4 } xs: { w: 6, h: 4 } xxs: { w: 6, h: 4 } props: title: Company links content: | ### RHDH * [Website](https://developers.redhat.com/rhdh/overview) * [Documentation](https://docs.redhat.com/en/documentation/red_hat_developer_hub/) * [GitHub Showcase](https://github.com/redhat-developer/rhdh) * [GitHub Plugins](https://github.com/janus-idp/backstage-plugins) - mountPoint: home.page/cards importName: Markdown config: layouts: xl: { w: 6, h: 4, x: 6 } lg: { w: 6, h: 4, x: 6 } md: { w: 6, h: 4, x: 6 } sm: { w: 6, h: 4, x: 6 } xs: { w: 6, h: 4, x: 6 } xxs: { w: 6, h: 4, x: 6 } props: title: Important company links content: | ### RHDH * [Website](https://developers.redhat.com/rhdh/overview) * [Documentation](https://docs.redhat.com/en/documentation/red_hat_developer_hub/) * [GitHub Showcase](https://github.com/redhat-developer/rhdh) * [GitHub Plugins](https://github.com/janus-idp/backstage-plugins) Placeholder dynamicPlugins: frontend: janus-idp.backstage-plugin-dynamic-home-page: mountPoints: - mountPoint: home.page/cards importName: Placeholder config: layouts: xl: { w: 1, h: 1 } lg: { w: 1, h: 1 } md: { w: 1, h: 1 } sm: { w: 1, h: 1 } xs: { w: 1, h: 1 } xxs: { w: 1, h: 1 } props: showBorder: true debugContent: '1' Catalog starred entities dynamicPlugins: frontend: janus-idp.backstage-plugin-dynamic-home-page: mountPoints: - mountPoint: home.page/cards importName: CatalogStarredEntitiesCard Featured docs dynamicPlugins: frontend: janus-idp.backstage-plugin-dynamic-home-page: mountPoints: - mountPoint: home.page/cards importName: FeaturedDocsCard 3.2. Defining the layout of the Red Hat Developer Hub Home page Prerequisites Include the following parameters in each of your breakpoints: width (w) height (h) position (x and y) Procedure Configure your Developer Hub app-config.yaml configuration file by choosing one of the following options: Use the full space on smaller windows and half of the space on larger windows as follows: dynamicPlugins: frontend: janus-idp.backstage-plugin-dynamic-home-page: mountPoints: - mountPoint: home.page/cards importName: Placeholder config: layouts: xl: { w: 6, h: 2 } lg: { w: 6, h: 2 } md: { w: 6, h: 2 } sm: { w: 12, h: 2 } xs: { w: 12, h: 2 } xxs: { w: 12, h: 2 } props: showBorder: true debugContent: a placeholder Show the cards side by side by defining the x parameter as follows: dynamicPlugins: frontend: janus-idp.backstage-plugin-dynamic-home-page: mountPoints: - mountPoint: home.page/cards importName: Placeholder config: layouts: xl: { w: 6, h: 2 } lg: { w: 6, h: 2 } md: { w: 6, h: 2 } sm: { w: 12, h: 2 } xs: { w: 12, h: 2 } xxs: { w: 12, h: 2 } props: showBorder: true debugContent: left - mountPoint: home.page/cards importName: Placeholder config: layouts: xl: { w: 6, h: 2, x: 6 } lg: { w: 6, h: 2, x: 6 } md: { w: 6, h: 2, x: 6 } sm: { w: 12, h: 2, x: 0 } xs: { w: 12, h: 2, x: 0 } xxs: { w: 12, h: 2, x: 0 } props: showBorder: true debugContent: right On smaller window sizes, however, the second card appears below the first card by default.
Show the cards in three columns by defining the x parameter as follows: dynamicPlugins: frontend: janus-idp.backstage-plugin-dynamic-home-page: mountPoints: - mountPoint: home.page/cards importName: Placeholder config: layouts: xl: { w: 4, h: 2 } lg: { w: 4, h: 2 } md: { w: 4, h: 2 } sm: { w: 6, h: 2 } xs: { w: 12, h: 2 } xxs: { w: 12, h: 2 } props: showBorder: true debugContent: left - mountPoint: home.page/cards importName: Placeholder config: layouts: xl: { w: 4, h: 2, x: 4 } lg: { w: 4, h: 2, x: 4 } md: { w: 4, h: 2, x: 4 } sm: { w: 6, h: 2, x: 6 } xs: { w: 12, h: 2 } xxs: { w: 12, h: 2 } props: showBorder: true debugContent: center - mountPoint: home.page/cards importName: Placeholder config: layouts: xl: { w: 4, h: 2, x: 8 } lg: { w: 4, h: 2, x: 8 } md: { w: 4, h: 2, x: 8 } sm: { w: 6, h: 2 } xs: { w: 12, h: 2 } xxs: { w: 12, h: 2 } props: showBorder: true debugContent: right
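Putting these pieces together, the following sketch outlines one possible combined Home page: a full-width search bar above two half-width cards. All importName values come from the cards documented above; only the xl breakpoint is shown for brevity, and in practice you would define every breakpoint as in the earlier examples:

dynamicPlugins:
  frontend:
    janus-idp.backstage-plugin-dynamic-home-page:
      dynamicRoutes:
        - path: /
          importName: DynamicHomePage
      mountPoints:
        - mountPoint: home.page/cards
          importName: SearchBar
          config:
            layouts:
              xl: { w: 12, h: 1 }
        - mountPoint: home.page/cards
          importName: FeaturedDocsCard
          config:
            layouts:
              xl: { w: 6, h: 4, y: 1 }
        - mountPoint: home.page/cards
          importName: CatalogStarredEntitiesCard
          config:
            layouts:
              xl: { w: 6, h: 4, x: 6, y: 1 }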
[ "dynamicPlugins: frontend: janus-idp.backstage-plugin-dynamic-home-page: dynamicRoutes: - path: / importName: DynamicHomePage mountPoints: - mountPoint: home.page/cards importName: SearchBar config: layouts: xl: { w: 10, h: 1, x: 1 } lg: { w: 10, h: 1, x: 1 } md: { w: 10, h: 1, x: 1 } sm: { w: 10, h: 1, x: 1 } xs: { w: 12, h: 1 } xxs: { w: 12, h: 1 } - mountPoint: home.page/cards importName: QuickAccessCard config: layouts: xl: { w: 7, h: 8 } lg: { w: 7, h: 8 } md: { w: 7, h: 8 } sm: { w: 12, h: 8 } xs: { w: 12, h: 8 } xxs: { w: 12, h: 8 } - mountPoint: home.page/cards importName: CatalogStarredEntitiesCard config: layouts: xl: { w: 5, h: 4, x: 7 } lg: { w: 5, h: 4, x: 7 } md: { w: 5, h: 4, x: 7 } sm: { w: 12, h: 4 } xs: { w: 12, h: 4 } xxs: { w: 12, h: 4 }", "dynamicPlugins: frontend: janus-idp.backstage-plugin-dynamic-home-page: mountPoints: - mountPoint: home.page/cards importName: SearchBar config: layouts: xl: { w: 10, h: 1, x: 1 } lg: { w: 10, h: 1, x: 1 } md: { w: 10, h: 1, x: 1 } sm: { w: 10, h: 1, x: 1 } xs: { w: 12, h: 1 } xxs: { w: 12, h: 1 }", "dynamicPlugins: frontend: janus-idp.backstage-plugin-dynamic-home-page: mountPoints: - mountPoint: home.page/cards importName: QuickAccessCard config: layouts: xl: { h: 8 } lg: { h: 8 } md: { h: 8 } sm: { h: 8 } xs: { h: 8 } xxs: { h: 8 }", "dynamicPlugins: frontend: janus-idp.backstage-plugin-dynamic-home-page: mountPoints: - mountPoint: home.page/cards importName: Headline config: layouts: xl: { h: 1 } lg: { h: 1 } md: { h: 1 } sm: { h: 1 } xs: { h: 1 } xxs: { h: 1 } props: title: Important info", "dynamicPlugins: frontend: janus-idp.backstage-plugin-dynamic-home-page: mountPoints: - mountPoint: home.page/cards importName: MarkdownCard config: layouts: xl: { w: 6, h: 4 } lg: { w: 6, h: 4 } md: { w: 6, h: 4 } sm: { w: 6, h: 4 } xs: { w: 6, h: 4 } xxs: { w: 6, h: 4 } props: title: Company links content: | ### RHDH * [Website](https://developers.redhat.com/rhdh/overview) * [Documentation](https://docs.redhat.com/en/documentation/red_hat_developer_hub/) * [GitHub Showcase](https://github.com/redhat-developer/rhdh) * [GitHub Plugins](https://github.com/janus-idp/backstage-plugins) - mountPoint: home.page/cards importName: Markdown config: layouts: xl: { w: 6, h: 4, x: 6 } lg: { w: 6, h: 4, x: 6 } md: { w: 6, h: 4, x: 6 } sm: { w: 6, h: 4, x: 6 } xs: { w: 6, h: 4, x: 6 } xxs: { w: 6, h: 4, x: 6 } props: title: Important company links content: | ### RHDH * [Website](https://developers.redhat.com/rhdh/overview) * [Documentation](https://docs.redhat.com/en/documentation/red_hat_developer_hub/) * [GitHub Showcase](https://github.com/redhat-developer/rhdh) * [GitHub Plugins](https://github.com/janus-idp/backstage-plugins)", "dynamicPlugins: frontend: janus-idp.backstage-plugin-dynamic-home-page: mountPoints: - mountPoint: home.page/cards importName: Placeholder config: layouts: xl: { w: 1, h: 1 } lg: { w: 1, h: 1 } md: { w: 1, h: 1 } sm: { w: 1, h: 1 } xs: { w: 1, h: 1 } xxs: { w: 1, h: 1 } props: showBorder: true debugContent: '1'", "dynamicPlugins: frontend: janus-idp.backstage-plugin-dynamic-home-page: mountPoints: - mountPoint: home.page/cards importName: CatalogStarredEntitiesCard", "dynamicPlugins: frontend: janus-idp.backstage-plugin-dynamic-home-page: mountPoints: - mountPoint: home.page/cards importName: FeaturedDocsCard", "dynamicPlugins: frontend: janus-idp.backstage-plugin-dynamic-home-page: mountPoints: - mountPoint: home.page/cards importName: Placeholder config: layouts: xl: { w: 6, h: 2 } lg: { w: 6, h: 2 } md: { w: 6, h: 2 } 
sm: { w: 12, h: 2 } xs: { w: 12, h: 2 } xxs: { w: 12, h: 2 } props: showBorder: true debugContent: a placeholder", "dynamicPlugins: frontend: janus-idp.backstage-plugin-dynamic-home-page: mountPoints: - mountPoint: home.page/cards importName: Placeholder config: layouts: xl: { w: 6, h: 2 } lg: { w: 6, h: 2 } md: { w: 6, h: 2 } sm: { w: 12, h: 2 } xs: { w: 12, h: 2 } xxs: { w: 12, h: 2 } props: showBorder: true debugContent: left - mountPoint: home.page/cards importName: Placeholder config: layouts: xl: { w: 6, h: 2, x: 6 } lg: { w: 6, h: 2, x: 6 } md: { w: 6, h: 2, x: 6 } sm: { w: 12, h: 2, x: 0 } xs: { w: 12, h: 2, x: 0 } xxs: { w: 12, h: 2, x: 0 } props: showBorder: true debugContent: right", "dynamicPlugins: frontend: janus-idp.backstage-plugin-dynamic-home-page: mountPoints: - mountPoint: home.page/cards importName: Placeholder config: layouts: xl: { w: 4, h: 2 } lg: { w: 4, h: 2 } md: { w: 4, h: 2 } sm: { w: 6, h: 2 } xs: { w: 12, h: 2 } xxs: { w: 12, h: 2 } props: showBorder: true debugContent: left - mountPoint: home.page/cards importName: Placeholder config: layouts: xl: { w: 4, h: 2, x: 4 } lg: { w: 4, h: 2, x: 4 } md: { w: 4, h: 2, x: 4 } sm: { w: 6, h: 2, x: 6 } xs: { w: 12, h: 2 } xxs: { w: 12, h: 2 } props: showBorder: true debugContent: center - mountPoint: home.page/cards importName: Placeholder config: layouts: xl: { w: 4, h: 2, x: 8 } lg: { w: 4, h: 2, x: 8 } md: { w: 4, h: 2, x: 8 } sm: { w: 6, h: 2 } xs: { w: 12, h: 2 } xxs: { w: 12, h: 2 } props: showBorder: true debugContent: right" ]
https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.3/html/getting_started_with_red_hat_developer_hub/proc-customize-rhdh-homepage_rhdh-getting-started
Chapter 7. Using MACsec to encrypt layer-2 traffic in the same physical network
Chapter 7. Using MACsec to encrypt layer-2 traffic in the same physical network You can use MACsec to secure the communication between two devices (point-to-point). For example, if your branch office is connected to the central office over a Metro-Ethernet connection, you can configure MACsec on the two hosts that connect the offices to increase security. 7.1. How MACsec increases security Media Access Control security (MACsec) is a layer-2 protocol that secures different traffic types over Ethernet links, including: Dynamic host configuration protocol (DHCP) address resolution protocol (ARP) IPv4 and IPv6 traffic Any traffic over IP such as TCP or UDP MACsec encrypts and authenticates all traffic in LANs, by default with the GCM-AES-128 algorithm, and uses a pre-shared key to establish the connection between the participant hosts. To change the pre-shared key, you must update the NetworkManager configuration on all network hosts that use MACsec. A MACsec connection uses an Ethernet device, such as an Ethernet network card, VLAN, or tunnel device, as a parent. You can either set an IP configuration only on the MACsec device to communicate with other hosts only by using the encrypted connection, or you can also set an IP configuration on the parent device. In the latter case, you can use the parent device to communicate with other hosts using an unencrypted connection and the MACsec device for encrypted connections. MACsec does not require any special hardware. For example, you can use any switch, except if you want to encrypt traffic only between a host and a switch. In this scenario, the switch must also support MACsec. In other words, you can configure MACsec for two common scenarios: Host-to-host Host-to-switch and switch-to-other-hosts Important You can use MACsec only between hosts that are in the same physical or virtual LAN. Additional resources MACsec: a different solution to encrypt network traffic 7.2. Configuring a MACsec connection by using nmcli You can use the nmcli utility to configure Ethernet interfaces to use MACsec. For example, you can create a MACsec connection between two hosts that are connected over Ethernet. Procedure On the first host on which you configure MACsec: Create the connectivity association key (CAK) and connectivity-association key name (CKN) for the pre-shared key: Create a 16-byte hexadecimal CAK: Create a 32-byte hexadecimal CKN: On both hosts you want to connect over a MACsec connection: Create the MACsec connection: Use the CAK and CKN generated in the previous step in the macsec.mka-cak and macsec.mka-ckn parameters. The values must be the same on every host in the MACsec-protected network. Configure the IP settings on the MACsec connection. Configure the IPv4 settings. For example, to set a static IPv4 address, network mask, default gateway, and DNS server to the macsec0 connection, enter: Configure the IPv6 settings. For example, to set a static IPv6 address, network mask, default gateway, and DNS server to the macsec0 connection, enter: Activate the connection: Verification Verify that the traffic is encrypted: Optional: Display the unencrypted traffic: Display MACsec statistics: Display individual counters for each type of protection: integrity-only (encrypt off) and encryption (encrypt on) Additional resources MACsec: a different solution to encrypt network traffic 7.3. Configuring a MACsec connection by using nmstatectl You can configure Ethernet interfaces to use MACsec through the nmstatectl utility in a declarative way.
For example, in a YAML file, you describe the desired state of your network, which includes a MACsec connection between two hosts connected over Ethernet. The nmstatectl utility interprets the YAML file and deploys persistent and consistent network configuration across the hosts. Using the MACsec security standard for securing communication at the link layer, also known as layer 2 of the Open Systems Interconnection (OSI) model, provides the following notable benefits: Encryption at layer 2 eliminates the need for encrypting individual services at layer 7. This reduces the overhead associated with managing a large number of certificates for each endpoint on each host. Point-to-point security between directly connected network devices such as routers and switches. No changes needed for applications and higher-layer protocols. Prerequisites A physical or virtual Ethernet Network Interface Controller (NIC) exists in the server configuration. The nmstate package is installed. Procedure On the first host on which you configure MACsec, create the connectivity association key (CAK) and connectivity-association key name (CKN) for the pre-shared key: Create a 16-byte hexadecimal CAK: Create a 32-byte hexadecimal CKN: On both hosts that you want to connect over a MACsec connection, complete the following steps: Create a YAML file, for example create-macsec-connection.yml , with the following settings: Use the CAK and CKN generated in the previous step in the mka-cak and mka-ckn parameters. The values must be the same on every host in the MACsec-protected network. Optional: In the same YAML configuration file, you can also configure the following settings: A static IPv4 address - 192.0.2.1 with the /32 subnet mask A static IPv6 address - 2001:db8:1::1 with the /64 subnet mask An IPv4 default gateway - 192.0.2.2 An IPv4 DNS server - 192.0.2.200 An IPv6 DNS server - 2001:db8:1::ffbb A DNS search domain - example.com Apply the settings to the system: Verification Display the current state in YAML format: Verify that the traffic is encrypted: Optional: Display the unencrypted traffic: Display MACsec statistics: Display individual counters for each type of protection: integrity-only (encrypt off) and encryption (encrypt on) Additional resources MACsec: a different solution to encrypt network traffic
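For quick reference, the key material for both procedures is generated with dd and hexdump , and the result can be checked with nmstatectl , tcpdump , and ip macsec . The following condensed sketch uses the commands from this chapter; the generated key values are examples only, and you must use your own values on every MACsec host:

# Generate a 16-byte CAK and a 32-byte CKN
dd if=/dev/urandom count=16 bs=1 2> /dev/null | hexdump -e '1/2 "%04x"'
dd if=/dev/urandom count=32 bs=1 2> /dev/null | hexdump -e '1/2 "%04x"'
# Apply the declarative configuration and inspect the result
nmstatectl apply create-macsec-connection.yml
nmstatectl show macsec0
# Confirm encryption on the wire and display per-association statistics
tcpdump -nn -i macsec0
ip -s macsec show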
[ "dd if=/dev/urandom count=16 bs=1 2> /dev/null | hexdump -e '1/2 \"%04x\"' 50b71a8ef0bd5751ea76de6d6c98c03a", "dd if=/dev/urandom count=32 bs=1 2> /dev/null | hexdump -e '1/2 \"%04x\"' f2b4297d39da7330910a74abc0449feb45b5c0b9fc23df1430e1898fcf1c4550", "nmcli connection add type macsec con-name macsec0 ifname macsec0 connection.autoconnect yes macsec.parent enp1s0 macsec.mode psk macsec.mka-cak 50b71a8ef0bd5751ea76de6d6c98c03a macsec.mka-ckn f2b4297d39da7330910a74abc0449feb45b5c0b9fc23df1430e1898fcf1c4550", "nmcli connection modify macsec0 ipv4.method manual ipv4.addresses '192.0.2.1/24' ipv4.gateway '192.0.2.254' ipv4.dns '192.0.2.253'", "nmcli connection modify macsec0 ipv6.method manual ipv6.addresses '2001:db8:1::1/32' ipv6.gateway '2001:db8:1::fffe' ipv6.dns '2001:db8:1::fffd'", "nmcli connection up macsec0", "tcpdump -nn -i enp1s0", "tcpdump -nn -i macsec0", "ip macsec show", "ip -s macsec show", "dd if=/dev/urandom count=16 bs=1 2> /dev/null | hexdump -e '1/2 \"%04x\"' 50b71a8ef0bd5751ea76de6d6c98c03a", "dd if=/dev/urandom count=32 bs=1 2> /dev/null | hexdump -e '1/2 \"%04x\"' f2b4297d39da7330910a74abc0449feb45b5c0b9fc23df1430e1898fcf1c4550", "--- routes: config: - destination: 0.0.0.0/0 next-hop-interface: macsec0 next-hop-address: 192.0.2.2 table-id: 254 - destination: 192.0.2.2/32 next-hop-interface: macsec0 next-hop-address: 0.0.0.0 table-id: 254 dns-resolver: config: search: - example.com server: - 192.0.2.200 - 2001:db8:1::ffbb interfaces: - name: macsec0 type: macsec state: up ipv4: enabled: true address: - ip: 192.0.2.1 prefix-length: 32 ipv6: enabled: true address: - ip: 2001:db8:1::1 prefix-length: 64 macsec: encrypt: true base-iface: enp0s1 mka-cak: 50b71a8ef0bd5751ea76de6d6c98c03a mka-ckn: f2b4297d39da7330910a74abc0449feb45b5c0b9fc23df1430e1898fcf1c4550 port: 0 validation: strict send-sci: true", "nmstatectl apply create-macsec-connection.yml", "nmstatectl show macsec0", "tcpdump -nn -i enp0s1", "tcpdump -nn -i macsec0", "ip macsec show", "ip -s macsec show" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/securing_networks/assembly_using-macsec-to-encrypt-layer-2-traffic-in-the-same-physical-network_securing-networks
Chapter 11. Red Hat Directory Server 11.0
Chapter 11. Red Hat Directory Server 11.0 11.1. Highlighted updates and new features This section documents new features and important updates in Directory Server 11.0. Directory Server introduces new command-line utilities to manage instances Red Hat Directory Server 11.0 introduces the dscreate , dsconf , and dsctl utilities. These utilities simplify managing Directory Server using the command line. For example, you can now use a command with parameters to configure a feature instead of sending complex LDIF statements to the server. The following is an overview of the purpose of each utility: Use the dscreate utility to create new Directory Server instances using the interactive mode or an INF file. Note that the INF file format is different from the one the installer used in previous Directory Server versions. Use the dsconf utility to manage Directory Server instances during run time. For example, use dsconf to: Configure settings in the cn=config entry Configure plug-ins Configure replication Back up and restore an instance Use the dsctl utility to manage Directory Server instances while they are offline. For example, use dsctl to: Start and stop an instance Re-index the server database Back up and restore an instance These utilities replace the Perl and shell scripts marked as deprecated in Directory Server 10. The scripts are still available in the unsupported 389-ds-base-legacy-tools package; however, Red Hat supports managing Directory Server only with the new utilities. Note that configuring Directory Server using LDIF statements is still supported, but Red Hat recommends using the utilities. For further details about using the utilities, see the Red Hat Directory Server 11 Documentation . Directory Server now provides a browser-based user interface This enhancement adds a browser-based interface to Red Hat Directory Server that replaces the Java-based Console used in previous versions. As a result, administrators can now use the Red Hat Enterprise Linux web console to manage Directory Server instances using a browser. For further details, see the Red Hat Directory Server 11 Documentation . Note that the browser-based user interface does not contain an LDAP browser. The default value of the nsslapd-unhashed-pw-switch parameter is now off In certain situations, for example when synchronizing passwords with Active Directory (AD), a Directory Server plug-in must store the unencrypted password on the hard disk. The nsslapd-unhashed-pw-switch configuration parameter determines whether and how Directory Server stores unencrypted passwords. To improve security in scenarios that do not require plug-ins to store unencrypted passwords, the default value of the nsslapd-unhashed-pw-switch parameter has been changed in Directory Server 11.0 from on to off . If you want to configure password synchronization with AD, manually enable nsslapd-unhashed-pw-switch on the Directory Server instance that has the Windows synchronization agreement configured: # dsconf -D "cn=Directory Manager" ldap://server.example.com config replace nsslapd-unhashed-pw-switch=on Highlighted updates and new features in the 389-ds-base packages Features in Red Hat Directory Server that are included in the 389-ds-base packages are documented in the Red Hat Enterprise Linux 8.1 Release Notes: New password syntax checks in Directory Server Directory Server now provides improved internal operations logging support 11.2. Known issues This section documents known problems and, if applicable, workarounds in Directory Server 11.0.
Directory Server settings that are changed outside the web console's window are not automatically visible Because of the design of the Directory Server module in the Red Hat Enterprise Linux 8 web console, the web console does not automatically display the latest settings if a user changes the configuration outside of the console's window. For example, if you change the configuration using the command line while the web console is open, the new settings are not automatically updated in the web console. This also applies if you change the configuration using the web console on a different computer. To work around the problem, manually refresh the web console in the browser if the configuration has been changed outside the console's window. The Directory Server Web Console does not provide an LDAP browser The web console enables administrators to manage and configure Directory Server 11 instances. However, it does not provide an integrated LDAP browser. To manage users and groups in Directory Server, use the dsidm utility. To display and modify directory entries, use a third-party LDAP browser or the OpenLDAP client utilities provided by the openldap-clients package.
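As an illustration of the dsidm workaround mentioned above, the following commands list and display user entries. This is a sketch only: the instance name localhost , the dc=example,dc=com suffix, and the demo_user account are assumptions, not values from this release note:

# List user entries under the assumed suffix
dsidm localhost -b "dc=example,dc=com" user list
# Display a single, hypothetical user entry
dsidm localhost -b "dc=example,dc=com" user get demo_user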
[ "dsconf -D \"cn=Directory Manager\" ldap://server.example.com config replace nsslapd-unhashed-pw-switch=on" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/release_notes/directory-server-11.0
Chapter 1. Autoscale APIs
Chapter 1. Autoscale APIs 1.1. ClusterAutoscaler [autoscaling.openshift.io/v1] Description ClusterAutoscaler is the Schema for the clusterautoscalers API Type object 1.2. MachineAutoscaler [autoscaling.openshift.io/v1beta1] Description MachineAutoscaler is the Schema for the machineautoscalers API Type object 1.3. HorizontalPodAutoscaler [autoscaling/v2] Description HorizontalPodAutoscaler is the configuration for a horizontal pod autoscaler, which automatically manages the replica count of any resource implementing the scale subresource based on the metrics specified. Type object 1.4. Scale [autoscaling/v1] Description Scale represents a scaling request for a resource. Type object
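Both autoscaling.openshift.io resources are managed like any other API object. The following minimal manifests are an illustrative sketch of the two kinds listed above; the node limit, replica counts, and the worker-us-east-1a machine set name are example values rather than defaults:

apiVersion: autoscaling.openshift.io/v1
kind: ClusterAutoscaler
metadata:
  name: default                  # the ClusterAutoscaler is conventionally named "default"
spec:
  resourceLimits:
    maxNodesTotal: 10            # example cluster-wide node cap
---
apiVersion: autoscaling.openshift.io/v1beta1
kind: MachineAutoscaler
metadata:
  name: worker-us-east-1a        # hypothetical name
  namespace: openshift-machine-api
spec:
  minReplicas: 1
  maxReplicas: 4
  scaleTargetRef:
    apiVersion: machine.openshift.io/v1beta1
    kind: MachineSet
    name: worker-us-east-1a      # hypothetical machine set to scale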
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/autoscale_apis/autoscale-apis
Chapter 20. OperatorHub [config.openshift.io/v1]
Chapter 20. OperatorHub [config.openshift.io/v1] Description OperatorHub is the Schema for the operatorhubs API. It can be used to change the state of the default hub sources for OperatorHub on the cluster from enabled to disabled and vice versa. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 20.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object OperatorHubSpec defines the desired state of OperatorHub status object OperatorHubStatus defines the observed state of OperatorHub. The current state of the default hub sources will always be reflected here. 20.1.1. .spec Description OperatorHubSpec defines the desired state of OperatorHub Type object Property Type Description disableAllDefaultSources boolean disableAllDefaultSources allows you to disable all the default hub sources. If this is true, a specific entry in sources can be used to enable a default source. If this is false, a specific entry in sources can be used to disable or enable a default source. sources array sources is the list of default hub sources and their configuration. If the list is empty, it implies that the default hub sources are enabled on the cluster unless disableAllDefaultSources is true. If disableAllDefaultSources is true and sources is not empty, the configuration present in sources will take precedence. The list of default hub sources and their current state will always be reflected in the status block. sources[] object HubSource is used to specify the hub source and its configuration 20.1.2. .spec.sources Description sources is the list of default hub sources and their configuration. If the list is empty, it implies that the default hub sources are enabled on the cluster unless disableAllDefaultSources is true. If disableAllDefaultSources is true and sources is not empty, the configuration present in sources will take precedence. The list of default hub sources and their current state will always be reflected in the status block. Type array 20.1.3. .spec.sources[] Description HubSource is used to specify the hub source and its configuration Type object Property Type Description disabled boolean disabled is used to disable a default hub source on cluster name string name is the name of one of the default hub sources 20.1.4. .status Description OperatorHubStatus defines the observed state of OperatorHub. The current state of the default hub sources will always be reflected here. Type object Property Type Description sources array sources encapsulates the result of applying the configuration for each hub source sources[] object HubSourceStatus is used to reflect the current state of applying the configuration to a default source 20.1.5. 
.status.sources Description sources encapsulates the result of applying the configuration for each hub source Type array 20.1.6. .status.sources[] Description HubSourceStatus is used to reflect the current state of applying the configuration to a default source Type object Property Type Description disabled boolean disabled is used to disable a default hub source on cluster message string message provides more information regarding failures name string name is the name of one of the default hub sources status string status indicates success or failure in applying the configuration 20.2. API endpoints The following API endpoints are available: /apis/config.openshift.io/v1/operatorhubs DELETE : delete collection of OperatorHub GET : list objects of kind OperatorHub POST : create an OperatorHub /apis/config.openshift.io/v1/operatorhubs/{name} DELETE : delete an OperatorHub GET : read the specified OperatorHub PATCH : partially update the specified OperatorHub PUT : replace the specified OperatorHub /apis/config.openshift.io/v1/operatorhubs/{name}/status GET : read status of the specified OperatorHub PATCH : partially update status of the specified OperatorHub PUT : replace status of the specified OperatorHub 20.2.1. /apis/config.openshift.io/v1/operatorhubs HTTP method DELETE Description delete collection of OperatorHub Table 20.1. HTTP responses HTTP code Response body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind OperatorHub Table 20.2. HTTP responses HTTP code Response body 200 - OK OperatorHubList schema 401 - Unauthorized Empty HTTP method POST Description create an OperatorHub Table 20.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 20.4. Body parameters Parameter Type Description body OperatorHub schema Table 20.5. HTTP responses HTTP code Response body 200 - OK OperatorHub schema 201 - Created OperatorHub schema 202 - Accepted OperatorHub schema 401 - Unauthorized Empty 20.2.2. /apis/config.openshift.io/v1/operatorhubs/{name} Table 20.6. Global path parameters Parameter Type Description name string name of the OperatorHub HTTP method DELETE Description delete an OperatorHub Table 20.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted.
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 20.8. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified OperatorHub Table 20.9. HTTP responses HTTP code Response body 200 - OK OperatorHub schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified OperatorHub Table 20.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 20.11. HTTP responses HTTP code Response body 200 - OK OperatorHub schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified OperatorHub Table 20.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 20.13. Body parameters Parameter Type Description body OperatorHub schema Table 20.14. HTTP responses HTTP code Response body 200 - OK OperatorHub schema 201 - Created OperatorHub schema 401 - Unauthorized Empty 20.2.3.
/apis/config.openshift.io/v1/operatorhubs/{name}/status Table 20.15. Global path parameters Parameter Type Description name string name of the OperatorHub HTTP method GET Description read status of the specified OperatorHub Table 20.16. HTTP responses HTTP code Response body 200 - OK OperatorHub schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified OperatorHub Table 20.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 20.18. HTTP responses HTTP code Response body 200 - OK OperatorHub schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified OperatorHub Table 20.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 20.20. Body parameters Parameter Type Description body OperatorHub schema Table 20.21. HTTP responses HTTP code Response body 200 - OK OperatorHub schema 201 - Created OperatorHub schema 401 - Unauthorized Empty
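As a concrete sketch of the spec fields described above, the following manifest disables all default hub sources and then selectively re-enables one of them. The object name cluster and the source name redhat-operators are shown as assumptions; they are the usual defaults on OpenShift but are not defined by this API reference:

apiVersion: config.openshift.io/v1
kind: OperatorHub
metadata:
  name: cluster
spec:
  disableAllDefaultSources: true   # per the spec, true disables every default source
  sources:
    - name: redhat-operators       # a sources entry can re-enable a specific source
      disabled: false

The same change can also be applied imperatively, for example:

oc patch operatorhub cluster --type merge -p '{"spec":{"disableAllDefaultSources":true}}'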
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/config_apis/operatorhub-config-openshift-io-v1
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. Let us know how we can improve it. Submitting feedback through Jira (account required) Log in to the Jira website. Click Create in the top navigation bar. Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Click Create at the bottom of the dialogue.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/8.10_release_notes/proc_providing-feedback-on-red-hat-documentation_release-notes
Chapter 12. Managing OpenID Connect and SAML Clients
Chapter 12. Managing OpenID Connect and SAML Clients Clients are entities that can request authentication of a user. Clients come in two forms. The first type of client is an application that wants to participate in single-sign-on. These clients just want Red Hat Single Sign-On to provide security for them. The other type of client is one that is requesting an access token so that it can invoke other services on behalf of the authenticated user. This section discusses various aspects of configuring clients and various ways to do it. 12.1. OIDC clients OpenID Connect is the recommended protocol to secure applications. It was designed from the ground up to be web friendly and it works best with HTML5/JavaScript applications. 12.1.1. Creating an OpenID Connect Client To protect an application that uses the OpenID Connect protocol, you create a client. Procedure Click Clients in the menu. Click Create to go to the Add Client page. Add client Enter any name for Client ID. Select openid-connect in the Client Protocol drop-down box. Enter the base URL of your application in the Root URL field. Click Save . This action creates the client and brings you to the Settings tab. Client settings Additional resources For more information about the OIDC protocol, see OpenID Connect . 12.1.2. Basic settings When you create an OIDC client, you see the following fields on the Settings tab. Client ID The alpha-numeric ID string that is used in OIDC requests and in the Red Hat Single Sign-On database to identify the client. Name The name for the client in the Red Hat Single Sign-On UI. To localize the name, set up a replacement string value. For example, a string value such as ${myapp}. See the Server Developer Guide for more information. Description The description of the client. This setting can also be localized. Enabled When turned off, the client cannot request authentication. Consent Required When turned on, users see a consent page that they can use to grant access to that application. It will also display metadata so the user knows the exact information that the client can access. Access Type The type of OIDC client. Confidential For server-side clients that perform browser logins and require client secrets when making an Access Token Request. This setting should be used for server-side applications. Public For client-side clients that perform browser logins. As it is not possible to ensure that secrets can be kept safe with client-side clients, it is important to restrict access by configuring correct redirect URIs. Bearer-only The application allows only bearer token requests. When turned on, this application cannot participate in browser logins. Standard Flow Enabled When enabled, clients can use the OIDC Authorization Code Flow . Implicit Flow Enabled When enabled, clients can use the OIDC Implicit Flow . Direct Access Grants Enabled When enabled, clients can use the OIDC Direct Access Grants . OAuth 2.0 Device Authorization Grant Enabled If this is on, clients are allowed to use the OIDC Device Authorization Grant . OpenID Connect Client Initiated Backchannel Authentication Grant Enabled If this is on, clients are allowed to use the OIDC Client Initiated Backchannel Authentication Grant . Root URL If Red Hat Single Sign-On uses any configured relative URLs, this value is prepended to them. Valid Redirect URIs Required field. Enter a URL pattern and click + to add and - to remove existing URLs and click Save . Exact (case sensitive) string matching is used to compare valid redirect URIs.
You can use wildcards at the end of the URL pattern. For example http://host.com/path/* . To avoid security issues, if the passed redirect URI contains the userinfo part, or if its path attempts to traverse to a parent directory ( /../ ), no wildcard comparison is performed; instead, the standard and secure exact string matching is used. The full wildcard * valid redirect URI can also be configured to allow any http or https redirect URI. Do not use it in production environments. Exclusive redirect URI patterns are typically more secure. See Unspecific Redirect URIs for more information. Base URL This URL is used when Red Hat Single Sign-On needs to link to the client. Admin URL Callback endpoint for a client. The server uses this URL to make callbacks like pushing revocation policies, performing backchannel logout, and other administrative operations. For Red Hat Single Sign-On servlet adapters, this URL can be the root URL of the servlet application. For more information, see Securing Applications and Services Guide . Logo URL URL that references a logo for the Client application. Policy URL URL that the Relying Party Client provides to the End-User to read about how the profile data will be used. Terms of Service URL URL that the Relying Party Client provides to the End-User to read about the Relying Party's terms of service. Web Origins Enter a URL pattern and click + to add and - to remove existing URLs. Click Save . This option handles Cross-Origin Resource Sharing (CORS) . If browser JavaScript attempts an AJAX HTTP request to a server whose domain is different from the one that the JavaScript code came from, the request must use CORS. The server must handle CORS requests, otherwise the browser will not display or allow the request to be processed. This protocol protects against XSS, CSRF, and other JavaScript-based attacks. Domain URLs listed here are embedded within the access token sent to the client application. The client application uses this information to decide whether to allow a CORS request to be invoked on it. Only Red Hat Single Sign-On client adapters support this feature. See Securing Applications and Services Guide for more information. Front Channel Logout If Front Channel Logout is enabled, the application should be able to log out users through the front channel as per the OpenID Connect Front-Channel Logout specification. If enabled, you should also provide the Front-Channel Logout URL . Front-Channel Logout URL URL that will be used by Red Hat Single Sign-On to send logout requests to clients through the front-channel. Backchannel Logout URL URL that will cause the client to log itself out when a logout request is sent to this realm (via end_session_endpoint). If omitted, no logout requests are sent to the client.
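The same client can also be created programmatically. The following is a minimal sketch of creating an OIDC client through the Admin REST API; the server URL, realm name, admin token, and client values are hypothetical placeholders, not values from this guide:

import requests

BASE = "https://sso.example.com/auth"      # hypothetical server URL
REALM = "demo"                             # hypothetical realm
ADMIN_TOKEN = "<admin access token>"       # obtained beforehand, for example via the admin-cli client

client_representation = {
    "clientId": "my-app",                  # the Client ID shown on the Settings tab
    "protocol": "openid-connect",
    "rootUrl": "https://myapp.example.com",
    "redirectUris": ["https://myapp.example.com/*"],
    "publicClient": False,                 # confidential access type
    "standardFlowEnabled": True,           # OIDC Authorization Code Flow
}

resp = requests.post(
    f"{BASE}/admin/realms/{REALM}/clients",
    json=client_representation,
    headers={"Authorization": f"Bearer {ADMIN_TOKEN}"},
)
resp.raise_for_status()  # 201 Created on success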
12.1.3. Advanced settings When you click Advanced Settings , additional fields are displayed. OAuth 2.0 Mutual TLS Certificate Bound Access Tokens Enabled Note To enable mutual TLS in Red Hat Single Sign-On, see Enable mutual SSL in WildFly . Mutual TLS binds an access token and a refresh token together with a client certificate, which is exchanged during a TLS handshake. This binding prevents an attacker from using stolen tokens. This type of token is a holder-of-key token. Unlike bearer tokens, the recipient of a holder-of-key token can verify if the sender of the token is legitimate. If this setting is on, the workflow is: A token request is sent to the token endpoint in an authorization code flow or hybrid flow. Red Hat Single Sign-On requests a client certificate. Red Hat Single Sign-On receives the client certificate. Red Hat Single Sign-On verifies the client certificate. If verification fails, Red Hat Single Sign-On rejects the token. In the following cases, Red Hat Single Sign-On will verify the client sending the access token or the refresh token: A token refresh request is sent to the token endpoint with a holder-of-key refresh token. A UserInfo request is sent to the UserInfo endpoint with a holder-of-key access token. A logout request is sent to the Logout endpoint with a holder-of-key refresh token. See Mutual TLS Client Certificate Bound Access Tokens in the OAuth 2.0 Mutual TLS Client Authentication and Certificate Bound Access Tokens specification for more details. Note Currently, Red Hat Single Sign-On client adapters do not support holder-of-key token verification. Red Hat Single Sign-On adapters treat access and refresh tokens as bearer tokens.

Advanced Settings for OIDC The Advanced Settings for OpenID Connect allow you to configure overrides at the client level for session and token timeouts . Access Token Lifespan The value overrides the realm option with the same name. Client Session Idle The value overrides the realm option with the same name. The value should be shorter than the global SSO Session Idle . Client Session Max The value overrides the realm option with the same name. The value should be shorter than the global SSO Session Max . Client Offline Session Idle This setting allows you to configure a shorter offline session idle timeout for the client. The timeout is the amount of time the session remains idle before Red Hat Single Sign-On revokes its offline token. If not set, the realm Offline Session Idle is used. Client Offline Session Max This setting allows you to configure a shorter offline session max lifespan for the client. The lifespan is the maximum time before Red Hat Single Sign-On revokes the corresponding offline token. This option needs Offline Session Max Limited enabled globally in the realm, and defaults to Offline Session Max .

Proof Key for Code Exchange Code Challenge Method If an attacker steals an authorization code of a legitimate client, Proof Key for Code Exchange (PKCE) prevents the attacker from receiving the tokens that apply to the code. An administrator can select one of these options: (blank) Red Hat Single Sign-On does not apply PKCE unless the client sends appropriate PKCE parameters to Red Hat Single Sign-On's authorization endpoint. S256 Red Hat Single Sign-On applies PKCE with the S256 code challenge method to the client. plain Red Hat Single Sign-On applies PKCE with the plain code challenge method to the client. See RFC 7636 Proof Key for Code Exchange by OAuth Public Clients for more details.
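To illustrate the S256 option, the following sketch computes the PKCE values defined by RFC 7636; the printed challenge would be sent as code_challenge with code_challenge_method=S256 in the authorization request:

import base64, hashlib, os

def b64url(data: bytes) -> str:
    # Base64url encoding without padding, as required by RFC 7636
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

code_verifier = b64url(os.urandom(32))   # high-entropy secret kept by the client
code_challenge = b64url(hashlib.sha256(code_verifier.encode("ascii")).digest())

# The client sends code_challenge and code_challenge_method=S256 with the
# authorization request, and later proves possession by sending code_verifier
# with the token request.
print(code_challenge)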
Signed and Encrypted ID Token Support Red Hat Single Sign-On can encrypt ID tokens according to the JSON Web Encryption (JWE) specification. The administrator determines if ID tokens are encrypted for each client. The key used for encrypting the ID token is the Content Encryption Key (CEK). Red Hat Single Sign-On and a client must negotiate which CEK is used and how it is delivered. The method used to determine the CEK is the Key Management Mode. The Key Management Mode that Red Hat Single Sign-On supports is Key Encryption. In Key Encryption: The client generates an asymmetric cryptographic key pair. The public key is used to encrypt the CEK. Red Hat Single Sign-On generates a CEK per ID token. Red Hat Single Sign-On encrypts the ID token using this generated CEK. Red Hat Single Sign-On encrypts the CEK using the client's public key. The client decrypts this encrypted CEK using its private key. The client decrypts the ID token using the decrypted CEK. No party, other than the client, can decrypt the ID token. The client must pass its public key for encrypting the CEK to Red Hat Single Sign-On. Red Hat Single Sign-On supports downloading public keys from a URL provided by the client. The client must provide public keys according to the JSON Web Keys (JWK) specification. The procedure is: Open the client's Keys tab. Toggle JWKS URL to ON. Input the client's public key URL in the JWKS URL textbox. Key Encryption's algorithms are defined in the JSON Web Algorithm (JWA) specification. Red Hat Single Sign-On supports: RSAES-PKCS1-v1_5 (RSA1_5) RSAES OAEP using default parameters (RSA-OAEP) RSAES OAEP 256 using SHA-256 and MGF1 (RSA-OAEP-256) The procedure to select the algorithm is: Open the client's Settings tab. Open Fine Grain OpenID Connect Configuration . Select the algorithm from the ID Token Encryption Content Encryption Algorithm pull-down menu.

ACR to Level of Authentication (LoA) Mapping In the advanced settings of a client, you can define which Authentication Context Class Reference (ACR) value is mapped to which Level of Authentication (LoA) . This mapping can also be specified at the realm level as mentioned in the ACR to LoA Mapping . A best practice is to configure this mapping at the realm level, which allows you to share the same settings across multiple clients. The Default ACR Values can be used to specify the default values when the login request is sent from this client to Red Hat Single Sign-On without an acr_values parameter and without a claims parameter that has an acr claim attached. See the official OIDC dynamic client registration specification . Warning Note that default ACR values are used as the default level; however, they cannot be reliably used to enforce login with a particular level. For example, assume that you configure the Default ACR Values to level 2. Then by default, users will be required to authenticate with level 2. However, when the user explicitly attaches the parameter to the login request, such as acr_values=1 , then level 1 will be used. As a result, if the client really requires level 2, the client is encouraged to check the presence of the acr claim inside the ID Token and double-check that it contains the requested level 2. For further details see Step-up Authentication and the official OIDC specification .

12.1.4. Confidential client credentials If the access type of the client is set to confidential , the credentials of the client must be configured under the Credentials tab. Credentials tab The Client Authenticator drop-down list specifies the type of credential to use for your client. Client ID and Secret This choice is the default setting. The secret is automatically generated for you, and clicking Regenerate Secret recreates the secret if necessary. Signed JWT Signed JWT is "Signed JSON Web Token". When choosing this credential type, you will also have to generate a private key and certificate for the client in the Keys tab. The private key will be used to sign the JWT, while the certificate is used by the server to verify the signature. Keys tab Click on the Generate new keys and certificate button to start this process. Generate keys Select the archive format you want to use. Enter a key password .
Enter a store password . Click Generate and Download . When you generate the keys, Red Hat Single Sign-On stores the certificate, and you download the private key and certificate for your client. You can also generate keys using an external tool and then import the client's certificate by clicking Import Certificate . Import certificate Select the archive format of the certificate. Enter the store password. Select the certificate file by clicking Import File . Click Import . Importing a certificate is unnecessary if you click Use JWKS URL . In this case, you can provide the URL where the public key is published in JWK format. With this option, if the key is ever changed, Red Hat Single Sign-On reimports the key. If you are using a client secured by a Red Hat Single Sign-On adapter, you can configure the JWKS URL in this format, assuming that https://myhost.com/myapp is the root URL of your client application: https://myhost.com/myapp/k_jwks See Server Developer Guide for more details. Warning Red Hat Single Sign-On caches public keys of OIDC clients. If the private key of your client is compromised, update your keys and clear the key cache. See the Clearing the cache section for more details. Signed JWT with Client Secret If you select this option, you can use a JWT signed by the client secret instead of the private key. The client secret will be used to sign the JWT by the client. X509 Certificate Red Hat Single Sign-On validates whether the client uses a proper X.509 certificate during the TLS handshake. Note This option requires mutual TLS in Red Hat Single Sign-On. See Enable mutual SSL in WildFly . X509 certificate The validator also checks the Subject DN field of the certificate with a configured regexp validation expression. For some use cases, it is sufficient to accept all certificates. In that case, you can use the (.*?)(?:$) expression. Two ways exist for Red Hat Single Sign-On to obtain the Client ID from the request: The client_id parameter in the query (described in Section 2.2 of the OAuth 2.0 Specification ). Supply client_id as a form parameter.
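To illustrate the Signed JWT authenticator from the client's side, the following is a minimal sketch of a token request that authenticates with a signed JWT assertion; it assumes the PyJWT and requests libraries, and the realm, client ID, and key file name are hypothetical:

import time, uuid
import jwt       # PyJWT
import requests

TOKEN_URL = "https://sso.example.com/auth/realms/demo/protocol/openid-connect/token"
CLIENT_ID = "my-confidential-client"

with open("client-private-key.pem", "rb") as f:
    private_key = f.read()

now = int(time.time())
assertion = jwt.encode(
    {
        "iss": CLIENT_ID,            # issuer and subject are both the client ID
        "sub": CLIENT_ID,
        "aud": TOKEN_URL,            # the audience is the token endpoint
        "jti": str(uuid.uuid4()),    # unique token ID to prevent replay
        "exp": now + 60,
    },
    private_key,
    algorithm="RS256",
)

resp = requests.post(TOKEN_URL, data={
    "grant_type": "client_credentials",
    "client_assertion_type": "urn:ietf:params:oauth:client-assertion-type:jwt-bearer",
    "client_assertion": assertion,
})
print(resp.json())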
12.1.5. Client Secret Rotation Note Client Secret Rotation is Technology Preview and is not fully supported. This feature is disabled by default. To enable it, start the server with -Dkeycloak.profile=preview or -Dkeycloak.profile.feature.client_secret_rotation=enabled . For more details see Profiles . For a client with a Confidential access type, Red Hat Single Sign-On supports rotating client secrets through Client Policies . The client secret rotation policy provides greater security in order to alleviate problems such as secret leakage. Once enabled, Red Hat Single Sign-On supports up to two concurrently active secrets for each client. The policy manages rotations according to the following settings: Secret expiration: [seconds] - When the secret is rotated, this is the expiration time of the new secret. The amount, in seconds , added to the secret creation date. Calculated at policy execution time. Rotated secret expiration: [seconds] - When the secret is rotated, this value is the remaining expiration time for the old secret. This value should always be smaller than Secret expiration. When the value is 0, the old secret will be immediately removed during client rotation. The amount, in seconds , added to the secret rotation date. Calculated at policy execution time. Remaining expiration time for rotation during update: [seconds] - Time period during which an update to a dynamic client should perform client secret rotation. Calculated at policy execution time. When a client secret rotation occurs, a new main secret is generated and the old client main secret becomes the secondary secret with a new expiration date.

12.1.5.1. Rules for client secret rotation Rotations do not occur automatically or through a background process. In order to perform the rotation, an update action is required on the client, either through the Red Hat Single Sign-On Admin Console, using the Regenerate Secret function on the client's Credentials tab, or through the Admin REST API. When invoking a client update action, secret rotation occurs according to the following rules: When the value of Secret expiration is less than the current date. During a dynamic client registration client-update request, the client secret is automatically rotated if the value of Remaining expiration time for rotation during update matches the period between the current date and the Secret expiration . Additionally, it is possible through the Admin REST API to force a client secret rotation at any time. Note During the creation of new clients, if the client secret rotation policy is active, the behavior is applied automatically. Warning To apply the secret rotation behavior to an existing client, update that client after you define the policy so that the behavior is applied.

12.1.6. Creating an OIDC Client Secret Rotation Policy The following is an example of defining a secret rotation policy: Procedure Click Realm Settings in the left menu. Click the Client Policies tab. On the Profiles page, click Create . Create a profile Enter any name for Name . Enter a description that helps you identify the purpose of the profile for Description . Click Save . This action creates the profile and enables you to configure executors. Click Create to configure an executor for this profile. Create a profile executors Select secret-rotation for Executor Type . Enter the maximum duration time of each secret, in seconds, for Secret Expiration . Enter the maximum duration time of each rotated secret, in seconds, for Rotated Secret Expiration . Warning Remember that the Rotated Secret Expiration value must always be less than Secret Expiration . Enter the amount of time, in seconds, after which any update action will update the client, for Remain Expiration Time . Click Save . Note In the example above: Each secret is valid for one week. The rotated secret expires after two days. The window for updating dynamic clients starts one day before the secret expires. Return to the Client Policies tab. Click Policies . Click Create . Create the Client Secret Rotation Policy Enter any name for Name . Enter a description that helps you identify the purpose of the policy for Description . Click Save . This action creates the policy and enables you to associate policies with profiles. It also allows you to configure the conditions for policy execution. Under Conditions, click Create . Create the Client Secret Rotation Policy Condition To apply the behavior to all confidential clients, select client-access-type in the Condition Type field. Note To apply to a specific group of clients, another approach would be to select the client-roles type in the Condition Type field. In this way, you could create specific roles and assign a custom rotation configuration to each role. Add confidential to the field Client Access Type . Click Save . Back in the policy setting, under Client Profiles , in the Add client profile selection menu, select the profile Weekly Client Secret Rotation Profile created earlier. Client Secret Rotation Policy
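Since an update action such as Regenerate Secret triggers the rotation, the rotation can also be forced programmatically. The following is a minimal sketch using the Admin REST API client-secret endpoint; the server URL, realm, internal client UUID, and admin token are assumptions:

import requests

BASE = "https://sso.example.com/auth"
REALM = "demo"
CLIENT_UUID = "<internal client id (UUID), not the clientId>"
ADMIN_TOKEN = "<admin access token>"

resp = requests.post(
    f"{BASE}/admin/realms/{REALM}/clients/{CLIENT_UUID}/client-secret",
    headers={"Authorization": f"Bearer {ADMIN_TOKEN}"},
)
resp.raise_for_status()
# The response carries the new secret; with the rotation policy active,
# the old secret remains valid as the secondary secret until it expires.
print(resp.json())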
Note To apply the secret rotation behavior to an existing client, follow these steps: Using the Admin Console Go to the client. Go to the Credentials tab. Click Re-generate secret . Using client REST services it can be executed in two ways: Through an update operation on a client Through the regenerate client secret endpoint

12.1.7. Using a service account Each OIDC client has a built-in service account . Use this service account to obtain an access token. Procedure Click Clients in the menu. Select your client. Click the Settings tab. Set the Access Type of your client to confidential . Toggle Service Accounts Enabled to ON . Click Save . Configure your client credentials . Click the Scope tab. Verify that you have roles or toggle Full Scope Allowed to ON . Click the Service Account Roles tab. Configure the roles available to this service account for your client. Roles from access tokens are the intersection of: Role scope mappings of a client combined with the role scope mappings inherited from linked client scopes. Service account roles. The REST URL to invoke is /auth/realms/{realm-name}/protocol/openid-connect/token . This URL must be invoked as a POST request and requires that you post the client credentials with the request. By default, client credentials are represented by the clientId and clientSecret of the client in the Authorization: Basic header, but you can also authenticate the client with a signed JWT assertion or any other custom mechanism for client authentication. You also need to set the grant_type parameter to "client_credentials" as per the OAuth2 specification. For example, the POST invocation to retrieve a service account can look like this:

POST /auth/realms/demo/protocol/openid-connect/token
Authorization: Basic cHJvZHVjdC1zYS1jbGllbnQ6cGFzc3dvcmQ=
Content-Type: application/x-www-form-urlencoded

grant_type=client_credentials

The response would be similar to this Access Token Response from the OAuth 2.0 specification:

HTTP/1.1 200 OK
Content-Type: application/json;charset=UTF-8
Cache-Control: no-store
Pragma: no-cache

{
  "access_token":"2YotnFZFEjr1zCsicMWpAA",
  "token_type":"bearer",
  "expires_in":60
}

Only the access token is returned by default. No refresh token is returned and no user session is created on the Red Hat Single Sign-On side upon successful authentication by default. Due to the lack of a refresh token, re-authentication is required when the access token expires. However, this situation does not mean any additional overhead for the Red Hat Single Sign-On server because sessions are not created by default. In this situation, logout is unnecessary. However, issued access tokens can be revoked by sending requests to the OAuth2 Revocation Endpoint as described in the OpenID Connect Endpoints section. Additional resources For more details, see Client Credentials Grant .

12.1.8. Audience support Typically, the environment where Red Hat Single Sign-On is deployed consists of a set of confidential or public client applications that use Red Hat Single Sign-On for authentication. Services ( Resource Servers in the OAuth 2 specification ) are also available that serve requests from client applications and provide resources to these applications. These services require an Access token (Bearer token) to be sent to them to authenticate a request. This token is obtained by the frontend application upon login to Red Hat Single Sign-On. In an environment where trust among services is low, you may encounter this scenario: A frontend client application requires authentication against Red Hat Single Sign-On. Red Hat Single Sign-On authenticates a user. Red Hat Single Sign-On issues a token to the application. The application uses the token to invoke an untrusted service. The untrusted service returns the response to the application. However, it keeps the application's token.
The untrusted service then invokes a trusted service using the application's token. This results in broken security as the untrusted service misuses the token to access other services on behalf of the client application. This scenario is unlikely in environments with a high level of trust between services but not in environments where trust is low. In some environments, this workflow may be correct as the untrusted service may have to retrieve data from a trusted service to return data to the original client application. An unlimited audience is useful when a high level of trust exists between services. Otherwise, the audience should be limited. You can limit the audience and, at the same time, allow untrusted services to retrieve data from trusted services. In this case, ensure that the untrusted service and the trusted service are added as audiences to the token. To prevent any misuse of the access token, limit the audience on the token and configure your services to verify the audience on the token. The flow will change as follows: A frontend application authenticates against Red Hat Single Sign-On. Red Hat Single Sign-On authenticates a user. Red Hat Single Sign-On issues a token to the application. The application knows that it will need to invoke an untrusted service so it places scope=<untrusted service> in the authentication request sent to Red Hat Single Sign-On (see the Client Scopes section for more details about the scope parameter). The token issued to the application contains a reference to the untrusted service in its audience ( "audience": [ "<untrusted service>" ] ) which declares that the client uses this access token to invoke the untrusted service. The untrusted service invokes a trusted service with the token. Invocation is not successful because the trusted service checks the audience on the token and finds that its audience is only for the untrusted service. This behavior is expected and security is not broken. If the client wants to invoke the trusted service later, it must obtain another token by reissuing the SSO login with scope=<trusted service> . The returned token will then contain the trusted service as an audience: "audience": [ "<trusted service>" ] Use this value to invoke the <trusted service> .

12.1.8.1. Setup When setting up audience checking: Ensure that services are configured to check the audience on the access token sent to them by adding the flag verify-token-audience in the adapter configuration. See Adapter configuration for details. Ensure that access tokens issued by Red Hat Single Sign-On contain all necessary audiences. Audiences can be added using the client roles as described in the following section, or hardcoded. See Hardcoded audience .

12.1.8.2. Automatically add audience An Audience Resolve protocol mapper is defined in the default client scope roles . The mapper checks for clients that have at least one client role available for the current token. The client ID of each client is then added as an audience, which is useful if your service (usually bearer-only) clients rely on client roles. For example, for a bearer-only client and a confidential client, you can use the access token issued for the confidential client to invoke the bearer-only client REST service. The bearer-only client will be automatically added as an audience to the access token issued for the confidential client if the following are true: The bearer-only client has any client roles defined on itself. The target user has at least one of those client roles assigned.
The confidential client has the role scope mappings for the assigned role. Note If you want to ensure that the audience is not added automatically, do not configure role scope mappings directly on the confidential client. Instead, you can create a dedicated client scope that contains the role scope mappings for those client roles. Assuming that the client scope is added as an optional client scope to the confidential client, the client roles and the audience will be added to the token if explicitly requested by the scope=<trusted service> parameter. Note The frontend client itself is not automatically added to the access token audience, therefore allowing easy differentiation between the access token and the ID token, since the access token will not contain the client for which the token is issued as an audience. If you need the client itself as an audience, see the hardcoded audience option. However, using the same client as both frontend and REST service is not recommended.

12.1.8.3. Hardcoded audience When your service relies on realm roles or does not rely on the roles in the token at all, it can be useful to use a hardcoded audience. A hardcoded audience is a protocol mapper that adds the client ID of the specified service client as an audience to the token. You can use any custom value, for example a URL, if you want to use a different audience than the client ID. You can add the protocol mapper directly to the frontend client. If the protocol mapper is added directly, the audience will always be added as well. For more control over the protocol mapper, you can create the protocol mapper on a dedicated client scope, called for example good-service . Audience protocol mapper From the Installation tab of the good-service client, you can generate the adapter configuration and confirm that the verify-token-audience option is set to true . This forces the adapter to verify the audience if you use this configuration. You need to ensure that the confidential client is able to request good-service as an audience in its tokens. On the confidential client: Click the Client Scopes tab. Assign good-service as an optional (or default) client scope. See the Client Scopes Linking section for more details. You can optionally Evaluate Client Scopes and generate an example access token. good-service will be added to the audience of the generated access token if good-service is included in the scope parameter, since you assigned it as an optional client scope. In your confidential client application, ensure that the scope parameter is used. The value good-service must be included when you want to issue the token for accessing good-service . See: the parameters forwarding section if your application uses the servlet adapter. the JavaScript adapter section if your application uses the JavaScript adapter. Note Both the Audience and Audience Resolve protocol mappers add the audiences to the access token only, by default. The ID Token typically contains only a single audience, the client ID for which the token was issued, a requirement of the OpenID Connect specification. However, the access token does not necessarily contain the client ID for which the token was issued, unless audience mappers added it.
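On the service side, audience checking outside of a Red Hat Single Sign-On adapter can be done when validating the token. The following is a minimal sketch using the PyJWT library; the expected audience and the key source are hypothetical placeholders:

import jwt  # PyJWT

EXPECTED_AUDIENCE = "good-service"   # this service's client ID
PUBLIC_KEY = "<realm public key, for example fetched from the realm JWKS endpoint>"

def verify(token: str) -> dict:
    # jwt.decode raises an exception if the aud claim does not
    # contain the expected audience, or if the signature is invalid
    return jwt.decode(
        token,
        PUBLIC_KEY,
        algorithms=["RS256"],
        audience=EXPECTED_AUDIENCE,
    )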
12.2. Creating a SAML client Red Hat Single Sign-On supports SAML 2.0 for registered applications. POST and Redirect bindings are supported. You can choose to require client signature validation. You can have the server sign and/or encrypt responses as well. Procedure Click Clients in the menu. Click Create to go to the Add Client page. Add client Enter the Client ID of the client. This is often a URL and is the expected issuer value in SAML requests sent by the application. Select saml in the Client Protocol drop-down box. Enter the Client SAML Endpoint URL. This URL is where you want the Red Hat Single Sign-On server to send SAML requests and responses. Generally, applications have one URL for processing SAML requests. Multiple URLs can be set in the Settings tab of the client. Click Save . This action creates the client and brings you to the Settings tab. Client settings The following list describes each setting: Client ID The alpha-numeric ID string that is used in SAML requests and in the Red Hat Single Sign-On database to identify the client. This value must match the issuer value sent with AuthNRequests. Red Hat Single Sign-On pulls the issuer from the Authn SAML request and matches it to a client by this value. Name The name for the client in the Red Hat Single Sign-On UI. To localize the name, set up a replacement string value. For example, a string value such as ${myapp}. See the Server Developer Guide for more information. Description The description of the client. This setting can also be localized. Enabled When set to OFF, the client cannot request authentication. Consent Required When set to ON, users see a consent page that grants access to that application. The page also displays the metadata of the information that the client can access. If you have ever done a social login to Facebook, you often see a similar page. Red Hat Single Sign-On provides the same functionality. Include AuthnStatement SAML login responses may specify the authentication method used, such as password, as well as timestamps of the login and the session expiration. Include AuthnStatement is enabled by default, so that the AuthnStatement element will be included in login responses. Setting this to OFF prevents clients from determining the maximum session length, which can create client sessions that do not expire. Sign Documents When set to ON, Red Hat Single Sign-On signs the document using the realm's private key. Optimize REDIRECT signing key lookup When set to ON, the SAML protocol messages include the Red Hat Single Sign-On native extension. This extension contains a hint with the signing key ID. The SP uses the extension for signature validation instead of attempting to validate the signature using keys. This option applies to REDIRECT bindings, where the signature is transferred in query parameters and this information is not found in the signature information. This is contrary to POST binding messages, where the key ID is always included in the document signature. This option is used when the Red Hat Single Sign-On server and adapter provide the IDP and SP. This option is only relevant when Sign Documents is set to ON. Sign Assertions The assertion is signed and embedded in the SAML XML Auth response. Signature Algorithm The algorithm used in signing SAML documents. SAML Signature Key Name Signed SAML documents sent using POST binding contain the identification of the signing key in the KeyName element. This action can be controlled by the SAML Signature Key Name option. This option controls the contents of the KeyName . KEY_ID The KeyName contains the key ID. This option is the default option. CERT_SUBJECT The KeyName contains the subject from the certificate corresponding to the realm key.
This option is expected by Microsoft Active Directory Federation Services. NONE The KeyName hint is completely omitted from the SAML message. Canonicalization Method The canonicalization method for XML signatures. Encrypt Assertions Encrypts the assertions in SAML documents with the realm's private key. The AES algorithm uses a key size of 128 bits. Client Signature Required If Client Signature Required is enabled, documents coming from a client are expected to be signed. Red Hat Single Sign-On validates this signature using the client public key or certificate set up in the Keys tab. Force POST Binding By default, Red Hat Single Sign-On responds using the initial SAML binding of the original request. By enabling Force POST Binding , Red Hat Single Sign-On responds using the SAML POST binding even if the original request used the redirect binding. Front Channel Logout If Front Channel Logout is enabled, the application requires a browser redirect to perform a logout. For example, the application may require a cookie to be reset, which could only be done via a redirect. If Front Channel Logout is disabled, Red Hat Single Sign-On invokes a background SAML request to log out of the application. Force Name ID Format If a request has a name ID policy, ignore it and use the value configured in the Admin Console under Name ID Format . Allow ECP Flow If true, this application is allowed to use the SAML ECP profile for authentication. Name ID Format The Name ID Format for the subject. This format is used if no name ID policy is specified in a request, or if the Force Name ID Format attribute is set to ON. Root URL When Red Hat Single Sign-On uses a configured relative URL, this value is prepended to the URL. Valid Redirect URIs Enter a URL pattern and click the + sign to add. Click the - sign to remove. Click Save to save these changes. Wildcard values are allowed only at the end of a URL. For example, http://host.com/* . This field is used when the exact SAML endpoints are not registered and Red Hat Single Sign-On pulls the Assertion Consumer URL from a request. Base URL If Red Hat Single Sign-On needs to link to a client, this URL is used. Logo URL URL that references a logo for the Client application. Policy URL URL that the Relying Party Client provides to the End-User to read about how the profile data will be used. Terms of Service URL URL that the Relying Party Client provides to the End-User to read about the Relying Party's terms of service. Master SAML Processing URL This URL is used for all SAML requests and the response is directed to the SP. It is used as the Assertion Consumer Service URL and the Single Logout Service URL. If login requests contain the Assertion Consumer Service URL, those login requests take precedence. This URL must be validated by a registered Valid Redirect URI pattern. Assertion Consumer Service POST Binding URL POST Binding URL for the Assertion Consumer Service. Assertion Consumer Service Redirect Binding URL Redirect Binding URL for the Assertion Consumer Service. Logout Service POST Binding URL POST Binding URL for the Logout Service. Logout Service Redirect Binding URL Redirect Binding URL for the Logout Service. Logout Service Artifact Binding URL Artifact Binding URL for the Logout Service. When set together with the Force Artifact Binding option, Artifact binding is forced for both login and logout flows. Artifact binding is not used for logout unless this property is set. Artifact Binding URL URL to send the HTTP artifact messages to.
Artifact Resolution Service URL of the client SOAP endpoint to which ArtifactResolve messages are sent.

12.2.1. IDP Initiated login IDP Initiated Login is a feature that allows you to set up an endpoint on the Red Hat Single Sign-On server that will log you into a specific application/client. In the Settings tab for your client, you need to specify the IDP Initiated SSO URL Name . This is a simple string with no whitespace in it. After this you can reference your client at the following URL: root/auth/realms/{realm}/protocol/saml/clients/{url-name} The IDP initiated login implementation prefers POST over REDIRECT binding (see SAML bindings for more information). Therefore the final binding and SP URL are selected in the following way: If the specific Assertion Consumer Service POST Binding URL is defined (inside the Fine Grain SAML Endpoint Configuration section of the client settings), POST binding is used through that URL. If the general Master SAML Processing URL is specified, then POST binding is used again through this general URL. As a last resort, if the Assertion Consumer Service Redirect Binding URL is configured (inside Fine Grain SAML Endpoint Configuration ), REDIRECT binding is used with this URL. If your client requires a special relay state, you can also configure this on the Settings tab in the IDP Initiated SSO Relay State field. Alternatively, browsers can specify the relay state in a RelayState query parameter, for example root/auth/realms/{realm}/protocol/saml/clients/{url-name}?RelayState=thestate . When using identity brokering , it is possible to set up an IDP Initiated Login for a client from an external IDP. The actual client is set up for IDP Initiated Login at the broker IDP as described above. The external IDP has to set up a client for application IDP Initiated Login that points to a special URL at the broker, representing the IDP Initiated Login endpoint for the selected client at the brokering IDP. This means that in client settings at the external IDP: IDP Initiated SSO URL Name is set to a name that will be published as the IDP Initiated Login initial point. Assertion Consumer Service POST Binding URL in the Fine Grain SAML Endpoint Configuration section has to be set to the following URL: broker-root/auth/realms/{broker-realm}/broker/{idp-name}/endpoint/clients/{client-id} , where: broker-root is the base broker URL. broker-realm is the name of the realm at the broker where the external IDP is declared. idp-name is the name of the external IDP at the broker. client-id is the value of the IDP Initiated SSO URL Name attribute of the SAML client defined at the broker. It is this client that will be made available for IDP Initiated Login from the external IDP. Please note that you can import basic client settings from the brokering IDP into client settings of the external IDP - just use the SP Descriptor available from the settings of the identity provider in the brokering IDP, and add clients/ client-id to the endpoint URL.

12.2.2. Using an entity descriptor to create a client Instead of registering a SAML 2.0 client manually, you can import the client using a standard SAML Entity Descriptor XML file. The Add Client page includes an Import option. Add client Procedure Click Select File . Load the file that contains the XML entity descriptor information. Review the information to ensure everything is set up correctly. Some SAML client adapters, such as mod-auth-mellon , need the XML Entity Descriptor for the IDP.
You can find this descriptor by going to this URL:

root/auth/realms/{realm}/protocol/saml/descriptor

where realm is the realm of your client.

12.3. Client links To link from one client to another, Red Hat Single Sign-On provides a redirect endpoint: /realms/realm_name/clients/{client-id}/redirect . If a client accesses this endpoint using an HTTP GET request, Red Hat Single Sign-On returns the configured base URL for the provided Client and Realm in the form of an HTTP 307 (Temporary Redirect) in the response's Location header. As a result, a client only needs to know the realm name and the client ID to link to another client. This indirection avoids hard-coding client base URLs. As an example, given the realm master and the client-id account :

http://host:port/auth/realms/master/clients/account/redirect

This URL temporarily redirects to: http://host:port/auth/realms/master/account

12.4. OIDC token and SAML assertion mappings Applications receiving ID tokens, access tokens, or SAML assertions may require different roles and user metadata. You can use Red Hat Single Sign-On to: Hardcode roles, claims and custom attributes. Pull user metadata into a token or assertion. Rename roles. You perform these actions in the Mappers tab in the Admin Console. Mappers tab New clients do not have built-in mappers, but they can inherit some mappers from client scopes. See the client scopes section for more details. Protocol mappers map items (such as an email address, for example) to a specific claim in the identity and access token. The function of a mapper should be self-explanatory from its name. You add pre-configured mappers by clicking Add Builtin . Each mapper has a set of common settings. Additional settings are available, depending on the mapper type. Click Edit next to a mapper to access the configuration screen to adjust these settings. Mapper config Details on each option can be viewed by hovering over its tooltip. You can use most OIDC mappers to control where the claim gets placed. You opt to include or exclude the claim from the ID and access tokens by adjusting the Add to ID token and Add to access token switches. You can add mapper types as follows: Procedure Go to the Mappers tab. Click Create . Add mapper Select a Mapper Type from the list box.

12.4.1. Priority order Mapper implementations have a priority order . Priority order is not a configuration property of the mapper. It is a property of the concrete implementation of the mapper. Mappers are sorted by the order in the list of mappers. The changes in the token or assertion are applied in that order, with the lowest applying first. Therefore, implementations that are dependent on other implementations are processed in the necessary order. For example: first, the roles which will be included with a token are computed; then, audiences are resolved based on those roles; finally, a JavaScript script that uses the roles and audiences already available in the token is processed.

12.4.2. OIDC user session note mappers User session details are defined using mappers and are automatically included when you use or enable a feature on a client. Click Add builtin to include session details. Impersonated user sessions provide the following details: IMPERSONATOR_ID : The ID of an impersonating user. IMPERSONATOR_USERNAME : The username of an impersonating user. Service account sessions provide the following details: clientId : The client ID of the service account. clientAddress : The remote host IP of the service account's authenticated device. clientHost : The remote host name of the service account's authenticated device.
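Mappers can also be managed programmatically. The following is a minimal sketch that adds a hardcoded claim mapper to a client through the Admin REST API; the mapper name, claim values, client UUID, and token are assumptions for illustration:

import requests

BASE = "https://sso.example.com/auth"
REALM = "demo"
CLIENT_UUID = "<internal client id (UUID)>"
ADMIN_TOKEN = "<admin access token>"

mapper = {
    "name": "tenant-claim",
    "protocol": "openid-connect",
    "protocolMapper": "oidc-hardcoded-claim-mapper",
    "config": {
        "claim.name": "tenant",
        "claim.value": "acme",
        "jsonType.label": "String",
        "access.token.claim": "true",   # place the claim in the access token
        "id.token.claim": "false",      # but not in the ID token
    },
}

resp = requests.post(
    f"{BASE}/admin/realms/{REALM}/clients/{CLIENT_UUID}/protocol-mappers/models",
    json=mapper,
    headers={"Authorization": f"Bearer {ADMIN_TOKEN}"},
)
resp.raise_for_status()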
12.4.3. Script mapper Use the Script Mapper to map claims to tokens by running user-defined JavaScript code. For more details about deploying scripts to the server, see JavaScript Providers . When scripts deploy, you should be able to select the deployed scripts from the list of available mappers.

12.5. Generating client adapter config Red Hat Single Sign-On can generate configuration files that you can use to install a client adapter in your application's deployment environment. A number of adapter types are supported for OIDC and SAML. Go to the Installation tab of the client you want to generate configuration for. Select the Format Option you want configuration generated for. All Red Hat Single Sign-On client adapters for OIDC and SAML are supported. The mod-auth-mellon Apache HTTPD adapter for SAML is supported, as well as standard SAML entity descriptor files.

12.6. Client scopes Use Red Hat Single Sign-On to define a shared client configuration in an entity called a client scope . A client scope configures protocol mappers and role scope mappings for multiple clients. Client scopes also support the OAuth 2 scope parameter. Client applications use this parameter to request claims or roles in the access token, depending on the requirement of the application. To create a client scope, follow these steps: Click Client Scopes in the menu. Client scopes list Click Create . Name your client scope. Click Save . A client scope has similar tabs to regular clients. You can define protocol mappers and role scope mappings . These mappings can be inherited by other clients that are configured to inherit from this client scope.

12.6.1. Protocol When you create a client scope, choose the Protocol . Clients linked in the same scope must have the same protocol. Each realm has a set of pre-defined built-in client scopes in the menu. SAML protocol: The role_list . This scope contains one protocol mapper for the roles list in the SAML assertion. OpenID Connect protocol: Several client scopes are available: roles This scope is not defined in the OpenID Connect specification and is not added automatically to the scope claim in the access token. This scope has mappers, which are used to add the roles of the user to the access token and add audiences for clients that have at least one client role. These mappers are described in more detail in the Audience section . web-origins This scope is also not defined in the OpenID Connect specification and is not added to the scope claim in the access token. This scope is used to add allowed web origins to the access token allowed-origins claim. microprofile-jwt This scope handles claims defined in the MicroProfile/JWT Auth Specification . This scope defines a user property mapper for the upn claim and a realm role mapper for the groups claim. These mappers can be changed so different properties can be used to create the MicroProfile/JWT specific claims. offline_access This scope is used in cases when clients need to obtain offline tokens. More details on offline tokens are available in the Offline Access section and in the OpenID Connect specification . profile email address phone The client scopes profile , email , address and phone are defined in the OpenID Connect specification . These scopes do not have any role scope mappings defined, but they do have protocol mappers defined. These mappers correspond to the claims defined in the OpenID Connect specification.
For example, when you open the phone client scope and open the Mappers tab, you will see the protocol mappers which correspond to the claims defined in the specification for the scope phone . Client scope mappers When the phone client scope is linked to a client, the client automatically inherits all the protocol mappers defined in the phone client scope. Access tokens issued for this client contain the phone number information about the user, assuming that the user has a defined phone number. Built-in client scopes contain the protocol mappers as defined in the specification. You are free to edit client scopes and create, update, or remove any protocol mappers or role scope mappings.

12.6.2. Consent related settings Client scopes contain options related to the consent screen. Those options are useful when Consent Required is enabled on the linked client. Display On Consent Screen If Display On Consent Screen is enabled, and the scope is added to a client that requires consent, the text specified in Consent Screen Text will be displayed on the consent screen. This text is shown when the user is authenticated and before the user is redirected from Red Hat Single Sign-On to the client. If Display On Consent Screen is disabled, this client scope will not be displayed on the consent screen. Consent Screen Text The text displayed on the consent screen when this client scope is added to a client with consent required. It defaults to the name of the client scope. The value for this text can be customised by specifying a substitution variable with ${var-name} strings. The customised value is configured within the property files in your theme. See the Server Developer Guide for more information on customisation.

12.6.3. Link client scope with the client Linking between a client scope and a client is configured in the Client Scopes tab of the client. Two ways of linking between client scope and client are available. Default Client Scopes This setting is applicable to the OpenID Connect and SAML clients. Default client scopes are applied when issuing OpenID Connect tokens or SAML assertions for a client. The client will inherit Protocol Mappers and Role Scope Mappings that are defined on the client scope. For the OpenID Connect Protocol, the Mappers and Role Scope Mappings are always applied, regardless of the value used for the scope parameter in the OpenID Connect authorization request. Optional Client Scopes This setting is applicable only for OpenID Connect clients. Optional client scopes are applied when issuing tokens for this client, but only when requested by the scope parameter in the OpenID Connect authorization request.

12.6.3.1. Example For this example, assume the client has profile and email linked as default client scopes, and phone and address linked as optional client scopes. The client uses the value of the scope parameter when sending a request to the OpenID Connect authorization endpoint. scope=openid phone The scope parameter contains the string, with the scope values divided by spaces. The value openid is the meta-value used for all OpenID Connect requests. The token will contain mappers and role scope mappings from the default client scopes profile and email , as well as phone , an optional client scope requested by the scope parameter.
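To show how an application would request the optional phone scope, the following sketch builds the authorization request URL; the server URL, realm, client ID, redirect URI, and state value are hypothetical:

from urllib.parse import urlencode

BASE = "https://sso.example.com/auth"
REALM = "demo"

params = {
    "client_id": "my-app",
    "response_type": "code",
    "redirect_uri": "https://myapp.example.com/callback",
    # openid is always included; phone requests the optional client scope
    "scope": "openid phone",
    "state": "af0ifjsldkj",   # opaque anti-CSRF value chosen by the client
}

auth_url = f"{BASE}/realms/{REALM}/protocol/openid-connect/auth?" + urlencode(params)
print(auth_url)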
12.6.4. Evaluating Client Scopes The Mappers tab contains the protocol mappers and the Scope tab contains the role scope mappings declared for this client. They do not contain the mappers and scope mappings inherited from client scopes. It is possible to see the effective protocol mappers (that is, the protocol mappers defined on the client itself as well as inherited from the linked client scopes) and the effective role scope mappings used when generating a token for a client. Procedure Click the Client Scopes tab for the client. Open the sub-tab Evaluate . Select the optional client scopes that you want to apply. This will also show you the value of the scope parameter. This parameter needs to be sent from the application to the Red Hat Single Sign-On OpenID Connect authorization endpoint. Evaluating client scopes Note To send a custom value for a scope parameter from your application, see the parameters forwarding section for servlet adapters, or the JavaScript adapter section for JavaScript adapters. All examples are generated for the particular user and issued for the particular client, with the specified value of the scope parameter. The examples include all of the claims and role mappings used.

12.6.5. Client scopes permissions When issuing tokens to a user, the client scope applies only if the user is permitted to use it. When a client scope does not have any role scope mappings defined, each user is permitted to use this client scope. However, when a client scope has role scope mappings defined, the user must be a member of at least one of the roles. There must be an intersection between the user roles and the roles of the client scope. Composite roles are factored into evaluating this intersection. If a user is not permitted to use the client scope, no protocol mappers or role scope mappings will be used when generating tokens. The client scope will not appear in the scope value in the token.

12.6.6. Realm default client scopes Use Realm Default Client Scopes to define sets of client scopes that are automatically linked to newly created clients. Procedure Click the Client Scopes tab for the client. Click Default Client Scopes . From here, select the client scopes that you want to add as Default Client Scopes to newly created clients and Optional Client Scopes . Default client scopes When a client is created, you can unlink the default client scopes, if needed. This is similar to removing Default Roles .

12.6.7. Scopes explained Client scope Client scopes are entities in Red Hat Single Sign-On that are configured at the realm level and can be linked to clients. Client scopes are referenced by their name when a request is sent to the Red Hat Single Sign-On authorization endpoint with a corresponding value of the scope parameter. See the client scopes linking section for more details. Role scope mapping This is available under the Scope tab of a client or client scope. Use Role scope mapping to limit the roles that can be used in the access tokens. See the Role Scope Mappings section for more details.

12.7. Client Policies To make it easy to secure client applications, it is beneficial to realize the following points in a unified way. Setting policies on what configuration a client can have Validation of client configurations Conformance to required security standards and profiles such as Financial-grade API (FAPI) To realize these points in a unified way, the Client Policies concept is introduced.

12.7.1. Use-cases Client Policies realize the following points.
Setting policies on what configuration a client can have Configuration settings on the client can be enforced by client policies during client creation/update, but also during OpenID Connect requests to the Red Hat Single Sign-On server that are related to a particular client. Red Hat Single Sign-On supports a similar capability through the Client Registration Policies described in the Securing Applications and Services Guide . However, Client Registration Policies can only cover OIDC Dynamic Client Registration. Client Policies cover not only what Client Registration Policies can do, but also other ways of registering and configuring clients. The current plans are for Client Registration Policies to be replaced by Client Policies. Validation of client configurations Red Hat Single Sign-On supports validation of whether the client follows settings like Proof Key for Code Exchange, Request Object Signing Algorithm, Holder-of-Key Token, and so on, on endpoints like the Authorization Endpoint, Token Endpoint, and so on. These can be specified by each setting item (in the Admin Console, as a switch, pull-down menu, and so on). To make the client application secure, the administrator needs to configure many settings in the appropriate way, which makes it difficult for the administrator to secure the client application. Client Policies can perform these validations of client configurations mentioned just above, and they can also be used to auto-configure some client configuration switches to meet the advanced security requirements. In the future, individual client configuration settings may be replaced by Client Policies directly performing required validations. Conformance to required security standards and profiles such as FAPI The Global client profiles are client profiles pre-configured in Red Hat Single Sign-On by default. They are pre-configured to be compliant with standard security profiles like FAPI , which makes it easy for the administrator to secure their client application to be compliant with the particular security profile. At this moment, Red Hat Single Sign-On has global profiles for the support of the FAPI 1 specification. The administrator will just need to configure the client policies to specify which clients should be compliant with FAPI. The administrator can configure client profiles and client policies, so that Red Hat Single Sign-On clients can be easily made compliant with various other security profiles like SPA, Native App, Open Banking and so on.

12.7.2. Protocol The client policy concept is independent of any specific protocol. However, Red Hat Single Sign-On currently supports it only for the OpenID Connect (OIDC) protocol .

12.7.3. Architecture Client Policies consist of four building blocks: Condition, Executor, Profile and Policy.

12.7.3.1. Condition A condition determines to which client a policy is adopted and when it is adopted. Some conditions are checked at the time of client creation/update, while other conditions are checked during client requests (OIDC Authorization request, Token endpoint request and so on). The condition checks whether one specified criterion is satisfied. For example, some condition checks whether the access type of the client is confidential. The condition cannot be used solely by itself. It can be used in a policy, which is described afterwards. A condition is configurable, the same as other configurable providers. What can be configured depends on each condition's nature.
The following conditions are provided: The way of creating/updating a client Dynamic Client Registration (Anonymous or Authenticated with Initial access token or Registration access token) Admin REST API (Admin Console and so on) For example, when creating a client, a condition can be configured to evaluate to true when the client is created by OIDC Dynamic Client Registration without an initial access token (Anonymous Dynamic Client Registration). This condition can be used, for example, to ensure that all clients registered through OIDC Dynamic Client Registration are FAPI compliant. Author of a client (checked by the presence of a particular role or group) On OpenID Connect dynamic client registration, an author of a client is the end user who was authenticated to get an access token for generating a new client, not the service account of the existing client that actually accesses the registration endpoint with the access token. On registration by the Admin REST API, an author of a client is the end user, such as the administrator of the Red Hat Single Sign-On server. Client Access Type (confidential, public, bearer-only) For example, when a client sends an authorization request, a policy is adopted if this client is confidential. Client Scope Evaluates to true if the client has a particular client scope (either as a default or as an optional scope used in the current request). This can be used, for example, to ensure that OIDC authorization requests with scope fapi-example-scope need to be FAPI compliant. Client Role Applies to clients with the client role of the specified name. Client Domain Name, Host or IP Address Applies to specific domain names of a client, or to cases when the administrator registers/updates a client from a particular host or IP address. Any Client This condition always evaluates to true. It can be used, for example, to ensure that all clients in the particular realm are FAPI compliant.

12.7.3.2. Executor An executor specifies what action is executed on a client to which a policy is adopted. The executor executes one or several specified actions. For example, some executor checks whether the value of the parameter redirect_uri in the authorization request matches exactly one of the pre-registered redirect URIs on the Authorization Endpoint, and rejects the request if not. The executor cannot be used solely by itself. It can be used in a profile, which is described afterwards. An executor is configurable, the same as other configurable providers. What can be configured depends on the nature of each executor. An executor acts on various events. An executor implementation can ignore certain types of events (for example, an executor for checking the OIDC request object acts only on the OIDC authorization request). Events are: Creating a client (including creation through dynamic client registration) Updating a client Sending an authorization request Sending a token request Sending a token refresh request Sending a token revocation request Sending a token introspection request Sending a userinfo request Sending a logout request with a refresh token On each event, an executor can work in multiple phases. For example, on creating/updating a client, the executor can modify the client configuration by auto-configuring specific client settings. After that, the executor validates this configuration in the validation phase. One of several purposes of this executor is to realize the security requirements of client conformance profiles like FAPI.
To do so, the following executors are needed: Enforce that a secure client authentication method is used for the client Enforce that Holder-of-key tokens are used Enforce that Proof Key for Code Exchange (PKCE) is used Enforce that a secure signature algorithm for Signed JWT client authentication (private-key-jwt) is used Enforce an HTTPS redirect URI and make sure that the configured redirect URI does not contain wildcards Enforce an OIDC request object satisfying a high security level Enforce the Response Type of the OIDC Hybrid Flow including an ID Token used as detached signature as described in the FAPI 1 specification, which means that the ID Token returned from the Authorization response won't contain user profile data Enforce more secure treatment of the state and nonce parameters for preventing CSRF Enforce a more secure signature algorithm during client registration Enforce that the binding_message parameter is used for CIBA requests Enforce Client Secret Rotation 12.7.3.3. Profile A profile consists of several executors, which can realize a security profile like FAPI. A profile can be configured by the Admin REST API (Admin Console) together with its executors. Three global profiles exist, and they are configured in Red Hat Single Sign-On by default with pre-configured executors compliant with the FAPI Baseline, FAPI Advanced and FAPI CIBA specifications. More details exist in the FAPI section of the Securing Applications and Services Guide . 12.7.3.4. Policy A policy consists of several conditions and profiles. The policy can be adopted to clients satisfying all conditions of this policy. The policy refers to several profiles, and all executors of these profiles execute their tasks against the client that this policy is adopted to. 12.7.4. Configuration Policies, profiles, conditions, and executors can be configured by the Admin REST API, which includes the Admin Console. To do so, there is the Realm Settings Client Policies tab, which means the administrator can have client policies per realm. The global client profiles are automatically available in each realm. However, there are no client policies configured by default. This means that the administrator is always required to create a client policy if they want, for example, the clients of their realm to be FAPI compliant. Global profiles cannot be updated, but the administrator can easily use them as a template and create their own profile if they want to make some slight changes to the global profile configurations. There is a JSON Editor available in the Admin Console, which simplifies the creation of a new profile based on a global profile. 12.7.5. Backward Compatibility Client Policies can replace Client Registration Policies described in the Securing Applications and Services Guide . However, Client Registration Policies still co-exist. This means that, for example, during a Dynamic Client Registration request to create/update a client, both client policies and client registration policies are applied. The current plans are for the Client Registration Policies feature to be removed, with existing client registration policies migrated into new client policies automatically. 12.7.6. Client Secret Rotation Example See an example configuration for client secret rotation .
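As a minimal illustration of the configuration described above, the following sketch registers one client policy through the Admin REST API so that every confidential client in the demo realm is governed by the built-in FAPI baseline profile. Treat the details as assumptions to verify against your RH-SSO version: the /auth/admin/realms/{realm}/client-policies/policies path, the client-access-type condition ID, and the fapi-1-baseline profile name are taken from upstream Keycloak and are not guaranteed here.

# Obtain an admin access token (assumes the admin-cli client and the jq tool)
TOKEN=$(curl -s \
  -d "client_id=admin-cli" -d "grant_type=password" \
  -d "username=admin" -d "password=admin" \
  "https://sso.example.com/auth/realms/master/protocol/openid-connect/token" | jq -r .access_token)

# Replace the realm's client policies with a single policy that adopts
# the global FAPI baseline profile for all confidential clients
curl -X PUT "https://sso.example.com/auth/admin/realms/demo/client-policies/policies" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "policies": [{
      "name": "fapi-for-confidential-clients",
      "enabled": true,
      "conditions": [{
        "condition": "client-access-type",
        "configuration": { "type": ["confidential"] }
      }],
      "profiles": ["fapi-1-baseline"]
    }]
  }'

Once the policy is in place, the executors of the referenced profile run against every matching client on the events listed in the Executor section, both at client create/update time and during OIDC requests.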
[ "https://myhost.com/myapp/k_jwks", "POST /auth/realms/demo/protocol/openid-connect/token Authorization: Basic cHJvZHVjdC1zYS1jbGllbnQ6cGFzc3dvcmQ= Content-Type: application/x-www-form-urlencoded grant_type=client_credentials", "HTTP/1.1 200 OK Content-Type: application/json;charset=UTF-8 Cache-Control: no-store Pragma: no-cache { \"access_token\":\"2YotnFZFEjr1zCsicMWpAA\", \"token_type\":\"bearer\", \"expires_in\":60 }", "\"audience\": [ \"<trusted service>\" ]", "root/auth/realms/{realm}/protocol/saml/descriptor", "http://host:port/auth/realms/master/clients/account/redirect", "scope=openid phone" ]
https://docs.redhat.com/en/documentation/red_hat_single_sign-on/7.6/html/server_administration_guide/assembly-managing-clients_server_administration_guide
2.6. Configuring Services on the Real Servers
2.6. Configuring Services on the Real Servers If the real servers are Red Hat Enterprise Linux systems, set the appropriate server daemons to activate at boot time. These daemons can include httpd for Web services or xinetd for FTP or Telnet services. It may also be useful to access the real servers remotely, so the sshd daemon should also be installed and running.
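For example, on a Red Hat Enterprise Linux real server serving Web traffic, the daemons can be activated in the standard multi-user runlevels with chkconfig and started immediately with service. The service names below are illustrative for a Web real server; substitute xinetd for FTP or Telnet real servers:

# Activate the daemons at boot time in runlevels 3, 4 and 5
chkconfig --level 345 httpd on
chkconfig --level 345 sshd on

# Start the daemons for the current session
service httpd start
service sshd start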
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/virtual_server_administration/s1-lvs-server-daemons-VSA
Chapter 10. Service Registry content rule reference
Chapter 10. Service Registry content rule reference This chapter provides reference information on the supported content rule types, their level of support for artifact types, and the order of precedence of artifact-specific and global rules. Section 10.1, "Service Registry content rule types" Section 10.2, "Service Registry content rule maturity" Section 10.3, "Service Registry content rule precedence" Additional resources For more information, see the Apicurio Registry REST API documentation . 10.1. Service Registry content rule types You can specify VALIDITY , COMPATIBILITY , and INTEGRITY rule types to govern content evolution in Service Registry. These rule types apply to both global rules and artifact-specific rules. Table 10.1. Service Registry content rule types Type Description VALIDITY Validate content before adding it to Service Registry. The possible configuration values for this rule are as follows: FULL : The validation is both syntax and semantic. SYNTAX_ONLY : The validation is syntax only. NONE : All validation checks are disabled. COMPATIBILITY Enforce a compatibility level when updating artifacts (for example, select BACKWARD for backwards compatibility). Ensures that new artifacts are compatible with previously added artifact versions or clients. The possible configuration values for this rule are as follows: FULL : The new artifact is forward and backward compatible with the most recently added artifact. FULL_TRANSITIVE : The new artifact is forward and backward compatible with all previously added artifacts. BACKWARD : Clients using the new artifact can read data written using the most recently added artifact. BACKWARD_TRANSITIVE : Clients using the new artifact can read data written using all previously added artifacts. FORWARD : Clients using the most recently added artifact can read data written using the new artifact. FORWARD_TRANSITIVE : Clients using all previously added artifacts can read data written using the new artifact. NONE : All backward and forward compatibility checks are disabled. INTEGRITY Enforce artifact reference integrity when creating or updating artifacts. Enable and configure this rule to ensure that any artifact references provided are correct. The possible configuration values for this rule are as follows: FULL : All artifact reference integrity checks are enabled. NO_DUPLICATES : Detect if there are any duplicate artifact references. REFS_EXIST : Detect if there are any references to non-existent artifacts. ALL_REFS_MAPPED : Ensure that all artifact references are mapped. NONE : All artifact reference integrity checks are disabled. 10.2. Service Registry content rule maturity Not all content rules are fully implemented for every artifact type supported by Service Registry. The following table shows the current maturity level for each rule and artifact type: Table 10.2. Service Registry content rule maturity matrix
Artifact type | Validity rule | Compatibility rule | Integrity rule
Avro | Full | Full | Full
Protobuf | Full | Full | Full
JSON Schema | Full | Full | Mapping detection not supported
OpenAPI | Full | None | Full
AsyncAPI | Syntax Only | None | Full
GraphQL | Syntax Only | None | Mapping detection not supported
Kafka Connect | Syntax Only | None | Mapping detection not supported
WSDL | Full | None | Mapping detection not supported
XML | Full | None | Mapping detection not supported
XSD | Full | None | Mapping detection not supported
10.3. 
Service Registry content rule precedence When you add or update an artifact, Service Registry applies rules to check the validity, compatibility, or integrity of the artifact content. Configured artifact-specific rules override the equivalent configured global rules, as shown in the following table. Table 10.3. Service Registry content rule precedence
Artifact-specific rule | Global rule | Rule applied to this artifact | Global rule available for other artifacts?
Enabled | Enabled | Artifact-specific | Yes
Disabled | Enabled | Global | Yes
Disabled | Disabled | None | No
Enabled, set to None | Enabled | None | Yes
Disabled | Enabled, set to None | None | No
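As a sketch of how these rules are configured in practice, the following requests enable a global VALIDITY rule and then override it for one artifact, which per Table 10.3 takes precedence for that artifact only. The /apis/registry/v2 paths, the default group, and the my-artifact ID are assumptions for illustration; check them against the Apicurio Registry REST API documentation linked above:

# Enable full (syntax and semantic) validation globally
curl -X POST "http://registry.example.com/apis/registry/v2/admin/rules" \
  -H "Content-Type: application/json" \
  -d '{"type": "VALIDITY", "config": "FULL"}'

# Override the global rule for a single artifact with syntax-only validation
curl -X POST "http://registry.example.com/apis/registry/v2/groups/default/artifacts/my-artifact/rules" \
  -H "Content-Type: application/json" \
  -d '{"type": "VALIDITY", "config": "SYNTAX_ONLY"}'

With both rules configured, updates to my-artifact are checked with SYNTAX_ONLY, while the global FULL rule remains available for all other artifacts.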
null
https://docs.redhat.com/en/documentation/red_hat_integration/2023.q4/html/service_registry_user_guide/registry-rule-reference_registry
macro::json_output_data_end
macro::json_output_data_end Name macro::json_output_data_end - End the json output. Synopsis @json_output_data_end() Arguments None Description The json_output_data_end macro is designed to be called from the 'json_data' probe in the user's script. It marks the end of the JSON output.
[ "@json_output_data_end()" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-json-output-data-end
7.4. Using Replication with Other Directory Server Features
7.4. Using Replication with Other Directory Server Features Replication interacts with other Directory Server features to provide advanced replication features. The following sections describe feature interactions to better design the replication strategy. 7.4.1. Replication and Access Control The directory service stores ACIs as attributes of entries. This means that the ACI is replicated together with other directory content. This is important because Directory Server evaluates ACIs locally. For more information about designing access control for the directory, see Chapter 9, Designing a Secure Directory . 7.4.2. Replication and Directory Server Plug-ins Replication works with most of the plug-ins delivered with Directory Server. There are some exceptions and limitations in the case of multi-supplier replication with the following plug-ins: Attribute Uniqueness Plug-in The Attribute Uniqueness Plug-in validates attribute values added to local entries to make sure that all values are unique. However, this checking is done directly on each server, not on updates replicated from other suppliers. For example, Example Corp. requires that the mail attribute be unique, but two users are added with the same mail attribute to two different supplier servers at the same time. As long as there is no naming conflict, there is no replication conflict, but the mail attribute is not unique. Referential Integrity Plug-in Referential integrity works with multi-supplier replication, provided that this plug-in is enabled on only one supplier in the multi-supplier set. This ensures that referential integrity updates occur on only one of the supplier servers and are propagated to the others. Note By default, these plug-ins are disabled, and they must be manually enabled. 7.4.3. Replication and Database Links With chaining to distribute directory entries, the server containing the database link references a remote server that contains the actual data. In this environment, the database link itself cannot be replicated. However, the database that contains the actual data on the remote server can be replicated. Do not use the replication process as a backup for database links. Database links must be backed up manually. For more information about chaining and entry distribution, see Chapter 6, Designing the Directory Topology . Figure 7.10. Replicating Chained Databases 7.4.4. Schema Replication For the standard schema, before replicating data to consumer servers, the supplier server checks whether its own version of the schema is synchronized with the version of the schema stored on the consumer servers. The following conditions apply: If the schema entries on both supplier and consumers are the same, the replication operation proceeds. If the version of the schema on the supplier server is more recent than the version stored on the consumer, the supplier server replicates its schema to the consumer before proceeding with the data replication. If the version of the schema on the supplier server is older than the version stored on the consumer, the server may return many errors during replication because the schema on the consumer cannot support the new data. Note Schema replication still occurs, even if the schemas between the supplier and replica do not match. Replicatable changes include changes to the schema made through the web console, changes made through ldapmodify , and changes made directly to the 99user.ldif file. Custom schema files, and any changes made to custom schema files, are not replicated. 
A consumer might contain replicated data from two suppliers, each with different schema. Whichever supplier was updated last wins, and its schema is propagated to the consumer. Warning Never update the schema on a consumer server, because the supplier server is unable to resolve the conflicts that occur, and replication fails. Schema should be maintained on a supplier server in a replicated topology. The same Directory Server can hold read-write replicas for which it acts as a supplier and read-only replicas for which it acts as a consumer. Therefore, always identify the server that will function as a supplier for the schema, and then set up replication agreements between this supplier and all other servers in the replication environment that will function as consumers for the schema information. Special replication agreements are not required to replicate the schema. If replication has been configured between a supplier and a consumer, schema replication occurs by default. For more information on schema design, see Chapter 3, Designing the Directory Schema . Custom Schema If the standard 99user.ldif file is used for custom schema, these changes are replicated to all consumers. Custom schema files must be copied to each server in order to maintain the same schema information in the same schema files on all servers. Custom schema files, and changes to those files, are not replicated, even if they are made through the web console or ldapmodify . If there are custom schema files, ensure that these files are copied to all servers after making changes on the supplier. After all of the files have been copied, restart the server. For more information on custom schema files, see Section 3.4.7, "Creating Custom Schema Files" . 7.4.5. Replication and Synchronization In order to propagate synchronized Windows entries throughout the Directory Server, use synchronization within a multi-supplier environment. The number of synchronization agreements should be kept as low as possible, preferably one per deployment. Multi-supplier replication allows the Windows information to be available throughout the network, while limiting the data access point to a single Directory Server.
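Returning to the plug-in interaction described in Section 7.4.2, the following sketch enables the Referential Integrity Plug-in over LDAP on the one supplier that should run it. The plug-in entry DN shown is the default used by Directory Server, but verify it on your instance before applying the change, and restart the server afterwards for it to take effect:

ldapmodify -x -D "cn=Directory Manager" -W -H ldap://supplier1.example.com <<EOF
dn: cn=referential integrity postoperation,cn=plugins,cn=config
changetype: modify
replace: nsslapd-pluginEnabled
nsslapd-pluginEnabled: on
EOF

Leave the plug-in disabled on all other suppliers in the multi-supplier set, so that referential integrity updates originate from a single server and reach the rest through replication.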
null
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/deployment_guide/designing_the_replication_process-using_replication_with_other_ds_features
Chapter 1. What is the Red Hat Hybrid Cloud Console?
Chapter 1. What is the Red Hat Hybrid Cloud Console? You can use the Red Hat Hybrid Cloud Console to access a comprehensive set of hosted services from a single interface. The Hybrid Cloud Console provides the content, tooling, and visibility developers and IT administrators need to build, deploy, and optimize workloads across the hybrid cloud. From the Hybrid Cloud Console, you can connect with your various platforms and then centrally manage and automate your hybrid cloud and the deployments within it. Use the Hybrid Cloud Console to manage your Red Hat Enterprise Linux (RHEL) infrastructure, Red Hat OpenShift clusters, and application services. You can perform the following tasks from the Hybrid Cloud Console: Use Red Hat Insights to reduce risk and downtime, improve compliance, and optimize spend for your RHEL and Red Hat OpenShift resources. View information about your RHEL systems and Red Hat OpenShift cluster nodes from a single interface. Manage, update, and deploy different types of Red Hat OpenShift clusters and install cluster add-ons. Deploy applications on Red Hat OpenShift. Manage security policies and build pipelines. 1.1. Red Hat Enterprise Linux on the Hybrid Cloud Console The Red Hat Hybrid Cloud Console provides a centralized view into operations, security, and subscriptions for Red Hat Enterprise Linux (RHEL). Through tooling, rule-based analytical models, and the support of Red Hat, you can use the console to streamline many of the tasks and analysis required to build and deliver a stable and secure environment for applications on RHEL. Additional resources For more information about Red Hat Enterprise Linux, see the Cloud section on the Red Hat Enterprise Linux documentation page . For information about Red Hat Insights for Red Hat Enterprise Linux, see the Red Hat Insights documentation page . 1.2. Red Hat OpenShift on the Hybrid Cloud Console The Red Hat Hybrid Cloud Console provides centralized reporting and management for Red Hat OpenShift clusters. Using the OpenShift Cluster Manager service, you can streamline and simplify how operators create, register, and upgrade Red Hat OpenShift clusters across supported environments. The Clusters page contains your OpenShift cluster inventory and provides the ability to create, manage, and delete OpenShift clusters.
null
https://docs.redhat.com/en/documentation/red_hat_hybrid_cloud_console/1-latest/html/getting_started_with_the_red_hat_hybrid_cloud_console_with_fedramp/hybrid-cloud-console_getting-started
Chapter 6. Services
Chapter 6. Services This section enumerates all the services that are available in the API. 6.1. AffinityGroup This service manages a single affinity group. Table 6.1. Methods summary Name Summary get Retrieve the affinity group details. remove Remove the affinity group. update Update the affinity group. 6.1.1. get GET Retrieve the affinity group details. <affinity_group id="00000000-0000-0000-0000-000000000000"> <name>AF_GROUP_001</name> <cluster id="00000000-0000-0000-0000-000000000000"/> <positive>true</positive> <enforcing>true</enforcing> </affinity_group> Table 6.2. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . group AffinityGroup Out The affinity group. 6.1.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.1.2. remove DELETE Remove the affinity group. Table 6.3. Parameters summary Name Type Direction Summary async Boolean In Indicates if the removal should be performed asynchronously. 6.1.3. update PUT Update the affinity group. Table 6.4. Parameters summary Name Type Direction Summary async Boolean In Indicates if the update should be performed asynchronously. group AffinityGroup In/Out The affinity group. 6.2. AffinityGroupHost This service manages a single host to affinity group assignment. Table 6.5. Methods summary Name Summary remove Remove host from the affinity group. 6.2.1. remove DELETE Remove host from the affinity group. Table 6.6. Parameters summary Name Type Direction Summary async Boolean In Indicates if the removal should be performed asynchronously. 6.3. AffinityGroupHostLabel This service manages a single host label assigned to an affinity group. Table 6.7. Methods summary Name Summary remove Remove this label from the affinity group. 6.3.1. remove DELETE Remove this label from the affinity group. Table 6.8. Parameters summary Name Type Direction Summary async Boolean In Indicates if the removal should be performed asynchronously. 6.4. AffinityGroupHostLabels This service manages a collection of all host labels assigned to an affinity group. Table 6.9. Methods summary Name Summary add Adds a host label to the affinity group. list List all host labels assigned to this affinity group. 6.4.1. add POST Adds a host label to the affinity group. For example, to add the label 789 to the affinity group 456 of cluster 123 , send a request like this: With the following body: <affinity_label id="789"/> Table 6.10. Parameters summary Name Type Direction Summary label AffinityLabel In/Out The AffinityLabel object to add to the affinity group. 6.4.2. list GET List all host labels assigned to this affinity group. The order of the returned labels isn't guaranteed. Table 6.11. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . labels AffinityLabel[ ] Out Host labels assigned to the affinity group. max Integer In Sets the maximum number of host labels to return. 6.4.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.4.2.2. max Sets the maximum number of host labels to return. If not specified, all the labels are returned. 6.5. AffinityGroupHosts This service manages a collection of all hosts assigned to an affinity group. Table 6.12. Methods summary Name Summary add Adds a host to the affinity group. 
list List all hosts assigned to this affinity group. 6.5.1. add POST Adds a host to the affinity group. For example, to add the host 789 to the affinity group 456 of cluster 123 , send a request like this: With the following body: <host id="789"/> Table 6.13. Parameters summary Name Type Direction Summary host Host In/Out The host to be added to the affinity group. 6.5.2. list GET List all hosts assigned to this affinity group. The order of the returned hosts isn't guaranteed. Table 6.14. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . hosts Host[ ] Out The list of hosts assigned to this affinity group. max Integer In Sets the maximum number of hosts to return. 6.5.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.5.2.2. max Sets the maximum number of hosts to return. If not specified, all the hosts are returned. 6.6. AffinityGroupVm This service manages a single virtual machine to affinity group assignment. Table 6.15. Methods summary Name Summary remove Remove this virtual machine from the affinity group. 6.6.1. remove DELETE Remove this virtual machine from the affinity group. Table 6.16. Parameters summary Name Type Direction Summary async Boolean In Indicates if the removal should be performed asynchronously. 6.7. AffinityGroupVmLabel This service manages a single virtual machine label assigned to an affinity group. Table 6.17. Methods summary Name Summary remove Remove this label from the affinity group. 6.7.1. remove DELETE Remove this label from the affinity group. Table 6.18. Parameters summary Name Type Direction Summary async Boolean In Indicates if the removal should be performed asynchronously. 6.8. AffinityGroupVmLabels This service manages a collection of all virtual machine labels assigned to an affinity group. Table 6.19. Methods summary Name Summary add Adds a virtual machine label to the affinity group. list List all virtual machine labels assigned to this affinity group. 6.8.1. add POST Adds a virtual machine label to the affinity group. For example, to add the label 789 to the affinity group 456 of cluster 123 , send a request like this: With the following body: <affinity_label id="789"/> Table 6.20. Parameters summary Name Type Direction Summary label AffinityLabel In/Out The AffinityLabel object to add to the affinity group. 6.8.2. list GET List all virtual machine labels assigned to this affinity group. The order of the returned labels isn't guaranteed. Table 6.21. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . labels AffinityLabel[ ] Out Virtual machine labels assigned to the affinity group. max Integer In Sets the maximum number of virtual machine labels to return. 6.8.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.8.2.2. max Sets the maximum number of virtual machine labels to return. If not specified, all the labels are returned. 6.9. AffinityGroupVms This service manages a collection of all the virtual machines assigned to an affinity group. Table 6.22. Methods summary Name Summary add Adds a virtual machine to the affinity group. list List all virtual machines assigned to this affinity group. 6.9.1. add POST Adds a virtual machine to the affinity group. 
For example, to add the virtual machine 789 to the affinity group 456 of cluster 123 , send a request like this: With the following body: <vm id="789"/> Table 6.23. Parameters summary Name Type Direction Summary vm Vm In/Out 6.9.2. list GET List all virtual machines assigned to this affinity group. The order of the returned virtual machines isn't guaranteed. Table 6.24. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of virtual machines to return. vms Vm[ ] Out 6.9.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.9.2.2. max Sets the maximum number of virtual machines to return. If not specified, all the virtual machines are returned. 6.10. AffinityGroups The affinity groups service manages virtual machine relationships and dependencies. Table 6.25. Methods summary Name Summary add Create a new affinity group. list List existing affinity groups. 6.10.1. add POST Create a new affinity group. Post a request like in the example below to create a new affinity group: And use the following example in its body: <affinity_group> <name>AF_GROUP_001</name> <hosts_rule> <enforcing>true</enforcing> <positive>true</positive> </hosts_rule> <vms_rule> <enabled>false</enabled> </vms_rule> </affinity_group> Table 6.26. Parameters summary Name Type Direction Summary group AffinityGroup In/Out The affinity group object to create. 6.10.2. list GET List existing affinity groups. The order of the affinity groups results isn't guaranteed. Table 6.27. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . groups AffinityGroup[ ] Out The list of existing affinity groups. max Integer In Sets the maximum number of affinity groups to return. 6.10.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.10.2.2. max Sets the maximum number of affinity groups to return. If not specified all the affinity groups are returned. 6.11. AffinityLabel The details of a single affinity label. Table 6.28. Methods summary Name Summary get Retrieves the details of a label. remove Removes a label from the system and clears all assignments of the removed label. update Updates a label. 6.11.1. get GET Retrieves the details of a label. Table 6.29. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . label AffinityLabel Out 6.11.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.11.2. remove DELETE Removes a label from the system and clears all assignments of the removed label. 6.11.3. update PUT Updates a label. This call will update all metadata, such as the name or description. Table 6.30. Parameters summary Name Type Direction Summary label AffinityLabel In/Out 6.12. AffinityLabelHost This service represents a host that has a specific label when accessed through the affinitylabels/hosts subcollection. Table 6.31. Methods summary Name Summary get Retrieves details about a host that has this label assigned. remove Remove a label from a host. 6.12.1. get GET Retrieves details about a host that has this label assigned. Table 6.32. 
Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . host Host Out 6.12.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.12.2. remove DELETE Remove a label from a host. 6.13. AffinityLabelHosts This service represents list of hosts that have a specific label when accessed through the affinitylabels/hosts subcollection. Table 6.33. Methods summary Name Summary add Add a label to a host. list List all hosts with the label. 6.13.1. add POST Add a label to a host. Table 6.34. Parameters summary Name Type Direction Summary host Host In/Out 6.13.2. list GET List all hosts with the label. The order of the returned hosts isn't guaranteed. Table 6.35. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . hosts Host[ ] Out 6.13.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.14. AffinityLabelVm This service represents a vm that has a specific label when accessed through the affinitylabels/vms subcollection. Table 6.36. Methods summary Name Summary get Retrieves details about a vm that has this label assigned. remove Remove a label from a vm. 6.14.1. get GET Retrieves details about a vm that has this label assigned. Table 6.37. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . vm Vm Out 6.14.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.14.2. remove DELETE Remove a label from a vm. 6.15. AffinityLabelVms This service represents list of vms that have a specific label when accessed through the affinitylabels/vms subcollection. Table 6.38. Methods summary Name Summary add Add a label to a vm. list List all virtual machines with the label. 6.15.1. add POST Add a label to a vm. Table 6.39. Parameters summary Name Type Direction Summary vm Vm In/Out 6.15.2. list GET List all virtual machines with the label. The order of the returned virtual machines isn't guaranteed. Table 6.40. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . vms Vm[ ] Out 6.15.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.16. AffinityLabels Manages the affinity labels available in the system. Table 6.41. Methods summary Name Summary add Creates a new label. list Lists all labels present in the system. 6.16.1. add POST Creates a new label. The label is automatically attached to all entities mentioned in the vms or hosts lists. Table 6.42. Parameters summary Name Type Direction Summary label AffinityLabel In/Out 6.16.2. list GET Lists all labels present in the system. The order of the returned labels isn't guaranteed. Table 6.43. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . labels AffinityLabel[ ] Out max Integer In Sets the maximum number of labels to return. 6.16.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.16.2.2. 
max Sets the maximum number of labels to return. If not specified all the labels are returned. 6.17. Area This annotation is intended to specify which oVirt area the annotated concept is related to. Currently the following areas are in use, and they are closely related to the oVirt teams, but not necessarily the same: Infrastructure Network SLA Storage Virtualization A concept may be associated with more than one area, or with no area. The value of this annotation is intended for reporting only, and it does not affect the generated code or the validity of the model at all. 6.18. AssignedAffinityLabel This service represents one label-to-entity assignment when accessed using the entities/affinitylabels subcollection. Table 6.44. Methods summary Name Summary get Retrieves details about the attached label. remove Removes the label from an entity. 6.18.1. get GET Retrieves details about the attached label. Table 6.45. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . label AffinityLabel Out 6.18.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.18.2. remove DELETE Removes the label from an entity. Does not touch the label itself. 6.19. AssignedAffinityLabels This service is used to list and manipulate affinity labels that are assigned to supported entities when accessed using entities/affinitylabels. Table 6.46. Methods summary Name Summary add Attaches a label to an entity. list Lists all labels that are attached to an entity. 6.19.1. add POST Attaches a label to an entity. Table 6.47. Parameters summary Name Type Direction Summary label AffinityLabel In/Out 6.19.2. list GET Lists all labels that are attached to an entity. The order of the returned entities isn't guaranteed. Table 6.48. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . label AffinityLabel[ ] Out 6.19.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.20. AssignedCpuProfile Table 6.49. Methods summary Name Summary get remove 6.20.1. get GET Table 6.50. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . profile CpuProfile Out 6.20.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.20.2. remove DELETE Table 6.51. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.21. AssignedCpuProfiles Table 6.52. Methods summary Name Summary add Add a new cpu profile for the cluster. list List the CPU profiles assigned to the cluster. 6.21.1. add POST Add a new cpu profile for the cluster. Table 6.53. Parameters summary Name Type Direction Summary profile CpuProfile In/Out 6.21.2. list GET List the CPU profiles assigned to the cluster. The order of the returned CPU profiles isn't guaranteed. Table 6.54. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of profiles to return. profiles CpuProfile[ ] Out 6.21.2.1. follow Indicates which inner links should be followed . 
The objects referenced by these links will be fetched as part of the current request. See here for details. 6.21.2.2. max Sets the maximum number of profiles to return. If not specified all the profiles are returned. 6.22. AssignedDiskProfile Table 6.55. Methods summary Name Summary get remove 6.22.1. get GET Table 6.56. Parameters summary Name Type Direction Summary disk_profile DiskProfile Out follow String In Indicates which inner links should be followed . 6.22.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.22.2. remove DELETE Table 6.57. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.23. AssignedDiskProfiles Table 6.58. Methods summary Name Summary add Add a new disk profile for the storage domain. list Returns the list of disk profiles assigned to the storage domain. 6.23.1. add POST Add a new disk profile for the storage domain. Table 6.59. Parameters summary Name Type Direction Summary profile DiskProfile In/Out 6.23.2. list GET Returns the list of disk profiles assigned to the storage domain. The order of the returned disk profiles isn't guaranteed. Table 6.60. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of profiles to return. profiles DiskProfile[ ] Out 6.23.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.23.2.2. max Sets the maximum number of profiles to return. If not specified all the profiles are returned. 6.24. AssignedPermissions Represents a permission sub-collection, scoped by user, group or some entity type. Table 6.61. Methods summary Name Summary add Assign a new permission to a user or group for specific entity. list List all the permissions of the specific entity. 6.24.1. add POST Assign a new permission to a user or group for specific entity. For example, to assign the UserVmManager role to the virtual machine with id 123 to the user with id 456 send a request like this: With a request body like this: <permission> <role> <name>UserVmManager</name> </role> <user id="456"/> </permission> To assign the SuperUser role to the system to the user with id 456 send a request like this: With a request body like this: <permission> <role> <name>SuperUser</name> </role> <user id="456"/> </permission> If you want to assign permission to the group instead of the user please replace the user element with the group element with proper id of the group. For example to assign the UserRole role to the cluster with id 123 to the group with id 789 send a request like this: With a request body like this: <permission> <role> <name>UserRole</name> </role> <group id="789"/> </permission> Table 6.62. Parameters summary Name Type Direction Summary permission Permission In/Out The permission. 6.24.2. list GET List all the permissions of the specific entity. For example to list all the permissions of the cluster with id 123 send a request like this: <permissions> <permission id="456"> <cluster id="123"/> <role id="789"/> <user id="451"/> </permission> <permission id="654"> <cluster id="123"/> <role id="789"/> <group id="127"/> </permission> </permissions> The order of the returned permissions isn't guaranteed. Table 6.63. 
Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . permissions Permission[ ] Out The list of permissions. 6.24.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.25. AssignedRoles Represents a roles sub-collection, for example scoped by user. Table 6.64. Methods summary Name Summary list Returns the roles assigned to the permission. 6.25.1. list GET Returns the roles assigned to the permission. The order of the returned roles isn't guaranteed. Table 6.65. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of roles to return. roles Role[ ] Out 6.25.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.25.1.2. max Sets the maximum number of roles to return. If not specified all the roles are returned. 6.26. AssignedTag A service to manage assignment of specific tag to specific entities in system. Table 6.66. Methods summary Name Summary get Gets the information about the assigned tag. remove Unassign tag from specific entity in the system. 6.26.1. get GET Gets the information about the assigned tag. For example to retrieve the information about the tag with the id 456 which is assigned to virtual machine with id 123 send a request like this: <tag href="/ovirt-engine/api/tags/456" id="456"> <name>root</name> <description>root</description> <vm href="/ovirt-engine/api/vms/123" id="123"/> </tag> Table 6.67. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . tag Tag Out The assigned tag. 6.26.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.26.2. remove DELETE Unassign tag from specific entity in the system. For example to unassign the tag with id 456 from virtual machine with id 123 send a request like this: Table 6.68. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.27. AssignedTags A service to manage collection of assignment of tags to specific entities in system. Table 6.69. Methods summary Name Summary add Assign tag to specific entity in the system. list List all tags assigned to the specific entity. 6.27.1. add POST Assign tag to specific entity in the system. For example to assign tag mytag to virtual machine with the id 123 send a request like this: With a request body like this: <tag> <name>mytag</name> </tag> Table 6.70. Parameters summary Name Type Direction Summary tag Tag In/Out The assigned tag. 6.27.2. list GET List all tags assigned to the specific entity. For example to list all the tags of the virtual machine with id 123 send a request like this: <tags> <tag href="/ovirt-engine/api/tags/222" id="222"> <name>mytag</name> <description>mytag</description> <vm href="/ovirt-engine/api/vms/123" id="123"/> </tag> </tags> The order of the returned tags isn't guaranteed. Table 6.71. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of tags to return. tags Tag[ ] Out The list of assigned tags. 6.27.2.1. 
follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.27.2.2. max Sets the maximum number of tags to return. If not specified all the tags are returned. 6.28. AssignedVnicProfile Table 6.72. Methods summary Name Summary get remove 6.28.1. get GET Table 6.73. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . profile VnicProfile Out 6.28.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.28.2. remove DELETE Table 6.74. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.29. AssignedVnicProfiles Table 6.75. Methods summary Name Summary add Add a new virtual network interface card profile for the network. list Returns the list of VNIC profiles assigned to the network. 6.29.1. add POST Add a new virtual network interface card profile for the network. Table 6.76. Parameters summary Name Type Direction Summary profile VnicProfile In/Out 6.29.2. list GET Returns the list of VNIC profiles assigned to the network. The order of the returned VNIC profiles isn't guaranteed. Table 6.77. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of profiles to return. profiles VnicProfile[ ] Out 6.29.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.29.2.2. max Sets the maximum number of profiles to return. If not specified all the profiles are returned. 6.30. AttachedStorageDomain Table 6.78. Methods summary Name Summary activate This operation activates an attached storage domain. deactivate This operation deactivates an attached storage domain. get remove 6.30.1. activate POST This operation activates an attached storage domain. Once the storage domain is activated it is ready for use with the data center. The activate action does not take any action-specific parameters, so the request body should contain an empty action : <action/> Table 6.79. Parameters summary Name Type Direction Summary async Boolean In Indicates if the activation should be performed asynchronously. 6.30.2. deactivate POST This operation deactivates an attached storage domain. Once the storage domain is deactivated it will not be used with the data center. For example, to deactivate storage domain 456 , send the following request: With a request body like this: <action/> If the force parameter is true then the operation will succeed, even if the OVF update which takes place before the deactivation of the storage domain failed. If the force parameter is false and the OVF update failed, the deactivation of the storage domain will also fail. Table 6.80. Parameters summary Name Type Direction Summary async Boolean In Indicates if the deactivation should be performed asynchronously. force Boolean In Indicates if the operation should succeed and the storage domain should be moved to a deactivated state, even if the OVF update for the storage domain failed. 6.30.2.1. force Indicates if the operation should succeed and the storage domain should be moved to a deactivated state, even if the OVF update for the storage domain failed. 
For example, to deactivate storage domain 456 using the force flag, send the following request: With a request body like this: <action> <force>true</force> </action> This parameter is optional, and the default value is false . 6.30.3. get GET Table 6.81. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . storage_domain StorageDomain Out 6.30.3.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.30.4. remove DELETE Table 6.82. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.31. AttachedStorageDomainDisk Manages a single disk available in a storage domain attached to a data center. Important Since version 4.2 of the engine this service is intended only to list disks available in the storage domain, and to register unregistered disks. All the other operations, like copying a disk, moving a disk, etc, have been deprecated and will be removed in the future. To perform those operations use the service that manages all the disks of the system or the service that manages a specific disk . Table 6.83. Methods summary Name Summary copy Copies a disk to the specified storage domain. export Exports a disk to an export storage domain. get Retrieves the description of the disk. move Moves a disk to another storage domain. register Registers an unregistered disk. remove Removes a disk. sparsify Sparsify the disk. update Updates the disk. 6.31.1. copy POST Copies a disk to the specified storage domain. Important Since version 4.2 of the engine this operation is deprecated, and preserved only for backwards compatibility. It will be removed in the future. To copy a disk use the copy operation of the service that manages that disk. Table 6.84. Parameters summary Name Type Direction Summary disk Disk In Description of the resulting disk. storage_domain StorageDomain In The storage domain where the new disk will be created. 6.31.2. export POST Exports a disk to an export storage domain. Important Since version 4.2 of the engine this operation is deprecated, and preserved only for backwards compatibility. It will be removed in the future. To export a disk use the export operation of the service that manages that disk. Table 6.85. Parameters summary Name Type Direction Summary storage_domain StorageDomain In The export storage domain where the disk should be exported to. 6.31.3. get GET Retrieves the description of the disk. Table 6.86. Parameters summary Name Type Direction Summary disk Disk Out The description of the disk. follow String In Indicates which inner links should be followed . 6.31.3.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.31.4. move POST Moves a disk to another storage domain. Important Since version 4.2 of the engine this operation is deprecated, and preserved only for backwards compatibility. It will be removed in the future. To move a disk use the move operation of the service that manages that disk. Table 6.87. Parameters summary Name Type Direction Summary async Boolean In Indicates if the move should be performed asynchronously. filter Boolean In Indicates if the results should be filtered according to the permissions of the user. storage_domain StorageDomain In The storage domain where the disk will be moved to. 
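Since the move operation above is deprecated, the recommended equivalent is the move action on the service that manages the disk itself. The following is a minimal sketch with curl, using illustrative identifiers (disk 789, destination storage domain 456) and basic authentication; adjust the engine URL, CA certificate, and credentials for your environment:

curl -X POST "https://engine.example.com/ovirt-engine/api/disks/789/move" \
  -u "admin@internal:password" \
  --cacert ca.pem \
  -H "Content-Type: application/xml" \
  -d '<action><storage_domain id="456"/></action>'

The same pattern of posting an action body to the operation URL applies to the other actions in this chapter, such as the activate and deactivate operations on an attached storage domain.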
6.31.5. register POST Registers an unregistered disk. 6.31.6. remove DELETE Removes a disk. Important Since version 4.2 of the engine this operation is deprecated, and preserved only for backwards compatibility. It will be removed in the future. To remove a disk use the remove operation of the service that manages that disk. 6.31.7. sparsify POST Sparsify the disk. Important Since version 4.2 of the engine this operation is deprecated, and preserved only for backwards compatibility. It will be removed in the future. To sparsify a disk use the sparsify operation of the service that manages that disk. 6.31.8. update PUT Updates the disk. Important Since version 4.2 of the engine this operation is deprecated, and preserved only for backwards compatibility. It will be removed in the future. To update a disk use the update operation of the service that manages that disk. Table 6.88. Parameters summary Name Type Direction Summary disk Disk In/Out The update to apply to the disk. 6.32. AttachedStorageDomainDisks Manages the collection of disks available inside a storage domain that is attached to a data center. Table 6.89. Methods summary Name Summary add Adds or registers a disk. list Retrieve the list of disks that are available in the storage domain. 6.32.1. add POST Adds or registers a disk. Important Since version 4.2 of the engine this operation is deprecated, and preserved only for backwards compatibility. It will be removed in the future. To add a new disk use the add operation of the service that manages the disks of the system. To register an unregistered disk use the register operation of the service that manages that disk. Table 6.90. Parameters summary Name Type Direction Summary disk Disk In/Out The disk to add or register. unregistered Boolean In Indicates if a new disk should be added or if an existing unregistered disk should be registered. 6.32.1.1. unregistered Indicates if a new disk should be added or if an existing unregistered disk should be registered. If the value is true then the identifier of the disk to register needs to be provided. For example, to register the disk with id 456 send a request like this: With a request body like this: <disk id="456"/> If the value is false then a new disk will be created in the storage domain. In that case the provisioned_size , format and name attributes are mandatory. For example, to create a new copy on write disk of 1 GiB, send a request like this: With a request body like this: <disk> <name>mydisk</name> <format>cow</format> <provisioned_size>1073741824</provisioned_size> </disk> The default value is false . 6.32.2. list GET Retrieve the list of disks that are available in the storage domain. Table 6.91. Parameters summary Name Type Direction Summary disks Disk[ ] Out List of retrieved disks. follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of disks to return. 6.32.2.1. disks List of retrieved disks. The order of the returned disks isn't guaranteed. 6.32.2.2. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.32.2.3. max Sets the maximum number of disks to return. If not specified all the disks are returned. 6.33. AttachedStorageDomains Manages the storage domains attached to a data center. Table 6.92. Methods summary Name Summary add Attaches an existing storage domain to the data center. list Returns the list of storage domains attached to the data center. 6.33.1. 
add POST Attaches an existing storage domain to the data center. Table 6.93. Parameters summary Name Type Direction Summary storage_domain StorageDomain In/Out The storage domain to attach to the data center. 6.33.2. list GET Returns the list of storage domains attached to the data center. The order of the returned storage domains isn't guaranteed. Table 6.94. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of storage domains to return. storage_domains StorageDomain[ ] Out A list of storage domains that are attached to the data center. 6.33.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.33.2.2. max Sets the maximum number of storage domains to return. If not specified all the storage domains are returned. 6.34. Balance Table 6.95. Methods summary Name Summary get remove 6.34.1. get GET Table 6.96. Parameters summary Name Type Direction Summary balance Balance Out filter Boolean In Indicates if the results should be filtered according to the permissions of the user. follow String In Indicates which inner links should be followed . 6.34.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.34.2. remove DELETE Table 6.97. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.35. Balances Table 6.98. Methods summary Name Summary add Add a balance module to a specified user defined scheduling policy. list Returns the list of balance modules used by the scheduling policy. 6.35.1. add POST Add a balance module to a specified user defined scheduling policy. Table 6.99. Parameters summary Name Type Direction Summary balance Balance In/Out 6.35.2. list GET Returns the list of balance modules used by the scheduling policy. The order of the returned balance modules isn't guaranteed. Table 6.100. Parameters summary Name Type Direction Summary balances Balance[ ] Out filter Boolean In Indicates if the results should be filtered according to the permissions of the user. follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of balances to return. 6.35.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.35.2.2. max Sets the maximum number of balances to return. If not specified all the balances are returned. 6.36. Bookmark A service to manage a bookmark. Table 6.101. Methods summary Name Summary get Get a bookmark. remove Remove a bookmark. update Update a bookmark. 6.36.1. get GET Get a bookmark. An example for getting a bookmark: <bookmark href="/ovirt-engine/api/bookmarks/123" id="123"> <name>example_vm</name> <value>vm: name=example*</value> </bookmark> Table 6.102. Parameters summary Name Type Direction Summary bookmark Bookmark Out The requested bookmark. follow String In Indicates which inner links should be followed . 6.36.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.36.2. remove DELETE Remove a bookmark. An example for removing a bookmark: Table 6.103. 
Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.36.3. update PUT Update a bookmark. An example for updating a bookmark: With the request body: <bookmark> <name>new_example_vm</name> <value>vm: name=new_example*</value> </bookmark> Table 6.104. Parameters summary Name Type Direction Summary async Boolean In Indicates if the update should be performed asynchronously. bookmark Bookmark In/Out The updated bookmark. 6.37. Bookmarks A service to manage bookmarks. Table 6.105. Methods summary Name Summary add Adding a new bookmark. list Listing all the available bookmarks. 6.37.1. add POST Adding a new bookmark. Example of adding a bookmark: <bookmark> <name>new_example_vm</name> <value>vm: name=new_example*</value> </bookmark> Table 6.106. Parameters summary Name Type Direction Summary bookmark Bookmark In/Out The added bookmark. 6.37.2. list GET Listing all the available bookmarks. Example of listing bookmarks: <bookmarks> <bookmark href="/ovirt-engine/api/bookmarks/123" id="123"> <name>database</name> <value>vm: name=database*</value> </bookmark> <bookmark href="/ovirt-engine/api/bookmarks/456" id="456"> <name>example</name> <value>vm: name=example*</value> </bookmark> </bookmarks> The order of the returned bookmarks isn't guaranteed. Table 6.107. Parameters summary Name Type Direction Summary bookmarks Bookmark[ ] Out The list of available bookmarks. follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of bookmarks to return. 6.37.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.37.2.2. max Sets the maximum number of bookmarks to return. If not specified all the bookmarks are returned. 6.38. Cluster A service to manage a specific cluster. Table 6.108. Methods summary Name Summary get Gets information about the cluster. refreshglusterhealstatus Refresh the Gluster heal info for all volumes in cluster. remove Removes the cluster from the system. resetemulatedmachine syncallnetworks Synchronizes all networks on the cluster. update Updates information about the cluster. upgrade Start, update or finish upgrade process for the cluster based on the action value. 6.38.1. get GET Gets information about the cluster. 
An example of getting a cluster: <cluster href="/ovirt-engine/api/clusters/123" id="123"> <actions> <link href="/ovirt-engine/api/clusters/123/resetemulatedmachine" rel="resetemulatedmachine"/> </actions> <name>Default</name> <description>The default server cluster</description> <link href="/ovirt-engine/api/clusters/123/networks" rel="networks"/> <link href="/ovirt-engine/api/clusters/123/permissions" rel="permissions"/> <link href="/ovirt-engine/api/clusters/123/glustervolumes" rel="glustervolumes"/> <link href="/ovirt-engine/api/clusters/123/glusterhooks" rel="glusterhooks"/> <link href="/ovirt-engine/api/clusters/123/affinitygroups" rel="affinitygroups"/> <link href="/ovirt-engine/api/clusters/123/cpuprofiles" rel="cpuprofiles"/> <ballooning_enabled>false</ballooning_enabled> <cpu> <architecture>x86_64</architecture> <type>Intel Nehalem Family</type> </cpu> <error_handling> <on_error>migrate</on_error> </error_handling> <fencing_policy> <enabled>true</enabled> <skip_if_connectivity_broken> <enabled>false</enabled> <threshold>50</threshold> </skip_if_connectivity_broken> <skip_if_sd_active> <enabled>false</enabled> </skip_if_sd_active> </fencing_policy> <gluster_service>false</gluster_service> <ha_reservation>false</ha_reservation> <ksm> <enabled>true</enabled> <merge_across_nodes>true</merge_across_nodes> </ksm> <memory_policy> <over_commit> <percent>100</percent> </over_commit> <transparent_hugepages> <enabled>true</enabled> </transparent_hugepages> </memory_policy> <migration> <auto_converge>inherit</auto_converge> <bandwidth> <assignment_method>auto</assignment_method> </bandwidth> <compressed>inherit</compressed> </migration> <required_rng_sources> <required_rng_source>random</required_rng_source> </required_rng_sources> <scheduling_policy href="/ovirt-engine/api/schedulingpolicies/456" id="456"/> <threads_as_cores>false</threads_as_cores> <trusted_service>false</trusted_service> <tunnel_migration>false</tunnel_migration> <version> <major>4</major> <minor>0</minor> </version> <virt_service>true</virt_service> <data_center href="/ovirt-engine/api/datacenters/111" id="111"/> </cluster> Table 6.109. Parameters summary Name Type Direction Summary cluster Cluster Out filter Boolean In Indicates if the results should be filtered according to the permissions of the user. follow String In Indicates which inner links should be followed . 6.38.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.38.2. refreshglusterhealstatus POST Refresh the Gluster heal info for all volumes in cluster. For example, Cluster 123 , send a request like this: 6.38.3. remove DELETE Removes the cluster from the system. Table 6.110. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.38.4. resetemulatedmachine POST Table 6.111. Parameters summary Name Type Direction Summary async Boolean In Indicates if the reset should be performed asynchronously. 6.38.5. syncallnetworks POST Synchronizes all networks on the cluster. With a request body like this: <action/> Table 6.112. Parameters summary Name Type Direction Summary async Boolean In Indicates if the action should be performed asynchronously. 6.38.6. update PUT Updates information about the cluster. Only the specified fields are updated; others remain unchanged. 
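The update is sent with a PUT request to the cluster's own URL (a sketch; the cluster ID 123 is illustrative):

PUT /ovirt-engine/api/clusters/123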
For example, to update the cluster's CPU, send a request with a body like this: <cluster> <cpu> <type>Intel Haswell-noTSX Family</type> </cpu> </cluster> Table 6.113. Parameters summary Name Type Direction Summary async Boolean In Indicates if the update should be performed asynchronously. cluster Cluster In/Out 6.38.7. upgrade POST Start, update, or finish the upgrade process for the cluster based on the action value. This action marks the cluster for upgrade, updates the progress, or clears the upgrade running flag on the cluster, based on the action value, which takes the values start , stop or update_progress . With a request body like this to mark the cluster for upgrade: <action> <upgrade_action> start </upgrade_action> </action> After starting the upgrade, use a request body like this to update the progress to 15%: <action> <upgrade_action> update_progress </upgrade_action> <upgrade_percent_complete> 15 </upgrade_percent_complete> </action> Table 6.114. Parameters summary Name Type Direction Summary async Boolean In Indicates if the action should be performed asynchronously. correlation_id String In Explicitly set the upgrade correlation identifier. upgrade_action ClusterUpgradeAction In The action to be performed. upgrade_percent_complete Integer In Update the upgrade's progress as a percent complete of the total process. 6.38.7.1. correlation_id Explicitly set the upgrade correlation identifier. Use it to correlate events detailing the cluster upgrade to the upgrade itself. If not specified, the correlation ID from the Correlation-Id HTTP header will be used. 6.39. ClusterEnabledFeature Represents a feature enabled for the cluster. Table 6.115. Methods summary Name Summary get Provides information about the enabled cluster feature. remove Disables a cluster feature. 6.39.1. get GET Provides information about the enabled cluster feature. For example, to find details of the enabled feature 456 for cluster 123 , send a request like this: That will return a ClusterFeature object containing the name: <cluster_feature id="456"> <name>libgfapi_supported</name> </cluster_feature> Table 6.116. Parameters summary Name Type Direction Summary feature ClusterFeature Out Retrieved cluster feature that's enabled. follow String In Indicates which inner links should be followed . 6.39.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.39.2. remove DELETE Disables a cluster feature. For example, to disable the feature 456 of cluster 123 , send a request like this: 6.40. ClusterEnabledFeatures Provides information about the additional features that are enabled for this cluster. The features that are enabled are the available features for the cluster level. Table 6.117. Methods summary Name Summary add Enable an additional feature for a cluster. list Lists the additional features enabled for the cluster. 6.40.1. add POST Enable an additional feature for a cluster. For example, to enable a feature 456 on cluster 123 , send a request like this: The request body should look like this: <cluster_feature id="456"/> Table 6.118. Parameters summary Name Type Direction Summary feature ClusterFeature In/Out 6.40.2. list GET Lists the additional features enabled for the cluster. For example, to get the features enabled for cluster 123 , send a request like this: This will return a list of features: <enabled_features> <cluster_feature id="123"> <name>test_feature</name> </cluster_feature> ...
</enabled_features> Table 6.119. Parameters summary Name Type Direction Summary features ClusterFeature[ ] Out Retrieved features. follow String In Indicates which inner links should be followed . 6.40.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.41. ClusterExternalProviders This service lists external providers. Table 6.120. Methods summary Name Summary list Returns the list of external providers. 6.41.1. list GET Returns the list of external providers. The order of the returned list of providers is not guaranteed. Table 6.121. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . providers ExternalProvider[ ] Out 6.41.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.42. ClusterFeature Represents a feature enabled for the cluster level. Table 6.122. Methods summary Name Summary get Provides information about a cluster feature supported by a cluster level. 6.42.1. get GET Provides information about a cluster feature supported by a cluster level. For example, to find details of the cluster feature 456 for cluster level 4.1, send a request like this: That will return a ClusterFeature object containing the name: <cluster_feature id="456"> <name>libgfapi_supported</name> </cluster_feature> Table 6.123. Parameters summary Name Type Direction Summary feature ClusterFeature Out Retrieved cluster feature. follow String In Indicates which inner links should be followed . 6.42.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.43. ClusterFeatures Provides information about the cluster features that are supported by a cluster level. Table 6.124. Methods summary Name Summary list Lists the cluster features supported by the cluster level. 6.43.1. list GET Lists the cluster features supported by the cluster level. This will return a list of cluster features supported by the cluster level: <cluster_features> <cluster_feature id="123"> <name>test_feature</name> </cluster_feature> ... </cluster_features> Table 6.125. Parameters summary Name Type Direction Summary features ClusterFeature[ ] Out Retrieved features. follow String In Indicates which inner links should be followed . 6.43.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.44. ClusterLevel Provides information about a specific cluster level. See the ClusterLevels service for more information. Table 6.126. Methods summary Name Summary get Provides the information about the capabilities of the specific cluster level managed by this service. 6.44.1. get GET Provides the information about the capabilities of the specific cluster level managed by this service. For example, to find what CPU types are supported by level 3.6, you can send a request like this: That will return a ClusterLevel object containing the supported CPU types, and other information which describes the cluster level: <cluster_level id="3.6"> <cpu_types> <cpu_type> <name>Intel Nehalem Family</name> <level>3</level> <architecture>x86_64</architecture> </cpu_type> ...
</cpu_types> <permits> <permit id="1"> <name>create_vm</name> <administrative>false</administrative> </permit> ... </permits> </cluster_level> Table 6.127. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . level ClusterLevel Out Retrieved cluster level. 6.44.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.45. ClusterLevels Provides information about the capabilities of different cluster levels supported by the engine. Version 4.0 of the engine supports levels 4.0 and 3.6. Each of these levels supports different sets of CPU types, for example. This service provides that information. Table 6.128. Methods summary Name Summary list Lists the cluster levels supported by the system. 6.45.1. list GET Lists the cluster levels supported by the system. This will return a list of available cluster levels. <cluster_levels> <cluster_level id="4.0"> ... </cluster_level> ... </cluster_levels> The order of the returned cluster levels isn't guaranteed. Table 6.129. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . levels ClusterLevel[ ] Out Retrieved cluster levels. 6.45.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.46. ClusterNetwork A service to manage a specific cluster network. Table 6.130. Methods summary Name Summary get Retrieves the cluster network details. remove Unassigns the network from a cluster. update Updates the network in the cluster. 6.46.1. get GET Retrieves the cluster network details. Table 6.131. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . network Network Out The cluster network. 6.46.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.46.2. remove DELETE Unassigns the network from a cluster. 6.46.3. update PUT Updates the network in the cluster. Table 6.132. Parameters summary Name Type Direction Summary network Network In/Out The cluster network. 6.47. ClusterNetworks A service to manage cluster networks. Table 6.133. Methods summary Name Summary add Assigns the network to a cluster. list Lists the networks that are assigned to the cluster. 6.47.1. add POST Assigns the network to a cluster. Post a request like the example below to assign the network to a cluster: Use the following example in its body: <network id="123" /> Table 6.134. Parameters summary Name Type Direction Summary network Network In/Out The network object to be assigned to the cluster. 6.47.2. list GET Lists the networks that are assigned to the cluster. The order of the returned networks isn't guaranteed. Table 6.135. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of networks to return. networks Network[ ] Out The list of networks that are assigned to the cluster. 6.47.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.47.2.2. max Sets the maximum number of networks to return. If not specified, all the networks are returned. 6.48.
Clusters A service to manage clusters. Table 6.136. Methods summary Name Summary add Creates a new cluster. list Returns the list of clusters of the system. 6.48.1. add POST Creates a new cluster. This requires the name , cpu.type , and data_center attributes. Identify the data center with either the id or name attribute. With a request body like this: <cluster> <name>mycluster</name> <cpu> <type>Intel Nehalem Family</type> </cpu> <data_center id="123"/> </cluster> To create a cluster with an external network provider to be deployed on every host that is added to the cluster, send a request like this: With a request body containing a reference to the desired provider: <cluster> <name>mycluster</name> <cpu> <type>Intel Nehalem Family</type> </cpu> <data_center id="123"/> <external_network_providers> <external_provider name="ovirt-provider-ovn"/> </external_network_providers> </cluster> Table 6.137. Parameters summary Name Type Direction Summary cluster Cluster In/Out 6.48.2. list GET Returns the list of clusters of the system. The order of the returned clusters is guaranteed only if the sortby clause is included in the search parameter. Table 6.138. Parameters summary Name Type Direction Summary case_sensitive Boolean In Indicates if the search should be performed taking case into account. clusters Cluster[ ] Out filter Boolean In Indicates if the results should be filtered according to the permissions of the user. follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of clusters to return. search String In A query string used to restrict the returned clusters. 6.48.2.1. case_sensitive Indicates if the search should be performed taking case into account. The default value is true , which means that case is taken into account. To search ignoring case, set it to false . 6.48.2.2. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.48.2.3. max Sets the maximum number of clusters to return. If not specified, all the clusters are returned. 6.49. Copyable Table 6.139. Methods summary Name Summary copy 6.49.1. copy POST Table 6.140. Parameters summary Name Type Direction Summary async Boolean In Indicates if the copy should be performed asynchronously. 6.50. CpuProfile Table 6.141. Methods summary Name Summary get remove update Update the specified cpu profile in the system. 6.50.1. get GET Table 6.142. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . profile CpuProfile Out 6.50.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.50.2. remove DELETE Table 6.143. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.50.3. update PUT Update the specified cpu profile in the system. Table 6.144. Parameters summary Name Type Direction Summary async Boolean In Indicates if the update should be performed asynchronously. profile CpuProfile In/Out 6.51. CpuProfiles Table 6.145. Methods summary Name Summary add Add a new cpu profile to the system. list Returns the list of CPU profiles of the system. 6.51.1. add POST Add a new cpu profile to the system. Table 6.146. Parameters summary Name Type Direction Summary profile CpuProfile In/Out 6.51.2. 
list GET Returns the list of CPU profiles of the system. The order of the returned list of CPU profiles is random. Table 6.147. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of profiles to return. profile CpuProfile[ ] Out 6.51.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.51.2.2. max Sets the maximum number of profiles to return. If not specified, all the profiles are returned. 6.52. DataCenter A service to manage a data center. Table 6.148. Methods summary Name Summary cleanfinishedtasks Currently, the storage pool manager (SPM) fails to switch to another host if the SPM has uncleared tasks. get Get a data center. remove Removes the data center. setmaster Used for manually setting a storage domain in the data center as a master. update Updates the data center. 6.52.1. cleanfinishedtasks POST Currently, the storage pool manager (SPM) fails to switch to another host if the SPM has uncleared tasks. Clearing all finished tasks enables the SPM switching. For example, to clean all the finished tasks on a data center with ID 123 send a request like this: With a request body like this: <action/> Table 6.149. Parameters summary Name Type Direction Summary async Boolean In Indicates if the action should be performed asynchronously. 6.52.2. get GET Get a data center. An example of getting a data center: <data_center href="/ovirt-engine/api/datacenters/123" id="123"> <name>Default</name> <description>The default Data Center</description> <link href="/ovirt-engine/api/datacenters/123/clusters" rel="clusters"/> <link href="/ovirt-engine/api/datacenters/123/storagedomains" rel="storagedomains"/> <link href="/ovirt-engine/api/datacenters/123/permissions" rel="permissions"/> <link href="/ovirt-engine/api/datacenters/123/networks" rel="networks"/> <link href="/ovirt-engine/api/datacenters/123/quotas" rel="quotas"/> <link href="/ovirt-engine/api/datacenters/123/qoss" rel="qoss"/> <link href="/ovirt-engine/api/datacenters/123/iscsibonds" rel="iscsibonds"/> <local>false</local> <quota_mode>disabled</quota_mode> <status>up</status> <storage_format>v3</storage_format> <supported_versions> <version> <major>4</major> <minor>0</minor> </version> </supported_versions> <version> <major>4</major> <minor>0</minor> </version> <mac_pool href="/ovirt-engine/api/macpools/456" id="456"/> </data_center> Table 6.150. Parameters summary Name Type Direction Summary data_center DataCenter Out filter Boolean In Indicates if the results should be filtered according to the permissions of the user. follow String In Indicates which inner links should be followed . 6.52.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.52.3. remove DELETE Removes the data center. Without any special parameters, the storage domains attached to the data center are detached and then removed from the storage. If something fails when performing this operation, for example if there is no host available to remove the storage domains from the storage, the complete operation will fail. If the force parameter is true then the operation will always succeed, even if something fails while removing one storage domain, for example. The failure is just ignored and the data center is removed from the database anyway. 
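For example, a forced removal of data center 123 might look like this (a sketch that assumes the force flag is passed as a URL parameter; the data center ID is illustrative):

DELETE /ovirt-engine/api/datacenters/123?force=true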
Table 6.151. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. force Boolean In Indicates if the operation should succeed, and the storage domain removed from the database, even if something fails during the operation. 6.52.3.1. force Indicates if the operation should succeed, and the storage domain removed from the database, even if something fails during the operation. This parameter is optional, and the default value is false . 6.52.4. setmaster POST Used for manually setting a storage domain in the data center as a master. For example, for setting a storage domain with ID '456' as a master on a data center with ID '123', send a request like this: With a request body like this: <action> <storage_domain id="456"/> </action> The new master storage domain can be also specified by its name. Table 6.152. Parameters summary Name Type Direction Summary async Boolean In Indicates if the action should be performed asynchronously. storage_domain StorageDomain In The new master storage domain for the data center. 6.52.5. update PUT Updates the data center. The name , description , storage_type , version , storage_format and mac_pool elements are updatable post-creation. For example, to change the name and description of data center 123 send a request like this: With a request body like this: <data_center> <name>myupdatedname</name> <description>An updated description for the data center</description> </data_center> Table 6.153. Parameters summary Name Type Direction Summary async Boolean In Indicates if the update should be performed asynchronously. data_center DataCenter In/Out The data center that is being updated. 6.53. DataCenterNetwork A service to manage a specific data center network. Table 6.154. Methods summary Name Summary get Retrieves the data center network details. remove Removes the network. update Updates the network in the data center. 6.53.1. get GET Retrieves the data center network details. Table 6.155. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . network Network Out The data center network. 6.53.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.53.2. remove DELETE Removes the network. 6.53.3. update PUT Updates the network in the data center. Table 6.156. Parameters summary Name Type Direction Summary network Network In/Out The data center network. 6.54. DataCenterNetworks A service to manage data center networks. Table 6.157. Methods summary Name Summary add Create a new network in a data center. list Lists networks in the data center. 6.54.1. add POST Create a new network in a data center. Post a request like in the example below to create a new network in a data center with an ID of 123 . Use the following example in its body: <network> <name>mynetwork</name> </network> Table 6.158. Parameters summary Name Type Direction Summary network Network In/Out The network object to be created in the data center. 6.54.2. list GET Lists networks in the data center. The order of the returned list of networks isn't guaranteed. Table 6.159. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of networks to return. networks Network[ ] Out The list of networks which are in the data center. 6.54.2.1. 
follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.54.2.2. max Sets the maximum number of networks to return. If not specified, all the networks are returned. 6.55. DataCenters A service to manage data centers. Table 6.160. Methods summary Name Summary add Creates a new data center. list Lists the data centers. 6.55.1. add POST Creates a new data center. Creation of a new data center requires the name and local elements. For example, to create a data center named mydc that uses shared storage (NFS, iSCSI or fibre channel) send a request like this: With a request body like this: <data_center> <name>mydc</name> <local>false</local> </data_center> Table 6.161. Parameters summary Name Type Direction Summary data_center DataCenter In/Out The data center that is being added. 6.55.2. list GET Lists the data centers. The following request retrieves a representation of the data centers: The above request performed with curl : curl \ --request GET \ --cacert /etc/pki/ovirt-engine/ca.pem \ --header "Version: 4" \ --header "Accept: application/xml" \ --user "admin@internal:mypassword" \ https://myengine.example.com/ovirt-engine/api/datacenters This is what an example response could look like: <data_center href="/ovirt-engine/api/datacenters/123" id="123"> <name>Default</name> <description>The default Data Center</description> <link href="/ovirt-engine/api/datacenters/123/networks" rel="networks"/> <link href="/ovirt-engine/api/datacenters/123/storagedomains" rel="storagedomains"/> <link href="/ovirt-engine/api/datacenters/123/permissions" rel="permissions"/> <link href="/ovirt-engine/api/datacenters/123/clusters" rel="clusters"/> <link href="/ovirt-engine/api/datacenters/123/qoss" rel="qoss"/> <link href="/ovirt-engine/api/datacenters/123/iscsibonds" rel="iscsibonds"/> <link href="/ovirt-engine/api/datacenters/123/quotas" rel="quotas"/> <local>false</local> <quota_mode>disabled</quota_mode> <status>up</status> <supported_versions> <version> <major>4</major> <minor>0</minor> </version> </supported_versions> <version> <major>4</major> <minor>0</minor> </version> </data_center> Note the id code of your Default data center. This code identifies this data center in relation to other resources of your virtual environment. The data center also contains a link to the storage domains collection. The data center uses this collection to attach storage domains from the storage domains main collection. The order of the returned list of data centers is guaranteed only if the sortby clause is included in the search parameter. Table 6.162. Parameters summary Name Type Direction Summary case_sensitive Boolean In Indicates if the search performed using the search parameter should be performed taking case into account. data_centers DataCenter[ ] Out filter Boolean In Indicates if the results should be filtered according to the permissions of the user. follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of data centers to return. search String In A query string used to restrict the returned data centers. 6.55.2.1. case_sensitive Indicates if the search performed using the search parameter should be performed taking case into account. The default value is true , which means that case is taken into account. If you want to search ignoring case set it to false . 6.55.2.2. follow Indicates which inner links should be followed . 
The objects referenced by these links will be fetched as part of the current request. See here for details. 6.55.2.3. max Sets the maximum number of data centers to return. If not specified all the data centers are returned. 6.56. Disk Manages a single disk. Table 6.163. Methods summary Name Summary convert Converts disk format and/or preallocation mode. copy This operation copies a disk to the specified storage domain. export Exports a disk to an export storage domain. get Retrieves the description of the disk. move Moves a disk to another storage domain. reduce Reduces the size of the disk image. refreshlun Refreshes a direct LUN disk with up-to-date information from the storage. remove Removes a disk. sparsify Sparsify the disk. update Updates the parameters of the specified disk. 6.56.1. convert POST Converts disk format and/or preallocation mode. For example, to convert the disk format from preallocated-cow to a sparse-raw image, send a request like the following: With the following request body: <action> <disk> <sparse>true</sparse> <format>raw</format> </disk> </action> Note: In order to sparsify a disk, two conversions might be needed if the disk is on a Block Storage Domain. For example: If a disk is RAW, converting it to QCOW will result in a larger disk. In order to reduce the size, it is possible to convert the disk again to QCOW and keep the same allocation policy. Table 6.164. Parameters summary Name Type Direction Summary disk Disk In The description of the disk. follow String In Indicates which inner links should be followed . 6.56.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.56.2. copy POST This operation copies a disk to the specified storage domain. For example, a disk can be copied using the following request: With a request body like this: <action> <storage_domain id="456"/> <disk> <name>mydisk</name> </disk> </action> If the disk profile or the quota currently used by the disk are not defined for the new storage domain, they can be explicitly specified. If they are not specified, the first available disk profile and the default quota are used. For example, to specify disk profile 987 and quota 753 , send a request body like this: <action> <storage_domain id="456"/> <disk_profile id="987"/> <quota id="753"/> </action> Table 6.165. Parameters summary Name Type Direction Summary async Boolean In Indicates if the copy should be performed asynchronously. disk Disk In disk_profile DiskProfile In Disk profile for the disk in the new storage domain. filter Boolean In Indicates if the results should be filtered according to the permissions of the user. quota Quota In Quota for the disk in the new storage domain. storage_domain StorageDomain In The storage domain where the new disk is created. 6.56.2.1. disk_profile Disk profile for the disk in the new storage domain. Disk profiles are defined for storage domains, so the old disk profile will not exist in the new storage domain. If this parameter is not used, the first disk profile from the new storage domain to which the user has permissions will be assigned to the disk. 6.56.2.2. quota Quota for the disk in the new storage domain. This optional parameter can be used to specify new quota for the disk, because the current quota may not be defined for the new storage domain. 
If this parameter is not used and the old quota is not defined for the new storage domain, the default (unlimited) quota will be assigned to the disk. 6.56.2.3. storage_domain The storage domain where the new disk is created. This can be specified using the id or name attributes. For example, to copy a disk to the storage domain called mydata , send a request like this: With a request body like this: <action> <storage_domain> <name>mydata</name> </storage_domain> </action> 6.56.3. export POST Exports a disk to an export storage domain. Table 6.166. Parameters summary Name Type Direction Summary async Boolean In Indicates if the export should be performed asynchronously. filter Boolean In Indicates if the results should be filtered according to the permissions of the user. storage_domain StorageDomain In The export storage domain where the disk will be exported to. 6.56.4. get GET Retrieves the description of the disk. Table 6.167. Parameters summary Name Type Direction Summary all_content Boolean In Indicates if all of the attributes of the disk should be included in the response. disk Disk Out The description of the disk. follow String In Indicates which inner links should be followed . 6.56.4.1. all_content Indicates if all of the attributes of the disk should be included in the response. By default the following disk attributes are excluded: vms For example, to retrieve the complete representation of disk '123': 6.56.4.2. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.56.5. move POST Moves a disk to another storage domain. For example, to move the disk with identifier 123 to a storage domain with identifier 456 send the following request: With the following request body: <action> <storage_domain id="456"/> </action> If the disk profile or the quota used currently by the disk aren't defined for the new storage domain, then they can be explicitly specified. If they aren't then the first available disk profile and the default quota are used. For example, to explicitly use disk profile 987 and quota 753 send a request body like this: <action> <storage_domain id="456"/> <disk_profile id="987"/> <quota id="753"/> </action> Table 6.168. Parameters summary Name Type Direction Summary async Boolean In Indicates if the move should be performed asynchronously. disk_profile DiskProfile In Disk profile for the disk in the new storage domain. filter Boolean In Indicates if the results should be filtered according to the permissions of the user. quota Quota In Quota for the disk in the new storage domain. storage_domain StorageDomain In The storage domain where the disk will be moved to. 6.56.5.1. disk_profile Disk profile for the disk in the new storage domain. Disk profiles are defined for storage domains, so the old disk profile will not exist in the new storage domain. If this parameter is not used, the first disk profile from the new storage domain to which the user has permissions will be assigned to the disk. 6.56.5.2. quota Quota for the disk in the new storage domain. This optional parameter can be used to specify new quota for the disk, because the current quota may not be defined for the new storage domain. If this parameter is not used and the old quota is not defined for the new storage domain, the default (unlimited) quota will be assigned to the disk. 6.56.6. reduce POST Reduces the size of the disk image. Invokes reduce on the logical volume (i.e. 
this is only applicable for block storage domains). This is applicable for floating disks and disks attached to non-running virtual machines. There is no need to specify the size as the optimal size is calculated automatically. Table 6.169. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.56.7. refreshlun POST Refreshes a direct LUN disk with up-to-date information from the storage. Refreshing a direct LUN disk is useful when: The LUN was added using the API without the host parameter, and therefore does not contain any information from the storage (see DisksService::add ). New information about the LUN is available on the storage and you want to update the LUN with it. To refresh direct LUN disk 123 using host 456 , send the following request: With the following request body: <action> <host id='456'/> </action> Table 6.170. Parameters summary Name Type Direction Summary host Host In The host that will be used to refresh the direct LUN disk. 6.56.8. remove DELETE Removes a disk. Table 6.171. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.56.9. sparsify POST Sparsify the disk. Sparsification frees space in the disk image that is not used by its filesystem. As a result, the image will occupy less space on the storage. Currently sparsification works only on disks without snapshots. Disks having derived disks are also not allowed. 6.56.10. update PUT Updates the parameters of the specified disk. This operation allows updating the following floating disk properties: For Image disks: provisioned_size , alias , description , wipe_after_delete , shareable , backup and disk_profile . For LUN disks: alias , description and shareable . Cinder integration has been replaced by Managed Block Storage. For Managed Block disks: provisioned_size , alias and description . For VM attached disks, the qcow_version can also be updated. For example, a disk's update can be done by using the following request: With a request body like this: <disk> <qcow_version>qcow2_v3</qcow_version> <alias>new-alias</alias> <description>new-desc</description> </disk> Since the backend operation is asynchronous, the disk element that is returned to the user might not be synced with the changed properties. Table 6.172. Parameters summary Name Type Direction Summary disk Disk In/Out The update to apply to the disk. 6.57. DiskAttachment This service manages the attachment of a disk to a virtual machine. Table 6.173. Methods summary Name Summary get Returns the details of the attachment, including the bootable flag and link to the disk. remove Removes the disk attachment. update Update the disk attachment and the disk properties within it. 6.57.1. get GET Returns the details of the attachment, including the bootable flag and link to the disk. An example of getting a disk attachment: <disk_attachment href="/ovirt-engine/api/vms/123/diskattachments/456" id="456"> <active>true</active> <bootable>true</bootable> <interface>virtio</interface> <disk href="/ovirt-engine/api/disks/456" id="456"/> <vm href="/ovirt-engine/api/vms/123" id="123"/> </disk_attachment> Table 6.174. Parameters summary Name Type Direction Summary attachment DiskAttachment Out follow String In Indicates which inner links should be followed . 6.57.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 
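For reference, the attachment shown above is retrieved with a request of this form (the path is taken from the href attribute in the example):

GET /ovirt-engine/api/vms/123/diskattachments/456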
6.57.2. remove DELETE Removes the disk attachment. This will only detach the disk from the virtual machine, but won't remove it from the system, unless the detach_only parameter is false . An example of removing a disk attachment: DELETE /ovirt-engine/api/vms/123/diskattachments/456 Table 6.175. Parameters summary Name Type Direction Summary detach_only Boolean In Indicates if the disk should only be detached from the virtual machine, but not removed from the system. 6.57.2.1. detach_only Indicates if the disk should only be detached from the virtual machine, but not removed from the system. The default value is true , which won't remove the disk from the system. 6.57.3. update PUT Update the disk attachment and the disk properties within it. Table 6.176. Parameters summary Name Type Direction Summary disk_attachment DiskAttachment In/Out 6.58. DiskAttachments This service manages the set of disks attached to a virtual machine. Each attached disk is represented by a DiskAttachment , containing the bootable flag, the disk interface and the reference to the disk. Table 6.177. Methods summary Name Summary add Adds a new disk attachment to the virtual machine. list Lists the disks that are attached to the virtual machine. 6.58.1. add POST Adds a new disk attachment to the virtual machine. The attachment parameter can contain just a reference, if the disk already exists: <disk_attachment> <bootable>true</bootable> <pass_discard>true</pass_discard> <interface>ide</interface> <active>true</active> <disk id="123"/> </disk_attachment> Or it can contain the complete representation of the disk, if the disk doesn't exist yet: <disk_attachment> <bootable>true</bootable> <pass_discard>true</pass_discard> <interface>ide</interface> <active>true</active> <disk> <name>mydisk</name> <provisioned_size>1024</provisioned_size> ... </disk> </disk_attachment> In this case the disk will be created and then attached to the virtual machine. In both cases, use the following URL for a virtual machine with an id 345 : POST /ovirt-engine/api/vms/345/diskattachments Important The server accepts requests that do not contain the active attribute, but the effect is undefined. In some cases the disk will be automatically activated and in other cases it won't. To avoid issues it is strongly recommended to always include the active attribute with the desired value. Table 6.178. Parameters summary Name Type Direction Summary attachment DiskAttachment In/Out The disk attachment to add to the virtual machine. 6.58.2. list GET Lists the disks that are attached to the virtual machine. The order of the returned list of disk attachments isn't guaranteed. Table 6.179. Parameters summary Name Type Direction Summary attachments DiskAttachment[ ] Out A list of disk attachments that are attached to the virtual machine. follow String In Indicates which inner links should be followed . 6.58.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.59. DiskProfile Table 6.180. Methods summary Name Summary get remove update Update the specified disk profile in the system. 6.59.1. get GET Table 6.181. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . profile DiskProfile Out 6.59.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.59.2. remove DELETE Table 6.182.
Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.59.3. update PUT Update the specified disk profile in the system. Table 6.183. Parameters summary Name Type Direction Summary async Boolean In Indicates if the update should be performed asynchronously. profile DiskProfile In/Out 6.60. DiskProfiles Table 6.184. Methods summary Name Summary add Add a new disk profile to the system. list Returns the list of disk profiles of the system. 6.60.1. add POST Add a new disk profile to the system. Table 6.185. Parameters summary Name Type Direction Summary profile DiskProfile In/Out 6.60.2. list GET Returns the list of disk profiles of the system. The order of the returned list of disk profiles isn't guaranteed. Table 6.186. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of profiles to return. profile DiskProfile[ ] Out 6.60.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.60.2.2. max Sets the maximum number of profiles to return. If not specified all the profiles are returned. 6.61. DiskSnapshot Table 6.187. Methods summary Name Summary get remove 6.61.1. get GET Table 6.188. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . snapshot DiskSnapshot Out 6.61.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.61.2. remove DELETE Table 6.189. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.62. DiskSnapshots Manages the collection of disk snapshots available in a storage domain. Table 6.190. Methods summary Name Summary list Returns the list of disk snapshots of the storage domain. 6.62.1. list GET Returns the list of disk snapshots of the storage domain. The order of the returned list of disk snapshots isn't guaranteed. Table 6.191. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . include_active Boolean In If true , active snapshots are also returned. include_template Boolean In If true , template snapshots are also returned. max Integer In Sets the maximum number of snapshots to return. snapshots DiskSnapshot[ ] Out 6.62.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.62.1.2. include_active If true , active snapshots are also returned. If not specified active snapshots are not returned. 6.62.1.3. include_template If true , template snapshots are also returned. If not specified template snapshots are not returned. 6.62.1.4. max Sets the maximum number of snapshots to return. If not specified all the snapshots are returned. 6.63. Disks Manages the collection of disks available in the system. Table 6.192. Methods summary Name Summary add Adds a new floating disk. list Get list of disks. 6.63.1. add POST Adds a new floating disk. There are three types of disks that can be added - disk image, direct LUN and Managed Block disk. Cinder integration has been replaced by Managed Block Storage.
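In all three cases the disk is created by sending a POST request to the disks collection (a sketch; the path matches the disk href attributes shown in the list example later in this section), with the flavor determined by the request body:

POST /ovirt-engine/api/disks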
Adding a new image disk: When creating a new floating image Disk , the API requires the storage_domain , provisioned_size and format attributes. Note that block storage domains (i.e. storage domains with the storage type of iSCSI or FCP) do not support the combination of the raw format with sparse=true , so sparse=false must be stated explicitly. To create a new floating image disk with specified provisioned_size , format and name on a storage domain with an id 123 and enabled for incremental backup, send a request as follows: With a request body as follows: <disk> <storage_domains> <storage_domain id="123"/> </storage_domains> <name>mydisk</name> <provisioned_size>1048576</provisioned_size> <format>cow</format> <backup>incremental</backup> </disk> Adding a new direct LUN disk: When adding a new floating direct LUN via the API, there are two flavors that can be used: With a host element - in this case, the host is used for sanity checks (e.g., that the LUN is visible) and to retrieve basic information about the LUN (e.g., size and serial). Without a host element - in this case, the operation is a database-only operation, and the storage is never accessed. To create a new floating direct LUN disk with a host element with an id 123 , specified alias , type and logical_unit with an id 456 (that has the attributes address , port and target ), send a request as follows: With a request body as follows: <disk> <alias>mylun</alias> <lun_storage> <host id="123"/> <type>iscsi</type> <logical_units> <logical_unit id="456"> <address>10.35.10.20</address> <port>3260</port> <target>iqn.2017-01.com.myhost:444</target> </logical_unit> </logical_units> </lun_storage> </disk> To create a new floating direct LUN disk without using a host, remove the host element. Adding a new Cinder disk: Cinder integration has been replaced by Managed Block Storage. Adding a floating disk in order to upload disk snapshots: Since version 4.2 of the engine it is possible to upload disks with snapshots. This request should be used to create the base image of the images chain (the consecutive disk snapshots (images) should be created using the disk-attachments element when creating a snapshot). The disk has to be created with the same disk identifier and image identifier as the uploaded image; that is, the identifiers should be saved as part of the backup process. The image identifier can also be fetched using the qemu-img info command. For example, if the disk image is stored in a file named b7a4c6c5-443b-47c5-967f-6abc79675e8b/myimage.img : $ qemu-img info b7a4c6c5-443b-47c5-967f-6abc79675e8b/myimage.img image: b548366b-fb51-4b41-97be-733c887fe305 file format: qcow2 virtual size: 1.0G (1073741824 bytes) disk size: 196K cluster_size: 65536 backing file: ad58716a-1fe9-481f-815e-664de1df04eb backing file format: raw To create a disk with the disk identifier and image identifier obtained with the qemu-img info command shown above, send a request like this: With a request body as follows: <disk id="b7a4c6c5-443b-47c5-967f-6abc79675e8b"> <image_id>b548366b-fb51-4b41-97be-733c887fe305</image_id> <storage_domains> <storage_domain id="123"/> </storage_domains> <name>mydisk</name> <provisioned_size>1048576</provisioned_size> <format>cow</format> </disk> Table 6.193. Parameters summary Name Type Direction Summary disk Disk In/Out The disk. 6.63.2. list GET Get list of disks.
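For example, the list can be requested like this (a sketch; the path is the same disks collection used by the add operation above):

GET /ovirt-engine/api/disks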
You will get an XML response which will look like this one: <disks> <disk id="123"> <actions>...</actions> <name>MyDisk</name> <description>MyDisk description</description> <link href="/ovirt-engine/api/disks/123/permissions" rel="permissions"/> <link href="/ovirt-engine/api/disks/123/statistics" rel="statistics"/> <actual_size>5345845248</actual_size> <alias>MyDisk alias</alias> ... <status>ok</status> <storage_type>image</storage_type> <wipe_after_delete>false</wipe_after_delete> <disk_profile id="123"/> <quota id="123"/> <storage_domains>...</storage_domains> </disk> ... </disks> The order of the returned list of disks is guaranteed only if the sortby clause is included in the search parameter. Table 6.194. Parameters summary Name Type Direction Summary case_sensitive Boolean In Indicates if the search performed using the search parameter should be performed taking case into account. disks Disk[ ] Out List of retrieved disks. follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of disks to return. search String In A query string used to restrict the returned disks. 6.63.2.1. case_sensitive Indicates if the search performed using the search parameter should be performed taking case into account. The default value is true , which means that case is taken into account. If you want to search ignoring case, set it to false . 6.63.2.2. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.63.2.3. max Sets the maximum number of disks to return. If not specified all the disks are returned. 6.64. Domain A service to view details of an authentication domain in the system. Table 6.195. Methods summary Name Summary get Gets the authentication domain information. 6.64.1. get GET Gets the authentication domain information. Usage: GET /ovirt-engine/api/domains/5678 This will return the domain information: <domain href="/ovirt-engine/api/domains/5678" id="5678"> <name>internal-authz</name> <link href="/ovirt-engine/api/domains/5678/users" rel="users"/> <link href="/ovirt-engine/api/domains/5678/groups" rel="groups"/> <link href="/ovirt-engine/api/domains/5678/users?search={query}" rel="users/search"/> <link href="/ovirt-engine/api/domains/5678/groups?search={query}" rel="groups/search"/> </domain> Table 6.196. Parameters summary Name Type Direction Summary domain Domain Out The authentication domain. follow String In Indicates which inner links should be followed . 6.64.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.65. DomainGroup Table 6.197. Methods summary Name Summary get 6.65.1. get GET Table 6.198. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . get Group Out 6.65.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.66. DomainGroups Table 6.199. Methods summary Name Summary list Returns the list of groups. 6.66.1. list GET Returns the list of groups. The order of the returned list of groups isn't guaranteed. Table 6.200. Parameters summary Name Type Direction Summary case_sensitive Boolean In Indicates if the search performed using the search parameter should be performed taking case into account. follow String In Indicates which inner links should be followed .
groups Group[ ] Out max Integer In Sets the maximum number of groups to return. search String In A query string used to restrict the returned groups. 6.66.1.1. case_sensitive Indicates if the search performed using the search parameter should be performed taking case into account. The default value is true , which means that case is taken into account. If you want to search ignoring case set it to false . 6.66.1.2. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.66.1.3. max Sets the maximum number of groups to return. If not specified all the groups are returned. 6.67. DomainUser A service to view a domain user in the system. Table 6.201. Methods summary Name Summary get Gets the domain user information. 6.67.1. get GET Gets the domain user information. Usage: Will return the domain user information: <user href="/ovirt-engine/api/users/1234" id="1234"> <name>admin</name> <namespace>*</namespace> <principal>admin</principal> <user_name>admin@internal-authz</user_name> <domain href="/ovirt-engine/api/domains/5678" id="5678"> <name>internal-authz</name> </domain> <groups/> </user> Table 6.202. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . user User Out The domain user. 6.67.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.68. DomainUserGroups A service that shows a user's group membership in the AAA extension. Table 6.203. Methods summary Name Summary list Returns the list of groups that the user is a member of. 6.68.1. list GET Returns the list of groups that the user is a member of. Table 6.204. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . groups Group[ ] Out The list of groups that the user is a member of. 6.68.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.69. DomainUsers A service to list all domain users in the system. Table 6.205. Methods summary Name Summary list List all the users in the domain. 6.69.1. list GET List all the users in the domain. Usage: Will return the list of users in the domain: <users> <user href="/ovirt-engine/api/domains/5678/users/1234" id="1234"> <name>admin</name> <namespace>*</namespace> <principal>admin</principal> <user_name>admin@internal-authz</user_name> <domain href="/ovirt-engine/api/domains/5678" id="5678"> <name>internal-authz</name> </domain> <groups/> </user> </users> The order of the returned list of users isn't guaranteed. Table 6.206. Parameters summary Name Type Direction Summary case_sensitive Boolean In Indicates if the search performed using the search parameter should be performed taking case into account. follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of users to return. search String In A query string used to restrict the returned users. users User[ ] Out The list of users in the domain. 6.69.1.1. case_sensitive Indicates if the search performed using the search parameter should be performed taking case into account. The default value is true , which means that case is taken into account. If you want to search ignoring case set it to false . 6.69.1.2. 
follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.69.1.3. max Sets the maximum number of users to return. If not specified all the users are returned. 6.70. Domains A service to list all authentication domains in the system. Table 6.207. Methods summary Name Summary list List all the authentication domains in the system. 6.70.1. list GET List all the authentication domains in the system. Usage: GET /ovirt-engine/api/domains This will return the list of domains: <domains> <domain href="/ovirt-engine/api/domains/5678" id="5678"> <name>internal-authz</name> <link href="/ovirt-engine/api/domains/5678/users" rel="users"/> <link href="/ovirt-engine/api/domains/5678/groups" rel="groups"/> <link href="/ovirt-engine/api/domains/5678/users?search={query}" rel="users/search"/> <link href="/ovirt-engine/api/domains/5678/groups?search={query}" rel="groups/search"/> </domain> </domains> The order of the returned list of domains isn't guaranteed. Table 6.208. Parameters summary Name Type Direction Summary domains Domain[ ] Out The list of domains. follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of domains to return. 6.70.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.70.1.2. max Sets the maximum number of domains to return. If not specified all the domains are returned. 6.71. EngineKatelloErrata A service to manage Katello errata assigned to the engine. The information is retrieved from Katello. Table 6.209. Methods summary Name Summary list Retrieves the representation of the Katello errata. 6.71.1. list GET Retrieves the representation of the Katello errata. You will receive a response in XML like this one: <katello_errata> <katello_erratum href="/ovirt-engine/api/katelloerrata/123" id="123"> <name>RHBA-2013:XYZ</name> <description>The description of the erratum</description> <title>some bug fix update</title> <type>bugfix</type> <issued>2013-11-20T02:00:00.000+02:00</issued> <solution>Few guidelines regarding the solution</solution> <summary>Updated packages that fix one bug are now available for XYZ</summary> <packages> <package> <name>libipa_hbac-1.9.2-82.11.el6_4.i686</name> </package> ... </packages> </katello_erratum> ... </katello_errata> The order of the returned list of errata isn't guaranteed. Table 6.210. Parameters summary Name Type Direction Summary errata KatelloErratum[ ] Out A representation of Katello errata. follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of errata to return. 6.71.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.71.1.2. max Sets the maximum number of errata to return. If not specified all the errata are returned. 6.72. Event A service to manage an event in the system. Table 6.211. Methods summary Name Summary get Get an event. remove Removes an event from the internal audit log. 6.72.1. get GET Get an event.
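The request takes this form (a sketch; the event ID 123 matches the href in the example response below):

GET /ovirt-engine/api/events/123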
An example of getting an event: <event href="/ovirt-engine/api/events/123" id="123"> <description>Host example.com was added by admin@internal-authz.</description> <code>42</code> <correlation_id>135</correlation_id> <custom_id>-1</custom_id> <flood_rate>30</flood_rate> <origin>oVirt</origin> <severity>normal</severity> <time>2016-12-11T11:13:44.654+02:00</time> <cluster href="/ovirt-engine/api/clusters/456" id="456"/> <host href="/ovirt-engine/api/hosts/789" id="789"/> <user href="/ovirt-engine/api/users/987" id="987"/> </event> Note that the number of fields changes according to the information that resides on the event. For example, for storage domain related events you will get the storage domain reference, as well as the reference for the data center this storage domain resides in. Table 6.212. Parameters summary Name Type Direction Summary event Event Out follow String In Indicates which inner links should be followed . 6.72.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.72.2. remove DELETE Removes an event from the internal audit log. An event can be removed by sending the following request: DELETE /ovirt-engine/api/events/123 Table 6.213. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.73. EventSubscription A service to manage a specific event-subscription in the system. Table 6.214. Methods summary Name Summary get Gets the information about the event-subscription. remove Removes the event-subscription from the system. 6.73.1. get GET Gets the information about the event-subscription. For example, to retrieve the information about the subscription of user '123' to the event 'vm_console_detected': GET /ovirt-engine/api/users/123/event-subscriptions/vm_console_detected <event-subscription href="/ovirt-engine/api/users/123/event-subscriptions/vm_console_detected"> <event>vm_console_detected</event> <notification_method>smtp</notification_method> <user href="/ovirt-engine/api/users/123" id="123"/> <address>[email protected]</address> </event-subscription> Table 6.215. Parameters summary Name Type Direction Summary event_subscription EventSubscription Out The event-subscription. 6.73.2. remove DELETE Removes the event-subscription from the system. For example, to remove user 123's subscription to the vm_console_detected event: DELETE /ovirt-engine/api/users/123/event-subscriptions/vm_console_detected Table 6.216. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.74. EventSubscriptions Represents a service to manage the collection of event-subscriptions of a user. Table 6.217. Methods summary Name Summary add Add a new event-subscription to the system. list List the event-subscriptions for the provided user. 6.74.1. add POST Add a new event-subscription to the system. An event-subscription is always added in the context of a user. For example, to add a new event-subscription for host_high_cpu_use for user 123 , and have the notification sent to the e-mail address [email protected] , send a request like this: POST /ovirt-engine/api/users/123/eventsubscriptions With a request body like this: <event_subscription> <event>host_high_cpu_use</event> <address>[email protected]</address> </event_subscription> The event name will become the ID of the new event-subscription entity: GET ... /api/users/123/eventsubscriptions/host_high_cpu_use Note that no user id is provided in the request body. This is because the user-id (in this case 123) is already known to the API from the context.
Note also that the event-subscription entity contains a notification-method field, but it is not provided in the request body either. This is because currently it is always set to SMTP, as SNMP notifications are still unsupported by the API layer. Table 6.218. Parameters summary Name Type Direction Summary event_subscription EventSubscription In/Out The added event-subscription. 6.74.2. list GET List the event-subscriptions for the provided user. For example, to list the event-subscriptions for user 123 : GET /ovirt-engine/api/users/123/event-subscriptions <event-subscriptions> <event-subscription href="/ovirt-engine/api/users/123/event-subscriptions/host_install_failed"> <event>host_install_failed</event> <notification_method>smtp</notification_method> <user href="/ovirt-engine/api/users/123" id="123"/> <address>[email protected]</address> </event-subscription> <event-subscription href="/ovirt-engine/api/users/123/event-subscriptions/vm_paused"> <event>vm_paused</event> <notification_method>smtp</notification_method> <user href="/ovirt-engine/api/users/123" id="123"/> <address>[email protected]</address> </event-subscription> </event-subscriptions> Table 6.219. Parameters summary Name Type Direction Summary event_subscriptions EventSubscription[ ] Out List of the event-subscriptions for the specified user. follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of event-subscriptions to return. 6.74.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.74.2.2. max Sets the maximum number of event-subscriptions to return. If not specified all the event-subscriptions are returned. 6.75. Events A service to manage events in the system. Table 6.220. Methods summary Name Summary add Adds an external event to the internal audit log. list Get list of events. undelete 6.75.1. add POST Adds an external event to the internal audit log. This is intended for integration with external systems that detect or produce events relevant for the administrator of the system. For example, an external monitoring tool may be able to detect that a file system is full inside the guest operating system of a virtual machine. This event can be added to the internal audit log by sending a request like this: POST /ovirt-engine/api/events Events can also be linked to specific objects. For example, the above event could be linked to the specific virtual machine where it happened, using the vm link: Note When using links, like the vm in the example, only the id attribute is accepted. The name attribute, if provided, is simply ignored. Table 6.221. Parameters summary Name Type Direction Summary event Event In/Out 6.75.2. list GET Get list of events.
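The list is retrieved with a plain GET on the events collection (a sketch; the path matches the event href attributes shown in the response that follows):

GET /ovirt-engine/api/events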
To the above request we get the following response: <events> <event href="/ovirt-engine/api/events/2" id="2"> <description>User admin@internal-authz logged out.</description> <code>31</code> <correlation_id>1e892ea9</correlation_id> <custom_id>-1</custom_id> <flood_rate>30</flood_rate> <origin>oVirt</origin> <severity>normal</severity> <time>2016-09-14T12:14:34.541+02:00</time> <user href="/ovirt-engine/api/users/57d91d48-00da-0137-0138-000000000244" id="57d91d48-00da-0137-0138-000000000244"/> </event> <event href="/ovirt-engine/api/events/1" id="1"> <description>User admin logged in.</description> <code>30</code> <correlation_id>1fbd81f4</correlation_id> <custom_id>-1</custom_id> <flood_rate>30</flood_rate> <origin>oVirt</origin> <severity>normal</severity> <time>2016-09-14T11:54:35.229+02:00</time> <user href="/ovirt-engine/api/users/57d91d48-00da-0137-0138-000000000244" id="57d91d48-00da-0137-0138-000000000244"/> </event> </events> The following events occur: id="1" - The API logs in the admin user account. id="2" - The API logs out of the admin user account. The order of the returned list of events is always guaranteed. If the sortby clause is included in the search parameter, then the events will be ordered according to that clause. If the sortby clause isn't included, then the events will be sorted by the numeric value of the id attribute, starting with the highest value. This, combined with the max parameter, simplifies obtaining the most recent event: GET /ovirt-engine/api/events?max=1 Table 6.222. Parameters summary Name Type Direction Summary case_sensitive Boolean In Indicates if the search performed using the search parameter should be performed taking case into account. events Event[ ] Out follow String In Indicates which inner links should be followed . from Integer In Indicates the event index after which events should be returned. max Integer In Sets the maximum number of events to return. search String In The events service provides search queries similar to other resource services. 6.75.2.1. case_sensitive Indicates if the search performed using the search parameter should be performed taking case into account. The default value is true , which means that case is taken into account. If you want to search ignoring case set it to false . 6.75.2.2. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.75.2.3. from Indicates the event index after which events should be returned. The indexes of events are strictly increasing, so when this parameter is used only the events with greater indexes will be returned. For example, the following request will return only the events with indexes greater than 123 : GET /ovirt-engine/api/events?from=123 This parameter is optional, and if not specified then the first event returned will be the most recently generated. 6.75.2.4. max Sets the maximum number of events to return. If not specified all the events are returned. 6.75.2.5. search The events service provides search queries similar to other resource services. We can search by providing specific severity.
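Such a search is expressed through the search query parameter (a sketch; the '=' in the query is URL-encoded as %3D):

GET /ovirt-engine/api/events?search=severity%3Dnormal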
To the above request we get a list of events whose severity is equal to normal : <events> <event href="/ovirt-engine/api/events/2" id="2"> <description>User admin@internal-authz logged out.</description> <code>31</code> <correlation_id>1fbd81f4</correlation_id> <custom_id>-1</custom_id> <flood_rate>30</flood_rate> <origin>oVirt</origin> <severity>normal</severity> <time>2016-09-14T11:54:35.229+02:00</time> <user href="/ovirt-engine/api/users/57d91d48-00da-0137-0138-000000000244" id="57d91d48-00da-0137-0138-000000000244"/> </event> <event href="/ovirt-engine/api/events/1" id="1"> <description>Affinity Rules Enforcement Manager started.</description> <code>10780</code> <custom_id>-1</custom_id> <flood_rate>30</flood_rate> <origin>oVirt</origin> <severity>normal</severity> <time>2016-09-14T11:52:18.861+02:00</time> </event> </events> A virtualization environment generates a large number of events after a period of time. However, the API only displays a default number of events for one search query. To display more than the default, the API separates results into pages with the page command in a search query. The following search query tells the API to paginate results using a page value in combination with the sortby clause: sortby time asc page 1 The following example paginates event resources. The URL-encoded request is: GET /ovirt-engine/api/events?search=sortby%20time%20asc%20page%201 Increase the page value to view the next page of results. 6.75.3. undelete POST Table 6.223. Parameters summary Name Type Direction Summary async Boolean In Indicates if the un-delete should be performed asynchronously. 6.76. ExternalComputeResource Manages a single external compute resource. Compute resource is a term of the external host provider. The external provider also needs to know where the provisioned host needs to register. The login details of the engine are saved as a compute resource on the external provider side. Table 6.224. Methods summary Name Summary get Retrieves external compute resource details. 6.76.1. get GET Retrieves external compute resource details. For example, to get the details of compute resource 234 of provider 123 , send a request like this: GET /ovirt-engine/api/externalhostproviders/123/computeresources/234 It will return a response like this: <external_compute_resource href="/ovirt-engine/api/externalhostproviders/123/computeresources/234" id="234"> <name>hostname</name> <provider>oVirt</provider> <url>https://hostname/api</url> <user>admin@internal</user> <external_host_provider href="/ovirt-engine/api/externalhostproviders/123" id="123"/> </external_compute_resource> Table 6.225. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . resource ExternalComputeResource Out External compute resource information. 6.76.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.77. ExternalComputeResources Manages a collection of external compute resources. Compute resource is a term of the external host provider. The external provider also needs to know where the provisioned host needs to register. The login details of the engine are saved as a compute resource on the external provider side. Table 6.226. Methods summary Name Summary list Retrieves a list of external compute resources. 6.77.1. list GET Retrieves a list of external compute resources.
For example, to retrieve the compute resources of external host provider 123 , send a request like this: GET /ovirt-engine/api/externalhostproviders/123/computeresources It will return a response like this: <external_compute_resources> <external_compute_resource href="/ovirt-engine/api/externalhostproviders/123/computeresources/234" id="234"> <name>hostname</name> <provider>oVirt</provider> <url>https://address/api</url> <user>admin@internal</user> <external_host_provider href="/ovirt-engine/api/externalhostproviders/123" id="123"/> </external_compute_resource> ... </external_compute_resources> The order of the returned list of compute resources isn't guaranteed. Table 6.227. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of resources to return. resources ExternalComputeResource[ ] Out List of external compute resources. 6.77.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.77.1.2. max Sets the maximum number of resources to return. If not specified all the resources are returned. 6.78. ExternalDiscoveredHost This service manages a single discovered host. Table 6.228. Methods summary Name Summary get Get discovered host info. 6.78.1. get GET Get discovered host info. Retrieves information about a host that is managed in an external provider management system, such as Foreman. The information includes hostname, address, subnet, base image and more. For example, to get the details of host 234 from provider 123 , send a request like this: GET /ovirt-engine/api/externalhostproviders/123/discoveredhosts/234 The result will be like this: <external_discovered_host href="/ovirt-engine/api/externalhostproviders/123/discoveredhosts/234" id="234"> <name>mac001a4ad04040</name> <ip>10.34.67.43</ip> <last_report>2017-04-24 11:05:41 UTC</last_report> <mac>00:1a:4a:d0:40:40</mac> <subnet_name>sat0</subnet_name> <external_host_provider href="/ovirt-engine/api/externalhostproviders/123" id="123"/> </external_discovered_host> Table 6.229. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . host ExternalDiscoveredHost Out Host's hardware and config information. 6.78.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.79. ExternalDiscoveredHosts This service manages external discovered hosts. Table 6.230. Methods summary Name Summary list Get list of discovered hosts' information. 6.79.1. list GET Get list of discovered hosts' information. Discovered hosts are fetched from third-party providers such as Foreman.
To list all the discovered hosts for provider 123 , send the following request: GET /ovirt-engine/api/externalhostproviders/123/discoveredhosts The response will be like this: <external_discovered_hosts> <external_discovered_host href="/ovirt-engine/api/externalhostproviders/123/discoveredhosts/456" id="456"> <name>mac001a4ad04031</name> <ip>10.34.67.42</ip> <last_report>2017-04-24 11:05:41 UTC</last_report> <mac>00:1a:4a:d0:40:31</mac> <subnet_name>sat0</subnet_name> <external_host_provider href="/ovirt-engine/api/externalhostproviders/123" id="123"/> </external_discovered_host> <external_discovered_host href="/ovirt-engine/api/externalhostproviders/123/discoveredhosts/789" id="789"> <name>mac001a4ad04040</name> <ip>10.34.67.43</ip> <last_report>2017-04-24 11:05:41 UTC</last_report> <mac>00:1a:4a:d0:40:40</mac> <subnet_name>sat0</subnet_name> <external_host_provider href="/ovirt-engine/api/externalhostproviders/123" id="123"/> </external_discovered_host> ... </external_discovered_hosts> The order of the returned list of hosts isn't guaranteed. Table 6.231. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . hosts ExternalDiscoveredHost[ ] Out List of discovered hosts. max Integer In Sets the maximum number of hosts to return. 6.79.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.79.1.2. max Sets the maximum number of hosts to return. If not specified all the hosts are returned. 6.80. ExternalHost Table 6.232. Methods summary Name Summary get 6.80.1. get GET Table 6.233. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . host ExternalHost Out 6.80.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.81. ExternalHostGroup This service manages the information of a single host group. Host group is a term of the host provider - the host group includes provision details that are applied to newly discovered hosts. Information such as subnet, operating system, domain, etc. Table 6.234. Methods summary Name Summary get Get host group information. 6.81.1. get GET Get host group information. For example, to get the details of hostgroup 234 of provider 123 , send a request like this: GET /ovirt-engine/api/externalhostproviders/123/hostgroups/234 It will return a response like this: <external_host_group href="/ovirt-engine/api/externalhostproviders/123/hostgroups/234" id="234"> <name>rhel7</name> <architecture_name>x86_64</architecture_name> <domain_name>s.com</domain_name> <operating_system_name>RedHat 7.3</operating_system_name> <subnet_name>sat0</subnet_name> <external_host_provider href="/ovirt-engine/api/externalhostproviders/123" id="123"/> </external_host_group> Table 6.235. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . group ExternalHostGroup Out Host group information. 6.81.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.82. ExternalHostGroups This service manages hostgroups. Table 6.236. Methods summary Name Summary list Get host groups list from external host provider. 6.82.1. list GET Get host groups list from external host provider. Host group is a term of host providers - the host group includes provision details. This API returns all possible hostgroups exposed by the external provider.
For example, to get the details of all host groups of provider 123 , send a request like this: GET /ovirt-engine/api/externalhostproviders/123/hostgroups The response will be like this: <external_host_groups> <external_host_group href="/ovirt-engine/api/externalhostproviders/123/hostgroups/234" id="234"> <name>rhel7</name> <architecture_name>x86_64</architecture_name> <domain_name>example.com</domain_name> <operating_system_name>RedHat 7.3</operating_system_name> <subnet_name>sat0</subnet_name> <external_host_provider href="/ovirt-engine/api/externalhostproviders/123" id="123"/> </external_host_group> ... </external_host_groups> The order of the returned list of host groups isn't guaranteed. Table 6.237. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . groups ExternalHostGroup[ ] Out List of all hostgroups available for the external host provider. max Integer In Sets the maximum number of groups to return. 6.82.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.82.1.2. max Sets the maximum number of groups to return. If not specified all the groups are returned. 6.83. ExternalHostProvider Represents an external host provider, such as Foreman or Satellite. See Foreman documentation for details. See Satellite documentation for details. Table 6.238. Methods summary Name Summary get Get external host provider information. A host provider, Foreman or Satellite, can be set as an external provider in oVirt. importcertificates Import the SSL certificates of the external host provider. remove testconnectivity In order to test connectivity for an external provider, we need to run the following request, where 123 is the id of a provider. update Update the specified external host provider in the system. 6.83.1. get GET Get external host provider information. A host provider, Foreman or Satellite, can be set as an external provider in oVirt. To see details about specific host providers attached to oVirt, use this API. For example, to get the details of host provider 123 , send a request like this: GET /ovirt-engine/api/externalhostproviders/123 The response will be like this: <external_host_provider href="/ovirt-engine/api/externalhostproviders/123" id="123"> <name>mysatellite</name> <requires_authentication>true</requires_authentication> <url>https://mysatellite.example.com</url> <username>admin</username> </external_host_provider> Table 6.239. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . provider ExternalHostProvider Out 6.83.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.83.2. importcertificates POST Import the SSL certificates of the external host provider. Table 6.240. Parameters summary Name Type Direction Summary certificates Certificate[ ] In 6.83.3. remove DELETE Table 6.241. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.83.4. testconnectivity POST In order to test connectivity for an external provider, we need to run the following request, where 123 is the id of a provider: POST /ovirt-engine/api/externalhostproviders/123/testconnectivity Table 6.242. Parameters summary Name Type Direction Summary async Boolean In Indicates if the test should be performed asynchronously. 6.83.5. update PUT Update the specified external host provider in the system. Table 6.243.
Parameters summary Name Type Direction Summary async Boolean In Indicates if the update should be performed asynchronously. provider ExternalHostProvider In/Out 6.84. ExternalHostProviders Table 6.244. Methods summary Name Summary add Add a new external host provider to the system. list Returns the list of external host providers. 6.84.1. add POST Add a new external host provider to the system. Table 6.245. Parameters summary Name Type Direction Summary provider ExternalHostProvider In/Out 6.84.2. list GET Returns the list of external host providers. The order of the returned list of host providers isn't guaranteed. Table 6.246. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of providers to return. providers ExternalHostProvider[ ] Out search String In A query string used to restrict the returned external host providers. 6.84.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.84.2.2. max Sets the maximum number of providers to return. If not specified all the providers are returned. 6.85. ExternalHosts Table 6.247. Methods summary Name Summary list Return the list of external hosts. 6.85.1. list GET Return the list of external hosts. The order of the returned list of hosts isn't guaranteed. Table 6.248. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . hosts ExternalHost[ ] Out max Integer In Sets the maximum number of hosts to return. 6.85.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.85.1.2. max Sets the maximum number of hosts to return. If not specified all the hosts are returned. 6.86. ExternalNetworkProviderConfiguration Describes how an external network provider is provisioned by the system on the host. Table 6.249. Methods summary Name Summary get Returns the information about an external network provider on the host. 6.86.1. get GET Returns the information about an external network provider on the host. Table 6.250. Parameters summary Name Type Direction Summary configuration ExternalNetworkProviderConfiguration Out follow String In Indicates which inner links should be followed . 6.86.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.87. ExternalNetworkProviderConfigurations A service to list all external network providers provisioned by the system on the host. Table 6.251. Methods summary Name Summary list Returns the list of all external network providers on the host. 6.87.1. list GET Returns the list of all external network providers on the host. The order of the returned list of networks is not guaranteed. Table 6.252. Parameters summary Name Type Direction Summary configurations ExternalNetworkProviderConfiguration[ ] Out follow String In Indicates which inner links should be followed . 6.87.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.88. ExternalProvider Provides capability to manage external providers. Table 6.253. Methods summary Name Summary importcertificates Import the SSL certificates of the external host provider. 
testconnectivity In order to test connectivity for an external provider, we need to run the following request, where 123 is the id of a provider. 6.88.1. importcertificates POST Import the SSL certificates of the external host provider. Table 6.254. Parameters summary Name Type Direction Summary certificates Certificate[ ] In 6.88.2. testconnectivity POST In order to test connectivity for an external provider, we need to run the following request, where 123 is the id of a provider. Table 6.255. Parameters summary Name Type Direction Summary async Boolean In Indicates if the test should be performed asynchronously. 6.89. ExternalProviderCertificate A service to view a specific certificate for an external provider. Table 6.256. Methods summary Name Summary get Get a specific certificate. 6.89.1. get GET Get a specific certificate. Here is a sample response: <certificate id="0"> <organization>provider.example.com</organization> <subject>CN=provider.example.com</subject> <content>...</content> </certificate> Table 6.257. Parameters summary Name Type Direction Summary certificate Certificate Out The details of the certificate. follow String In Indicates which inner links should be followed . 6.89.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.90. ExternalProviderCertificates A service to view certificates for an external provider. Table 6.258. Methods summary Name Summary list Returns the chain of certificates presented by the external provider. 6.90.1. list GET Returns the chain of certificates presented by the external provider. Here is a sample response: <certificates> <certificate id="789">...</certificate> ... </certificates> The order of the returned certificates is always guaranteed to be the sign order: the first is the certificate of the server itself, the second the certificate of the CA that signs the first, and so on. Table 6.259. Parameters summary Name Type Direction Summary certificates Certificate[ ] Out List containing certificate details. follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of certificates to return. 6.90.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.90.1.2. max Sets the maximum number of certificates to return. If not specified all the certificates are returned. 6.91. ExternalTemplateImports Provides capability to import external templates. Currently supports OVA only. Table 6.260. Methods summary Name Summary add This operation is used to import a template from an external hypervisor. 6.91.1. add POST This operation is used to import a template from an external hypervisor. For example, import of a template OVA can be facilitated using the following request: POST /ovirt-engine/api/externaltemplateimports With a request body of type ExternalTemplateImport , for example: <external_template_import> <template> <name>my_template</name> </template> <cluster id="2b18aca2-4469-11eb-9449-482ae35a5f83" /> <storage_domain id="8bb5ade5-e988-4000-8b93-dbfc6717fe50" /> <url>ova:///mnt/ova/ova_template.ova</url> <host id="8bb5ade5-e988-4000-8b93-dbfc6717fe50" /> </external_template_import> Table 6.261. Parameters summary Name Type Direction Summary import ExternalTemplateImport In/Out 6.92. ExternalVmImports Provides capability to import external virtual machines. Table 6.262.
Methods summary Name Summary add This operation is used to import a virtual machine from an external hypervisor, such as KVM, Xen, or VMware. 6.92.1. add POST This operation is used to import a virtual machine from an external hypervisor, such as KVM, Xen, or VMware. For example, import of a virtual machine from VMware can be facilitated using the following request: POST /ovirt-engine/api/externalvmimports With a request body of type ExternalVmImport , for example: <external_vm_import> <vm> <name>my_vm</name> </vm> <cluster id="360014051136c20574f743bdbd28177fd" /> <storage_domain id="8bb5ade5-e988-4000-8b93-dbfc6717fe50" /> <name>vm_name_as_is_in_vmware</name> <sparse>true</sparse> <username>vmware_user</username> <password>123456</password> <provider>VMWARE</provider> <url>vpx://wmware_user@vcenter-host/DataCenter/Cluster/esxi-host?no_verify=1</url> <drivers_iso id="virtio-win-1.6.7.iso" /> </external_vm_import> Table 6.263. Parameters summary Name Type Direction Summary import ExternalVmImport In/Out 6.93. FenceAgent A service to manage a fence agent for a specific host. Table 6.264. Methods summary Name Summary get Gets details of this fence agent. remove Removes a fence agent for a specific host. update Update a fencing-agent. 6.93.1. get GET Gets details of this fence agent. Here is a sample response: <agent id="0"> <type>apc</type> <order>1</order> <ip>192.168.1.101</ip> <user>user</user> <password>xxx</password> <port>9</port> <options>name1=value1, name2=value2</options> </agent> Table 6.265. Parameters summary Name Type Direction Summary agent Agent Out Fence agent details. follow String In Indicates which inner links should be followed . 6.93.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.93.2. remove DELETE Removes a fence agent for a specific host. Table 6.266. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.93.3. update PUT Update a fencing-agent. Table 6.267. Parameters summary Name Type Direction Summary agent Agent In/Out Fence agent details. async Boolean In Indicates if the update should be performed asynchronously. 6.94. FenceAgents A service to manage fence agents for a specific host. Table 6.268. Methods summary Name Summary add Add a new fencing-agent to the host. list Returns the list of fencing agents configured for the host. 6.94.1. add POST Add a new fencing-agent to the host. apc, bladecenter, wti fencing agent/s sample request: <agent> <type>apc</type> <order>1</order> <ip>192.168.1.101</ip> <user>user</user> <password>xxx</password> <port>9</port> <options>slot=7[,name1=value1, name2=value2,...]</options> </agent> apc_snmp, hpblade, ilo, ilo2, ilo_ssh, redfish, rsa fencing agent/s sample request: <agent> <type>apc_snmp</type> <order>1</order> <ip>192.168.1.101</ip> <user>user</user> <password>xxx</password> <port>9</port> <options>[name1=value1, name2=value2,...]</options> </agent> cisco_ucs, drac5, eps fencing agent/s sample request: <agent> <type>cisco_ucs</type> <order>1</order> <ip>192.168.1.101</ip> <user>user</user> <password>xxx</password> <options>slot=7[,name1=value1, name2=value2,...]</options> </agent> drac7, ilo3, ilo4, ipmilan, rsb fencing agent/s sample request: <agent> <type>drac7</type> <order>1</order> <ip>192.168.1.101</ip> <user>user</user> <password>xxx</password> <options>[name1=value1, name2=value2,...]</options> </agent> Table 6.269.
Parameters summary Name Type Direction Summary agent Agent In/Out 6.94.2. list GET Returns the list of fencing agents configured for the host. Here is a sample response: <agents> <agent id="0"> <type>apc</type> <order>1</order> <ip>192.168.1.101</ip> <user>user</user> <password>xxx</password> <port>9</port> <options>name1=value1, name2=value2</options> </agent> </agents> The order of the returned list of fencing agents isn't guaranteed. Table 6.270. Parameters summary Name Type Direction Summary agents Agent[ ] Out List of fence agent details. follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of agents to return. 6.94.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.94.2.2. max Sets the maximum number of agents to return. If not specified all the agents are returned. 6.95. File Table 6.271. Methods summary Name Summary get 6.95.1. get GET Table 6.272. Parameters summary Name Type Direction Summary file File Out follow String In Indicates which inner links should be followed . 6.95.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.96. Files Provides a way for clients to list available files. This service is specifically targeted to ISO storage domains, which contain ISO images and virtual floppy disks (VFDs) that an administrator uploads. The addition of a CD-ROM device to a virtual machine requires an ISO image from the files of an ISO storage domain. Table 6.273. Methods summary Name Summary list Returns the list of ISO images and virtual floppy disks available in the storage domain. 6.96.1. list GET Returns the list of ISO images and virtual floppy disks available in the storage domain. The order of the returned list is not guaranteed. If the refresh parameter is false , the returned list may not reflect recent changes to the storage domain; for example, it may not contain a new ISO file that was recently added. This is because the server caches the list of files to improve performance. To get the very latest results, set the refresh parameter to true . The default value of the refresh parameter is true , but it can be changed using the configuration value ForceRefreshDomainFilesByDefault : engine-config -s ForceRefreshDomainFilesByDefault=false Important Setting the value of the refresh parameter to true has an impact on the performance of the server. Use it only if necessary. Table 6.274. Parameters summary Name Type Direction Summary case_sensitive Boolean In Indicates if the search performed using the search parameter should take case into account. file File[ ] Out follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of files to return. refresh Boolean In Indicates whether the list of files should be refreshed from the storage domain, rather than showing cached results that are updated at certain intervals. search String In A query string used to restrict the returned files. 6.96.1.1. case_sensitive Indicates if the search performed using the search parameter should take case into account. The default value is true . 6.96.1.2. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.96.1.3. max Sets the maximum number of files to return. If not specified, all the files are returned.
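As an illustrative sketch of such a listing (the storage domain identifier 123 is hypothetical; the files collection lives under the storage domain resource), a request that forces a refresh of the cached file list could look like this:

GET /ovirt-engine/api/storagedomains/123/files?refresh=true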
6.97. Filter Table 6.275. Methods summary Name Summary get remove 6.97.1. get GET Table 6.276. Parameters summary Name Type Direction Summary filter Boolean In Indicates if the results should be filtered according to the permissions of the user. follow String In Indicates which inner links should be followed . result Filter Out 6.97.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.97.2. remove DELETE Table 6.277. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.98. Filters Manages the filters used by a scheduling policy. Table 6.278. Methods summary Name Summary add Add a filter to a specified user-defined scheduling policy. list Returns the list of filters used by the scheduling policy. 6.98.1. add POST Add a filter to a specified user-defined scheduling policy. Table 6.279. Parameters summary Name Type Direction Summary filter Filter In/Out 6.98.2. list GET Returns the list of filters used by the scheduling policy. The order of the returned list of filters isn't guaranteed. Table 6.280. Parameters summary Name Type Direction Summary filter Boolean In Indicates if the results should be filtered according to the permissions of the user. filters Filter[ ] Out follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of filters to return. 6.98.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.98.2.2. max Sets the maximum number of filters to return. If not specified all the filters are returned. 6.99. Follow 6.100. GlusterBrick This service manages a single gluster brick. Table 6.281. Methods summary Name Summary get Get details of a brick. remove Removes a brick. replace Replaces this brick with a new one. 6.100.1. get GET Get details of a brick. Retrieves status details of the brick from the underlying gluster volume when the header All-Content is set to true . This is the equivalent of running gluster volume status <volumename> <brickname> detail . For example, to get the details of brick 234 of gluster volume 123 , send a request like this: It will return a response body like this: <brick id="234"> <name>host1:/rhgs/data/brick1</name> <brick_dir>/rhgs/data/brick1</brick_dir> <server_id>111</server_id> <status>up</status> <device>/dev/mapper/RHGS_vg1-lv_vmaddldisks</device> <fs_name>xfs</fs_name> <gluster_clients> <gluster_client> <bytes_read>2818417648</bytes_read> <bytes_written>1384694844</bytes_written> <client_port>1011</client_port> <host_name>client2</host_name> </gluster_client> </gluster_clients> <memory_pools> <memory_pool> <name>data-server:fd_t</name> <alloc_count>1626348</alloc_count> <cold_count>1020</cold_count> <hot_count>4</hot_count> <max_alloc>23</max_alloc> <max_stdalloc>0</max_stdalloc> <padded_size>140</padded_size> <pool_misses>0</pool_misses> </memory_pool> </memory_pools> <mnt_options>rw,seclabel,noatime,nodiratime,attr2,inode64,sunit=512,swidth=2048,noquota</mnt_options> <pid>25589</pid> <port>49155</port> </brick> Table 6.282. Parameters summary Name Type Direction Summary brick GlusterBrick Out follow String In Indicates which inner links should be followed . 6.100.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request.
See here for details. 6.100.2. remove DELETE Removes a brick. Removes a brick from the underlying gluster volume and deletes entries from the database. This can be used only when removing a single brick without data migration. To remove multiple bricks, or to remove bricks with data migration, use migrate instead. For example, to delete brick 234 from gluster volume 123 , send a request like this: Table 6.283. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.100.3. replace POST Replaces this brick with a new one. Important This operation has been deprecated since version 3.5 of the engine and will be removed in the future. Use add brick(s) and migrate brick(s) instead. Table 6.284. Parameters summary Name Type Direction Summary async Boolean In Indicates if the replacement should be performed asynchronously. force Boolean In 6.101. GlusterBricks This service manages the gluster bricks in a gluster volume. Table 6.285. Methods summary Name Summary activate Activate the bricks post data migration of remove brick operation. add Adds a list of bricks to gluster volume. list Lists the bricks of a gluster volume. migrate Start migration of data prior to removing bricks. remove Removes bricks from gluster volume. stopmigrate Stops migration of data from bricks for a remove brick operation. 6.101.1. activate POST Activate the bricks post data migration of remove brick operation. Used to activate brick(s) once the data migration from bricks is complete but the user no longer wishes to remove the bricks. The bricks that were previously marked for removal will now be used as normal bricks. For example, to retain the bricks of gluster volume 123 from which data was migrated, send a request like this: With a request body like this: <action> <bricks> <brick> <name>host1:/rhgs/brick1</name> </brick> </bricks> </action> Table 6.286. Parameters summary Name Type Direction Summary async Boolean In Indicates if the activation should be performed asynchronously. bricks GlusterBrick[ ] In The list of bricks that need to be re-activated. 6.101.2. add POST Adds a list of bricks to gluster volume. Used to expand a gluster volume by adding bricks. For replicated volume types, the parameter replica_count needs to be passed. If the replica count is being increased, the number of bricks needs to be equivalent to the number of replica sets. For example, to add bricks to gluster volume 123 , send a request like this: With a request body like this: <bricks> <brick> <server_id>111</server_id> <brick_dir>/export/data/brick3</brick_dir> </brick> </bricks> Table 6.287. Parameters summary Name Type Direction Summary bricks GlusterBrick[ ] In/Out The list of bricks to be added to the volume. replica_count Integer In Replica count of volume post add operation. stripe_count Integer In Stripe count of volume post add operation. 6.101.3. list GET Lists the bricks of a gluster volume. For example, to list bricks of gluster volume 123 , send a request like this: Provides an output as below: <bricks> <brick id="234"> <name>host1:/rhgs/data/brick1</name> <brick_dir>/rhgs/data/brick1</brick_dir> <server_id>111</server_id> <status>up</status> </brick> <brick id="233"> <name>host2:/rhgs/data/brick1</name> <brick_dir>/rhgs/data/brick1</brick_dir> <server_id>222</server_id> <status>up</status> </brick> </bricks> The order of the returned list is based on the brick order provided at gluster volume creation. Table 6.288.
Parameters summary Name Type Direction Summary bricks GlusterBrick[ ] Out follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of bricks to return. 6.101.3.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.101.3.2. max Sets the maximum number of bricks to return. If not specified all the bricks are returned. 6.101.4. migrate POST Start migration of data prior to removing bricks. Removing bricks is a two-step process, where the data on the bricks to be removed is first migrated to the remaining bricks. Once migration is completed, the removal of bricks is confirmed via the API remove . If at any point the action needs to be cancelled, stopmigrate has to be called. For instance, to delete a brick from a gluster volume with id 123 , send a request: With a request body like this: <action> <bricks> <brick> <name>host1:/rhgs/brick1</name> </brick> </bricks> </action> The migration process can be tracked from the job id returned from the API using job , and the steps in the job using step . Table 6.289. Parameters summary Name Type Direction Summary async Boolean In Indicates if the migration should be performed asynchronously. bricks GlusterBrick[ ] In List of bricks for which data migration needs to be started. 6.101.5. remove DELETE Removes bricks from gluster volume. The recommended way to remove bricks without data loss is to first migrate the data using migrate and then remove them. If migrate was not called on bricks prior to remove, the bricks are removed without data migration, which may lead to data loss. For example, to delete the bricks from gluster volume 123 , send a request like this: With a request body like this: <bricks> <brick> <name>host:brick_directory</name> </brick> </bricks> Table 6.290. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. bricks GlusterBrick[ ] In The list of bricks to be removed. replica_count Integer In Replica count of volume post remove operation. 6.101.6. stopmigrate POST Stops migration of data from bricks for a remove brick operation. Used to cancel data migration that was started as part of the two-step remove brick process, in case the user wishes to continue using the bricks. The bricks that were marked for removal will function as normal bricks post this operation. For example, to stop migration of data from the bricks of gluster volume 123 , send a request like this: With a request body like this: <bricks> <brick> <name>host:brick_directory</name> </brick> </bricks> Table 6.291. Parameters summary Name Type Direction Summary async Boolean In Indicates if the action should be performed asynchronously. bricks GlusterBrick[ ] In List of bricks for which data migration needs to be stopped. 6.101.6.1. bricks List of bricks for which data migration needs to be stopped. This list should match the arguments passed to migrate . 6.102. GlusterHook Table 6.292. Methods summary Name Summary disable Resolves status conflict of hook among servers in the cluster by disabling the Gluster hook in all servers of the cluster. enable Resolves status conflict of hook among servers in the cluster by enabling the Gluster hook in all servers of the cluster. get remove Removes this Gluster hook from all servers in the cluster and deletes it from the database. resolve Resolves missing hook conflict depending on the resolution type. 6.102.1.
disable POST Resolves status conflict of hook among servers in the cluster by disabling the Gluster hook in all servers of the cluster. This updates the hook status to DISABLED in the database. Table 6.293. Parameters summary Name Type Direction Summary async Boolean In Indicates if the action should be performed asynchronously. 6.102.2. enable POST Resolves status conflict of hook among servers in the cluster by enabling the Gluster hook in all servers of the cluster. This updates the hook status to ENABLED in the database. Table 6.294. Parameters summary Name Type Direction Summary async Boolean In Indicates if the action should be performed asynchronously. 6.102.3. get GET Table 6.295. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . hook GlusterHook Out 6.102.3.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.102.4. remove DELETE Removes this Gluster hook from all servers in the cluster and deletes it from the database. Table 6.296. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.102.5. resolve POST Resolves missing hook conflict depending on the resolution type. For ADD , the conflict is resolved by copying the hook stored in the engine database to all servers where the hook is missing. The engine maintains a list of all servers where the hook is missing. For COPY , the conflict in hook content is resolved by copying the hook stored in the engine database to all servers where the content is conflicting. The engine maintains a list of all servers where the content is conflicting. If a host id is passed as a parameter, the hook content from that server is used as the master to copy to the other servers in the cluster. Table 6.297. Parameters summary Name Type Direction Summary async Boolean In Indicates if the action should be performed asynchronously. host Host In resolution_type String In 6.103. GlusterHooks Table 6.298. Methods summary Name Summary list Returns the list of hooks. 6.103.1. list GET Returns the list of hooks. The order of the returned list of hooks isn't guaranteed. Table 6.299. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . hooks GlusterHook[ ] Out max Integer In Sets the maximum number of hooks to return. 6.103.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.103.1.2. max Sets the maximum number of hooks to return. If not specified all the hooks are returned. 6.104. GlusterVolume This service manages a single gluster volume. Table 6.300. Methods summary Name Summary get Get the gluster volume details. getprofilestatistics Get gluster volume profile statistics. rebalance Rebalance the gluster volume. remove Removes the gluster volume. resetalloptions Resets all the options set in the gluster volume. resetoption Resets a particular option in the gluster volume. setoption Sets a particular option in the gluster volume. start Starts the gluster volume. startprofile Start profiling the gluster volume. stop Stops the gluster volume. stopprofile Stop profiling the gluster volume. stoprebalance Stop rebalancing the gluster volume. 6.104.1. get GET Get the gluster volume details.
For example, to get details of a gluster volume with identifier 123 in cluster 456 , send a request like this: GET /ovirt-engine/api/clusters/456/glustervolumes/123 This GET request will return the following output: <gluster_volume id="123"> <name>data</name> <link href="/ovirt-engine/api/clusters/456/glustervolumes/123/glusterbricks" rel="glusterbricks"/> <disperse_count>0</disperse_count> <options> <option> <name>storage.owner-gid</name> <value>36</value> </option> <option> <name>performance.io-cache</name> <value>off</value> </option> <option> <name>cluster.data-self-heal-algorithm</name> <value>full</value> </option> </options> <redundancy_count>0</redundancy_count> <replica_count>3</replica_count> <status>up</status> <stripe_count>0</stripe_count> <transport_types> <transport_type>tcp</transport_type> </transport_types> <volume_type>replicate</volume_type> </gluster_volume> Table 6.301. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . volume GlusterVolume Out Representation of the gluster volume. 6.104.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.104.2. getprofilestatistics POST Get gluster volume profile statistics. For example, to get profile statistics for a gluster volume with identifier 123 in cluster 456 , send a request like this: POST /ovirt-engine/api/clusters/456/glustervolumes/123/getprofilestatistics Table 6.302. Parameters summary Name Type Direction Summary details GlusterVolumeProfileDetails Out Gluster volume profiling information returned from the action. 6.104.3. rebalance POST Rebalance the gluster volume. Rebalancing a gluster volume helps to distribute the data evenly across all the bricks. After expanding or shrinking a gluster volume (without migrating data), we need to rebalance the data among the bricks. In a non-replicated volume, all bricks should be online to perform the rebalance operation. In a replicated volume, at least one of the bricks in the replica should be online. For example, to rebalance a gluster volume with identifier 123 in cluster 456 , send a request like this: POST /ovirt-engine/api/clusters/456/glustervolumes/123/rebalance Table 6.303. Parameters summary Name Type Direction Summary async Boolean In Indicates if the rebalance should be performed asynchronously. fix_layout Boolean In If set to true, rebalance will only fix the layout so that new data added to the volume is distributed across all the hosts. force Boolean In Indicates if the rebalance should be force started. 6.104.3.1. fix_layout If set to true, rebalance will only fix the layout so that new data added to the volume is distributed across all the hosts. But it will not migrate/rebalance the existing data. Default is false . 6.104.3.2. force Indicates if the rebalance should be force started. The rebalance command can be executed with the force option even when the older clients are connected to the cluster. However, this could lead to a data loss situation. Default is false . 6.104.4. remove DELETE Removes the gluster volume. For example, to remove a volume with identifier 123 in cluster 456 , send a request like this: DELETE /ovirt-engine/api/clusters/456/glustervolumes/123 Table 6.304. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.104.5. resetalloptions POST Resets all the options set in the gluster volume. For example, to reset all options in a gluster volume with identifier 123 in cluster 456 , send a request like this (see the sketch after the parameters table below): Table 6.305. Parameters summary Name Type Direction Summary async Boolean In Indicates if the reset should be performed asynchronously.
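The resetalloptions request referred to above is a plain POST on the volume's resetalloptions action - a sketch, reusing the volume 123 in cluster 456 identifiers from the surrounding examples; since the action takes no input other than the async flag, an empty action body is assumed to suffice:

POST /ovirt-engine/api/clusters/456/glustervolumes/123/resetalloptions

<action/>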
6.104.6. resetoption POST Resets a particular option in the gluster volume. For example, to reset a particular option option1 in a gluster volume with identifier 123 in cluster 456 , send a request like this: POST /ovirt-engine/api/clusters/456/glustervolumes/123/resetoption With the following request body: <action> <option name="option1"/> </action> Table 6.306. Parameters summary Name Type Direction Summary async Boolean In Indicates if the reset should be performed asynchronously. force Boolean In option Option In Option to reset. 6.104.7. setoption POST Sets a particular option in the gluster volume. For example, to set option1 with value value1 in a gluster volume with identifier 123 in cluster 456 , send a request like this: POST /ovirt-engine/api/clusters/456/glustervolumes/123/setoption With the following request body: <action> <option name="option1" value="value1"/> </action> Table 6.307. Parameters summary Name Type Direction Summary async Boolean In Indicates if the action should be performed asynchronously. option Option In Option to set. 6.104.8. start POST Starts the gluster volume. A gluster volume should be started to read/write data. For example, to start a gluster volume with identifier 123 in cluster 456 , send a request like this: POST /ovirt-engine/api/clusters/456/glustervolumes/123/start Table 6.308. Parameters summary Name Type Direction Summary async Boolean In Indicates if the action should be performed asynchronously. force Boolean In Indicates if the volume should be force started. 6.104.8.1. force Indicates if the volume should be force started. If a gluster volume is already started but some or all of its bricks are down, then force start can be used to bring all the bricks up. Default is false . 6.104.9. startprofile POST Start profiling the gluster volume. For example, to start profiling a gluster volume with identifier 123 in cluster 456 , send a request like this: POST /ovirt-engine/api/clusters/456/glustervolumes/123/startprofile Table 6.309. Parameters summary Name Type Direction Summary async Boolean In Indicates if the action should be performed asynchronously. 6.104.10. stop POST Stops the gluster volume. Stopping a volume will make its data inaccessible. For example, to stop a gluster volume with identifier 123 in cluster 456 , send a request like this: POST /ovirt-engine/api/clusters/456/glustervolumes/123/stop Table 6.310. Parameters summary Name Type Direction Summary async Boolean In Indicates if the action should be performed asynchronously. force Boolean In 6.104.11. stopprofile POST Stop profiling the gluster volume. For example, to stop profiling a gluster volume with identifier 123 in cluster 456 , send a request like this: POST /ovirt-engine/api/clusters/456/glustervolumes/123/stopprofile Table 6.311. Parameters summary Name Type Direction Summary async Boolean In Indicates if the action should be performed asynchronously. 6.104.12. stoprebalance POST Stop rebalancing the gluster volume. For example, to stop rebalancing a gluster volume with identifier 123 in cluster 456 , send a request like this: POST /ovirt-engine/api/clusters/456/glustervolumes/123/stoprebalance Table 6.312. Parameters summary Name Type Direction Summary async Boolean In Indicates if the action should be performed asynchronously. 6.105. GlusterVolumes This service manages a collection of gluster volumes available in a cluster. Table 6.313. Methods summary Name Summary add Creates a new gluster volume. list Lists all gluster volumes in the cluster. 6.105.1. add POST Creates a new gluster volume. The volume is created based on properties of the volume parameter. The properties name , volume_type and bricks are required.
For example, to add a volume with name myvolume to the cluster 123 , send the following request:

POST /ovirt-engine/api/clusters/123/glustervolumes

With the following request body: <gluster_volume> <name>myvolume</name> <volume_type>replicate</volume_type> <replica_count>3</replica_count> <bricks> <brick> <server_id>server1</server_id> <brick_dir>/exp1</brick_dir> </brick> <brick> <server_id>server2</server_id> <brick_dir>/exp1</brick_dir> </brick> <brick> <server_id>server3</server_id> <brick_dir>/exp1</brick_dir> </brick> </bricks> </gluster_volume> Table 6.314. Parameters summary Name Type Direction Summary volume GlusterVolume In/Out The gluster volume definition from which to create the volume is passed as input and the newly created volume is returned. 6.105.2. list GET Lists all gluster volumes in the cluster. For example, to list all Gluster Volumes in cluster 456 , send a request like this:

GET /ovirt-engine/api/clusters/456/glustervolumes

The order of the returned list of volumes isn't guaranteed. Table 6.315. Parameters summary Name Type Direction Summary case_sensitive Boolean In Indicates if the search performed using the search parameter should be performed taking case into account. follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of volumes to return. search String In A query string used to restrict the returned volumes. 6.105.2.1. case_sensitive Indicates if the search performed using the search parameter should be performed taking case into account. The default value is true , which means that case is taken into account. If you want to search ignoring case set it to false . 6.105.2.2. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.105.2.3. max Sets the maximum number of volumes to return. If not specified all the volumes are returned. 6.106. Group Manages a group of users. Use this service to either get group details or remove groups. In order to add new groups please use the service that manages the collection of groups. Table 6.316. Methods summary Name Summary get Gets the system group information. remove Removes the system group. 6.106.1. get GET Gets the system group information. Usage:

GET /ovirt-engine/api/groups/123

Will return the group information: <group href="/ovirt-engine/api/groups/123" id="123"> <name>mygroup</name> <link href="/ovirt-engine/api/groups/123/roles" rel="roles"/> <link href="/ovirt-engine/api/groups/123/permissions" rel="permissions"/> <link href="/ovirt-engine/api/groups/123/tags" rel="tags"/> <domain_entry_id>476652557A382F67696B6D2B32762B37796E46476D513D3D</domain_entry_id> <namespace>DC=example,DC=com</namespace> <domain href="/ovirt-engine/api/domains/ABCDEF" id="ABCDEF"> <name>myextension-authz</name> </domain> </group> Table 6.317. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . get Group Out The system group. 6.106.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.106.2. remove DELETE Removes the system group. Usage:

DELETE /ovirt-engine/api/groups/123

Table 6.318. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.107. Groups Manages the collection of groups of users. Table 6.319. Methods summary Name Summary add Add group from a directory service. list List all the groups in the system. 6.107.1. add POST Add group from a directory service.
Please note that the domain name is the name of the authorization provider. For example, to add the Developers group from the internal-authz authorization provider send a request like this:

POST /ovirt-engine/api/groups

With a request body like this: <group> <name>Developers</name> <domain> <name>internal-authz</name> </domain> </group> Table 6.320. Parameters summary Name Type Direction Summary group Group In/Out The group to be added. 6.107.2. list GET List all the groups in the system. Usage:

GET /ovirt-engine/api/groups

Will return the list of groups: <groups> <group href="/ovirt-engine/api/groups/123" id="123"> <name>mygroup</name> <link href="/ovirt-engine/api/groups/123/roles" rel="roles"/> <link href="/ovirt-engine/api/groups/123/permissions" rel="permissions"/> <link href="/ovirt-engine/api/groups/123/tags" rel="tags"/> <domain_entry_id>476652557A382F67696B6D2B32762B37796E46476D513D3D</domain_entry_id> <namespace>DC=example,DC=com</namespace> <domain href="/ovirt-engine/api/domains/ABCDEF" id="ABCDEF"> <name>myextension-authz</name> </domain> </group> ... </groups> The order of the returned list of groups isn't guaranteed. Table 6.321. Parameters summary Name Type Direction Summary case_sensitive Boolean In Indicates if the search performed using the search parameter should be performed taking case into account. follow String In Indicates which inner links should be followed . groups Group[ ] Out The list of groups. max Integer In Sets the maximum number of groups to return. search String In A query string used to restrict the returned groups. 6.107.2.1. case_sensitive Indicates if the search performed using the search parameter should be performed taking case into account. The default value is true , which means that case is taken into account. If you want to search ignoring case set it to false . 6.107.2.2. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.107.2.3. max Sets the maximum number of groups to return. If not specified all the groups are returned. 6.108. Host A service to manage a host. Table 6.322. Methods summary Name Summary activate Activates the host for use, for example to run virtual machines. approve Approve a pre-installed Hypervisor host for usage in the virtualization environment. commitnetconfig Marks the network configuration as good and persists it inside the host. copyhostnetworks Copy the network configuration of the specified host to the current host. deactivate Deactivates the host to perform maintenance tasks. discoveriscsi Discovers iSCSI targets on the host, using the initiator details. enrollcertificate Enrolls the certificate of the host. fence Controls the host's power management device. forceselectspm To manually set a host as the storage pool manager (SPM). get Gets the host details. install Installs the latest version of VDSM and related software on the host. iscsidiscover This method has been deprecated since Engine version 4.4.6. iscsilogin Login to iSCSI targets on the host, using the target details. refresh Refresh the host devices and capabilities. remove Remove the host from the system. setupnetworks This method is used to change the configuration of the network interfaces of a host.
syncallnetworks To synchronize all networks on the host, send a request like this:

POST /ovirt-engine/api/hosts/123/syncallnetworks

With a request body like this: <action/> unregisteredstoragedomainsdiscover Discovers the block Storage Domains which are candidates to be imported to the setup. update Update the host properties. upgrade Upgrades VDSM and selected software on the host. upgradecheck Check if there are upgrades available for the host. 6.108.1. activate POST Activates the host for use, for example to run virtual machines. Table 6.323. Parameters summary Name Type Direction Summary async Boolean In Indicates if the activation should be performed asynchronously. 6.108.2. approve POST Approve a pre-installed Hypervisor host for usage in the virtualization environment. This action also accepts an optional cluster element to define the target cluster for this host. Table 6.324. Parameters summary Name Type Direction Summary activate Boolean In When set to 'true', this host will be activated after its approval completes. async Boolean In Indicates if the approval should be performed asynchronously. cluster Cluster In The cluster where the host will be added after it is approved. host Host In The host to approve. reboot Boolean In Indicates if the host should be rebooted after successful installation. 6.108.2.1. activate When set to 'true', this host will be activated after its approval completes. When set to 'false' the host will remain in 'maintenance' status after its approval. Absence of this parameter will be interpreted as 'true', since the desired default behavior is activating the host after approval. 6.108.2.2. reboot Indicates if the host should be rebooted after successful installation. The default value is true . 6.108.3. commitnetconfig POST Marks the network configuration as good and persists it inside the host. An API user commits the network configuration to persist a host network interface attachment or detachment, or persist the creation and deletion of a bonded interface. Important Networking configuration is only committed after the engine has established that host connectivity is not lost as a result of the configuration changes. If host connectivity is lost, the host requires a reboot and automatically reverts to the previous networking configuration. For example, to commit the network configuration of host with id 123 send a request like this:

POST /ovirt-engine/api/hosts/123/commitnetconfig

With a request body like this: <action/> Important Since Red Hat Virtualization Manager 4.3, it is possible to also specify commit_on_success in the setupnetworks request, in which case the new configuration is automatically saved in the {hypervisor-name} upon completing the setup and re-establishing connectivity between the {hypervisor-name} and Red Hat Virtualization Manager, and without waiting for a separate commitnetconfig request. Table 6.325. Parameters summary Name Type Direction Summary async Boolean In Indicates if the action should be performed asynchronously. 6.108.4. copyhostnetworks POST Copy the network configuration of the specified host to the current host. Important Any network attachments that are not present on the source host will be erased from the target host by the copy operation. To copy networks from another host, send a request like this:

POST /ovirt-engine/api/hosts/123/copyhostnetworks

With a request body like this: <action> <source_host id="456"/> </action> Table 6.326. Parameters summary Name Type Direction Summary async Boolean In Indicates if the action should be performed asynchronously.
source_host Host In The host to copy networks from. 6.108.5. deactivate POST Deactivates the host to perform maintenance tasks. Table 6.327. Parameters summary Name Type Direction Summary async Boolean In Indicates if the deactivation should be performed asynchronously. reason String In stop_gluster_service Boolean In Indicates if the gluster service should be stopped as part of deactivating the host. 6.108.5.1. stop_gluster_service Indicates if the gluster service should be stopped as part of deactivating the host. It can be used while performing maintenance operations on the gluster host. Default value for this variable is false . 6.108.6. discoveriscsi POST Discovers iSCSI targets on the host, using the initiator details. Returns a list of IscsiDetails objects containing the discovered data. For example, to discover iSCSI targets available in myiscsi.example.com , from host 123 , send a request like this:

POST /ovirt-engine/api/hosts/123/discoveriscsi

With a request body like this: <action> <iscsi> <address>myiscsi.example.com</address> </iscsi> </action> The result will be like this: <discovered_targets> <iscsi_details> <address>10.35.1.72</address> <port>3260</port> <portal>10.35.1.72:3260,1</portal> <target>iqn.2015-08.com.tgt:444</target> </iscsi_details> </discovered_targets> Important When using this method to discover iscsi targets, you can use an FQDN or an IP address, but you must use the iscsi details from the discovered targets results to log in using the iscsilogin method. Table 6.328. Parameters summary Name Type Direction Summary async Boolean In Indicates if the discovery should be performed asynchronously. discovered_targets IscsiDetails[ ] Out The discovered targets including all connection information. iscsi IscsiDetails In The target iSCSI device. 6.108.7. enrollcertificate POST Enrolls the certificate of the host. Useful in case you get a warning that it is about to expire or has already expired. Table 6.329. Parameters summary Name Type Direction Summary async Boolean In Indicates if the enrollment should be performed asynchronously. 6.108.8. fence POST Controls the host's power management device. For example, to start the host, send a request like this:

POST /ovirt-engine/api/hosts/123/fence

With a request body like this: <action> <fence_type>start</fence_type> </action> Table 6.330. Parameters summary Name Type Direction Summary async Boolean In Indicates if the fencing should be performed asynchronously. fence_type String In maintenance_after_restart Boolean In Indicates if host should be put into maintenance after restart. power_management PowerManagement Out 6.108.9. forceselectspm POST To manually set a host as the storage pool manager (SPM), send a request like this:

POST /ovirt-engine/api/hosts/123/forceselectspm

With a request body like this: <action/> Table 6.331. Parameters summary Name Type Direction Summary async Boolean In Indicates if the action should be performed asynchronously. 6.108.10. get GET Gets the host details. Table 6.332. Parameters summary Name Type Direction Summary all_content Boolean In Indicates if all of the attributes of the host should be included in the response. filter Boolean In Indicates if the results should be filtered according to the permissions of the user. follow String In Indicates which inner links should be followed . host Host Out The queried host. 6.108.10.1. all_content Indicates if all of the attributes of the host should be included in the response. By default the following attributes are excluded: hosted_engine For example, to retrieve the complete representation of host '123':

GET /ovirt-engine/api/hosts/123?all_content=true

Note These attributes are not included by default because retrieving them impacts performance. They are seldom used and require additional queries to the database.
Use this parameter with caution and only when specifically required. 6.108.10.2. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.108.11. install POST Installs the latest version of VDSM and related software on the host. The action also performs all the configuration steps that are done on the host when it is added to the engine: kdump configuration, hosted-engine deploy, kernel options changes, etc. The host type defines additional parameters for the action. Example of installing a host, using curl and JSON, plain:

curl \
--verbose \
--cacert /etc/pki/ovirt-engine/ca.pem \
--request PUT \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
--header "Version: 4" \
--user "admin@internal:..." \
--data '
{
  "root_password": "myrootpassword"
}
' \
"https://engine.example.com/ovirt-engine/api/hosts/123"

Example of installing a host using curl and JSON with hosted engine components:

curl \
--verbose \
--cacert /etc/pki/ovirt-engine/ca.pem \
--request PUT \
--header "Content-Type: application/json" \
--header "Accept: application/json" \
--header "Version: 4" \
--user "admin@internal:..." \
--data '
{
  "root_password": "myrootpassword",
  "deploy_hosted_engine" : "true"
}
' \
"https://engine.example.com/ovirt-engine/api/hosts/123"

Important Since version 4.1.2 of the engine, when a host is reinstalled we override the host firewall definitions by default. Table 6.333. Parameters summary Name Type Direction Summary activate Boolean In When set to 'true', this host will be activated after its installation completes. async Boolean In Indicates if the installation should be performed asynchronously. deploy_hosted_engine Boolean In When set to true this host will also deploy the self-hosted engine components. host Host In The override_iptables property is used to indicate if the firewall configuration should be replaced by the default one. image String In When installing {hypervisor-name}, an ISO image file is required. reboot Boolean In Indicates if the host should be rebooted after successful installation. root_password String In The password of the root user used to connect to the host via SSH. ssh Ssh In The SSH details used to connect to the host. undeploy_hosted_engine Boolean In When set to true this host will un-deploy the self-hosted engine components, and this host will not function as part of the High Availability cluster. 6.108.11.1. activate When set to 'true', this host will be activated after its installation completes. When set to 'false' the host will remain in 'maintenance' status after its installation. Absence of this parameter will be interpreted as 'true', since the desired default behavior is activating the host after install. 6.108.11.2. deploy_hosted_engine When set to true this host will also deploy the self-hosted engine components. A missing value is treated as true , i.e., deploy. Omitting this parameter means false and will not perform any operation in the self-hosted engine area. 6.108.11.3. reboot Indicates if the host should be rebooted after successful installation. The default value is true . 6.108.11.4. undeploy_hosted_engine When set to true this host will un-deploy the self-hosted engine components, and this host will not function as part of the High Availability cluster. A missing value is treated as true , i.e., un-deploy. Omitting this parameter means false and will not perform any operation in the self-hosted engine area.
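The same installation can also be triggered from the Python SDK. A minimal sketch, assuming an existing ovirtsdk4 connection object and the host identifier 123 from the curl examples above (the connection itself is an assumption, not part of the original example):

# Find the service that manages the host and start the installation,
# passing the root password used for the SSH connection:
host_service = connection.system_service().hosts_service().host_service('123')
host_service.install(root_password='myrootpassword')

The call returns once the installation has been triggered; the host's status can then be polled with host_service.get().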
6.108.12. iscsidiscover POST This method has been deprecated since Engine version 4.4.6. DiscoverIscsi should be used instead. Discovers iSCSI targets on the host, using the initiator details. Returns an array of strings containing the discovered data. For example, to discover iSCSI targets available in myiscsi.example.com , from host 123 , send a request like this:

POST /ovirt-engine/api/hosts/123/iscsidiscover

With a request body like this: <action> <iscsi> <address>myiscsi.example.com</address> </iscsi> </action> Table 6.334. Parameters summary Name Type Direction Summary async Boolean In Indicates if the discovery should be performed asynchronously. iscsi IscsiDetails In The target iSCSI device. iscsi_targets String[ ] Out The iSCSI targets. 6.108.12.1. iscsi_targets The iSCSI targets. 6.108.13. iscsilogin POST Login to iSCSI targets on the host, using the target details. Important When using this method to log in, you must use the iscsi details from the discovered targets results in the discoveriscsi method. Table 6.335. Parameters summary Name Type Direction Summary async Boolean In Indicates if the login should be performed asynchronously. iscsi IscsiDetails In The target iSCSI device. 6.108.14. refresh POST Refresh the host devices and capabilities. Table 6.336. Parameters summary Name Type Direction Summary async Boolean In Indicates if the refresh should be performed asynchronously. 6.108.15. remove DELETE Remove the host from the system. Table 6.337. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. force Boolean In Indicates that the host should be removed even if it is non-responsive, or if it is part of a Gluster Storage cluster and has volume bricks on it. 6.108.16. setupnetworks POST This method is used to change the configuration of the network interfaces of a host. For example, if you have a host with three network interfaces eth0 , eth1 and eth2 and you want to configure a new bond using eth1 and eth2 , and put a VLAN on top of it. Using a simple shell script and the curl command line HTTP client, that can be done as follows: Note This is valid for version 4 of the API. In previous versions some elements were represented as XML attributes instead of XML elements.
In particular the options and ip elements were represented as follows: <options name="mode" value="4"/> <options name="miimon" value="100"/> <ip address="192.168.122.10" netmask="255.255.255.0"/> The same thing can be done using the Python SDK with the following code:

# Find the service that manages the collection of hosts:
hosts_service = connection.system_service().hosts_service()

# Find the host:
host = hosts_service.list(search='name=myhost')[0]

# Find the service that manages the host:
host_service = hosts_service.host_service(host.id)

# Configure the network adding a bond with two slaves and attaching it to a
# network with a static IP address:
host_service.setup_networks(
    modified_bonds=[
        types.HostNic(
            name='bond0',
            bonding=types.Bonding(
                options=[
                    types.Option(
                        name='mode',
                        value='4',
                    ),
                    types.Option(
                        name='miimon',
                        value='100',
                    ),
                ],
                slaves=[
                    types.HostNic(
                        name='eth1',
                    ),
                    types.HostNic(
                        name='eth2',
                    ),
                ],
            ),
        ),
    ],
    modified_network_attachments=[
        types.NetworkAttachment(
            network=types.Network(
                name='myvlan',
            ),
            host_nic=types.HostNic(
                name='bond0',
            ),
            ip_address_assignments=[
                types.IpAddressAssignment(
                    assignment_method=types.BootProtocol.STATIC,
                    ip=types.Ip(
                        address='192.168.122.10',
                        netmask='255.255.255.0',
                    ),
                ),
            ],
            dns_resolver_configuration=types.DnsResolverConfiguration(
                name_servers=[
                    '1.1.1.1',
                    '2.2.2.2',
                ],
            ),
        ),
    ],
)

# After modifying the network configuration it is very important to make it
# persistent:
host_service.commit_net_config()

Important To make sure that the network configuration has been saved in the host, and that it will be applied when the host is rebooted, remember to call commitnetconfig . Important Since Red Hat Virtualization Manager 4.3, it is possible to also specify commit_on_success in the setupnetworks request, in which case the new configuration is automatically saved in the {hypervisor-name} upon completing the setup and re-establishing connectivity between the {hypervisor-name} and Red Hat Virtualization Manager, and without waiting for a separate commitnetconfig request. Table 6.338. Parameters summary Name Type Direction Summary async Boolean In Indicates if the action should be performed asynchronously. check_connectivity Boolean In commit_on_success Boolean In Specifies whether to automatically save the configuration in the {hypervisor-name} upon completing the setup and re-establishing connectivity between the {hypervisor-name} and Red Hat Virtualization Manager, and without waiting for a separate commitnetconfig request. connectivity_timeout Integer In modified_bonds HostNic[ ] In modified_labels NetworkLabel[ ] In modified_network_attachments NetworkAttachment[ ] In removed_bonds HostNic[ ] In removed_labels NetworkLabel[ ] In removed_network_attachments NetworkAttachment[ ] In synchronized_network_attachments NetworkAttachment[ ] In A list of network attachments that will be synchronized. 6.108.16.1. commit_on_success Specifies whether to automatically save the configuration in the {hypervisor-name} upon completing the setup and re-establishing connectivity between the {hypervisor-name} and Red Hat Virtualization Manager, and without waiting for a separate commitnetconfig request. The default value is false , which means that the configuration will not be saved automatically. 6.108.17. syncallnetworks POST To synchronize all networks on the host, send a request like this:

POST /ovirt-engine/api/hosts/123/syncallnetworks

With a request body like this: <action/> Table 6.339.
Parameters summary Name Type Direction Summary async Boolean In Indicates if the action should be performed asynchronously. 6.108.18. unregisteredstoragedomainsdiscover POST Discovers the block Storage Domains which are candidates to be imported to the setup. For FCP no arguments are required. Table 6.340. Parameters summary Name Type Direction Summary async Boolean In Indicates if the discovery should be performed asynchronously. iscsi IscsiDetails In storage_domains StorageDomain[ ] Out 6.108.19. update PUT Update the host properties. For example, to update the kernel command line of a host, send a request like this:

PUT /ovirt-engine/api/hosts/123

With a request body like this: <host> <os> <custom_kernel_cmdline>vfio_iommu_type1.allow_unsafe_interrupts=1</custom_kernel_cmdline> </os> </host> Table 6.341. Parameters summary Name Type Direction Summary async Boolean In Indicates if the update should be performed asynchronously. host Host In/Out 6.108.20. upgrade POST Upgrades VDSM and selected software on the host. Table 6.342. Parameters summary Name Type Direction Summary async Boolean In Indicates if the upgrade should be performed asynchronously. image String In This property is no longer relevant, since Vintage Node is no longer supported, and has been deprecated. reboot Boolean In Indicates if the host should be rebooted after the upgrade. timeout Integer In Upgrade timeout. 6.108.20.1. reboot Indicates if the host should be rebooted after the upgrade. By default the host is rebooted. Note This parameter is ignored for {hypervisor-name}, which is always rebooted after the upgrade. 6.108.20.2. timeout Upgrade timeout. The maximum time to wait for the upgrade to finish, in minutes. Default value is specified by the ANSIBLE_PLAYBOOK_EXEC_DEFAULT_TIMEOUT configuration option. 6.108.21. upgradecheck POST Check if there are upgrades available for the host. If upgrades are available, an icon will be displayed next to the host's status icon in the Administration Portal. Audit log messages are also added to indicate the availability of upgrades. The upgrade can be started from the webadmin or by using the upgrade host action. 6.109. HostCpuUnits Table 6.343. Methods summary Name Summary list Returns the list of all the host's CPUs with detailed information about the topology (socket, core) and with information about the current CPU pinning. 6.109.1. list GET Returns the list of all the host's CPUs with detailed information about the topology (socket, core) and with information about the current CPU pinning. Table 6.344. Parameters summary Name Type Direction Summary cpu_units HostCpuUnit[ ] Out follow String In Indicates which inner links should be followed . 6.109.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.110. HostDevice A service to access a particular device of a host. Table 6.345. Methods summary Name Summary get Retrieve information about a particular host's device. 6.110.1. get GET Retrieve information about a particular host's device. An example of getting a host device: <host_device href="/ovirt-engine/api/hosts/123/devices/456" id="456"> <name>usb_1_9_1_1_0</name> <capability>usb</capability> <host href="/ovirt-engine/api/hosts/123" id="123"/> <parent_device href="/ovirt-engine/api/hosts/123/devices/789" id="789"> <name>usb_1_9_1</name> </parent_device> </host_device> Table 6.346.
Parameters summary Name Type Direction Summary device HostDevice Out follow String In Indicates which inner links should be followed . 6.110.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.111. HostDevices A service to access host devices. Table 6.347. Methods summary Name Summary list List the devices of a host. 6.111.1. list GET List the devices of a host. The order of the returned list of devices isn't guaranteed. Table 6.348. Parameters summary Name Type Direction Summary devices HostDevice[ ] Out follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of devices to return. 6.111.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.111.1.2. max Sets the maximum number of devices to return. If not specified all the devices are returned. 6.112. HostHook Table 6.349. Methods summary Name Summary get 6.112.1. get GET Table 6.350. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . hook Hook Out 6.112.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.113. HostHooks Table 6.351. Methods summary Name Summary list Returns the list of hooks configured for the host. 6.113.1. list GET Returns the list of hooks configured for the host. The order of the returned list of hooks is random. Table 6.352. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . hooks Hook[ ] Out max Integer In Sets the maximum number of hooks to return. 6.113.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.113.1.2. max Sets the maximum number of hooks to return. If not specified, all the hooks are returned. 6.114. HostNic A service to manage a network interface of a host. Table 6.353. Methods summary Name Summary get updatevirtualfunctionsconfiguration The action updates virtual function configuration in case the current resource represents an SR-IOV enabled NIC. 6.114.1. get GET Table 6.354. Parameters summary Name Type Direction Summary all_content Boolean In Indicates if all of the attributes of the host network interface should be included in the response. follow String In Indicates which inner links should be followed . nic HostNic Out 6.114.1.1. all_content Indicates if all of the attributes of the host network interface should be included in the response. By default the following attributes are excluded: virtual_functions_configuration For example, to retrieve the complete representation of network interface '456' of host '123':

GET /ovirt-engine/api/hosts/123/nics/456?all_content=true

Note These attributes are not included by default because retrieving them impacts performance. They are seldom used and require additional queries to the database. Use this parameter with caution and only when specifically required. 6.114.1.2. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.114.2.
updatevirtualfunctionsconfiguration POST The action updates virtual function configuration in case the current resource represents an SR-IOV enabled NIC. The input should consist of at least one of the following properties: allNetworksAllowed numberOfVirtualFunctions Please see the HostNicVirtualFunctionsConfiguration type for the meaning of the properties. Table 6.355. Parameters summary Name Type Direction Summary async Boolean In Indicates if the update should be performed asynchronously. virtual_functions_configuration HostNicVirtualFunctionsConfiguration In 6.115. HostNics A service to manage the network interfaces of a host. Table 6.356. Methods summary Name Summary list Returns the list of network interfaces of the host. 6.115.1. list GET Returns the list of network interfaces of the host. The order of the returned list of network interfaces isn't guaranteed. Table 6.357. Parameters summary Name Type Direction Summary all_content Boolean In Indicates if all of the attributes of the host network interface should be included in the response. follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of NICs to return. nics HostNic[ ] Out 6.115.1.1. all_content Indicates if all of the attributes of the host network interface should be included in the response. By default the following attributes are excluded: virtual_functions_configuration For example, to retrieve the complete representation of network interface '456' of host '123': Note These attributes are not included by default because retrieving them impacts performance. They are seldom used and require additional queries to the database. Use this parameter with caution and only when specifically required. 6.115.1.2. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.115.1.3. max Sets the maximum number of NICs to return. If not specified all the NICs are returned. 6.116. HostNumaNode Table 6.358. Methods summary Name Summary get 6.116.1. get GET Table 6.359. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . node NumaNode Out 6.116.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.117. HostNumaNodes Table 6.360. Methods summary Name Summary list Returns the list of NUMA nodes of the host. 6.117.1. list GET Returns the list of NUMA nodes of the host. The order of the returned list of NUMA nodes isn't guaranteed. Table 6.361. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of nodes to return. nodes NumaNode[ ] Out 6.117.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.117.1.2. max Sets the maximum number of nodes to return. If not specified all the nodes are returned. 6.118. HostStorage A service to manage host storages. Table 6.362. Methods summary Name Summary list Get list of storages. 6.118.1. list GET Get list of storages. The XML response you get will be like this one: <host_storages> <host_storage id="123"> ... </host_storage> ... </host_storages> The order of the returned list of storages isn't guaranteed. Table 6.363.
Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . report_status Boolean In Indicates if the status of the LUNs in the storage should be checked. storages HostStorage[ ] Out Retrieved list of storages. 6.118.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.118.1.2. report_status Indicates if the status of the LUNs in the storage should be checked. Checking the status of the LUNs is a heavyweight operation and this data is not always needed by the user. This parameter will give the option to not perform the status check of the LUNs. The default is true for backward compatibility. Here is an example with the LUN status: <host_storage id="123"> <logical_units> <logical_unit id="123"> <lun_mapping>0</lun_mapping> <paths>1</paths> <product_id>lun0</product_id> <serial>123</serial> <size>10737418240</size> <status>used</status> <vendor_id>LIO-ORG</vendor_id> <volume_group_id>123</volume_group_id> </logical_unit> </logical_units> <type>iscsi</type> <host id="123"/> </host_storage> Here is an example without the LUN status: <host_storage id="123"> <logical_units> <logical_unit id="123"> <lun_mapping>0</lun_mapping> <paths>1</paths> <product_id>lun0</product_id> <serial>123</serial> <size>10737418240</size> <vendor_id>LIO-ORG</vendor_id> <volume_group_id>123</volume_group_id> </logical_unit> </logical_units> <type>iscsi</type> <host id="123"/> </host_storage> 6.119. Hosts A service that manages hosts. Table 6.364. Methods summary Name Summary add Creates a new host. list Get a list of all available hosts. 6.119.1. add POST Creates a new host. The host is created based on the attributes of the host parameter. The name , address , and root_password properties are required. For example, to add a host, send the following request:

POST /ovirt-engine/api/hosts

With the following request body: <host> <name>myhost</name> <address>myhost.example.com</address> <root_password>myrootpassword</root_password> </host> Note The root_password element is only included in the client-provided initial representation and is not exposed in the representations returned from subsequent requests. Important Since version 4.1.2 of the engine, when a host is newly added, the host's firewall definitions are overridden by default. To add a hosted engine host, use the optional deploy_hosted_engine parameter:

POST /ovirt-engine/api/hosts?deploy_hosted_engine=true

If the cluster has a default external network provider that is supported for automatic deployment, the external network provider is deployed when adding the host. Only external network providers for OVN are supported for the automatic deployment. To deploy an external network provider other than the one defined in the clusters, overwrite the external network provider when adding hosts, by sending the following request:

POST /ovirt-engine/api/hosts

With a request body that contains a reference to the desired provider in the external_network_provider_configuration : <host> <name>myhost</name> <address>myhost.example.com</address> <root_password>123456</root_password> <external_network_provider_configurations> <external_network_provider_configuration> <external_network_provider name="ovirt-provider-ovn"/> </external_network_provider_configuration> </external_network_provider_configurations> </host> Table 6.365. Parameters summary Name Type Direction Summary activate Boolean In When set to true , this host will be activated after its installation completes.
deploy_hosted_engine Boolean In When set to true , this host deploys the hosted engine components. host Host In/Out The host definition with which the new host is created is passed as a parameter, and the newly created host is returned. reboot Boolean In Indicates if the host should be rebooted after successful installation. undeploy_hosted_engine Boolean In When set to true , this host un-deploys the hosted engine components and does not function as part of the High Availability cluster. 6.119.1.1. activate When set to true , this host will be activated after its installation completes. When set to false the host will remain in maintenance status after its installation. Absence of this parameter will be interpreted as true , since the desired default behavior is activating the host after install. 6.119.1.2. deploy_hosted_engine When set to true , this host deploys the hosted engine components. A missing value is treated as true , i.e., deploy the hosted engine components. Omitting this parameter equals false , and the host performs no operation in the hosted engine area. 6.119.1.3. reboot Indicates if the host should be rebooted after successful installation. The default value is true . 6.119.1.4. undeploy_hosted_engine When set to true , this host un-deploys the hosted engine components and does not function as part of the High Availability cluster. A missing value is treated as true , i.e., un-deploy. Omitting this parameter equals false and the host performs no operation in the hosted engine area. 6.119.2. list GET Get a list of all available hosts. For example, to list the hosts, send the following request:

GET /ovirt-engine/api/hosts

The response body will be similar to this: <hosts> <host href="/ovirt-engine/api/hosts/123" id="123"> ... </host> <host href="/ovirt-engine/api/hosts/456" id="456"> ... </host> ... </hosts> The order of the returned list of hosts is guaranteed only if the sortby clause is included in the search parameter. Table 6.366. Parameters summary Name Type Direction Summary all_content Boolean In Indicates if all of the attributes of the hosts should be included in the response. case_sensitive Boolean In Indicates if the search performed using the search parameter should be performed taking case into account. check_vms_in_affinity_closure Boolean In This parameter can be used with migration_target_of to get valid migration targets for the listed virtual machines and all other virtual machines that are in positive enforcing affinity with the listed virtual machines. filter Boolean In Indicates if the results should be filtered according to the permissions of the user. follow String In Indicates which inner links should be followed . hosts Host[ ] Out max Integer In Sets the maximum number of hosts to return. migration_target_of String In Accepts a comma-separated list of virtual machine IDs and returns the hosts that these virtual machines can be migrated to. search String In A query string used to restrict the returned hosts. 6.119.2.1. all_content Indicates if all of the attributes of the hosts should be included in the response. By default the following host attributes are excluded: hosted_engine For example, to retrieve the complete representation of the hosts:

GET /ovirt-engine/api/hosts?all_content=true

Note These attributes are not included by default because retrieving them impacts performance. They are seldom used and require additional queries to the database. Use this parameter with caution and only when specifically required. 6.119.2.2.
case_sensitive Indicates if the search performed using the search parameter should be performed taking case into account. The default value is true , which means that case is taken into account. If you want to search ignoring case set it to false . 6.119.2.3. check_vms_in_affinity_closure This parameter can be used with migration_target_of to get valid migration targets for the listed virtual machines and all other virtual machines that are in positive enforcing affinity with the listed virtual machines. This is useful in case the virtual machines will be migrated together with others in positive affinity groups. The default value is false . 6.119.2.4. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.119.2.5. max Sets the maximum number of hosts to return. If not specified all the hosts are returned. 6.119.2.6. migration_target_of Accepts a comma-separated list of virtual machine IDs and returns the hosts that these virtual machines can be migrated to. For example, to retrieve the list of hosts to which the virtual machine with ID 123 and the virtual machine with ID 456 can be migrated, send the following request:

GET /ovirt-engine/api/hosts?migration_target_of=123,456

6.120. Icon A service to manage an icon (read-only). Table 6.367. Methods summary Name Summary get Get an icon. 6.120.1. get GET Get an icon. You will get an XML response like this one: <icon id="123"> <data>Some binary data here</data> <media_type>image/png</media_type> </icon> Table 6.368. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . icon Icon Out Retrieved icon. 6.120.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.121. Icons A service to manage icons. Table 6.369. Methods summary Name Summary list Get a list of icons. 6.121.1. list GET Get a list of icons. You will get an XML response which is similar to this one: <icons> <icon id="123"> <data>...</data> <media_type>image/png</media_type> </icon> ... </icons> The order of the returned list of icons isn't guaranteed. Table 6.370. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . icons Icon[ ] Out Retrieved list of icons. max Integer In Sets the maximum number of icons to return. 6.121.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.121.1.2. max Sets the maximum number of icons to return. If not specified all the icons are returned. 6.122. Image Table 6.371. Methods summary Name Summary get import Imports an image. 6.122.1. get GET Table 6.372. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . image Image Out 6.122.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.122.2. import POST Imports an image. If the import_as_template parameter is true then the image will be imported as a template, otherwise it will be imported as a disk. When imported as a template, the name of the template can be specified by the optional template.name parameter.
If that parameter is not specified, then the name of the template will be automatically assigned by the engine as GlanceTemplate-x (where x will be seven random hexadecimal characters). When imported as a disk, the name of the disk can be specified by the optional disk.name parameter. If that parameter is not specified, then the name of the disk will be automatically assigned by the engine as GlanceDisk-x (where x will be the seven hexadecimal characters of the image identifier). It is recommended to always explicitly specify the template or disk name, to avoid these automatic names generated by the engine. Table 6.373. Parameters summary Name Type Direction Summary async Boolean In Indicates if the import should be performed asynchronously. cluster Cluster In The cluster to which the image should be imported if the import_as_template parameter is set to true . disk Disk In The disk to import. import_as_template Boolean In Specifies if a template should be created from the imported disk. storage_domain StorageDomain In The storage domain to which the disk should be imported. template Template In The name of the template being created if the import_as_template parameter is set to true . 6.123. ImageTransfer This service provides a mechanism to control an image transfer. The client will have to create a transfer by using add of the image transfers service, stating the image to transfer data to/from. After doing that, the transfer is managed by this service. Using oVirt's Python SDK: Uploading a disk with id 123 (on a random host in the data center):

transfers_service = system_service.image_transfers_service()
transfer = transfers_service.add(
    types.ImageTransfer(
        disk=types.Disk(
            id='123'
        )
    )
)

Uploading a disk with id 123 on host id 456 :

transfers_service = system_service.image_transfers_service()
transfer = transfers_service.add(
    types.ImageTransfer(
        disk=types.Disk(
            id='123'
        ),
        host=types.Host(
            id='456'
        )
    )
)

If the user wishes to download a disk rather than upload, they should specify download as the direction attribute of the transfer. This will grant read permission on the image, instead of write permission. For example:

transfers_service = system_service.image_transfers_service()
transfer = transfers_service.add(
    types.ImageTransfer(
        disk=types.Disk(
            id='123'
        ),
        direction=types.ImageTransferDirection.DOWNLOAD
    )
)

Transfers have phases, which govern the flow of the upload/download. A client implementing such a flow should poll/check the transfer's phase and act accordingly. All the possible phases can be found in ImageTransferPhase . After adding a new transfer, its phase will be initializing . The client will have to poll on the transfer's phase until it changes. When the phase becomes transferring , the session is ready to start the transfer. For example:

transfer_service = transfers_service.image_transfer_service(transfer.id)
while transfer.phase == types.ImageTransferPhase.INITIALIZING:
    time.sleep(3)
    transfer = transfer_service.get()

At that stage, if the phase of the transfer is paused_system , the session was not successfully established. This can happen if ovirt-imageio is not running on the selected host. Table 6.374. Methods summary Name Summary cancel Cancel the image transfer session. extend Extend the image transfer session. finalize After the data transfer has finished, finalize the transfer. get Get the image transfer entity. pause Pause the image transfer session. resume Resume the image transfer session. 6.123.1. cancel POST Cancel the image transfer session.
This terminates the transfer operation and removes the partial image. 6.123.2. extend POST Extend the image transfer session. 6.123.3. finalize POST After the data transfer has finished, finalize the transfer. This will make sure that the data being transferred is valid and fits the image entity that was targeted in the transfer. Specifically, it will verify that if the image entity is a QCOW disk, the data uploaded is indeed a QCOW file, and that the image doesn't have a backing file. 6.123.4. get GET Get the image transfer entity. Table 6.375. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . image_transfer ImageTransfer Out 6.123.4.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.123.5. pause POST Pause the image transfer session. 6.123.6. resume POST Resume the image transfer session. The client will need to poll the transfer's phase until it is different than resuming . For example:

transfer_service = transfers_service.image_transfer_service(transfer.id)
transfer_service.resume()
transfer = transfer_service.get()
while transfer.phase == types.ImageTransferPhase.RESUMING:
    time.sleep(1)
    transfer = transfer_service.get()

6.124. ImageTransfers This service manages image transfers, for the Image I/O API in Red Hat Virtualization. Please refer to image transfer for further documentation. Table 6.376. Methods summary Name Summary add Add a new image transfer. list Retrieves the list of image transfers that are currently being performed. 6.124.1. add POST Add a new image transfer. An image, disk or disk snapshot needs to be specified in order to make a new transfer. Important The image attribute is deprecated since version 4.2 of the engine. Use the disk or snapshot attributes instead. Creating a new image transfer for downloading or uploading a disk : To create an image transfer to download or upload a disk with id 123 , send the following request:

POST /ovirt-engine/api/imagetransfers

With a request body like this: <image_transfer> <disk id="123"/> <direction>upload|download</direction> </image_transfer> Creating a new image transfer for downloading or uploading a disk_snapshot : To create an image transfer to download or upload a disk_snapshot with id 456 , send the following request:

POST /ovirt-engine/api/imagetransfers

With a request body like this: <image_transfer> <snapshot id="456"/> <direction>download|upload</direction> </image_transfer> Table 6.377. Parameters summary Name Type Direction Summary image_transfer ImageTransfer In/Out The image transfer to add. 6.124.2. list GET Retrieves the list of image transfers that are currently being performed. The order of the returned list of image transfers is not guaranteed. Table 6.378. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . image_transfer ImageTransfer[ ] Out A list of image transfers that are currently being performed. 6.124.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.125. Images Manages the set of images available in a storage domain or in an OpenStack image provider. Table 6.379. Methods summary Name Summary list Returns the list of images available in the storage domain or provider. 6.125.1. list GET Returns the list of images available in the storage domain or provider.
The order of the returned list of images isn't guaranteed. Table 6.380. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . images Image[ ] Out max Integer In Sets the maximum number of images to return. 6.125.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.125.1.2. max Sets the maximum number of images to return. If not specified all the images are returned. 6.126. InstanceType Table 6.381. Methods summary Name Summary get Get a specific instance type and its attributes. remove Removes a specific instance type from the system. update Update a specific instance type and its attributes. 6.126.1. get GET Get a specific instance type and its attributes. Table 6.382. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . instance_type InstanceType Out 6.126.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.126.2. remove DELETE Removes a specific instance type from the system. If a virtual machine was created using an instance type X , then after removal of the instance type the virtual machine's instance type will be set to custom . Table 6.383. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.126.3. update PUT Update a specific instance type and its attributes. All the attributes are editable after creation. If a virtual machine was created using an instance type X and some configuration in instance type X was updated, the virtual machine's configuration will be updated automatically by the engine. For example, to update the memory of instance type 123 to 1 GiB and set the cpu topology to 2 sockets and 1 core, send a request like this:

PUT /ovirt-engine/api/instancetypes/123

With a request body like this: <instance_type> <memory>1073741824</memory> <cpu> <topology> <cores>1</cores> <sockets>2</sockets> <threads>1</threads> </topology> </cpu> </instance_type> Table 6.384. Parameters summary Name Type Direction Summary async Boolean In Indicates if the update should be performed asynchronously. instance_type InstanceType In/Out 6.127. InstanceTypeGraphicsConsole Table 6.385. Methods summary Name Summary get Gets graphics console configuration of the instance type. remove Remove the graphics console from the instance type. 6.127.1. get GET Gets graphics console configuration of the instance type. Table 6.386. Parameters summary Name Type Direction Summary console GraphicsConsole Out The information about the graphics console of the instance type. follow String In Indicates which inner links should be followed . 6.127.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.127.2. remove DELETE Remove the graphics console from the instance type. Table 6.387. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.128. InstanceTypeGraphicsConsoles Table 6.388. Methods summary Name Summary add Add new graphics console to the instance type. list Lists all the configured graphics consoles of the instance type. 6.128.1. add POST Add new graphics console to the instance type. Table 6.389.
Parameters summary Name Type Direction Summary console GraphicsConsole In/Out 6.128.2. list GET Lists all the configured graphics consoles of the instance type. The order of the returned list of graphics consoles isn't guaranteed. Table 6.390. Parameters summary Name Type Direction Summary consoles GraphicsConsole[ ] Out The list of graphics consoles of the instance type. follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of consoles to return. 6.128.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.128.2.2. max Sets the maximum number of consoles to return. If not specified all the consoles are returned. 6.129. InstanceTypeNic Table 6.391. Methods summary Name Summary get Gets network interface configuration of the instance type. remove Remove the network interface from the instance type. update Updates the network interface configuration of the instance type. 6.129.1. get GET Gets network interface configuration of the instance type. Table 6.392. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . nic Nic Out 6.129.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.129.2. remove DELETE Remove the network interface from the instance type. Table 6.393. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.129.3. update PUT Updates the network interface configuration of the instance type. Table 6.394. Parameters summary Name Type Direction Summary async Boolean In Indicates if the update should be performed asynchronously. nic Nic In/Out 6.130. InstanceTypeNics Table 6.395. Methods summary Name Summary add Add new network interface to the instance type. list Lists all the configured network interfaces of the instance type. 6.130.1. add POST Add new network interface to the instance type. Table 6.396. Parameters summary Name Type Direction Summary nic Nic In/Out 6.130.2. list GET Lists all the configured network interfaces of the instance type. The order of the returned list of network interfaces isn't guaranteed. Table 6.397. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of NICs to return. nics Nic[ ] Out search String In A query string used to restrict the returned NICs. 6.130.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.130.2.2. max Sets the maximum number of NICs to return. If not specified all the NICs are returned. 6.131. InstanceTypeWatchdog Table 6.398. Methods summary Name Summary get Gets watchdog configuration of the instance type. remove Remove a watchdog from the instance type. update Updates the watchdog configuration of the instance type. 6.131.1. get GET Gets watchdog configuration of the instance type. Table 6.399. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . watchdog Watchdog Out 6.131.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details.
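As a sketch of how this read maps onto the Python SDK (assuming an existing ovirtsdk4 connection, an instance type with id 123 and a watchdog with id 456 , all placeholders, not values from the original text):

# Navigate from the instance type to its watchdog and fetch its configuration:
type_service = connection.system_service().instance_types_service().instance_type_service('123')
watchdog_service = type_service.watchdogs_service().watchdog_service('456')
watchdog = watchdog_service.get()
print(watchdog.model, watchdog.action)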
6.131.2. remove DELETE Remove a watchdog from the instance type. Table 6.400. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.131.3. update PUT Updates the watchdog configuration of the instance type. Table 6.401. Parameters summary Name Type Direction Summary async Boolean In Indicates if the update should be performed asynchronously. watchdog Watchdog In/Out 6.132. InstanceTypeWatchdogs Table 6.402. Methods summary Name Summary add Add new watchdog to the instance type. list Lists all the configured watchdogs of the instance type. 6.132.1. add POST Add new watchdog to the instance type. Table 6.403. Parameters summary Name Type Direction Summary watchdog Watchdog In/Out 6.132.2. list GET Lists all the configured watchdogs of the instance type. The order of the returned list of watchdogs isn't guaranteed. Table 6.404. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of watchdogs to return. search String In A query string used to restrict the returned watchdogs. watchdogs Watchdog[ ] Out 6.132.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.132.2.2. max Sets the maximum number of watchdogs to return. If not specified all the watchdogs are returned. 6.133. InstanceTypes Table 6.405. Methods summary Name Summary add Creates a new instance type. list Lists all existing instance types in the system. 6.133.1. add POST Creates a new instance type. This requires only a name attribute, and can include all hardware configurations of the virtual machine, with a request body like this: <instance_type> <name>myinstancetype</name> </instance_type> To create an instance type with all hardware configurations, use a request body like this: <instance_type> <name>myinstancetype</name> <console> <enabled>true</enabled> </console> <cpu> <topology> <cores>2</cores> <sockets>2</sockets> <threads>1</threads> </topology> </cpu> <custom_cpu_model>AMD Opteron_G2</custom_cpu_model> <custom_emulated_machine>q35</custom_emulated_machine> <display> <monitors>1</monitors> <single_qxl_pci>true</single_qxl_pci> <smartcard_enabled>true</smartcard_enabled> <type>spice</type> </display> <high_availability> <enabled>true</enabled> <priority>1</priority> </high_availability> <io> <threads>2</threads> </io> <memory>4294967296</memory> <memory_policy> <ballooning>true</ballooning> <guaranteed>268435456</guaranteed> </memory_policy> <migration> <auto_converge>inherit</auto_converge> <compressed>inherit</compressed> <policy id="00000000-0000-0000-0000-000000000000"/> </migration> <migration_downtime>2</migration_downtime> <os> <boot> <devices> <device>hd</device> </devices> </boot> </os> <rng_device> <rate> <bytes>200</bytes> <period>2</period> </rate> <source>urandom</source> </rng_device> <soundcard_enabled>true</soundcard_enabled> <usb> <enabled>true</enabled> <type>native</type> </usb> <virtio_scsi> <enabled>true</enabled> </virtio_scsi> </instance_type>
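A sketch of the request that would carry either of the bodies above; the path targets the instance types collection and should be treated as an assumption rather than a normative value:

POST /ovirt-engine/api/instancetypes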
Table 6.406. Parameters summary Name Type Direction Summary instance_type InstanceType In/Out 6.133.2. list GET Lists all existing instance types in the system. The order of the returned list of instance types isn't guaranteed. Table 6.407. Parameters summary Name Type Direction Summary case_sensitive Boolean In Indicates if the search performed using the search parameter should be performed taking case into account. follow String In Indicates which inner links should be followed . instance_type InstanceType[ ] Out max Integer In Sets the maximum number of instance types to return. search String In A query string used to restrict the returned instance types. 6.133.2.1. case_sensitive Indicates if the search performed using the search parameter should be performed taking case into account. The default value is true , which means that case is taken into account. If you want to search ignoring case set it to false . 6.133.2.2. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.133.2.3. max Sets the maximum number of instance types to return. If not specified all the instance types are returned. 6.134. IscsiBond Table 6.408. Methods summary Name Summary get remove Removes an existing iSCSI bond. update Updates an iSCSI bond. 6.134.1. get GET Table 6.409. Parameters summary Name Type Direction Summary bond IscsiBond Out The iSCSI bond. follow String In Indicates which inner links should be followed . 6.134.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.134.2. remove DELETE Removes an existing iSCSI bond. For example, to remove the iSCSI bond 456 send a request like this: Table 6.410. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.134.3. update PUT Updates an iSCSI bond. Only the name and description attributes of an iSCSI bond can be updated. For example, to update the iSCSI bond 456 of data center 123 , send a request like this: The request body should look like this: <iscsi_bond> <name>mybond</name> <description>My iSCSI bond</description> </iscsi_bond> Table 6.411. Parameters summary Name Type Direction Summary async Boolean In Indicates if the update should be performed asynchronously. bond IscsiBond In/Out The iSCSI bond to update. 6.135. IscsiBonds Table 6.412. Methods summary Name Summary add Create a new iSCSI bond on a data center. list Returns the list of iSCSI bonds configured in the data center. 6.135.1. add POST Create a new iSCSI bond on a data center. For example, to create a new iSCSI bond on data center 123 using storage connections 456 and 789 , send a request like this: The request body should look like this: <iscsi_bond> <name>mybond</name> <storage_connections> <storage_connection id="456"/> <storage_connection id="789"/> </storage_connections> <networks> <network id="abc"/> </networks> </iscsi_bond> Table 6.413. Parameters summary Name Type Direction Summary bond IscsiBond In/Out 6.135.2. list GET Returns the list of iSCSI bonds configured in the data center. The order of the returned list of iSCSI bonds isn't guaranteed. Table 6.414. Parameters summary Name Type Direction Summary bonds IscsiBond[ ] Out follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of bonds to return. 6.135.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.135.2.2. max Sets the maximum number of bonds to return. If not specified all the bonds are returned.
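As a hedged sketch, the iSCSI bonds of a data center could be listed with a request like this, assuming a data center with identifier 123 (the path mirrors the data center identifiers used in the examples above):

GET /ovirt-engine/api/datacenters/123/iscsibonds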
6.136. Job A service to manage a job. Table 6.415. Methods summary Name Summary clear Set an external job execution to be cleared by the system. end Marks an external job execution as ended. get Retrieves a job. 6.136.1. clear POST Set an external job execution to be cleared by the system. For example, to set a job with identifier 123 to be cleared, send the following request: With the following request body: <action/> Table 6.416. Parameters summary Name Type Direction Summary async Boolean In Indicates if the action should be performed asynchronously. 6.136.2. end POST Marks an external job execution as ended. For example, to terminate a job with identifier 123 , send the following request: With the following request body: <action> <force>true</force> <status>finished</status> </action> Table 6.417. Parameters summary Name Type Direction Summary async Boolean In Indicates if the action should be performed asynchronously. force Boolean In Indicates if the job should be forcibly terminated. succeeded Boolean In Indicates if the job should be marked as successfully finished or as failed. 6.136.2.1. succeeded Indicates if the job should be marked as successfully finished or as failed. This parameter is optional, and the default value is true . 6.136.3. get GET Retrieves a job. You will receive a response in XML like this one: <job href="/ovirt-engine/api/jobs/123" id="123"> <actions> <link href="/ovirt-engine/api/jobs/123/clear" rel="clear"/> <link href="/ovirt-engine/api/jobs/123/end" rel="end"/> </actions> <description>Adding Disk</description> <link href="/ovirt-engine/api/jobs/123/steps" rel="steps"/> <auto_cleared>true</auto_cleared> <end_time>2016-12-12T23:07:29.758+02:00</end_time> <external>false</external> <last_updated>2016-12-12T23:07:29.758+02:00</last_updated> <start_time>2016-12-12T23:07:26.593+02:00</start_time> <status>failed</status> <owner href="/ovirt-engine/api/users/456" id="456"/> </job> Table 6.418. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . job Job Out Retrieves the representation of the job. 6.136.3.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.137. Jobs A service to manage jobs. Table 6.419. Methods summary Name Summary add Add an external job. list Retrieves the representation of the jobs. 6.137.1. add POST Add an external job. For example, to add a job, send the following request: With the following request body: <job> <description>Doing some work</description> <auto_cleared>true</auto_cleared> </job> The response should look like: <job href="/ovirt-engine/api/jobs/123" id="123"> <actions> <link href="/ovirt-engine/api/jobs/123/clear" rel="clear"/> <link href="/ovirt-engine/api/jobs/123/end" rel="end"/> </actions> <description>Doing some work</description> <link href="/ovirt-engine/api/jobs/123/steps" rel="steps"/> <auto_cleared>true</auto_cleared> <external>true</external> <last_updated>2016-12-13T02:15:42.130+02:00</last_updated> <start_time>2016-12-13T02:15:42.130+02:00</start_time> <status>started</status> <owner href="/ovirt-engine/api/users/456" id="456"/> </job> Table 6.420. Parameters summary Name Type Direction Summary job Job In/Out Job that will be added.
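For reference, a sketch of the request that carries the job body shown above; the path matches the href values in the example response and should otherwise be treated as illustrative:

POST /ovirt-engine/api/jobs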
6.137.2. list GET Retrieves the representation of the jobs. You will receive a response in XML like this one: <jobs> <job href="/ovirt-engine/api/jobs/123" id="123"> <actions> <link href="/ovirt-engine/api/jobs/123/clear" rel="clear"/> <link href="/ovirt-engine/api/jobs/123/end" rel="end"/> </actions> <description>Adding Disk</description> <link href="/ovirt-engine/api/jobs/123/steps" rel="steps"/> <auto_cleared>true</auto_cleared> <end_time>2016-12-12T23:07:29.758+02:00</end_time> <external>false</external> <last_updated>2016-12-12T23:07:29.758+02:00</last_updated> <start_time>2016-12-12T23:07:26.593+02:00</start_time> <status>failed</status> <owner href="/ovirt-engine/api/users/456" id="456"/> </job> ... </jobs> The order of the returned list of jobs isn't guaranteed. Table 6.421. Parameters summary Name Type Direction Summary case_sensitive Boolean In Indicates if the search performed using the search parameter should be performed taking case into account. follow String In Indicates which inner links should be followed . jobs Job[ ] Out A representation of jobs. max Integer In Sets the maximum number of jobs to return. search String In A query string used to restrict the returned jobs. 6.137.2.1. case_sensitive Indicates if the search performed using the search parameter should be performed taking case into account. The default value is true , which means that case is taken into account. If you want to search ignoring case set it to false . 6.137.2.2. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.137.2.3. max Sets the maximum number of jobs to return. If not specified all the jobs are returned. 6.138. KatelloErrata A service to manage Katello errata. The information is retrieved from Katello. Table 6.422. Methods summary Name Summary list Retrieves the representation of the Katello errata. 6.138.1. list GET Retrieves the representation of the Katello errata. You will receive a response in XML like this one: <katello_errata> <katello_erratum href="/ovirt-engine/api/katelloerrata/123" id="123"> <name>RHBA-2013:XYZ</name> <description>The description of the erratum</description> <title>some bug fix update</title> <type>bugfix</type> <issued>2013-11-20T02:00:00.000+02:00</issued> <solution>Few guidelines regarding the solution</solution> <summary>Updated packages that fix one bug are now available for XYZ</summary> <packages> <package> <name>libipa_hbac-1.9.2-82.11.el6_4.i686</name> </package> ... </packages> </katello_erratum> ... </katello_errata> The order of the returned list of errata isn't guaranteed. Table 6.423. Parameters summary Name Type Direction Summary errata KatelloErratum[ ] Out A representation of Katello errata. follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of errata to return. 6.138.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.138.1.2. max Sets the maximum number of errata to return. If not specified all the errata are returned.
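A sketch of the corresponding list request, with the path taken from the href values in the example response above:

GET /ovirt-engine/api/katelloerrata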
6.139. KatelloErratum A service to manage a Katello erratum. Table 6.424. Methods summary Name Summary get Retrieves a Katello erratum. 6.139.1. get GET Retrieves a Katello erratum. You will receive a response in XML like this one: <katello_erratum href="/ovirt-engine/api/katelloerrata/123" id="123"> <name>RHBA-2013:XYZ</name> <description>The description of the erratum</description> <title>some bug fix update</title> <type>bugfix</type> <issued>2013-11-20T02:00:00.000+02:00</issued> <solution>Few guidelines regarding the solution</solution> <summary>Updated packages that fix one bug are now available for XYZ</summary> <packages> <package> <name>libipa_hbac-1.9.2-82.11.el6_4.i686</name> </package> ... </packages> </katello_erratum> Table 6.425. Parameters summary Name Type Direction Summary erratum KatelloErratum Out Retrieves the representation of the Katello erratum. follow String In Indicates which inner links should be followed . 6.139.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.140. LinkLayerDiscoveryProtocol A service to fetch information elements received by Link Layer Discovery Protocol (LLDP). Table 6.426. Methods summary Name Summary list Fetches information elements received by LLDP. 6.140.1. list GET Fetches information elements received by LLDP. Table 6.427. Parameters summary Name Type Direction Summary elements LinkLayerDiscoveryProtocolElement[ ] Out Retrieves a list of information elements received by LLDP. follow String In Indicates which inner links should be followed . 6.140.1.1. elements Retrieves a list of information elements received by LLDP. For example, to retrieve the information elements received on the NIC 321 on host 123 , send a request like this: It will return a response like this: <link_layer_discovery_protocol_elements> ... <link_layer_discovery_protocol_element> <name>Port Description</name> <properties> <property> <name>port description</name> <value>Summit300-48-Port 1001</value> </property> </properties> <type>4</type> </link_layer_discovery_protocol_element> ... </link_layer_discovery_protocol_elements> 6.140.1.2. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.141. MacPool Table 6.428. Methods summary Name Summary get remove Removes a MAC address pool. update Updates a MAC address pool. 6.141.1. get GET Table 6.429. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . pool MacPool Out 6.141.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.141.2. remove DELETE Removes a MAC address pool. For example, to remove the MAC address pool having id 123 send a request like this: Table 6.430. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.141.3. update PUT Updates a MAC address pool. The name , description , allow_duplicates , and ranges attributes can be updated. For example, to update the MAC address pool of id 123 send a request like this: With a request body like this: <mac_pool> <name>UpdatedMACPool</name> <description>An updated MAC address pool</description> <allow_duplicates>false</allow_duplicates> <ranges> <range> <from>00:1A:4A:16:01:51</from> <to>00:1A:4A:16:01:e6</to> </range> <range> <from>02:1A:4A:01:00:00</from> <to>02:1A:4A:FF:FF:FF</to> </range> </ranges> </mac_pool>
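A sketch of the request that would carry this body, assuming the MAC address pool with id 123 mentioned above (the macpools path is an assumption based on the collection name):

PUT /ovirt-engine/api/macpools/123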
Table 6.431. Parameters summary Name Type Direction Summary async Boolean In Indicates if the update should be performed asynchronously. pool MacPool In/Out 6.142. MacPools Table 6.432. Methods summary Name Summary add Creates a new MAC address pool. list Returns the list of MAC address pools of the system. 6.142.1. add POST Creates a new MAC address pool. Creation of a MAC address pool requires values for the name and ranges attributes. For example, to create a MAC address pool, send a request like this: With a request body like this: <mac_pool> <name>MACPool</name> <description>A MAC address pool</description> <allow_duplicates>true</allow_duplicates> <default_pool>false</default_pool> <ranges> <range> <from>00:1A:4A:16:01:51</from> <to>00:1A:4A:16:01:e6</to> </range> </ranges> </mac_pool> Table 6.433. Parameters summary Name Type Direction Summary pool MacPool In/Out 6.142.2. list GET Returns the list of MAC address pools of the system. The order of the returned list of MAC address pools isn't guaranteed. Table 6.434. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of pools to return. pools MacPool[ ] Out 6.142.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.142.2.2. max Sets the maximum number of pools to return. If not specified all the pools are returned. 6.143. Measurable 6.144. Moveable Table 6.435. Methods summary Name Summary move 6.144.1. move POST Table 6.436. Parameters summary Name Type Direction Summary async Boolean In Indicates if the move should be performed asynchronously. 6.145. Network A service managing a network. Table 6.437. Methods summary Name Summary get Gets a logical network. remove Removes a logical network, or the association of a logical network to a data center. update Updates a logical network. 6.145.1. get GET Gets a logical network. For example: Will respond: <network href="/ovirt-engine/api/networks/123" id="123"> <name>ovirtmgmt</name> <description>Default Management Network</description> <link href="/ovirt-engine/api/networks/123/permissions" rel="permissions"/> <link href="/ovirt-engine/api/networks/123/vnicprofiles" rel="vnicprofiles"/> <link href="/ovirt-engine/api/networks/123/networklabels" rel="networklabels"/> <mtu>0</mtu> <stp>false</stp> <usages> <usage>vm</usage> </usages> <data_center href="/ovirt-engine/api/datacenters/456" id="456"/> </network> Table 6.438. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . network Network Out 6.145.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.145.2. remove DELETE Removes a logical network, or the association of a logical network to a data center. For example, to remove the logical network 123 send a request like this: Each network is bound to exactly one data center, so disassociating a network from its data center has the same result as removing that network. However, it might be more specific to say that we are removing the network 456 of data center 123 . For example, to remove the association of network 456 to data center 123 send a request like this: Note To remove an external logical network, the network has to be removed directly from its provider by the OpenStack Networking API . The entity representing the external network inside Red Hat Virtualization is removed automatically if auto_sync is enabled for the provider; otherwise the entity has to be removed using this method. Table 6.439. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously.
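For illustration, hedged sketches of the two removal requests described above; the paths are assumptions built from the resource identifiers mentioned in the text and the href values shown in the get example:

DELETE /ovirt-engine/api/networks/123

DELETE /ovirt-engine/api/datacenters/123/networks/456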
6.145.3. update PUT Updates a logical network. The name , description , ip , vlan , stp and display attributes can be updated. For example, to update the description of the logical network 123 send a request like this: With a request body like this: <network> <description>My updated description</description> </network> The maximum transmission unit of a network is set using a PUT request to specify the integer value of the mtu attribute. For example, to set the maximum transmission unit, send a request like this: With a request body like this: <network> <mtu>1500</mtu> </network> Note Updating external networks is not propagated to the provider. Table 6.440. Parameters summary Name Type Direction Summary async Boolean In Indicates if the update should be performed asynchronously. network Network In/Out 6.146. NetworkAttachment Table 6.441. Methods summary Name Summary get remove update Update the specified network attachment on the host. 6.146.1. get GET Table 6.442. Parameters summary Name Type Direction Summary attachment NetworkAttachment Out follow String In Indicates which inner links should be followed . 6.146.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.146.2. remove DELETE Table 6.443. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.146.3. update PUT Update the specified network attachment on the host. Table 6.444. Parameters summary Name Type Direction Summary async Boolean In Indicates if the update should be performed asynchronously. attachment NetworkAttachment In/Out 6.147. NetworkAttachments Manages the set of network attachments of a host or host NIC. Table 6.445. Methods summary Name Summary add Add a new network attachment to the network interface. list Returns the list of network attachments of the host or host NIC. 6.147.1. add POST Add a new network attachment to the network interface. Table 6.446. Parameters summary Name Type Direction Summary attachment NetworkAttachment In/Out 6.147.2. list GET Returns the list of network attachments of the host or host NIC. The order of the returned list of network attachments isn't guaranteed. Table 6.447. Parameters summary Name Type Direction Summary attachments NetworkAttachment[ ] Out follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of attachments to return. 6.147.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.147.2.2. max Sets the maximum number of attachments to return. If not specified all the attachments are returned.
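As a sketch, the network attachments of a host could be listed with a request like this, assuming a host with identifier 123 ; a host NIC variant would target the NIC's own networkattachments sub-collection instead (both paths are assumptions based on the collection names):

GET /ovirt-engine/api/hosts/123/networkattachments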
6.148. NetworkFilter Manages a network filter. <network_filter id="00000019-0019-0019-0019-00000000026b"> <name>example-network-filter-b</name> <version> <major>4</major> <minor>0</minor> <build>-1</build> <revision>-1</revision> </version> </network_filter> Please note that version refers to the minimal supported version for the specific filter. Table 6.448. Methods summary Name Summary get Retrieves a representation of the network filter. 6.148.1. get GET Retrieves a representation of the network filter. Table 6.449. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . network_filter NetworkFilter Out 6.148.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.149. NetworkFilters Represents a read-only network filters sub-collection. The network filter makes it possible to filter packets sent to and from the VM's NIC according to defined rules. For more information please refer to the NetworkFilter service documentation. Network filters are supported in different versions, starting from version 3.0. A network filter is defined for each vnic profile. A vnic profile is defined for a specific network. A network can be assigned to several different clusters. In the future, each network will be defined at the cluster level. Currently, each network is defined at the data center level. Potential network filters for each network are determined by the network's data center compatibility version V. V must be >= the network filter version in order to configure this network filter for a specific network. Please note that if a network is assigned to a cluster with a version supporting a network filter, the filter may not be available due to the data center version being smaller than the network filter's version. Example of listing all of the supported network filters for a specific cluster: Output: <network_filters> <network_filter id="00000019-0019-0019-0019-00000000026c"> <name>example-network-filter-a</name> <version> <major>4</major> <minor>0</minor> <build>-1</build> <revision>-1</revision> </version> </network_filter> <network_filter id="00000019-0019-0019-0019-00000000026b"> <name>example-network-filter-b</name> <version> <major>4</major> <minor>0</minor> <build>-1</build> <revision>-1</revision> </version> </network_filter> <network_filter id="00000019-0019-0019-0019-00000000026a"> <name>example-network-filter-a</name> <version> <major>3</major> <minor>0</minor> <build>-1</build> <revision>-1</revision> </version> </network_filter> </network_filters> Table 6.450. Methods summary Name Summary list Retrieves the representations of the network filters. 6.149.1. list GET Retrieves the representations of the network filters. The order of the returned list of network filters isn't guaranteed. Table 6.451. Parameters summary Name Type Direction Summary filters NetworkFilter[ ] Out follow String In Indicates which inner links should be followed . 6.149.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.150. NetworkLabel Table 6.452. Methods summary Name Summary get remove Removes a label from a logical network. 6.150.1. get GET Table 6.453. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . label NetworkLabel Out 6.150.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.150.2. remove DELETE Removes a label from a logical network. For example, to remove the label exemplary from a logical network having id 123 send the following request:
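A hedged reconstruction of that request, with the networklabels path inferred from the networklabels link shown in the network representation earlier:

DELETE /ovirt-engine/api/networks/123/networklabels/exemplary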
Table 6.454. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.151. NetworkLabels Manages the set of labels attached to a network or to a host NIC. Table 6.455. Methods summary Name Summary add Attaches a label to a logical network. list Returns the list of labels attached to the network or host NIC. 6.151.1. add POST Attaches a label to a logical network. You can attach labels to a logical network to automate the association of that logical network with physical host network interfaces to which the same label has been attached. For example, to attach the label mylabel to a logical network having id 123 send a request like this: With a request body like this: <network_label id="mylabel"/> Table 6.456. Parameters summary Name Type Direction Summary label NetworkLabel In/Out 6.151.2. list GET Returns the list of labels attached to the network or host NIC. The order of the returned list of labels isn't guaranteed. Table 6.457. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . labels NetworkLabel[ ] Out max Integer In Sets the maximum number of labels to return. 6.151.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.151.2.2. max Sets the maximum number of labels to return. If not specified all the labels are returned. 6.152. Networks Manages logical networks. The engine creates a default ovirtmgmt network on installation. This network acts as the management network for access to hypervisor hosts. This network is associated with the Default cluster and is a member of the Default data center. Table 6.458. Methods summary Name Summary add Creates a new logical network, or associates an existing network with a data center. list List logical networks. 6.152.1. add POST Creates a new logical network, or associates an existing network with a data center. Creation of a new network requires the name and data_center elements. For example, to create a network named mynetwork for data center 123 send a request like this: With a request body like this: <network> <name>mynetwork</name> <data_center id="123"/> </network> To associate the existing network 456 with the data center 123 send a request like this: With a request body like this: <network> <name>ovirtmgmt</name> </network> To create a network named exnetwork on top of an external OpenStack network provider 456 send a request like this: <network> <name>exnetwork</name> <external_provider id="456"/> <data_center id="123"/> </network> Table 6.459. Parameters summary Name Type Direction Summary network Network In/Out 6.152.2. list GET List logical networks. For example: Will respond: <networks> <network href="/ovirt-engine/api/networks/123" id="123"> <name>ovirtmgmt</name> <description>Default Management Network</description> <link href="/ovirt-engine/api/networks/123/permissions" rel="permissions"/> <link href="/ovirt-engine/api/networks/123/vnicprofiles" rel="vnicprofiles"/> <link href="/ovirt-engine/api/networks/123/networklabels" rel="networklabels"/> <mtu>0</mtu> <stp>false</stp> <usages> <usage>vm</usage> </usages> <data_center href="/ovirt-engine/api/datacenters/456" id="456"/> </network> ... </networks> The order of the returned list of networks is guaranteed only if the sortby clause is included in the search parameter.
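For instance, a sorted listing could be requested with a sortby clause like this; the search syntax and URL encoding are illustrative, not values from this guide:

GET /ovirt-engine/api/networks?search=sortby%20name%20asc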
Table 6.460. Parameters summary Name Type Direction Summary case_sensitive Boolean In Indicates if the search performed using the search parameter should be performed taking case into account. follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of networks to return. networks Network[ ] Out search String In A query string used to restrict the returned networks. 6.152.2.1. case_sensitive Indicates if the search performed using the search parameter should be performed taking case into account. The default value is true , which means that case is taken into account. If you want to search ignoring case set it to false . 6.152.2.2. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.152.2.3. max Sets the maximum number of networks to return. If not specified all the networks are returned. 6.153. NicNetworkFilterParameter This service manages a parameter for a network filter. Table 6.461. Methods summary Name Summary get Retrieves a representation of the network filter parameter. remove Removes the filter parameter. update Updates the network filter parameter. 6.153.1. get GET Retrieves a representation of the network filter parameter. Table 6.462. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . parameter NetworkFilterParameter Out The representation of the network filter parameter. 6.153.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.153.2. remove DELETE Removes the filter parameter. For example, to remove the filter parameter with id 123 on NIC 456 of virtual machine 789 send a request like this: 6.153.3. update PUT Updates the network filter parameter. For example, to update the network filter parameter with id 123 on NIC 456 of virtual machine 789 send a request like this: With a request body like this: <network_filter_parameter> <name>updatedName</name> <value>updatedValue</value> </network_filter_parameter> Table 6.463. Parameters summary Name Type Direction Summary parameter NetworkFilterParameter In/Out The network filter parameter that is being updated. 6.154. NicNetworkFilterParameters This service manages a collection of parameters for network filters. Table 6.464. Methods summary Name Summary add Add a network filter parameter. list Retrieves the representations of the network filter parameters. 6.154.1. add POST Add a network filter parameter. For example, to add the parameter for the network filter on NIC 456 of virtual machine 789 send a request like this: With a request body like this: <network_filter_parameter> <name>IP</name> <value>10.0.1.2</value> </network_filter_parameter> Table 6.465. Parameters summary Name Type Direction Summary parameter NetworkFilterParameter In/Out The network filter parameter that is being added. 6.154.2. list GET Retrieves the representations of the network filter parameters. The order of the returned list of network filter parameters isn't guaranteed. Table 6.466. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . parameters NetworkFilterParameter[ ] Out The list of the network filter parameters. 6.154.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details.
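A sketch of the add request described above, assuming NIC 456 of virtual machine 789 as in the example (the sub-collection path is an assumption based on the service name):

POST /ovirt-engine/api/vms/789/nics/456/networkfilterparameters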
6.155. OpenstackImage Table 6.467. Methods summary Name Summary get import Imports a virtual machine from a Glance image storage domain. 6.155.1. get GET Table 6.468. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . image OpenStackImage Out 6.155.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.155.2. import POST Imports a virtual machine from a Glance image storage domain. For example, to import the image with identifier 456 from the storage domain with identifier 123 send a request like this: With a request body like this: <action> <storage_domain> <name>images0</name> </storage_domain> <cluster> <name>images0</name> </cluster> </action> Table 6.469. Parameters summary Name Type Direction Summary async Boolean In Indicates if the import should be performed asynchronously. cluster Cluster In This parameter is mandatory when using import_as_template and indicates which cluster should be used to import the Glance image as a template. disk Disk In import_as_template Boolean In Indicates whether the image should be imported as a template. storage_domain StorageDomain In template Template In 6.156. OpenstackImageProvider Table 6.470. Methods summary Name Summary get importcertificates Import the SSL certificates of the external host provider. remove testconnectivity In order to test connectivity for an external provider, run the following request, where 123 is the id of a provider. update Update the specified OpenStack image provider in the system. 6.156.1. get GET Table 6.471. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . provider OpenStackImageProvider Out 6.156.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.156.2. importcertificates POST Import the SSL certificates of the external host provider. Table 6.472. Parameters summary Name Type Direction Summary certificates Certificate[ ] In 6.156.3. remove DELETE Table 6.473. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.156.4. testconnectivity POST In order to test connectivity for an external provider, run the following request, where 123 is the id of a provider. Table 6.474. Parameters summary Name Type Direction Summary async Boolean In Indicates if the test should be performed asynchronously. 6.156.5. update PUT Update the specified OpenStack image provider in the system. Table 6.475. Parameters summary Name Type Direction Summary async Boolean In Indicates if the update should be performed asynchronously. provider OpenStackImageProvider In/Out 6.157. OpenstackImageProviders Table 6.476. Methods summary Name Summary add Add a new OpenStack image provider to the system. list Returns the list of providers. 6.157.1. add POST Add a new OpenStack image provider to the system. Table 6.477. Parameters summary Name Type Direction Summary provider OpenStackImageProvider In/Out 6.157.2. list GET Returns the list of providers. The order of the returned list of providers isn't guaranteed. Table 6.478. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of providers to return. 
providers OpenStackImageProvider[ ] Out search String In A query string used to restrict the returned OpenStack image providers. 6.157.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.157.2.2. max Sets the maximum number of providers to return. If not specified all the providers are returned. 6.158. OpenstackImages Table 6.479. Methods summary Name Summary list Lists the images of a Glance image storage domain. 6.158.1. list GET Lists the images of a Glance image storage domain. The order of the returned list of images isn't guaranteed. Table 6.480. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . images OpenStackImage[ ] Out max Integer In Sets the maximum number of images to return. 6.158.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.158.1.2. max Sets the maximum number of images to return. If not specified all the images are returned. 6.159. OpenstackNetwork Table 6.481. Methods summary Name Summary get import This operation imports an external network into Red Hat Virtualization. 6.159.1. get GET Table 6.482. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . network OpenStackNetwork Out 6.159.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.159.2. import POST This operation imports an external network into Red Hat Virtualization. The network will be added to the specified data center. Table 6.483. Parameters summary Name Type Direction Summary async Boolean In Indicates if the import should be performed asynchronously. data_center DataCenter In The data center into which the network is to be imported. 6.159.2.1. data_center The data center into which the network is to be imported. The data center is mandatory, and can be specified using the id or name attributes. The rest of the attributes will be ignored. Note If auto_sync is enabled for the provider, the network might be imported automatically. To prevent this, automatic import can be disabled by setting auto_sync to false, and enabling it again after importing the network. 6.160. OpenstackNetworkProvider This service manages the OpenStack network provider. Table 6.484. Methods summary Name Summary get Returns the representation of the object managed by this service. importcertificates Import the SSL certificates of the external host provider. remove Removes the provider. testconnectivity In order to test connectivity for an external provider, run the following request, where 123 is the id of a provider. update Updates the provider. 6.160.1. get GET Returns the representation of the object managed by this service. For example, to get the OpenStack network provider with identifier 1234 , send a request like this: Table 6.485. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . provider OpenStackNetworkProvider Out 6.160.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.160.2. importcertificates POST Import the SSL certificates of the external host provider. 
Table 6.486. Parameters summary Name Type Direction Summary certificates Certificate[ ] In 6.160.3. remove DELETE Removes the provider. For example, to remove the OpenStack network provider with identifier 1234 , send a request like this: Table 6.487. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.160.4. testconnectivity POST In order to test connectivity for an external provider, run the following request, where 123 is the id of a provider. Table 6.488. Parameters summary Name Type Direction Summary async Boolean In Indicates if the test should be performed asynchronously. 6.160.5. update PUT Updates the provider. For example, to update provider_name , requires_authentication , url , tenant_name and type properties, for the OpenStack network provider with identifier 1234 , send a request like this: With a request body like this: <openstack_network_provider> <name>ovn-network-provider</name> <requires_authentication>false</requires_authentication> <url>http://some_server_url.domain.com:9696</url> <tenant_name>oVirt</tenant_name> <type>external</type> </openstack_network_provider> Table 6.489. Parameters summary Name Type Direction Summary async Boolean In Indicates if the update should be performed asynchronously. provider OpenStackNetworkProvider In/Out The provider to update. 6.161. OpenstackNetworkProviders This service manages OpenStack network providers. Table 6.490. Methods summary Name Summary add The operation adds a new network provider to the system. list Returns the list of providers. 6.161.1. add POST The operation adds a new network provider to the system. If the type property is not present, a default value of NEUTRON will be used. Table 6.491. Parameters summary Name Type Direction Summary provider OpenStackNetworkProvider In/Out 6.161.2. list GET Returns the list of providers. The order of the returned list of providers isn't guaranteed. Table 6.492. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of providers to return. providers OpenStackNetworkProvider[ ] Out search String In A query string used to restrict the returned OpenStack network providers. 6.161.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.161.2.2. max Sets the maximum number of providers to return. If not specified all the providers are returned. 6.162. OpenstackNetworks Table 6.493. Methods summary Name Summary list Returns the list of networks. 6.162.1. list GET Returns the list of networks. The order of the returned list of networks isn't guaranteed. Table 6.494. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of networks to return. networks OpenStackNetwork[ ] Out 6.162.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.162.1.2. max Sets the maximum number of networks to return. If not specified all the networks are returned. 6.163. OpenstackSubnet Table 6.495. Methods summary Name Summary get remove 6.163.1. get GET Table 6.496. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . subnet OpenStackSubnet Out 6.163.1.1. 
follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.163.2. remove DELETE Table 6.497. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.164. OpenstackSubnets Table 6.498. Methods summary Name Summary add list Returns the list of sub-networks. 6.164.1. add POST Table 6.499. Parameters summary Name Type Direction Summary subnet OpenStackSubnet In/Out 6.164.2. list GET Returns the list of sub-networks. The order of the returned list of sub-networks isn't guaranteed. Table 6.500. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of sub-networks to return. subnets OpenStackSubnet[ ] Out 6.164.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.164.2.2. max Sets the maximum number of sub-networks to return. If not specified all the sub-networks are returned. 6.165. OpenstackVolumeAuthenticationKey Openstack Volume (Cinder) integration has been replaced by Managed Block Storage. Table 6.501. Methods summary Name Summary get remove update Update the specified authentication key. 6.165.1. get GET Table 6.502. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . key OpenstackVolumeAuthenticationKey Out 6.165.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.165.2. remove DELETE Table 6.503. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.165.3. update PUT Update the specified authentication key. Table 6.504. Parameters summary Name Type Direction Summary key OpenstackVolumeAuthenticationKey In/Out 6.166. OpenstackVolumeAuthenticationKeys Openstack Volume (Cinder) integration has been replaced by Managed Block Storage. Table 6.505. Methods summary Name Summary add Add a new authentication key to the OpenStack volume provider. list Returns the list of authentication keys. 6.166.1. add POST Add a new authentication key to the OpenStack volume provider. Table 6.506. Parameters summary Name Type Direction Summary key OpenstackVolumeAuthenticationKey In/Out 6.166.2. list GET Returns the list of authentication keys. The order of the returned list of authentication keys isn't guaranteed. Table 6.507. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . keys OpenstackVolumeAuthenticationKey[ ] Out max Integer In Sets the maximum number of keys to return. 6.166.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.166.2.2. max Sets the maximum number of keys to return. If not specified all the keys are returned. 6.167. OpenstackVolumeProvider Openstack Volume (Cinder) integration has been replaced by Managed Block Storage. Table 6.508. Methods summary Name Summary get importcertificates Import the SSL certificates of the external host provider. 
remove testconnectivity In order to test connectivity for an external provider, run the following request, where 123 is the id of a provider. update Update the specified OpenStack volume provider in the system. 6.167.1. get GET Table 6.509. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . provider OpenStackVolumeProvider Out 6.167.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.167.2. importcertificates POST Import the SSL certificates of the external host provider. Table 6.510. Parameters summary Name Type Direction Summary certificates Certificate[ ] In 6.167.3. remove DELETE Table 6.511. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. force Boolean In Indicates if the operation should succeed, and the provider removed from the database, even if something fails during the operation. 6.167.3.1. force Indicates if the operation should succeed, and the provider removed from the database, even if something fails during the operation. This parameter is optional, and the default value is false . 6.167.4. testconnectivity POST In order to test connectivity for an external provider, run the following request, where 123 is the id of a provider. Table 6.512. Parameters summary Name Type Direction Summary async Boolean In Indicates if the test should be performed asynchronously. 6.167.5. update PUT Update the specified OpenStack volume provider in the system. Table 6.513. Parameters summary Name Type Direction Summary async Boolean In Indicates if the update should be performed asynchronously. provider OpenStackVolumeProvider In/Out 6.168. OpenstackVolumeProviders Openstack Volume (Cinder) integration has been replaced by Managed Block Storage. Table 6.514. Methods summary Name Summary add Adds a new volume provider. list Retrieves the list of volume providers. 6.168.1. add POST Adds a new volume provider. For example: With a request body like this: <openstack_volume_provider> <name>mycinder</name> <url>https://mycinder.example.com:8776</url> <data_center> <name>mydc</name> </data_center> <requires_authentication>true</requires_authentication> <username>admin</username> <password>mypassword</password> <tenant_name>mytenant</tenant_name> </openstack_volume_provider> Table 6.515. Parameters summary Name Type Direction Summary provider OpenStackVolumeProvider In/Out 6.168.2. list GET Retrieves the list of volume providers. The order of the returned list of volume providers isn't guaranteed. Table 6.516. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of providers to return. providers OpenStackVolumeProvider[ ] Out search String In A query string used to restrict the returned volume providers. 6.168.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.168.2.2. max Sets the maximum number of providers to return. If not specified all the providers are returned. 6.169. OpenstackVolumeType Openstack Volume (Cinder) integration has been replaced by Managed Block Storage. Table 6.517. Methods summary Name Summary get 6.169.1. get GET Table 6.518. 
Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . type OpenStackVolumeType Out 6.169.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.170. OpenstackVolumeTypes Openstack Volume (Cinder) integration has been replaced by Managed Block Storage. Table 6.519. Methods summary Name Summary list Returns the list of volume types. 6.170.1. list GET Returns the list of volume types. The order of the returned list of volume types isn't guaranteed. Table 6.520. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of volume types to return. types OpenStackVolumeType[ ] Out 6.170.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.170.1.2. max Sets the maximum number of volume types to return. If not specified all the volume types are returned. 6.171. OperatingSystem Table 6.521. Methods summary Name Summary get 6.171.1. get GET Table 6.522. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . operating_system OperatingSystemInfo Out 6.171.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.172. OperatingSystems Manages the set of types of operating systems available in the system. Table 6.523. Methods summary Name Summary list Returns the list of types of operating system available in the system. 6.172.1. list GET Returns the list of types of operating system available in the system. The order of the returned list of operating systems isn't guaranteed. Table 6.524. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of operating systems to return. operating_system OperatingSystemInfo[ ] Out 6.172.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.172.1.2. max Sets the maximum number of operating systems to return. If not specified all the operating systems are returned. 6.173. Permission Table 6.525. Methods summary Name Summary get remove 6.173.1. get GET Table 6.526. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . permission Permission Out 6.173.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.173.2. remove DELETE Table 6.527. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.174. Permit A service to manage a specific permit of the role. Table 6.528. Methods summary Name Summary get Gets the information about the permit of the role. remove Removes the permit from the role. 6.174.1. get GET Gets the information about the permit of the role. 
For example, to retrieve the information about the permit with the id 456 of the role with the id 123 , send a request like this: <permit href="/ovirt-engine/api/roles/123/permits/456" id="456"> <name>change_vm_cd</name> <administrative>false</administrative> <role href="/ovirt-engine/api/roles/123" id="123"/> </permit> Table 6.529. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . permit Permit Out The permit of the role. 6.174.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.174.2. remove DELETE Removes the permit from the role. For example, to remove the permit with id 456 from the role with id 123 , send a request like this: Table 6.530. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.175. Permits Represents a permits sub-collection of the specific role. Table 6.531. Methods summary Name Summary add Adds a permit to the role. list List the permits of the role. 6.175.1. add POST Adds a permit to the role. The permit name can be retrieved from the cluster_levels service. For example, to assign a permit create_vm to the role with id 123 , send a request like this: With a request body like this: <permit> <name>create_vm</name> </permit> Table 6.532. Parameters summary Name Type Direction Summary permit Permit In/Out The permit to add. 6.175.2. list GET List the permits of the role. For example, to list the permits of the role with the id 123 , send a request like this: <permits> <permit href="/ovirt-engine/api/roles/123/permits/5" id="5"> <name>change_vm_cd</name> <administrative>false</administrative> <role href="/ovirt-engine/api/roles/123" id="123"/> </permit> <permit href="/ovirt-engine/api/roles/123/permits/7" id="7"> <name>connect_to_vm</name> <administrative>false</administrative> <role href="/ovirt-engine/api/roles/123" id="123"/> </permit> </permits> The order of the returned list of permits isn't guaranteed. Table 6.533. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of permits to return. permits Permit[ ] Out List of permits. 6.175.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.175.2.2. max Sets the maximum number of permits to return. If not specified all the permits are returned. 6.176. Qos Table 6.534. Methods summary Name Summary get Gets the specified QoS in the data center. remove Removes the specified QoS from the data center. update Updates the specified QoS in the data center. 6.176.1. get GET Gets the specified QoS in the data center. You will get a response like this one: <qos href="/ovirt-engine/api/datacenters/123/qoss/123" id="123"> <name>123</name> <description>123</description> <max_iops>1</max_iops> <max_throughput>1</max_throughput> <type>storage</type> <data_center href="/ovirt-engine/api/datacenters/123" id="123"/> </qos> Table 6.535. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . qos Qos Out Queried QoS object. 6.176.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details.
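A sketch of the get request, with the path taken from the href in the example response above:

GET /ovirt-engine/api/datacenters/123/qoss/123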
remove DELETE Removes the specified QoS from the data center. Table 6.536. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.176.3. update PUT Updates the specified QoS in the data center. For example with curl: You will receive a response like this: <qos href="/ovirt-engine/api/datacenters/123/qoss/123" id="123"> <name>321</name> <description>321</description> <max_iops>10</max_iops> <max_throughput>1</max_throughput> <type>storage</type> <data_center href="/ovirt-engine/api/datacenters/123" id="123"/> </qos> Table 6.537. Parameters summary Name Type Direction Summary async Boolean In Indicates if the update should be performed asynchronously. qos Qos In/Out Updated QoS object. 6.177. Qoss Manages the set of quality of service configurations available in a data center. Table 6.538. Methods summary Name Summary add Adds a new QoS to the data center. list Returns the list of quality of service configurations available in the data center. 6.177.1. add POST Adds a new QoS to the data center. The response will look as follows: <qos href="/ovirt-engine/api/datacenters/123/qoss/123" id="123"> <name>123</name> <description>123</description> <max_iops>10</max_iops> <type>storage</type> <data_center href="/ovirt-engine/api/datacenters/123" id="123"/> </qos> Table 6.539. Parameters summary Name Type Direction Summary qos Qos In/Out Added QoS object. 6.177.2. list GET Returns the list of quality of service configurations available in the data center. You will get a response which will look like this: <qoss> <qos href="/ovirt-engine/api/datacenters/123/qoss/1" id="1">...</qos> <qos href="/ovirt-engine/api/datacenters/123/qoss/2" id="2">...</qos> <qos href="/ovirt-engine/api/datacenters/123/qoss/3" id="3">...</qos> </qoss> The order of the returned list of quality of service configurations isn't guaranteed. Table 6.540. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed. max Integer In Sets the maximum number of QoS descriptors to return. qoss Qos[ ] Out List of queried QoS objects. 6.177.2.1. follow Indicates which inner links should be followed. The objects referenced by these links will be fetched as part of the current request. See here for details. 6.177.2.2. max Sets the maximum number of QoS descriptors to return. If not specified all the descriptors are returned. 6.178. Quota Table 6.541. Methods summary Name Summary get Retrieves a quota. remove Deletes a quota. update Updates a quota. 6.178.1. get GET Retrieves a quota. An example of retrieving a quota: <quota id="456"> <name>myquota</name> <description>My new quota for virtual machines</description> <cluster_hard_limit_pct>20</cluster_hard_limit_pct> <cluster_soft_limit_pct>80</cluster_soft_limit_pct> <storage_hard_limit_pct>20</storage_hard_limit_pct> <storage_soft_limit_pct>80</storage_soft_limit_pct> </quota> Table 6.542. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed. quota Quota Out 6.178.1.1. follow Indicates which inner links should be followed. The objects referenced by these links will be fetched as part of the current request. See here for details. 6.178.2. remove DELETE Deletes a quota. An example of deleting a quota: Table 6.543. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.178.3. update PUT Updates a quota.
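The update is a PUT on the quota resource; a hedged sketch of the request line, assuming the data center id 123 and quota id 456 used in the surrounding examples:

PUT /ovirt-engine/api/datacenters/123/quotas/456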
An example of updating a quota: <quota> <cluster_hard_limit_pct>30</cluster_hard_limit_pct> <cluster_soft_limit_pct>70</cluster_soft_limit_pct> <storage_hard_limit_pct>20</storage_hard_limit_pct> <storage_soft_limit_pct>80</storage_soft_limit_pct> </quota> Table 6.544. Parameters summary Name Type Direction Summary async Boolean In Indicates if the update should be performed asynchronously. quota Quota In/Out 6.179. QuotaClusterLimit Table 6.545. Methods summary Name Summary get remove 6.179.1. get GET Table 6.546. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed. limit QuotaClusterLimit Out 6.179.1.1. follow Indicates which inner links should be followed. The objects referenced by these links will be fetched as part of the current request. See here for details. 6.179.2. remove DELETE Table 6.547. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.180. QuotaClusterLimits Manages the set of quota limits configured for a cluster. Table 6.548. Methods summary Name Summary add Adds a cluster limit to a specified quota. list Returns the set of quota limits configured for the cluster. 6.180.1. add POST Adds a cluster limit to a specified quota. Table 6.549. Parameters summary Name Type Direction Summary limit QuotaClusterLimit In/Out 6.180.2. list GET Returns the set of quota limits configured for the cluster. The order of the returned list of quota limits isn't guaranteed. Table 6.550. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed. limits QuotaClusterLimit[ ] Out max Integer In Sets the maximum number of limits to return. 6.180.2.1. follow Indicates which inner links should be followed. The objects referenced by these links will be fetched as part of the current request. See here for details. 6.180.2.2. max Sets the maximum number of limits to return. If not specified all the limits are returned. 6.181. QuotaStorageLimit Table 6.551. Methods summary Name Summary get remove 6.181.1. get GET Table 6.552. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed. limit QuotaStorageLimit Out 6.181.1.1. follow Indicates which inner links should be followed. The objects referenced by these links will be fetched as part of the current request. See here for details. 6.181.2. remove DELETE Table 6.553. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.182. QuotaStorageLimits Manages the set of storage limits configured for a quota. Table 6.554. Methods summary Name Summary add Adds a storage limit to a specified quota. list Returns the list of storage limits configured for the quota. 6.182.1. add POST Adds a storage limit to a specified quota. To create a 100GiB storage limit for all storage domains in a data center, send a request like this: With a request body like this: <quota_storage_limit> <limit>100</limit> </quota_storage_limit> To create a 50GiB storage limit for a storage domain with the ID 000, send a request like this: With a request body like this: <quota_storage_limit> <limit>50</limit> <storage_domain id="000"/> </quota_storage_limit> Table 6.555. Parameters summary Name Type Direction Summary limit QuotaStorageLimit In/Out 6.182.2. list GET Returns the list of storage limits configured for the quota. The order of the returned list of storage limits is not guaranteed.
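A hedged sketch of the list request, assuming the quotastoragelimits sub-collection path for quota 456 in data center 123:

GET /ovirt-engine/api/datacenters/123/quotas/456/quotastoragelimits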
Table 6.556. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed. limits QuotaStorageLimit[ ] Out max Integer In Sets the maximum number of limits to return. 6.182.2.1. follow Indicates which inner links should be followed. The objects referenced by these links will be fetched as part of the current request. See here for details. 6.182.2.2. max Sets the maximum number of limits to return. If not specified, all the limits are returned. 6.183. Quotas Manages the set of quotas configured for a data center. Table 6.557. Methods summary Name Summary add Creates a new quota. list Lists quotas of a data center. 6.183.1. add POST Creates a new quota. An example of creating a new quota: <quota> <name>myquota</name> <description>My new quota for virtual machines</description> </quota> Table 6.558. Parameters summary Name Type Direction Summary quota Quota In/Out 6.183.2. list GET Lists quotas of a data center. The order of the returned list of quotas isn't guaranteed. Table 6.559. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed. max Integer In Sets the maximum number of quota descriptors to return. quotas Quota[ ] Out 6.183.2.1. follow Indicates which inner links should be followed. The objects referenced by these links will be fetched as part of the current request. See here for details. 6.183.2.2. max Sets the maximum number of quota descriptors to return. If not specified all the descriptors are returned. 6.184. Role Table 6.560. Methods summary Name Summary get Gets the role. remove Removes the role. update Updates a role. 6.184.1. get GET Gets the role. You will receive an XML response like this one: <role id="123"> <name>MyRole</name> <description>MyRole description</description> <link href="/ovirt-engine/api/roles/123/permits" rel="permits"/> <administrative>true</administrative> <mutable>false</mutable> </role> Table 6.561. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed. role Role Out Retrieved role. 6.184.1.1. follow Indicates which inner links should be followed. The objects referenced by these links will be fetched as part of the current request. See here for details. 6.184.2. remove DELETE Removes the role. To remove the role you need to know its id, then send a request like this: Table 6.562. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.184.3. update PUT Updates a role. You are allowed to update the name, description and administrative attributes after the role is created. Within this endpoint you can't add or remove role permits; to do that, use the service that manages the permits of the role. For example, to update a role's name, description and administrative attributes send a request like this: With a request body like this: <role> <name>MyNewRoleName</name> <description>My new description of the role</description> <administrative>true</administrative> </role> Table 6.563. Parameters summary Name Type Direction Summary async Boolean In Indicates if the update should be performed asynchronously. role Role In/Out Updated role. 6.185. Roles Provides read-only access to the global set of roles. Table 6.564. Methods summary Name Summary add Creates a new role. list Lists roles. 6.185.1. add POST Creates a new role. The role can be administrative or non-administrative and can have different permits.
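Creating a role is a POST to the roles collection; a minimal sketch of the request line for the example that follows, assuming the standard API prefix:

POST /ovirt-engine/api/roles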
For example, to add the MyRole non-administrative role with permits to log in and create virtual machines, send a request like this (note that you have to pass the permit id): With a request body like this: <role> <name>MyRole</name> <description>My custom role to create virtual machines</description> <administrative>false</administrative> <permits> <permit id="1"/> <permit id="1300"/> </permits> </role> Table 6.565. Parameters summary Name Type Direction Summary role Role In/Out Role that will be added. 6.185.2. list GET Lists roles. You will receive a response in XML like this one: <roles> <role id="123"> <name>SuperUser</name> <description>Roles management administrator</description> <link href="/ovirt-engine/api/roles/123/permits" rel="permits"/> <administrative>true</administrative> <mutable>false</mutable> </role> ... </roles> The order of the returned list of roles isn't guaranteed. Table 6.566. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed. max Integer In Sets the maximum number of roles to return. roles Role[ ] Out Retrieved list of roles. 6.185.2.1. follow Indicates which inner links should be followed. The objects referenced by these links will be fetched as part of the current request. See here for details. 6.185.2.2. max Sets the maximum number of roles to return. If not specified all the roles are returned. 6.186. SchedulingPolicies Manages the set of scheduling policies available in the system. Table 6.567. Methods summary Name Summary add Adds a new scheduling policy to the system. list Returns the list of scheduling policies available in the system. 6.186.1. add POST Adds a new scheduling policy to the system. Table 6.568. Parameters summary Name Type Direction Summary policy SchedulingPolicy In/Out 6.186.2. list GET Returns the list of scheduling policies available in the system. The order of the returned list of scheduling policies isn't guaranteed. Table 6.569. Parameters summary Name Type Direction Summary filter Boolean In Indicates if the results should be filtered according to the permissions of the user. follow String In Indicates which inner links should be followed. max Integer In Sets the maximum number of policies to return. policies SchedulingPolicy[ ] Out 6.186.2.1. follow Indicates which inner links should be followed. The objects referenced by these links will be fetched as part of the current request. See here for details. 6.186.2.2. max Sets the maximum number of policies to return. If not specified all the policies are returned. 6.187. SchedulingPolicy Table 6.570. Methods summary Name Summary get remove update Updates the specified user-defined scheduling policy in the system. 6.187.1. get GET Table 6.571. Parameters summary Name Type Direction Summary filter Boolean In Indicates if the results should be filtered according to the permissions of the user. follow String In Indicates which inner links should be followed. policy SchedulingPolicy Out 6.187.1.1. follow Indicates which inner links should be followed. The objects referenced by these links will be fetched as part of the current request. See here for details. 6.187.2. remove DELETE Table 6.572. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.187.3. update PUT Updates the specified user-defined scheduling policy in the system. Table 6.573. Parameters summary Name Type Direction Summary async Boolean In Indicates if the update should be performed asynchronously.
policy SchedulingPolicy In/Out 6.188. SchedulingPolicyUnit Table 6.574. Methods summary Name Summary get remove 6.188.1. get GET Table 6.575. Parameters summary Name Type Direction Summary filter Boolean In Indicates if the results should be filtered according to the permissions of the user. follow String In Indicates which inner links should be followed . unit SchedulingPolicyUnit Out 6.188.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.188.2. remove DELETE Table 6.576. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.189. SchedulingPolicyUnits Manages the set of scheduling policy units available in the system. Table 6.577. Methods summary Name Summary list Returns the list of scheduling policy units available in the system. 6.189.1. list GET Returns the list of scheduling policy units available in the system. The order of the returned list of scheduling policy units isn't guaranteed. Table 6.578. Parameters summary Name Type Direction Summary filter Boolean In Indicates if the results should be filtered according to the permissions of the user. follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of policy units to return. units SchedulingPolicyUnit[ ] Out 6.189.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.189.1.2. max Sets the maximum number of policy units to return. If not specified all the policy units are returned. 6.190. Snapshot Table 6.579. Methods summary Name Summary get remove restore Restores a virtual machine snapshot. 6.190.1. get GET Table 6.580. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . snapshot Snapshot Out 6.190.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.190.2. remove DELETE Table 6.581. Parameters summary Name Type Direction Summary all_content Boolean In Indicates if all the attributes of the virtual machine snapshot should be included in the response. async Boolean In Indicates if the remove should be performed asynchronously. 6.190.2.1. all_content Indicates if all the attributes of the virtual machine snapshot should be included in the response. By default the attribute initialization.configuration.data is excluded. For example, to retrieve the complete representation of the snapshot with id 456 of the virtual machine with id 123 send a request like this: 6.190.3. restore POST Restores a virtual machine snapshot. For example, to restore the snapshot with identifier 456 of virtual machine with identifier 123 send a request like this: With an empty action in the body: <action/> Note Confirm that the commit operation is finished and the virtual machine is down before running the virtual machine. Table 6.582. Parameters summary Name Type Direction Summary async Boolean In Indicates if the restore should be performed asynchronously. disks Disk[ ] In Specify the disks included in the snapshot's restore. restore_memory Boolean In 6.190.3.1. disks Specify the disks included in the snapshot's restore. For each disk parameter, it is also required to specify its image_id . 
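A hedged sketch of the corresponding request line, assuming the snapshot restore action path used by the engine:

POST /ovirt-engine/api/vms/123/snapshots/456/restore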
For example, to restore a snapshot with an identifier 456 of a virtual machine with identifier 123, including a disk with identifier 111 and image_id of 222, send a request like this: Request body: <action> <disks> <disk id="111"> <image_id>222</image_id> </disk> </disks> </action> 6.191. SnapshotCdrom Table 6.583. Methods summary Name Summary get 6.191.1. get GET Table 6.584. Parameters summary Name Type Direction Summary cdrom Cdrom Out follow String In Indicates which inner links should be followed. 6.191.1.1. follow Indicates which inner links should be followed. The objects referenced by these links will be fetched as part of the current request. See here for details. 6.192. SnapshotCdroms Manages the set of CD-ROM devices of a virtual machine snapshot. Table 6.585. Methods summary Name Summary list Returns the list of CD-ROM devices of the snapshot. 6.192.1. list GET Returns the list of CD-ROM devices of the snapshot. The order of the returned list of CD-ROM devices isn't guaranteed. Table 6.586. Parameters summary Name Type Direction Summary cdroms Cdrom[ ] Out follow String In Indicates which inner links should be followed. max Integer In Sets the maximum number of CD-ROMs to return. 6.192.1.1. follow Indicates which inner links should be followed. The objects referenced by these links will be fetched as part of the current request. See here for details. 6.192.1.2. max Sets the maximum number of CD-ROMs to return. If not specified all the CD-ROMs are returned. 6.193. SnapshotDisk Table 6.587. Methods summary Name Summary get 6.193.1. get GET Table 6.588. Parameters summary Name Type Direction Summary disk Disk Out follow String In Indicates which inner links should be followed. 6.193.1.1. follow Indicates which inner links should be followed. The objects referenced by these links will be fetched as part of the current request. See here for details. 6.194. SnapshotDisks Manages the set of disks of a snapshot. Table 6.589. Methods summary Name Summary list Returns the list of disks of the snapshot. 6.194.1. list GET Returns the list of disks of the snapshot. The order of the returned list of disks isn't guaranteed. Table 6.590. Parameters summary Name Type Direction Summary disks Disk[ ] Out follow String In Indicates which inner links should be followed. max Integer In Sets the maximum number of disks to return. 6.194.1.1. follow Indicates which inner links should be followed. The objects referenced by these links will be fetched as part of the current request. See here for details. 6.194.1.2. max Sets the maximum number of disks to return. If not specified all the disks are returned. 6.195. SnapshotNic Table 6.591. Methods summary Name Summary get 6.195.1. get GET Table 6.592. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed. nic Nic Out 6.195.1.1. follow Indicates which inner links should be followed. The objects referenced by these links will be fetched as part of the current request. See here for details. 6.196. SnapshotNics Manages the set of NICs of a snapshot. Table 6.593. Methods summary Name Summary list Returns the list of NICs of the snapshot. 6.196.1. list GET Returns the list of NICs of the snapshot. The order of the returned list of NICs isn't guaranteed. Table 6.594. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed. max Integer In Sets the maximum number of NICs to return. nics Nic[ ] Out 6.196.1.1.
follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.196.1.2. max Sets the maximum number of NICs to return. If not specified all the NICs are returned. 6.197. Snapshots Manages the set of snapshots of a storage domain or virtual machine. Table 6.595. Methods summary Name Summary add Creates a virtual machine snapshot. list Returns the list of snapshots of the storage domain or virtual machine. 6.197.1. add POST Creates a virtual machine snapshot. For example, to create a new snapshot for virtual machine 123 send a request like this: With a request body like this: <snapshot> <description>My snapshot</description> </snapshot> For including only a sub-set of disks in the snapshots, add disk_attachments element to the request body. Note that disks which are not specified in disk_attachments element will not be a part of the snapshot. If an empty disk_attachments element is passed, the snapshot will include only the virtual machine configuration. If no disk_attachments element is passed, then all the disks will be included in the snapshot. For each disk, image_id element can be specified for setting the new active image id. This is used in order to restore a chain of images from backup. I.e. when restoring a disk with snapshots, the relevant image_id should be specified for each snapshot (so the identifiers of the disk snapshots are identical to the backup). <snapshot> <description>My snapshot</description> <disk_attachments> <disk_attachment> <disk id="123"> <image_id>456</image_id> </disk> </disk_attachment> </disk_attachments> </snapshot> Important When a snapshot is created, the default value for the persist_memorystate attribute is true . That means that the content of the memory of the virtual machine will be included in the snapshot, and it also means that the virtual machine will be paused for a longer time. That can negatively affect applications that are very sensitive to timing (NTP servers, for example). In those cases make sure that you set the attribute to false : <snapshot> <description>My snapshot</description> <persist_memorystate>false</persist_memorystate> </snapshot> Table 6.596. Parameters summary Name Type Direction Summary snapshot Snapshot In/Out 6.197.2. list GET Returns the list of snapshots of the storage domain or virtual machine. The order of the returned list of snapshots isn't guaranteed. Table 6.597. Parameters summary Name Type Direction Summary all_content Boolean In Indicates if all the attributes of the virtual machine snapshot should be included in the response. follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of snapshots to return. snapshots Snapshot[ ] Out 6.197.2.1. all_content Indicates if all the attributes of the virtual machine snapshot should be included in the response. By default the attribute initialization.configuration.data is excluded. For example, to retrieve the complete representation of the virtual machine with id 123 snapshots send a request like this: 6.197.2.2. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.197.2.3. max Sets the maximum number of snapshots to return. If not specified all the snapshots are returned. 6.198. SshPublicKey Table 6.598. Methods summary Name Summary get remove update Replaces the key with a new resource. 6.198.1. 
get GET Table 6.599. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . key SshPublicKey Out 6.198.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.198.2. remove DELETE Table 6.600. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.198.3. update PUT Replaces the key with a new resource. Important Since version 4.4.8 of the engine this operation is deprecated, and preserved only for backwards compatibility. It will be removed in the future. Instead please use DELETE followed by add operation . Table 6.601. Parameters summary Name Type Direction Summary async Boolean In Indicates if the update should be performed asynchronously. key SshPublicKey In/Out 6.199. SshPublicKeys Table 6.602. Methods summary Name Summary add list Returns a list of SSH public keys of the user. 6.199.1. add POST Table 6.603. Parameters summary Name Type Direction Summary key SshPublicKey In/Out 6.199.2. list GET Returns a list of SSH public keys of the user. For example, to retrieve the list of SSH keys of user with identifier 123 , send a request like this: The result will be the following XML document: <ssh_public_keys> <ssh_public_key href="/ovirt-engine/api/users/123/sshpublickeys/456" id="456"> <content>ssh-rsa ...</content> <user href="/ovirt-engine/api/users/123" id="123"/> </ssh_public_key> </ssh_public_keys> Or the following JSON object { "ssh_public_key": [ { "content": "ssh-rsa ...", "user": { "href": "/ovirt-engine/api/users/123", "id": "123" }, "href": "/ovirt-engine/api/users/123/sshpublickeys/456", "id": "456" } ] } The order of the returned list of keys is not guaranteed. Table 6.604. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . keys SshPublicKey[ ] Out max Integer In Sets the maximum number of keys to return. 6.199.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.199.2.2. max Sets the maximum number of keys to return. If not specified all the keys are returned. 6.200. Statistic Table 6.605. Methods summary Name Summary get 6.200.1. get GET Table 6.606. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . statistic Statistic Out 6.200.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.201. Statistics Table 6.607. Methods summary Name Summary list Retrieves a list of statistics. 6.201.1. list GET Retrieves a list of statistics. For example, to retrieve the statistics for virtual machine 123 send a request like this: The result will be like this: <statistics> <statistic href="/ovirt-engine/api/vms/123/statistics/456" id="456"> <name>memory.installed</name> <description>Total memory configured</description> <kind>gauge</kind> <type>integer</type> <unit>bytes</unit> <values> <value> <datum>1073741824</datum> </value> </values> <vm href="/ovirt-engine/api/vms/123" id="123"/> </statistic> ... </statistics> Just a single part of the statistics can be retrieved by specifying its id at the end of the URI. 
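A hedged sketch, reusing the identifiers from the example above (the path matches the href shown in the response):

GET /ovirt-engine/api/vms/123/statistics/456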
That request outputs: <statistic href="/ovirt-engine/api/vms/123/statistics/456" id="456"> <name>memory.installed</name> <description>Total memory configured</description> <kind>gauge</kind> <type>integer</type> <unit>bytes</unit> <values> <value> <datum>1073741824</datum> </value> </values> <vm href="/ovirt-engine/api/vms/123" id="123"/> </statistic> The order of the returned list of statistics isn't guaranteed. Table 6.608. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed. max Integer In Sets the maximum number of statistics to return. statistics Statistic[ ] Out 6.201.1.1. follow Indicates which inner links should be followed. The objects referenced by these links will be fetched as part of the current request. See here for details. 6.201.1.2. max Sets the maximum number of statistics to return. If not specified all the statistics are returned. 6.202. Step A service to manage a step. Table 6.609. Methods summary Name Summary end Marks an external step execution as ended. get Retrieves a step. 6.202.1. end POST Marks an external step execution as ended. For example, to terminate a step with identifier 456 which belongs to a job with identifier 123 send the following request: POST /ovirt-engine/api/jobs/123/steps/456/end With the following request body: <action> <force>true</force> <succeeded>true</succeeded> </action> Table 6.610. Parameters summary Name Type Direction Summary async Boolean In Indicates if the action should be performed asynchronously. force Boolean In Indicates if the step should be forcibly terminated. succeeded Boolean In Indicates if the step should be marked as successfully finished or as failed. 6.202.1.1. succeeded Indicates if the step should be marked as successfully finished or as failed. This parameter is optional, and the default value is true. 6.202.2. get GET Retrieves a step. You will receive a response in XML like this one: <step href="/ovirt-engine/api/jobs/123/steps/456" id="456"> <actions> <link href="/ovirt-engine/api/jobs/123/steps/456/end" rel="end"/> </actions> <description>Validating</description> <end_time>2016-12-12T23:07:26.627+02:00</end_time> <external>false</external> <number>0</number> <start_time>2016-12-12T23:07:26.605+02:00</start_time> <status>finished</status> <type>validating</type> <job href="/ovirt-engine/api/jobs/123" id="123"/> </step> Table 6.611. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed. step Step Out Retrieves the representation of the step. 6.202.2.1. follow Indicates which inner links should be followed. The objects referenced by these links will be fetched as part of the current request. See here for details. 6.203. Steps A service to manage steps. Table 6.612. Methods summary Name Summary add Add an external step to an existing job or to an existing step. list Retrieves the representation of the steps. 6.203.1. add POST Add an external step to an existing job or to an existing step.
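In HTTP terms the add is a POST to the steps sub-collection; a minimal sketch of the request line for the example that follows:

POST /ovirt-engine/api/jobs/123/steps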
For example, to add a step to the job with identifier 123 send the following request: With the following request body: <step> <description>Validating</description> <start_time>2016-12-12T23:07:26.605+02:00</start_time> <status>started</status> <type>validating</type> </step> The response should look like: <step href="/ovirt-engine/api/jobs/123/steps/456" id="456"> <actions> <link href="/ovirt-engine/api/jobs/123/steps/456/end" rel="end"/> </actions> <description>Validating</description> <link href="/ovirt-engine/api/jobs/123/steps/456/statistics" rel="statistics"/> <external>true</external> <number>2</number> <start_time>2016-12-13T01:06:15.380+02:00</start_time> <status>started</status> <type>validating</type> <job href="/ovirt-engine/api/jobs/123" id="123"/> </step> Table 6.613. Parameters summary Name Type Direction Summary step Step In/Out Step that will be added. 6.203.2. list GET Retrieves the representation of the steps. You will receive a response in XML like this one: <steps> <step href="/ovirt-engine/api/jobs/123/steps/456" id="456"> <actions> <link href="/ovirt-engine/api/jobs/123/steps/456/end" rel="end"/> </actions> <description>Validating</description> <link href="/ovirt-engine/api/jobs/123/steps/456/statistics" rel="statistics"/> <external>true</external> <number>2</number> <start_time>2016-12-13T01:06:15.380+02:00</start_time> <status>started</status> <type>validating</type> <job href="/ovirt-engine/api/jobs/123" id="123"/> </step> ... </steps> The order of the returned list of steps isn't guaranteed. Table 6.614. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed. max Integer In Sets the maximum number of steps to return. steps Step[ ] Out A representation of steps. 6.203.2.1. follow Indicates which inner links should be followed. The objects referenced by these links will be fetched as part of the current request. See here for details. 6.203.2.2. max Sets the maximum number of steps to return. If not specified all the steps are returned. 6.204. Storage Table 6.615. Methods summary Name Summary get 6.204.1. get GET Table 6.616. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed. report_status Boolean In Indicates if the status of the LUNs in the storage should be checked. storage HostStorage Out 6.204.1.1. follow Indicates which inner links should be followed. The objects referenced by these links will be fetched as part of the current request. See here for details. 6.204.1.2. report_status Indicates if the status of the LUNs in the storage should be checked. Checking the status of the LUN is a heavyweight operation and this data is not always needed by the user. This parameter gives the option to not perform the status check of the LUNs. The default is true for backward compatibility.
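A hedged sketch of the request, assuming the host storage path; setting report_status=false skips the status check:

GET /ovirt-engine/api/hosts/123/storage?report_status=false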
Here is an example with the LUN status: <host_storage id="360014051136c20574f743bdbd28177fd"> <logical_units> <logical_unit id="360014051136c20574f743bdbd28177fd"> <lun_mapping>0</lun_mapping> <paths>1</paths> <product_id>lun0</product_id> <serial>SLIO-ORG_lun0_1136c205-74f7-43bd-bd28-177fd5ce6993</serial> <size>10737418240</size> <status>used</status> <vendor_id>LIO-ORG</vendor_id> <volume_group_id>O9Du7I-RahN-ECe1-dZ1w-nh0b-64io-MNzIBZ</volume_group_id> </logical_unit> </logical_units> <type>iscsi</type> <host id="8bb5ade5-e988-4000-8b93-dbfc6717fe50"/> </host_storage> Here is an example without the LUN status: <host_storage id="360014051136c20574f743bdbd28177fd"> <logical_units> <logical_unit id="360014051136c20574f743bdbd28177fd"> <lun_mapping>0</lun_mapping> <paths>1</paths> <product_id>lun0</product_id> <serial>SLIO-ORG_lun0_1136c205-74f7-43bd-bd28-177fd5ce6993</serial> <size>10737418240</size> <vendor_id>LIO-ORG</vendor_id> <volume_group_id>O9Du7I-RahN-ECe1-dZ1w-nh0b-64io-MNzIBZ</volume_group_id> </logical_unit> </logical_units> <type>iscsi</type> <host id="8bb5ade5-e988-4000-8b93-dbfc6717fe50"/> </host_storage> 6.205. StorageDomain Table 6.617. Methods summary Name Summary get Retrieves the description of the storage domain. isattached Used for querying if the storage domain is already attached to a data center using the is_attached boolean field, which is part of the storage server. reduceluns This operation reduces logical units from the storage domain. refreshluns This operation refreshes the LUN size. remove Removes the storage domain. update Updates a storage domain. updateovfstore This operation forces the update of the OVF_STORE of this storage domain. 6.205.1. get GET Retrieves the description of the storage domain. Table 6.618. Parameters summary Name Type Direction Summary filter Boolean In Indicates if the results should be filtered according to the permissions of the user. follow String In Indicates which inner links should be followed. storage_domain StorageDomain Out The description of the storage domain. 6.205.1.1. follow Indicates which inner links should be followed. The objects referenced by these links will be fetched as part of the current request. See here for details. 6.205.2. isattached POST Used for querying if the storage domain is already attached to a data center using the is_attached boolean field, which is part of the storage server. IMPORTANT: Executing this API will cause the host to disconnect from the storage domain. Table 6.619. Parameters summary Name Type Direction Summary async Boolean In Indicates if the action should be performed asynchronously. host Host In Indicates the data center's host. is_attached Boolean Out Indicates whether the storage domain is attached to the data center. 6.205.3. reduceluns POST This operation reduces logical units from the storage domain. In order to do so, the data stored on the provided logical units will be moved to other logical units of the storage domain, and only then will they be reduced from the storage domain. For example, in order to reduce two logical units from a storage domain send a request like this: With a request body like this: <action> <logical_units> <logical_unit id="1IET_00010001"/> <logical_unit id="1IET_00010002"/> </logical_units> </action> Table 6.620. Parameters summary Name Type Direction Summary logical_units LogicalUnit[ ] In The logical units that need to be reduced from the storage domain. 6.205.4. refreshluns POST This operation refreshes the LUN size.
After increasing the size of the underlying LUN on the storage server, the user can refresh the LUN size. This action forces a rescan of the provided LUNs and updates the database with the new size, if required. For example, in order to refresh the size of two LUNs send a request like this: With a request body like this: <action> <logical_units> <logical_unit id="1IET_00010001"/> <logical_unit id="1IET_00010002"/> </logical_units> </action> Table 6.621. Parameters summary Name Type Direction Summary async Boolean In Indicates if the refresh should be performed asynchronously. logical_units LogicalUnit[ ] In The LUNs that need to be refreshed. 6.205.5. remove DELETE Removes the storage domain. Without any special parameters, the storage domain is detached from the system and removed from the database. The storage domain can then be imported to the same or to a different setup, with all the data on it. If the storage is not accessible the operation will fail. If the destroy parameter is true then the operation will always succeed, even if the storage is not accessible; the failure is just ignored and the storage domain is removed from the database anyway. If the format parameter is true then the actual storage is formatted, and the metadata is removed from the LUN or directory, so it can no longer be imported to the same or to a different setup. Table 6.622. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. destroy Boolean In Indicates if the operation should succeed, and the storage domain removed from the database, even if the storage is not accessible. format Boolean In Indicates if the actual storage should be formatted, removing all the metadata from the underlying LUN or directory, for example: DELETE /ovirt-engine/api/storageDomains/123?format=true This parameter is optional, and the default value is false. host String In Indicates which host should be used to remove the storage domain. 6.205.5.1. destroy Indicates if the operation should succeed, and the storage domain removed from the database, even if the storage is not accessible. This parameter is optional, and the default value is false. When the value of destroy is true the host parameter will be ignored. 6.205.5.2. host Indicates which host should be used to remove the storage domain. This parameter is mandatory, except if the destroy parameter is included and its value is true; in that case the host parameter will be ignored. The value should contain the name or the identifier of the host. For example, to use the host named myhost to remove the storage domain with identifier 123 send a request like this: DELETE /ovirt-engine/api/storageDomains/123?host=myhost 6.205.6. update PUT Updates a storage domain. Not all of the StorageDomain's attributes are updatable after creation. Those that can be updated are: name, description, comment, warning_low_space_indicator, critical_space_action_blocker and wipe_after_delete. (Note that changing the wipe_after_delete attribute will not change the wipe after delete property of disks that already exist). To update the name and wipe_after_delete attributes of a storage domain with an identifier 123, send a request as follows: With a request body as follows: <storage_domain> <name>data2</name> <wipe_after_delete>true</wipe_after_delete> </storage_domain> Table 6.623. Parameters summary Name Type Direction Summary async Boolean In Indicates if the update should be performed asynchronously. storage_domain StorageDomain In/Out The updated storage domain. 6.205.7.
updateovfstore POST This operation forces the update of the OVF_STORE of this storage domain. The OVF_STORE is a disk image that contains the metadata of virtual machines and disks that reside in the storage domain. This metadata is used in case the domain is imported or exported to or from a different data center or a different installation. By default the OVF_STORE is updated periodically (set by default to 60 minutes), but users might want to force an update after an important change, or when they believe the OVF_STORE is corrupt. When initiated by the user, the OVF_STORE update will be performed whether an update is needed or not. Table 6.624. Parameters summary Name Type Direction Summary async Boolean In Indicates if the OVF_STORE update should be performed asynchronously. 6.206. StorageDomainContentDisk Table 6.625. Methods summary Name Summary get 6.206.1. get GET Table 6.626. Parameters summary Name Type Direction Summary disk Disk Out filter Boolean In Indicates if the results should be filtered according to the permissions of the user. follow String In Indicates which inner links should be followed. 6.206.1.1. follow Indicates which inner links should be followed. The objects referenced by these links will be fetched as part of the current request. See here for details. 6.207. StorageDomainContentDisks Manages the set of disks available in a storage domain. Table 6.627. Methods summary Name Summary list Returns the list of disks available in the storage domain. 6.207.1. list GET Returns the list of disks available in the storage domain. The order of the returned list of disks is guaranteed only if the sortby clause is included in the search parameter. Table 6.628. Parameters summary Name Type Direction Summary case_sensitive Boolean In Indicates if the search performed using the search parameter should be performed taking case into account. disks Disk[ ] Out follow String In Indicates which inner links should be followed. max Integer In Sets the maximum number of disks to return. search String In A query string used to restrict the returned disks. 6.207.1.1. case_sensitive Indicates if the search performed using the search parameter should be performed taking case into account. The default value is true, which means that case is taken into account. If you want to search ignoring case set it to false. 6.207.1.2. follow Indicates which inner links should be followed. The objects referenced by these links will be fetched as part of the current request. See here for details. 6.207.1.3. max Sets the maximum number of disks to return. If not specified all the disks are returned. 6.208. StorageDomainDisk Manages a single disk available in a storage domain. Important Since version 4.2 of the engine this service is intended only to list disks available in the storage domain, and to register unregistered disks. All the other operations, like copying a disk, moving a disk, etc., have been deprecated and will be removed in the future. To perform those operations use the service that manages all the disks of the system or the service that manages a specific disk. Table 6.629. Methods summary Name Summary copy Copies a disk to the specified storage domain. export Exports a disk to an export storage domain. get Retrieves the description of the disk. move Moves a disk to another storage domain. reduce Reduces the size of the disk image. remove Removes a disk. sparsify Sparsifies the disk. update Updates the disk. 6.208.1. copy POST Copies a disk to the specified storage domain.
Important Since version 4.2 of the engine this operation is deprecated, and preserved only for backwards compatibility. It will be removed in the future. To copy a disk use the copy operation of the service that manages that disk. Table 6.630. Parameters summary Name Type Direction Summary disk Disk In Description of the resulting disk. storage_domain StorageDomain In The storage domain where the new disk will be created. 6.208.2. export POST Exports a disk to an export storage domain. Important Since version 4.2 of the engine this operation is deprecated, and preserved only for backwards compatibility. It will be removed in the future. To export a disk use the export operation of the service that manages that disk. Table 6.631. Parameters summary Name Type Direction Summary storage_domain StorageDomain In The export storage domain where the disk should be exported to. 6.208.3. get GET Retrieves the description of the disk. Table 6.632. Parameters summary Name Type Direction Summary disk Disk Out The description of the disk. follow String In Indicates which inner links should be followed. 6.208.3.1. follow Indicates which inner links should be followed. The objects referenced by these links will be fetched as part of the current request. See here for details. 6.208.4. move POST Moves a disk to another storage domain. Important Since version 4.2 of the engine this operation is deprecated, and preserved only for backwards compatibility. It will be removed in the future. To move a disk use the move operation of the service that manages that disk. Table 6.633. Parameters summary Name Type Direction Summary async Boolean In Indicates if the move should be performed asynchronously. filter Boolean In Indicates if the results should be filtered according to the permissions of the user. storage_domain StorageDomain In The storage domain where the disk will be moved to. 6.208.5. reduce POST Reduces the size of the disk image. Invokes reduce on the logical volume (i.e. this is only applicable for block storage domains). This is applicable for floating disks and disks attached to non-running virtual machines. There is no need to specify the size, as the optimal size is calculated automatically. Table 6.634. Parameters summary Name Type Direction Summary async Boolean In Indicates if the reduce should be performed asynchronously. 6.208.6. remove DELETE Removes a disk. Important Since version 4.2 of the engine this operation is deprecated, and preserved only for backwards compatibility. It will be removed in the future. To remove a disk use the remove operation of the service that manages that disk. 6.208.7. sparsify POST Sparsifies the disk. Important Since version 4.2 of the engine this operation is deprecated, and preserved only for backwards compatibility. It will be removed in the future. To sparsify a disk use the sparsify operation of the service that manages that disk. 6.208.8. update PUT Updates the disk. Important Since version 4.2 of the engine this operation is deprecated, and preserved only for backwards compatibility. It will be removed in the future. To update a disk use the update operation of the service that manages that disk. Table 6.635. Parameters summary Name Type Direction Summary disk Disk In/Out The update to apply to the disk. 6.209. StorageDomainDisks Manages the collection of disks available inside a specific storage domain. Table 6.636. Methods summary Name Summary add Adds or registers a disk. list Retrieves the list of disks that are available in the storage domain. 6.209.1.
add POST Adds or registers a disk. Important Since version 4.2 of the Red Hat Virtualization Manager this operation is deprecated, and preserved only for backwards compatibility. It will be removed in the future. To add a new disk use the add operation of the service that manages the disks of the system. To register an unregistered disk use the register operation of the service that manages that disk. Table 6.637. Parameters summary Name Type Direction Summary disk Disk In/Out The disk to add or register. unregistered Boolean In Indicates if a new disk should be added or if an existing unregistered disk should be registered. 6.209.1.1. unregistered Indicates if a new disk should be added or if an existing unregistered disk should be registered. If the value is true then the identifier of the disk to register needs to be provided. For example, to register the disk with ID 456 send a request like this: With a request body like this: <disk id="456"/> If the value is false then a new disk will be created in the storage domain. In that case the provisioned_size , format , and name attributes are mandatory. For example, to create a new copy on write disk of 1 GiB, send a request like this: With a request body like this: <disk> <name>mydisk</name> <format>cow</format> <provisioned_size>1073741824</provisioned_size> </disk> The default value is false . This parameter has been deprecated since version 4.2 of the Red Hat Virtualization Manager. 6.209.2. list GET Retrieves the list of disks that are available in the storage domain. The order of the returned list of disks is not guaranteed. Table 6.638. Parameters summary Name Type Direction Summary disks Disk[ ] Out The list of retrieved disks. follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of disks to return. unregistered Boolean In Indicates whether to retrieve a list of registered or unregistered disks in the storage domain. 6.209.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.209.2.2. max Sets the maximum number of disks to return. If not specified, all the disks are returned. 6.209.2.3. unregistered Indicates whether to retrieve a list of registered or unregistered disks in the storage domain. To get a list of unregistered disks in the storage domain the call should indicate the unregistered flag. For example, to get a list of unregistered disks the REST API call should look like this: The default value of the unregistered flag is false . The request only applies to storage domains that are attached. 6.210. StorageDomainServerConnection Table 6.639. Methods summary Name Summary get remove Detaches a storage connection from storage. 6.210.1. get GET Table 6.640. Parameters summary Name Type Direction Summary connection StorageConnection Out follow String In Indicates which inner links should be followed . 6.210.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.210.2. remove DELETE Detaches a storage connection from storage. Table 6.641. Parameters summary Name Type Direction Summary async Boolean In Indicates if the action should be performed asynchronously. 6.211. StorageDomainServerConnections Manages the set of connections to storage servers that exist in a storage domain. Table 6.642. 
Methods summary Name Summary add list Returns the list of connections to storage servers that exist in the storage domain. 6.211.1. add POST Table 6.643. Parameters summary Name Type Direction Summary connection StorageConnection In/Out 6.211.2. list GET Returns the list of connections to storage servers that exist in the storage domain. The order of the returned list of connections isn't guaranteed. Table 6.644. Parameters summary Name Type Direction Summary connections StorageConnection[ ] Out follow String In Indicates which inner links should be followed. max Integer In Sets the maximum number of connections to return. 6.211.2.1. follow Indicates which inner links should be followed. The objects referenced by these links will be fetched as part of the current request. See here for details. 6.211.2.2. max Sets the maximum number of connections to return. If not specified all the connections are returned. 6.212. StorageDomainTemplate Table 6.645. Methods summary Name Summary get import Action to import a template from an export storage domain. register Registering the Template means importing the Template from the data domain by inserting the configuration of the Template and disks into the database without the copy process. remove 6.212.1. get GET Table 6.646. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed. template Template Out 6.212.1.1. follow Indicates which inner links should be followed. The objects referenced by these links will be fetched as part of the current request. See here for details. 6.212.2. import POST Action to import a template from an export storage domain. For example, to import the template 456 from the storage domain 123 send the following request: With the following request body: <action> <storage_domain> <name>myexport</name> </storage_domain> <cluster> <name>mycluster</name> </cluster> </action> If you register an entity without specifying the cluster ID or name, the cluster name from the entity's OVF will be used (unless the register request also includes the cluster mapping). Table 6.647. Parameters summary Name Type Direction Summary async Boolean In Indicates if the import should be performed asynchronously. clone Boolean In Use the optional clone parameter to generate new UUIDs for the imported template and its entities. cluster Cluster In exclusive Boolean In storage_domain StorageDomain In template Template In vm Vm In 6.212.2.1. clone Use the optional clone parameter to generate new UUIDs for the imported template and its entities. You can import a template with the clone parameter set to false when importing a template from an export domain, with templates that were exported by a different Red Hat Virtualization environment. 6.212.3. register POST Registering the Template means importing the Template from the data domain by inserting the configuration of the Template and disks into the database without the copy process. Table 6.648. Parameters summary Name Type Direction Summary allow_partial_import Boolean In Indicates whether a template is allowed to be registered with only some of its disks. async Boolean In Indicates if the registration should be performed asynchronously. clone Boolean In cluster Cluster In exclusive Boolean In registration_configuration RegistrationConfiguration In This parameter describes how the template should be registered.
template Template In vnic_profile_mappings VnicProfileMapping[ ] In Deprecated attribute describing mapping rules for virtual NIC profiles that will be applied during the import/register process. 6.212.3.1. allow_partial_import Indicates whether a template is allowed to be registered with only some of its disks. If this flag is true, the system will not fail in the validation process if an image is not found, but instead it will allow the template to be registered without the missing disks. This is mainly used during registration of a template when some of the storage domains are not available. The default value is false. 6.212.3.2. registration_configuration This parameter describes how the template should be registered. This parameter is optional. If the parameter is not specified, the template will be registered with the same configuration that it had in the original environment where it was created. 6.212.3.3. vnic_profile_mappings Deprecated attribute describing mapping rules for virtual NIC profiles that will be applied during the import/register process. Warning Please note that this attribute has been deprecated since version 4.2.1 of the engine, and preserved only for backward compatibility. It will be removed in the future. To specify vnic_profile_mappings use the vnic_profile_mappings attribute inside the RegistrationConfiguration type. 6.212.4. remove DELETE Table 6.649. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.213. StorageDomainTemplates Manages the set of templates available in a storage domain. Table 6.650. Methods summary Name Summary list Returns the list of templates available in the storage domain. 6.213.1. list GET Returns the list of templates available in the storage domain. The order of the returned list of templates isn't guaranteed. Table 6.651. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed. max Integer In Sets the maximum number of templates to return. templates Template[ ] Out unregistered Boolean In Indicates whether to retrieve a list of registered or unregistered templates which contain disks on the storage domain. 6.213.1.1. follow Indicates which inner links should be followed. The objects referenced by these links will be fetched as part of the current request. See here for details. 6.213.1.2. max Sets the maximum number of templates to return. If not specified all the templates are returned. 6.213.1.3. unregistered Indicates whether to retrieve a list of registered or unregistered templates which contain disks on the storage domain. To get a list of unregistered templates the call should indicate the unregistered flag. For example, to get a list of unregistered templates the REST API call should look like this: The default value of the unregistered flag is false. The request only applies to storage domains that are attached. 6.214. StorageDomainVm Table 6.652. Methods summary Name Summary get import Imports a virtual machine from an export storage domain. register remove Deletes a virtual machine from an export storage domain. 6.214.1. get GET Table 6.653. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed. vm Vm Out 6.214.1.1. follow Indicates which inner links should be followed. The objects referenced by these links will be fetched as part of the current request. See here for details. 6.214.2.
6.214.2. import POST Imports a virtual machine from an export storage domain. For example, send a request like this: POST /ovirt-engine/api/storagedomains/123/vms/456/import With a request body like this: <action> <storage_domain> <name>mydata</name> </storage_domain> <cluster> <name>mycluster</name> </cluster> </action> To import a virtual machine as a new entity add the clone parameter: <action> <storage_domain> <name>mydata</name> </storage_domain> <cluster> <name>mycluster</name> </cluster> <clone>true</clone> <vm> <name>myvm</name> </vm> </action> Include an optional disks parameter to choose which disks to import. For example, to import only the disks that have the identifiers 123 and 456 send the following request body: <action> <cluster> <name>mycluster</name> </cluster> <vm> <name>myvm</name> </vm> <disks> <disk id="123"/> <disk id="456"/> </disks> </action> If you register an entity without specifying the cluster ID or name, the cluster name from the entity's OVF will be used (unless the register request also includes the cluster mapping). Table 6.654. Parameters summary Name Type Direction Summary async Boolean In Indicates if the import should be performed asynchronously. clone Boolean In Indicates if the identifiers of the imported virtual machine should be regenerated. cluster Cluster In collapse_snapshots Boolean In Indicates if the snapshots of the virtual machine that is imported should be collapsed, so that the result will be a virtual machine without snapshots. exclusive Boolean In storage_domain StorageDomain In vm Vm In 6.214.2.1. clone Indicates if the identifiers of the imported virtual machine should be regenerated. By default when a virtual machine is imported the identifiers are preserved. This means that the same virtual machine can't be imported multiple times, as the identifiers need to be unique. To allow importing the same machine multiple times set this parameter to true , as the default is false . 6.214.2.2. collapse_snapshots Indicates if the snapshots of the virtual machine that is imported should be collapsed, so that the result will be a virtual machine without snapshots. This parameter is optional, and if it isn't explicitly specified the default value is false . 6.214.3. register POST Table 6.655. Parameters summary Name Type Direction Summary allow_partial_import Boolean In Indicates whether a virtual machine is allowed to be registered with only some of its disks. async Boolean In Indicates if the registration should be performed asynchronously. clone Boolean In cluster Cluster In reassign_bad_macs Boolean In Indicates if the problematic MAC addresses should be re-assigned during the import process by the engine. registration_configuration RegistrationConfiguration In This parameter describes how the virtual machine should be registered. vm Vm In vnic_profile_mappings VnicProfileMapping[ ] In Deprecated attribute describing mapping rules for virtual NIC profiles that will be applied during the import/register process. 6.214.3.1. allow_partial_import Indicates whether a virtual machine is allowed to be registered with only some of its disks. If this flag is true , the engine will not fail in the validation process if an image is not found, but instead it will allow the virtual machine to be registered without the missing disks. This is mainly used during registration of a virtual machine when some of the storage domains are not available. The default value is false . 6.214.3.2. reassign_bad_macs Indicates if the problematic MAC addresses should be re-assigned during the import process by the engine.
A MAC address is considered problematic if one of the following is true: It conflicts with a MAC address that is already allocated to a virtual machine in the target environment. It's out of the range of the target MAC address pool. 6.214.3.3. registration_configuration This parameter describes how the virtual machine should be registered. This parameter is optional. If the parameter is not specified, the virtual machine will be registered with the same configuration that it had in the original environment where it was created. 6.214.3.4. vnic_profile_mappings Deprecated attribute describing mapping rules for virtual NIC profiles that will be applied during the import/register process. Warning Please note that this attribute has been deprecated since version 4.2.1 of the engine, and preserved only for backward compatibility. It will be removed in the future. To specify vnic_profile_mappings use the vnic_profile_mappings attribute inside the RegistrationConfiguration type. 6.214.4. remove DELETE Deletes a virtual machine from an export storage domain. For example, to delete the virtual machine 456 from the storage domain 123 , send a request like this: DELETE /ovirt-engine/api/storagedomains/123/vms/456 Table 6.656. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.215. StorageDomainVmDiskAttachment Returns the details of the disks attached to a virtual machine in the export domain. Table 6.657. Methods summary Name Summary get Returns the details of the attachment with all its properties and a link to the disk. 6.215.1. get GET Returns the details of the attachment with all its properties and a link to the disk. Table 6.658. Parameters summary Name Type Direction Summary attachment DiskAttachment Out The disk attachment. follow String In Indicates which inner links should be followed . 6.215.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.216. StorageDomainVmDiskAttachments Returns the details of a disk attached to a virtual machine in the export domain. Table 6.659. Methods summary Name Summary list List the disks that are attached to the virtual machine. 6.216.1. list GET List the disks that are attached to the virtual machine. The order of the returned list of disk attachments isn't guaranteed. Table 6.660. Parameters summary Name Type Direction Summary attachments DiskAttachment[ ] Out follow String In Indicates which inner links should be followed . 6.216.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.217. StorageDomainVms Lists the virtual machines of an export storage domain. For example, to retrieve the virtual machines that are available in the storage domain with identifier 123 send the following request: GET /ovirt-engine/api/storagedomains/123/vms This will return the following response body: <vms> <vm id="456" href="/api/storagedomains/123/vms/456"> <name>vm1</name> ... <storage_domain id="123" href="/api/storagedomains/123"/> <actions> <link rel="import" href="/api/storagedomains/123/vms/456/import"/> </actions> </vm> </vms> Virtual machines and templates in these collections have a similar representation to their counterparts in the top-level Vm and Template collections, except they also contain a StorageDomain reference and an import action. Table 6.661.
Methods summary Name Summary list Returns the list of virtual machines of the export storage domain. 6.217.1. list GET Returns the list of virtual machines of the export storage domain. The order of the returned list of virtual machines isn't guaranteed. Table 6.662. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of virtual machines to return. unregistered Boolean In Indicates whether to retrieve a list of registered or unregistered virtual machines which contain disks on the storage domain. vm Vm[ ] Out 6.217.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.217.1.2. max Sets the maximum number of virtual machines to return. If not specified all the virtual machines are returned. 6.217.1.3. unregistered Indicates whether to retrieve a list of registered or unregistered virtual machines which contain disks on the storage domain. To get a list of unregistered virtual machines the call should indicate the unregistered flag. For example, to get a list of unregistered virtual machines the REST API call should look like this: GET /ovirt-engine/api/storagedomains/123/vms?unregistered=true The default value of the unregistered flag is false . The request only applies to storage domains that are attached. 6.218. StorageDomains Manages the set of storage domains in the system. Table 6.663. Methods summary Name Summary add Adds a new storage domain. list Returns the list of storage domains in the system. 6.218.1. add POST Adds a new storage domain. Creation of a new StorageDomain requires the name , type , host , and storage attributes. Identify the host attribute with the id or name attributes. In Red Hat Virtualization 3.6 and later you can enable the wipe after delete option by default on the storage domain. To configure this, specify wipe_after_delete in the POST request. This option can be edited after the domain is created, but doing so will not change the wipe after delete property of disks that already exist. To add a new storage domain with specified name , type , storage.type , storage.address , and storage.path , and using the host named myhost , send a request like this: POST /ovirt-engine/api/storagedomains With a request body like this: <storage_domain> <name>mydata</name> <type>data</type> <storage> <type>nfs</type> <address>mynfs.example.com</address> <path>/exports/mydata</path> </storage> <host> <name>myhost</name> </host> </storage_domain> To create a new NFS ISO storage domain send a request like this: <storage_domain> <name>myisos</name> <type>iso</type> <storage> <type>nfs</type> <address>mynfs.example.com</address> <path>/export/myisos</path> </storage> <host> <name>myhost</name> </host> </storage_domain> To create a new iSCSI storage domain send a request like this: <storage_domain> <name>myiscsi</name> <type>data</type> <storage> <type>iscsi</type> <logical_units> <logical_unit id="3600144f09dbd050000004eedbd340001"/> <logical_unit id="3600144f09dbd050000004eedbd340002"/> </logical_units> </storage> <host> <name>myhost</name> </host> </storage_domain> Table 6.664. Parameters summary Name Type Direction Summary storage_domain StorageDomain In/Out The storage domain to add. 6.218.2. list GET Returns the list of storage domains in the system. The order of the returned list of storage domains is guaranteed only if the sortby clause is included in the search parameter.
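For example, to list the storage domains sorted by name, a request like the following can be used (a sketch; the query is illustrative): GET /ovirt-engine/api/storagedomains?search=sortby%20name%20asc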
Table 6.665. Parameters summary Name Type Direction Summary case_sensitive Boolean In Indicates if the search should be performed taking case into account. filter Boolean In Indicates if the results should be filtered according to the permissions of the user. follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of storage domains to return. search String In A query string used to restrict the returned storage domains. storage_domains StorageDomain[ ] Out A list of the storage domains in the system. 6.218.2.1. case_sensitive Indicates if the search should be performed taking case into account. The default value is true , which means that case is taken into account. If you want to search ignoring case, set it to false . 6.218.2.2. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.218.2.3. max Sets the maximum number of storage domains to return. If not specified, all the storage domains are returned. 6.219. StorageServerConnection Table 6.666. Methods summary Name Summary get remove Removes a storage connection. update Updates the storage connection. 6.219.1. get GET Table 6.667. Parameters summary Name Type Direction Summary connection StorageConnection Out follow String In Indicates which inner links should be followed . 6.219.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.219.2. remove DELETE Removes a storage connection. A storage connection can only be deleted if neither storage domain nor LUN disks reference it. The host name or id is optional; providing it disconnects (unmounts) the connection from that host. Table 6.668. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. host String In The name or identifier of the host from which the connection would be unmounted (disconnected). 6.219.2.1. host The name or identifier of the host from which the connection would be unmounted (disconnected). If not provided, no host will be disconnected. For example, to use the host with identifier 456 to delete the storage connection with identifier 123 send a request like this: DELETE /ovirt-engine/api/storageconnections/123?host=456 6.219.3. update PUT Updates the storage connection. For example, to change the address of an NFS storage server, send a request like this: PUT /ovirt-engine/api/storageconnections/123 With a request body like this: <storage_connection> <address>mynewnfs.example.com</address> </storage_connection> To change the connection of an iSCSI storage server, send a request like this: PUT /ovirt-engine/api/storageconnections/123 With a request body like this: <storage_connection> <port>3260</port> <target>iqn.2017-01.com.myhost:444</target> </storage_connection> Table 6.669. Parameters summary Name Type Direction Summary async Boolean In Indicates if the update should be performed asynchronously. connection StorageConnection In/Out force Boolean In Indicates if the operation should succeed regardless of the relevant storage domain's status. 6.219.3.1. force Indicates if the operation should succeed regardless of the relevant storage domain's status (i.e. updating is also applicable when the storage domain's status is not maintenance). This parameter is optional, and the default value is false .
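For example, to update the connection even though a storage domain that uses it is not in maintenance, the force flag can be combined with the same request body (a sketch, assuming boolean parameters are passed in the URL, as in the other examples in this chapter): PUT /ovirt-engine/api/storageconnections/123?force=true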
6.220. StorageServerConnectionExtension Table 6.670. Methods summary Name Summary get remove update Update a storage server connection extension for the given host. 6.220.1. get GET Table 6.671. Parameters summary Name Type Direction Summary extension StorageConnectionExtension Out follow String In Indicates which inner links should be followed . 6.220.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.220.2. remove DELETE Table 6.672. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.220.3. update PUT Update a storage server connection extension for the given host. To update the storage connection 456 of host 123 send a request like this: PUT /ovirt-engine/api/hosts/123/storageconnectionextensions/456 With a request body like this: <storage_connection_extension> <target>iqn.2016-01.com.example:mytarget</target> <username>myuser</username> <password>mypassword</password> </storage_connection_extension> Table 6.673. Parameters summary Name Type Direction Summary async Boolean In Indicates if the update should be performed asynchronously. extension StorageConnectionExtension In/Out 6.221. StorageServerConnectionExtensions Table 6.674. Methods summary Name Summary add Creates a new storage server connection extension for the given host. list Returns the list of storage connection extensions. 6.221.1. add POST Creates a new storage server connection extension for the given host. The extension lets the user define credentials for an iSCSI target for a specific host. For example, to use myuser and mypassword as the credentials when connecting to the iSCSI target from host 123 send a request like this: POST /ovirt-engine/api/hosts/123/storageconnectionextensions With a request body like this: <storage_connection_extension> <target>iqn.2016-01.com.example:mytarget</target> <username>myuser</username> <password>mypassword</password> </storage_connection_extension> Table 6.675. Parameters summary Name Type Direction Summary extension StorageConnectionExtension In/Out 6.221.2. list GET Returns the list of storage connection extensions. The order of the returned list of storage connections isn't guaranteed. Table 6.676. Parameters summary Name Type Direction Summary extensions StorageConnectionExtension[ ] Out follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of extensions to return. 6.221.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.221.2.2. max Sets the maximum number of extensions to return. If not specified all the extensions are returned. 6.222. StorageServerConnections Table 6.677. Methods summary Name Summary add Creates a new storage connection. list Returns the list of storage connections. 6.222.1. add POST Creates a new storage connection. For example, to create a new storage connection for the NFS server mynfs.example.com and NFS share /export/mydata send a request like this: POST /ovirt-engine/api/storageconnections With a request body like this: <storage_connection> <type>nfs</type> <address>mynfs.example.com</address> <path>/export/mydata</path> <host> <name>myhost</name> </host> </storage_connection> Table 6.678. Parameters summary Name Type Direction Summary connection StorageConnection In/Out 6.222.2. list GET Returns the list of storage connections. The order of the returned list of connections isn't guaranteed.
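For example, to retrieve all the storage connections defined in the system (a sketch): GET /ovirt-engine/api/storageconnections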
Table 6.679. Parameters summary Name Type Direction Summary connections StorageConnection[ ] Out follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of connections to return. 6.222.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.222.2.2. max Sets the maximum number of connections to return. If not specified all the connections are returned. 6.223. System Table 6.680. Methods summary Name Summary get Returns basic information describing the API, like the product name, the version number and a summary of the number of relevant objects. reloadconfigurations 6.223.1. get GET Returns basic information describing the API, like the product name, the version number and a summary of the number of relevant objects. For example, send a request like this: GET /ovirt-engine/api We get the following response: <api> <link rel="capabilities" href="/api/capabilities"/> <link rel="clusters" href="/api/clusters"/> <link rel="clusters/search" href="/api/clusters?search={query}"/> <link rel="datacenters" href="/api/datacenters"/> <link rel="datacenters/search" href="/api/datacenters?search={query}"/> <link rel="events" href="/api/events"/> <link rel="events/search" href="/api/events?search={query}"/> <link rel="hosts" href="/api/hosts"/> <link rel="hosts/search" href="/api/hosts?search={query}"/> <link rel="networks" href="/api/networks"/> <link rel="roles" href="/api/roles"/> <link rel="storagedomains" href="/api/storagedomains"/> <link rel="storagedomains/search" href="/api/storagedomains?search={query}"/> <link rel="tags" href="/api/tags"/> <link rel="templates" href="/api/templates"/> <link rel="templates/search" href="/api/templates?search={query}"/> <link rel="users" href="/api/users"/> <link rel="groups" href="/api/groups"/> <link rel="domains" href="/api/domains"/> <link rel="vmpools" href="/api/vmpools"/> <link rel="vmpools/search" href="/api/vmpools?search={query}"/> <link rel="vms" href="/api/vms"/> <link rel="vms/search" href="/api/vms?search={query}"/> <product_info> <name>oVirt Engine</name> <vendor>ovirt.org</vendor> <version> <build>4</build> <full_version>4.0.4</full_version> <major>4</major> <minor>0</minor> <revision>0</revision> </version> </product_info> <special_objects> <blank_template href="/ovirt-engine/api/templates/00000000-0000-0000-0000-000000000000" id="00000000-0000-0000-0000-000000000000"/> <root_tag href="/ovirt-engine/api/tags/00000000-0000-0000-0000-000000000000" id="00000000-0000-0000-0000-000000000000"/> </special_objects> <summary> <hosts> <active>0</active> <total>0</total> </hosts> <storage_domains> <active>0</active> <total>1</total> </storage_domains> <users> <active>1</active> <total>1</total> </users> <vms> <active>0</active> <total>0</total> </vms> </summary> <time>2016-09-14T12:00:48.132+02:00</time> </api> The entry point provides a user with links to the collections in a virtualization environment. The rel attribute of each collection link provides a reference point for each link. The entry point also contains other data such as product_info , special_objects and summary . Table 6.681. Parameters summary Name Type Direction Summary api Api Out follow String In Indicates which inner links should be followed . 6.223.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.223.2. reloadconfigurations POST
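This action asks the engine to reload its configuration. A minimal request should look like this (a sketch, assuming the action is exposed at the API entry point; the empty action body follows the convention used by other actions in this chapter): POST /ovirt-engine/api/reloadconfigurations With a request body like this: <action/>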
Table 6.682. Parameters summary Name Type Direction Summary async Boolean In Indicates if the reload should be performed asynchronously. 6.224. SystemOption A service that provides the values of a specific configuration option of the system. Table 6.683. Methods summary Name Summary get Get the values of a specific configuration option. 6.224.1. get GET Get the values of a specific configuration option. For example, to retrieve the values of the configuration option MigrationPolicies send a request like this: GET /ovirt-engine/api/options/MigrationPolicies The response to that request will be the following: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> <system_option href="/ovirt-engine/api/options/MigrationPolicies" id="MigrationPolicies"> <name>MigrationPolicies</name> <values> <system_option_value> <value>[{"id":{"uuid":"80554327-0569-496b-bdeb-fcbbf52b827b"},...}]</value> <version>4.2</version> </system_option_value> <system_option_value> <value>[{"id":{"uuid":"80554327-0569-496b-bdeb-fcbbf52b827b"},...}]</value> <version>4.3</version> </system_option_value> <system_option_value> <value>[{"id":{"uuid":"80554327-0569-496b-bdeb-fcbbf52b827b"},...}]</value> <version>4.4</version> </system_option_value> <system_option_value> <value>[{"id":{"uuid":"80554327-0569-496b-bdeb-fcbbf52b827b"},...}]</value> <version>4.5</version> </system_option_value> <system_option_value> <value>[{"id":{"uuid":"80554327-0569-496b-bdeb-fcbbf52b827b"},...}]</value> <version>4.6</version> </system_option_value> <system_option_value> <value>[{"id":{"uuid":"80554327-0569-496b-bdeb-fcbbf52b827b"},...}]</value> <version>4.7</version> </system_option_value> </values> </system_option> Note The appropriate permissions are required to query configuration options. Some options can be queried only by users with administrator permissions. Important There is NO backward compatibility and no guarantee about the names or values of the options. Options may be removed and their meaning can be changed at any point. We strongly discourage the use of this service for applications other than the ones that are released simultaneously with the engine. Usage by other applications is not supported. Therefore there will be no documentation listing accessible configuration options. Table 6.684. Parameters summary Name Type Direction Summary option SystemOption Out The returned configuration option of the system. version String In Optional version parameter that specifies that only a particular version of the configuration option should be returned. 6.224.1.1. version Optional version parameter that specifies that only a particular version of the configuration option should be returned. If this parameter isn't used then all the versions will be returned. For example, to get the value of the MigrationPolicies option but only for version 4.2 send a request like this: GET /ovirt-engine/api/options/MigrationPolicies?version=4.2 The response to that request will be like this: <system_option href="/ovirt-engine/api/options/MigrationPolicies" id="MigrationPolicies"> <name>MigrationPolicies</name> <values> <system_option_value> <value>[{"id":{"uuid":"80554327-0569-496b-bdeb-fcbbf52b827b"},...}]</value> <version>4.2</version> </system_option_value> </values> </system_option> 6.225. SystemOptions Service that provides values of configuration options of the system. 6.226. SystemPermissions This service doesn't add any new methods, it is just a placeholder for the annotation that specifies the path of the resource that manages the permissions assigned to the system object. Table 6.685. Methods summary Name Summary add Assign a new permission to a user or group for a specific entity.
list List all the permissions of the specific entity. 6.226.1. add POST Assign a new permission to a user or group for a specific entity. For example, to assign the UserVmManager role on the virtual machine with id 123 to the user with id 456 send a request like this: POST /ovirt-engine/api/vms/123/permissions With a request body like this: <permission> <role> <name>UserVmManager</name> </role> <user id="456"/> </permission> To assign the SuperUser role on the system to the user with id 456 send a request like this: POST /ovirt-engine/api/permissions With a request body like this: <permission> <role> <name>SuperUser</name> </role> <user id="456"/> </permission> If you want to assign a permission to a group instead of a user, replace the user element with a group element containing the proper group id. For example, to assign the UserRole role on the cluster with id 123 to the group with id 789 send a request like this: POST /ovirt-engine/api/clusters/123/permissions With a request body like this: <permission> <role> <name>UserRole</name> </role> <group id="789"/> </permission> Table 6.686. Parameters summary Name Type Direction Summary permission Permission In/Out The permission. 6.226.2. list GET List all the permissions of the specific entity. For example, to list all the permissions of the cluster with id 123 send a request like this: GET /ovirt-engine/api/clusters/123/permissions The result will be a list like this: <permissions> <permission id="456"> <cluster id="123"/> <role id="789"/> <user id="451"/> </permission> <permission id="654"> <cluster id="123"/> <role id="789"/> <group id="127"/> </permission> </permissions> The order of the returned permissions isn't guaranteed. Table 6.687. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . permissions Permission[ ] Out The list of permissions. 6.226.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.227. Tag A service to manage a specific tag in the system. Table 6.688. Methods summary Name Summary get Gets the information about the tag. remove Removes the tag from the system. update Updates the tag entity. 6.227.1. get GET Gets the information about the tag. For example, to retrieve the information about the tag with the id 123 send a request like this: GET /ovirt-engine/api/tags/123 The response will look like this: <tag href="/ovirt-engine/api/tags/123" id="123"> <name>root</name> <description>root</description> </tag> Table 6.689. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . tag Tag Out The tag. 6.227.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.227.2. remove DELETE Removes the tag from the system. For example, to remove the tag with id 123 send a request like this: DELETE /ovirt-engine/api/tags/123 Table 6.690. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.227.3. update PUT Updates the tag entity. For example, to set the tag with id 456 as the parent of the tag with id 123 , send a request like this: PUT /ovirt-engine/api/tags/123 With a request body like this: <tag> <parent id="456"/> </tag> You may also specify a tag name instead of id. For example, to set the tag named mytag as the parent of the tag with id 123 , send a request body like this: <tag> <parent> <name>mytag</name> </parent> </tag> Table 6.691. Parameters summary Name Type Direction Summary async Boolean In Indicates if the update should be performed asynchronously. tag Tag In/Out The updated tag. 6.228.
Tags Represents a service to manage the collection of tags in the system. Table 6.692. Methods summary Name Summary add Add a new tag to the system. list List the tags in the system. 6.228.1. add POST Add a new tag to the system. For example, to add a new tag named mytag to the system send a request like this: POST /ovirt-engine/api/tags With a request body like this: <tag> <name>mytag</name> </tag> Note The root tag is a special pseudo-tag assumed as the default parent tag if no parent tag is specified. The root tag cannot be deleted nor assigned a parent tag. To create a new tag with a specific parent tag send a request body like this: <tag> <name>mytag</name> <parent> <name>myparenttag</name> </parent> </tag> Table 6.693. Parameters summary Name Type Direction Summary tag Tag In/Out The added tag. 6.228.2. list GET List the tags in the system. For example, to list the full hierarchy of the tags in the system send a request like this: GET /ovirt-engine/api/tags The response will be like this: <tags> <tag href="/ovirt-engine/api/tags/222" id="222"> <name>root2</name> <description>root2</description> <parent href="/ovirt-engine/api/tags/111" id="111"/> </tag> <tag href="/ovirt-engine/api/tags/333" id="333"> <name>root3</name> <description>root3</description> <parent href="/ovirt-engine/api/tags/222" id="222"/> </tag> <tag href="/ovirt-engine/api/tags/111" id="111"> <name>root</name> <description>root</description> </tag> </tags> In the XML output you can see the following hierarchy of the tags: root (id 111 ) → root2 (id 222 ) → root3 (id 333 ). The order of the returned list of tags isn't guaranteed. Table 6.694. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of tags to return. tags Tag[ ] Out List of all tags in the system. 6.228.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.228.2.2. max Sets the maximum number of tags to return. If not specified all the tags are returned. 6.229. Template Manages the virtual machine template and template versions. Table 6.695. Methods summary Name Summary export Exports a template to the data center export domain. get Returns the information about this template or template version. remove Removes a virtual machine template. update Updates the template. 6.229.1. export POST Exports a template to the data center export domain. For example, send the following request: POST /ovirt-engine/api/templates/123/export With a request body like this: <action> <storage_domain id="456"/> <exclusive>true</exclusive> </action> Since version 4.2 of the engine it is also possible to export a template as a virtual appliance (OVA). For example, to export template 123 as an OVA file named myvm.ova that is placed in the directory /home/ovirt/ on host myhost : POST /ovirt-engine/api/templates/123/export With a request body like this: <action> <host> <name>myhost</name> </host> <directory>/home/ovirt</directory> <filename>myvm.ova</filename> </action> Table 6.696. Parameters summary Name Type Direction Summary exclusive Boolean In Indicates if the existing templates with the same name should be overwritten. storage_domain StorageDomain In Specifies the destination export storage domain. 6.229.1.1. exclusive Indicates if the existing templates with the same name should be overwritten. The export action reports a failed action if a template of the same name exists in the destination domain. Set this parameter to true to change this behavior and overwrite any existing template. 6.229.2. get GET Returns the information about this template or template version.
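For example, to retrieve the template with identifier 123 (the identifier is illustrative): GET /ovirt-engine/api/templates/123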
Table 6.697. Parameters summary Name Type Direction Summary filter Boolean In Indicates if the results should be filtered according to the permissions of the user. follow String In Indicates which inner links should be followed . template Template Out The information about the template or template version. 6.229.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.229.3. remove DELETE Removes a virtual machine template. Table 6.698. Parameters summary Name Type Direction Summary async Boolean In Indicates if the removal should be performed asynchronously. 6.229.4. update PUT Updates the template. The name , description , type , memory , cpu , topology , os , high_availability , display , stateless , usb , and timezone elements can be updated after a template has been created. For example, to update a template so that it has 1 GiB of memory send a request like this: PUT /ovirt-engine/api/templates/123 With the following request body: <template> <memory>1073741824</memory> </template> The version_name attribute is the only element that can be updated within the version attribute used for template versions: <template> <version> <version_name>mytemplate_2</version_name> </version> </template> Table 6.699. Parameters summary Name Type Direction Summary async Boolean In Indicates if the update should be performed asynchronously. template Template In/Out 6.230. TemplateCdrom A service managing a CD-ROM device on templates. Table 6.700. Methods summary Name Summary get Returns the information about this CD-ROM device. 6.230.1. get GET Returns the information about this CD-ROM device. For example, to get information about the CD-ROM device of template 123 send a request like: GET /ovirt-engine/api/templates/123/cdroms/00000000-0000-0000-0000-000000000000 Table 6.701. Parameters summary Name Type Direction Summary cdrom Cdrom Out The information about the CD-ROM device. follow String In Indicates which inner links should be followed . 6.230.1.1. cdrom The information about the CD-ROM device. The information consists of a cdrom attribute containing a reference to the CD-ROM device, the template, and, optionally, the inserted disk. If there is a disk inserted then the file attribute will contain a reference to the ISO image: <cdrom href="..." id="00000000-0000-0000-0000-000000000000"> <template href="/ovirt-engine/api/templates/123" id="123"/> <file id="mycd.iso"/> </cdrom> If there is no disk inserted then the file attribute won't be reported: <cdrom href="..." id="00000000-0000-0000-0000-000000000000"> <template href="/ovirt-engine/api/templates/123" id="123"/> </cdrom> 6.230.1.2. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.231. TemplateCdroms Lists the CD-ROM devices of a template. Table 6.702. Methods summary Name Summary list Returns the list of CD-ROM devices of the template. 6.231.1. list GET Returns the list of CD-ROM devices of the template. The order of the returned list of CD-ROM devices isn't guaranteed. Table 6.703. Parameters summary Name Type Direction Summary cdroms Cdrom[ ] Out The list of CD-ROM devices of the template. follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of CD-ROMs to return.
If not specified all the CD-ROMs are returned. 6.232. TemplateDisk Table 6.704. Methods summary Name Summary copy Copy the specified disk attached to the template to a specific storage domain. export get remove 6.232.1. copy POST Copy the specified disk attached to the template to a specific storage domain. Table 6.705. Parameters summary Name Type Direction Summary async Boolean In Indicates if the copy should be performed asynchronously. filter Boolean In Indicates if the results should be filtered according to the permissions of the user. storage_domain StorageDomain In 6.232.2. export POST Table 6.706. Parameters summary Name Type Direction Summary async Boolean In Indicates if the export should be performed asynchronously. filter Boolean In Indicates if the results should be filtered according to the permissions of the user. storage_domain StorageDomain In 6.232.3. get GET Table 6.707. Parameters summary Name Type Direction Summary disk Disk Out follow String In Indicates which inner links should be followed . 6.232.3.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.232.4. remove DELETE Table 6.708. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.233. TemplateDiskAttachment This service manages the attachment of a disk to a template. Table 6.709. Methods summary Name Summary get Returns the details of the attachment. remove Removes the disk from the template. 6.233.1. get GET Returns the details of the attachment. Table 6.710. Parameters summary Name Type Direction Summary attachment DiskAttachment Out follow String In Indicates which inner links should be followed . 6.233.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.233.2. remove DELETE Removes the disk from the template. The disk will only be removed if there are other existing copies of the disk on other storage domains. A storage domain has to be specified to determine which of the copies should be removed (template disks can have copies on multiple storage domains). Table 6.711. Parameters summary Name Type Direction Summary force Boolean In storage_domain String In Specifies the identifier of the storage domain the image to be removed resides on. 6.234. TemplateDiskAttachments This service manages the set of disks attached to a template. Each attached disk is represented by a DiskAttachment . Table 6.712. Methods summary Name Summary list List the disks that are attached to the template. 6.234.1. list GET List the disks that are attached to the template. The order of the returned list of attachments isn't guaranteed. Table 6.713. Parameters summary Name Type Direction Summary attachments DiskAttachment[ ] Out follow String In Indicates which inner links should be followed . 6.234.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.235. TemplateDisks Table 6.714. Methods summary Name Summary list Returns the list of disks of the template. 6.235.1. list GET Returns the list of disks of the template. The order of the returned list of disks isn't guaranteed.
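For example, to list the disks of the template with identifier 123 (the identifier is illustrative): GET /ovirt-engine/api/templates/123/disks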
Table 6.715. Parameters summary Name Type Direction Summary disks Disk[ ] Out follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of disks to return. 6.235.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.235.1.2. max Sets the maximum number of disks to return. If not specified all the disks are returned. 6.236. TemplateGraphicsConsole Table 6.716. Methods summary Name Summary get Gets the graphics console configuration of the template. remove Remove the graphics console from the template. 6.236.1. get GET Gets the graphics console configuration of the template. Table 6.717. Parameters summary Name Type Direction Summary console GraphicsConsole Out The information about the graphics console of the template. follow String In Indicates which inner links should be followed . 6.236.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.236.2. remove DELETE Remove the graphics console from the template. Table 6.718. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.237. TemplateGraphicsConsoles Table 6.719. Methods summary Name Summary add Add a new graphics console to the template. list Lists all the configured graphics consoles of the template. 6.237.1. add POST Add a new graphics console to the template. Table 6.720. Parameters summary Name Type Direction Summary console GraphicsConsole In/Out 6.237.2. list GET Lists all the configured graphics consoles of the template. The order of the returned list of graphics consoles isn't guaranteed. Table 6.721. Parameters summary Name Type Direction Summary consoles GraphicsConsole[ ] Out The list of graphics consoles of the template. follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of consoles to return. 6.237.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.237.2.2. max Sets the maximum number of consoles to return. If not specified all the consoles are returned. 6.238. TemplateMediatedDevice Table 6.722. Methods summary Name Summary get Gets the mediated device configuration of the template. remove Remove the mediated device from the template. update Updates the information about the mediated device. 6.238.1. get GET Gets the mediated device configuration of the template. Table 6.723. Parameters summary Name Type Direction Summary device VmMediatedDevice Out The information about the mediated device of the template. follow String In Indicates which inner links should be followed . 6.238.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.238.2. remove DELETE Remove the mediated device from the template. Table 6.724. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.238.3. update PUT Updates the information about the mediated device. You can update the information using the specParams element.
For example, to update a mediated device, send a request like this: PUT /ovirt-engine/api/templates/123/mediateddevices/00000000-0000-0000-0000-000000000000 The response body will be like this: <vm_mediated_device href="/ovirt-engine/api/templates/123/mediateddevices/00000000-0000-0000-0000-000000000000" id="00000000-0000-0000-0000-000000000000"> <template href="/ovirt-engine/api/templates/123" id="123"/> <spec_params> <property> <name>mdevType</name> <value>nvidia-11</value> </property> </spec_params> </vm_mediated_device> Table 6.725. Parameters summary Name Type Direction Summary async Boolean In Indicates if the update should be performed asynchronously. devices VmMediatedDevice In/Out The information about the mediated device. 6.238.3.1. devices The information about the mediated device. The request data must contain specParams properties. The response data contains complete information about the updated mediated device. 6.239. TemplateMediatedDevices A service that manages the mediated devices of a template. Table 6.726. Methods summary Name Summary add Add a new mediated device to the template. list Lists all the configured mediated devices of the template. 6.239.1. add POST Add a new mediated device to the template. Table 6.727. Parameters summary Name Type Direction Summary device VmMediatedDevice In/Out 6.239.2. list GET Lists all the configured mediated devices of the template. The order of the returned list of mediated devices isn't guaranteed. Table 6.728. Parameters summary Name Type Direction Summary devices VmMediatedDevice[ ] Out The list of mediated devices of the template. follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of mediated devices to return. 6.239.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.239.2.2. max Sets the maximum number of mediated devices to return. If not specified all the mediated devices are returned. 6.240. TemplateNic Table 6.729. Methods summary Name Summary get remove update Update the specified network interface card attached to the template. 6.240.1. get GET Table 6.730. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . nic Nic Out 6.240.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.240.2. remove DELETE Table 6.731. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.240.3. update PUT Update the specified network interface card attached to the template. Table 6.732. Parameters summary Name Type Direction Summary async Boolean In Indicates if the update should be performed asynchronously. nic Nic In/Out 6.241. TemplateNics Table 6.733. Methods summary Name Summary add Add a new network interface card to the template. list Returns the list of NICs of the template. 6.241.1. add POST Add a new network interface card to the template. Table 6.734. Parameters summary Name Type Direction Summary nic Nic In/Out 6.241.2. list GET Returns the list of NICs of the template. The order of the returned list of NICs isn't guaranteed. Table 6.735. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of NICs to return. nics Nic[ ] Out 6.241.2.1. follow Indicates which inner links should be followed .
The objects referenced by these links will be fetched as part of the current request. See here for details. 6.241.2.2. max Sets the maximum number of NICs to return. If not specified all the NICs are returned. 6.242. TemplateWatchdog Table 6.736. Methods summary Name Summary get remove update Update the watchdog for the template identified by the given id. 6.242.1. get GET Table 6.737. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . watchdog Watchdog Out 6.242.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.242.2. remove DELETE Table 6.738. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.242.3. update PUT Update the watchdog for the template identified by the given id. Table 6.739. Parameters summary Name Type Direction Summary async Boolean In Indicates if the update should be performed asynchronously. watchdog Watchdog In/Out 6.243. TemplateWatchdogs Table 6.740. Methods summary Name Summary add Add a watchdog to the template identified by the given id. list Returns the list of watchdogs. 6.243.1. add POST Add a watchdog to the template identified by the given id. Table 6.741. Parameters summary Name Type Direction Summary watchdog Watchdog In/Out 6.243.2. list GET Returns the list of watchdogs. The order of the returned list of watchdogs isn't guaranteed. Table 6.742. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of watchdogs to return. watchdogs Watchdog[ ] Out 6.243.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.243.2.2. max Sets the maximum number of watchdogs to return. If not specified all the watchdogs are returned. 6.244. Templates This service manages the virtual machine templates available in the system. Table 6.743. Methods summary Name Summary add Creates a new template. list Returns the list of virtual machine templates. 6.244.1. add POST Creates a new template. This requires the name and vm elements. To identify the virtual machine use the vm.id or vm.name attributes. For example, to create a template from a virtual machine with the identifier 123 send a request like this: POST /ovirt-engine/api/templates With a request body like this: <template> <name>mytemplate</name> <vm id="123"/> </template> Since version 4.3, in order to create a virtual machine template from a snapshot, send a request body like this: <template> <name>mytemplate</name> <vm id="123"> <snapshots> <snapshot id="456"/> </snapshots> </vm> </template> The disks of the template can be customized, making some of their characteristics different from the disks of the original virtual machine. To do so use the vm.disk_attachments attribute, specifying the identifier of the disk of the original virtual machine and the characteristics that you want to change.
For example, if the original virtual machine has a disk with the identifier 456 , and, for that disk, you want to change the name to mydisk , the format to cow , and make it sparse , send a request body like this: <template> <name>mytemplate</name> <vm id="123"> <disk_attachments> <disk_attachment> <disk id="456"> <name>mydisk</name> <format>cow</format> <sparse>true</sparse> </disk> </disk_attachment> </disk_attachments> </vm> </template> The template can be created as a sub-version of an existing template. This requires the name and vm attributes for the new template, and the base_template and version_name attributes for the new template version. The base_template and version_name attributes must be specified within a version section enclosed in the template section. Identify the virtual machine with the id or name attributes. <template> <name>mytemplate</name> <vm id="123"/> <version> <base_template id="456"/> <version_name>mytemplate_001</version_name> </version> </template> The destination storage domain of the template can be customized, in one of two ways: Globally, at the request level. The request must list the desired disk attachments to be created on the storage domain. If the disk attachments are not listed, the global storage domain parameter will be ignored. <template> <name>mytemplate</name> <storage_domain id="123"/> <vm id="456"> <disk_attachments> <disk_attachment> <disk id="789"> <format>cow</format> <sparse>true</sparse> </disk> </disk_attachment> </disk_attachments> </vm> </template> Per each disk attachment. Specify the desired storage domain for each disk attachment. Specifying the global storage definition will override the storage domain per disk attachment specification. <template> <name>mytemplate</name> <vm id="123"> <disk_attachments> <disk_attachment> <disk id="456"> <format>cow</format> <sparse>true</sparse> <storage_domains> <storage_domain id="789"/> </storage_domains> </disk> </disk_attachment> </disk_attachments> </vm> </template> Table 6.744. Parameters summary Name Type Direction Summary clone_permissions Boolean In Specifies if the permissions of the virtual machine should be copied to the template. seal Boolean In Seals the template. template Template In/Out The information about the template or template version. 6.244.1.1. clone_permissions Specifies if the permissions of the virtual machine should be copied to the template. If this optional parameter is provided, and its value is true , then the permissions of the virtual machine (only the direct ones, not the inherited ones) will be copied to the created template. For example, to create a template from the myvm virtual machine copying its permissions, send a request like this: POST /ovirt-engine/api/templates?clone_permissions=true With a request body like this: <template> <name>mytemplate</name> <vm> <name>myvm</name> </vm> </template> 6.244.1.2. seal Seals the template. If this optional parameter is provided and its value is true , then the template is sealed after creation. Sealing erases all host-specific configuration from the filesystem: SSH keys, UDEV rules, MAC addresses, system ID, hostname, and so on, thus making it easier to use the template to create multiple virtual machines without manual intervention. Currently, sealing is supported only for Linux operating systems. 6.244.2. list GET Returns the list of virtual machine templates. For example: GET /ovirt-engine/api/templates This will return the list of virtual machine templates. The order of the returned list of templates is not guaranteed.
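For example, to search for the template named mytemplate ignoring case, a request like the following can be used (a sketch; the query is illustrative): GET /ovirt-engine/api/templates?search=name%3Dmytemplate&case_sensitive=false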
Table 6.745. Parameters summary Name Type Direction Summary case_sensitive Boolean In Indicates if the search performed using the search parameter should be performed taking case into account. filter Boolean In Indicates if the results should be filtered according to the permissions of the user. follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of templates to return. search String In A query string used to restrict the returned templates. templates Template[ ] Out The list of virtual machine templates. 6.244.2.1. case_sensitive Indicates if the search performed using the search parameter should be performed taking case into account. The default value is true , which means that case is taken into account. If you want to search ignoring case set it to false . 6.244.2.2. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.244.2.3. max Sets the maximum number of templates to return. If not specified, all the templates are returned. 6.245. UnmanagedNetwork Table 6.746. Methods summary Name Summary get remove 6.245.1. get GET Table 6.747. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . network UnmanagedNetwork Out 6.245.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.245.2. remove DELETE Table 6.748. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.246. UnmanagedNetworks Table 6.749. Methods summary Name Summary list Returns the list of unmanaged networks of the host. 6.246.1. list GET Returns the list of unmanaged networks of the host. The order of the returned list of networks isn't guaranteed. Table 6.750. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of networks to return. networks UnmanagedNetwork[ ] Out 6.246.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.246.1.2. max Sets the maximum number of networks to return. If not specified all the networks are returned. 6.247. User A service to manage a user in the system. Use this service to either get user details or remove users. In order to add new users, please use users . Table 6.751. Methods summary Name Summary get Gets the system user information. remove Removes the system user. update Updates information about the user. 6.247.1. get GET Gets the system user information. Usage: GET /ovirt-engine/api/users/1234 Will return the user information: <user href="/ovirt-engine/api/users/1234" id="1234"> <name>admin</name> <link href="/ovirt-engine/api/users/1234/sshpublickeys" rel="sshpublickeys"/> <link href="/ovirt-engine/api/users/1234/roles" rel="roles"/> <link href="/ovirt-engine/api/users/1234/permissions" rel="permissions"/> <link href="/ovirt-engine/api/users/1234/tags" rel="tags"/> <department></department> <domain_entry_id>23456</domain_entry_id> <email>[email protected]</email> <last_name>Lastname</last_name> <namespace>*</namespace> <principal>user1</principal> <user_name>user1@domain-authz</user_name> <domain href="/ovirt-engine/api/domains/45678" id="45678"> <name>domain-authz</name> </domain> </user>
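To fetch the user's roles in the same request, the follow parameter can be used (a sketch, based on the roles link shown above): GET /ovirt-engine/api/users/1234?follow=roles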
Table 6.752. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . user User Out The system user. 6.247.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.247.2. remove DELETE Removes the system user. Usage: DELETE /ovirt-engine/api/users/1234 Table 6.753. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.247.3. update PUT Updates information about the user. Only the user_options field can be updated. For example, to update user options: PUT /ovirt-engine/api/users/123 With a request body like this: <user> <user_options> <property> <name>test</name> <value>["any","JSON"]</value> </property> </user_options> </user> Important Since version 4.4.5 of the engine this operation is deprecated, and preserved only for backwards compatibility. It will be removed in the future. Please use the options endpoint instead. Table 6.754. Parameters summary Name Type Direction Summary user User In/Out 6.248. UserOption Table 6.755. Methods summary Name Summary get Returns a user profile property of type JSON. remove Deletes an existing property of type JSON. 6.248.1. get GET Returns a user profile property of type JSON. Example request (for user with identifier 123 and option with identifier 456 ): GET /ovirt-engine/api/users/123/options/456 The result will be the following XML document: <user_option href="/ovirt-engine/api/users/123/options/456" id="456"> <name>SomeName</name> <content>["any", "JSON"]</content> <user href="/ovirt-engine/api/users/123" id="123"/> </user_option> Table 6.756. Parameters summary Name Type Direction Summary option UserOption Out 6.248.2. remove DELETE Deletes an existing property of type JSON. Example request (for user with identifier 123 and option with identifier 456 ): DELETE /ovirt-engine/api/users/123/options/456 6.249. UserOptions Table 6.757. Methods summary Name Summary add Adds a new user profile property of type JSON. list Returns a list of user profile properties of type JSON. 6.249.1. add POST Adds a new user profile property of type JSON. Example request (for user with identifier 123 ): POST /ovirt-engine/api/users/123/options Payload: <user_option> <name>SomeName</name> <content>["any", "JSON"]</content> </user_option> Table 6.758. Parameters summary Name Type Direction Summary option UserOption In/Out 6.249.2. list GET Returns a list of user profile properties of type JSON. Example request (for user with identifier 123 ): GET /ovirt-engine/api/users/123/options The result will be the following XML document: <user_options> <user_option href="/ovirt-engine/api/users/123/options/456" id="456"> <name>SomeName</name> <content>["any", "JSON"]</content> <user href="/ovirt-engine/api/users/123" id="123"/> </user_option> </user_options> Table 6.759. Parameters summary Name Type Direction Summary options UserOption[ ] Out 6.250. Users A service to manage the users in the system. Table 6.760. Methods summary Name Summary add Add user from a directory service. list List all the users in the system. 6.250.1. add POST Add user from a directory service. For example, to add the myuser user from the myextension-authz authorization provider send a request like this: POST /ovirt-engine/api/users With a request body like this: <user> <user_name>myuser@myextension-authz</user_name> <domain> <name>myextension-authz</name> </domain> </user> In case you are working with Active Directory you have to pass the user principal name (UPN) as username , followed by the authorization provider name. Due to bug 1147900 , you also need to provide the principal parameter, set to the UPN of the user.
For example, to add the user with UPN [email protected] from the myextension-authz authorization provider send a request body like this: <user> <principal>[email protected]</principal> <user_name>[email protected]@myextension-authz</user_name> <domain> <name>myextension-authz</name> </domain> </user> Table 6.761. Parameters summary Name Type Direction Summary user User In/Out 6.250.2. list GET List all the users in the system. Usage: Will return the list of users: <users> <user href="/ovirt-engine/api/users/1234" id="1234"> <name>admin</name> <link href="/ovirt-engine/api/users/1234/sshpublickeys" rel="sshpublickeys"/> <link href="/ovirt-engine/api/users/1234/roles" rel="roles"/> <link href="/ovirt-engine/api/users/1234/permissions" rel="permissions"/> <link href="/ovirt-engine/api/users/1234/tags" rel="tags"/> <domain_entry_id>23456</domain_entry_id> <namespace>*</namespace> <principal>user1</principal> <user_name>user1@domain-authz</user_name> <domain href="/ovirt-engine/api/domains/45678" id="45678"> <name>domain-authz</name> </domain> </user> </users> The order of the returned list of users isn't guaranteed. Table 6.762. Parameters summary Name Type Direction Summary case_sensitive Boolean In Indicates if the search performed using the search parameter should be performed taking case into account. follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of users to return. search String In A query string used to restrict the returned users. users User[ ] Out The list of users. 6.250.2.1. case_sensitive Indicates if the search performed using the search parameter should be performed taking case into account. The default value is true , which means that case is taken into account. If you want to search ignoring case set it to false . 6.250.2.2. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.250.2.3. max Sets the maximum number of users to return. If not specified all the users are returned. 6.251. VirtualFunctionAllowedNetwork Table 6.763. Methods summary Name Summary get remove 6.251.1. get GET Table 6.764. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . network Network Out 6.251.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.251.2. remove DELETE Table 6.765. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.252. VirtualFunctionAllowedNetworks Table 6.766. Methods summary Name Summary add list Returns the list of networks. 6.252.1. add POST Table 6.767. Parameters summary Name Type Direction Summary network Network In/Out 6.252.2. list GET Returns the list of networks. The order of the returned list of networks isn't guaranteed. Table 6.768. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of networks to return. networks Network[ ] Out 6.252.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.252.2.2. max Sets the maximum number of networks to return. If not specified all the networks are returned. 6.253. Vm Table 6.769. 
Methods summary Name Summary autopincpuandnumanodes Apply an automatic CPU and NUMA configuration on the VM. cancelmigration This operation stops any migration of a virtual machine to another physical host. clone commitsnapshot Permanently restores the virtual machine to the state of the previewed snapshot. detach Detaches a virtual machine from a pool. export Exports the virtual machine. freezefilesystems Freezes virtual machine file systems. get Retrieves the description of the virtual machine. logon Initiates the automatic user logon to access a virtual machine from an external console. maintenance Sets the global maintenance mode on the hosted engine virtual machine. migrate Migrates a virtual machine to another physical host. previewsnapshot Temporarily restores the virtual machine to the state of a snapshot. reboot Sends a reboot request to a virtual machine. remove Removes the virtual machine, including the virtual disks attached to it. reordermacaddresses reset Sends a reset request to a virtual machine. screenshot Captures a screenshot of the current state of the VM. shutdown This operation sends a shutdown request to a virtual machine. start Starts the virtual machine. stop This operation forces a virtual machine to power-off. suspend This operation saves the virtual machine state to disk and stops it. thawfilesystems Thaws virtual machine file systems. ticket Generates a time-sensitive authentication token for accessing a virtual machine's display. undosnapshot Restores the virtual machine to the state it had before previewing the snapshot. update Update the virtual machine in the system for the given virtual machine id. 6.253.1. autopincpuandnumanodes POST Apply an automatic CPU and NUMA configuration on the VM. Important Since version 4.5 of the engine this operation is deprecated, and preserved only for backwards compatibility. It will be removed in the future. Instead, please use PUT followed by the update operation. An example of a request: POST /ovirt-engine/api/vms/123/autopincpuandnumanodes With a request body like this: <action> <optimize_cpu_settings>true</optimize_cpu_settings> </action> Table 6.770. Parameters summary Name Type Direction Summary async Boolean In Indicates if the action should be performed asynchronously. optimize_cpu_settings Boolean In Specifies how the auto CPU and NUMA configuration is applied. 6.253.1.1. optimize_cpu_settings Specifies how the auto CPU and NUMA configuration is applied. If set to true, the CPU topology is adjusted to fit the hardware of the host the VM is pinned to. Otherwise, the VM CPU topology is used. 6.253.2. cancelmigration POST This operation stops any migration of a virtual machine to another physical host. The cancel migration action does not take any action specific parameters; therefore, the request body should contain an empty action: <action/> Table 6.771. Parameters summary Name Type Direction Summary async Boolean In Indicates if the migration should be cancelled asynchronously. 6.253.3. clone POST Table 6.772. Parameters summary Name Type Direction Summary async Boolean In Indicates if the clone should be performed asynchronously. discard_snapshots Boolean In Use the discard_snapshots parameter when the virtual machine should be cloned with its snapshots collapsed. storage_domain StorageDomain In The storage domain to which the virtual machine's disks will be copied. vm Vm In 6.253.3.1. discard_snapshots Use the discard_snapshots parameter when the virtual machine should be cloned with its snapshots collapsed. Default is true.
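The clone operation has no inline example in this reference; the following is a minimal sketch using the oVirt Python SDK (ovirtsdk4), where the engine URL, the credentials, the VM id '123', and the clone name 'myclone' are all illustrative assumptions:

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Hypothetical engine URL and credentials.
connection = sdk.Connection(
    url='https://myengine.example.com/ovirt-engine/api',
    username='admin@internal', password='mypassword',
    ca_file='/etc/pki/ovirt-engine/ca.pem')

# Locate the service that manages the virtual machine with id '123'.
vm_service = connection.system_service().vms_service().vm_service('123')

# Ask the engine to clone the virtual machine under a new name.
vm_service.clone(vm=types.Vm(name='myclone'))

connection.close()

The call returns as soon as the engine accepts the action; the actual cloning runs asynchronously on the engine side.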
6.253.4. commitsnapshot POST Permanently restores the virtual machine to the state of the previewed snapshot. See the preview_snapshot operation for details. Table 6.773. Parameters summary Name Type Direction Summary async Boolean In Indicates if the snapshots should be committed asynchronously. 6.253.5. detach POST Detaches a virtual machine from a pool. The detach action does not take any action specific parameters; therefore, the request body should contain an empty action: <action/> Table 6.774. Parameters summary Name Type Direction Summary async Boolean In Indicates if the detach action should be performed asynchronously. 6.253.6. export POST Exports the virtual machine. A virtual machine can be exported to an export domain. For example, to export virtual machine 123 to the export domain myexport: POST /ovirt-engine/api/vms/123/export With a request body like this: <action> <storage_domain> <name>myexport</name> </storage_domain> <exclusive>true</exclusive> <discard_snapshots>true</discard_snapshots> </action> Since version 4.2 of the engine it is also possible to export a virtual machine as a virtual appliance (OVA). For example, to export virtual machine 123 as an OVA file named myvm.ova that is placed in the directory /home/ovirt/ on host myhost: POST /ovirt-engine/api/vms/123/export With a request body like this: <action> <host> <name>myhost</name> </host> <directory>/home/ovirt</directory> <filename>myvm.ova</filename> </action> Note Confirm that the export operation has completed before attempting any actions on the export domain. Table 6.775. Parameters summary Name Type Direction Summary async Boolean In Indicates if the export should be performed asynchronously. discard_snapshots Boolean In Use the discard_snapshots parameter when the virtual machine should be exported with all of its snapshots collapsed. exclusive Boolean In Use the exclusive parameter when the virtual machine should be exported even if another copy of it already exists in the export domain (override). storage_domain StorageDomain In The (export) storage domain to export the virtual machine to. 6.253.7. freezefilesystems POST Freezes virtual machine file systems. This operation freezes a virtual machine's file systems using the QEMU guest agent when taking a live snapshot of a running virtual machine. Normally, this is done automatically by the manager, but this must be executed manually with the API for virtual machines using OpenStack Volume (Cinder) disks. Example: POST /ovirt-engine/api/vms/123/freezefilesystems With a request body: <action/> Table 6.776. Parameters summary Name Type Direction Summary async Boolean In Indicates if the freeze should be performed asynchronously. 6.253.8. get GET Retrieves the description of the virtual machine. Table 6.777. Parameters summary Name Type Direction Summary all_content Boolean In Indicates if all of the attributes of the virtual machine should be included in the response. filter Boolean In Indicates if the results should be filtered according to the permissions of the user. follow String In Indicates which inner links should be followed. next_run Boolean In Indicates if the returned result describes the virtual machine as it is currently running or if it describes the virtual machine with the modifications that have already been performed but that will only come into effect when the virtual machine is restarted. ovf_as_ova Boolean In Indicates if the results should expose the OVF as it appears in OVA files of that VM. vm Vm Out Description of the virtual machine. 6.253.8.1. all_content Indicates if all of the attributes of the virtual machine should be included in the response.
By default the following attributes are excluded: console initialization.configuration.data - The OVF document describing the virtual machine. rng_source soundcard virtio_scsi For example, to retrieve the complete representation of the virtual machine '123': GET /ovirt-engine/api/vms/123?all_content=true Note These attributes are not included by default as they reduce performance. These attributes are seldom used and require additional queries to the database. Only use this parameter when required as it will reduce performance. 6.253.8.2. follow Indicates which inner links should be followed. The objects referenced by these links will be fetched as part of the current request. See here for details. 6.253.8.3. next_run Indicates if the returned result describes the virtual machine as it is currently running or if it describes the virtual machine with the modifications that have already been performed but that will only come into effect when the virtual machine is restarted. By default the value is false. If the parameter is included in the request, but without a value, it is assumed that the value is true. The following request: GET /ovirt-engine/api/vms/123;next_run Is equivalent to using the value true: GET /ovirt-engine/api/vms/123;next_run=true 6.253.8.4. ovf_as_ova Indicates if the results should expose the OVF as it appears in OVA files of that VM. The OVF document describing the virtual machine. This parameter will work only when all_content=True is set. The OVF will be presented in initialization.configuration.data. For example: GET /ovirt-engine/api/vms/123?all_content=true&ovf_as_ova=true 6.253.9. logon POST Initiates the automatic user logon to access a virtual machine from an external console. This action requires the ovirt-guest-agent-gdm-plugin and the ovirt-guest-agent-pam-module packages to be installed and the ovirt-guest-agent service to be running on the virtual machine. Users require the appropriate user permissions for the virtual machine in order to access the virtual machine from an external console. For example: POST /ovirt-engine/api/vms/123/logon Request body: <action/> Table 6.778. Parameters summary Name Type Direction Summary async Boolean In Indicates if the logon should be performed asynchronously. 6.253.10. maintenance POST Sets the global maintenance mode on the hosted engine virtual machine. This action has no effect on other virtual machines. Example: POST /ovirt-engine/api/vms/123/maintenance <action> <maintenance_enabled>true</maintenance_enabled> </action> Table 6.779. Parameters summary Name Type Direction Summary async Boolean In Indicates if the global maintenance action should be performed asynchronously. maintenance_enabled Boolean In Indicates if global maintenance should be enabled or disabled. 6.253.11. migrate POST Migrates a virtual machine to another physical host. Example: POST /ovirt-engine/api/vms/123/migrate To specify a specific host to migrate the virtual machine to: <action> <host id="2ab5e1da-b726-4274-bbf7-0a42b16a0fc3"/> </action>
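As a rough equivalent, the same migration can be requested through the oVirt Python SDK (ovirtsdk4); the connection details and the target host name are assumptions:

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Hypothetical engine URL and credentials.
connection = sdk.Connection(
    url='https://myengine.example.com/ovirt-engine/api',
    username='admin@internal', password='mypassword',
    ca_file='/etc/pki/ovirt-engine/ca.pem')

# Locate the virtual machine with id '123' and ask the engine to
# migrate it to the host named 'myhost'.
vm_service = connection.system_service().vms_service().vm_service('123')
vm_service.migrate(host=types.Host(name='myhost'))

connection.close()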
Table 6.780. Parameters summary Name Type Direction Summary async Boolean In Indicates if the migration should be performed asynchronously. cluster Cluster In Specifies the cluster the virtual machine should migrate to. force Boolean In Specifies that the virtual machine should migrate even if the virtual machine is defined as non-migratable. host Host In Specifies a specific host that the virtual machine should migrate to. migrate_vms_in_affinity_closure Boolean In Also migrate all other virtual machines in positive enforcing affinity groups with this virtual machine that are running on the same host. 6.253.11.1. cluster Specifies the cluster the virtual machine should migrate to. This is an optional parameter. By default, the virtual machine is migrated to another host within the same cluster. Warning Live migration to another cluster is not supported. Strongly consider the target cluster's hardware architecture and network architecture before attempting a migration. 6.253.11.2. force Specifies that the virtual machine should migrate even if the virtual machine is defined as non-migratable. This is an optional parameter. By default, it is set to false. 6.253.11.3. host Specifies a specific host that the virtual machine should migrate to. This is an optional parameter. By default, the Red Hat Virtualization Manager automatically selects a default host for migration within the same cluster. If an API user requires a specific host, the user can specify the host with either an id or name parameter. 6.253.11.4. migrate_vms_in_affinity_closure Also migrate all other virtual machines in positive enforcing affinity groups with this virtual machine that are running on the same host. The default value is false. 6.253.12. previewsnapshot POST Temporarily restores the virtual machine to the state of a snapshot. The snapshot is indicated with the snapshot.id parameter. It is restored temporarily, so that the content can be inspected. Once that inspection is finished, the state of the virtual machine can be made permanent, using the commit_snapshot method, or discarded using the undo_snapshot method. Table 6.781. Parameters summary Name Type Direction Summary async Boolean In Indicates if the preview should be performed asynchronously. disks Disk[ ] In Specify the disks included in the snapshot's preview. lease StorageDomainLease In Specify the lease storage domain ID to use in the preview of the snapshot. restore_memory Boolean In snapshot Snapshot In vm Vm In 6.253.12.1. disks Specify the disks included in the snapshot's preview. For each disk parameter, it is also required to specify its image_id. For example, to preview a snapshot with identifier 456 which includes a disk with identifier 111 and its image_id as 222, send a request like this: POST /ovirt-engine/api/vms/123/previewsnapshot Request body: <action> <disks> <disk id="111"> <image_id>222</image_id> </disk> </disks> <snapshot id="456"/> </action> 6.253.12.2. lease Specify the lease storage domain ID to use in the preview of the snapshot. If the lease parameter is not passed, then the previewed snapshot lease storage domain will be used. If the lease parameter is passed with an empty storage domain parameter, then no lease will be used for the snapshot preview. If the lease parameter is passed with a storage domain parameter, then the storage domain ID can only be one of the lease domain IDs that belong to one of the virtual machine snapshots. This is an optional parameter, set by default to null. 6.253.13. reboot POST Sends a reboot request to a virtual machine. For example: POST /ovirt-engine/api/vms/123/reboot The reboot action does not take any action specific parameters; therefore, the request body should contain an empty action: <action/> To reboot the VM even if a backup is running for it, the action should include the 'force' element. For example, to force reboot virtual machine 123: <action> <force>true</force> </action> Table 6.782. Parameters summary Name Type Direction Summary async Boolean In Indicates if the reboot should be performed asynchronously. force Boolean In Indicates if the VM should be forcibly rebooted even if a backup is running for it. 6.253.14. remove DELETE Removes the virtual machine, including the virtual disks attached to it. For example, to remove the virtual machine with identifier 123: DELETE /ovirt-engine/api/vms/123 Table 6.783.
Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. detach_only Boolean In Indicates if the attached virtual disks should be detached first and preserved instead of being removed. force Boolean In Indicates if the virtual machine should be forcibly removed. 6.253.14.1. force Indicates if the virtual machine should be forcibly removed. Locked virtual machines and virtual machines with locked disk images cannot be removed without this flag set to true. 6.253.15. reordermacaddresses POST Table 6.784. Parameters summary Name Type Direction Summary async Boolean In Indicates if the action should be performed asynchronously. 6.253.16. reset POST Sends a reset request to a virtual machine. For example: POST /ovirt-engine/api/vms/123/reset The reset action does not take any action specific parameters; therefore, the request body should contain an empty action: <action/> Table 6.785. Parameters summary Name Type Direction Summary async Boolean In Indicates if the reset should be performed asynchronously. 6.253.17. screenshot POST Captures a screenshot of the current state of the VM. For example: POST /ovirt-engine/api/vms/123/screenshot The screenshot action does not take any action specific parameters; therefore, the request body should contain an empty action: <action/> 6.253.18. shutdown POST This operation sends a shutdown request to a virtual machine. For example: POST /ovirt-engine/api/vms/123/shutdown The shutdown action does not take any action specific parameters; therefore, the request body should contain an empty action: <action/> To shut down the VM even if a backup is running for it, the action should include the 'force' element. For example, to force shutdown virtual machine 123: <action> <force>true</force> </action> Table 6.786. Parameters summary Name Type Direction Summary async Boolean In Indicates if the shutdown should be performed asynchronously. force Boolean In Indicates if the VM should be forcibly shut down even if a backup is running for it. reason String In The reason the virtual machine was stopped. 6.253.18.1. reason The reason the virtual machine was stopped. Optionally set by the user when shutting down the virtual machine. 6.253.19. start POST Starts the virtual machine. If the virtual environment is complete and the virtual machine contains all necessary components to function, it can be started. This example starts the virtual machine: POST /ovirt-engine/api/vms/123/start With a request body: <action/> Table 6.787. Parameters summary Name Type Direction Summary async Boolean In Indicates if the start action should be performed asynchronously. authorized_key AuthorizedKey In filter Boolean In Indicates if the results should be filtered according to the permissions of the user. pause Boolean In If set to true, start the virtual machine in paused mode. use_cloud_init Boolean In If set to true, the initialization type is set to cloud-init. use_ignition Boolean In If set to true, the initialization type is set to Ignition. use_initialization Boolean In If set to true, the initialization type is set by the VM's OS. use_sysprep Boolean In If set to true, the initialization type is set to Sysprep. vm Vm In The definition of the virtual machine for this specific run. volatile Boolean In Indicates that this run configuration will be discarded even in the case of guest-initiated reboot. 6.253.19.1. pause If set to true, start the virtual machine in paused mode. The default is false. 6.253.19.2. use_cloud_init If set to true, the initialization type is set to cloud-init. The default value is false. See the cloud-init documentation for details.
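For instance, a minimal sketch of a run-once start with cloud-init via the oVirt Python SDK (ovirtsdk4); the connection details, the VM id, and the host name are assumptions:

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Hypothetical engine URL and credentials.
connection = sdk.Connection(
    url='https://myengine.example.com/ovirt-engine/api',
    username='admin@internal', password='mypassword',
    ca_file='/etc/pki/ovirt-engine/ca.pem')

vm_service = connection.system_service().vms_service().vm_service('123')

# Start the VM once with a cloud-init provided host name; this run
# configuration is reverted after the VM is powered off.
vm_service.start(
    use_cloud_init=True,
    vm=types.Vm(
        initialization=types.Initialization(host_name='myvm.example.com')))

connection.close()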
6.253.19.3. use_ignition If set to true, the initialization type is set to Ignition. The default value is false. See the Ignition documentation for details. 6.253.19.4. use_initialization If set to true, the initialization type is set by the VM's OS. Windows will be set to Sysprep, Linux to cloud-init, and Red Hat CoreOS to Ignition. If any of the initialization types are explicitly set (useCloudInit, useSysprep or useIgnition), they will be prioritized and this flag will be ignored. The default value is false. 6.253.19.5. use_sysprep If set to true, the initialization type is set to Sysprep. The default value is false. See Sysprep for details. 6.253.19.6. vm The definition of the virtual machine for this specific run. For example: <action> <vm> <os> <boot> <devices> <device>cdrom</device> </devices> </boot> </os> </vm> </action> This will set the boot device to the CDROM only for this specific start. After the virtual machine is powered off, this definition will be reverted. 6.253.19.7. volatile Indicates that this run configuration will be discarded even in the case of guest-initiated reboot. The default value is false. 6.253.20. stop POST This operation forces a virtual machine to power-off. For example: POST /ovirt-engine/api/vms/123/stop The stop action does not take any action specific parameters; therefore, the request body should contain an empty action: <action/> To stop the VM even if a backup is running for it, the action should include the 'force' element. For example, to force stop virtual machine 123: <action> <force>true</force> </action> Table 6.788. Parameters summary Name Type Direction Summary async Boolean In Indicates if the stop action should be performed asynchronously. force Boolean In Indicates if the VM should be forcibly stopped even if a backup is running for it. reason String In The reason the virtual machine was stopped. 6.253.20.1. reason The reason the virtual machine was stopped. Optionally set by the user when shutting down the virtual machine. 6.253.21. suspend POST This operation saves the virtual machine state to disk and stops it. Start a suspended virtual machine and restore the virtual machine state with the start action. For example: POST /ovirt-engine/api/vms/123/suspend The suspend action does not take any action specific parameters; therefore, the request body should contain an empty action: <action/> Table 6.789. Parameters summary Name Type Direction Summary async Boolean In Indicates if the suspend action should be performed asynchronously. 6.253.22. thawfilesystems POST Thaws virtual machine file systems. This operation thaws a virtual machine's file systems using the QEMU guest agent when taking a live snapshot of a running virtual machine. Normally, this is done automatically by the manager, but this must be executed manually with the API for virtual machines using OpenStack Volume (Cinder) disks. Example: POST /ovirt-engine/api/vms/123/thawfilesystems With a request body: <action/> Table 6.790. Parameters summary Name Type Direction Summary async Boolean In Indicates if the thaw file systems action should be performed asynchronously. 6.253.23. ticket POST Generates a time-sensitive authentication token for accessing a virtual machine's display. For example: POST /ovirt-engine/api/vms/123/ticket The client-provided action optionally includes a desired ticket value and/or an expiry time in seconds. The response specifies the actual ticket value and expiry used. <action> <ticket> <value>abcd12345</value> <expiry>120</expiry> </ticket> </action> Important If the virtual machine is configured to support only one graphics protocol then the generated authentication token will be valid for that protocol.
But if the virtual machine is configured to support multiple protocols, VNC and SPICE, then the authentication token will only be valid for the SPICE protocol. In order to obtain an authentication token for a specific protocol, for example for VNC, use the ticket method of the VmGraphicsConsole service, which manages the graphics consoles of the virtual machine, by sending a request like this: POST /ovirt-engine/api/vms/123/graphicsconsoles/456/ticket Table 6.791. Parameters summary Name Type Direction Summary async Boolean In Indicates if the generation of the ticket should be performed asynchronously. ticket Ticket In/Out 6.253.24. undosnapshot POST Restores the virtual machine to the state it had before previewing the snapshot. See the preview_snapshot operation for details. Table 6.792. Parameters summary Name Type Direction Summary async Boolean In Indicates if the undo snapshot action should be performed asynchronously. 6.253.25. update PUT Update the virtual machine in the system for the given virtual machine id. Table 6.793. Parameters summary Name Type Direction Summary async Boolean In Indicates if the update should be performed asynchronously. next_run Boolean In Indicates if the update should be applied to the virtual machine immediately or if it should be applied only when the virtual machine is restarted. vm Vm In/Out 6.253.25.1. next_run Indicates if the update should be applied to the virtual machine immediately or if it should be applied only when the virtual machine is restarted. The default value is false, so by default changes are applied immediately. 6.254. VmApplication A service that provides information about an application installed in a virtual machine. Table 6.794. Methods summary Name Summary get Returns the information about the application. 6.254.1. get GET Returns the information about the application. Table 6.795. Parameters summary Name Type Direction Summary application Application Out The information about the application. filter Boolean In Indicates if the results should be filtered according to the permissions of the user. follow String In Indicates which inner links should be followed. 6.254.1.1. application The information about the application. The information consists of the name attribute containing the name of the application (which is an arbitrary string that may also contain additional information such as version) and the vm attribute identifying the virtual machine. For example, a request like this: GET /ovirt-engine/api/vms/123/applications/789 May return information like this: <application href="/ovirt-engine/api/vms/123/applications/789" id="789"> <name>ovirt-guest-agent-common-1.0.12-3.el7</name> <vm href="/ovirt-engine/api/vms/123" id="123"/> </application> 6.254.1.2. follow Indicates which inner links should be followed. The objects referenced by these links will be fetched as part of the current request. See here for details.
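As an illustration, the same lookup through the oVirt Python SDK (ovirtsdk4) might look like the sketch below; the connection details and the identifiers '123' and '789' are illustrative:

import ovirtsdk4 as sdk

# Hypothetical engine URL and credentials.
connection = sdk.Connection(
    url='https://myengine.example.com/ovirt-engine/api',
    username='admin@internal', password='mypassword',
    ca_file='/etc/pki/ovirt-engine/ca.pem')

# Fetch the application with id '789' reported for the VM with id '123'.
vm_service = connection.system_service().vms_service().vm_service('123')
application = vm_service.applications_service().application_service('789').get()
print(application.name)

connection.close()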
6.255. VmApplications A service that provides information about applications installed in a virtual machine. Table 6.796. Methods summary Name Summary list Returns a list of applications installed in the virtual machine. 6.255.1. list GET Returns a list of applications installed in the virtual machine. The order of the returned list of applications isn't guaranteed. Table 6.797. Parameters summary Name Type Direction Summary applications Application[ ] Out A list of applications installed in the virtual machine. filter Boolean In Indicates if the results should be filtered according to the permissions of the user. follow String In Indicates which inner links should be followed. max Integer In Sets the maximum number of applications to return. 6.255.1.1. applications A list of applications installed in the virtual machine. For example, a request like this: GET /ovirt-engine/api/vms/123/applications May return a list like this: <applications> <application href="/ovirt-engine/api/vms/123/applications/456" id="456"> <name>kernel-3.10.0-327.36.1.el7</name> <vm href="/ovirt-engine/api/vms/123" id="123"/> </application> <application href="/ovirt-engine/api/vms/123/applications/789" id="789"> <name>ovirt-guest-agent-common-1.0.12-3.el7</name> <vm href="/ovirt-engine/api/vms/123" id="123"/> </application> </applications> 6.255.1.2. follow Indicates which inner links should be followed. The objects referenced by these links will be fetched as part of the current request. See here for details. 6.255.1.3. max Sets the maximum number of applications to return. If not specified all the applications are returned. 6.256. VmBackup A service managing a backup of a virtual machine. Table 6.798. Methods summary Name Summary finalize Finalize the virtual machine backup entity. get Returns information about the virtual machine backup. 6.256.1. finalize POST Finalize the virtual machine backup entity. End backup, unlock resources, and perform cleanups. To finalize a backup with an id '456' of a virtual machine with an id '123', send a request as follows: POST /ovirt-engine/api/vms/123/backups/456/finalize With a request body as follows: <action /> 6.256.2. get GET Returns information about the virtual machine backup. Table 6.799. Parameters summary Name Type Direction Summary backup Backup Out The information about the virtual machine backup entities. follow String In Indicates which inner links should be followed. 6.256.2.1. backup The information about the virtual machine backup entities. <backups> <backup id="backup-uuid"> <from_checkpoint_id>previous-checkpoint-uuid</from_checkpoint_id> <link href="/ovirt-engine/api/vms/vm-uuid/backups/backup-uuid/disks" rel="disks"/> <status>initializing</status> <creation_date> </backup> </backups> 6.256.2.2. follow Indicates which inner links should be followed. The objects referenced by these links will be fetched as part of the current request. See here for details. 6.257. VmBackupDisk Table 6.800. Methods summary Name Summary get Retrieves the description of the disk. 6.257.1. get GET Retrieves the description of the disk. Table 6.801. Parameters summary Name Type Direction Summary disk Disk Out The description of the disk. follow String In Indicates which inner links should be followed. 6.257.1.1. follow Indicates which inner links should be followed. The objects referenced by these links will be fetched as part of the current request. See here for details. 6.258. VmBackupDisks Table 6.802. Methods summary Name Summary list Returns the list of disks in backup. 6.258.1. list GET Returns the list of disks in backup. Table 6.803. Parameters summary Name Type Direction Summary disks Disk[ ] Out The list of retrieved disks. follow String In Indicates which inner links should be followed. max Integer In Sets the maximum number of disks to return. 6.258.1.1. follow Indicates which inner links should be followed. The objects referenced by these links will be fetched as part of the current request. See here for details. 6.258.1.2. max Sets the maximum number of disks to return. If not specified, all the disks are returned. 6.259. VmBackups Lists the backups of a virtual machine. Table 6.804. Methods summary Name Summary add Adds a new backup entity to a virtual machine. list The list of virtual machine backups. 6.259.1. add POST Adds a new backup entity to a virtual machine.
For example, to start a new incremental backup of a virtual machine since checkpoint id previous-checkpoint-uuid, send a request like this: POST /ovirt-engine/api/vms/123/backups With a request body like this: <backup> <from_checkpoint_id>previous-checkpoint-uuid</from_checkpoint_id> <disks> <disk id="disk-uuid" /> ... </disks> </backup> The response body: <backup id="backup-uuid"> <from_checkpoint_id>previous-checkpoint-uuid</from_checkpoint_id> <to_checkpoint_id>new-checkpoint-uuid</to_checkpoint_id> <disks> <disk id="disk-uuid" /> ... </disks> <status>initializing</status> <creation_date> </backup> To provide the ID of the created backup, send a request like this: POST /ovirt-engine/api/vms/123/backups With a request body like this: <backup id="backup-uuid"> <from_checkpoint_id>previous-checkpoint-uuid</from_checkpoint_id> <disks> <disk id="disk-uuid" /> ... </disks> </backup> Table 6.805. Parameters summary Name Type Direction Summary backup Backup In/Out The information about the virtual machine backup entity. require_consistency Boolean In Indicates if the backup will fail if the VM could not be frozen. use_active Boolean In Indicates whether to use the active volume for performing the backup. 6.259.1.1. require_consistency Indicates if the backup will fail if the VM could not be frozen. If requireConsistency=True the VM backup will fail in case of a failure to freeze the VM. The REST API call should look like this: POST /ovirt-engine/api/vms/123/backups?require_consistency=true The default value of the requireConsistency flag is false. 6.259.1.2. use_active Indicates whether to use the active volume for performing the backup. If useActive=False a snapshot will be created for the backup operation. The REST API call should look like this: POST /ovirt-engine/api/vms/123/backups?use_active=true The default value of the useActive flag is false. 6.259.2. list GET The list of virtual machine backups. Table 6.806. Parameters summary Name Type Direction Summary backups Backup[ ] Out The information about the virtual machine backup entities. follow String In Indicates which inner links should be followed. max Integer In Sets the maximum number of virtual machine backups to return. 6.259.2.1. backups The information about the virtual machine backup entities. <backups> <backup id="backup-uuid"> <from_checkpoint_id>previous-checkpoint-uuid</from_checkpoint_id> <disks> <disk id="disk-uuid" /> ... </disks> <status>initializing</status> <creation_date> </backup> </backups> 6.259.2.2. follow Indicates which inner links should be followed. The objects referenced by these links will be fetched as part of the current request. See here for details. 6.259.2.3. max Sets the maximum number of virtual machine backups to return. If not specified, all the virtual machine backups are returned.
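Putting these operations together, the following is a rough end-to-end sketch with the oVirt Python SDK (ovirtsdk4); the connection details, the VM id '123', the disk id, and the exact polling fields are assumptions, not a definitive implementation:

import time

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Hypothetical engine URL and credentials.
connection = sdk.Connection(
    url='https://myengine.example.com/ovirt-engine/api',
    username='admin@internal', password='mypassword',
    ca_file='/etc/pki/ovirt-engine/ca.pem')

vm_service = connection.system_service().vms_service().vm_service('123')
backups_service = vm_service.backups_service()

# Start a full backup of a single disk of the virtual machine.
backup = backups_service.add(
    types.Backup(disks=[types.Disk(id='disk-uuid')]))

# Poll the backup until it is ready (or has failed).
backup_service = backups_service.backup_service(backup.id)
while backup_service.get().phase not in (
        types.BackupPhase.READY, types.BackupPhase.FAILED):
    time.sleep(1)

# The disk data would be downloaded here (image transfer); afterwards
# the backup is finalized to unlock the resources.
backup_service.finalize()

connection.close()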
id="00000000-0000-0000-0000-000000000000"> <vm href="/ovirt-engine/api/vms/123" id="123"/> </cdrom> Table 6.808. Parameters summary Name Type Direction Summary cdrom Cdrom Out The information about the CDROM device. current Boolean In Indicates if the operation should return the information for the currently running virtual machine. follow String In Indicates which inner links should be followed . 6.260.1.1. current Indicates if the operation should return the information for the currently running virtual machine. This parameter is optional, and the default value is false . 6.260.1.2. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.260.2. update PUT Updates the information about this CDROM device. It allows to change or eject the disk by changing the value of the file attribute. For example, to insert or change the disk send a request like this: The body should contain the new value for the file attribute: <cdrom> <file id="mycd.iso"/> </cdrom> The value of the id attribute, mycd.iso in this example, should correspond to a file available in an attached ISO storage domain. To eject the disk use a file with an empty id : <cdrom> <file id=""/> </cdrom> By default the above operations change permanently the disk that will be visible to the virtual machine after the boot, but they do not have any effect on the currently running virtual machine. If you want to change the disk that is visible to the current running virtual machine, add the current=true parameter. For example, to eject the current disk send a request like this: With a request body like this: <cdrom> <file id=""/> </cdrom> Important The changes made with the current=true parameter are never persisted, so they won't have any effect after the virtual machine is rebooted. Table 6.809. Parameters summary Name Type Direction Summary cdrom Cdrom In/Out The information about the CDROM device. current Boolean In Indicates if the update should apply to the currently running virtual machine, or to the virtual machine after the boot. 6.260.2.1. current Indicates if the update should apply to the currently running virtual machine, or to the virtual machine after the boot. This parameter is optional, and the default value is false , which means that by default the update will have effect only after the boot. 6.261. VmCdroms Manages the CDROM devices of a virtual machine. Currently virtual machines have exactly one CDROM device. No new devices can be added, and the existing one can't be removed, thus there are no add or remove methods. Changing and ejecting CDROM disks is done with the update method of the service that manages the CDROM device. Table 6.810. Methods summary Name Summary add Add a cdrom to a virtual machine identified by the given id. list Returns the list of CDROM devices of the virtual machine. 6.261.1. add POST Add a cdrom to a virtual machine identified by the given id. Table 6.811. Parameters summary Name Type Direction Summary cdrom Cdrom In/Out 6.261.2. list GET Returns the list of CDROM devices of the virtual machine. The order of the returned list of CD-ROM devices isn't guaranteed. Table 6.812. Parameters summary Name Type Direction Summary cdroms Cdrom[ ] Out The list of CDROM devices of the virtual machine. follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of CDROMs to return. 6.261.2.1. follow Indicates which inner links should be followed . 
6.261. VmCdroms Manages the CDROM devices of a virtual machine. Currently virtual machines have exactly one CDROM device. No new devices can be added, and the existing one can't be removed, thus there are no add or remove methods. Changing and ejecting CDROM disks is done with the update method of the service that manages the CDROM device. Table 6.810. Methods summary Name Summary add Add a cdrom to a virtual machine identified by the given id. list Returns the list of CDROM devices of the virtual machine. 6.261.1. add POST Add a cdrom to a virtual machine identified by the given id. Table 6.811. Parameters summary Name Type Direction Summary cdrom Cdrom In/Out 6.261.2. list GET Returns the list of CDROM devices of the virtual machine. The order of the returned list of CD-ROM devices isn't guaranteed. Table 6.812. Parameters summary Name Type Direction Summary cdroms Cdrom[ ] Out The list of CDROM devices of the virtual machine. follow String In Indicates which inner links should be followed. max Integer In Sets the maximum number of CDROMs to return. 6.261.2.1. follow Indicates which inner links should be followed. The objects referenced by these links will be fetched as part of the current request. See here for details. 6.261.2.2. max Sets the maximum number of CDROMs to return. If not specified all the CDROMs are returned. 6.262. VmCheckpoint A service managing a checkpoint of a virtual machine. Table 6.813. Methods summary Name Summary get Returns information about the virtual machine checkpoint. remove Remove the virtual machine checkpoint entity. 6.262.1. get GET Returns information about the virtual machine checkpoint. Table 6.814. Parameters summary Name Type Direction Summary checkpoint Checkpoint Out The information about the virtual machine checkpoint entity. follow String In Indicates which inner links should be followed. 6.262.1.1. checkpoint The information about the virtual machine checkpoint entity. <checkpoint id="checkpoint-uuid"> <link href="/ovirt-engine/api/vms/vm-uuid/checkpoints/checkpoint-uuid/disks" rel="disks"/> <parent_id>parent-checkpoint-uuid</parent_id> <creation_date>xxx</creation_date> <vm href="/ovirt-engine/api/vms/vm-uuid" id="vm-uuid"/> </checkpoint> 6.262.1.2. follow Indicates which inner links should be followed. The objects referenced by these links will be fetched as part of the current request. See here for details. 6.262.2. remove DELETE Remove the virtual machine checkpoint entity. Remove the checkpoint from libvirt and the database. Table 6.815. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.263. VmCheckpointDisk Table 6.816. Methods summary Name Summary get Retrieves the description of the disk. 6.263.1. get GET Retrieves the description of the disk. Table 6.817. Parameters summary Name Type Direction Summary disk Disk Out The description of the disk. follow String In Indicates which inner links should be followed. 6.263.1.1. follow Indicates which inner links should be followed. The objects referenced by these links will be fetched as part of the current request. See here for details. 6.264. VmCheckpointDisks Table 6.818. Methods summary Name Summary list Returns the list of disks in checkpoint. 6.264.1. list GET Returns the list of disks in checkpoint. Table 6.819. Parameters summary Name Type Direction Summary disks Disk[ ] Out The list of retrieved disks. follow String In Indicates which inner links should be followed. max Integer In Sets the maximum number of disks to return. 6.264.1.1. follow Indicates which inner links should be followed. The objects referenced by these links will be fetched as part of the current request. See here for details. 6.264.1.2. max Sets the maximum number of disks to return. If not specified, all the disks are returned. 6.265. VmCheckpoints Lists the checkpoints of a virtual machine. Table 6.820. Methods summary Name Summary list The list of virtual machine checkpoints. 6.265.1. list GET The list of virtual machine checkpoints. To get a list of checkpoints for a virtual machine with an id '123', send a request as follows: GET /ovirt-engine/api/vms/123/checkpoints Table 6.821. Parameters summary Name Type Direction Summary checkpoints Checkpoint[ ] Out The information about the virtual machine checkpoint entities. follow String In Indicates which inner links should be followed. max Integer In Sets the maximum number of virtual machine checkpoints to return. 6.265.1.1. checkpoints The information about the virtual machine checkpoint entities.
<checkpoints> <checkpoint id="checkpoint-uuid"> <link href="/ovirt-engine/api/vms/vm-uuid/checkpoints/checkpoint-uuid/disks" rel="disks"/> <parent_id>parent-checkpoint-uuid</parent_id> <creation_date>xxx</creation_date> <vm href="/ovirt-engine/api/vm-uuid" id="vm-uuid"/> </checkpoint> </checkpoints> 6.265.1.2. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.265.1.3. max Sets the maximum number of virtual machine checkpoints to return. If not specified, all the virtual machine checkpoints are returned. 6.266. VmDisk Table 6.822. Methods summary Name Summary activate deactivate export get move reduce Reduces the size of the disk image. remove Detach the disk from the virtual machine. update 6.266.1. activate POST Table 6.823. Parameters summary Name Type Direction Summary async Boolean In Indicates if the activation should be performed asynchronously. 6.266.2. deactivate POST Table 6.824. Parameters summary Name Type Direction Summary async Boolean In Indicates if the deactivation should be performed asynchronously. 6.266.3. export POST Table 6.825. Parameters summary Name Type Direction Summary async Boolean In Indicates if the export should be performed asynchronously. filter Boolean In Indicates if the results should be filtered according to the permissions of the user. 6.266.4. get GET Table 6.826. Parameters summary Name Type Direction Summary disk Disk Out follow String In Indicates which inner links should be followed . 6.266.4.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.266.5. move POST Table 6.827. Parameters summary Name Type Direction Summary async Boolean In Indicates if the move should be performed asynchronously. filter Boolean In Indicates if the results should be filtered according to the permissions of the user. 6.266.6. reduce POST Reduces the size of the disk image. Invokes reduce on the logical volume (i.e. this is only applicable for block storage domains). This is applicable for floating disks and disks attached to non-running virtual machines. There is no need to specify the size as the optimal size is calculated automatically. Table 6.828. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.266.7. remove DELETE Detach the disk from the virtual machine. Note In version 3 of the API this used to also remove the disk completely from the system, but starting with version 4 it doesn't. If you need to remove it completely use the remove method of the top level disk service . Table 6.829. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.266.8. update PUT Table 6.830. Parameters summary Name Type Direction Summary async Boolean In Indicates if the update should be performed asynchronously. disk Disk In/Out 6.267. VmDisks Table 6.831. Methods summary Name Summary add list Returns the list of disks of the virtual machine. 6.267.1. add POST Table 6.832. Parameters summary Name Type Direction Summary disk Disk In/Out 6.267.2. list GET Returns the list of disks of the virtual machine. The order of the returned list of disks isn't guaranteed. Table 6.833. Parameters summary Name Type Direction Summary disks Disk[ ] Out follow String In Indicates which inner links should be followed . 
max Integer In Sets the maximum number of disks to return. 6.267.2.1. follow Indicates which inner links should be followed. The objects referenced by these links will be fetched as part of the current request. See here for details. 6.267.2.2. max Sets the maximum number of disks to return. If not specified all the disks are returned. 6.268. VmGraphicsConsole Table 6.834. Methods summary Name Summary get Retrieves the graphics console configuration of the virtual machine. proxyticket remoteviewerconnectionfile Generates the file which is compatible with the remote-viewer client. remove Remove the graphics console from the virtual machine. ticket Generates a time-sensitive authentication token for accessing this virtual machine's console. 6.268.1. get GET Retrieves the graphics console configuration of the virtual machine. Important By default, when the current parameter is not specified, the data returned corresponds to the next execution of the virtual machine. In the current implementation of the system this means that the address and port attributes will not be populated because the system does not know what address and port will be used for the next execution. Since in most cases those attributes are needed, it is strongly advised to always explicitly include the current parameter with the value true. Table 6.835. Parameters summary Name Type Direction Summary console GraphicsConsole Out The information about the graphics console of the virtual machine. current Boolean In Specifies if the data returned should correspond to the next execution of the virtual machine, or to the current execution. follow String In Indicates which inner links should be followed. 6.268.1.1. current Specifies if the data returned should correspond to the next execution of the virtual machine, or to the current execution. Important The address and port attributes will not be populated unless the value is true. For example, to get data for the current execution of the virtual machine, including the address and port attributes, send a request like this: GET /ovirt-engine/api/vms/123/graphicsconsoles/456?current=true The default value is false. 6.268.1.2. follow Indicates which inner links should be followed. The objects referenced by these links will be fetched as part of the current request. See here for details. 6.268.2. proxyticket POST Table 6.836. Parameters summary Name Type Direction Summary async Boolean In Indicates if the generation of the ticket should be performed asynchronously. proxy_ticket ProxyTicket Out 6.268.3. remoteviewerconnectionfile POST Generates the file which is compatible with the remote-viewer client. Use the following request to generate the remote viewer connection file of the graphics console: POST /ovirt-engine/api/vms/123/graphicsconsoles/456/remoteviewerconnectionfile Note that this action generates the file only if the virtual machine is running. The remoteviewerconnectionfile action does not take any action specific parameters, so the request body should contain an empty action: <action/> The response contains the file, which can be used with the remote-viewer client.

<action>
  <remote_viewer_connection_file>
    [virt-viewer]
    type=spice
    host=192.168.1.101
    port=-1
    password=123456789
    delete-this-file=1
    fullscreen=0
    toggle-fullscreen=shift+f11
    release-cursor=shift+f12
    secure-attention=ctrl+alt+end
    tls-port=5900
    enable-smartcard=0
    enable-usb-autoshare=0
    usb-filter=null
    tls-ciphers=DEFAULT
    host-subject=O=local,CN=example.com
    ca=...
  </remote_viewer_connection_file>
</action>

For example, to fetch the content of the remote viewer connection file and save it into a temporary file, a user can use the oVirt Python SDK as follows:

# Find the virtual machine:
vm = vms_service.list(search='name=myvm')[0]

# Locate the service that manages the virtual machine, as that is where
# the locators are defined:
vm_service = vms_service.vm_service(vm.id)

# Find the graphic console of the virtual machine:
graphics_consoles_service = vm_service.graphics_consoles_service()
graphics_console = graphics_consoles_service.list()[0]

# Generate the remote viewer connection file:
console_service = graphics_consoles_service.console_service(graphics_console.id)
remote_viewer_connection_file = console_service.remote_viewer_connection_file()

# Write the content to file "/tmp/remote_viewer_connection_file.vv"
path = "/tmp/remote_viewer_connection_file.vv"
with open(path, "w") as f:
    f.write(remote_viewer_connection_file)

Once the remote viewer connection file has been created, you can use it to connect to the virtual machine graphics console, as follows:

#!/bin/sh -ex
remote-viewer --ovirt-ca-file=/etc/pki/ovirt-engine/ca.pem /tmp/remote_viewer_connection_file.vv

Table 6.837. Parameters summary Name Type Direction Summary remote_viewer_connection_file String Out Contains the file which is compatible with the remote-viewer client. 6.268.3.1. remote_viewer_connection_file Contains the file which is compatible with the remote-viewer client. The user can use the content of this attribute to create a file, which can be passed to the remote-viewer client to connect to the virtual machine graphics console. 6.268.4. remove DELETE Remove the graphics console from the virtual machine. Table 6.838. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.268.5. ticket POST Generates a time-sensitive authentication token for accessing this virtual machine's console. The client-provided action optionally includes a desired ticket value and/or an expiry time in seconds. In any case, the response specifies the actual ticket value and expiry used. <action> <ticket> <value>abcd12345</value> <expiry>120</expiry> </ticket> </action> Table 6.839. Parameters summary Name Type Direction Summary ticket Ticket In/Out The generated ticket that can be used to access this console. 6.269. VmGraphicsConsoles Table 6.840. Methods summary Name Summary add Add a new graphics console to the virtual machine. list Lists all the configured graphics consoles of the virtual machine. 6.269.1. add POST Add a new graphics console to the virtual machine. Table 6.841. Parameters summary Name Type Direction Summary console GraphicsConsole In/Out 6.269.2. list GET Lists all the configured graphics consoles of the virtual machine. Important By default, when the current parameter is not specified, the data returned corresponds to the next execution of the virtual machine. In the current implementation of the system this means that the address and port attributes will not be populated because the system does not know what address and port will be used for the next execution. Since in most cases those attributes are needed, it is strongly advised to always explicitly include the current parameter with the value true. The order of the returned list of graphics consoles is not guaranteed. Table 6.842. Parameters summary Name Type Direction Summary consoles GraphicsConsole[ ] Out The list of graphics consoles of the virtual machine.
current Boolean In Specifies if the data returned should correspond to the next execution of the virtual machine, or to the current execution. follow String In Indicates which inner links should be followed. max Integer In Sets the maximum number of consoles to return. 6.269.2.1. current Specifies if the data returned should correspond to the next execution of the virtual machine, or to the current execution. Important The address and port attributes will not be populated unless the value is true. For example, to get data for the current execution of the virtual machine, including the address and port attributes, send a request like this: GET /ovirt-engine/api/vms/123/graphicsconsoles?current=true The default value is false. 6.269.2.2. follow Indicates which inner links should be followed. The objects referenced by these links will be fetched as part of the current request. See here for details. 6.269.2.3. max Sets the maximum number of consoles to return. If not specified all the consoles are returned. 6.270. VmHostDevice A service to manage an individual host device attached to a virtual machine. Table 6.843. Methods summary Name Summary get Retrieve information about a particular host device attached to a given virtual machine. remove Remove the attachment of this host device from a given virtual machine. 6.270.1. get GET Retrieve information about a particular host device attached to a given virtual machine. Example: GET /ovirt-engine/api/vms/123/hostdevices/456 <host_device href="/ovirt-engine/api/hosts/543/devices/456" id="456"> <name>pci_0000_04_00_0</name> <capability>pci</capability> <iommu_group>30</iommu_group> <placeholder>true</placeholder> <product id="0x13ba"> <name>GM107GL [Quadro K2200]</name> </product> <vendor id="0x10de"> <name>NVIDIA Corporation</name> </vendor> <host href="/ovirt-engine/api/hosts/543" id="543"/> <parent_device href="/ovirt-engine/api/hosts/543/devices/456" id="456"> <name>pci_0000_00_03_0</name> </parent_device> <vm href="/ovirt-engine/api/vms/123" id="123"/> </host_device> Table 6.844. Parameters summary Name Type Direction Summary device HostDevice Out Retrieved information about the host device attached to the given virtual machine. follow String In Indicates which inner links should be followed. 6.270.1.1. follow Indicates which inner links should be followed. The objects referenced by these links will be fetched as part of the current request. See here for details. 6.270.2. remove DELETE Remove the attachment of this host device from the given virtual machine. Note In case this device serves as an IOMMU placeholder, it cannot be removed (remove will result only in setting its placeholder flag to true). Note that all IOMMU placeholder devices will be removed automatically as soon as there are no more non-placeholder devices (that is, when all devices from the given IOMMU group are detached). Table 6.845. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.271. VmHostDevices A service to manage host devices attached to a virtual machine. Table 6.846. Methods summary Name Summary add Attach a target device to a given virtual machine. list List the host devices assigned to a given virtual machine. 6.271.1. add POST Attach a target device to a given virtual machine. Example: POST /ovirt-engine/api/vms/123/hostdevices With a request body of type HostDevice, for example <host_device id="123" /> Note A necessary precondition for a successful host device attachment is that the virtual machine must be pinned to exactly one host. The device ID is then taken relative to this host.
Note Attachment of a PCI device that is part of a bigger IOMMU group will result in attachment of the remaining devices from that IOMMU group as "placeholders". These devices are then identified using the placeholder attribute of the HostDevice type set to true. If you want to attach a device that already serves as an IOMMU placeholder, simply issue an explicit Add operation for it, and its placeholder flag will be cleared, and the device will be accessible to the virtual machine. Table 6.847. Parameters summary Name Type Direction Summary device HostDevice In/Out The host device to be attached to the given virtual machine. 6.271.2. list GET List the host devices assigned to the given virtual machine. The order of the returned list of devices isn't guaranteed. Table 6.848. Parameters summary Name Type Direction Summary device HostDevice[ ] Out Retrieved list of host devices attached to the given virtual machine. follow String In Indicates which inner links should be followed. max Integer In Sets the maximum number of devices to return. 6.271.2.1. follow Indicates which inner links should be followed. The objects referenced by these links will be fetched as part of the current request. See here for details. 6.271.2.2. max Sets the maximum number of devices to return. If not specified all the devices are returned. 6.272. VmMediatedDevice Table 6.849. Methods summary Name Summary get Retrieves the configuration of mediated devices in the virtual machine. remove Remove the mediated device from the virtual machine. update Updates the information about the mediated device. 6.272.1. get GET Retrieves the configuration of mediated devices in the virtual machine. Table 6.850. Parameters summary Name Type Direction Summary device VmMediatedDevice Out The information about the mediated device of the virtual machine. follow String In Indicates which inner links should be followed. 6.272.1.1. follow Indicates which inner links should be followed. The objects referenced by these links will be fetched as part of the current request. See here for details. 6.272.2. remove DELETE Remove the mediated device from the virtual machine. Table 6.851. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.272.3. update PUT Updates the information about the mediated device. You can update the information using the specParams element. For example, to update a mediated device, send a request like this: PUT /ovirt-engine/api/vms/123/mediateddevices/00000000-0000-0000-0000-000000000000 with a response body like this: <vm_mediated_device href="/ovirt-engine/api/vms/123/mediateddevices/00000000-0000-0000-0000-000000000000" id="00000000-0000-0000-0000-000000000000"> <vm href="/ovirt-engine/api/vms/123" id="123"/> <spec_params> <property> <name>mdevType</name> <value>nvidia-11</value> </property> </spec_params> </vm_mediated_device> Table 6.852. Parameters summary Name Type Direction Summary async Boolean In Indicates if the update should be performed asynchronously. device VmMediatedDevice In/Out The information about the mediated device. 6.272.3.1. device The information about the mediated device. The request data must contain the specParams properties. The response data contains complete information about the updated mediated device. 6.273. VmMediatedDevices A service that manages mediated devices of a VM. Table 6.853. Methods summary Name Summary add Add a new mediated device to the virtual machine. list Lists all the configured mediated devices of the virtual machine. 6.273.1. add POST Add a new mediated device to the virtual machine. Table 6.854.
Parameters summary Name Type Direction Summary device VmMediatedDevice In/Out 6.273.2. list GET Lists all the configured mediated devices of the virtual machine. The order of the returned list of mediated devices is not guaranteed. Table 6.855. Parameters summary Name Type Direction Summary devices VmMediatedDevice[] Out The list of mediated devices of the virtual machine. follow String In Indicates which inner links should be followed. max Integer In Sets the maximum number of mediated devices to return. 6.273.2.1. follow Indicates which inner links should be followed. The objects referenced by these links will be fetched as part of the current request. See here for details. 6.273.2.2. max Sets the maximum number of mediated devices to return. If not specified all the mediated devices are returned. 6.274. VmNic Table 6.856. Methods summary Name Summary activate deactivate get remove Removes the NIC. update Updates the NIC. 6.274.1. activate POST Table 6.857. Parameters summary Name Type Direction Summary async Boolean In Indicates if the activation should be performed asynchronously. 6.274.2. deactivate POST Table 6.858. Parameters summary Name Type Direction Summary async Boolean In Indicates if the deactivation should be performed asynchronously. 6.274.3. get GET Table 6.859. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed. nic Nic Out 6.274.3.1. follow Indicates which inner links should be followed. The objects referenced by these links will be fetched as part of the current request. See here for details. 6.274.4. remove DELETE Removes the NIC. For example, to remove the NIC with id 456 from the virtual machine with id 123 send a request like this: DELETE /ovirt-engine/api/vms/123/nics/456 Important The hotplugging feature only supports virtual machine operating systems that support hotplugging operations. Example operating systems include: Red Hat Enterprise Linux 6 Red Hat Enterprise Linux 5 Windows Server 2008 and Windows Server 2003 Table 6.860. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.274.5. update PUT Updates the NIC. For example, to update the NIC with id 456 belonging to the virtual machine with id 123 send a request like this: PUT /ovirt-engine/api/vms/123/nics/456 With a request body like this: <nic> <name>mynic</name> <interface>e1000</interface> <vnic_profile id='789'/> </nic> Important The hotplugging feature only supports virtual machine operating systems that support hotplugging operations. Example operating systems include: Red Hat Enterprise Linux 6 Red Hat Enterprise Linux 5 Windows Server 2008 and Windows Server 2003 Table 6.861. Parameters summary Name Type Direction Summary async Boolean In Indicates if the update should be performed asynchronously. nic Nic In/Out 6.275. VmNics Table 6.862. Methods summary Name Summary add Adds a NIC to the virtual machine. list Returns the list of NICs of the virtual machine. 6.275.1. add POST Adds a NIC to the virtual machine. The following example adds to the virtual machine 123 a network interface named mynic using virtio and the NIC profile 456. POST /ovirt-engine/api/vms/123/nics With a request body like this:
<nic> <name>mynic</name> <interface>virtio</interface> <vnic_profile id="456"/> </nic> The following example sends that request using curl: curl \ --request POST \ --header "Version: 4" \ --header "Content-Type: application/xml" \ --header "Accept: application/xml" \ --user "admin@internal:mypassword" \ --cacert /etc/pki/ovirt-engine/ca.pem \ --data ' <nic> <name>mynic</name> <interface>virtio</interface> <vnic_profile id="456"/> </nic> ' \ https://myengine.example.com/ovirt-engine/api/vms/123/nics Important The hotplugging feature only supports virtual machine operating systems that support hotplugging operations. Example operating systems include: Red Hat Enterprise Linux 6 Red Hat Enterprise Linux 5 Windows Server 2008 and Windows Server 2003 Table 6.863. Parameters summary Name Type Direction Summary nic Nic In/Out 6.275.2. list GET Returns the list of NICs of the virtual machine. The order of the returned list of NICs isn't guaranteed. Table 6.864. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed. max Integer In Sets the maximum number of NICs to return. nics Nic[] Out 6.275.2.1. follow Indicates which inner links should be followed. The objects referenced by these links will be fetched as part of the current request. See here for details. 6.275.2.2. max Sets the maximum number of NICs to return. If not specified all the NICs are returned. 6.276. VmNumaNode Table 6.865. Methods summary Name Summary get remove Removes a virtual NUMA node. update Updates a virtual NUMA node. 6.276.1. get GET Table 6.866. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed. node VirtualNumaNode Out 6.276.1.1. follow Indicates which inner links should be followed. The objects referenced by these links will be fetched as part of the current request. See here for details. 6.276.2. remove DELETE Removes a virtual NUMA node. An example of removing a virtual NUMA node: DELETE /ovirt-engine/api/vms/123/numanodes/456 Note It's required to remove the NUMA nodes from the highest index first. Table 6.867. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.276.3. update PUT Updates a virtual NUMA node. An example of pinning a virtual NUMA node to a physical NUMA node on the host: PUT /ovirt-engine/api/vms/123/numanodes/456 The request body should contain the following: <vm_numa_node> <numa_node_pins> <numa_node_pin> <index>0</index> </numa_node_pin> </numa_node_pins> </vm_numa_node> Table 6.868. Parameters summary Name Type Direction Summary async Boolean In Indicates if the update should be performed asynchronously. node VirtualNumaNode In/Out 6.277. VmNumaNodes Table 6.869. Methods summary Name Summary add Creates a new virtual NUMA node for the virtual machine. list Lists virtual NUMA nodes of a virtual machine. 6.277.1. add POST Creates a new virtual NUMA node for the virtual machine. An example of creating a NUMA node: POST /ovirt-engine/api/vms/123/numanodes The request body can contain the following: <vm_numa_node> <cpu> <cores> <core> <index>0</index> </core> </cores> </cpu> <index>0</index> <memory>1024</memory> <numa_tune_mode>strict</numa_tune_mode> </vm_numa_node> Table 6.870. Parameters summary Name Type Direction Summary node VirtualNumaNode In/Out 6.277.2. list GET Lists virtual NUMA nodes of a virtual machine. The order of the returned list of NUMA nodes isn't guaranteed. Table 6.871. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed.
max Integer In Sets the maximum number of nodes to return. nodes VirtualNumaNode[] Out 6.277.2.1. follow Indicates which inner links should be followed. The objects referenced by these links will be fetched as part of the current request. See here for details. 6.277.2.2. max Sets the maximum number of nodes to return. If not specified all the nodes are returned. 6.278. VmPool A service to manage a virtual machine pool. Table 6.872. Methods summary Name Summary allocatevm This operation allocates a virtual machine in the virtual machine pool. get Get the virtual machine pool. remove Removes a virtual machine pool. update Update the virtual machine pool. 6.278.1. allocatevm POST This operation allocates a virtual machine in the virtual machine pool. The allocate virtual machine action does not take any action-specific parameters, so the request body should contain an empty action: <action/> Table 6.873. Parameters summary Name Type Direction Summary async Boolean In Indicates if the allocation should be performed asynchronously. 6.278.2. get GET Get the virtual machine pool. You will get an XML response like this one: <vm_pool id="123"> <actions>...</actions> <name>MyVmPool</name> <description>MyVmPool description</description> <link href="/ovirt-engine/api/vmpools/123/permissions" rel="permissions"/> <max_user_vms>1</max_user_vms> <prestarted_vms>0</prestarted_vms> <size>100</size> <stateful>false</stateful> <type>automatic</type> <use_latest_template_version>false</use_latest_template_version> <cluster id="123"/> <template id="123"/> <vm id="123">...</vm> ... </vm_pool> Table 6.874. Parameters summary Name Type Direction Summary filter Boolean In Indicates if the results should be filtered according to the permissions of the user. follow String In Indicates which inner links should be followed. pool VmPool Out Retrieved virtual machine pool. 6.278.2.1. follow Indicates which inner links should be followed. The objects referenced by these links will be fetched as part of the current request. See here for details. 6.278.3. remove DELETE Removes a virtual machine pool. Table 6.875. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.278.4. update PUT Update the virtual machine pool. The name, description, size, prestarted_vms and max_user_vms attributes can be updated after the virtual machine pool has been created. <vmpool> <name>VM_Pool_B</name> <description>Virtual Machine Pool B</description> <size>3</size> <prestarted_vms>1</prestarted_vms> <max_user_vms>2</max_user_vms> </vmpool> Table 6.876. Parameters summary Name Type Direction Summary async Boolean In Indicates if the update should be performed asynchronously. pool VmPool In/Out The virtual machine pool that is being updated. seal Boolean In Specifies if virtual machines created for the pool should be sealed after creation. 6.278.4.1. seal Specifies if virtual machines created for the pool should be sealed after creation. If this optional parameter is provided, and its value is true, virtual machines created for the pool will be sealed after creation. If the value is false, the virtual machines will not be sealed. If the parameter is not provided, the virtual machines will be sealed only if they are created from a sealed template and their guest OS is not set to Windows. This parameter affects only the virtual machines created when the pool is updated.
For example, to update a virtual machine pool and to seal the additional virtual machines that are created, send a request like this: PUT /ovirt-engine/api/vmpools/123?seal=true With the following body: <vmpool> <name>VM_Pool_B</name> <description>Virtual Machine Pool B</description> <size>7</size> </vmpool> 6.279. VmPools Provides read-write access to virtual machine pools. Table 6.877. Methods summary Name Summary add Creates a new virtual machine pool. list Get a list of available virtual machine pools. 6.279.1. add POST Creates a new virtual machine pool. A new pool requires the name, cluster and template attributes. Identify the cluster and template with the id or name nested attributes: POST /ovirt-engine/api/vmpools With the following body: <vmpool> <name>mypool</name> <cluster id="123"/> <template id="456"/> </vmpool> Table 6.878. Parameters summary Name Type Direction Summary pool VmPool In/Out Pool to add. seal Boolean In Specifies if virtual machines created for the pool should be sealed after creation. 6.279.1.1. seal Specifies if virtual machines created for the pool should be sealed after creation. If this optional parameter is provided, and its value is true, virtual machines created for the pool will be sealed after creation. If the value is false, the virtual machines will not be sealed. If the parameter is not provided, the virtual machines will be sealed only if they are created from a sealed template and their guest OS is not set to Windows. This parameter affects only the virtual machines created when the pool is created. For example, to create a virtual machine pool with 5 virtual machines and to seal them, send a request like this: POST /ovirt-engine/api/vmpools?seal=true With the following body: <vmpool> <name>mypool</name> <cluster id="123"/> <template id="456"/> <size>5</size> </vmpool> 6.279.2. list GET Get a list of available virtual machine pools. GET /ovirt-engine/api/vmpools You will receive the following response: <vm_pools> <vm_pool id="123"> ... </vm_pool> ... </vm_pools> The order of the returned list of pools is guaranteed only if the sortby clause is included in the search parameter. Table 6.879. Parameters summary Name Type Direction Summary case_sensitive Boolean In Indicates if the search performed using the search parameter should be performed taking case into account. filter Boolean In Indicates if the results should be filtered according to the permissions of the user. follow String In Indicates which inner links should be followed. max Integer In Sets the maximum number of pools to return. pools VmPool[] Out Retrieved pools. search String In A query string used to restrict the returned pools. 6.279.2.1. case_sensitive Indicates if the search performed using the search parameter should be performed taking case into account. The default value is true, which means that case is taken into account. If you want to search ignoring case set it to false. 6.279.2.2. follow Indicates which inner links should be followed. The objects referenced by these links will be fetched as part of the current request. See here for details. 6.279.2.3. max Sets the maximum number of pools to return. If this value is not specified, all of the pools are returned. 6.280. VmReportedDevice Table 6.880. Methods summary Name Summary get 6.280.1. get GET Table 6.881. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed. reported_device ReportedDevice Out 6.280.1.1. follow Indicates which inner links should be followed. The objects referenced by these links will be fetched as part of the current request. See here for details.
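For example, assuming a reported device with id 456 on the virtual machine with id 123, it could be retrieved with a request like this (a sketch; the path is inferred from the collection naming used elsewhere in this API and the identifiers are placeholders): GET /ovirt-engine/api/vms/123/reporteddevices/456 6.281.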
VmReportedDevices Table 6.882. Methods summary Name Summary list Returns the list of reported devices of the virtual machine. 6.281.1. list GET Returns the list of reported devices of the virtual machine. The order of the returned list of devices isn't guaranteed. Table 6.883. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed. max Integer In Sets the maximum number of devices to return. reported_device ReportedDevice[] Out 6.281.1.1. follow Indicates which inner links should be followed. The objects referenced by these links will be fetched as part of the current request. See here for details. 6.281.1.2. max Sets the maximum number of devices to return. If not specified all the devices are returned. 6.282. VmSession Table 6.884. Methods summary Name Summary get 6.282.1. get GET Table 6.885. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed. session Session Out 6.282.1.1. follow Indicates which inner links should be followed. The objects referenced by these links will be fetched as part of the current request. See here for details. 6.283. VmSessions Provides information about virtual machine user sessions. Table 6.886. Methods summary Name Summary list Lists all user sessions for this virtual machine. 6.283.1. list GET Lists all user sessions for this virtual machine. For example, to retrieve the session information for virtual machine 123 send a request like this: GET /ovirt-engine/api/vms/123/sessions The response body will contain something like this: <sessions> <session href="/ovirt-engine/api/vms/123/sessions/456" id="456"> <console_user>true</console_user> <ip> <address>192.168.122.1</address> </ip> <user href="/ovirt-engine/api/users/789" id="789"/> <vm href="/ovirt-engine/api/vms/123" id="123"/> </session> ... </sessions> The order of the returned list of sessions isn't guaranteed. Table 6.887. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed. max Integer In Sets the maximum number of sessions to return. sessions Session[] Out 6.283.1.1. follow Indicates which inner links should be followed. The objects referenced by these links will be fetched as part of the current request. See here for details. 6.283.1.2. max Sets the maximum number of sessions to return. If not specified all the sessions are returned. 6.284. VmWatchdog A service managing a watchdog on virtual machines. Table 6.888. Methods summary Name Summary get Returns the information about the watchdog. remove Removes the watchdog from the virtual machine. update Updates the information about the watchdog. 6.284.1. get GET Returns the information about the watchdog. Table 6.889. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed. watchdog Watchdog Out The information about the watchdog. 6.284.1.1. follow Indicates which inner links should be followed. The objects referenced by these links will be fetched as part of the current request. See here for details. 6.284.1.2. watchdog The information about the watchdog. The information consists of the model element, the action element and a reference to the virtual machine. It may look like this: <watchdogs> <watchdog href="/ovirt-engine/api/vms/123/watchdogs/00000000-0000-0000-0000-000000000000" id="00000000-0000-0000-0000-000000000000"> <vm href="/ovirt-engine/api/vms/123" id="123"/> <action>poweroff</action> <model>i6300esb</model> </watchdog> </watchdogs>
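For example, the watchdog shown above could be retrieved with a request like this (a sketch; the all-zero identifier is the placeholder used throughout this section): GET /ovirt-engine/api/vms/123/watchdogs/00000000-0000-0000-0000-000000000000 6.284.2.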
remove DELETE Removes the watchdog from the virtual machine. For example, to remove a watchdog from a virtual machine, send a request like this: DELETE /ovirt-engine/api/vms/123/watchdogs/00000000-0000-0000-0000-000000000000 Table 6.890. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.284.3. update PUT Updates the information about the watchdog. You can update the information using the action and model elements. For example, to update a watchdog, send a request like this: PUT /ovirt-engine/api/vms/123/watchdogs with response body: <watchdog href="/ovirt-engine/api/vms/123/watchdogs/00000000-0000-0000-0000-000000000000" id="00000000-0000-0000-0000-000000000000"> <vm href="/ovirt-engine/api/vms/123" id="123"/> <action>reset</action> <model>i6300esb</model> </watchdog> Table 6.891. Parameters summary Name Type Direction Summary async Boolean In Indicates if the update should be performed asynchronously. watchdog Watchdog In/Out The information about the watchdog. 6.284.3.1. watchdog The information about the watchdog. The request data must contain at least one of the model and action elements. The response data contains complete information about the updated watchdog. 6.285. VmWatchdogs Lists the watchdogs of a virtual machine. Table 6.892. Methods summary Name Summary add Adds a new watchdog to the virtual machine. list The list of watchdogs of the virtual machine. 6.285.1. add POST Adds a new watchdog to the virtual machine. For example, to add a watchdog to a virtual machine, send a request like this: POST /ovirt-engine/api/vms/123/watchdogs with response body: <watchdog href="/ovirt-engine/api/vms/123/watchdogs/00000000-0000-0000-0000-000000000000" id="00000000-0000-0000-0000-000000000000"> <vm href="/ovirt-engine/api/vms/123" id="123"/> <action>poweroff</action> <model>i6300esb</model> </watchdog> Table 6.893. Parameters summary Name Type Direction Summary watchdog Watchdog In/Out The information about the watchdog. 6.285.1.1. watchdog The information about the watchdog. The request data must contain a model element (such as i6300esb) and an action element (one of none, reset, poweroff, dump, pause). The response data additionally contains references to the added watchdog and to the virtual machine. 6.285.2. list GET The list of watchdogs of the virtual machine. The order of the returned list of watchdogs isn't guaranteed. Table 6.894. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed. max Integer In Sets the maximum number of watchdogs to return. watchdogs Watchdog[] Out The information about the watchdog. 6.285.2.1. follow Indicates which inner links should be followed. The objects referenced by these links will be fetched as part of the current request. See here for details. 6.285.2.2. max Sets the maximum number of watchdogs to return. If not specified all the watchdogs are returned. 6.285.2.3. watchdogs The information about the watchdog. The information consists of the model element, the action element and a reference to the virtual machine. It may look like this: <watchdogs> <watchdog href="/ovirt-engine/api/vms/123/watchdogs/00000000-0000-0000-0000-000000000000" id="00000000-0000-0000-0000-000000000000"> <vm href="/ovirt-engine/api/vms/123" id="123"/> <action>poweroff</action> <model>i6300esb</model> </watchdog> </watchdogs> 6.286. Vms Table 6.895. Methods summary Name Summary add Creates a new virtual machine. list Returns the list of virtual machines of the system. 6.286.1. add POST Creates a new virtual machine. The virtual machine can be created in different ways: From a template.
In this case the identifier or name of the template must be provided. For example, using a plain shell script and XML: #!/bin/sh -ex url="https://engine.example.com/ovirt-engine/api" user="admin@internal" password="..." curl \ --verbose \ --cacert /etc/pki/ovirt-engine/ca.pem \ --user "${user}:${password}" \ --request POST \ --header "Version: 4" \ --header "Content-Type: application/xml" \ --header "Accept: application/xml" \ --data ' <vm> <name>myvm</name> <template> <name>Blank</name> </template> <cluster> <name>mycluster</name> </cluster> </vm> ' \ "${url}/vms" From a snapshot. In this case the identifier of the snapshot has to be provided. For example, using a plain shell script and XML: #!/bin/sh -ex url="https://engine.example.com/ovirt-engine/api" user="admin@internal" password="..." curl \ --verbose \ --cacert /etc/pki/ovirt-engine/ca.pem \ --user "${user}:${password}" \ --request POST \ --header "Content-Type: application/xml" \ --header "Accept: application/xml" \ --data ' <vm> <name>myvm</name> <snapshots> <snapshot id="266742a5-6a65-483c-816d-d2ce49746680"/> </snapshots> <cluster> <name>mycluster</name> </cluster> </vm> ' \ "${url}/vms" When creating a virtual machine from a template or from a snapshot it is usually useful to explicitly indicate in what storage domain to create the disks for the virtual machine. If the virtual machine is created from a template then this is achieved by passing a set of disk_attachment elements that indicate the mapping: <vm> ... <disk_attachments> <disk_attachment> <disk id="8d4bd566-6c86-4592-a4a7-912dbf93c298"> <storage_domains> <storage_domain id="9cb6cb0a-cf1d-41c2-92ca-5a6d665649c9"/> </storage_domains> </disk> </disk_attachment> </disk_attachments> </vm> When the virtual machine is created from a snapshot this set of disks is slightly different: it uses the image_id attribute instead of id. <vm> ... <disk_attachments> <disk_attachment> <disk> <image_id>8d4bd566-6c86-4592-a4a7-912dbf93c298</image_id> <storage_domains> <storage_domain id="9cb6cb0a-cf1d-41c2-92ca-5a6d665649c9"/> </storage_domains> </disk> </disk_attachment> </disk_attachments> </vm> It is possible to specify additional virtual machine parameters in the XML description, e.g. a virtual machine of desktop type, with 2 GiB of RAM and an additional description can be added sending a request body like the following: <vm> <name>myvm</name> <description>My Desktop Virtual Machine</description> <type>desktop</type> <memory>2147483648</memory> ... </vm> A bootable CDROM device can be set like this: <vm> ... <os> <boot dev="cdrom"/> </os> </vm> In order to boot from CDROM, you first need to insert a disk, as described in the CDROM service. Then booting from that CDROM can be specified using the os.boot.devices attribute: <vm> ... <os> <boot> <devices> <device>cdrom</device> </devices> </boot> </os> </vm> In all cases the name or identifier of the cluster where the virtual machine will be created is mandatory. Table 6.896. Parameters summary Name Type Direction Summary auto_pinning_policy AutoPinningPolicy In Specifies if and how the auto CPU and NUMA configuration is applied. clone Boolean In Specifies if the virtual machine should be independent of the template. clone_permissions Boolean In Specifies if the permissions of the template should be copied to the virtual machine. filter Boolean In Relevant for admin users only. seal Boolean In Specifies if the virtual machine should be sealed after creation. vm Vm In/Out 6.286.1.1.
auto_pinning_policy Specifies if and how the auto CPU and NUMA configuration is applied. Important Since version 4.5 of the engine this operation is deprecated, and preserved only for backwards compatibility. It will be removed in the future. Instead please use POST followed by the add operation. An example for a request: POST /ovirt-engine/api/vms?auto_pinning_policy=existing With a request body like this: <vm> <name>myvm</name> <template> <name>mytemplate</name> </template> <cluster> <name>mycluster</name> </cluster> <placement_policy> <hosts> <host> <name>myhost</name> </host> </hosts> </placement_policy> </vm> 6.286.1.2. clone Specifies if the virtual machine should be independent of the template. When a virtual machine is created from a template, by default the disks of the virtual machine depend on the disks of the template; they use the copy-on-write mechanism so that only the differences from the template take up real storage space. If this parameter is specified and the value is true then the disks of the created virtual machine will be cloned, and independent of the template. For example, to create an independent virtual machine, send a request like this: POST /ovirt-engine/api/vms?clone=true With a request body like this: <vm> <name>myvm</name> <template> <name>mytemplate</name> </template> <cluster> <name>mycluster</name> </cluster> </vm> Note When this parameter is true the permissions of the template will also be copied, as when using clone_permissions=true. 6.286.1.3. clone_permissions Specifies if the permissions of the template should be copied to the virtual machine. If this optional parameter is provided, and its value is true, then the permissions of the template (only the direct ones, not the inherited ones) will be copied to the created virtual machine. For example, to create a virtual machine from the mytemplate template copying its permissions, send a request like this: POST /ovirt-engine/api/vms?clone_permissions=true With a request body like this: <vm> <name>myvm</name> <template> <name>mytemplate</name> </template> <cluster> <name>mycluster</name> </cluster> </vm> 6.286.1.4. filter Relevant for admin users only. Indicates whether to assign the UserVmManager role on the created Virtual Machine for this user. This will enable the user to later access the Virtual Machine as though he were a non-admin user, foregoing his admin permissions (by providing filter=true). Note admin-as-user (meaning providing filter=true) POST requests on an existing Virtual Machine will fail unless the Virtual Machine has been previously created by the admin as a user (meaning with filter=true). 6.286.1.5. seal Specifies if the virtual machine should be sealed after creation. If this optional parameter is provided, and its value is true, the virtual machine will be sealed after creation. If the value is false, the virtual machine will not be sealed. If the parameter is not provided, the virtual machine will be sealed only if it is created from a sealed template and its guest OS is not set to Windows. For example, to create a virtual machine from the mytemplate template and to seal it, send a request like this: POST /ovirt-engine/api/vms?seal=true With a request body like this: <vm> <name>myvm</name> <template> <name>mytemplate</name> </template> <cluster> <name>mycluster</name> </cluster> </vm> 6.286.2. list GET Returns the list of virtual machines of the system. The order of the returned list of virtual machines is guaranteed only if the sortby clause is included in the search parameter. Table 6.897. Parameters summary Name Type Direction Summary all_content Boolean In Indicates if all the attributes of the virtual machines should be included in the response.
case_sensitive Boolean In Indicates if the search performed using the search parameter should be performed taking case into account. filter Boolean In Indicates if the results should be filtered according to the permissions of the user. follow String In Indicates which inner links should be followed. max Integer In The maximum number of results to return. ovf_as_ova Boolean In Indicates if the results should expose the OVF as it appears in OVA files of that VM. search String In A query string used to restrict the returned virtual machines. vms Vm[] Out 6.286.2.1. all_content Indicates if all the attributes of the virtual machines should be included in the response. By default the following attributes are excluded: console initialization.configuration.data - The OVF document describing the virtual machine. rng_source soundcard virtio_scsi For example, to retrieve the complete representation of the virtual machines send a request like this: GET /ovirt-engine/api/vms?all_content=true Note The reason for not including these attributes is performance: they are seldom used and they require additional queries to the database. So try to use this parameter only when it is really needed. 6.286.2.2. case_sensitive Indicates if the search performed using the search parameter should be performed taking case into account. The default value is true, which means that case is taken into account. If you want to search ignoring case set it to false. 6.286.2.3. follow Indicates which inner links should be followed. The objects referenced by these links will be fetched as part of the current request. See here for details. 6.286.2.4. ovf_as_ova Indicates if the results should expose the OVF as it appears in OVA files of that VM. The OVF document describing the virtual machine. This parameter will work only when all_content=true is set. The OVF will be presented in initialization.configuration.data. For example: GET /ovirt-engine/api/vms?all_content=true&ovf_as_ova=true 6.287. VnicProfile This service manages a vNIC profile. Table 6.898. Methods summary Name Summary get Retrieves details about a vNIC profile. remove Removes the vNIC profile. update Updates details of a vNIC profile. 6.287.1. get GET Retrieves details about a vNIC profile. Table 6.899. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed. profile VnicProfile Out 6.287.1.1. follow Indicates which inner links should be followed. The objects referenced by these links will be fetched as part of the current request. See here for details. 6.287.2. remove DELETE Removes the vNIC profile. Table 6.900. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.287.3. update PUT Updates details of a vNIC profile. Table 6.901. Parameters summary Name Type Direction Summary async Boolean In Indicates if the update should be performed asynchronously. profile VnicProfile In/Out The vNIC profile that is being updated. 6.288. VnicProfiles This service manages the collection of all vNIC profiles. Table 6.902. Methods summary Name Summary add Add a vNIC profile. list List all vNIC profiles. 6.288.1. add POST Add a vNIC profile. For example, to add the vNIC profile 123 to the network 456 send a request to: POST /ovirt-engine/api/networks/456/vnicprofiles With the following body: <vnic_profile id="123"> <name>new_vNIC_name</name> <pass_through> <mode>disabled</mode> </pass_through> <port_mirroring>false</port_mirroring> </vnic_profile> Please note that there is a default network filter for each vNIC profile.
For more details of how the default network filter is calculated please refer to the documentation in NetworkFilters. Note The automatically created vNIC profile for the external network will be created without a network filter. The output of creating a new vNIC profile depends on the body arguments that were given. In case no network filter was given, the default network filter will be configured. For example: <vnic_profile href="/ovirt-engine/api/vnicprofiles/123" id="123"> <name>new_vNIC_name</name> <link href="/ovirt-engine/api/vnicprofiles/123/permissions" rel="permissions"/> <pass_through> <mode>disabled</mode> </pass_through> <port_mirroring>false</port_mirroring> <network href="/ovirt-engine/api/networks/456" id="456"/> <network_filter href="/ovirt-engine/api/networkfilters/789" id="789"/> </vnic_profile> In case an empty network filter was given, no network filter will be configured for the specific vNIC profile regardless of the vNIC profile's default network filter. For example: <vnic_profile> <name>no_network_filter</name> <network_filter/> </vnic_profile> In case a specific valid network filter id was given, the vNIC profile will be configured with the given network filter regardless of the vNIC profile's default network filter. For example: <vnic_profile> <name>user_choice_network_filter</name> <network_filter id="0000001b-001b-001b-001b-0000000001d5"/> </vnic_profile> Table 6.903. Parameters summary Name Type Direction Summary profile VnicProfile In/Out The vNIC profile that is being added. 6.288.2. list GET List all vNIC profiles. The order of the returned list of vNIC profiles isn't guaranteed. Table 6.904. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed. max Integer In Sets the maximum number of profiles to return. profiles VnicProfile[] Out The list of all vNIC profiles. 6.288.2.1. follow Indicates which inner links should be followed. The objects referenced by these links will be fetched as part of the current request. See here for details. 6.288.2.2. max Sets the maximum number of profiles to return. If not specified all the profiles are returned. 6.289. Weight Table 6.905. Methods summary Name Summary get remove 6.289.1. get GET Table 6.906. Parameters summary Name Type Direction Summary filter Boolean In Indicates if the results should be filtered according to the permissions of the user. follow String In Indicates which inner links should be followed. weight Weight Out 6.289.1.1. follow Indicates which inner links should be followed. The objects referenced by these links will be fetched as part of the current request. See here for details. 6.289.2. remove DELETE Table 6.907. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.290. Weights Table 6.908. Methods summary Name Summary add Add a weight to a specified user-defined scheduling policy. list Returns the list of weights. 6.290.1. add POST Add a weight to a specified user-defined scheduling policy. Table 6.909. Parameters summary Name Type Direction Summary weight Weight In/Out 6.290.2. list GET Returns the list of weights. The order of the returned list of weights isn't guaranteed. Table 6.910. Parameters summary Name Type Direction Summary filter Boolean In Indicates if the results should be filtered according to the permissions of the user. follow String In Indicates which inner links should be followed. max Integer In Sets the maximum number of weights to return.
weights Weight[] Out 6.290.2.1. follow Indicates which inner links should be followed. The objects referenced by these links will be fetched as part of the current request. See here for details. 6.290.2.2. max Sets the maximum number of weights to return. If not specified all the weights are returned.
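For example, assuming the weights belong to a user-defined scheduling policy with id 123, they could be listed with a request like this (the path is an assumption based on the scheduling policy services of this API; the identifier is a placeholder): GET /ovirt-engine/api/schedulingpolicies/123/weights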
[ "<affinity_group id=\"00000000-0000-0000-0000-000000000000\"> <name>AF_GROUP_001</name> <cluster id=\"00000000-0000-0000-0000-000000000000\"/> <positive>true</positive> <enforcing>true</enforcing> </affinity_group>", "DELETE /ovirt-engine/api/clusters/000-000/affinitygroups/123-456", "POST /ovirt-engine/api/clusters/123/affinitygroups/456/hostlabels", "<affinity_label id=\"789\"/>", "POST /ovirt-engine/api/clusters/123/affinitygroups/456/hosts", "<host id=\"789\"/>", "POST /ovirt-engine/api/clusters/123/affinitygroups/456/vmlabels", "<affinity_label id=\"789\"/>", "POST /ovirt-engine/api/clusters/123/affinitygroups/456/vms", "<vm id=\"789\"/>", "POST /ovirt-engine/api/clusters/000-000/affinitygroups", "<affinity_group> <name>AF_GROUP_001</name> <hosts_rule> <enforcing>true</enforcing> <positive>true</positive> </hosts_rule> <vms_rule> <enabled>false</enabled> </vms_rule> </affinity_group>", "POST /ovirt-engine/api/vms/123/permissions", "<permission> <role> <name>UserVmManager</name> </role> <user id=\"456\"/> </permission>", "POST /ovirt-engine/api/permissions", "<permission> <role> <name>SuperUser</name> </role> <user id=\"456\"/> </permission>", "POST /ovirt-engine/api/clusters/123/permissions", "<permission> <role> <name>UserRole</name> </role> <group id=\"789\"/> </permission>", "GET /ovirt-engine/api/clusters/123/permissions", "<permissions> <permission id=\"456\"> <cluster id=\"123\"/> <role id=\"789\"/> <user id=\"451\"/> </permission> <permission id=\"654\"> <cluster id=\"123\"/> <role id=\"789\"/> <group id=\"127\"/> </permission> </permissions>", "GET /ovirt-engine/api/vms/123/tags/456", "<tag href=\"/ovirt-engine/api/tags/456\" id=\"456\"> <name>root</name> <description>root</description> <vm href=\"/ovirt-engine/api/vms/123\" id=\"123\"/> </tag>", "DELETE /ovirt-engine/api/vms/123/tags/456", "POST /ovirt-engine/api/vms/123/tags", "<tag> <name>mytag</name> </tag>", "GET /ovirt-engine/api/vms/123/tags", "<tags> <tag href=\"/ovirt-engine/api/tags/222\" id=\"222\"> <name>mytag</name> <description>mytag</description> <vm href=\"/ovirt-engine/api/vms/123\" id=\"123\"/> </tag> </tags>", "POST /ovirt-engine/api/datacenters/123/storagedomains/456/activate", "<action/>", "POST /ovirt-engine/api/datacenters/123/storagedomains/456/deactivate", "<action/>", "POST /ovirt-engine/api/datacenters/123/storagedomains/456/deactivate", "<action> <force>true</force> <action>", "POST /ovirt-engine/api/storagedomains/123/disks?unregistered=true", "<disk id=\"456\"/>", "POST /ovirt-engine/api/storagedomains/123/disks", "<disk> <name>mydisk</name> <format>cow</format> <provisioned_size>1073741824</provisioned_size> </disk>", "GET /ovirt-engine/api/bookmarks/123", "<bookmark href=\"/ovirt-engine/api/bookmarks/123\" id=\"123\"> <name>example_vm</name> <value>vm: name=example*</value> </bookmark>", "DELETE /ovirt-engine/api/bookmarks/123", "PUT /ovirt-engine/api/bookmarks/123", "<bookmark> <name>new_example_vm</name> <value>vm: name=new_example*</value> </bookmark>", "POST /ovirt-engine/api/bookmarks", "<bookmark> <name>new_example_vm</name> <value>vm: name=new_example*</value> </bookmark>", "GET /ovirt-engine/api/bookmarks", "<bookmarks> <bookmark href=\"/ovirt-engine/api/bookmarks/123\" id=\"123\"> <name>database</name> <value>vm: name=database*</value> </bookmark> <bookmark href=\"/ovirt-engine/api/bookmarks/456\" id=\"456\"> <name>example</name> <value>vm: name=example*</value> </bookmark> </bookmarks>", "GET /ovirt-engine/api/clusters/123", "<cluster href=\"/ovirt-engine/api/clusters/123\" 
id=\"123\"> <actions> <link href=\"/ovirt-engine/api/clusters/123/resetemulatedmachine\" rel=\"resetemulatedmachine\"/> </actions> <name>Default</name> <description>The default server cluster</description> <link href=\"/ovirt-engine/api/clusters/123/networks\" rel=\"networks\"/> <link href=\"/ovirt-engine/api/clusters/123/permissions\" rel=\"permissions\"/> <link href=\"/ovirt-engine/api/clusters/123/glustervolumes\" rel=\"glustervolumes\"/> <link href=\"/ovirt-engine/api/clusters/123/glusterhooks\" rel=\"glusterhooks\"/> <link href=\"/ovirt-engine/api/clusters/123/affinitygroups\" rel=\"affinitygroups\"/> <link href=\"/ovirt-engine/api/clusters/123/cpuprofiles\" rel=\"cpuprofiles\"/> <ballooning_enabled>false</ballooning_enabled> <cpu> <architecture>x86_64</architecture> <type>Intel Nehalem Family</type> </cpu> <error_handling> <on_error>migrate</on_error> </error_handling> <fencing_policy> <enabled>true</enabled> <skip_if_connectivity_broken> <enabled>false</enabled> <threshold>50</threshold> </skip_if_connectivity_broken> <skip_if_sd_active> <enabled>false</enabled> </skip_if_sd_active> </fencing_policy> <gluster_service>false</gluster_service> <ha_reservation>false</ha_reservation> <ksm> <enabled>true</enabled> <merge_across_nodes>true</merge_across_nodes> </ksm> <memory_policy> <over_commit> <percent>100</percent> </over_commit> <transparent_hugepages> <enabled>true</enabled> </transparent_hugepages> </memory_policy> <migration> <auto_converge>inherit</auto_converge> <bandwidth> <assignment_method>auto</assignment_method> </bandwidth> <compressed>inherit</compressed> </migration> <required_rng_sources> <required_rng_source>random</required_rng_source> </required_rng_sources> <scheduling_policy href=\"/ovirt-engine/api/schedulingpolicies/456\" id=\"456\"/> <threads_as_cores>false</threads_as_cores> <trusted_service>false</trusted_service> <tunnel_migration>false</tunnel_migration> <version> <major>4</major> <minor>0</minor> </version> <virt_service>true</virt_service> <data_center href=\"/ovirt-engine/api/datacenters/111\" id=\"111\"/> </cluster>", "POST /ovirt-engine/api/clusters/123/refreshglusterhealstatus", "DELETE /ovirt-engine/api/clusters/00000000-0000-0000-0000-000000000000", "POST /ovirt-engine/api/clusters/123/syncallnetworks", "<action/>", "PUT /ovirt-engine/api/clusters/123", "<cluster> <cpu> <type>Intel Haswell-noTSX Family</type> </cpu> </cluster>", "POST /ovirt-engine/api/clusters/123/upgrade", "<action> <upgrade_action> start </upgrade_action> </action>", "<action> <upgrade_action> update_progress </upgrade_action> <upgrade_percent_complete> 15 </upgrade_percent_complete> </action>", "GET /ovirt-engine/api/clusters/123/enabledfeatures/456", "<cluster_feature id=\"456\"> <name>libgfapi_supported</name> </cluster_feature>", "DELETE /ovirt-engine/api/clusters/123/enabledfeatures/456", "POST /ovirt-engine/api/clusters/123/enabledfeatures", "<cluster_feature id=\"456\"/>", "GET /ovirt-engine/api/clusters/123/enabledfeatures", "<enabled_features> <cluster_feature id=\"123\"> <name>test_feature</name> </cluster_feature> </enabled_features>", "GET /ovirt-engine/api/clusterlevels/4.1/clusterfeatures/456", "<cluster_feature id=\"456\"> <name>libgfapi_supported</name> </cluster_feature>", "GET /ovirt-engine/api/clusterlevels/4.1/clusterfeatures", "<cluster_features> <cluster_feature id=\"123\"> <name>test_feature</name> </cluster_feature> </cluster_features>", "GET /ovirt-engine/api/clusterlevels/3.6", "<cluster_level id=\"3.6\"> <cpu_types> <cpu_type> <name>Intel Nehalem 
Family</name> <level>3</level> <architecture>x86_64</architecture> </cpu_type> </cpu_types> <permits> <permit id=\"1\"> <name>create_vm</name> <administrative>false</administrative> </permit> </permits> </cluster_level>", "GET /ovirt-engine/api/clusterlevels", "<cluster_levels> <cluster_level id=\"4.0\"> </cluster_level> </cluster_levels>", "POST /ovirt-engine/api/clusters/123/networks", "<network id=\"123\" />", "POST /ovirt-engine/api/clusters", "<cluster> <name>mycluster</name> <cpu> <type>Intel Nehalem Family</type> </cpu> <data_center id=\"123\"/> </cluster>", "POST /ovirt-engine/api/clusters", "<cluster> <name>mycluster</name> <cpu> <type>Intel Nehalem Family</type> </cpu> <data_center id=\"123\"/> <external_network_providers> <external_provider name=\"ovirt-provider-ovn\"/> </external_network_providers> </cluster>", "POST /ovirt-engine/api/datacenters/123/cleanfinishedtasks", "<action/>", "GET /ovirt-engine/api/datacenters/123", "<data_center href=\"/ovirt-engine/api/datacenters/123\" id=\"123\"> <name>Default</name> <description>The default Data Center</description> <link href=\"/ovirt-engine/api/datacenters/123/clusters\" rel=\"clusters\"/> <link href=\"/ovirt-engine/api/datacenters/123/storagedomains\" rel=\"storagedomains\"/> <link href=\"/ovirt-engine/api/datacenters/123/permissions\" rel=\"permissions\"/> <link href=\"/ovirt-engine/api/datacenters/123/networks\" rel=\"networks\"/> <link href=\"/ovirt-engine/api/datacenters/123/quotas\" rel=\"quotas\"/> <link href=\"/ovirt-engine/api/datacenters/123/qoss\" rel=\"qoss\"/> <link href=\"/ovirt-engine/api/datacenters/123/iscsibonds\" rel=\"iscsibonds\"/> <local>false</local> <quota_mode>disabled</quota_mode> <status>up</status> <storage_format>v3</storage_format> <supported_versions> <version> <major>4</major> <minor>0</minor> </version> </supported_versions> <version> <major>4</major> <minor>0</minor> </version> <mac_pool href=\"/ovirt-engine/api/macpools/456\" id=\"456\"/> </data_center>", "DELETE /ovirt-engine/api/datacenters/123", "POST /ovirt-engine/api/datacenters/123/setmaster", "<action> <storage_domain id=\"456\"/> </action>", "PUT /ovirt-engine/api/datacenters/123", "<data_center> <name>myupdatedname</name> <description>An updated description for the data center</description> </data_center>", "POST /ovirt-engine/api/datacenters/123/networks", "<network> <name>mynetwork</name> </network>", "POST /ovirt-engine/api/datacenters", "<data_center> <name>mydc</name> <local>false</local> </data_center>", "GET /ovirt-engine/api/datacenters", "curl --request GET --cacert /etc/pki/ovirt-engine/ca.pem --header \"Version: 4\" --header \"Accept: application/xml\" --user \"admin@internal:mypassword\" https://myengine.example.com/ovirt-engine/api/datacenters", "<data_center href=\"/ovirt-engine/api/datacenters/123\" id=\"123\"> <name>Default</name> <description>The default Data Center</description> <link href=\"/ovirt-engine/api/datacenters/123/networks\" rel=\"networks\"/> <link href=\"/ovirt-engine/api/datacenters/123/storagedomains\" rel=\"storagedomains\"/> <link href=\"/ovirt-engine/api/datacenters/123/permissions\" rel=\"permissions\"/> <link href=\"/ovirt-engine/api/datacenters/123/clusters\" rel=\"clusters\"/> <link href=\"/ovirt-engine/api/datacenters/123/qoss\" rel=\"qoss\"/> <link href=\"/ovirt-engine/api/datacenters/123/iscsibonds\" rel=\"iscsibonds\"/> <link href=\"/ovirt-engine/api/datacenters/123/quotas\" rel=\"quotas\"/> <local>false</local> <quota_mode>disabled</quota_mode> <status>up</status> <supported_versions> 
<version> <major>4</major> <minor>0</minor> </version> </supported_versions> <version> <major>4</major> <minor>0</minor> </version> </data_center>", "POST /ovirt-engine/api/disks/123/convert", "<action> <disk> <sparse>true</sparse> <format>raw</format> </disk> </action>", "POST /ovirt-engine/api/disks/123/copy", "<action> <storage_domain id=\"456\"/> <disk> <name>mydisk</name> </disk> </action>", "<action> <storage_domain id=\"456\"/> <disk_profile id=\"987\"/> <quota id=\"753\"/> </action>", "POST /ovirt-engine/api/storagedomains/123/disks/789", "<action> <storage_domain> <name>mydata</name> </storage_domain> </action>", "GET /ovirt-engine/api/disks/123?all_content=true", "POST /ovirt-engine/api/disks/123/move", "<action> <storage_domain id=\"456\"/> </action>", "<action> <storage_domain id=\"456\"/> <disk_profile id=\"987\"/> <quota id=\"753\"/> </action>", "POST /ovirt-engine/api/disks/123/refreshlun", "<action> <host id='456'/> </action>", "PUT /ovirt-engine/api/disks/123", "<disk> <qcow_version>qcow2_v3</qcow_version> <alias>new-alias</alias> <description>new-desc</description> </disk>", "GET /ovirt-engine/api/vms/123/diskattachments/456", "<disk_attachment href=\"/ovirt-engine/api/vms/123/diskattachments/456\" id=\"456\"> <active>true</active> <bootable>true</bootable> <interface>virtio</interface> <disk href=\"/ovirt-engine/api/disks/456\" id=\"456\"/> <vm href=\"/ovirt-engine/api/vms/123\" id=\"123\"/> </disk_attachment>", "DELETE /ovirt-engine/api/vms/123/diskattachments/456?detach_only=true", "PUT /vms/{vm:id}/disksattachments/{attachment:id} <disk_attachment> <bootable>true</bootable> <interface>ide</interface> <active>true</active> <disk> <name>mydisk</name> <provisioned_size>1024</provisioned_size> </disk> </disk_attachment>", "<disk_attachment> <bootable>true</bootable> <pass_discard>true</pass_discard> <interface>ide</interface> <active>true</active> <disk id=\"123\"/> </disk_attachment>", "<disk_attachment> <bootable>true</bootable> <pass_discard>true</pass_discard> <interface>ide</interface> <active>true</active> <disk> <name>mydisk</name> <provisioned_size>1024</provisioned_size> </disk> </disk_attachment>", "POST /ovirt-engine/api/vms/345/diskattachments", "POST /ovirt-engine/api/disks", "<disk> <storage_domains> <storage_domain id=\"123\"/> </storage_domains> <name>mydisk</name> <provisioned_size>1048576</provisioned_size> <format>cow</format> <backup>incremental</backup> </disk>", "POST /ovirt-engine/api/disks", "<disk> <alias>mylun</alias> <lun_storage> <host id=\"123\"/> <type>iscsi</type> <logical_units> <logical_unit id=\"456\"> <address>10.35.10.20</address> <port>3260</port> <target>iqn.2017-01.com.myhost:444</target> </logical_unit> </logical_units> </lun_storage> </disk>", "qemu-img info b7a4c6c5-443b-47c5-967f-6abc79675e8b/myimage.img image: b548366b-fb51-4b41-97be-733c887fe305 file format: qcow2 virtual size: 1.0G (1073741824 bytes) disk size: 196K cluster_size: 65536 backing file: ad58716a-1fe9-481f-815e-664de1df04eb backing file format: raw", "POST /ovirt-engine/api/disks", "<disk id=\"b7a4c6c5-443b-47c5-967f-6abc79675e8b\"> <image_id>b548366b-fb51-4b41-97be-733c887fe305</image_id> <storage_domains> <storage_domain id=\"123\"/> </storage_domains> <name>mydisk</name> <provisioned_size>1048576</provisioned_size> <format>cow</format> </disk>", "GET /ovirt-engine/api/disks", "<disks> <disk id=\"123\"> <actions>...</actions> <name>MyDisk</name> <description>MyDisk description</description> <link href=\"/ovirt-engine/api/disks/123/permissions\" 
rel=\"permissions\"/> <link href=\"/ovirt-engine/api/disks/123/statistics\" rel=\"statistics\"/> <actual_size>5345845248</actual_size> <alias>MyDisk alias</alias> <status>ok</status> <storage_type>image</storage_type> <wipe_after_delete>false</wipe_after_delete> <disk_profile id=\"123\"/> <quota id=\"123\"/> <storage_domains>...</storage_domains> </disk> </disks>", "GET /ovirt-engine/api/domains/5678", "<domain href=\"/ovirt-engine/api/domains/5678\" id=\"5678\"> <name>internal-authz</name> <link href=\"/ovirt-engine/api/domains/5678/users\" rel=\"users\"/> <link href=\"/ovirt-engine/api/domains/5678/groups\" rel=\"groups\"/> <link href=\"/ovirt-engine/api/domains/5678/users?search={query}\" rel=\"users/search\"/> <link href=\"/ovirt-engine/api/domains/5678/groups?search={query}\" rel=\"groups/search\"/> </domain>", "GET /ovirt-engine/api/domains/5678/users/1234", "<user href=\"/ovirt-engine/api/users/1234\" id=\"1234\"> <name>admin</name> <namespace>*</namespace> <principal>admin</principal> <user_name>admin@internal-authz</user_name> <domain href=\"/ovirt-engine/api/domains/5678\" id=\"5678\"> <name>internal-authz</name> </domain> <groups/> </user>", "GET /ovirt-engine/api/domains/5678/users", "<users> <user href=\"/ovirt-engine/api/domains/5678/users/1234\" id=\"1234\"> <name>admin</name> <namespace>*</namespace> <principal>admin</principal> <user_name>admin@internal-authz</user_name> <domain href=\"/ovirt-engine/api/domains/5678\" id=\"5678\"> <name>internal-authz</name> </domain> <groups/> </user> </users>", "GET /ovirt-engine/api/domains", "<domains> <domain href=\"/ovirt-engine/api/domains/5678\" id=\"5678\"> <name>internal-authz</name> <link href=\"/ovirt-engine/api/domains/5678/users\" rel=\"users\"/> <link href=\"/ovirt-engine/api/domains/5678/groups\" rel=\"groups\"/> <link href=\"/ovirt-engine/api/domains/5678/users?search={query}\" rel=\"users/search\"/> <link href=\"/ovirt-engine/api/domains/5678/groups?search={query}\" rel=\"groups/search\"/> </domain> </domains>", "GET /ovirt-engine/api/katelloerrata", "<katello_errata> <katello_erratum href=\"/ovirt-engine/api/katelloerrata/123\" id=\"123\"> <name>RHBA-2013:XYZ</name> <description>The description of the erratum</description> <title>some bug fix update</title> <type>bugfix</type> <issued>2013-11-20T02:00:00.000+02:00</issued> <solution>Few guidelines regarding the solution</solution> <summary>Updated packages that fix one bug are now available for XYZ</summary> <packages> <package> <name>libipa_hbac-1.9.2-82.11.el6_4.i686</name> </package> </packages> </katello_erratum> </katello_errata>", "GET /ovirt-engine/api/events/123", "<event href=\"/ovirt-engine/api/events/123\" id=\"123\"> <description>Host example.com was added by admin@internal-authz.</description> <code>42</code> <correlation_id>135</correlation_id> <custom_id>-1</custom_id> <flood_rate>30</flood_rate> <origin>oVirt</origin> <severity>normal</severity> <time>2016-12-11T11:13:44.654+02:00</time> <cluster href=\"/ovirt-engine/api/clusters/456\" id=\"456\"/> <host href=\"/ovirt-engine/api/hosts/789\" id=\"789\"/> <user href=\"/ovirt-engine/api/users/987\" id=\"987\"/> </event>", "DELETE /ovirt-engine/api/events/123", "GET /ovirt-engine/api/users/123/vm_console_detected", "<event-subscription href=\"/ovirt-engine/api/users/123/event-subscriptions/vm_console_detected\"> <event>vm_console_detected</event> <notification_method>smtp</notification_method> <user href=\"/ovirt-engine/api/users/123\" id=\"123\"/> <address>[email protected]</address> </event-subscription>", 
"DELETE /ovirt-engine/api/users/123/vm_console_detected", "POST /ovirt-engine/api/users/123/eventsubscriptions", "<event_subscription> <event>host_high_cpu_use</event> <address>[email protected]</address> </event_subscription>", "GET /ovirt-engine/api/users/123/event-subscriptions", "<event-subscriptions> <event-subscription href=\"/ovirt-engine/api/users/123/event-subscriptions/host_install_failed\"> <event>host_install_failed</event> <notification_method>smtp</notification_method> <user href=\"/ovirt-engine/api/users/123\" id=\"123\"/> <address>[email protected]</address> </event-subscription> <event-subscription href=\"/ovirt-engine/api/users/123/event-subscriptions/vm_paused\"> <event>vm_paused</event> <notification_method>smtp</notification_method> <user href=\"/ovirt-engine/api/users/123\" id=\"123\"/> <address>[email protected]</address> </event-subscription> </event-subscriptions>", "POST /ovirt-engine/api/events <event> <description>File system /home is full</description> <severity>alert</severity> <origin>mymonitor</origin> <custom_id>1467879754</custom_id> </event>", "POST /ovirt-engine/api/events <event> <description>File system /home is full</description> <severity>alert</severity> <origin>mymonitor</origin> <custom_id>1467879754</custom_id> <vm id=\"aae98225-5b73-490d-a252-899209af17e9\"/> </event>", "GET /ovirt-engine/api/events", "<events> <event href=\"/ovirt-engine/api/events/2\" id=\"2\"> <description>User admin@internal-authz logged out.</description> <code>31</code> <correlation_id>1e892ea9</correlation_id> <custom_id>-1</custom_id> <flood_rate>30</flood_rate> <origin>oVirt</origin> <severity>normal</severity> <time>2016-09-14T12:14:34.541+02:00</time> <user href=\"/ovirt-engine/api/users/57d91d48-00da-0137-0138-000000000244\" id=\"57d91d48-00da-0137-0138-000000000244\"/> </event> <event href=\"/ovirt-engine/api/events/1\" id=\"1\"> <description>User admin logged in.</description> <code>30</code> <correlation_id>1fbd81f4</correlation_id> <custom_id>-1</custom_id> <flood_rate>30</flood_rate> <origin>oVirt</origin> <severity>normal</severity> <time>2016-09-14T11:54:35.229+02:00</time> <user href=\"/ovirt-engine/api/users/57d91d48-00da-0137-0138-000000000244\" id=\"57d91d48-00da-0137-0138-000000000244\"/> </event> </events>", "GET /ovirt-engine/api/events?max=1", "GET /ovirt-engine/api/events?from=123", "GET /ovirt-engine/api/events?search=severity%3Dnormal", "<events> <event href=\"/ovirt-engine/api/events/2\" id=\"2\"> <description>User admin@internal-authz logged out.</description> <code>31</code> <correlation_id>1fbd81f4</correlation_id> <custom_id>-1</custom_id> <flood_rate>30</flood_rate> <origin>oVirt</origin> <severity>normal</severity> <time>2016-09-14T11:54:35.229+02:00</time> <user href=\"/ovirt-engine/api/users/57d91d48-00da-0137-0138-000000000244\" id=\"57d91d48-00da-0137-0138-000000000244\"/> </event> <event href=\"/ovirt-engine/api/events/1\" id=\"1\"> <description>Affinity Rules Enforcement Manager started.</description> <code>10780</code> <custom_id>-1</custom_id> <flood_rate>30</flood_rate> <origin>oVirt</origin> <severity>normal</severity> <time>2016-09-14T11:52:18.861+02:00</time> </event> </events>", "sortby time asc page 1", "GET /ovirt-engine/api/events?search=sortby%20time%20asc%20page%201", "GET /ovirt-engine/api/events?search=sortby%20time%20asc%20page%202", "GET /ovirt-engine/api/externalhostproviders/123/computeresources/234", "<external_compute_resource href=\"/ovirt-engine/api/externalhostproviders/123/computeresources/234\" id=\"234\"> 
<name>hostname</name> <provider>oVirt</provider> <url>https://hostname/api</url> <user>admin@internal</user> <external_host_provider href=\"/ovirt-engine/api/externalhostproviders/123\" id=\"123\"/> </external_compute_resource>", "GET /ovirt-engine/api/externalhostproviders/123/computeresources", "<external_compute_resources> <external_compute_resource href=\"/ovirt-engine/api/externalhostproviders/123/computeresources/234\" id=\"234\"> <name>hostname</name> <provider>oVirt</provider> <url>https://address/api</url> <user>admin@internal</user> <external_host_provider href=\"/ovirt-engine/api/externalhostproviders/123\" id=\"123\"/> </external_compute_resource> </external_compute_resources>", "GET /ovirt-engine/api/externalhostproviders/123/discoveredhosts/234", "<external_discovered_host href=\"/ovirt-engine/api/externalhostproviders/123/discoveredhosts/234\" id=\"234\"> <name>mac001a4ad04040</name> <ip>10.34.67.43</ip> <last_report>2017-04-24 11:05:41 UTC</last_report> <mac>00:1a:4a:d0:40:40</mac> <subnet_name>sat0</subnet_name> <external_host_provider href=\"/ovirt-engine/api/externalhostproviders/123\" id=\"123\"/> </external_discovered_host>", "GET /ovirt-engine/api/externalhostproviders/123/discoveredhost", "<external_discovered_hosts> <external_discovered_host href=\"/ovirt-engine/api/externalhostproviders/123/discoveredhosts/456\" id=\"456\"> <name>mac001a4ad04031</name> <ip>10.34.67.42</ip> <last_report>2017-04-24 11:05:41 UTC</last_report> <mac>00:1a:4a:d0:40:31</mac> <subnet_name>sat0</subnet_name> <external_host_provider href=\"/ovirt-engine/api/externalhostproviders/123\" id=\"123\"/> </external_discovered_host> <external_discovered_host href=\"/ovirt-engine/api/externalhostproviders/123/discoveredhosts/789\" id=\"789\"> <name>mac001a4ad04040</name> <ip>10.34.67.43</ip> <last_report>2017-04-24 11:05:41 UTC</last_report> <mac>00:1a:4a:d0:40:40</mac> <subnet_name>sat0</subnet_name> <external_host_provider href=\"/ovirt-engine/api/externalhostproviders/123\" id=\"123\"/> </external_discovered_host> </external_discovered_hosts>", "GET /ovirt-engine/api/externalhostproviders/123/hostgroups/234", "<external_host_group href=\"/ovirt-engine/api/externalhostproviders/123/hostgroups/234\" id=\"234\"> <name>rhel7</name> <architecture_name>x86_64</architecture_name> <domain_name>s.com</domain_name> <operating_system_name>RedHat 7.3</operating_system_name> <subnet_name>sat0</subnet_name> <external_host_provider href=\"/ovirt-engine/api/externalhostproviders/123\" id=\"123\"/> </external_host_group>", "GET /ovirt-engine/api/externalhostproviders/123/hostgroups", "<external_host_groups> <external_host_group href=\"/ovirt-engine/api/externalhostproviders/123/hostgroups/234\" id=\"234\"> <name>rhel7</name> <architecture_name>x86_64</architecture_name> <domain_name>example.com</domain_name> <operating_system_name>RedHat 7.3</operating_system_name> <subnet_name>sat0</subnet_name> <external_host_provider href=\"/ovirt-engine/api/externalhostproviders/123\" id=\"123\"/> </external_host_group> </external_host_groups>", "GET /ovirt-engine/api/externalhostproviders/123", "<external_host_provider href=\"/ovirt-engine/api/externalhostproviders/123\" id=\"123\"> <name>mysatellite</name> <requires_authentication>true</requires_authentication> <url>https://mysatellite.example.com</url> <username>admin</username> </external_host_provider>", "POST /ovirt-engine/api/externalhostproviders/123/testconnectivity", "POST /ovirt-engine/api/externalhostproviders/123/testconnectivity", "GET 
/ovirt-engine/api/externalhostproviders/123/certificate/0", "<certificate id=\"0\"> <organization>provider.example.com</organization> <subject>CN=provider.example.com</subject> <content>...</content> </certificate>", "GET /ovirt-engine/api/externalhostproviders/123/certificates", "<certificates> <certificate id=\"789\">...</certificate> </certificates>", "POST /externaltemplateimports", "<external_template_import> <template> <name>my_template</name> </template> <cluster id=\"2b18aca2-4469-11eb-9449-482ae35a5f83\" /> <storage_domain id=\"8bb5ade5-e988-4000-8b93-dbfc6717fe50\" /> <url>ova:///mnt/ova/ova_template.ova</url> <host id=\"8bb5ade5-e988-4000-8b93-dbfc6717fe50\" /> </external_template_import>", "POST /externalvmimports", "<external_vm_import> <vm> <name>my_vm</name> </vm> <cluster id=\"360014051136c20574f743bdbd28177fd\" /> <storage_domain id=\"8bb5ade5-e988-4000-8b93-dbfc6717fe50\" /> <name>vm_name_as_is_in_vmware</name> <sparse>true</sparse> <username>vmware_user</username> <password>123456</password> <provider>VMWARE</provider> <url>vpx://wmware_user@vcenter-host/DataCenter/Cluster/esxi-host?no_verify=1</url> <drivers_iso id=\"virtio-win-1.6.7.iso\" /> </external_vm_import>", "GET /ovirt-engine/api/hosts/123/fenceagents/0", "<agent id=\"0\"> <type>apc</type> <order>1</order> <ip>192.168.1.101</ip> <user>user</user> <password>xxx</password> <port>9</port> <options>name1=value1, name2=value2</options> </agent>", "DELETE /ovirt-engine/api/hosts/123/fenceagents/0", "POST /ovirt-engine/api/hosts/123/fenceagents You should consult the /usr/sbin/fence_<agent_name> manual page for the legal parameters to [name1=value1, name2=value2,...] in the options field. If any parameter in options appears by name that means that it is mandatory. For example in <options>slot=7[,name1=value1, name2=value2,...]</options> slot is mandatory.", "<agent> <type>apc</type> <order>1</order> <ip>192.168.1.101</ip> <user>user</user> <password>xxx</password> <port>9</port> <options>slot=7[,name1=value1, name2=value2,...]</options> </agent>", "<agent> <type>apc_snmp</type> <order>1</order> <ip>192.168.1.101</ip> <user>user</user> <password>xxx</password> <port>9</port> <options>[name1=value1, name2=value2,...]</options> </agent>", "<agent> <type>cisco_ucs</type> <order>1</order> <ip>192.168.1.101</ip> <user>user</user> <password>xxx</password> <options>slot=7[,name1=value1, name2=value2,...]</options> </agent>", "<agent> <type>drac7</type> <order>1</order> <ip>192.168.1.101</ip> <user>user</user> <password>xxx</password> <options>[name1=value1, name2=value2,...]</options> </agent>", "GET /ovirt-engine/api/hosts/123/fenceagents", "<agents> <agent id=\"0\"> <type>apc</type> <order>1</order> <ip>192.168.1.101</ip> <user>user</user> <password>xxx</password> <port>9</port> <options>name1=value1, name2=value2</options> </agent> </agents>", "engine-config -s ForceRefreshDomainFilesByDefault=false", "GET /ovirt-engine/api/clusters/567/glustervolumes/123/glusterbricks/234", "<brick id=\"234\"> <name>host1:/rhgs/data/brick1</name> <brick_dir>/rhgs/data/brick1</brick_dir> <server_id>111</server_id> <status>up</status> <device>/dev/mapper/RHGS_vg1-lv_vmaddldisks</device> <fs_name>xfs</fs_name> <gluster_clients> <gluster_client> <bytes_read>2818417648</bytes_read> <bytes_written>1384694844</bytes_written> <client_port>1011</client_port> <host_name>client2</host_name> </gluster_client> </gluster_clients> <memory_pools> <memory_pool> <name>data-server:fd_t</name> <alloc_count>1626348</alloc_count> <cold_count>1020</cold_count> 
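
As a rough Python SDK counterpart to the fence agent requests above, the sketch below adds an APC agent to a host. It assumes an existing 'connection' object and illustrative values; note that the SDK's Agent type exposes the XML's <ip> and <user> elements as its address and username attributes, and the options list is omitted here for brevity:

import ovirtsdk4.types as types

# Locate the fence agents sub-collection of the host:
host_service = connection.system_service().hosts_service().host_service('123')
agents_service = host_service.fence_agents_service()

# Add an APC fence agent, mirroring the XML above:
agent = agents_service.add(
    types.Agent(
        type='apc',
        order=1,
        address='192.168.1.101',
        username='user',
        password='xxx',
        port=9,
    ),
)
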
<hot_count>4</hot_count> <max_alloc>23</max_alloc> <max_stdalloc>0</max_stdalloc> <padded_size>140</padded_size> <pool_misses>0</pool_misses> </memory_pool> </memory_pools> <mnt_options>rw,seclabel,noatime,nodiratime,attr2,inode64,sunit=512,swidth=2048,noquota</mnt_options> <pid>25589</pid> <port>49155</port> </brick>", "DELETE /ovirt-engine/api/clusters/567/glustervolumes/123/glusterbricks/234", "POST /ovirt-engine/api/clusters/567/glustervolumes/123/glusterbricks/activate", "<action> <bricks> <brick> <name>host1:/rhgs/brick1</name> </brick> </bricks> </action>", "POST /ovirt-engine/api/clusters/567/glustervolumes/123/glusterbricks", "<bricks> <brick> <server_id>111</server_id> <brick_dir>/export/data/brick3</brick_dir> </brick> </bricks>", "GET /ovirt-engine/api/clusters/567/glustervolumes/123/glusterbricks", "<bricks> <brick id=\"234\"> <name>host1:/rhgs/data/brick1</name> <brick_dir>/rhgs/data/brick1</brick_dir> <server_id>111</server_id> <status>up</status> </brick> <brick id=\"233\"> <name>host2:/rhgs/data/brick1</name> <brick_dir>/rhgs/data/brick1</brick_dir> <server_id>222</server_id> <status>up</status> </brick> </bricks>", "POST /ovirt-engine/api/clusters/567/glustervolumes/123/glusterbricks/migrate", "<action> <bricks> <brick> <name>host1:/rhgs/brick1</name> </brick> </bricks> </action>", "DELETE /ovirt-engine/api/clusters/567/glustervolumes/123/glusterbricks", "<bricks> <brick> <name>host:brick_directory</name> </brick> </bricks>", "POST /ovirt-engine/api/clusters/567/glustervolumes/123/glusterbricks/stopmigrate", "<bricks> <brick> <name>host:brick_directory</name> </brick> </bricks>", "GET /ovirt-engine/api/clusters/456/glustervolumes/123", "<gluster_volume id=\"123\"> <name>data</name> <link href=\"/ovirt-engine/api/clusters/456/glustervolumes/123/glusterbricks\" rel=\"glusterbricks\"/> <disperse_count>0</disperse_count> <options> <option> <name>storage.owner-gid</name> <value>36</value> </option> <option> <name>performance.io-cache</name> <value>off</value> </option> <option> <name>cluster.data-self-heal-algorithm</name> <value>full</value> </option> </options> <redundancy_count>0</redundancy_count> <replica_count>3</replica_count> <status>up</status> <stripe_count>0</stripe_count> <transport_types> <transport_type>tcp</transport_type> </transport_types> <volume_type>replicate</volume_type> </gluster_volume>", "POST /ovirt-engine/api/clusters/456/glustervolumes/123/getprofilestatistics", "POST /ovirt-engine/api/clusters/456/glustervolumes/123/rebalance", "DELETE /ovirt-engine/api/clusters/456/glustervolumes/123", "POST /ovirt-engine/api/clusters/456/glustervolumes/123/resetalloptions", "POST /ovirt-engine/api/clusters/456/glustervolumes/123/resetoption", "<action> <option name=\"option1\"/> </action>", "POST /ovirt-engine/api/clusters/456/glustervolumes/123/setoption", "<action> <option name=\"option1\" value=\"value1\"/> </action>", "POST /ovirt-engine/api/clusters/456/glustervolumes/123/start", "POST /ovirt-engine/api/clusters/456/glustervolumes/123/startprofile", "POST /ovirt-engine/api/clusters/456/glustervolumes/123/stop", "POST /ovirt-engine/api/clusters/456/glustervolumes/123/stopprofile", "POST /ovirt-engine/api/clusters/456/glustervolumes/123/stoprebalance", "POST /ovirt-engine/api/clusters/123/glustervolumes", "<gluster_volume> <name>myvolume</name> <volume_type>replicate</volume_type> <replica_count>3</replica_count> <bricks> <brick> <server_id>server1</server_id> <brick_dir>/exp1</brick_dir> </brick> <brick> <server_id>server2</server_id> 
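
The volume creation request above (see the XML that follows) has a straightforward Python SDK equivalent. This is a minimal sketch, assuming an existing 'connection' and that 'server1'..'server3' are the Gluster server ids of the cluster hosts:

import ovirtsdk4.types as types

# Locate the Gluster volumes of the cluster:
cluster_service = connection.system_service().clusters_service().cluster_service('123')
volumes_service = cluster_service.gluster_volumes_service()

# Create a replicated volume backed by three bricks:
volume = volumes_service.add(
    types.GlusterVolume(
        name='myvolume',
        volume_type=types.GlusterVolumeType.REPLICATE,
        replica_count=3,
        bricks=[
            types.GlusterBrick(server_id='server1', brick_dir='/exp1'),
            types.GlusterBrick(server_id='server2', brick_dir='/exp1'),
            types.GlusterBrick(server_id='server3', brick_dir='/exp1'),
        ],
    ),
)

# Start it once created:
volume_service = volumes_service.volume_service(volume.id)
volume_service.start()
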
<brick_dir>/exp1</brick_dir> </brick> <brick> <server_id>server3</server_id> <brick_dir>/exp1</brick_dir> </brick> </bricks> </gluster_volume>", "GET /ovirt-engine/api/clusters/456/glustervolumes", "GET /ovirt-engine/api/groups/123", "<group href=\"/ovirt-engine/api/groups/123\" id=\"123\"> <name>mygroup</name> <link href=\"/ovirt-engine/api/groups/123/roles\" rel=\"roles\"/> <link href=\"/ovirt-engine/api/groups/123/permissions\" rel=\"permissions\"/> <link href=\"/ovirt-engine/api/groups/123/tags\" rel=\"tags\"/> <domain_entry_id>476652557A382F67696B6D2B32762B37796E46476D513D3D</domain_entry_id> <namespace>DC=example,DC=com</namespace> <domain href=\"/ovirt-engine/api/domains/ABCDEF\" id=\"ABCDEF\"> <name>myextension-authz</name> </domain> </group>", "DELETE /ovirt-engine/api/groups/123", "POST /ovirt-engine/api/groups", "<group> <name>Developers</name> <domain> <name>internal-authz</name> </domain> </group>", "GET /ovirt-engine/api/groups", "<groups> <group href=\"/ovirt-engine/api/groups/123\" id=\"123\"> <name>mygroup</name> <link href=\"/ovirt-engine/api/groups/123/roles\" rel=\"roles\"/> <link href=\"/ovirt-engine/api/groups/123/permissions\" rel=\"permissions\"/> <link href=\"/ovirt-engine/api/groups/123/tags\" rel=\"tags\"/> <domain_entry_id>476652557A382F67696B6D2B32762B37796E46476D513D3D</domain_entry_id> <namespace>DC=example,DC=com</namespace> <domain href=\"/ovirt-engine/api/domains/ABCDEF\" id=\"ABCDEF\"> <name>myextension-authz</name> </domain> </group> </groups>", "POST /ovirt-engine/api/hosts/123/commitnetconfig", "<action/>", "POST /ovirt-engine/api/hosts/123/copyhostnetworks", "<action> <source_host id=\"456\"/> </action>", "POST /ovirt-engine/api/hosts/123/discoveriscsi", "<action> <iscsi> <address>myiscsi.example.com</address> </iscsi> </action>", "<discovered_targets> <iscsi_details> <address>10.35.1.72</address> <port>3260</port> <portal>10.35.1.72:3260,1</portal> <target>iqn.2015-08.com.tgt:444</target> </iscsi_details> </discovered_targets>", "#!/bin/sh -ex url=\"https://engine.example.com/ovirt-engine/api\" user=\"admin@internal\" password=\"...\" curl --verbose --cacert /etc/pki/ovirt-engine/ca.pem --user \"${user}:${password}\" --request POST --header \"Version: 4\" --header \"Content-Type: application/xml\" --header \"Accept: application/xml\" --data ' <action> <fence_type>start</fence_type> </action> ' \"${url}/hosts/123/fence\"", "POST /ovirt-engine/api/hosts/123/forceselectspm", "<action/>", "GET /ovirt-engine/api/hosts/123", "GET /ovirt-engine/api/hosts/123?all_content=true", "curl --verbose --cacert /etc/pki/ovirt-engine/ca.pem --request PUT --header \"Content-Type: application/json\" --header \"Accept: application/json\" --header \"Version: 4\" --user \"admin@internal:...\" --data ' { \"root_password\": \"myrootpassword\" } ' \"https://engine.example.com/ovirt-engine/api/hosts/123\"", "curl --verbose --cacert /etc/pki/ovirt-engine/ca.pem --request PUT --header \"Content-Type: application/json\" --header \"Accept: application/json\" --header \"Version: 4\" --user \"admin@internal:...\" --data ' { \"root_password\": \"myrootpassword\", \"deploy_hosted_engine\": \"true\" } ' \"https://engine.example.com/ovirt-engine/api/hosts/123\"", "POST /ovirt-engine/api/hosts/123/iscsidiscover", "<action> <iscsi> <address>myiscsi.example.com</address> </iscsi> </action>", "#!/bin/sh -ex url=\"https://engine.example.com/ovirt-engine/api\" user=\"admin@internal\" password=\"...\" curl --verbose --cacert /etc/pki/ovirt-engine/ca.pem --user
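
The iSCSI discovery actions above can also be invoked from the Python SDK. This is a minimal sketch, assuming an existing 'connection'; in the SDK versions assumed here the older iscsidiscover action is exposed as iscsi_discover and returns a list of target names:

import ovirtsdk4.types as types

# Discover iSCSI targets through a host:
host_service = connection.system_service().hosts_service().host_service('123')
targets = host_service.iscsi_discover(
    iscsi=types.IscsiDetails(address='myiscsi.example.com'),
)
for target in targets:
    print(target)
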
\"USD{user}:USD{password}\" --request DELETE --header \"Version: 4\" \"USD{url}/hosts/1ff7a191-2f3b-4eff-812b-9f91a30c3acc\"", "#!/bin/sh -ex url=\"https://engine.example.com/ovirt-engine/api\" user=\"admin@internal\" password=\"...\" curl --verbose --cacert /etc/pki/ovirt-engine/ca.pem --user \"USD{user}:USD{password}\" --request POST --header \"Version: 4\" --header \"Content-Type: application/xml\" --header \"Accept: application/xml\" --data ' <action> <modified_bonds> <host_nic> <name>bond0</name> <bonding> <options> <option> <name>mode</name> <value>4</value> </option> <option> <name>miimon</name> <value>100</value> </option> </options> <slaves> <host_nic> <name>eth1</name> </host_nic> <host_nic> <name>eth2</name> </host_nic> </slaves> </bonding> </host_nic> </modified_bonds> <modified_network_attachments> <network_attachment> <network> <name>myvlan</name> </network> <host_nic> <name>bond0</name> </host_nic> <ip_address_assignments> <ip_address_assignment> <assignment_method>static</assignment_method> <ip> <address>192.168.122.10</address> <netmask>255.255.255.0</netmask> </ip> </ip_address_assignment> </ip_address_assignments> <dns_resolver_configuration> <name_servers> <name_server>1.1.1.1</name_server> <name_server>2.2.2.2</name_server> </name_servers> </dns_resolver_configuration> </network_attachment> </modified_network_attachments> </action> ' \"USD{url}/hosts/1ff7a191-2f3b-4eff-812b-9f91a30c3acc/setupnetworks\"", "<options name=\"mode\" value=\"4\"/> <options name=\"miimon\" value=\"100\"/> <ip address=\"192.168.122.10\" netmask=\"255.255.255.0\"/>", "Find the service that manages the collection of hosts: hosts_service = connection.system_service().hosts_service() Find the host: host = hosts_service.list(search='name=myhost')[0] Find the service that manages the host: host_service = hosts_service.host_service(host.id) Configure the network adding a bond with two slaves and attaching it to a network with an static IP address: host_service.setup_networks( modified_bonds=[ types.HostNic( name='bond0', bonding=types.Bonding( options=[ types.Option( name='mode', value='4', ), types.Option( name='miimon', value='100', ), ], slaves=[ types.HostNic( name='eth1', ), types.HostNic( name='eth2', ), ], ), ), ], modified_network_attachments=[ types.NetworkAttachment( network=types.Network( name='myvlan', ), host_nic=types.HostNic( name='bond0', ), ip_address_assignments=[ types.IpAddressAssignment( assignment_method=types.BootProtocol.STATIC, ip=types.Ip( address='192.168.122.10', netmask='255.255.255.0', ), ), ], dns_resolver_configuration=types.DnsResolverConfiguration( name_servers=[ '1.1.1.1', '2.2.2.2', ], ), ), ], ) After modifying the network configuration it is very important to make it persistent: host_service.commit_net_config()", "POST /ovirt-engine/api/hosts/123/syncallnetworks", "<action/>", "PUT /ovirt-engine/api/hosts/123", "<host> <os> <custom_kernel_cmdline>vfio_iommu_type1.allow_unsafe_interrupts=1</custom_kernel_cmdline> </os> </host>", "GET /ovirt-engine/api/hosts/123/devices/456", "<host_device href=\"/ovirt-engine/api/hosts/123/devices/456\" id=\"456\"> <name>usb_1_9_1_1_0</name> <capability>usb</capability> <host href=\"/ovirt-engine/api/hosts/123\" id=\"123\"/> <parent_device href=\"/ovirt-engine/api/hosts/123/devices/789\" id=\"789\"> <name>usb_1_9_1</name> </parent_device> </host_device>", "GET /ovirt-engine/api/hosts/123/nics/456?all_content=true", "GET /ovirt-engine/api/hosts/123/nics?all_content=true", "GET /ovirt-engine/api/hosts/123/storage", "<host_storages> 
<host_storage id=\"123\"> </host_storage> </host_storages>", "<host_storage id=\"123\"> <logical_units> <logical_unit id=\"123\"> <lun_mapping>0</lun_mapping> <paths>1</paths> <product_id>lun0</product_id> <serial>123</serial> <size>10737418240</size> <status>used</status> <vendor_id>LIO-ORG</vendor_id> <volume_group_id>123</volume_group_id> </logical_unit> </logical_units> <type>iscsi</type> <host id=\"123\"/> </host_storage>", "<host_storage id=\"123\"> <logical_units> <logical_unit id=\"123\"> <lun_mapping>0</lun_mapping> <paths>1</paths> <product_id>lun0</product_id> <serial>123</serial> <size>10737418240</size> <vendor_id>LIO-ORG</vendor_id> <volume_group_id>123</volume_group_id> </logical_unit> </logical_units> <type>iscsi</type> <host id=\"123\"/> </host_storage>", "POST /ovirt-engine/api/hosts", "<host> <name>myhost</name> <address>myhost.example.com</address> <root_password>myrootpassword</root_password> </host>", "POST /ovirt-engine/api/hosts?deploy_hosted_engine=true", "POST /ovirt-engine/api/hosts", "<host> <name>myhost</name> <address>myhost.example.com</address> <root_password>123456</root_password> <external_network_provider_configurations> <external_network_provider_configuration> <external_network_provider name=\"ovirt-provider-ovn\"/> </external_network_provider_configuration> </external_network_provider_configurations> </host>", "GET /ovirt-engine/api/hosts", "<hosts> <host href=\"/ovirt-engine/api/hosts/123\" id=\"123\"> </host> <host href=\"/ovirt-engine/api/hosts/456\" id=\"456\"> </host> </host>", "GET /ovirt-engine/api/hosts?all_content=true", "GET /ovirt-engine/api/hosts?migration_target_of=123,456&check_vms_in_affinity_closure=true", "GET /ovirt-engine/api/hosts?migration_target_of=123,456", "GET /ovirt-engine/api/icons/123", "<icon id=\"123\"> <data>Some binary data here</data> <media_type>image/png</media_type> </icon>", "GET /ovirt-engine/api/icons", "<icons> <icon id=\"123\"> <data>...</data> <media_type>image/png</media_type> </icon> </icons>", "transfers_service = system_service.image_transfers_service() transfer = transfers_service.add( types.ImageTransfer( disk=types.Disk( id='123' ) ) )", "transfers_service = system_service.image_transfers_service() transfer = transfers_service.add( types.ImageTransfer( disk=types.Disk( id='123' ), host=types.Host( id='456' ) ) )", "transfers_service = system_service.image_transfers_service() transfer = transfers_service.add( types.ImageTransfer( disk=types.Disk( id='123' ), direction=types.ImageTransferDirection.DOWNLOAD ) )", "transfer_service = transfers_service.image_transfer_service(transfer.id) while transfer.phase == types.ImageTransferPhase.INITIALIZING: time.sleep(3) transfer = transfer_service.get()", "transfer_service = transfers_service.image_transfer_service(transfer.id) transfer_service.resume() transfer = transfer_service.get() while transfer.phase == types.ImageTransferPhase.RESUMING: time.sleep(1) transfer = transfer_service.get()", "POST /ovirt-engine/api/imagetransfers", "<image_transfer> <disk id=\"123\"/> <direction>upload|download</direction> </image_transfer>", "POST /ovirt-engine/api/imagetransfers", "<image_transfer> <snapshot id=\"456\"/> <direction>download|upload</direction> </image_transfer>", "GET /ovirt-engine/api/instancetypes/123", "DELETE /ovirt-engine/api/instancetypes/123", "PUT /ovirt-engine/api/instancetypes/123", "<instance_type> <memory>1073741824</memory> <cpu> <topology> <cores>1</cores> <sockets>2</sockets> <threads>1</threads> </topology> </cpu> </instance_type>", "POST 
/ovirt-engine/api/instancetypes", "<instance_type> <name>myinstancetype</name> </template>", "<instance_type> <name>myinstancetype</name> <console> <enabled>true</enabled> </console> <cpu> <topology> <cores>2</cores> <sockets>2</sockets> <threads>1</threads> </topology> </cpu> <custom_cpu_model>AMD Opteron_G2</custom_cpu_model> <custom_emulated_machine>q35</custom_emulated_machine> <display> <monitors>1</monitors> <single_qxl_pci>true</single_qxl_pci> <smartcard_enabled>true</smartcard_enabled> <type>spice</type> </display> <high_availability> <enabled>true</enabled> <priority>1</priority> </high_availability> <io> <threads>2</threads> </io> <memory>4294967296</memory> <memory_policy> <ballooning>true</ballooning> <guaranteed>268435456</guaranteed> </memory_policy> <migration> <auto_converge>inherit</auto_converge> <compressed>inherit</compressed> <policy id=\"00000000-0000-0000-0000-000000000000\"/> </migration> <migration_downtime>2</migration_downtime> <os> <boot> <devices> <device>hd</device> </devices> </boot> </os> <rng_device> <rate> <bytes>200</bytes> <period>2</period> </rate> <source>urandom</source> </rng_device> <soundcard_enabled>true</soundcard_enabled> <usb> <enabled>true</enabled> <type>native</type> </usb> <virtio_scsi> <enabled>true</enabled> </virtio_scsi> </instance_type>", "DELETE /ovirt-engine/api/datacenters/123/iscsibonds/456", "PUT /ovirt-engine/api/datacenters/123/iscsibonds/1234", "<iscsi_bond> <name>mybond</name> <description>My iSCSI bond</description> </iscsi_bond>", "POST /ovirt-engine/api/datacenters/123/iscsibonds", "<iscsi_bond> <name>mybond</name> <storage_connections> <storage_connection id=\"456\"/> <storage_connection id=\"789\"/> </storage_connections> <networks> <network id=\"abc\"/> </networks> </iscsi_bond>", "POST /ovirt-engine/api/jobs/clear", "<action/>", "POST /ovirt-engine/api/jobs/end", "<action> <force>true</force> <status>finished</status> </action>", "GET /ovirt-engine/api/jobs/123", "<job href=\"/ovirt-engine/api/jobs/123\" id=\"123\"> <actions> <link href=\"/ovirt-engine/api/jobs/123/clear\" rel=\"clear\"/> <link href=\"/ovirt-engine/api/jobs/123/end\" rel=\"end\"/> </actions> <description>Adding Disk</description> <link href=\"/ovirt-engine/api/jobs/123/steps\" rel=\"steps\"/> <auto_cleared>true</auto_cleared> <end_time>2016-12-12T23:07:29.758+02:00</end_time> <external>false</external> <last_updated>2016-12-12T23:07:29.758+02:00</last_updated> <start_time>2016-12-12T23:07:26.593+02:00</start_time> <status>failed</status> <owner href=\"/ovirt-engine/api/users/456\" id=\"456\"/> </job>", "POST /ovirt-engine/api/jobs", "<job> <description>Doing some work</description> <auto_cleared>true</auto_cleared> </job>", "<job href=\"/ovirt-engine/api/jobs/123\" id=\"123\"> <actions> <link href=\"/ovirt-engine/api/jobs/123/clear\" rel=\"clear\"/> <link href=\"/ovirt-engine/api/jobs/123/end\" rel=\"end\"/> </actions> <description>Doing some work</description> <link href=\"/ovirt-engine/api/jobs/123/steps\" rel=\"steps\"/> <auto_cleared>true</auto_cleared> <external>true</external> <last_updated>2016-12-13T02:15:42.130+02:00</last_updated> <start_time>2016-12-13T02:15:42.130+02:00</start_time> <status>started</status> <owner href=\"/ovirt-engine/api/users/456\" id=\"456\"/> </job>", "GET /ovirt-engine/api/jobs", "<jobs> <job href=\"/ovirt-engine/api/jobs/123\" id=\"123\"> <actions> <link href=\"/ovirt-engine/api/jobs/123/clear\" rel=\"clear\"/> <link href=\"/ovirt-engine/api/jobs/123/end\" rel=\"end\"/> </actions> <description>Adding 
Disk</description> <link href=\"/ovirt-engine/api/jobs/123/steps\" rel=\"steps\"/> <auto_cleared>true</auto_cleared> <end_time>2016-12-12T23:07:29.758+02:00</end_time> <external>false</external> <last_updated>2016-12-12T23:07:29.758+02:00</last_updated> <start_time>2016-12-12T23:07:26.593+02:00</start_time> <status>failed</status> <owner href=\"/ovirt-engine/api/users/456\" id=\"456\"/> </job> </jobs>", "GET /ovirt-engine/api/katelloerrata", "<katello_errata> <katello_erratum href=\"/ovirt-engine/api/katelloerrata/123\" id=\"123\"> <name>RHBA-2013:XYZ</name> <description>The description of the erratum</description> <title>some bug fix update</title> <type>bugfix</type> <issued>2013-11-20T02:00:00.000+02:00</issued> <solution>Few guidelines regarding the solution</solution> <summary>Updated packages that fix one bug are now available for XYZ</summary> <packages> <package> <name>libipa_hbac-1.9.2-82.11.el6_4.i686</name> </package> </packages> </katello_erratum> </katello_errata>", "GET /ovirt-engine/api/katelloerrata/123", "<katello_erratum href=\"/ovirt-engine/api/katelloerrata/123\" id=\"123\"> <name>RHBA-2013:XYZ</name> <description>The description of the erratum</description> <title>some bug fix update</title> <type>bugfix</type> <issued>2013-11-20T02:00:00.000+02:00</issued> <solution>Few guidelines regarding the solution</solution> <summary>Updated packages that fix one bug are now available for XYZ</summary> <packages> <package> <name>libipa_hbac-1.9.2-82.11.el6_4.i686</name> </package> </packages> </katello_erratum>", "GET /ovirt-engine/api/hosts/123/nics/321/linklayerdiscoveryprotocolelements", "<link_layer_discovery_protocol_elements> <link_layer_discovery_protocol_element> <name>Port Description</name> <properties> <property> <name>port description</name> <value>Summit300-48-Port 1001</value> </property> </properties> <type>4</type> </link_layer_discovery_protocol_element> </link_layer_discovery_protocol_elements>", "DELETE /ovirt-engine/api/macpools/123", "PUT /ovirt-engine/api/macpools/123", "<mac_pool> <name>UpdatedMACPool</name> <description>An updated MAC address pool</description> <allow_duplicates>false</allow_duplicates> <ranges> <range> <from>00:1A:4A:16:01:51</from> <to>00:1A:4A:16:01:e6</to> </range> <range> <from>02:1A:4A:01:00:00</from> <to>02:1A:4A:FF:FF:FF</to> </range> </ranges> </mac_pool>", "POST /ovirt-engine/api/macpools", "<mac_pool> <name>MACPool</name> <description>A MAC address pool</description> <allow_duplicates>true</allow_duplicates> <default_pool>false</default_pool> <ranges> <range> <from>00:1A:4A:16:01:51</from> <to>00:1A:4A:16:01:e6</to> </range> </ranges> </mac_pool>", "GET /ovirt-engine/api/networks/123", "<network href=\"/ovirt-engine/api/networks/123\" id=\"123\"> <name>ovirtmgmt</name> <description>Default Management Network</description> <link href=\"/ovirt-engine/api/networks/123/permissions\" rel=\"permissions\"/> <link href=\"/ovirt-engine/api/networks/123/vnicprofiles\" rel=\"vnicprofiles\"/> <link href=\"/ovirt-engine/api/networks/123/networklabels\" rel=\"networklabels\"/> <mtu>0</mtu> <stp>false</stp> <usages> <usage>vm</usage> </usages> <data_center href=\"/ovirt-engine/api/datacenters/456\" id=\"456\"/> </network>", "DELETE /ovirt-engine/api/networks/123", "DELETE /ovirt-engine/api/datacenters/123/networks/456", "PUT /ovirt-engine/api/networks/123", "<network> <description>My updated description</description> </network>", "PUT /ovirt-engine/api/datacenters/123/networks/456", "<network> <mtu>1500</mtu> </network>", "<network_filter
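
The MAC pool creation request above can be issued from the Python SDK as well. This is a minimal sketch, assuming an existing 'connection'; note the trailing underscore in 'from_', since 'from' is a reserved word in Python:

import ovirtsdk4.types as types

# Create a MAC address pool equivalent to the XML above:
pools_service = connection.system_service().mac_pools_service()
pool = pools_service.add(
    types.MacPool(
        name='MACPool',
        description='A MAC address pool',
        allow_duplicates=True,
        default_pool=False,
        ranges=[
            types.Range(
                from_='00:1A:4A:16:01:51',
                to='00:1A:4A:16:01:e6',
            ),
        ],
    ),
)
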
id=\"00000019-0019-0019-0019-00000000026b\"> <name>example-network-filter-b</name> <version> <major>4</major> <minor>0</minor> <build>-1</build> <revision>-1</revision> </version> </network_filter>", "GET http://localhost:8080/ovirt-engine/api/clusters/{cluster:id}/networkfilters", "<network_filters> <network_filter id=\"00000019-0019-0019-0019-00000000026c\"> <name>example-network-filter-a</name> <version> <major>4</major> <minor>0</minor> <build>-1</build> <revision>-1</revision> </version> </network_filter> <network_filter id=\"00000019-0019-0019-0019-00000000026b\"> <name>example-network-filter-b</name> <version> <major>4</major> <minor>0</minor> <build>-1</build> <revision>-1</revision> </version> </network_filter> <network_filter id=\"00000019-0019-0019-0019-00000000026a\"> <name>example-network-filter-a</name> <version> <major>3</major> <minor>0</minor> <build>-1</build> <revision>-1</revision> </version> </network_filter> </network_filters>", "DELETE /ovirt-engine/api/networks/123/networklabels/exemplary", "POST /ovirt-engine/api/networks/123/networklabels", "<network_label id=\"mylabel\"/>", "POST /ovirt-engine/api/networks", "<network> <name>mynetwork</name> <data_center id=\"123\"/> </network>", "POST /ovirt-engine/api/datacenters/123/networks", "<network> <name>ovirtmgmt</name> </network>", "POST /ovirt-engine/api/networks", "<network> <name>exnetwork</name> <external_provider id=\"456\"/> <data_center id=\"123\"/> </network>", "GET /ovirt-engine/api/networks", "<networks> <network href=\"/ovirt-engine/api/networks/123\" id=\"123\"> <name>ovirtmgmt</name> <description>Default Management Network</description> <link href=\"/ovirt-engine/api/networks/123/permissions\" rel=\"permissions\"/> <link href=\"/ovirt-engine/api/networks/123/vnicprofiles\" rel=\"vnicprofiles\"/> <link href=\"/ovirt-engine/api/networks/123/networklabels\" rel=\"networklabels\"/> <mtu>0</mtu> <stp>false</stp> <usages> <usage>vm</usage> </usages> <data_center href=\"/ovirt-engine/api/datacenters/456\" id=\"456\"/> </network> </networks>", "DELETE /ovirt-engine/api/vms/789/nics/456/networkfilterparameters/123", "PUT /ovirt-engine/api/vms/789/nics/456/networkfilterparameters/123", "<network_filter_parameter> <name>updatedName</name> <value>updatedValue</value> </network_filter_parameter>", "POST /ovirt-engine/api/vms/789/nics/456/networkfilterparameters", "<network_filter_parameter> <name>IP</name> <value>10.0.1.2</value> </network_filter_parameter>", "POST /ovirt-engine/api/openstackimageproviders/123/images/456/import", "<action> <storage_domain> <name>images0</name> </storage_domain> <cluster> <name>images0</name> </cluster> </action>", "POST /ovirt-engine/api/externalhostproviders/123/testconnectivity", "GET /ovirt-engine/api/openstacknetworkproviders/1234", "DELETE /ovirt-engine/api/openstacknetworkproviders/1234", "POST /ovirt-engine/api/externalhostproviders/123/testconnectivity", "PUT /ovirt-engine/api/openstacknetworkproviders/1234", "<openstack_network_provider> <name>ovn-network-provider</name> <requires_authentication>false</requires_authentication> <url>http://some_server_url.domain.com:9696</url> <tenant_name>oVirt</tenant_name> <type>external</type> </openstack_network_provider>", "POST /ovirt-engine/api/externalhostproviders/123/testconnectivity", "POST /ovirt-engine/api/openstackvolumeproviders", "<openstack_volume_provider> <name>mycinder</name> <url>https://mycinder.example.com:8776</url> <data_center> <name>mydc</name> </data_center> <requires_authentication>true</requires_authentication> 
<username>admin</username> <password>mypassword</password> <tenant_name>mytenant</tenant_name> </openstack_volume_provider>", "GET /ovirt-engine/api/roles/123/permits/456", "<permit href=\"/ovirt-engine/api/roles/123/permits/456\" id=\"456\"> <name>change_vm_cd</name> <administrative>false</administrative> <role href=\"/ovirt-engine/api/roles/123\" id=\"123\"/> </permit>", "DELETE /ovirt-engine/api/roles/123/permits/456", "POST /ovirt-engine/api/roles/123/permits", "<permit> <name>create_vm</name> </permit>", "GET /ovirt-engine/api/roles/123/permits", "<permits> <permit href=\"/ovirt-engine/api/roles/123/permits/5\" id=\"5\"> <name>change_vm_cd</name> <administrative>false</administrative> <role href=\"/ovirt-engine/api/roles/123\" id=\"123\"/> </permit> <permit href=\"/ovirt-engine/api/roles/123/permits/7\" id=\"7\"> <name>connect_to_vm</name> <administrative>false</administrative> <role href=\"/ovirt-engine/api/roles/123\" id=\"123\"/> </permit> </permits>", "GET /ovirt-engine/api/datacenters/123/qoss/123", "<qos href=\"/ovirt-engine/api/datacenters/123/qoss/123\" id=\"123\"> <name>123</name> <description>123</description> <max_iops>1</max_iops> <max_throughput>1</max_throughput> <type>storage</type> <data_center href=\"/ovirt-engine/api/datacenters/123\" id=\"123\"/> </qos>", "DELETE /ovirt-engine/api/datacenters/123/qoss/123", "PUT /ovirt-engine/api/datacenters/123/qoss/123", "curl -u admin@internal:123456 -X PUT -H \"content-type: application/xml\" -d \"<qos><name>321</name><description>321</description><max_iops>10</max_iops></qos>\" https://engine/ovirt-engine/api/datacenters/123/qoss/123", "<qos href=\"/ovirt-engine/api/datacenters/123/qoss/123\" id=\"123\"> <name>321</name> <description>321</description> <max_iops>10</max_iops> <max_throughput>1</max_throughput> <type>storage</type> <data_center href=\"/ovirt-engine/api/datacenters/123\" id=\"123\"/> </qos>", "POST /ovirt-engine/api/datacenters/123/qoss", "<qos href=\"/ovirt-engine/api/datacenters/123/qoss/123\" id=\"123\"> <name>123</name> <description>123</description> <max_iops>10</max_iops> <type>storage</type> <data_center href=\"/ovirt-engine/api/datacenters/123\" id=\"123\"/> </qos>", "GET /ovirt-engine/api/datacenters/123/qoss", "<qoss> <qos href=\"/ovirt-engine/api/datacenters/123/qoss/1\" id=\"1\">...</qos> <qos href=\"/ovirt-engine/api/datacenters/123/qoss/2\" id=\"2\">...</qos> <qos href=\"/ovirt-engine/api/datacenters/123/qoss/3\" id=\"3\">...</qos> </qoss>", "GET /ovirt-engine/api/datacenters/123/quotas/456", "<quota id=\"456\"> <name>myquota</name> <description>My new quota for virtual machines</description> <cluster_hard_limit_pct>20</cluster_hard_limit_pct> <cluster_soft_limit_pct>80</cluster_soft_limit_pct> <storage_hard_limit_pct>20</storage_hard_limit_pct> <storage_soft_limit_pct>80</storage_soft_limit_pct> </quota>", "DELETE /ovirt-engine/api/datacenters/123-456/quotas/654-321-0472718ab224 HTTP/1.1 Accept: application/xml Content-type: application/xml", "PUT /ovirt-engine/api/datacenters/123/quotas/456", "<quota> <cluster_hard_limit_pct>30</cluster_hard_limit_pct> <cluster_soft_limit_pct>70</cluster_soft_limit_pct> <storage_hard_limit_pct>20</storage_hard_limit_pct> <storage_soft_limit_pct>80</storage_soft_limit_pct> </quota>", "POST /ovirt-engine/api/datacenters/123/quotas/456/quotastoragelimits", "<quota_storage_limit> <limit>100</limit> </quota_storage_limit>", "POST /ovirt-engine/api/datacenters/123/quotas/456/quotastoragelimits", "<quota_storage_limit> <limit>50</limit> <storage_domain id=\"000\"/>
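
The quota creation and limit examples above have a compact SDK equivalent. This is a minimal sketch, assuming an existing 'connection'; ids and percentage limits are illustrative:

import ovirtsdk4.types as types

# Create a quota in a data center, mirroring the XML above:
dc_service = connection.system_service().data_centers_service().data_center_service('123')
quotas_service = dc_service.quotas_service()
quota = quotas_service.add(
    types.Quota(
        name='myquota',
        description='My new quota for virtual machines',
        cluster_hard_limit_pct=20,
        cluster_soft_limit_pct=80,
        storage_hard_limit_pct=20,
        storage_soft_limit_pct=80,
    ),
)
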
</quota_storage_limit>", "POST /ovirt-engine/api/datacenters/123/quotas", "<quota> <name>myquota</name> <description>My new quota for virtual machines</description> </quota>", "GET /ovirt-engine/api/roles/123", "<role id=\"123\"> <name>MyRole</name> <description>MyRole description</description> <link href=\"/ovirt-engine/api/roles/123/permits\" rel=\"permits\"/> <administrative>true</administrative> <mutable>false</mutable> </role>", "DELETE /ovirt-engine/api/roles/{role_id}", "PUT /ovirt-engine/api/roles/123", "<role> <name>MyNewRoleName</name> <description>My new description of the role</description> <administrative>true</administrative> </group>", "POST /ovirt-engine/api/roles", "<role> <name>MyRole</name> <description>My custom role to create virtual machines</description> <administrative>false</administrative> <permits> <permit id=\"1\"/> <permit id=\"1300\"/> </permits> </group>", "GET /ovirt-engine/api/roles", "<roles> <role id=\"123\"> <name>SuperUser</name> <description>Roles management administrator</description> <link href=\"/ovirt-engine/api/roles/123/permits\" rel=\"permits\"/> <administrative>true</administrative> <mutable>false</mutable> </role> </roles>", "GET /ovirt-engine/api/vms/123/snapshots/456?all_content=true", "POST /ovirt-engine/api/vms/123/snapshots/456/restore", "<action/>", "POST /ovirt-engine/api/vms/123/snapshots/456/restore", "<action> <disks> <disk id=\"111\"> <image_id>222</image_id> </disk> </disks> </action>", "POST /ovirt-engine/api/vms/123/snapshots", "<snapshot> <description>My snapshot</description> </snapshot>", "<snapshot> <description>My snapshot</description> <disk_attachments> <disk_attachment> <disk id=\"123\"> <image_id>456</image_id> </disk> </disk_attachment> </disk_attachments> </snapshot>", "<snapshot> <description>My snapshot</description> <persist_memorystate>false</persist_memorystate> </snapshot>", "GET /ovirt-engine/api/vms/123/snapshots?all_content=true", "GET /ovirt-engine/api/users/123/sshpublickeys", "<ssh_public_keys> <ssh_public_key href=\"/ovirt-engine/api/users/123/sshpublickeys/456\" id=\"456\"> <content>ssh-rsa ...</content> <user href=\"/ovirt-engine/api/users/123\" id=\"123\"/> </ssh_public_key> </ssh_public_keys>", "{ \"ssh_public_key\": [ { \"content\": \"ssh-rsa ...\", \"user\": { \"href\": \"/ovirt-engine/api/users/123\", \"id\": \"123\" }, \"href\": \"/ovirt-engine/api/users/123/sshpublickeys/456\", \"id\": \"456\" } ] }", "GET /ovirt-engine/api/vms/123/statistics", "<statistics> <statistic href=\"/ovirt-engine/api/vms/123/statistics/456\" id=\"456\"> <name>memory.installed</name> <description>Total memory configured</description> <kind>gauge</kind> <type>integer</type> <unit>bytes</unit> <values> <value> <datum>1073741824</datum> </value> </values> <vm href=\"/ovirt-engine/api/vms/123\" id=\"123\"/> </statistic> </statistics>", "GET /ovirt-engine/api/vms/123/statistics/456", "<statistic href=\"/ovirt-engine/api/vms/123/statistics/456\" id=\"456\"> <name>memory.installed</name> <description>Total memory configured</description> <kind>gauge</kind> <type>integer</type> <unit>bytes</unit> <values> <value> <datum>1073741824</datum> </value> </values> <vm href=\"/ovirt-engine/api/vms/123\" id=\"123\"/> </statistic>", "POST /ovirt-engine/api/jobs/123/steps/456/end", "<action> <force>true</force> <succeeded>true</succeeded> </action>", "GET /ovirt-engine/api/jobs/123/steps/456", "<step href=\"/ovirt-engine/api/jobs/123/steps/456\" id=\"456\"> <actions> <link href=\"/ovirt-engine/api/jobs/123/steps/456/end\" rel=\"end\"/> 
</actions> <description>Validating</description> <end_time>2016-12-12T23:07:26.627+02:00</end_time> <external>false</external> <number>0</number> <start_time>2016-12-12T23:07:26.605+02:00</start_time> <status>finished</status> <type>validating</type> <job href=\"/ovirt-engine/api/jobs/123\" id=\"123\"/> </step>", "POST /ovirt-engine/api/jobs/123/steps", "<step> <description>Validating</description> <start_time>2016-12-12T23:07:26.605+02:00</start_time> <status>started</status> <type>validating</type> </step>", "<step href=\"/ovirt-engine/api/jobs/123/steps/456\" id=\"456\"> <actions> <link href=\"/ovirt-engine/api/jobs/123/steps/456/end\" rel=\"end\"/> </actions> <description>Validating</description> <link href=\"/ovirt-engine/api/jobs/123/steps/456/statistics\" rel=\"statistics\"/> <external>true</external> <number>2</number> <start_time>2016-12-13T01:06:15.380+02:00</start_time> <status>started</status> <type>validating</type> <job href=\"/ovirt-engine/api/jobs/123\" id=\"123\"/> </step>", "GET /ovirt-engine/api/jobs/123/steps", "<steps> <step href=\"/ovirt-engine/api/jobs/123/steps/456\" id=\"456\"> <actions> <link href=\"/ovirt-engine/api/jobs/123/steps/456/end\" rel=\"end\"/> </actions> <description>Validating</description> <link href=\"/ovirt-engine/api/jobs/123/steps/456/statistics\" rel=\"statistics\"/> <external>true</external> <number>2</number> <start_time>2016-12-13T01:06:15.380+02:00</start_time> <status>started</status> <type>validating</type> <job href=\"/ovirt-engine/api/jobs/123\" id=\"123\"/> </step> </steps>", "<host_storage id=\"360014051136c20574f743bdbd28177fd\"> <logical_units> <logical_unit id=\"360014051136c20574f743bdbd28177fd\"> <lun_mapping>0</lun_mapping> <paths>1</paths> <product_id>lun0</product_id> <serial>SLIO-ORG_lun0_1136c205-74f7-43bd-bd28-177fd5ce6993</serial> <size>10737418240</size> <status>used</status> <vendor_id>LIO-ORG</vendor_id> <volume_group_id>O9Du7I-RahN-ECe1-dZ1w-nh0b-64io-MNzIBZ</volume_group_id> </logical_unit> </logical_units> <type>iscsi</type> <host id=\"8bb5ade5-e988-4000-8b93-dbfc6717fe50\"/> </host_storage>", "<host_storage id=\"360014051136c20574f743bdbd28177fd\"> <logical_units> <logical_unit id=\"360014051136c20574f743bdbd28177fd\"> <lun_mapping>0</lun_mapping> <paths>1</paths> <product_id>lun0</product_id> <serial>SLIO-ORG_lun0_1136c205-74f7-43bd-bd28-177fd5ce6993</serial> <size>10737418240</size> <vendor_id>LIO-ORG</vendor_id> <volume_group_id>O9Du7I-RahN-ECe1-dZ1w-nh0b-64io-MNzIBZ</volume_group_id> </logical_unit> </logical_units> <type>iscsi</type> <host id=\"8bb5ade5-e988-4000-8b93-dbfc6717fe50\"/> </host_storage>", "POST /ovirt-engine/api/storageDomains/123/reduceluns", "<action> <logical_units> <logical_unit id=\"1IET_00010001\"/> <logical_unit id=\"1IET_00010002\"/> </logical_units> </action>", "Note that this operation is only applicable to block storage domains (i.e., storage domains with the storage type of iSCSI or FCP).", "POST /ovirt-engine/api/storageDomains/262b056b-aede-40f1-9666-b883eff59d40/refreshluns", "<action> <logical_units> <logical_unit id=\"1IET_00010001\"/> <logical_unit id=\"1IET_00010002\"/> </logical_units> </action>", "DELETE /ovirt-engine/api/storageDomains/123?destroy=true", "DELETE /ovirt-engine/api/storageDomains/123?host=myhost", "PUT /ovirt-engine/api/storageDomains/123", "<storage_domain> <name>data2</name> <wipe_after_delete>true</wipe_after_delete> </storage_domain>", "POST /ovirt-engine/api/storagedomains/123/disks?unregistered=true", "<disk id=\"456\"/>", "POST
/ovirt-engine/api/storagedomains/123/disks", "<disk> <name>mydisk</name> <format>cow</format> <provisioned_size>1073741824</provisioned_size> </disk>", "GET /ovirt-engine/api/storagedomains/123/disks?unregistered=true", "POST /ovirt-engine/api/storagedomains/123/templates/456/import", "<action> <storage_domain> <name>myexport</name> </storage_domain> <cluster> <name>mycluster</name> </cluster> </action>", "GET /ovirt-engine/api/storagedomains/123/templates?unregistered=true", "POST /ovirt-engine/api/storagedomains/123/vms/456/import", "<action> <storage_domain> <name>mydata</name> </storage_domain> <cluster> <name>mycluster</name> </cluster> </action>", "<action> <storage_domain> <name>mydata</name> </storage_domain> <cluster> <name>mycluster</name> </cluster> <clone>true</clone> <vm> <name>myvm</name> </vm> </action>", "<action> <cluster> <name>mycluster</name> </cluster> <vm> <name>myvm</name> </vm> <disks> <disk id=\"123\"/> <disk id=\"456\"/> </disks> </action>", "DELETE /ovirt-engine/api/storagedomains/123/vms/456", "GET /ovirt-engine/api/storagedomains/123/vms", "<vms> <vm id=\"456\" href=\"/api/storagedomains/123/vms/456\"> <name>vm1</name> <storage_domain id=\"123\" href=\"/api/storagedomains/123\"/> <actions> <link rel=\"import\" href=\"/api/storagedomains/123/vms/456/import\"/> </actions> </vm> </vms>", "GET /ovirt-engine/api/storagedomains/123/vms?unregistered=true", "POST /ovirt-engine/api/storageDomains", "<storage_domain> <name>mydata</name> <type>data</type> <storage> <type>nfs</type> <address>mynfs.example.com</address> <path>/exports/mydata</path> </storage> <host> <name>myhost</name> </host> </storage_domain>", "<storage_domain> <name>myisos</name> <type>iso</type> <storage> <type>nfs</type> <address>mynfs.example.com</address> <path>/export/myisos</path> </storage> <host> <name>myhost</name> </host> </storage_domain>", "<storage_domain> <name>myiscsi</name> <type>data</type> <storage> <type>iscsi</type> <logical_units> <logical_unit id=\"3600144f09dbd050000004eedbd340001\"/> <logical_unit id=\"3600144f09dbd050000004eedbd340002\"/> </logical_units> </storage> <host> <name>myhost</name> </host> </storage_domain>", "DELETE /ovirt-engine/api/storageconnections/123?host=456", "PUT /ovirt-engine/api/storageconnections/123", "<storage_connection> <address>mynewnfs.example.com</address> </storage_connection>", "PUT /ovirt-engine/api/storageconnections/123", "<storage_connection> <port>3260</port> <target>iqn.2017-01.com.myhost:444</target> </storage_connection>", "PUT /ovirt-engine/api/hosts/123/storageconnectionextensions/456", "<storage_connection_extension> <target>iqn.2016-01.com.example:mytarget</target> <username>myuser</username> <password>mypassword</password> </storage_connection_extension>", "POST /ovirt-engine/api/hosts/123/storageconnectionextensions", "<storage_connection_extension> <target>iqn.2016-01.com.example:mytarget</target> <username>myuser</username> <password>mypassword</password> </storage_connection_extension>", "POST /ovirt-engine/api/storageconnections", "<storage_connection> <type>nfs</type> <address>mynfs.example.com</address> <path>/export/mydata</path> <host> <name>myhost</name> </host> </storage_connection>", "GET /ovirt-engine/api", "<api> <link rel=\"capabilities\" href=\"/api/capabilities\"/> <link rel=\"clusters\" href=\"/api/clusters\"/> <link rel=\"clusters/search\" href=\"/api/clusters?search={query}\"/> <link rel=\"datacenters\" href=\"/api/datacenters\"/> <link rel=\"datacenters/search\" href=\"/api/datacenters?search={query}\"/> <link 
rel=\"events\" href=\"/api/events\"/> <link rel=\"events/search\" href=\"/api/events?search={query}\"/> <link rel=\"hosts\" href=\"/api/hosts\"/> <link rel=\"hosts/search\" href=\"/api/hosts?search={query}\"/> <link rel=\"networks\" href=\"/api/networks\"/> <link rel=\"roles\" href=\"/api/roles\"/> <link rel=\"storagedomains\" href=\"/api/storagedomains\"/> <link rel=\"storagedomains/search\" href=\"/api/storagedomains?search={query}\"/> <link rel=\"tags\" href=\"/api/tags\"/> <link rel=\"templates\" href=\"/api/templates\"/> <link rel=\"templates/search\" href=\"/api/templates?search={query}\"/> <link rel=\"users\" href=\"/api/users\"/> <link rel=\"groups\" href=\"/api/groups\"/> <link rel=\"domains\" href=\"/api/domains\"/> <link rel=\"vmpools\" href=\"/api/vmpools\"/> <link rel=\"vmpools/search\" href=\"/api/vmpools?search={query}\"/> <link rel=\"vms\" href=\"/api/vms\"/> <link rel=\"vms/search\" href=\"/api/vms?search={query}\"/> <product_info> <name>oVirt Engine</name> <vendor>ovirt.org</vendor> <version> <build>4</build> <full_version>4.0.4</full_version> <major>4</major> <minor>0</minor> <revision>0</revision> </version> </product_info> <special_objects> <blank_template href=\"/ovirt-engine/api/templates/00000000-0000-0000-0000-000000000000\" id=\"00000000-0000-0000-0000-000000000000\"/> <root_tag href=\"/ovirt-engine/api/tags/00000000-0000-0000-0000-000000000000\" id=\"00000000-0000-0000-0000-000000000000\"/> </special_objects> <summary> <hosts> <active>0</active> <total>0</total> </hosts> <storage_domains> <active>0</active> <total>1</total> </storage_domains> <users> <active>1</active> <total>1</total> </users> <vms> <active>0</active> <total>0</total> </vms> </summary> <time>2016-09-14T12:00:48.132+02:00</time> </api>", "GET /ovirt-engine/api/options/MigrationPolicies", "<?xml version=\"1.0\" encoding=\"UTF-8\" standalone=\"yes\"?> <system_option href=\"/ovirt-engine/api/options/MigrationPolicies\" id=\"MigrationPolicies\"> <name>MigrationPolicies</name> <values> <system_option_value> <value>[{\"id\":{\"uuid\":\"80554327-0569-496b-bdeb-fcbbf52b827b\"},...}]</value> <version>4.2</version> </system_option_value> <system_option_value> <value>[{\"id\":{\"uuid\":\"80554327-0569-496b-bdeb-fcbbf52b827b\"},...}]</value> <version>4.3</version> </system_option_value> <system_option_value> <value>[{\"id\":{\"uuid\":\"80554327-0569-496b-bdeb-fcbbf52b827b\"},...}]</value> <version>4.4</version> </system_option_value> <system_option_value> <value>[{\"id\":{\"uuid\":\"80554327-0569-496b-bdeb-fcbbf52b827b\"},...}]</value> <version>4.5</version> </system_option_value> <system_option_value> <value>[{\"id\":{\"uuid\":\"80554327-0569-496b-bdeb-fcbbf52b827b\"},...}]</value> <version>4.6</version> </system_option_value> <system_option_value> <value>[{\"id\":{\"uuid\":\"80554327-0569-496b-bdeb-fcbbf52b827b\"},...}]</value> <version>4.7</version> </system_option_value> </values> </system_option>", "GET /ovirt-engine/api/options/MigrationPolicies?version=4.2", "<system_option href=\"/ovirt-engine/api/options/MigrationPolicies\" id=\"MigrationPolicies\"> <name>MigrationPolicies</name> <values> <system_option_value> <value>[{\"id\":{\"uuid\":\"80554327-0569-496b-bdeb-fcbbf52b827b\"},...}]</value> <version>4.2</version> </system_option_value> </values> </system_option>", "POST /ovirt-engine/api/vms/123/permissions", "<permission> <role> <name>UserVmManager</name> </role> <user id=\"456\"/> </permission>", "POST /ovirt-engine/api/permissions", "<permission> <role> <name>SuperUser</name> </role> <user 
id=\"456\"/> </permission>", "POST /ovirt-engine/api/clusters/123/permissions", "<permission> <role> <name>UserRole</name> </role> <group id=\"789\"/> </permission>", "GET /ovirt-engine/api/clusters/123/permissions", "<permissions> <permission id=\"456\"> <cluster id=\"123\"/> <role id=\"789\"/> <user id=\"451\"/> </permission> <permission id=\"654\"> <cluster id=\"123\"/> <role id=\"789\"/> <group id=\"127\"/> </permission> </permissions>", "GET /ovirt-engine/api/tags/123", "<tag href=\"/ovirt-engine/api/tags/123\" id=\"123\"> <name>root</name> <description>root</description> </tag>", "DELETE /ovirt-engine/api/tags/123", "PUT /ovirt-engine/api/tags/123", "<tag> <parent id=\"456\"/> </tag>", "<tag> <parent> <name>mytag</name> </parent> </tag>", "POST /ovirt-engine/api/tags", "<tag> <name>mytag</name> </tag>", "<tag> <name>mytag</name> <parent> <name>myparenttag</name> </parent> </tag>", "GET /ovirt-engine/api/tags", "<tags> <tag href=\"/ovirt-engine/api/tags/222\" id=\"222\"> <name>root2</name> <description>root2</description> <parent href=\"/ovirt-engine/api/tags/111\" id=\"111\"/> </tag> <tag href=\"/ovirt-engine/api/tags/333\" id=\"333\"> <name>root3</name> <description>root3</description> <parent href=\"/ovirt-engine/api/tags/222\" id=\"222\"/> </tag> <tag href=\"/ovirt-engine/api/tags/111\" id=\"111\"> <name>root</name> <description>root</description> </tag> </tags>", "root: (id: 111) - root2 (id: 222) - root3 (id: 333)", "POST /ovirt-engine/api/templates/123/export", "<action> <storage_domain id=\"456\"/> <exclusive>true<exclusive/> </action>", "POST /ovirt-engine/api/templates/123/export", "<action> <host> <name>myhost</name> </host> <directory>/home/ovirt</directory> <filename>myvm.ova</filename> </action>", "DELETE /ovirt-engine/api/templates/123", "PUT /ovirt-engine/api/templates/123", "<template> <memory>1073741824</memory> </template>", "<template> <version> <version_name>mytemplate_2</version_name> </version> </template>", "GET /ovirt-engine/api/templates/123/cdroms/", "<cdrom href=\"...\" id=\"00000000-0000-0000-0000-000000000000\"> <template href=\"/ovirt-engine/api/templates/123\" id=\"123\"/> <file id=\"mycd.iso\"/> </cdrom>", "<cdrom href=\"...\" id=\"00000000-0000-0000-0000-000000000000\"> <template href=\"/ovirt-engine/api/templates/123\" id=\"123\"/> </cdrom>", "DELETE /ovirt-engine/api/templates/{template:id}/diskattachments/{attachment:id}?storage_domain=072fbaa1-08f3-4a40-9f34-a5ca22dd1d74", "PUT /ovirt-engine/api/templates/123/mediateddevices/00000000-0000-0000-0000-000000000000 <vm_mediated_device> <spec_params> <property> <name>mdevType</name> <value>nvidia-11</value> </property> </spec_params> </vm_mediated_device>", "<vm_mediated_device href=\"/ovirt-engine/api/templates/123/mediateddevices/00000000-0000-0000-0000-000000000000\" id=\"00000000-0000-0000-0000-000000000000\"> <template href=\"/ovirt-engine/api/templates/123\" id=\"123\"/> <spec_params> <property> <name>mdevType</name> <value>nvidia-11</value> </property> </spec_params> </vm_mediated_device>", "POST /ovirt-engine/api/templates", "<template> <name>mytemplate</name> <vm id=\"123\"/> </template>", "<template> <name>mytemplate</name> <vm id=\"123\"> <snapshots> <snapshot id=\"456\"/> </snapshots> </vm> </template>", "<template> <name>mytemplate</name> <vm id=\"123\"> <disk_attachments> <disk_attachment> <disk id=\"456\"> <name>mydisk</name> <format>cow</format> <sparse>true</sparse> </disk> </disk_attachment> </disk_attachments> </vm> </template>", "<template> <name>mytemplate</name> <vm id=\"123\"/> 
<version> <base_template id=\"456\"/> <version_name>mytemplate_001</version_name> </version> </template>", "<template> <name>mytemplate</name> <storage_domain id=\"123\"/> <vm id=\"456\"> <disk_attachments> <disk_attachment> <disk id=\"789\"> <format>cow</format> <sparse>true</sparse> </disk> </disk_attachment> </disk_attachments> </vm> </template>", "<template> <name>mytemplate</name> <vm id=\"123\"> <disk_attachments> <disk_attachment> <disk id=\"456\"> <format>cow</format> <sparse>true</sparse> <storage_domains> <storage_domain id=\"789\"/> </storage_domains> </disk> </disk_attachment> </disk_attachments> </vm> </template>", "POST /ovirt-engine/api/templates?clone_permissions=true", "<template> <name>mytemplate<name> <vm> <name>myvm<name> </vm> </template>", "GET /ovirt-engine/api/templates", "GET /ovirt-engine/api/users/1234", "<user href=\"/ovirt-engine/api/users/1234\" id=\"1234\"> <name>admin</name> <link href=\"/ovirt-engine/api/users/1234/sshpublickeys\" rel=\"sshpublickeys\"/> <link href=\"/ovirt-engine/api/users/1234/roles\" rel=\"roles\"/> <link href=\"/ovirt-engine/api/users/1234/permissions\" rel=\"permissions\"/> <link href=\"/ovirt-engine/api/users/1234/tags\" rel=\"tags\"/> <department></department> <domain_entry_id>23456</domain_entry_id> <email>[email protected]</email> <last_name>Lastname</last_name> <namespace>*</namespace> <principal>user1</principal> <user_name>user1@domain-authz</user_name> <domain href=\"/ovirt-engine/api/domains/45678\" id=\"45678\"> <name>domain-authz</name> </domain> </user>", "DELETE /ovirt-engine/api/users/1234", "PUT /ovirt-engine/api/users/123", "<user> <user_options> <property> <name>test</name> <value>[\"any\",\"JSON\"]</value> </property> </user_options> </user>", "GET /ovirt-engine/api/users/123/options/456", "<user_option href=\"/ovirt-engine/api/users/123/options/456\" id=\"456\"> <name>SomeName</name> <content>[\"any\", \"JSON\"]</content> <user href=\"/ovirt-engine/api/users/123\" id=\"123\"/> </user_option>", "DELETE /ovirt-engine/api/users/123/options/456", "POST /ovirt-engine/api/users/123/options", "<user_option> <name>SomeName</name> <content>[\"any\", \"JSON\"]</content> </user_option>", "GET /ovirt-engine/api/users/123/options", "<user_options> <user_option href=\"/ovirt-engine/api/users/123/options/456\" id=\"456\"> <name>SomeName</name> <content>[\"any\", \"JSON\"]</content> <user href=\"/ovirt-engine/api/users/123\" id=\"123\"/> </user_option> </user_options>", "POST /ovirt-engine/api/users", "<user> <user_name>myuser@myextension-authz</user_name> <domain> <name>myextension-authz</name> </domain> </user>", "<user> <principal>[email protected]</principal> <user_name>[email protected]@myextension-authz</user_name> <domain> <name>myextension-authz</name> </domain> </user>", "GET /ovirt-engine/api/users", "<users> <user href=\"/ovirt-engine/api/users/1234\" id=\"1234\"> <name>admin</name> <link href=\"/ovirt-engine/api/users/1234/sshpublickeys\" rel=\"sshpublickeys\"/> <link href=\"/ovirt-engine/api/users/1234/roles\" rel=\"roles\"/> <link href=\"/ovirt-engine/api/users/1234/permissions\" rel=\"permissions\"/> <link href=\"/ovirt-engine/api/users/1234/tags\" rel=\"tags\"/> <domain_entry_id>23456</domain_entry_id> <namespace>*</namespace> <principal>user1</principal> <user_name>user1@domain-authz</user_name> <domain href=\"/ovirt-engine/api/domains/45678\" id=\"45678\"> <name>domain-authz</name> </domain> </user> </users>", "POST /ovirt-engine/api/vms/123/autopincpuandnumanodes", "<action> 
<optimize_cpu_settings>true</optimize_cpu_settings> </action>", "POST /ovirt-engine/api/vms/123/cancelmigration", "<action/>", "POST /ovirt-engine/api/vms/123/detach", "<action/>", "POST /ovirt-engine/api/vms/123/export", "<action> <storage_domain> <name>myexport</name> </storage_domain> <exclusive>true</exclusive> <discard_snapshots>true</discard_snapshots> </action>", "POST /ovirt-engine/api/vms/123/export", "<action> <host> <name>myhost</name> </host> <directory>/home/ovirt</directory> <filename>myvm.ova</filename> </action>", "POST /ovirt-engine/api/vms/123/freezefilesystems", "<action/>", "GET /ovirt-engine/api/vms/123?all_content=true", "GET /vms/{vm:id};next_run", "GET /vms/{vm:id};next_run=true", "GET /vms/{vm:id}?all_content=true&ovf_as_ova=true", "POST /ovirt-engine/api/vms/123/logon", "<action/>", "POST /ovirt-engine/api/vms/123/maintenance", "<action> <maintenance_enabled>true</maintenance_enabled> </action>", "POST /ovirt-engine/api/vms/123/migrate", "<action> <host id=\"2ab5e1da-b726-4274-bbf7-0a42b16a0fc3\"/> </action>", "POST /ovirt-engine/api/vms/123/previewsnapshot", "<action> <disks> <disk id=\"111\"> <image_id>222</image_id> </disk> </disks> <snapshot id=\"456\"/> </action>", "POST /ovirt-engine/api/vms/123/reboot", "<action/>", "POST /ovirt-engine/api/vms/123/reboot", "<action> <force>true</force> </action>", "DELETE /ovirt-engine/api/vms/123", "POST /ovirt-engine/api/vms/123/reset", "<action/>", "POST /ovirt-engine/api/vms/123/screenshot", "<action/>", "POST /ovirt-engine/api/vms/123/shutdown", "<action/>", "POST /ovirt-engine/api/vms/123/shutdown", "<action> <force>true</force> </action>", "POST /ovirt-engine/api/vms/123/start", "<action/>", "<action> <vm> <os> <boot> <devices> <device>cdrom</device> </devices> </boot> </os> </vm> </action>", "POST /ovirt-engine/api/vms/123/stop", "<action/>", "POST /ovirt-engine/api/vms/123/stop", "<action> <force>true</force> </action>", "POST /ovirt-engine/api/vms/123/suspend", "<action/>", "POST /ovirt-engine/api/vms/123/thawfilesystems", "<action/>", "POST /ovirt-engine/api/vms/123/ticket", "<action> <ticket> <value>abcd12345</value> <expiry>120</expiry> </ticket> </action>", "POST /ovirt-engine/api/vms/123/graphicsconsoles/456/ticket", "GET /ovirt-engine/api/vms/123/applications/789", "<application href=\"/ovirt-engine/api/vms/123/applications/789\" id=\"789\"> <name>ovirt-guest-agent-common-1.0.12-3.el7</name> <vm href=\"/ovirt-engine/api/vms/123\" id=\"123\"/> </application>", "GET /ovirt-engine/api/vms/123/applications/", "<applications> <application href=\"/ovirt-engine/api/vms/123/applications/456\" id=\"456\"> <name>kernel-3.10.0-327.36.1.el7</name> <vm href=\"/ovirt-engine/api/vms/123\" id=\"123\"/> </application> <application href=\"/ovirt-engine/api/vms/123/applications/789\" id=\"789\"> <name>ovirt-guest-agent-common-1.0.12-3.el7</name> <vm href=\"/ovirt-engine/api/vms/123\" id=\"123\"/> </application> </applications>", "POST /ovirt-engine/api/vms/123/backups/456/finalize", "<action />", "<backups> <backup id=\"backup-uuid\"> <from_checkpoint_id>previous-checkpoint-uuid</from_checkpoint_id> <link href=\"/ovirt-engine/api/vms/vm-uuid/backups/backup-uuid/disks\" rel=\"disks\"/> <status>initializing</status> <creation_date> </backup> </backups>", "POST /ovirt-engine/api/vms/123/backups", "<backup> <from_checkpoint_id>previous-checkpoint-uuid</from_checkpoint_id> <disks> <disk id=\"disk-uuid\" /> </disks> </backup>", "<backup id=\"backup-uuid\"> <from_checkpoint_id>previous-checkpoint-uuid</from_checkpoint_id>
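
The backup requests above (start, monitor, finalize) chain naturally in the Python SDK, which exposes the backups collection on the VM service in API version 4.4 and later. This is a minimal sketch, assuming an existing 'connection'; ids are illustrative placeholders:

import time
import ovirtsdk4.types as types

# Start an incremental backup of one disk:
vm_service = connection.system_service().vms_service().vm_service('123')
backups_service = vm_service.backups_service()
backup = backups_service.add(
    types.Backup(
        from_checkpoint_id='previous-checkpoint-uuid',
        disks=[types.Disk(id='disk-uuid')],
    ),
)

# Wait for the backup to become ready:
backup_service = backups_service.backup_service(backup.id)
while backup_service.get().phase != types.BackupPhase.READY:
    time.sleep(3)

# Download the disk data via image transfers here, then finalize:
backup_service.finalize()
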
<to_checkpoint_id>new-checkpoint-uuid</to_checkpoint_id> <disks> <disk id=\"disk-uuid\" /> </disks> <status>initializing</status> <creation_date> </backup>", "POST /ovirt-engine/api/vms/123/backups", "<backup id=\"backup-uuid\"> <from_checkpoint_id>previous-checkpoint-uuid</from_checkpoint_id> <disks> <disk id=\"disk-uuid\" /> </disks> </backup>", "POST /ovirt-engine/api/vms/123/backups?require_consistency=true", "POST /ovirt-engine/api/vms/123/backups?use_active=false", "<backups> <backup id=\"backup-uuid\"> <from_checkpoint_id>previous-checkpoint-uuid</from_checkpoint_id> <disks> <disk id=\"disk-uuid\" /> </disks> <status>initializing</status> <creation_date> </backup> </backups>", "<cdrom href=\"...\" id=\"00000000-0000-0000-0000-000000000000\"> <file id=\"mycd.iso\"/> <vm href=\"/ovirt-engine/api/vms/123\" id=\"123\"/> </cdrom>", "<cdrom href=\"...\" id=\"00000000-0000-0000-0000-000000000000\"> <vm href=\"/ovirt-engine/api/vms/123\" id=\"123\"/> </cdrom>", "PUT /ovirt-engine/api/vms/123/cdroms/00000000-0000-0000-0000-000000000000", "<cdrom> <file id=\"mycd.iso\"/> </cdrom>", "<cdrom> <file id=\"\"/> </cdrom>", "PUT /ovirt-engine/api/vms/123/cdroms/00000000-0000-0000-0000-000000000000?current=true", "<cdrom> <file id=\"\"/> </cdrom>", "<checkpoint id=\"checkpoint-uuid\"> <link href=\"/ovirt-engine/api/vms/vm-uuid/checkpoints/checkpoint-uuid/disks\" rel=\"disks\"/> <parent_id>parent-checkpoint-uuid</parent_id> <creation_date>xxx</creation_date> <vm href=\"/ovirt-engine/api/vms/vm-uuid\" id=\"vm-uuid\"/> </checkpoint>", "GET /ovirt-engine/api/vms/123/checkpoints", "<checkpoints> <checkpoint id=\"checkpoint-uuid\"> <link href=\"/ovirt-engine/api/vms/vm-uuid/checkpoints/checkpoint-uuid/disks\" rel=\"disks\"/> <parent_id>parent-checkpoint-uuid</parent_id> <creation_date>xxx</creation_date> <vm href=\"/ovirt-engine/api/vm-uuid\" id=\"vm-uuid\"/> </checkpoint> </checkpoints>", "GET /ovirt-engine/api/vms/123/graphicsconsoles/456?current=true", "POST /ovirt-engine/api/vms/123/graphicsconsoles/456/remoteviewerconnectionfile", "<action/>", "<action> <remote_viewer_connection_file> [virt-viewer] type=spice host=192.168.1.101 port=-1 password=123456789 delete-this-file=1 fullscreen=0 toggle-fullscreen=shift+f11 release-cursor=shift+f12 secure-attention=ctrl+alt+end tls-port=5900 enable-smartcard=0 enable-usb-autoshare=0 usb-filter=null tls-ciphers=DEFAULT host-subject=O=local,CN=example.com ca= </remote_viewer_connection_file> </action>", "Find the virtual machine: vm = vms_service.list(search='name=myvm')[0] Locate the service that manages the virtual machine, as that is where the locators are defined: vm_service = vms_service.vm_service(vm.id) Find the graphic console of the virtual machine: graphics_consoles_service = vm_service.graphics_consoles_service() graphics_console = graphics_consoles_service.list()[0] Generate the remote viewer connection file: console_service = graphics_consoles_service.console_service(graphics_console.id) remote_viewer_connection_file = console_service.remote_viewer_connection_file() Write the content to file \"/tmp/remote_viewer_connection_file.vv\" path = \"/tmp/remote_viewer_connection_file.vv\" with open(path, \"w\") as f: f.write(remote_viewer_connection_file)", "#!/bin/sh -ex remote-viewer --ovirt-ca-file=/etc/pki/ovirt-engine/ca.pem /tmp/remote_viewer_connection_file.vv", "POST /ovirt-engine/api/vms/123/graphicsconsoles/456/ticket", "<action> <ticket> <value>abcd12345</value> <expiry>120</expiry> </ticket> </action>", "GET
/ovirt-engine/api/vms/123/graphicsconsoles?current=true", "GET /ovirt-engine/api/vms/123/hostdevices/456", "<host_device href=\"/ovirt-engine/api/hosts/543/devices/456\" id=\"456\"> <name>pci_0000_04_00_0</name> <capability>pci</capability> <iommu_group>30</iommu_group> <placeholder>true</placeholder> <product id=\"0x13ba\"> <name>GM107GL [Quadro K2200]</name> </product> <vendor id=\"0x10de\"> <name>NVIDIA Corporation</name> </vendor> <host href=\"/ovirt-engine/api/hosts/543\" id=\"543\"/> <parent_device href=\"/ovirt-engine/api/hosts/543/devices/456\" id=\"456\"> <name>pci_0000_00_03_0</name> </parent_device> <vm href=\"/ovirt-engine/api/vms/123\" id=\"123\"/> </host_device>", "DELETE /ovirt-engine/api/vms/123/hostdevices/456", "POST /ovirt-engine/api/vms/123/hostdevices", "<host_device id=\"123\" />", "PUT /ovirt-engine/api/vms/123/mediateddevices/00000000-0000-0000-0000-000000000000 <vm_mediated_device> <spec_params> <property> <name>mdevType</name> <value>nvidia-11</value> </property> </spec_params> </vm_mediated_device>", "<vm_mediated_device href=\"/ovirt-engine/api/vms/123/mediateddevices/00000000-0000-0000-0000-000000000000\" id=\"00000000-0000-0000-0000-000000000000\"> <vm href=\"/ovirt-engine/api/vms/123\" id=\"123\"/> <spec_params> <property> <name>mdevType</name> <value>nvidia-11</value> </property> </spec_params> </vm_mediated_device>", "DELETE /ovirt-engine/api/vms/123/nics/456", "PUT /ovirt-engine/api/vms/123/nics/456", "<nic> <name>mynic</name> <interface>e1000</interface> <vnic_profile id='789'/> </nic>", "POST /ovirt-engine/api/vms/123/nics", "<nic> <name>mynic</name> <interface>virtio</interface> <vnic_profile id=\"456\"/> </nic>", "curl --request POST --header \"Version: 4\" --header \"Content-Type: application/xml\" --header \"Accept: application/xml\" --user \"admin@internal:mypassword\" --cacert /etc/pki/ovirt-engine/ca.pem --data ' <nic> <name>mynic</name> <interface>virtio</interface> <vnic_profile id=\"456\"/> </nic> ' https://myengine.example.com/ovirt-engine/api/vms/123/nics", "DELETE /ovirt-engine/api/vms/123/numanodes/456", "PUT /ovirt-engine/api/vms/123/numanodes/456", "<vm_numa_node> <numa_node_pins> <numa_node_pin> <index>0</index> </numa_node_pin> </numa_node_pins> </vm_numa_node>", "POST /ovirt-engine/api/vms/c7ecd2dc/numanodes Accept: application/xml Content-Type: application/xml", "<vm_numa_node> <cpu> <cores> <core> <index>0</index> </core> </cores> </cpu> <index>0</index> <memory>1024</memory> <numa_tune_mode>strict</numa_tune_mode> </vm_numa_node>", "POST /ovirt-engine/api/vmpools/123/allocatevm", "<action/>", "GET /ovirt-engine/api/vmpools/123", "<vm_pool id=\"123\"> <actions>...</actions> <name>MyVmPool</name> <description>MyVmPool description</description> <link href=\"/ovirt-engine/api/vmpools/123/permissions\" rel=\"permissions\"/> <max_user_vms>1</max_user_vms> <prestarted_vms>0</prestarted_vms> <size>100</size> <stateful>false</stateful> <type>automatic</type> <use_latest_template_version>false</use_latest_template_version> <cluster id=\"123\"/> <template id=\"123\"/> <vm id=\"123\">...</vm> </vm_pool>", "DELETE /ovirt-engine/api/vmpools/123", "PUT /ovirt-engine/api/vmpools/123", "<vmpool> <name>VM_Pool_B</name> <description>Virtual Machine Pool B</description> <size>3</size> <prestarted_vms>1</prestarted_vms> <max_user_vms>2</max_user_vms> </vmpool>", "PUT /ovirt-engine/api/vmpools/123?seal=true", "<vmpool> <name>VM_Pool_B</name> <description>Virtual Machine Pool B</description> <size>7</size> </vmpool>", "POST /ovirt-engine/api/vmpools", "<vmpool>
<name>mypool</name> <cluster id=\"123\"/> <template id=\"456\"/> </vmpool>", "POST /ovirt-engine/api/vmpools?seal=true", "<vmpool> <name>mypool</name> <cluster id=\"123\"/> <template id=\"456\"/> <size>5</size> </vmpool>", "GET /ovirt-engine/api/vmpools", "<vm_pools> <vm_pool id=\"123\"> </vm_pool> </vm_pools>", "GET /ovirt-engine/api/vms/123/sessions", "<sessions> <session href=\"/ovirt-engine/api/vms/123/sessions/456\" id=\"456\"> <console_user>true</console_user> <ip> <address>192.168.122.1</address> </ip> <user href=\"/ovirt-engine/api/users/789\" id=\"789\"/> <vm href=\"/ovirt-engine/api/vms/123\" id=\"123\"/> </session> </sessions>", "<watchdogs> <watchdog href=\"/ovirt-engine/api/vms/123/watchdogs/00000000-0000-0000-0000-000000000000\" id=\"00000000-0000-0000-0000-000000000000\"> <vm href=\"/ovirt-engine/api/vms/123\" id=\"123\"/> <action>poweroff</action> <model>i6300esb</model> </watchdog> </watchdogs>", "DELETE /ovirt-engine/api/vms/123/watchdogs/00000000-0000-0000-0000-000000000000", "PUT /ovirt-engine/api/vms/123/watchdogs <watchdog> <action>reset</action> </watchdog>", "<watchdog href=\"/ovirt-engine/api/vms/123/watchdogs/00000000-0000-0000-0000-000000000000\" id=\"00000000-0000-0000-0000-000000000000\"> <vm href=\"/ovirt-engine/api/vms/123\" id=\"123\"/> <action>reset</action> <model>i6300esb</model> </watchdog>", "POST /ovirt-engine/api/vms/123/watchdogs <watchdog> <action>poweroff</action> <model>i6300esb</model> </watchdog>", "<watchdog href=\"/ovirt-engine/api/vms/123/watchdogs/00000000-0000-0000-0000-000000000000\" id=\"00000000-0000-0000-0000-000000000000\"> <vm href=\"/ovirt-engine/api/vms/123\" id=\"123\"/> <action>poweroff</action> <model>i6300esb</model> </watchdog>", "<watchdogs> <watchdog href=\"/ovirt-engine/api/vms/123/watchdogs/00000000-0000-0000-0000-000000000000\" id=\"00000000-0000-0000-0000-000000000000\"> <vm href=\"/ovirt-engine/api/vms/123\" id=\"123\"/> <action>poweroff</action> <model>i6300esb</model> </watchdog> </watchdogs>", "#!/bin/sh -ex url=\"https://engine.example.com/ovirt-engine/api\" user=\"admin@internal\" password=\"...\" curl --verbose --cacert /etc/pki/ovirt-engine/ca.pem --user \"${user}:${password}\" --request POST --header \"Version: 4\" --header \"Content-Type: application/xml\" --header \"Accept: application/xml\" --data ' <vm> <name>myvm</name> <template> <name>Blank</name> </template> <cluster> <name>mycluster</name> </cluster> </vm> ' \"${url}/vms\"", "#!/bin/sh -ex url=\"https://engine.example.com/ovirt-engine/api\" user=\"admin@internal\" password=\"...\" curl --verbose --cacert /etc/pki/ovirt-engine/ca.pem --user \"${user}:${password}\" --request POST --header \"Content-Type: application/xml\" --header \"Accept: application/xml\" --data ' <vm> <name>myvm</name> <snapshots> <snapshot id=\"266742a5-6a65-483c-816d-d2ce49746680\"/> </snapshots> <cluster> <name>mycluster</name> </cluster> </vm> ' \"${url}/vms\"", "<vm> <disk_attachments> <disk_attachment> <disk id=\"8d4bd566-6c86-4592-a4a7-912dbf93c298\"> <storage_domains> <storage_domain id=\"9cb6cb0a-cf1d-41c2-92ca-5a6d665649c9\"/> </storage_domains> </disk> </disk_attachment> </disk_attachments> </vm>", "<vm> <disk_attachments> <disk_attachment> <disk> <image_id>8d4bd566-6c86-4592-a4a7-912dbf93c298</image_id> <storage_domains> <storage_domain id=\"9cb6cb0a-cf1d-41c2-92ca-5a6d665649c9\"/> </storage_domains> </disk> </disk_attachment> </disk_attachments> </vm>", "<vm> <name>myvm</name> <description>My Desktop Virtual Machine</description> <type>desktop</type>
<memory>2147483648</memory> </vm>", "<vm> <os> <boot dev=\"cdrom\"/> </os> </vm>", "<vm> <os> <boot> <devices> <device>cdrom</device> </devices> </boot> </os> </vm>", "POST /ovirt-engine/api/vms?auto_pinning_policy=existing/adjust", "<vm> <name>myvm</name> <template> <name>mytemplate</name> </template> <cluster> <name>mycluster</name> </cluster> <placement_policy> <hosts> <host> <name>myhost</name> </host> </hosts> </placement_policy> </vm>", "POST /ovirt-engine/api/vms?clone=true", "<vm> <name>myvm</name> <template> <name>mytemplate</name> </template> <cluster> <name>mycluster</name> </cluster> </vm>", "POST /ovirt-engine/api/vms?clone_permissions=true", "<vm> <name>myvm</name> <template> <name>mytemplate</name> </template> <cluster> <name>mycluster</name> </cluster> </vm>", "POST /ovirt-engine/api/vms?seal=true", "<vm> <name>myvm</name> <template> <name>mytemplate</name> </template> <cluster> <name>mycluster</name> </cluster> </vm>", "GET /ovirt-engine/api/vms?all_content=true", "GET /vms?all_content=true&ovf_as_ova=true", "POST /ovirt-engine/api/networks/456/vnicprofiles", "<vnic_profile id=\"123\"> <name>new_vNIC_name</name> <pass_through> <mode>disabled</mode> </pass_through> <port_mirroring>false</port_mirroring> </vnic_profile>", "<vnic_profile href=\"/ovirt-engine/api/vnicprofiles/123\" id=\"123\"> <name>new_vNIC_name</name> <link href=\"/ovirt-engine/api/vnicprofiles/123/permissions\" rel=\"permissions\"/> <pass_through> <mode>disabled</mode> </pass_through> <port_mirroring>false</port_mirroring> <network href=\"/ovirt-engine/api/networks/456\" id=\"456\"/> <network_filter href=\"/ovirt-engine/api/networkfilters/789\" id=\"789\"/> </vnic_profile>", "<vnic_profile> <name>no_network_filter</name> <network_filter/> </vnic_profile>", "<vnic_profile> <name>user_choice_network_filter</name> <network_filter id=\"0000001b-001b-001b-001b-0000000001d5\"/> </vnic_profile>" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/rest_api_guide/services
Chapter 27. Relax-and-Recover (ReaR)
Chapter 27. Relax-and-Recover (ReaR) When a software or hardware failure breaks the system, the system administrator faces three tasks to restore it to a fully functioning state on new hardware: booting a rescue system on the new hardware replicating the original storage layout restoring user and system files Most backup software solves only the third problem. To solve the first and second problems, use Relax-and-Recover (ReaR) , a disaster recovery and system migration utility. Backup software creates backups. ReaR complements backup software by creating a rescue system . Booting the rescue system on new hardware allows you to issue the rear recover command, which starts the recovery process. During this process, ReaR replicates the partition layout and filesystems, prompts for restoring user and system files from the backup created by backup software, and finally installs the boot loader. By default, the rescue system created by ReaR only restores the storage layout and the boot loader, but not the actual user and system files. This chapter describes how to use ReaR. 27.1. Basic ReaR Usage 27.1.1. Installing ReaR Install the rear package by running the following command as root: 27.1.2. Configuring ReaR ReaR is configured in the /etc/rear/local.conf file. Specify the rescue system configuration by adding these lines: Substitute output format with the rescue system format, for example, ISO for an ISO disk image or USB for a bootable USB. Substitute output location with the location where the rescue system will be put, for example, file:///mnt/rescue_system/ for a local filesystem directory or sftp://backup:[email protected]/ for an SFTP directory. Example 27.1. Configuring Rescue System Format and Location To configure ReaR to output the rescue system as an ISO image into the /mnt/rescue_system/ directory, add these lines to the /etc/rear/local.conf file: See section "Rescue Image Configuration" of the rear(8) man page for a list of all options. ISO-specific Configuration Using the configuration in Example 27.1, "Configuring Rescue System Format and Location" results in two equivalent output files in two locations: /var/lib/rear/output/ - rear 's default output location /mnt/rescue_system/ HOSTNAME /rear-localhost.iso - the output location specified in OUTPUT_URL However, usually you need only one ISO image. To make ReaR create an ISO image only in the directory specified by a user, add these lines to /etc/rear/local.conf : Substitute output location with the desired location for the output. 27.1.3. Creating a Rescue System The following example shows how to create a rescue system with verbose output: With the configuration from Example 27.1, "Configuring Rescue System Format and Location" , ReaR prints the above output. The last two lines confirm that the rescue system has been successfully created and copied to the configured backup location /mnt/rescue_system/ . Because the system's host name is rhel7 , the backup location now contains the directory rhel7/ with the rescue system and auxiliary files: Transfer the rescue system to an external medium so that you do not lose it in case of a disaster. 27.1.4. Scheduling ReaR The /etc/cron.d/rear crontab file provided by the rear package runs the rear mkrescue command automatically daily at 1:30 AM, so that the Relax-and-Recover (ReaR) utility regularly creates a rescue system. The command only creates a rescue system, not a backup of the data. You still need to schedule periodic backups of the data yourself.
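For illustration only, a minimal sketch of such a schedule, assuming the packaged /etc/cron.d/rear layout; the first entry is the documented package default, while the second entry is a hypothetical addition that also creates a backup with the internal method described later in this chapter:

30 1 * * * root /usr/sbin/rear checklayout || /usr/sbin/rear mkrescue
30 2 * * * root /usr/sbin/rear mkbackuponly

Adjust the times and the backup command to match the backup method you actually use.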
For example: You can add another crontab that will schedule the rear mkbackuponly command. You can also change the existing crontab to run the rear mkbackup command instead of the default /usr/sbin/rear checklayout || /usr/sbin/rear mkrescure command. You can schedule an external backup, if an external backup method is in use. The details depend on the backup method that you are using in ReaR. For more details, see Integrating ReaR with Backup Software . Note The /etc/cron.d/rear crontab file provided in the rear package is deprecated because, by default, it is not sufficient to perform a backup. For details, see the corresponding Deprecated functionality shells and command line tools . 27.1.5. Performing a System Rescue To perform a restore or migration: Boot the rescue system on the new hardware. For example, burn the ISO image to a DVD and boot from the DVD. In the console interface, select the "Recover" option: Figure 27.1. Rescue system: menu You are taken to the prompt: Figure 27.2. Rescue system: prompt Warning Once you have started recovery in the step, it probably cannot be undone and you may lose anything stored on the physical disks of the system. Run the rear recover command to perform the restore or migration. The rescue system then recreates the partition layout and filesystems: Figure 27.3. Rescue system: running "rear recover" Restore user and system files from the backup into the /mnt/local/ directory. Example 27.2. Restoring User and System Files In this example, the backup file is a tar archive created per instructions in Section 27.2.1.1, "Configuring the Internal Backup Method" . First, copy the archive from its storage, then unpack the files into /mnt/local/ , then delete the archive: The new storage has to have enough space both for the archive and the extracted files. Verify that the files have been restored: Figure 27.4. Rescue system: restoring user and system files from the backup Ensure that SELinux relabels the files on the boot: Otherwise you may be unable to log in the system, because the /etc/passwd file may have the incorrect SELinux context. Finish the recovery by entering exit . ReaR will then reinstall the boot loader. After that, reboot the system: Figure 27.5. Rescue system: finishing recovery Upon reboot, SELinux will relabel the whole filesystem. Then you will be able to log in to the recovered system. 27.2. Integrating ReaR with Backup Software The main purpose of ReaR is to produce a rescue system, but it can also be integrated with backup software. What integration means is different for the built-in, supported, and unsupported backup methods. 27.2.1. The Built-in Backup Method ReaR includes a built-in, or internal, backup method. This method is fully integrated with ReaR, which has these advantages: a rescue system and a full-system backup can be created using a single rear mkbackup command the rescue system restores files from the backup automatically As a result, ReaR can cover the whole process of creating both the rescue system and the full-system backup. 27.2.1.1. Configuring the Internal Backup Method To make ReaR use its internal backup method, add these lines to /etc/rear/local.conf : These lines configure ReaR to create an archive with a full-system backup using the tar command. Substitute backup location with one of the options from the "Backup Software Integration" section of the rear(8) man page. Make sure that the backup location has enough space. Example 27.3. 
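For orientation, a condensed sketch of the restore sequence described above, using the tar-based backup and paths from the earlier examples; the host name backup.example.com is a placeholder for your backup server:

scp root@backup.example.com:/srv/backup/rhel7/backup.tar.gz /mnt/local/
tar xf /mnt/local/backup.tar.gz -C /mnt/local/
rm -f /mnt/local/backup.tar.gz
touch /mnt/local/.autorelabel
exit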
27.2. Integrating ReaR with Backup Software The main purpose of ReaR is to produce a rescue system, but it can also be integrated with backup software. What integration means is different for the built-in, supported, and unsupported backup methods. 27.2.1. The Built-in Backup Method ReaR includes a built-in, or internal, backup method. This method is fully integrated with ReaR, which has these advantages: a rescue system and a full-system backup can be created using a single rear mkbackup command the rescue system restores files from the backup automatically As a result, ReaR can cover the whole process of creating both the rescue system and the full-system backup. 27.2.1.1. Configuring the Internal Backup Method To make ReaR use its internal backup method, add these lines to /etc/rear/local.conf : These lines configure ReaR to create an archive with a full-system backup using the tar command. Substitute backup location with one of the options from the "Backup Software Integration" section of the rear(8) man page. Make sure that the backup location has enough space. Example 27.3. Adding tar Backups To expand the example in Section 27.1, "Basic ReaR Usage" , configure ReaR to also output a tar full-system backup into the /srv/backup/ directory: The internal backup method allows further configuration. To keep old backup archives when new ones are created, add this line: By default, ReaR creates a full backup on each run. To make the backups incremental, meaning that only the changed files are backed up on each run, add this line: This automatically sets NETFS_KEEP_OLD_BACKUP_COPY to y . To ensure that a full backup is done regularly in addition to incremental backups, add this line: Substitute "Day" with one of "Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun". ReaR can also include both the rescue system and the backup in the ISO image. To achieve this, set the BACKUP_URL directive to iso:///backup/ : This is the simplest method of full-system backup, because the rescue system does not need the user to fetch the backup during recovery. However, it needs more storage. Also, single-ISO backups cannot be incremental. Example 27.4. Configuring Single-ISO Rescue System and Backups This configuration creates a rescue system and a backup file as a single ISO image and puts it into the /srv/backup/ directory: Note The ISO image might be large in this scenario. Therefore, Red Hat recommends creating only one ISO image, not two. For details, see the section called "ISO-specific Configuration" . To use rsync instead of tar , add this line: Note that incremental backups are only supported when using tar .
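Putting the preceding directives together, a sketch of an incremental configuration, assuming the locations from Example 27.3 and choosing Monday arbitrarily as the full-backup day:

OUTPUT=ISO
OUTPUT_URL=file:///mnt/rescue_system/
BACKUP=NETFS
BACKUP_URL=file:///srv/backup/
BACKUP_TYPE=incremental
FULLBACKUPDAY="Mon"

With this configuration, ReaR takes a full backup on Mondays and backs up only the changed files on the other days.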
27.2.1.2. Creating a Backup Using the Internal Backup Method With BACKUP=NETFS set, ReaR can create either a rescue system, a backup file, or both. To create a rescue system only , run: To create a backup only , run: To create a rescue system and a backup , run: Note that triggering a backup with ReaR is only possible when using the NETFS method. ReaR cannot trigger other backup methods. Note When restoring, the rescue system created with the BACKUP=NETFS setting expects the backup to be present before executing rear recover . Hence, once the rescue system boots, copy the backup file into the directory specified in BACKUP_URL , unless you are using a single ISO image. Only then run rear recover . To avoid recreating the rescue system unnecessarily, you can check whether the storage layout has changed since the last rescue system was created using these commands: Non-zero status indicates a change in disk layout. Non-zero status is also returned if the ReaR configuration has changed. Important The rear checklayout command does not check whether a rescue system is currently present in the output location, and can return 0 even if it is not there. So it does not guarantee that a rescue system is available, only that the layout has not changed since the last rescue system was created. Example 27.5. Using rear checklayout To create a rescue system, but only if the layout has changed, use this command: 27.2.2. Supported Backup Methods In addition to the NETFS internal backup method, ReaR supports several external backup methods. This means that the rescue system restores files from the backup automatically, but the backup creation cannot be triggered using ReaR. For a list and configuration options of the supported external backup methods, see the "Backup Software Integration" section of the rear(8) man page. 27.2.3. Unsupported Backup Methods With unsupported backup methods, there are two options: The rescue system prompts the user to manually restore the files. This scenario is the one described in "Basic ReaR Usage", except for the backup file format, which may take a different form than a tar archive. ReaR executes the custom commands provided by the user. To configure this, set the BACKUP directive to EXTERNAL . Then specify the commands to be run during backing up and restoration using the EXTERNAL_BACKUP and EXTERNAL_RESTORE directives. Optionally, also specify the EXTERNAL_IGNORE_ERRORS and EXTERNAL_CHECK directives. See /usr/share/rear/conf/default.conf for an example configuration.
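As a rough illustration, a hedged sketch of such a configuration; the mybackup.sh wrapper script and its arguments are hypothetical, and /usr/share/rear/conf/default.conf remains the authoritative reference for the exact directive syntax:

BACKUP=EXTERNAL
EXTERNAL_BACKUP="/usr/local/bin/mybackup.sh backup"
EXTERNAL_RESTORE="/usr/local/bin/mybackup.sh restore"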
[ "~]# yum install rear", "OUTPUT= output format OUTPUT_URL= output location", "OUTPUT=ISO OUTPUT_URL=file:///mnt/rescue_system/", "OUTPUT=ISO BACKUP=NETFS OUTPUT_URL=null BACKUP_URL=\"iso:///backup\" ISO_DIR=\" output location \"", "~]# rear -v mkrescue Relax-and-Recover 1.17.2 / Git Using log file: /var/log/rear/rear-rhel7.log mkdir: created directory '/var/lib/rear/output' Creating disk layout Creating root filesystem layout TIP: To login as root via ssh you need to set up /root/.ssh/authorized_keys or SSH_ROOT_PASSWORD in your configuration file Copying files and directories Copying binaries and libraries Copying kernel modules Creating initramfs Making ISO image Wrote ISO image: /var/lib/rear/output/rear-rhel7.iso (124M) Copying resulting files to file location", "~]# ls -lh /mnt/rescue_system/rhel7/ total 124M -rw-------. 1 root root 202 Jun 10 15:27 README -rw-------. 1 root root 166K Jun 10 15:27 rear.log -rw-------. 1 root root 124M Jun 10 15:27 rear-rhel7.iso -rw-------. 1 root root 274 Jun 10 15:27 VERSION", "~]# scp [email protected]:/srv/backup/rhel7/backup.tar.gz /mnt/local/ ~]# tar xf /mnt/local/backup.tar.gz -C /mnt/local/ ~]# rm -f /mnt/local/backup.tar.gz", "~]# ls /mnt/local/", "~]# touch /mnt/local/.autorelabel", "BACKUP=NETFS BACKUP_URL= backup location", "OUTPUT=ISO OUTPUT_URL=file:///mnt/rescue_system/ BACKUP=NETFS BACKUP_URL=file:///srv/backup/", "NETFS_KEEP_OLD_BACKUP_COPY=y", "BACKUP_TYPE=incremental", "FULLBACKUPDAY= \"Day\"", "BACKUP_URL=iso:///backup/", "OUTPUT=ISO OUTPUT_URL=file:///srv/backup/ BACKUP=NETFS BACKUP_URL=iso:///backup/", "BACKUP_PROG=rsync", "rear mkrescue", "rear mkbackuponly", "rear mkbackup", "~]# rear checklayout ~]# echo USD?", "~]# rear checklayout || rear mkrescue", "~]# rear -C basic_system mkbackup", "~]# rear -C home_backup mkbackuponly", "~]# rear -C basic_system recover", "~]# rear -C home_backup restoreonly" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/system_administrators_guide/ch-Relax-and-Recover_ReaR
Chapter 3. Common deployment patterns
Chapter 3. Common deployment patterns Red Hat AMQ 7 can be set up in a large variety of topologies. The following are some of the common deployment patterns you can implement using AMQ components. 3.1. Central broker The central broker pattern is relatively easy to set up and maintain. It is also relatively robust. Routes are typically local, because the broker and its clients are always within one network hop of each other, no matter how many nodes are added. This pattern is also known as hub and spoke , with the central broker as the hub and the clients as the spokes. Figure 3.1. Central broker pattern The only critical element is the central broker node. The focus of your maintenance efforts is on keeping this broker available to its clients. 3.2. Routed messaging When routing messages to remote destinations, the broker stores them in a local queue before forwarding them to their destination. However, sometimes an application requires sending request and response messages in real time, and having the broker store and forward messages is too costly. With AMQ you can use a router in place of a broker to avoid such costs. Unlike a broker, a router does not store messages before forwarding them to a destination. Instead, it works as a lightweight conduit and directly connects two endpoints. Figure 3.2. Brokerless routed messaging pattern 3.3. Highly available brokers To ensure brokers are available for their clients, deploy a highly available (HA) master-slave pair to create a backup group. You might, for example, deploy two master-slave groups on two nodes. Such a deployment would provide a backup for each active broker, as seen in the following diagram. Figure 3.3. Master-slave pair Under normal operating conditions one master broker is active on each node, which can be either a physical server or a virtual machine. If one node fails, the slave on the other node takes over. The result is two active brokers residing on the same healthy node. By deploying master-slave pairs, you can scale out an entire network of such backup groups. Larger deployments of this type are useful for distributing the message processing load across many brokers. The broker network in the following diagram consists of eight master-slave groups distributed over eight nodes. Figure 3.4. Master-slave network 3.4. Router pair behind a load balancer Deploying two routers behind a load balancer provides high availability, resiliency, and increased scalability for a single-datacenter deployment. Endpoints make their connections to a known URL, supported by the load balancer. Next, the load balancer spreads the incoming connections among the routers so that the connection and messaging load is distributed. If one of the routers fails, the endpoints connected to it will reconnect to the remaining active router. Figure 3.5. Router pair behind a load balancer For even greater scalability, you can use a larger number of routers, three or four for example. Each router connects directly to all of the others.
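As a sketch of what one router in such a pair can look like with the AMQ Interconnect router, assuming the qdrouterd.conf format and illustrative host names; this is not a complete production configuration:

router {
    mode: interior
    id: Router.A
}
listener {
    host: 0.0.0.0
    port: 5672
    role: normal
}
connector {
    host: router-b.example.com
    port: 55672
    role: inter-router
}

Router.B would carry the mirror-image configuration, exposing an inter-router listener on port 55672, while the load balancer distributes client connections across the normal listeners of both routers.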
3.5. Router pair in a DMZ In this deployment architecture, the router network provides a layer of protection and isolation between the clients in the outside world and the brokers backing an enterprise application. Figure 3.6. Router pair in a DMZ Important notes about the DMZ topology: Security for the connections within the deployment is separate from the security used for external clients. For example, your deployment might use a private Certificate Authority (CA) for internal security, issuing x.509 certificates to each router and broker for authentication, although external users might use a different, public CA. Inter-router connections between the enterprise and the DMZ are always established from the enterprise to the DMZ for security. Therefore, no connections are permitted from the outside into the enterprise. The AMQP protocol enables bi-directional communication after a connection is established, however. 3.6. Router pairs in different data centers You can use a more complex topology in a deployment of AMQ components that spans multiple locations. You can, for example, deploy a pair of load-balanced routers in each of four locations. You might include two backbone routers in the center to provide redundant connectivity between all locations. The following diagram is an example deployment spanning multiple locations. Figure 3.7. Multiple interconnected routers
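For the multi-site case, the same connector mechanism extends across locations. A minimal sketch, again assuming qdrouterd.conf syntax and hypothetical backbone host names:

connector {
    host: backbone-1.example.com
    port: 55672
    role: inter-router
}
connector {
    host: backbone-2.example.com
    port: 55672
    role: inter-router
}

Each site router holds a connector to both backbone routers, so the loss of either backbone still leaves every site reachable.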
null
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q2/html/introducing_red_hat_amq_7/common_deployment_patterns
Updating Red Hat Satellite
Updating Red Hat Satellite Red Hat Satellite 6.16 Update Satellite Server and Capsule to a new minor release Red Hat Satellite Documentation Team [email protected]
null
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/updating_red_hat_satellite/index
Chapter 6. PodMonitor [monitoring.coreos.com/v1]
Chapter 6. PodMonitor [monitoring.coreos.com/v1] Description PodMonitor defines monitoring for a set of pods. Type object Required spec 6.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object Specification of desired Pod selection for target discovery by Prometheus. 6.1.1. .spec Description Specification of desired Pod selection for target discovery by Prometheus. Type object Required podMetricsEndpoints selector Property Type Description attachMetadata object Attaches node metadata to discovered targets. Requires Prometheus v2.35.0 and above. jobLabel string The label to use to retrieve the job name from. labelLimit integer Per-scrape limit on number of labels that will be accepted for a sample. Only valid in Prometheus versions 2.27.0 and newer. labelNameLengthLimit integer Per-scrape limit on length of labels name that will be accepted for a sample. Only valid in Prometheus versions 2.27.0 and newer. labelValueLengthLimit integer Per-scrape limit on length of labels value that will be accepted for a sample. Only valid in Prometheus versions 2.27.0 and newer. namespaceSelector object Selector to select which namespaces the Endpoints objects are discovered from. podMetricsEndpoints array A list of endpoints allowed as part of this PodMonitor. podMetricsEndpoints[] object PodMetricsEndpoint defines a scrapeable endpoint of a Kubernetes Pod serving Prometheus metrics. podTargetLabels array (string) PodTargetLabels transfers labels on the Kubernetes Pod onto the target. sampleLimit integer SampleLimit defines per-scrape limit on number of scraped samples that will be accepted. selector object Selector to select Pod objects. targetLimit integer TargetLimit defines a limit on the number of scraped targets that will be accepted. 6.1.2. .spec.attachMetadata Description Attaches node metadata to discovered targets. Requires Prometheus v2.35.0 and above. Type object Property Type Description node boolean When set to true, Prometheus must have permissions to get Nodes. 6.1.3. .spec.namespaceSelector Description Selector to select which namespaces the Endpoints objects are discovered from. Type object Property Type Description any boolean Boolean describing whether all namespaces are selected in contrast to a list restricting them. matchNames array (string) List of namespace names to select from. 6.1.4. .spec.podMetricsEndpoints Description A list of endpoints allowed as part of this PodMonitor. Type array 6.1.5. .spec.podMetricsEndpoints[] Description PodMetricsEndpoint defines a scrapeable endpoint of a Kubernetes Pod serving Prometheus metrics. Type object Property Type Description authorization object Authorization section for this endpoint basicAuth object BasicAuth allow an endpoint to authenticate over basic authentication. 
More info: https://prometheus.io/docs/operating/configuration/#endpoint bearerTokenSecret object Secret to mount to read bearer token for scraping targets. The secret needs to be in the same namespace as the pod monitor and accessible by the Prometheus Operator. enableHttp2 boolean Whether to enable HTTP2. filterRunning boolean Drop pods that are not running. (Failed, Succeeded). Enabled by default. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-phase followRedirects boolean FollowRedirects configures whether scrape requests follow HTTP 3xx redirects. honorLabels boolean HonorLabels chooses the metric's labels on collisions with target labels. honorTimestamps boolean HonorTimestamps controls whether Prometheus respects the timestamps present in scraped data. interval string Interval at which metrics should be scraped If not specified Prometheus' global scrape interval is used. metricRelabelings array MetricRelabelConfigs to apply to samples before ingestion. metricRelabelings[] object RelabelConfig allows dynamic rewriting of the label set for targets, alerts, scraped samples and remote write samples. More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config oauth2 object OAuth2 for the URL. Only valid in Prometheus versions 2.27.0 and newer. params object Optional HTTP URL parameters params{} array (string) path string HTTP path to scrape for metrics. If empty, Prometheus uses the default value (e.g. /metrics ). port string Name of the pod port this endpoint refers to. Mutually exclusive with targetPort. proxyUrl string ProxyURL eg http://proxyserver:2195 Directs scrapes to proxy through this endpoint. relabelings array RelabelConfigs to apply to samples before scraping. Prometheus Operator automatically adds relabelings for a few standard Kubernetes fields. The original scrape job's name is available via the __tmp_prometheus_job_name label. More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config relabelings[] object RelabelConfig allows dynamic rewriting of the label set for targets, alerts, scraped samples and remote write samples. More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config scheme string HTTP scheme to use for scraping. http and https are the expected values unless you rewrite the scheme label via relabeling. If empty, Prometheus uses the default value http . scrapeTimeout string Timeout after which the scrape is ended If not specified, the Prometheus global scrape interval is used. targetPort integer-or-string Deprecated: Use 'port' instead. tlsConfig object TLS configuration to use when scraping the endpoint. 6.1.6. .spec.podMetricsEndpoints[].authorization Description Authorization section for this endpoint Type object Property Type Description credentials object Selects a key of a Secret in the namespace that contains the credentials for authentication. type string Defines the authentication type. The value is case-insensitive. "Basic" is not a supported value. Default: "Bearer" 6.1.7. .spec.podMetricsEndpoints[].authorization.credentials Description Selects a key of a Secret in the namespace that contains the credentials for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. 
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 6.1.8. .spec.podMetricsEndpoints[].basicAuth Description BasicAuth allow an endpoint to authenticate over basic authentication. More info: https://prometheus.io/docs/operating/configuration/#endpoint Type object Property Type Description password object The secret in the service monitor namespace that contains the password for authentication. username object The secret in the service monitor namespace that contains the username for authentication. 6.1.9. .spec.podMetricsEndpoints[].basicAuth.password Description The secret in the service monitor namespace that contains the password for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 6.1.10. .spec.podMetricsEndpoints[].basicAuth.username Description The secret in the service monitor namespace that contains the username for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 6.1.11. .spec.podMetricsEndpoints[].bearerTokenSecret Description Secret to mount to read bearer token for scraping targets. The secret needs to be in the same namespace as the pod monitor and accessible by the Prometheus Operator. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 6.1.12. .spec.podMetricsEndpoints[].metricRelabelings Description MetricRelabelConfigs to apply to samples before ingestion. Type array 6.1.13. .spec.podMetricsEndpoints[].metricRelabelings[] Description RelabelConfig allows dynamic rewriting of the label set for targets, alerts, scraped samples and remote write samples. More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config Type object Property Type Description action string Action to perform based on the regex matching. Uppercase and Lowercase actions require Prometheus >= v2.36.0. DropEqual and KeepEqual actions require Prometheus >= v2.41.0. Default: "Replace" modulus integer Modulus to take of the hash of the source label values. Only applicable when the action is HashMod . regex string Regular expression against which the extracted value is matched. replacement string Replacement value against which a Replace action is performed if the regular expression matches. Regex capture groups are available. separator string Separator is the string between concatenated SourceLabels. sourceLabels array (string) The source labels select values from existing labels. 
Their content is concatenated using the configured Separator and matched against the configured regular expression. targetLabel string Label to which the resulting string is written in a replacement. It is mandatory for Replace , HashMod , Lowercase , Uppercase , KeepEqual and DropEqual actions. Regex capture groups are available. 6.1.14. .spec.podMetricsEndpoints[].oauth2 Description OAuth2 for the URL. Only valid in Prometheus versions 2.27.0 and newer. Type object Required clientId clientSecret tokenUrl Property Type Description clientId object The secret or configmap containing the OAuth2 client id clientSecret object The secret containing the OAuth2 client secret endpointParams object (string) Parameters to append to the token URL scopes array (string) OAuth2 scopes used for the token request tokenUrl string The URL to fetch the token from 6.1.15. .spec.podMetricsEndpoints[].oauth2.clientId Description The secret or configmap containing the OAuth2 client id Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 6.1.16. .spec.podMetricsEndpoints[].oauth2.clientId.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 6.1.17. .spec.podMetricsEndpoints[].oauth2.clientId.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 6.1.18. .spec.podMetricsEndpoints[].oauth2.clientSecret Description The secret containing the OAuth2 client secret Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 6.1.19. .spec.podMetricsEndpoints[].params Description Optional HTTP URL parameters Type object 6.1.20. .spec.podMetricsEndpoints[].relabelings Description RelabelConfigs to apply to samples before scraping. Prometheus Operator automatically adds relabelings for a few standard Kubernetes fields. The original scrape job's name is available via the __tmp_prometheus_job_name label. More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config Type array 6.1.21. .spec.podMetricsEndpoints[].relabelings[] Description RelabelConfig allows dynamic rewriting of the label set for targets, alerts, scraped samples and remote write samples. More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config Type object Property Type Description action string Action to perform based on the regex matching. Uppercase and Lowercase actions require Prometheus >= v2.36.0. 
DropEqual and KeepEqual actions require Prometheus >= v2.41.0. Default: "Replace" modulus integer Modulus to take of the hash of the source label values. Only applicable when the action is HashMod . regex string Regular expression against which the extracted value is matched. replacement string Replacement value against which a Replace action is performed if the regular expression matches. Regex capture groups are available. separator string Separator is the string between concatenated SourceLabels. sourceLabels array (string) The source labels select values from existing labels. Their content is concatenated using the configured Separator and matched against the configured regular expression. targetLabel string Label to which the resulting string is written in a replacement. It is mandatory for Replace , HashMod , Lowercase , Uppercase , KeepEqual and DropEqual actions. Regex capture groups are available. 6.1.22. .spec.podMetricsEndpoints[].tlsConfig Description TLS configuration to use when scraping the endpoint. Type object Property Type Description ca object Certificate authority used when verifying server certificates. cert object Client certificate to present when doing client-authentication. insecureSkipVerify boolean Disable target certificate validation. keySecret object Secret containing the client key file for the targets. serverName string Used to verify the hostname for the targets. 6.1.23. .spec.podMetricsEndpoints[].tlsConfig.ca Description Certificate authority used when verifying server certificates. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 6.1.24. .spec.podMetricsEndpoints[].tlsConfig.ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 6.1.25. .spec.podMetricsEndpoints[].tlsConfig.ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 6.1.26. .spec.podMetricsEndpoints[].tlsConfig.cert Description Client certificate to present when doing client-authentication. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 6.1.27. .spec.podMetricsEndpoints[].tlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 6.1.28. .spec.podMetricsEndpoints[].tlsConfig.cert.secret Description Secret containing data to use for the targets. 
Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 6.1.29. .spec.podMetricsEndpoints[].tlsConfig.keySecret Description Secret containing the client key file for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 6.1.30. .spec.selector Description Selector to select Pod objects. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 6.1.31. .spec.selector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 6.1.32. .spec.selector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 6.2. API endpoints The following API endpoints are available: /apis/monitoring.coreos.com/v1/podmonitors GET : list objects of kind PodMonitor /apis/monitoring.coreos.com/v1/namespaces/{namespace}/podmonitors DELETE : delete collection of PodMonitor GET : list objects of kind PodMonitor POST : create a PodMonitor /apis/monitoring.coreos.com/v1/namespaces/{namespace}/podmonitors/{name} DELETE : delete a PodMonitor GET : read the specified PodMonitor PATCH : partially update the specified PodMonitor PUT : replace the specified PodMonitor 6.2.1. /apis/monitoring.coreos.com/v1/podmonitors Table 6.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. 
Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the "next key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection.
Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as follows: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is sent when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is sent when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list objects of kind PodMonitor Table 6.2. HTTP responses HTTP code Response body 200 - OK PodMonitorList schema 401 - Unauthorized Empty 6.2.2. /apis/monitoring.coreos.com/v1/namespaces/{namespace}/podmonitors Table 6.3. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 6.4. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of PodMonitor Table 6.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the "next key".
This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as follows: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is sent when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is sent when the state is synced at least to the moment when request started being processed.
- resourceVersionMatch set to any other value or unset: an Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 6.6. HTTP responses HTTP code Response body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind PodMonitor Table 6.7. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the "next key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true.
The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset. resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset. sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with the "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When the sendInitialEvents option is set, the resourceVersionMatch option must also be set. The semantics of the watch request are as follows: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion", and the bookmark event is sent when the state is synced to a resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read", and the bookmark event is sent when the state is synced at least to the moment when the request started being processed. - resourceVersionMatch set to any other value or unset: an Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 6.8. HTTP responses HTTP code Response body 200 - OK PodMonitorList schema 401 - Unauthorized Empty HTTP method POST Description create a PodMonitor Table 6.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be no more than 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint .
fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.10. Body parameters Parameter Type Description body PodMonitor schema Table 6.11. HTTP responses HTTP code Response body 200 - OK PodMonitor schema 201 - Created PodMonitor schema 202 - Accepted PodMonitor schema 401 - Unauthorized Empty 6.2.3. /apis/monitoring.coreos.com/v1/namespaces/{namespace}/podmonitors/{name} Table 6.12. Global path parameters Parameter Type Description name string name of the PodMonitor namespace string object name and auth scope, such as for teams and projects Table 6.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a PodMonitor Table 6.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be a non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per-object value if not specified. Zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 6.15. Body parameters Parameter Type Description body DeleteOptions schema Table 6.16. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified PodMonitor Table 6.17. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from.
See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset. Table 6.18. HTTP responses HTTP code Response body 200 - OK PodMonitor schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified PodMonitor Table 6.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be no more than 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means the user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 6.20. Body parameters Parameter Type Description body Patch schema Table 6.21. HTTP responses HTTP code Response body 200 - OK PodMonitor schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified PodMonitor Table 6.22. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be no more than 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered.
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.23. Body parameters Parameter Type Description body PodMonitor schema Table 6.24. HTTP responses HTTP code Response body 200 - OK PodMonitor schema 201 - Created PodMonitor schema 401 - Unauthorized Empty
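To make the watch semantics above concrete, the following is a minimal sketch of a raw watch request against the namespaced podmonitors endpoint using the Python requests library. It is not part of the reference itself: the in-cluster service-account paths and the kubernetes.default.svc address are standard conventions, the namespace is a placeholder, and the cluster must actually support the sendInitialEvents option for the initial snapshot and bookmark to be delivered.

```python
# Minimal sketch, assuming in-cluster service-account credentials and a
# cluster that supports sendInitialEvents on watch requests.
import json
import requests

TOKEN = open("/var/run/secrets/kubernetes.io/serviceaccount/token").read()
CA = "/var/run/secrets/kubernetes.io/serviceaccount/ca.crt"

url = ("https://kubernetes.default.svc/apis/monitoring.coreos.com/v1"
       "/namespaces/example-namespace/podmonitors")
params = {
    "watch": "true",
    "sendInitialEvents": "true",
    "resourceVersionMatch": "NotOlderThan",  # required when sendInitialEvents is set
    "allowWatchBookmarks": "true",
}

with requests.get(url, params=params, verify=CA, stream=True,
                  headers={"Authorization": f"Bearer {TOKEN}"}) as resp:
    resp.raise_for_status()
    for line in resp.iter_lines():
        if not line:
            continue
        event = json.loads(line)  # one JSON watch event per line
        meta = event["object"].get("metadata", {})
        annotations = meta.get("annotations") or {}
        if (event["type"] == "BOOKMARK"
                and annotations.get("k8s.io/initial-events-end") == "true"):
            # End of the initial snapshot; later events are live changes.
            print("initial state synced at resourceVersion",
                  meta.get("resourceVersion"))
            continue
        print(event["type"], meta.get("name"))
```

The same query-parameter conventions apply to the other verbs documented above; for example, appending dryRun=All to the POST request validates a PodMonitor creation without persisting it.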
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/monitoring_apis/podmonitor-monitoring-coreos-com-v1
Inspecting your SBOM using Red Hat Trusted Profile Analyzer
Inspecting your SBOM using Red Hat Trusted Profile Analyzer Red Hat Trusted Application Pipeline 1.4 Learn how to scan your SBOM to gain actionable information about the security posture of your application. Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_trusted_application_pipeline/1.4/html/inspecting_your_sbom_using_red_hat_trusted_profile_analyzer/index
Chapter 18. Concurrent Versioning System
Chapter 18. Concurrent Versioning System The Concurrent Versioning System (CVS) is a free revision-control system. It is used to monitor and keep track of modifications to a central set of files which are usually accessed by several different users. It is commonly used by programmers to manage a source code repository and is widely used by open source developers. In Red Hat Enterprise Linux, the cvs package provides CVS. Enter the following command to see if the cvs package is installed: If it is not installed and you want to use CVS, use the yum utility as root to install it: 18.1. CVS and SELinux The cvs daemon runs labeled with the cvs_t type. By default in Red Hat Enterprise Linux, CVS is only allowed to read and write certain directories. The label cvs_data_t defines which areas cvs has read and write access to. When using CVS with SELinux, assigning the correct label is essential for clients to have full access to the area reserved for CVS data.
[ "~]USD rpm -q cvs package cvs is not installed", "~]# yum install cvs" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/selinux_users_and_administrators_guide/chap-managing_confined_services-concurrent_versioning_system
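A file's SELinux label can also be inspected programmatically, not only with ls -Z. The following is a small illustrative sketch, not taken from the guide; the path is hypothetical, and it assumes a Linux system with SELinux enabled.

```python
# Illustrative sketch: read a file's SELinux label via the
# security.selinux extended attribute (the same information `ls -Z` shows).
import os

def selinux_label(path: str) -> str:
    raw = os.getxattr(path, "security.selinux")  # bytes, NUL-terminated
    return raw.decode().rstrip("\x00")

# On a directory reserved for CVS data, the type field should be cvs_data_t.
print(selinux_label("/path/to/cvs/data"))
```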
Appendix A. Supported Configurations
Appendix A. Supported Configurations A.1. Supported Data Sources and Translators The following table provides a list of data sources and translators that are supported by Red Hat. Table A.1. Supported Data Sources and Translators Data Source Translator Driver Apache Accumulo 1.6.5 accumulo - Apache Cassandra 2.2.4 cassandra - Apache Hive hive v2.0 Hive JDBC driver Apache Solr 4.9.0 solr - Cloudera Hadoop - - EDS 5.5.1 teiid current Teiid Driver Files - delimited, fixed length file - Generic Datasource-JDBC ansi jdbc-ansi - Generic Datasource-JDBC simple (postgresql84) jdbc-simple postgresql 8.4 Google Spreadsheet google-spreadsheet - Greenplum 4.1 greenplum postgresql 9.0 Hortonworks Hadoop - - Hortonworks Data Platform 2.4 hive 1.2.1 HPE Vertica 7.2.1 vertica 7.2.1-0 HSQL (for test/examples only) - - IBM DB2 9.7 db2 Universal Driver v4.x IBM DB2 10 db2 Universal Driver v4.x IBM DB2 11.1 db2 Universal Driver v4.x IBM Informix 12.10 informix 4.10.JC5DE Ingres 10 ingres 4.1.4 Intel Hadoop - - JBoss Data Grid 6.4 (remote client - hotrod) infinispan-cache-dsl Hot Rod Client JBoss Data Grid 6.4 (library mode) infinispan-cache - JBoss Data Grid 7.1.0 (hotrod) infinispan-hotrod Hot Rod Client LDAP/ActiveDirectory v3 ldap - Mainframe (CICS,IMS,VSAM) - - MariaDB mysql5 mysql 5.1.22 ModeShape/JCR 3.1 modeshape 3.8.4 MongoDB 2.2 mongodb - MongoDB 3.4 mongodb - MS Access 2010 - - MS Access 2013 - - MS Excel 2010 excel - MS Excel 2013 excel - MS SQL Server 2008 sqlserver Microsoft SQL Server JDBC Driver 4 MS SQL Server 2012 sqlserver Microsoft SQL Server JDBC Driver 4 MS SQL Server 2014 sqlserver Microsoft SQL Server JDBC Driver 4 MySQL 5.1 mysql5 v5.1 MySQL 5.5 mysql5 v5.5 Netezza 6.0.2 netezza - Oracle 10g R2 oracle Oracle JDBC Driver v10 Oracle 11g R2 oracle Oracle JDBC Driver v12 Oracle 12c oracle Oracle JDBC Driver v12 PostgreSQL 8.4 postgresql postgresql 8.4 PostgreSQL 9.2 postgresql postgresql 9.2 PostgreSQL 9.6 postgresql JDBC4 PostgreSQL Driver, Version 42.2.2.jre6 PostgreSQL 10.1 postgresql JDBC4 PostgreSQL Driver, Version 42.2.2.jre6 REST/JSON over HTTP ws - Red Hat Enterprise Linux 5.5/6 PostgreSQL config - - Red Hat Directory Server (RHDS) 9.0 ldap - Red Hat JBoss Data Virtualization teiid current Teiid Driver Salesforce.com API 22 salesforce - SAP Netweaver Gateway (OData) sap-nw-gateway - Support SAP Service Registry as a Data Source - - SAP HANA v1.0 SP 10+ hana ngdbc.jar v1.00 in-memory JDBC driver SAP IQ 16 sybaseiq jconn4 (JDBC 4.0) Sybase ASE 15 sybase jconn4 (JDBC 4.0) Teradata Express 12 teradata - Webservices ws - XML Files FILE - Note that MS Excel is supported only inasmuch as there is a write procedure.
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/user_guide_volume_1_teiid_designer/appe-supported_configurations
Installing on IBM Power
Installing on IBM Power OpenShift Container Platform 4.17 Installing OpenShift Container Platform on IBM Power Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/installing_on_ibm_power/index
function::ansi_set_color
function::ansi_set_color Name function::ansi_set_color - Set the ANSI Select Graphic Rendition mode. Synopsis Arguments fg Foreground color to set. Description Sends the ANSI code for Select Graphic Rendition mode for the given foreground color: Black (30), Blue (34), Green (32), Cyan (36), Red (31), Purple (35), Brown (33), Light Gray (37).
[ "ansi_set_color(fg:long)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-ansi-set-color
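For readers unfamiliar with Select Graphic Rendition codes, the following Python sketch illustrates the escape sequence the description above implies, ESC [ fg m; it mirrors only the documented behavior and is not the SystemTap tapset implementation itself.

```python
# Illustrative sketch of the SGR foreground-color escape sequence.
def ansi_set_color(fg: int) -> None:
    print(f"\033[{fg}m", end="")

ansi_set_color(31)            # Red (31), per the color table above
print("alert text", end="")
print("\033[0m")              # SGR reset, so later output is uncolored
```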
1.2. A Review of Certificate System Subsystems
1.2. A Review of Certificate System Subsystems Red Hat Certificate System provides five different subsystems, each focusing on different aspects of a PKI deployment. These subsystems work together to create a public key infrastructure (PKI). Depending on what subsystems are installed, a PKI can function as a token management system (TMS) or a non-token management system (non-TMS). For descriptions of the subsystems and TMS and non-TMS environments, see the A Review of Certificate System Subsystems section in the Red Hat Certificate System Planning, Installation, and Deployment Guide . Enterprise Security Client The Enterprise Security Client is not a subsystem since it does not perform any operations with certificates, keys, or tokens. The Enterprise Security Client is a user interface which allows people to manage certificates on smart cards very easily. The Enterprise Security Client sends all token operations, such as certificate requests, to the token processing system (TPS), which then sends them to the certificate authority (CA). For more information, see Red Hat Certificate System Managing Smart Cards with the Enterprise Security Client .
null
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide/overview-subsystems
Chapter 1. Red Hat Software Collections 3.7
Chapter 1. Red Hat Software Collections 3.7 This chapter serves as an overview of the Red Hat Software Collections 3.7 content set. It provides a list of components and their descriptions, sums up changes in this version, documents relevant compatibility information, and lists known issues. 1.1. About Red Hat Software Collections For certain applications, more recent versions of some software components are often needed in order to use their latest new features. Red Hat Software Collections is a Red Hat offering that provides a set of dynamic programming languages, database servers, and various related packages that are either more recent than their equivalent versions included in the base Red Hat Enterprise Linux system, or are available for this system for the first time. Red Hat Software Collections 3.7 is available for Red Hat Enterprise Linux 7. For a complete list of components that are distributed as part of Red Hat Software Collections and a brief summary of their features, see Section 1.2, "Main Features" . Red Hat Software Collections does not replace the default system tools provided with Red Hat Enterprise Linux 7. Instead, a parallel set of tools is installed in the /opt/ directory and can be optionally enabled per application by the user using the supplied scl utility. The default versions of Perl or PostgreSQL, for example, remain those provided by the base Red Hat Enterprise Linux system. Note In Red Hat Enterprise Linux 8, similar components are provided as Application Streams . All Red Hat Software Collections components are fully supported under Red Hat Enterprise Linux Subscription Level Agreements, are functionally complete, and are intended for production use. Important bug fix and security errata are issued to Red Hat Software Collections subscribers in a similar manner to Red Hat Enterprise Linux for at least two years from the release of each major version. In each major release stream, each version of a selected component remains backward compatible. For detailed information about length of support for individual components, refer to the Red Hat Software Collections Product Life Cycle document. 1.1.1. Red Hat Developer Toolset Red Hat Developer Toolset is a part of Red Hat Software Collections, included as a separate Software Collection. For more information about Red Hat Developer Toolset, refer to the Red Hat Developer Toolset Release Notes and the Red Hat Developer Toolset User Guide . 1.2. Main Features Table 1.1, "Red Hat Software Collections Components" lists components that are supported at the time of the Red Hat Software Collections 3.7 release. All Software Collections are currently supported only on Red Hat Enterprise Linux 7. Table 1.1. Red Hat Software Collections Components Component Software Collection Description Red Hat Developer Toolset 10.1 devtoolset-10 Red Hat Developer Toolset is designed for developers working on the Red Hat Enterprise Linux platform. It provides current versions of the GNU Compiler Collection , GNU Debugger , and other development, debugging, and performance monitoring tools. For a complete list of components, see the Red Hat Developer Toolset Components table in the Red Hat Developer Toolset User Guide . Perl 5.30.1 rh-perl530 A release of Perl, a high-level programming language that is commonly used for system administration utilities and web programming. The rh-perl530 Software Collection provides additional utilities, scripts, and database connectors for MySQL , PostgreSQL , and SQLite . 
It includes the DateTime Perl module and the mod_perl Apache httpd module, which is supported only with the httpd24 Software Collection. Additionally, it provides the cpanm utility for easy installation of CPAN modules, the LWP::UserAgent module for communicating with HTTP servers, and the LWP::Protocol::https module for securing the communication. The rh-perl530 packaging is aligned with upstream; the perl530-perl package also installs core modules, while the interpreter is provided by the perl-interpreter package. PHP 7.3.20 rh-php73 A release of PHP 7.3 with PEAR 1.10.9, APCu 5.1.17, and the Xdebug extension. Python 2.7.18 python27 A release of Python 2.7 with a number of additional utilities. This Python version provides various features and enhancements, including an ordered dictionary type, faster I/O operations, and improved forward compatibility with Python 3. The python27 Software Collection contains the Python 2.7.18 interpreter, a set of extension libraries useful for programming web applications and mod_wsgi (only supported with the httpd24 Software Collection), MySQL and PostgreSQL database connectors, and numpy and scipy. Python 3.8.6 rh-python38 The rh-python38 Software Collection contains Python 3.8, which introduces new Python modules, such as contextvars, dataclasses, or importlib.resources, new language features, improved developer experience, and performance improvements. In addition, a set of popular extension libraries is provided, including mod_wsgi (supported only together with the httpd24 Software Collection), numpy, scipy, and the psycopg2 PostgreSQL database connector. Ruby 2.6.7 rh-ruby26 A release of Ruby 2.6. This version provides multiple performance improvements and new features, such as endless ranges, the Binding#source_location method, and the $SAFE process global state. Ruby 2.6.0 maintains source-level backward compatibility with Ruby 2.5. Ruby 2.7.3 rh-ruby27 A release of Ruby 2.7. This version provides multiple performance improvements and new features, such as Compaction GC or a command-line interface for the LALR(1) parser generator, and an enhancement to REPL. Ruby 2.7 maintains source-level backward compatibility with Ruby 2.6. Ruby 3.0.1 rh-ruby30 A release of Ruby 3.0. This version provides multiple performance improvements and new features, such as Ractor, Fiber Scheduler, and the RBS language. Ruby 3.0 maintains source-level backward compatibility with Ruby 2.7. MariaDB 10.3.27 rh-mariadb103 A release of MariaDB, an alternative to MySQL for users of Red Hat Enterprise Linux. For all practical purposes, MySQL is binary compatible with MariaDB and can be replaced with it without any data conversions. This version introduces system-versioned tables, invisible columns, a new instant ADD COLUMN operation for InnoDB, and a JDBC connector for MariaDB and MySQL. MariaDB 10.5.9 rh-mariadb105 A release of MariaDB, an alternative to MySQL for users of Red Hat Enterprise Linux. For all practical purposes, MySQL is binary compatible with MariaDB and can be replaced with it without any data conversions. This version includes various new features, MariaDB Galera Cluster upgraded to version 4, and PAM plug-in version 2.0. MySQL 8.0.21 rh-mysql80 A release of the MySQL server, which introduces a number of new security and account management features and enhancements.
PostgreSQL 10.15 rh-postgresql10 A release of PostgreSQL, which includes a significant performance improvement and a number of new features, such as logical replication using the publish and subscribe keywords, or stronger password authentication based on the SCRAM-SHA-256 mechanism . PostgreSQL 12.5 rh-postgresql12 A release of PostgreSQL, which provides the pgaudit extension, various enhancements to partitioning and parallelism, support for the SQL/JSON path language, and performance improvements. PostgreSQL 13.2 rh-postgresql13 A release of PostgreSQL, which enables improved query planning and introduces various performance improvements and two new packages, pg_repack and plpython3 . Node.js 12.21.0 rh-nodejs12 A release of Node.js with V8 engine version 7.6, support for ES6 modules, and improved support for native modules. Node.js 14.16.0 rh-nodejs14 A release of Node.js with V8 version 8.3, a new experimental WebAssembly System Interface (WASI), and a new experimental Async Local Storage API. nginx 1.16.1 rh-nginx116 A release of nginx, a web and proxy server with a focus on high concurrency, performance, and low memory usage. This version introduces numerous updates related to SSL, several new directives and parameters, and various enhancements. nginx 1.18.0 rh-nginx118 A release of nginx, a web and proxy server with a focus on high concurrency, performance, and low memory usage. This version introduces enhancements to HTTP request rate and connection limiting, and a new auth_delay directive . In addition, support for new variables has been added to multiple directives. Apache httpd 2.4.34 httpd24 A release of the Apache HTTP Server (httpd), including a high performance event-based processing model, enhanced SSL module and FastCGI support . The mod_auth_kerb , mod_auth_mellon , and ModSecurity modules are also included. Varnish Cache 6.0.6 rh-varnish6 A release of Varnish Cache, a high-performance HTTP reverse proxy. This version includes support for Unix Domain Sockets (both for clients and for back-end servers), new level of the VCL language ( vcl 4.1 ), and improved HTTP/2 support . Maven 3.6.1 rh-maven36 A release of Maven, a software project management and comprehension tool. This release provides various enhancements and bug fixes. Git 2.27.0 rh-git227 A release of Git, a distributed revision control system with a decentralized architecture. As opposed to centralized version control systems with a client-server model, Git ensures that each working copy of a Git repository is its exact copy with complete revision history. This version introduces numerous enhancements; for example, the git checkout command split into git switch and git restore , and changed behavior of the git rebase command . In addition, Git Large File Storage (LFS) has been updated to version 2.11.0. Redis 5.0.5 rh-redis5 A release of Redis 5.0, a persistent key-value database . Redis now provides redis-trib , a cluster management tool . HAProxy 1.8.24 rh-haproxy18 A release of HAProxy 1.8, a reliable, high-performance network load balancer for TCP and HTTP-based applications. JDK Mission Control 8.0.0 rh-jmc This Software Collection includes JDK Mission Control (JMC) , a powerful profiler for HotSpot JVMs. JMC provides an advanced set of tools for efficient and detailed analysis of extensive data collected by the JDK Flight Recorder. JMC requires JDK version 11 or later to run. Target Java applications must run with at least OpenJDK version 8 so that JMC can access JDK Flight Recorder features. 
The rh-jmc Software Collection requires the rh-maven36 Software Collection. Previously released Software Collections remain available in the same distribution channels. All Software Collections, including retired components, are listed in the Table 1.2, "All Available Software Collections" . Software Collections that are no longer supported are marked with an asterisk ( * ). See the Red Hat Software Collections Product Life Cycle document for information on the length of support for individual components. For detailed information regarding previously released components, refer to the Release Notes for earlier versions of Red Hat Software Collections. Table 1.2. All Available Software Collections Component Software Collection Availability Architectures supported on RHEL7 Components New in Red Hat Software Collections 3.7 MariaDB 10.5.9 rh-mariadb105 RHEL7 x86_64, s390x, ppc64le PostgreSQL 13.2 rh-postgresql13 RHEL7 x86_64, s390x, ppc64le Ruby 3.0.1 rh-ruby30 RHEL7 x86_64, s390x, ppc64le Table 1.2. All Available Software Collections Components Updated in Red Hat Software Collections 3.7 Red Hat Developer Toolset 10.1 devtoolset-10 RHEL7 x86_64, s390x, ppc64, ppc64le JDK Mission Control 8.0.0 rh-jmc RHEL7 x86_64 Ruby 2.7.3 rh-ruby27 RHEL7 x86_64, s390x, aarch64, ppc64le Ruby 2.6.7 rh-ruby26 RHEL7 x86_64, s390x, aarch64, ppc64le Table 1.2. All Available Software Collections Components Last Updated in Red Hat Software Collections 3.6 Git 2.27.0 rh-git227 RHEL7 x86_64, s390x, ppc64le nginx 1.18.0 rh-nginx118 RHEL7 x86_64, s390x, ppc64le Node.js 14.16.0 rh-nodejs14 RHEL7 x86_64, s390x, ppc64le Apache httpd 2.4.34 httpd24 RHEL7 x86_64, s390x, aarch64, ppc64le PHP 7.3.20 rh-php73 RHEL7 x86_64, s390x, aarch64, ppc64le HAProxy 1.8.24 rh-haproxy18 RHEL7 x86_64 Perl 5.30.1 rh-perl530 RHEL7 x86_64, s390x, aarch64, ppc64le Ruby 2.5.9 rh-ruby25 * RHEL7 x86_64, s390x, aarch64, ppc64le Table 1.2. All Available Software Collections Components Last Updated in Red Hat Software Collections 3.5 Red Hat Developer Toolset 9.1 devtoolset-9 RHEL7 x86_64, s390x, aarch64, ppc64, ppc64le Python 3.8.6 rh-python38 RHEL7 x86_64, s390x, aarch64, ppc64le Varnish Cache 6.0.6 rh-varnish6 RHEL7 x86_64, s390x, aarch64, ppc64le Apache httpd 2.4.34 (the last update for RHEL6) httpd24 (RHEL6)* RHEL6 x86_64 Table 1.2. All Available Software Collections Components Last Updated in Red Hat Software Collections 3.4 Node.js 12.21.0 rh-nodejs12 RHEL7 x86_64, s390x, aarch64, ppc64le nginx 1.16.1 rh-nginx116 RHEL7 x86_64, s390x, aarch64, ppc64le PostgreSQL 12.5 rh-postgresql12 RHEL7 x86_64, s390x, aarch64, ppc64le Maven 3.6.1 rh-maven36 RHEL7 x86_64, s390x, aarch64, ppc64le Table 1.2. All Available Software Collections Components Last Updated in Red Hat Software Collections 3.3 Red Hat Developer Toolset 8.1 devtoolset-8 * RHEL6, RHEL7 x86_64, s390x, aarch64, ppc64, ppc64le MariaDB 10.3.27 rh-mariadb103 RHEL7 x86_64, s390x, aarch64, ppc64le Redis 5.0.5 rh-redis5 RHEL7 x86_64, s390x, aarch64, ppc64le Table 1.2. All Available Software Collections Components Last Updated in Red Hat Software Collections 3.2 PHP 7.2.24 rh-php72 * RHEL7 x86_64, s390x, aarch64, ppc64le MySQL 8.0.21 rh-mysql80 RHEL7 x86_64, s390x, aarch64, ppc64le Node.js 10.21.0 rh-nodejs10 * RHEL7 x86_64, s390x, aarch64, ppc64le nginx 1.14.1 rh-nginx114 * RHEL7 x86_64, s390x, aarch64, ppc64le Git 2.18.4 rh-git218 * RHEL7 x86_64, s390x, aarch64, ppc64le Table 1.2. 
All Available Software Collections Components Last Updated in Red Hat Software Collections 3.1 Red Hat Developer Toolset 7.1 devtoolset-7 * RHEL6, RHEL7 x86_64, s390x, aarch64, ppc64, ppc64le Perl 5.26.3 rh-perl526 * RHEL7 x86_64, s390x, aarch64, ppc64le MongoDB 3.6.3 rh-mongodb36 * RHEL7 x86_64, s390x, aarch64, ppc64le Varnish Cache 5.2.1 rh-varnish5 * RHEL7 x86_64, s390x, aarch64, ppc64le PostgreSQL 10.15 rh-postgresql10 RHEL7 x86_64, s390x, aarch64, ppc64le PHP 7.0.27 rh-php70 * RHEL6, RHEL7 x86_64 MySQL 5.7.24 rh-mysql57 * RHEL6, RHEL7 x86_64, s390x, aarch64, ppc64le Table 1.2. All Available Software Collections Components Last Updated in Red Hat Software Collections 3.0 PHP 7.1.8 rh-php71 * RHEL7 x86_64, s390x, aarch64, ppc64le nginx 1.12.1 rh-nginx112 * RHEL7 x86_64, s390x, aarch64, ppc64le Python 3.6.12 rh-python36 * RHEL6, RHEL7 x86_64, s390x, aarch64, ppc64le Maven 3.5.0 rh-maven35 * RHEL7 x86_64, s390x, aarch64, ppc64le MariaDB 10.2.22 rh-mariadb102 * RHEL6, RHEL7 x86_64, s390x, aarch64, ppc64le PostgreSQL 9.6.19 rh-postgresql96 * RHEL6, RHEL7 x86_64, s390x, aarch64, ppc64le MongoDB 3.4.9 rh-mongodb34 * RHEL6, RHEL7 x86_64, s390x, aarch64, ppc64le Node.js 8.11.4 rh-nodejs8 * RHEL7 x86_64, s390x, aarch64, ppc64le Table 1.2. All Available Software Collections Components Last Updated in Red Hat Software Collections 2.4 Red Hat Developer Toolset 6.1 devtoolset-6 * RHEL6, RHEL7 x86_64, s390x, aarch64, ppc64, ppc64le Scala 2.10.6 rh-scala210 * RHEL7 x86_64 nginx 1.10.2 rh-nginx110 * RHEL6, RHEL7 x86_64 Node.js 6.11.3 rh-nodejs6 * RHEL6, RHEL7 x86_64, s390x, aarch64, ppc64le Ruby 2.4.6 rh-ruby24 * RHEL6, RHEL7 x86_64 Ruby on Rails 5.0.1 rh-ror50 * RHEL6, RHEL7 x86_64 Eclipse 4.6.3 rh-eclipse46 * RHEL7 x86_64 Python 2.7.18 python27 RHEL6*, RHEL7 x86_64, s390x, aarch64, ppc64le Thermostat 1.6.6 rh-thermostat16 * RHEL6, RHEL7 x86_64 Maven 3.3.9 rh-maven33 * RHEL6, RHEL7 x86_64 Common Java Packages rh-java-common * RHEL6, RHEL7 x86_64 Table 1.2. All Available Software Collections Components Last Updated in Red Hat Software Collections 2.3 Git 2.9.3 rh-git29 * RHEL6, RHEL7 x86_64, s390x, aarch64, ppc64le Redis 3.2.4 rh-redis32 * RHEL6, RHEL7 x86_64 Perl 5.24.0 rh-perl524 * RHEL6, RHEL7 x86_64 Python 3.5.1 rh-python35 * RHEL6, RHEL7 x86_64 MongoDB 3.2.10 rh-mongodb32 * RHEL6, RHEL7 x86_64 Ruby 2.3.8 rh-ruby23 * RHEL6, RHEL7 x86_64 PHP 5.6.25 rh-php56 * RHEL6, RHEL7 x86_64 Table 1.2. All Available Software Collections Components Last Updated in Red Hat Software Collections 2.2 Red Hat Developer Toolset 4.1 devtoolset-4 * RHEL6, RHEL7 x86_64 MariaDB 10.1.29 rh-mariadb101 * RHEL6, RHEL7 x86_64 MongoDB 3.0.11 upgrade collection rh-mongodb30upg * RHEL6, RHEL7 x86_64 Node.js 4.6.2 rh-nodejs4 * RHEL6, RHEL7 x86_64 PostgreSQL 9.5.14 rh-postgresql95 * RHEL6, RHEL7 x86_64 Ruby on Rails 4.2.6 rh-ror42 * RHEL6, RHEL7 x86_64 MongoDB 2.6.9 rh-mongodb26 * RHEL6, RHEL7 x86_64 Thermostat 1.4.4 thermostat1 * RHEL6, RHEL7 x86_64 Table 1.2. All Available Software Collections Components Last Updated in Red Hat Software Collections 2.1 Varnish Cache 4.0.3 rh-varnish4 * RHEL6, RHEL7 x86_64 nginx 1.8.1 rh-nginx18 * RHEL6, RHEL7 x86_64 Node.js 0.10 nodejs010 * RHEL6, RHEL7 x86_64 Maven 3.0.5 maven30 * RHEL6, RHEL7 x86_64 V8 3.14.5.10 v8314 * RHEL6, RHEL7 x86_64 Table 1.2. 
All Available Software Collections Components Last Updated in Red Hat Software Collections 2.0 Red Hat Developer Toolset 3.1 devtoolset-3 * RHEL6, RHEL7 x86_64 Perl 5.20.1 rh-perl520 * RHEL6, RHEL7 x86_64 Python 3.4.2 rh-python34 * RHEL6, RHEL7 x86_64 Ruby 2.2.9 rh-ruby22 * RHEL6, RHEL7 x86_64 Ruby on Rails 4.1.5 rh-ror41 * RHEL6, RHEL7 x86_64 MariaDB 10.0.33 rh-mariadb100 * RHEL6, RHEL7 x86_64 MySQL 5.6.40 rh-mysql56 * RHEL6, RHEL7 x86_64 PostgreSQL 9.4.14 rh-postgresql94 * RHEL6, RHEL7 x86_64 Passenger 4.0.50 rh-passenger40 * RHEL6, RHEL7 x86_64 PHP 5.4.40 php54 * RHEL6, RHEL7 x86_64 PHP 5.5.21 php55 * RHEL6, RHEL7 x86_64 nginx 1.6.2 nginx16 * RHEL6, RHEL7 x86_64 DevAssistant 0.9.3 devassist09 * RHEL6, RHEL7 x86_64 Table 1.2. All Available Software Collections Components Last Updated in Red Hat Software Collections 1 Git 1.9.4 git19 * RHEL6, RHEL7 x86_64 Perl 5.16.3 perl516 * RHEL6, RHEL7 x86_64 Python 3.3.2 python33 * RHEL6, RHEL7 x86_64 Ruby 1.9.3 ruby193 * RHEL6, RHEL7 x86_64 Ruby 2.0.0 ruby200 * RHEL6, RHEL7 x86_64 Ruby on Rails 4.0.2 ror40 * RHEL6, RHEL7 x86_64 MariaDB 5.5.53 mariadb55 * RHEL6, RHEL7 x86_64 MongoDB 2.4.9 mongodb24 * RHEL6, RHEL7 x86_64 MySQL 5.5.52 mysql55 * RHEL6, RHEL7 x86_64 PostgreSQL 9.2.18 postgresql92 * RHEL6, RHEL7 x86_64 Legend: RHEL6 - Red Hat Enterprise Linux 6 RHEL7 - Red Hat Enterprise Linux 7 x86_64 - AMD and Intel 64-bit architectures s390x - The 64-bit IBM Z architecture aarch64 - The 64-bit ARM architecture ppc64 - IBM POWER, big endian ppc64le - IBM POWER, little endian * - Retired component; this Software Collection is no longer supported The tables above list the latest versions available through asynchronous updates. Note that Software Collections released in Red Hat Software Collections 2.0 and later include a rh- prefix in their names. Eclipse is available as a part of the Red Hat Developer Tools offering. 1.3. Changes in Red Hat Software Collections 3.7 1.3.1. Overview Architectures The Red Hat Software Collections offering contains packages for Red Hat Enterprise Linux 7 running on the following architectures: AMD and Intel 64-bit architectures 64-bit IBM Z IBM POWER, little endian For a full list of components and their availability, see Table 1.2, "All Available Software Collections" . New Software Collections Red Hat Software Collections 3.7 adds the following new Software Collections: rh-mariadb105 - see Section 1.3.3, "Changes in MariaDB" rh-postgresql13 - see Section 1.3.4, "Changes in PostgreSQL" rh-ruby30 - see Section 1.3.5, "Changes in Ruby" All new Software Collections are available only for Red Hat Enterprise Linux 7. Updated Software Collections The following components have been updated in Red Hat Software Collections 3.7: devtoolset-10 - see Section 1.3.2, "Changes in Red Hat Developer Toolset" rh-jmc - see Section 1.3.6, "Changes in JDK Mission Control" rh-ruby27 - see Section 1.3.5, "Changes in Ruby" rh-ruby26 - see Section 1.3.5, "Changes in Ruby" In addition, a new package, rh-postgresql12-pg_repack, is now available for PostgreSQL 12.
Red Hat Software Collections Container Images The following container images are new in Red Hat Software Collections 3.7: rhscl/mariadb-105-rhel7 rhscl/postgresql-13-rhel7 rhscl/ruby-30-rhel7 The following container images have been updated in Red Hat Software Collections 3.7: rhscl/devtoolset-10-toolchain-rhel7 rhscl/devtoolset-10-perftools-rhel7 rhscl/ruby-27-rhel7 rhscl/ruby-26-rhel7 For more information about Red Hat Software Collections container images, see Section 3.4, "Red Hat Software Collections Container Images" . 1.3.2. Changes in Red Hat Developer Toolset The following components have been upgraded in Red Hat Developer Toolset 10.1 compared to the previous release: SystemTap to version 4.4 Dyninst to version 10.2.1 elfutils to version 0.182 In addition, bug fix updates are available for the following components: GCC GDB binutils annobin For detailed information on changes in 10.1, see the Red Hat Developer Toolset User Guide . 1.3.3. Changes in MariaDB The new rh-mariadb105 Software Collection provides MariaDB 10.5.9 . Notable enhancements over the previously available version 10.3 include: MariaDB now uses the unix_socket authentication plug-in by default. The plug-in enables users to use operating system credentials when connecting to MariaDB through the local Unix socket file. MariaDB supports a new FLUSH SSL command to reload SSL certificates without a server restart. MariaDB adds mariadb-* named binaries and mysql* symbolic links pointing to the mariadb-* binaries. For example, the mysqladmin , mysqlaccess , and mysqlshow symlinks point to the mariadb-admin , mariadb-access , and mariadb-show binaries, respectively. MariaDB supports a new INET6 data type for storing IPv6 addresses. MariaDB now uses the Perl Compatible Regular Expressions (PCRE) library version 2. The SUPER privilege has been split into several privileges to better align with each user role. As a result, certain statements have changed required privileges. MariaDB adds a new global variable, binlog_row_metadata , as well as system variables and status variables to control the amount of metadata logged. The default value of the eq_range_index_dive_limit variable has been changed from 0 to 200 . A new SHUTDOWN WAIT FOR ALL SLAVES server command and a new mysqladmin shutdown --wait-for-all-slaves option have been added to instruct the server to shut down only after the last binlog event has been sent to all connected replicas. In parallel replication, the slave_parallel_mode variable now defaults to optimistic . The InnoDB storage engine introduces the following changes: InnoDB now supports an instant DROP COLUMN operation and enables users to change the column order. Defaults of the following variables have been changed: innodb_adaptive_hash_index to OFF and innodb_checksum_algorithm to full_crc32 . Several InnoDB variables have been removed or deprecated. MariaDB Galera Cluster has been upgraded to version 4 with the following notable changes: Galera adds a new streaming replication feature, which supports replicating transactions of unlimited size. During an execution of streaming replication, a cluster replicates a transaction in small fragments. Galera now fully supports Global Transaction ID (GTID). The default value for the wsrep_on option in the /etc/my.cnf.d/galera.cnf file has changed from 1 to 0 to prevent end users from starting wsrep replication without configuring required additional options.
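To illustrate the new INET6 data type listed above, here is a minimal sketch using MariaDB Connector/Python (the mariadb package); the host, credentials, and database name are placeholders, and a reachable MariaDB 10.5 server is assumed.

```python
# Minimal sketch: store and read back an IPv6 address in an INET6 column.
import mariadb

conn = mariadb.connect(user="app", password="secret",
                       host="127.0.0.1", database="test")
cur = conn.cursor()
cur.execute("CREATE TABLE IF NOT EXISTS hosts (name VARCHAR(64), addr INET6)")
cur.execute("INSERT INTO hosts VALUES (?, ?)", ("gateway", "2001:db8::1"))
conn.commit()

cur.execute("SELECT name, addr FROM hosts")
for name, addr in cur:
    print(name, addr)  # the INET6 value comes back in its text form
conn.close()
```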
Changes to the PAM plug-in in MariaDB 10.5 include: MariaDB 10.5 adds a new version of the Pluggable Authentication Modules (PAM) plug-in. The PAM plug-in version 2.0 performs PAM authentication using a separate setuid root helper binary, which enables MariaDB to utilize additional PAM modules. In MariaDB 10.5 , the Pluggable Authentication Modules (PAM) plug-in and its related files have been moved to a new subpackage, mariadb-pam . This subpackage contains both PAM plug-in versions: version 2.0 is the default, and version 1.0 is available as the auth_pam_v1 shared object library. The rh-mariadb105-mariadb-pam package is not installed by default with the MariaDB server. To make the PAM authentication plug-in available in MariaDB 10.5 , install the rh-mariadb105-mariadb-pam package manually. The rh-mariadb105 Software Collection includes the rh-mariadb105-syspaths package, which installs packages that provide system-wide wrappers for binaries, scripts, manual pages, and others. After installing the rh-mariadb105*-syspaths packages, users are not required to use the scl enable command for correct functioning of the binaries and scripts provided by the rh-mariadb105* packages. Note that the *-syspaths packages conflict with the corresponding packages from the base Red Hat Enterprise Linux system. To find out more about syspaths, see the Red Hat Software Collections Packaging Guide . For compatibility notes and migration instructions, see Section 5.1, "Migrating to MariaDB 10.5" . For detailed changes in MariaDB 10.5 , see the upstream documentation . 1.3.4. Changes in PostgreSQL The new rh-postgresql13 Software Collection includes PostgreSQL 13.2 . This release introduces various enhancements over version 12, such as: Performance improvements resulting from de-duplication of B-tree index entries Improved performance for queries that use aggregates or partitioned tables Improved query planning when using extended statistics Parallelized vacuuming of indexes Incremental sorting For detailed changes, see the upstream release notes for PostgreSQL 13 . The following new subpackages are available with the rh-postgresql13 Software Collection: The pg_repack package provides a PostgreSQL extension that lets you remove bloat from tables and indexes, and optionally restore the physical order of clustered indexes. For details, see the upstream documentation regarding usage and examples . The pg_repack subpackage is now available also for the rh-postgresql12 Software Collection. The plpython3 package provides the PL/Python procedural language extension based on Python 3 . PL/Python enables you to write PostgreSQL functions in the Python programming language. For details, see the upstream documentation . Previously released PostgreSQL Software Collections include only the plpython package based on Python 2 . Red Hat Enterprise Linux 8 provides only plpython3 . The rh-postgresql13 Software Collection includes both plpython and plpython3 , so that you can migrate to plpython3 before upgrading to Red Hat Enterprise Linux 8. In addition, the rh-postgresql13 Software Collection includes the rh-postgresql13-syspaths package, which installs packages that provide system-wide wrappers for binaries, scripts, manual pages, and others. After installing the rh-postgresql13*-syspaths packages, users are not required to use the scl enable command for correct functioning of the binaries and scripts provided by the rh-postgresql13* packages.
Note that the *-syspaths packages conflict with the corresponding packages from the base Red Hat Enterprise Linux system. To find out more about syspaths, see the Red Hat Software Collections Packaging Guide . Note that support for Just-In-Time (JIT) compilation, available in upstream since PostgreSQL 11 , is not provided by the rh-postgresql13 Software Collection. For information on migration, see Section 5.3, "Migrating to PostgreSQL 13" . 1.3.5. Changes in Ruby The new rh-ruby30 Software Collection provides Ruby 3.0.1 , which introduces a number of performance improvements, bug fixes, and new features. Notable enhancements include: Concurrency and parallelism features: Ractor , an Actor-model abstraction that provides thread-safe parallel execution, is provided as an experimental feature. Fiber Scheduler has been introduced as an experimental feature. Fiber Scheduler intercepts blocking operations, which enables light-weight concurrency without changing existing code. Static analysis features: The RBS language has been introduced, which describes the structure of Ruby programs. The rbs gem has been added to parse type definitions written in RBS . The TypeProf utility has been introduced, which is a type analysis tool for Ruby code. Pattern matching with the case / in expression is no longer experimental. One-line pattern matching has been redesigned as an experimental feature. Find pattern has been added as an experimental feature. The following performance improvements have been implemented: Pasting long code into the Interactive Ruby Shell (IRB) is now significantly faster. The measure command has been added to IRB for time measurement. Other notable changes include: Keyword arguments have been separated from other arguments; see the upstream documentation for details. The default directory for user-installed gems is now $HOME/.local/share/gem/ unless the $HOME/.gem/ directory is already present. For more information about changes in Ruby 3.0 , see the upstream announcement for version 3.0.0 and 3.0.1 . The rh-ruby27 and rh-ruby26 Software Collections have been updated with security and bug fixes. 1.3.6. Changes in JDK Mission Control JDK Mission Control (JMC), provided by the rh-jmc Software Collection, has been upgraded from version 7.1.1 to version 8.0.0. Notable enhancements include: The Treemap viewer has been added to the JOverflow plug-in for visualizing memory usage by classes. The Threads graph has been enhanced with more filtering and zoom options. JDK Mission Control now provides support for opening JDK Flight Recorder recordings compressed with the LZ4 algorithm. New columns have been added to the Memory and TLAB views to help you identify areas of allocation pressure. Graph view has been added to improve visualization of stack traces. The Percentage column has been added to histogram tables. For more information, see the upstream release notes . 1.4. Compatibility Information Red Hat Software Collections 3.7 is available for all supported releases of Red Hat Enterprise Linux 7 on AMD and Intel 64-bit architectures, 64-bit IBM Z, and IBM POWER, little endian. Certain previously released components are available also for the 64-bit ARM architecture. For a full list of available components, see Table 1.2, "All Available Software Collections" . 1.5. Known Issues rh-mariadb105 component, BZ# 1942526 When the OQGraph storage engine plug-in is loaded into the MariaDB 10.5 server, MariaDB does not warn about dropping a non-existent table.
In particular, when the user attempts to drop a non-existent table using the DROP TABLE or DROP TABLE IF EXISTS SQL commands, MariaDB neither returns an error message nor logs a warning. Note that the OQGraph plug-in is provided by the mariadb-oqgraph-engine package, which is not installed by default. rh-mariadb component The rh-mariadb103 Software Collection provides the Pluggable Authentication Modules (PAM) plug-in version 1.0. The rh-mariadb105 Software Collection provides the plug-in versions 1.0 and 2.0; version 2.0 is the default. The PAM plug-in version 1.0 in MariaDB does not work. To work around this problem, use the PAM plug-in version 2.0 provided by rh-mariadb105 . rh-ruby27 component, BZ# 1836201 When a custom script requires the Psych YAML parser and afterwards uses the Gem.load_yaml method, running the script fails with the following error message: To work around this problem, add the gem 'psych' line to the script somewhere above the require 'psych' line, so that the relevant part of the script reads: ... gem 'psych' ... require 'psych' Gem.load_yaml multiple components, BZ# 1716378 Certain files provided by the Software Collections debuginfo packages might conflict with the corresponding debuginfo package files from the base Red Hat Enterprise Linux system or from other versions of Red Hat Software Collections components. For example, the python27-python-debuginfo package files might conflict with the corresponding files from the python-debuginfo package installed on the core system. Similarly, files from the httpd24-mod_auth_mellon-debuginfo package might conflict with similar files provided by the base system mod_auth_mellon-debuginfo package. To work around this problem, uninstall the base system debuginfo package prior to installing the Software Collection debuginfo package. rh-mysql80 component, BZ# 1646363 The mysql-connector-java database connector does not work with the MySQL 8.0 server. To work around this problem, use the mariadb-java-client database connector from the rh-mariadb103 Software Collection. rh-mysql80 component, BZ# 1646158 The default character set has been changed to utf8mb4 in MySQL 8.0 but this character set is unsupported by the php-mysqlnd database connector. Consequently, php-mysqlnd fails to connect in the default configuration. To work around this problem, specify a known character set as a parameter of the MySQL server configuration. For example, modify the /etc/opt/rh/rh-mysql80/my.cnf.d/mysql-server.cnf file to read: httpd24 component, BZ# 1429006 Since httpd 2.4.27 , the mod_http2 module is no longer supported with the default prefork Multi-Processing Module (MPM). To enable HTTP/2 support, edit the configuration file at /opt/rh/httpd24/root/etc/httpd/conf.modules.d/00-mpm.conf and switch to the event or worker MPM. Note that the HTTP/2 server-push feature does not work on the 64-bit ARM architecture, 64-bit IBM Z, and IBM POWER, little endian. httpd24 component, BZ# 1224763 When using the mod_proxy_fcgi module with FastCGI Process Manager (PHP-FPM), httpd uses port 8000 for the FastCGI protocol by default instead of the correct port 9000 . To work around this problem, specify the correct port explicitly in the configuration. httpd24 component, BZ# 1382706 When SELinux is enabled, the LD_LIBRARY_PATH environment variable is not passed through to CGI scripts invoked by httpd . As a consequence, in some cases it is impossible to invoke executables from Software Collections enabled in the /opt/rh/httpd24/service-environment file from CGI scripts run by httpd .
To work around this problem, set LD_LIBRARY_PATH as desired from within the CGI script. httpd24 component Compiling external applications against the Apache Portable Runtime (APR) and APR-util libraries from the httpd24 Software Collection is not supported. The LD_LIBRARY_PATH environment variable is not set in httpd24 because it is not required by any application in this Software Collection. scl-utils component In Red Hat Enterprise Linux 7.5 and earlier, due to an architecture-specific macro bug in the scl-utils package, the <collection>/root/usr/lib64/ directory does not have the correct package ownership on the 64-bit ARM architecture and on IBM POWER, little endian. As a consequence, this directory is not removed when a Software Collection is uninstalled. To work around this problem, manually delete <collection>/root/usr/lib64/ when removing a Software Collection. maven component When the user has installed both the Red Hat Enterprise Linux system version of the maven-local package and the rh-maven*-maven-local package, XMvn, a tool used for building Java RPM packages, when run from the Maven Software Collection, tries to read the configuration file from the base system and fails. To work around this problem, uninstall the maven-local package from the base Red Hat Enterprise Linux system. perl component It is impossible to install more than one mod_perl.so library. As a consequence, it is not possible to use the mod_perl module from more than one Perl Software Collection. httpd , mariadb , mysql , nodejs , perl , php , python , and ruby components, BZ# 1072319 When uninstalling the httpd24 , rh-mariadb* , rh-mysql* , rh-nodejs* , rh-perl* , rh-php* , python27 , rh-python* , or rh-ruby* packages, the order of uninstalling can be relevant due to ownership of dependent packages. As a consequence, some directories and files might not be removed properly and might remain on the system. mariadb , mysql components, BZ# 1194611 Since MariaDB 10 and MySQL 5.6 , the rh-mariadb*-mariadb-server and rh-mysql*-mysql-server packages no longer provide the test database by default. Although this database is not created during initialization, the grant tables are prefilled with the same values as when test was created by default. As a consequence, upon a later creation of the test or test_* databases, these databases have less restricted access rights than is the default for new databases. Additionally, when running benchmarks, the run-all-tests script no longer works out of the box with example parameters. You need to create a test database before running the tests and specify the database name in the --database parameter. If the parameter is not specified, test is taken by default but you need to make sure the test database exists. mariadb , mysql , postgresql components Red Hat Software Collections contains the MySQL 8.0 , MariaDB 10.3 , MariaDB 10.5 , PostgreSQL 10 , PostgreSQL 12 , and PostgreSQL 13 database servers. The core Red Hat Enterprise Linux 7 provides earlier versions of the MariaDB and PostgreSQL databases (client library and daemon). Client libraries are also used in database connectors for dynamic languages, libraries, and so on. The client library packaged in the Red Hat Software Collections database packages in the PostgreSQL component is not supposed to be used, as it is included only for purposes of server utilities and the daemon. Users are instead expected to use the system library and the database connectors provided with the core system.
A protocol, which is used between the client library and the daemon, is stable across database versions, so, for example, using the PostgreSQL 10 client library with the PostgreSQL 12 or 13 daemon works as expected. mariadb , mysql components MariaDB and MySQL do not make use of the /opt/provider/collection/root prefix when creating log files. Note that log files are saved in the /var/opt/provider/collection/log/ directory, not in /opt/provider/collection/root/var/log/ . 1.6. Other Notes rh-ruby* , rh-python* , rh-php* components Using Software Collections on a read-only NFS has several limitations. Ruby gems cannot be installed while the rh-ruby* Software Collection is on a read-only NFS. Consequently, for example, when the user tries to install the ab gem using the gem install ab command, an error message is displayed, for example: The same problem occurs when the user tries to update or install gems from an external source by running the bundle update or bundle install commands. When installing Python packages on a read-only NFS using the Python Package Index (PyPI), running the pip command fails with an error message similar to this: Installing packages from PHP Extension and Application Repository (PEAR) on a read-only NFS using the pear command fails with the error message: This is an expected behavior. httpd component Language modules for Apache are supported only with the Red Hat Software Collections version of Apache httpd and not with the Red Hat Enterprise Linux system versions of httpd . For example, the mod_wsgi module from the rh-python35 Collection can be used only with the httpd24 Collection. all components Since Red Hat Software Collections 2.0, configuration files, variable data, and runtime data of individual Collections are stored in different directories than in earlier versions of Red Hat Software Collections. coreutils , util-linux , screen components Some utilities, for example, su , login , or screen , do not export environment settings in all cases, which can lead to unexpected results. It is therefore recommended to use sudo instead of su and set the env_keep environment variable in the /etc/sudoers file. Alternatively, you can run commands in a reverse order; for example, run su -l postgres -c "scl enable rh-postgresql94 psql" instead of running scl enable rh-postgresql94 bash followed by su -l postgres -c psql . When using tools like screen or login , you can use the following command to preserve the environment settings: source /opt/rh/<collection_name>/enable python component When the user tries to install more than one scldevel package from the python27 and rh-python* Software Collections, a transaction check error message is returned. This is an expected behavior because the user can install only one set of the macro files provided by the packages ( %scl_python , %scl_prefix_python ). php component When the user tries to install more than one scldevel package from the rh-php* Software Collections, a transaction check error message is returned. This is an expected behavior because the user can install only one set of the macro files provided by the packages ( %scl_php , %scl_prefix_php ). ruby component When the user tries to install more than one scldevel package from the rh-ruby* Software Collections, a transaction check error message is returned. This is an expected behavior because the user can install only one set of the macro files provided by the packages ( %scl_ruby , %scl_prefix_ruby ). perl component When the user tries to install more than one scldevel package from the rh-perl* Software Collections, a transaction check error message is returned.
This is an expected behavior because the user can install only one set of the macro files provided by the packages ( %scl_perl , %scl_prefix_perl ). nginx component When the user tries to install more than one scldevel package from the rh-nginx* Software Collections, a transaction check error message is returned. This is an expected behavior because the user can install only one set of the macro files provided by the packages ( %scl_nginx , %scl_prefix_nginx ). python component To mitigate the Web Cache Poisoning CVE-2021-23336 in the Python urllib library, the default separator for the urllib.parse.parse_qsl and urllib.parse.parse_qs functions is being changed from both ampersand ( & ) and semicolon ( ; ) to only an ampersand. This change has been implemented in the python27 and rh-python38 Software Collections with the release of the RHSA-2021:3252 and RHSA-2021:3254 advisories. The change of the default separator is potentially backwards incompatible, therefore Red Hat provides a way to configure the behavior in Python packages where the default separator has been changed. In addition, the affected urllib parsing functions issue a warning if they detect that a customer's application has been affected by the change. For more information, see the Mitigation of Web Cache Poisoning in the Python urllib library (CVE-2021-23336) Knowledgebase article. python component The release of the RHSA-2021:3254 advisory introduces the following change in the rh-python38 Software Collection: To mitigate CVE-2021-29921 , the Python ipaddress module now rejects IPv4 addresses with leading zeros with an AddressValueError: Leading zeros are not permitted error. Customers who rely on the behavior can pre-process their IPv4 address inputs to strip the leading zeros off. For example: To strip the leading zeros off with an explicit loop for readability, use: 1.7. Deprecated Functionality httpd24 component, BZ# 1434053 Previously, in an SSL/TLS configuration requiring name-based SSL virtual host selection, the mod_ssl module rejected requests with a 400 Bad Request error, if the host name provided in the Host: header did not match the host name provided in a Server Name Indication (SNI) header. Such requests are no longer rejected if the configured SSL/TLS security parameters are identical between the selected virtual hosts, in-line with the behavior of upstream mod_ssl .
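As a brief illustration of the mod_proxy_fcgi workaround noted under known issues, the following sketch routes PHP requests to a PHP-FPM pool listening on the correct port 9000. The drop-in file name, address, and FilesMatch pattern are illustrative assumptions, not values taken from this document:

# Hypothetical drop-in, for example /opt/rh/httpd24/root/etc/httpd/conf.d/php-fpm.conf
# Send .php requests to PHP-FPM on 127.0.0.1:9000 instead of the default port 8000
<FilesMatch \.php$>
    SetHandler "proxy:fcgi://127.0.0.1:9000"
</FilesMatch>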
[ "superclass mismatch for class Mark (TypeError)", "gem 'psych' require 'psych' Gem.load_yaml", "[mysqld] character-set-server=utf8", "ERROR: While executing gem ... (Errno::EROFS) Read-only file system @ dir_s_mkdir - /opt/rh/rh-ruby22/root/usr/local/share/gems", "Read-only file system: '/opt/rh/rh-python34/root/usr/lib/python3.4/site-packages/ipython-3.1.0.dist-info'", "Cannot install, php_dir for channel \"pear.php.net\" is not writeable by the current user", "su -l postgres -c \"scl enable rh-postgresql94 psql\"", "scl enable rh-postgresql94 bash su -l postgres -c psql", ">>> def reformat_ip(address): return '.'.join(part.lstrip('0') if part != '0' else part for part in address.split('.')) >>> reformat_ip('0127.0.0.1') '127.0.0.1'", "def reformat_ip(address): parts = [] for part in address.split('.'): if part != \"0\": part = part.lstrip('0') parts.append(part) return '.'.join(parts)" ]
https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/3.7_release_notes/chap-rhscl
Chapter 5. Quota management
Chapter 5. Quota management As a cloud administrator, you can set and manage quotas for a project. Each project is allocated resources, and project users are granted access to consume these resources. This enables multiple projects to use a single cloud without interfering with each other's permissions and resources. A set of resource quotas is preconfigured when a new project is created. The quotas include the amount of VCPUs, instances, RAM, and floating IPs that can be assigned to projects. Quotas can be enforced at both the project and the project-user level. You can set or modify Compute and Block Storage quotas for new and existing projects using the dashboard. For more information, see Managing projects . 5.1. Viewing Compute quotas for a user Run the following command to list the currently set quota values for a user. Procedure Example 5.2. Updating compute quotas for a user Run the following commands to update a particular quota value: Example Note To view a list of options for the quota-update command, run: 5.3. Setting Object Storage quotas for a user Object Storage quotas can be classified under the following categories: Container quotas - Limits the total size (in bytes) or number of objects that can be stored in a single container. Account quotas - Limits the total size (in bytes) that a user has available in the Object Storage service. To set either container quotas or account quotas, the Object Storage proxy server must have the parameters container_quotas or account_quotas (or both) added to the [pipeline:main] section of the proxy-server.conf file: Use the following command to view and update the Object Storage quotas. All users included in a project can view the quotas placed on the project. To update the Object Storage quotas on a project, you must have the role of a ResellerAdmin in the project. To view account quotas: To update quotas: For example, to place a 5 GB quota on an account:
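As a hedged sketch of the full round trip (the command itself also appears in the command listing below, and the swift stat output line here is illustrative), note that 5368709120 is 5 x 1024^3 bytes, that is, 5 GiB:

$ swift post -m quota-bytes:5368709120
$ swift stat | grep -i quota
    Meta Quota-Bytes: 5368709120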
[ "nova quota-show --user [USER-ID] --tenant [TENANT-ID]", "nova quota-show --user 3b9763e4753843529db15085874b1e84 --tenant a4ee0cbb97e749dca6de584c0b1568a6 +-----------------------------+-------+ | Quota | Limit | +-----------------------------+-------+ | instances | 10 | | cores | 20 | | ram | 51200 | | floating_ips | 5 | | fixed_ips | -1 | | metadata_items | 128 | | injected_files | 5 | | injected_file_content_bytes | 10240 | | injected_file_path_bytes | 255 | | key_pairs | 100 | | security_groups | 10 | | security_group_rules | 20 | | server_groups | 10 | | server_group_members | 10 | +-----------------------------+-------+", "nova quota-update --user [USER-ID] --[QUOTA_NAME] [QUOTA_VALUE] [TENANT-ID] nova quota-show --user [USER-ID] --tenant [TENANT-ID]", "nova quota-update --user 3b9763e4753843529db15085874b1e84 --floating-ips 10 a4ee0cbb97e749dca6de584c0b1568a6 nova quota-show --user 3b9763e4753843529db15085874b1e84 --tenant a4ee0cbb97e749dca6de584c0b1568a6 +-----------------------------+-------+ | Quota | Limit | +-----------------------------+-------+ | instances | 10 | | cores | 20 | | ram | 51200 | | floating_ips | 10 | | ... | | +-----------------------------+-------+", "nova help quota-update", "[pipeline:main] pipeline = catch_errors [...] tempauth container-quotas account-quotas slo dlo proxy-logging proxy-server [filter:account_quotas] use = egg:swift#account_quotas [filter:container_quotas] use = egg:swift#container_quotas", "swift stat Account: AUTH_b36ed2d326034beba0a9dd1fb19b70f9 Containers: 0 Objects: 0 Bytes: 0 Meta Quota-Bytes: 214748364800 X-Timestamp: 1351050521.29419 Content-Type: text/plain; charset=utf-8 Accept-Ranges: bytes", "swift post -m quota-bytes:<BYTES>", "swift post -m quota-bytes:5368709120" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/managing_openstack_identity_resources/quota_management
API Guide
API Guide Red Hat Satellite 6.11 A guide to using the Red Hat Satellite Representational State Transfer (REST) API Red Hat Satellite Documentation Team [email protected]
null
https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/api_guide/index
Chapter 4. Model management
Chapter 4. Model management There are various ways you can organize and manage your custom or downloaded models on RHEL AI. 4.1. Uploading your models to a registry After you fine-tune a model, you can upload the model to an external registry. RHEL AI currently supports uploading models to AWS S3 buckets. Prerequisites You installed RHEL AI on your preferred platform. You initialized InstructLab. Log in to your preferred registry. Procedure You can upload your models to a specific registry with the following command $ ilab model upload --model <name-of-model> --destination <registry-location> --dest-type <registry-type> where: <name-of-model> Specify the checkpoint name you want to upload. For example, --model samples_0801 . You can also specify the path to the checkpoint. <registry-location> Specify where you want to upload the model. For example, --destination example-s3-bucket <registry-type> Specify the registry type. Valid values include: s3 . Example ilab model upload command to an s3 bucket $ ilab model upload --model samples_0801 --destination example-s3-bucket --dest-type s3
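When the destination is an S3 bucket, the upload needs valid AWS credentials to be available to the CLI. One common way to provide them, shown here as a sketch using the standard AWS environment variables (the key values and region are placeholders, and this is not a setting taken from this document), is:

$ export AWS_ACCESS_KEY_ID=<access-key>
$ export AWS_SECRET_ACCESS_KEY=<secret-key>
$ export AWS_DEFAULT_REGION=us-east-1
$ ilab model upload --model samples_0801 --destination example-s3-bucket --dest-type s3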
[ "ilab model upload --model <name-of-model> --destination <registry-location> --dest-type <registry-type>", "ilab model upload --model samples_0801 --destination example-s3-bucket --dest-type s3" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_ai/1.4/html/building_and_maintaining_your_rhel_ai_environment/managing_download_models
A.3. Device Mapper Support for the udev Device Manager
A.3. Device Mapper Support for the udev Device Manager The primary role of the udev device manager is to provide a dynamic way of setting up nodes in the /dev directory. The creation of these nodes is directed by the application of udev rules in user space. These rules are processed on udev events sent from the kernel directly as a result of adding, removing, or changing particular devices. This provides a convenient and central mechanism for hotplugging support. Besides creating the actual nodes, the udev device manager is able to create symbolic links which you can name. This gives you the freedom to choose your own customized naming and directory structure in the /dev directory, if needed. Each udev event contains basic information about the device being processed, such as its name, the subsystem it belongs to, the device's type, its major and minor number used, and the type of the event. Given that, and having the possibility of accessing all the information found in the /sys directory that is also accessible within udev rules, you are able to utilize simple filters based on this information and run the rules conditionally based on this information. The udev device manager also provides a centralized way of setting up the nodes' permissions. You can easily add a customized set of rules to define the permissions for any device specified by any bit of information that is available while processing the event. It is also possible to add program hooks in udev rules directly. The udev device manager can call these programs to provide further processing that is needed to handle the event. Also, the program can export environment variables as a result of this processing. Any results given can be used further in the rules as a supplementary source of information. Any software using the udev library is able to receive and process udev events with all the information that is available, so the processing is not bound to the udev daemon only. A.3.1. udev Integration with the Device Mapper The Device Mapper provides direct support for udev integration. This synchronizes the Device Mapper with all udev processing related to Device Mapper devices, including LVM devices. The synchronization is needed since the rule application in the udev daemon is a form of parallel processing with the program that is the source of the device's changes (such as dmsetup and LVM). Without this support, it was a common problem for a user to try to remove a device that was still open and processed by udev rules as a result of a change event; this was particularly common when there was a very short time between changes for that device. Red Hat Enterprise Linux provides officially supported udev rules for Device Mapper devices in general and for LVM as well. Table A.1, "udev Rules for Device-Mapper Devices" summarizes these rules, which are installed in /lib/udev/rules.d . Table A.1. udev Rules for Device-Mapper Devices Filename Description 10-dm.rules Contains general Device Mapper rules and creates the symlinks in /dev/mapper with a /dev/dm-N target where N is a number assigned dynamically to a device by the kernel ( /dev/dm-N is a node) NOTE: /dev/dm-N nodes should never be used in scripts to access the device since the N number is assigned dynamically and changes with the sequence of how devices are activated. Therefore, true names in the /dev/mapper directory should be used. This layout is to support udev requirements of how nodes/symlinks should be created.
11-dm-lvm.rules Contains rules applied for LVM devices and creates the symlinks for the volume group's logical volumes. The symlinks are created in the /dev/vgname directory with a /dev/dm-N target. NOTE: To be consistent with the standard for naming all future rules for Device Mapper subsystems, udev rules should follow the format 11-dm-subsystem_name.rules . Any libdevmapper users providing udev rules as well should follow this standard. 13-dm-disk.rules Contains rules to be applied for all Device Mapper devices in general and creates symlinks in the /dev/disk/by-id and the /dev/disk/by-uuid directories. 95-dm-notify.rules Contains the rule to notify the waiting process using libdevmapper (just like LVM and dmsetup ). The notification is done after all rules are applied, to ensure any udev processing is complete. The notified process is then resumed. 69-dm-lvm-metad.rules Contains a hook to trigger an LVM scan on any newly appeared block device in the system and do any LVM autoactivation if possible. This supports the lvmetad daemon, which is set with use_lvmetad=1 in the lvm.conf file. The lvmetad daemon and autoactivation are not supported in a clustered environment. You can add additional customized permission rules by means of the 12-dm-permissions.rules file. This file is not installed in the /lib/udev/rules.d directory; it is found in the /usr/share/doc/device-mapper-version directory. The 12-dm-permissions.rules file is a template containing hints for how to set the permissions, based on some matching rules given as an example; the file contains examples for some common situations. You can edit this file and place it manually in the /etc/udev/rules.d directory where it will survive updates, so the settings will remain. These rules set all basic variables that could be used by any other rules while processing the events. The following variables are set in 10-dm.rules : DM_NAME : Device Mapper device name DM_UUID : Device Mapper device UUID DM_SUSPENDED : the suspended state of the Device Mapper device DM_UDEV_RULES_VSN : udev rules version (this is primarily for all other rules to check that the previously mentioned variables are set directly by official Device Mapper rules) The following variables are set in 11-dm-lvm.rules : DM_LV_NAME : logical volume name DM_VG_NAME : volume group name DM_LV_LAYER : LVM layer name All these variables can be used in the 12-dm-permissions.rules file to define a permission for specific Device Mapper devices, as documented in the 12-dm-permissions.rules file. A.3.2. Commands and Interfaces that Support udev Table A.2, "dmsetup Commands to Support udev" summarizes the dmsetup commands that support udev integration. Table A.2. dmsetup Commands to Support udev Command Description dmsetup udevcomplete Used to notify that udev has completed processing the rules and unlocks the waiting process (called from within udev rules in 95-dm-notify.rules ). dmsetup udevcomplete_all Used for debugging purposes to manually unlock all waiting processes. dmsetup udevcookies Used for debugging purposes, to show all existing cookies (system-wide semaphores). dmsetup udevcreatecookie Used to create a cookie (semaphore) manually. This is useful to run more processes under one synchronization resource. dmsetup udevreleasecookie Used to wait for all udev processing related to all processes put under that one synchronization cookie. The dmsetup options that support udev integration are as follows.
--udevcookie Needs to be defined for all dmsetup processes that you want to add to a udev transaction. It is used in conjunction with udevcreatecookie and udevreleasecookie : Besides using the --udevcookie option, you can just export the variable into the environment of the process: --noudevrules Disables udev rules. Nodes/symlinks will be created by libdevmapper itself (the old way). This option is for debugging purposes, if udev does not work correctly. --noudevsync Disables udev synchronization. This is also for debugging purposes. For more information on the dmsetup command and its options, see the dmsetup (8) man page. The LVM commands support the following options that support udev integration: --noudevrules : as for the dmsetup command, disables udev rules. --noudevsync : as for the dmsetup command, disables udev synchronization. The lvm.conf file includes the following options that support udev integration: udev_rules : enables/disables udev_rules for all LVM2 commands globally. udev_sync : enables/disables udev synchronization for all LVM commands globally. For more information on the lvm.conf file options, see the inline comments in the lvm.conf file.
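As a sketch of the kind of customized permission rule described above for 12-dm-permissions.rules (the volume group and logical volume names are hypothetical; consult the shipped template file for authoritative examples):

# Hypothetical rule: give the disk group access to the vg_example/lv_secure device
ENV{DM_VG_NAME}=="vg_example", ENV{DM_LV_NAME}=="lv_secure", OWNER:="root", GROUP:="disk", MODE:="660"

Placed in /etc/udev/rules.d/12-dm-permissions.rules, such a rule matches on the DM_VG_NAME and DM_LV_NAME variables set by 11-dm-lvm.rules and assigns the node's owner, group, and mode.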
[ "COOKIE=USD(dmsetup udevcreatecookie) dmsetup command --udevcookie USDCOOKIE . dmsetup command --udevcookie USDCOOKIE . . dmsetup command --udevcookie USDCOOKIE . dmsetup udevreleasecookie --udevcookie USDCOOKIE", "export DM_UDEV_COOKIE=USD(dmsetup udevcreatecookie) dmsetup command dmsetup command dmsetup command" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/logical_volume_manager_administration/udev_device_manager
Chapter 2. BMCEventSubscription [metal3.io/v1alpha1]
Chapter 2. BMCEventSubscription [metal3.io/v1alpha1] Description BMCEventSubscription is the Schema for the fast eventing API Type object 2.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object status object 2.1.1. .spec Description Type object Property Type Description context string Arbitrary user-provided context for the event destination string A webhook URL to send events to hostName string A reference to a BareMetalHost httpHeadersRef object A secret containing HTTP headers which should be passed along to the Destination when making a request 2.1.2. .spec.httpHeadersRef Description A secret containing HTTP headers which should be passed along to the Destination when making a request Type object Property Type Description name string name is unique within a namespace to reference a secret resource. namespace string namespace defines the space within which the secret name must be unique. 2.1.3. .status Description Type object Property Type Description error string subscriptionID string 2.2. API endpoints The following API endpoints are available: /apis/metal3.io/v1alpha1/bmceventsubscriptions GET : list objects of kind BMCEventSubscription /apis/metal3.io/v1alpha1/namespaces/{namespace}/bmceventsubscriptions DELETE : delete collection of BMCEventSubscription GET : list objects of kind BMCEventSubscription POST : create a BMCEventSubscription /apis/metal3.io/v1alpha1/namespaces/{namespace}/bmceventsubscriptions/{name} DELETE : delete a BMCEventSubscription GET : read the specified BMCEventSubscription PATCH : partially update the specified BMCEventSubscription PUT : replace the specified BMCEventSubscription /apis/metal3.io/v1alpha1/namespaces/{namespace}/bmceventsubscriptions/{name}/status GET : read status of the specified BMCEventSubscription PATCH : partially update status of the specified BMCEventSubscription PUT : replace status of the specified BMCEventSubscription 2.2.1. /apis/metal3.io/v1alpha1/bmceventsubscriptions HTTP method GET Description list objects of kind BMCEventSubscription Table 2.1. HTTP responses HTTP code Response body 200 - OK BMCEventSubscriptionList schema 401 - Unauthorized Empty 2.2.2. /apis/metal3.io/v1alpha1/namespaces/{namespace}/bmceventsubscriptions HTTP method DELETE Description delete collection of BMCEventSubscription Table 2.2. HTTP responses HTTP code Response body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind BMCEventSubscription Table 2.3. HTTP responses HTTP code Response body 200 - OK BMCEventSubscriptionList schema 401 - Unauthorized Empty HTTP method POST Description create a BMCEventSubscription Table 2.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted.
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.5. Body parameters Parameter Type Description body BMCEventSubscription schema Table 2.6. HTTP responses HTTP code Response body 200 - OK BMCEventSubscription schema 201 - Created BMCEventSubscription schema 202 - Accepted BMCEventSubscription schema 401 - Unauthorized Empty 2.2.3. /apis/metal3.io/v1alpha1/namespaces/{namespace}/bmceventsubscriptions/{name} Table 2.7. Global path parameters Parameter Type Description name string name of the BMCEventSubscription HTTP method DELETE Description delete a BMCEventSubscription Table 2.8. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 2.9. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified BMCEventSubscription Table 2.10. HTTP responses HTTP code Response body 200 - OK BMCEventSubscription schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified BMCEventSubscription Table 2.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields.
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.12. HTTP responses HTTP code Response body 200 - OK BMCEventSubscription schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified BMCEventSubscription Table 2.13. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.14. Body parameters Parameter Type Description body BMCEventSubscription schema Table 2.15. HTTP responses HTTP code Response body 200 - OK BMCEventSubscription schema 201 - Created BMCEventSubscription schema 401 - Unauthorized Empty 2.2.4. /apis/metal3.io/v1alpha1/namespaces/{namespace}/bmceventsubscriptions/{name}/status Table 2.16. Global path parameters Parameter Type Description name string name of the BMCEventSubscription HTTP method GET Description read status of the specified BMCEventSubscription Table 2.17. HTTP responses HTTP code Response body 200 - OK BMCEventSubscription schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified BMCEventSubscription Table 2.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields.
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.19. HTTP responses HTTP code Response body 200 - OK BMCEventSubscription schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified BMCEventSubscription Table 2.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.21. Body parameters Parameter Type Description body BMCEventSubscription schema Table 2.22. HTTP responses HTTP code Response body 200 - OK BMCEventSubscription schema 201 - Created BMCEventSubscription schema 401 - Unauthorized Empty
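A minimal manifest sketch tying the spec fields above together; the name, namespace, host, and webhook URL are placeholders rather than values from this API reference:

apiVersion: metal3.io/v1alpha1
kind: BMCEventSubscription
metadata:
  name: example-subscription
  namespace: openshift-machine-api
spec:
  hostName: worker-0
  destination: https://events.example.com/webhook
  context: example-context

Applying this manifest, for example with oc apply -f subscription.yaml , exercises the POST endpoint listed in section 2.2.2.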
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/provisioning_apis/bmceventsubscription-metal3-io-v1alpha1
A.3. ss
A.3. ss ss is a command-line utility that prints statistical information about sockets, allowing administrators to assess device performance over time. By default, ss lists open non-listening TCP sockets that have established connections, but a number of useful options are provided to help administrators filter out statistics about specific sockets. One commonly used command is ss -tmpie , which displays all TCP sockets ( t ), internal TCP information ( i ), socket memory usage ( m ), processes using the socket ( p ), and detailed socket information ( e ). Red Hat recommends ss over netstat in Red Hat Enterprise Linux 7. ss is provided by the iproute package. For more information, see the man page:
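For instance, the options can be combined with ss's state and port filter syntax to narrow the report to particular connections; a brief sketch (output omitted, and the port chosen here is illustrative):

$ ss -tmpie                                      # established TCP sockets with memory, process, and internal details
$ ss -t state established '( dport = :https )'   # established TCP connections to remote HTTPS ports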
[ "man ss" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/performance_tuning_guide/sect-red_hat_enterprise_linux-performance_tuning_guide-tool_reference-ss
15.3. Checking a Package's Signature
15.3. Checking a Package's Signature If you wish to verify that a package has not been corrupted or tampered with, examine only the md5sum by typing the following command at a shell prompt (replace <rpm-file> with the file name of the RPM package): The message <rpm-file> : md5 OK is displayed. This brief message means that the file was not corrupted by the download. To see a more verbose message, replace -K with -Kvv in the command. On the other hand, how trustworthy is the developer who created the package? If the package is signed with the developer's GnuPG key , you know that the developer really is who they say they are. An RPM package can be signed using Gnu Privacy Guard (or GnuPG), to help you make certain your downloaded package is trustworthy. GnuPG is a tool for secure communication; it is a complete and free replacement for the encryption technology of PGP, an electronic privacy program. With GnuPG, you can authenticate the validity of documents and encrypt/decrypt data to and from other recipients. GnuPG is capable of decrypting and verifying PGP 5.x files as well. During installation, GnuPG is installed by default. That way you can immediately start using GnuPG to verify any packages that you receive from Red Hat. First, you must import Red Hat's public key. 15.3.1. Importing Keys To verify Red Hat packages, you must import the Red Hat GPG key. To do so, execute the following command at a shell prompt: To display a list of all keys installed for RPM verification, execute the command: For the Red Hat key, the output includes: To display details about a specific key, use rpm -qi followed by the output from the command:
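Putting the steps together, here is a sketch of the full verification flow; the package file name is a placeholder, and the output lines show the typical format for this rpm version:

$ rpm --import /usr/share/rhn/RPM-GPG-KEY
$ rpm -K --nosignature package-1.0-1.noarch.rpm
package-1.0-1.noarch.rpm: md5 OK
$ rpm -K package-1.0-1.noarch.rpm
package-1.0-1.noarch.rpm: (sha1) dsa sha1 md5 gpg OK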
[ "-K --nosignature <rpm-file>", "--import /usr/share/rhn/RPM-GPG-KEY", "-qa gpg-pubkey*", "gpg-pubkey-db42a60e-37ea5438", "-qi gpg-pubkey-db42a60e-37ea5438" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/Package_Management_with_RPM-Checking_a_Packages_Signature
Chapter 65. registered
Chapter 65. registered This chapter describes the commands under the registered command. 65.1. registered limit create Create a registered limit Usage: Table 65.1. Positional Arguments Value Summary <resource-name> The name of the resource to limit Table 65.2. Optional Arguments Value Summary -h, --help Show this help message and exit --description <description> Description of the registered limit --region <region> Region for the registered limit to affect --service <service> Service responsible for the resource to limit (required) --default-limit <default-limit> The default limit for the resources to assume (required) Table 65.3. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 65.4. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 65.5. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 65.6. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 65.2. registered limit delete Delete a registered limit Usage: Table 65.7. Positional Arguments Value Summary <registered-limit-id> Registered limit to delete (id) Table 65.8. Optional Arguments Value Summary -h, --help Show this help message and exit 65.3. registered limit list List registered limits Usage: Table 65.9. Optional Arguments Value Summary -h, --help Show this help message and exit --service <service> Service responsible for the resource to limit --resource-name <resource-name> The name of the resource to limit --region <region> Region for the limit to affect. Table 65.10. Output Formatters Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 65.11. CSV Formatter Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 65.12. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 65.13. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 65.4. registered limit set Update information about a registered limit Usage: Table 65.14. Positional Arguments Value Summary <registered-limit-id> Registered limit to update (id) Table 65.15. Optional Arguments Value Summary -h, --help Show this help message and exit --service <service> Service to be updated responsible for the resource to limit. 
Either --service, --resource-name or --region must be different than existing value otherwise it will be duplicate entry --resource-name <resource-name> Resource to be updated responsible for the resource to limit. Either --service, --resource-name or --region must be different than existing value otherwise it will be duplicate entry --default-limit <default-limit> The default limit for the resources to assume --description <description> Description to update of the registered limit --region <region> Region for the registered limit to affect. either --service, --resource-name or --region must be different than existing value otherwise it will be duplicate entry Table 65.16. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 65.17. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 65.18. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 65.19. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 65.5. registered limit show Display registered limit details Usage: Table 65.20. Positional Arguments Value Summary <registered-limit-id> Registered limit to display (id) Table 65.21. Optional Arguments Value Summary -h, --help Show this help message and exit Table 65.22. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 65.23. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 65.24. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 65.25. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show.
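As an end-to-end sketch using the subcommands above (the service name, region, resource, and limit values are illustrative):

$ openstack registered limit create --service compute --default-limit 20 --region RegionOne cores
$ openstack registered limit list --service compute
$ openstack registered limit set <registered-limit-id> --default-limit 40
$ openstack registered limit show <registered-limit-id>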
[ "openstack registered limit create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--description <description>] [--region <region>] --service <service> --default-limit <default-limit> <resource-name>", "openstack registered limit delete [-h] <registered-limit-id> [<registered-limit-id> ...]", "openstack registered limit list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--service <service>] [--resource-name <resource-name>] [--region <region>]", "openstack registered limit set [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--service <service>] [--resource-name <resource-name>] [--default-limit <default-limit>] [--description <description>] [--region <region>] <registered-limit-id>", "openstack registered limit show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <registered-limit-id>" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/command_line_interface_reference/registered
Chapter 2. The Ceph File System Metadata Server
Chapter 2. The Ceph File System Metadata Server As a storage administrator, you can learn about the different states of the Ceph File System (CephFS) Metadata Server (MDS), along with learning about CephFS MDS ranking mechanics, configuring the MDS standby daemon, and cache size limits. Knowing these concepts can enable you to configure the MDS daemons for a storage environment. 2.1. Prerequisites A running and healthy Red Hat Ceph Storage cluster. Installation of the Ceph Metadata Server daemons ( ceph-mds ). See the Management of MDS service using the Ceph Orchestrator section in the Red Hat Ceph Storage File System Guide for details on configuring MDS daemons. 2.2. Metadata Server daemon states The Metadata Server (MDS) daemons operate in two states: Active - manages metadata for files and directories stored on the Ceph File System. Standby - serves as a backup, and becomes active when an active MDS daemon becomes unresponsive. By default, a Ceph File System uses only one active MDS daemon. However, systems with many clients benefit from multiple active MDS daemons. You can configure the file system to use multiple active MDS daemons so that you can scale metadata performance for larger workloads. The active MDS daemons dynamically share the metadata workload when metadata load patterns change. Note that systems with multiple active MDS daemons still require standby MDS daemons to remain highly available. What Happens When the Active MDS Daemon Fails When the active MDS becomes unresponsive, a Ceph Monitor daemon waits a number of seconds equal to the value specified in the mds_beacon_grace option. If the active MDS is still unresponsive after the specified time period has passed, the Ceph Monitor marks the MDS daemon as laggy . One of the standby daemons becomes active, depending on the configuration. Note To change the value of mds_beacon_grace , add this option to the Ceph configuration file and specify the new value. 2.3. Metadata Server ranks Each Ceph File System (CephFS) has a number of ranks, one by default, which starts at zero. Ranks define how the metadata workload is shared between multiple Metadata Server (MDS) daemons. The number of ranks is the maximum number of MDS daemons that can be active at one time. Each MDS daemon handles a subset of the CephFS metadata that is assigned to that rank. Each MDS daemon initially starts without a rank. The Ceph Monitor assigns a rank to the daemon. The MDS daemon can only hold one rank at a time. Daemons only lose ranks when they are stopped. The max_mds setting controls how many ranks will be created. The actual number of ranks in the CephFS is only increased if a spare daemon is available to accept the new rank. Rank States Ranks can be: Up - A rank that is assigned to the MDS daemon. Failed - A rank that is not associated with any MDS daemon. Damaged - A rank that is damaged; its metadata is corrupted or missing. Damaged ranks are not assigned to any MDS daemons until the operator fixes the problem, and uses the ceph mds repaired command on the damaged rank. 2.4. Metadata Server cache size limits You can limit the size of the Ceph File System (CephFS) Metadata Server (MDS) cache by: A memory limit : Use the mds_cache_memory_limit option. Red Hat recommends a value between 8 GB and 64 GB for mds_cache_memory_limit . Setting more cache can cause issues with recovery. This limit is approximately 66% of the desired maximum memory use of the MDS. Important Red Hat recommends using memory limits instead of inode count limits.
Inode count : Use the mds_cache_size option. By default, limiting the MDS cache by inode count is disabled. In addition, you can specify a cache reservation by using the mds_cache_reservation option for MDS operations. The cache reservation is limited as a percentage of the memory or inode limit and is set to 5% by default. The intent of this parameter is to have the MDS maintain an extra reserve of memory for its cache for new metadata operations to use. As a consequence, the MDS should in general operate below its memory limit because it will recall old state from clients to drop unused metadata in its cache. The mds_cache_reservation option replaces the mds_health_cache_threshold option in all situations, except when MDS nodes send a health alert to the Ceph Monitors indicating the cache is too large. By default, mds_health_cache_threshold is 150% of the maximum cache size. Be aware that the cache limit is not a hard limit. Potential bugs in the CephFS client or MDS or misbehaving applications might cause the MDS to exceed its cache size. The mds_health_cache_threshold option configures the storage cluster health warning message, so that operators can investigate why the MDS cannot shrink its cache. Additional Resources See the Metadata Server daemon configuration reference section in the Red Hat Ceph Storage File System Guide for more information. 2.5. File system affinity You can configure a Ceph File System (CephFS) to prefer a particular Ceph Metadata Server (MDS) over another Ceph MDS. For example, you might have an MDS running on newer, faster hardware that you want to prefer over a standby MDS running on older, perhaps slower hardware. You can specify this preference by setting the mds_join_fs option, which enforces this file system affinity. Ceph Monitors give preference to MDS standby daemons with mds_join_fs equal to the file system name with the failed rank. The standby-replay daemons are selected before choosing another standby daemon. If no standby daemon exists with the mds_join_fs option, then the Ceph Monitors will choose an ordinary standby for replacement or any other available standby as a last resort. The Ceph Monitors will periodically examine Ceph File Systems to see if a standby with a stronger affinity is available to replace the Ceph MDS that has a lower affinity. Additional Resources See the Configuring file system affinity section in the Red Hat Ceph Storage File System Guide for details. 2.6. Management of MDS service using the Ceph Orchestrator As a storage administrator, you can use the Ceph Orchestrator with Cephadm in the backend to deploy the MDS service. By default, a Ceph File System (CephFS) uses only one active MDS daemon. However, systems with many clients benefit from multiple active MDS daemons. This section covers the following administrative tasks: Deploying the MDS service using the command line interface . Deploying the MDS service using the service specification . Removing the MDS service using the Ceph Orchestrator . 2.6.1. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to all the nodes. Hosts are added to the cluster. All manager, monitor, and OSD daemons are deployed. 2.6.2. Deploying the MDS service using the command line interface Using the Ceph Orchestrator, you can deploy the Metadata Server (MDS) service using the placement specification in the command line interface. Ceph File System (CephFS) requires one or more MDS.
Note Ensure you have at least two pools, one for Ceph File System (CephFS) data and one for CephFS metadata. Prerequisites A running Red Hat Ceph Storage cluster. Hosts are added to the cluster. All manager, monitor, and OSD daemons are deployed. Procedure Log into the Cephadm shell: Example There are two ways of deploying MDS daemons using placement specification: Method 1 Use ceph fs volume to create the MDS daemons. This creates the CephFS volume and pools associated with the CephFS, and also starts the MDS service on the hosts. Syntax Note By default, replicated pools are created for this command. Example Method 2 Create the pools, CephFS, and then deploy MDS service using placement specification: Create the pools for CephFS: Syntax Example Typically, the metadata pool can start with a conservative number of Placement Groups (PGs) as it generally has far fewer objects than the data pool. It is possible to increase the number of PGs if needed. The pool sizes range from 64 PGs to 512 PGs. Size the data pool in proportion to the number and sizes of files you expect in the file system. Important For the metadata pool, consider using: A higher replication level because any data loss to this pool can make the whole file system inaccessible. Storage with lower latency such as Solid-State Drive (SSD) disks because this directly affects the observed latency of file system operations on clients. Create the file system for the data pools and metadata pools: Syntax Example Deploy MDS service using the ceph orch apply command: Syntax Example Verification List the service: Example Check the CephFS status: Example List the hosts, daemons, and processes: Syntax Example Additional Resources See the Red Hat Ceph Storage File System Guide for more information about creating the Ceph File System (CephFS). For information on pools, see Pools . 2.6.3. Deploying the MDS service using the service specification Using the Ceph Orchestrator, you can deploy the MDS service using the service specification. Note Ensure you have at least two pools, one for the Ceph File System (CephFS) data and one for the CephFS metadata. Prerequisites A running Red Hat Ceph Storage cluster. Hosts are added to the cluster. All manager, monitor, and OSD daemons are deployed. Procedure Create the mds.yaml file: Example Edit the mds.yaml file to include the following details: Syntax Example Mount the YAML file under a directory in the container: Example Navigate to the directory: Example Log into the Cephadm shell: Example Navigate to the following directory: Example Deploy MDS service using service specification: Syntax Example Once the MDS service is deployed and functional, create the CephFS: Syntax Example Verification List the service: Example List the hosts, daemons, and processes: Syntax Example Additional Resources See the Red Hat Ceph Storage File System Guide for more information about creating the Ceph File System (CephFS). 2.6.4. Removing the MDS service using the Ceph Orchestrator You can remove the service using the ceph orch rm command. Alternatively, you can remove the file system and the associated pools. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to all the nodes. Hosts are added to the cluster. At least one MDS daemon deployed on the hosts.
Procedure There are two ways of removing MDS daemons from the cluster: Method 1 Remove the CephFS volume, associated pools, and the services: Log into the Cephadm shell: Example Set the configuration parameter mon_allow_pool_delete to true : Example Remove the file system: Syntax Example This command will remove the file system, its data, and metadata pools. It also tries to remove the MDS using the enabled ceph-mgr Orchestrator module. Method 2 Use the ceph orch rm command to remove the MDS service from the entire cluster: List the service: Example Remove the service: Syntax Example Verification List the hosts, daemons, and processes: Syntax Example Additional Resources See Deploying the MDS service using the command line interface section in the Red Hat Ceph Storage Operations Guide for more information. See Deploying the MDS service using the service specification section in the Red Hat Ceph Storage Operations Guide for more information. 2.7. Configuring file system affinity Set the Ceph File System (CephFS) affinity for a particular Ceph Metadata Server (MDS). Prerequisites A healthy and running Ceph File System. Root-level access to a Ceph Monitor node. Procedure Check the current state of a Ceph File System: Example Set the file system affinity: Syntax Example After a Ceph MDS failover event, the file system favors the standby daemon for which the affinity is set. Example The mds.b daemon now has join_fscid=27 in the file system dump output. Important If a file system is in a degraded or undersized state, then no failover will occur to enforce the file system affinity. Additional Resources See the File system affinity section in the Red Hat Ceph Storage File System Guide for more details. 2.8. Configuring multiple active Metadata Server daemons Configure multiple active Metadata Server (MDS) daemons to scale metadata performance for large systems. Important Do not convert all standby MDS daemons to active ones. A Ceph File System (CephFS) requires at least one standby MDS daemon to remain highly available. Prerequisites Ceph administration capabilities on the MDS node. Root-level access to a Ceph Monitor node. Procedure Set the max_mds parameter to the desired number of active MDS daemons: Syntax Example This example increases the number of active MDS daemons to two in the CephFS called cephfs . Note Ceph only increases the actual number of ranks in the CephFS if a spare MDS daemon is available to take the new rank. Verify the number of active MDS daemons: Syntax Example Additional Resources See the Metadata Server daemons states section in the Red Hat Ceph Storage File System Guide for more details. See the Decreasing the number of active MDS Daemons section in the Red Hat Ceph Storage File System Guide for more details. See the Managing Ceph users section in the Red Hat Ceph Storage Administration Guide for more details. 2.9. Configuring the number of standby daemons Each Ceph File System (CephFS) can specify the required number of standby daemons to be considered healthy. This number also includes the standby-replay daemon waiting for a rank failure. Prerequisites Root-level access to a Ceph Monitor node. Procedure Set the expected number of standby daemons for a particular CephFS: Syntax Note Setting the NUMBER to zero disables the daemon health check. Example This example sets the expected standby daemon count to two. 2.10. Configuring the standby-replay Metadata Server Configure each Ceph File System (CephFS) by adding a standby-replay Metadata Server (MDS) daemon.
Doing this reduces failover time if the active MDS becomes unavailable. This specific standby-replay daemon follows the active MDS's metadata journal. The standby-replay daemon is only used by the active MDS of the same rank, and is not available to other ranks. Important If using standby-replay, then every active MDS must have a standby-replay daemon. Prerequisites Root-level access to a Ceph Monitor node. Procedure Set the standby-replay for a particular CephFS: Syntax Example In this example, the Boolean value is 1 , which enables the standby-replay daemons to be assigned to the active Ceph MDS daemons. Additional Resources See the Using the ceph mds fail command section in the Red Hat Ceph Storage File System Guide for details. 2.11. Ephemeral pinning policies An ephemeral pin is a static partition of subtrees, and can be set with a policy using extended attributes. A policy can automatically set ephemeral pins on directories. When an ephemeral pin is set on a directory, the directory is automatically assigned to a particular rank, so that directories are uniformly distributed across all Ceph MDS ranks. The assigned rank is determined by a consistent hash of the directory's inode number. Ephemeral pins do not persist when the directory's inode is dropped from the file system cache. When failing over a Ceph Metadata Server (MDS), the ephemeral pin is recorded in its journal so the Ceph MDS standby server does not lose this information. There are two types of policies for using ephemeral pins: Note Installation of the attr package is a prerequisite for the ephemeral pinning policies. Distributed This policy enforces that all of a directory's immediate children must be ephemerally pinned. For example, use a distributed policy to spread a user's home directory across the entire Ceph File System cluster. Enable this policy by setting the ceph.dir.pin.distributed extended attribute. Random This policy enforces a chance that any descendent subdirectory might be ephemerally pinned. You can customize the percent of directories that can be ephemerally pinned. Enable this policy by setting the ceph.dir.pin.random extended attribute and setting a percentage. Red Hat recommends setting this percentage to a value smaller than 1% ( 0.01 ). Having too many subtree partitions can cause slow performance. You can set the maximum percentage by setting the mds_export_ephemeral_random_max Ceph MDS configuration option. The parameters mds_export_ephemeral_distributed and mds_export_ephemeral_random are already enabled. Note For more information, see the Why does Red Hat recommend less than 0.01% chance for any descendent subdirectory to be pinned by setting the ceph.dir.pin.random attribute? Additional Resources See the Manually pinning directory trees to a particular rank section in the Red Hat Ceph Storage File System Guide for details on manually setting pins. 2.12. Manually pinning directory trees to a particular rank Sometimes it might be desirable to override the dynamic balancer with explicit mappings of metadata to a particular Ceph Metadata Server (MDS) rank. You can do this manually to evenly spread the load of an application or to limit the impact of users' metadata requests on the Ceph File System cluster. Manually pinning directories is also known as setting an export pin, which is done with the ceph.dir.pin extended attribute. A directory's export pin is inherited from its closest parent directory, but can be overwritten by setting an export pin on that directory.
Setting an export pin on a directory affects all of its sub-directories, for example: 1 Directories a/ and a/b both start without an export pin set. 2 Directories a/ and a/b are now pinned to rank 1 . 3 Directory a/b is now pinned to rank 0 and directory a/ and the rest of its sub-directories are still pinned to rank 1 . Prerequisites A running Red Hat Ceph Storage cluster. A running Ceph File System. Root-level access to the CephFS client. Installation of the attr package. Procedure Set the export pin on a directory: Syntax Example Additional Resources See the Ephemeral pinning policies section in the Red Hat Ceph Storage File System Guide for details on automatically setting pins. 2.13. Decreasing the number of active Metadata Server daemons How to decrease the number of active Ceph File System (CephFS) Metadata Server (MDS) daemons. Prerequisites The rank that you will remove must be active first, meaning that you must have the same number of MDS daemons as specified by the max_mds parameter. Root-level access to a Ceph Monitor node. Procedure Set the same number of MDS daemons as specified by the max_mds parameter: Syntax Example On a node with administration capabilities, change the max_mds parameter to the desired number of active MDS daemons: Syntax Example Wait for the storage cluster to stabilize to the new max_mds value by watching the Ceph File System status. Verify the number of active MDS daemons: Syntax Example Additional Resources See the Metadata Server daemons states section in the Red Hat Ceph Storage File System Guide . See the Configuring multiple active Metadata Server daemons section in the Red Hat Ceph Storage File System Guide . 2.14. Additional Resources See the Red Hat Ceph Storage Installation Guide for details on installing a Red Hat Ceph Storage cluster.
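As a compact recap of the pinning and scaling controls covered in this chapter, the following sketch strings together the documented commands; the directory paths and rank numbers are illustrative only, and setfattr requires the attr package.

```
# Manually pin a directory tree to rank 2 (an export pin).
setfattr -n ceph.dir.pin -v 2 cephfs/home

# Alternatively, distribute a directory's immediate children across all ranks.
setfattr -n ceph.dir.pin.distributed -v 1 cephfs/home

# Scale the number of active MDS daemons up, verify, then scale back down.
ceph fs set cephfs max_mds 2
ceph fs status cephfs
ceph fs set cephfs max_mds 1
```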
[ "cephadm shell", "ceph fs volume create FILESYSTEM_NAME --placement=\" NUMBER_OF_DAEMONS HOST_NAME_1 HOST_NAME_2 HOST_NAME_3 \"", "ceph fs volume create test --placement=\"2 host01 host02\"", "ceph osd pool create DATA_POOL [ PG_NUM ] ceph osd pool create METADATA_POOL [ PG_NUM ]", "ceph osd pool create cephfs_data 64 ceph osd pool create cephfs_metadata 64", "ceph fs new FILESYSTEM_NAME METADATA_POOL DATA_POOL", "ceph fs new test cephfs_metadata cephfs_data", "ceph orch apply mds FILESYSTEM_NAME --placement=\" NUMBER_OF_DAEMONS HOST_NAME_1 HOST_NAME_2 HOST_NAME_3 \"", "ceph orch apply mds test --placement=\"2 host01 host02\"", "ceph orch ls", "ceph fs ls ceph fs status", "ceph orch ps --daemon_type= DAEMON_NAME", "ceph orch ps --daemon_type=mds", "touch mds.yaml", "service_type: mds service_id: FILESYSTEM_NAME placement: hosts: - HOST_NAME_1 - HOST_NAME_2 - HOST_NAME_3", "service_type: mds service_id: fs_name placement: hosts: - host01 - host02", "cephadm shell --mount mds.yaml:/var/lib/ceph/mds/mds.yaml", "cd /var/lib/ceph/mds/", "cephadm shell", "cd /var/lib/ceph/mds/", "ceph orch apply -i FILE_NAME .yaml", "ceph orch apply -i mds.yaml", "ceph fs new CEPHFS_NAME METADATA_POOL DATA_POOL", "ceph fs new test metadata_pool data_pool", "ceph orch ls", "ceph orch ps --daemon_type= DAEMON_NAME", "ceph orch ps --daemon_type=mds", "cephadm shell", "ceph config set mon mon_allow_pool_delete true", "ceph fs volume rm FILESYSTEM_NAME --yes-i-really-mean-it", "ceph fs volume rm cephfs-new --yes-i-really-mean-it", "ceph orch ls", "ceph orch rm SERVICE_NAME", "ceph orch rm mds.test", "ceph orch ps", "ceph orch ps", "ceph fs dump dumped fsmap epoch 399 Filesystem 'cephfs01' (27) e399 max_mds 1 in 0 up {0=20384} failed damaged stopped [mds.a{0:20384} state up:active seq 239 addr [v2:127.0.0.1:6854/966242805,v1:127.0.0.1:6855/966242805]] Standby daemons: [mds.b{-1:10420} state up:standby seq 2 addr [v2:127.0.0.1:6856/2745199145,v1:127.0.0.1:6857/2745199145]]", "ceph config set STANDBY_DAEMON mds_join_fs FILE_SYSTEM_NAME", "ceph config set mds.b mds_join_fs cephfs01", "ceph fs dump dumped fsmap epoch 405 e405 Filesystem 'cephfs01' (27) max_mds 1 in 0 up {0=10420} failed damaged stopped [mds.b{0:10420} state up:active seq 274 join_fscid=27 addr [v2:127.0.0.1:6856/2745199145,v1:127.0.0.1:6857/2745199145]] 1 Standby daemons: [mds.a{-1:10720} state up:standby seq 2 addr [v2:127.0.0.1:6854/1340357658,v1:127.0.0.1:6855/1340357658]]", "ceph fs set NAME max_mds NUMBER", "ceph fs set cephfs max_mds 2", "ceph fs status NAME", "ceph fs status cephfs cephfs - 0 clients ====== +------+--------+-------+---------------+-------+-------+--------+--------+ | RANK | STATE | MDS | ACTIVITY | DNS | INOS | DIRS | CAPS | +------+--------+-------+---------------+-------+-------+--------+--------+ | 0 | active | node1 | Reqs: 0 /s | 10 | 12 | 12 | 0 | | 1 | active | node2 | Reqs: 0 /s | 10 | 12 | 12 | 0 | +------+--------+-------+---------------+-------+-------+--------+--------+ +-----------------+----------+-------+-------+ | POOL | TYPE | USED | AVAIL | +-----------------+----------+-------+-------+ | cephfs_metadata | metadata | 4638 | 26.7G | | cephfs_data | data | 0 | 26.7G | +-----------------+----------+-------+-------+ +-------------+ | STANDBY MDS | +-------------+ | node3 | +-------------+", "ceph fs set FS_NAME standby_count_wanted NUMBER", "ceph fs set cephfs standby_count_wanted 2", "ceph fs set FS_NAME allow_standby_replay 1", "ceph fs set cephfs allow_standby_replay 1", "setfattr -n ceph.dir.pin.distributed -v 1 
DIRECTORY_PATH", "setfattr -n ceph.dir.pin.random -v PERCENTAGE DIRECTORY_PATH", "mkdir -p a/b 1 setfattr -n ceph.dir.pin -v 1 a/ 2 setfattr -n ceph.dir.pin -v 0 a/b 3", "setfattr -n ceph.dir.pin -v RANK PATH_TO_DIRECTORY", "setfattr -n ceph.dir.pin -v 2 cephfs/home", "ceph fs status NAME", "ceph fs status cephfs cephfs - 0 clients +------+--------+-------+---------------+-------+-------+--------+--------+ | RANK | STATE | MDS | ACTIVITY | DNS | INOS | DIRS | CAPS | +------+--------+-------+---------------+-------+-------+--------+--------+ | 0 | active | node1 | Reqs: 0 /s | 10 | 12 | 12 | 0 | | 1 | active | node2 | Reqs: 0 /s | 10 | 12 | 12 | 0 | +------+--------+-------+---------------+-------+-------+--------+--------+ +-----------------+----------+-------+-------+ | POOL | TYPE | USED | AVAIL | +-----------------+----------+-------+-------+ | cephfs_metadata | metadata | 4638 | 26.7G | | cephfs_data | data | 0 | 26.7G | +-----------------+----------+-------+-------+ +-------------+ | Standby MDS | +-------------+ | node3 | +-------------+", "ceph fs set NAME max_mds NUMBER", "ceph fs set cephfs max_mds 1", "ceph fs status NAME", "ceph fs status cephfs cephfs - 0 clients +------+--------+-------+---------------+-------+-------+--------+--------+ | RANK | STATE | MDS | ACTIVITY | DNS | INOS | DIRS | CAPS | +------+--------+-------+---------------+-------+-------+--------+--------+ | 0 | active | node1 | Reqs: 0 /s | 10 | 12 | 12 | 0 | +------+--------+-------+---------------+-------+-------+--------|--------+ +-----------------+----------+-------+-------+ | POOl | TYPE | USED | AVAIL | +-----------------+----------+-------+-------+ | cephfs_metadata | metadata | 4638 | 26.7G | | cephfs_data | data | 0 | 26.7G | +-----------------+----------+-------+-------+ +-------------+ | Standby MDS | +-------------+ | node3 | | node2 | +-------------+" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/5/html/file_system_guide/the-ceph-file-system-metadata-server
Chapter 8. Common Deployment Scenarios
Chapter 8. Common Deployment Scenarios This section provides a brief overview of common deployment scenarios for Red Hat Satellite. Note that many variations and combinations of the following layouts are possible. 8.1. Single Location An integrated Capsule is a virtual Capsule Server that is created by default in Satellite Server during the installation process. This means Satellite Server can be used to provision directly connected hosts for Satellite deployment in a single geographical location; therefore, only one physical server is needed. The base systems of isolated Capsules can be directly managed by Satellite Server; however, it is not recommended to use this layout to manage other hosts in remote locations. 8.2. Single Location with Segregated Subnets Your infrastructure might require multiple isolated subnets even if Red Hat Satellite is deployed in a single geographic location. This can be achieved, for example, by deploying multiple Capsule Servers with DHCP and DNS services, but the recommended way is to create segregated subnets using a single Capsule. This Capsule is then used to manage hosts and compute resources in those segregated networks to ensure they only have to access the Capsule for provisioning, configuration, errata, and general management. For more information on configuring subnets, see Managing Hosts . 8.3. Multiple Locations It is recommended to create at least one Capsule Server per geographic location. This practice can save bandwidth since hosts obtain content from a local Capsule Server. Synchronization of content from remote repositories is done only by the Capsule, not by each host in a location. In addition, this layout makes the provisioning infrastructure more reliable and easier to configure. See Figure 1.1, "Red Hat Satellite System Architecture" for an illustration of this approach. 8.4. Disconnected Satellite In high security environments where hosts are required to function in a closed network disconnected from the Internet, Red Hat Satellite can provision systems with the latest security updates, errata, packages, and other content. In such a case, Satellite Server does not have direct access to the Internet, but the layout of other infrastructure components is not affected. For information about installing Satellite Server from a disconnected network, see Installing Satellite Server in a Disconnected Network Environment . For information about upgrading a disconnected Satellite, see Upgrading a Disconnected Satellite Server in Upgrading and Updating Red Hat Satellite . There are two options for importing content to a disconnected Satellite Server: Disconnected Satellite with Content ISO - in this setup, you download ISO images with content from the Red Hat Customer Portal and extract them to Satellite Server or a local web server. The content on Satellite Server is then synchronized locally. This allows for complete network isolation of Satellite Server; however, the release frequency of content ISO images is around six weeks and not all product content is included. To see the products in your subscription for which content ISO images are available, log on to the Red Hat Customer Portal at https://access.redhat.com , navigate to Downloads > Red Hat Satellite , and click Content ISOs . For instructions on how to import content ISOs to a disconnected Satellite, see Configuring Satellite to Synchronize Content with a Local CDN Server in the Content Management Guide .
Note that Content ISOs previously hosted at redhat.com for import into Satellite Server have been deprecated and will be removed in a future Satellite version. Disconnected Satellite with Inter-Satellite Synchronization - in this setup, you install a connected Satellite Server and export content from it to populate a disconnected Satellite using a storage device. This allows for exporting both Red Hat provided and custom content at the frequency you choose, but requires deploying an additional server with a separate subscription. For instructions on how to configure Inter-Satellite Synchronization in Satellite, see Synchronizing Content Between Satellite Servers in Managing Content . The above methods for importing content to a disconnected Satellite Server can also be used to speed up the initial population of a connected Satellite. 8.5. Capsule with External Services You can configure a Capsule Server (integrated or standalone) to use external DNS, DHCP, or TFTP services. If you already have a server that provides these services in your environment, you can integrate it with your Satellite deployment. For information about how to configure a Capsule with external services, see Configuring Capsule Server with External Services in Installing Capsule Server .
https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/satellite_overview_concepts_and_deployment_considerations/chap-architecture_guide-deployment_scenarios
Chapter 9. Docker registry
Chapter 9. Docker registry Note Docker authentication is disabled by default. To enable it, see the Enabling and disabling features chapter. This section describes how you can configure a Docker registry to use Red Hat build of Keycloak as its authentication server. For more information on how to set up and configure a Docker registry, see the Docker Registry Configuration Guide . 9.1. Docker registry configuration file installation For users with more advanced Docker registry configurations, it is generally recommended to provide your own registry configuration file. The Red Hat build of Keycloak Docker provider supports this mechanism via the Registry Config File Format Option. Choosing this option will generate output similar to the following: This output can then be copied into any existing registry config file. See the registry config file specification for more information on how the file should be set up, or start with a basic example . Warning Don't forget to configure the rootcertbundle field with the location of the Red Hat build of Keycloak realm's public key. The auth configuration will not work without this argument. 9.2. Docker registry environment variable override installation It is often appropriate to use a simple environment variable override for development or proof-of-concept (POC) Docker registries. While this approach is usually not recommended for production use, it can be helpful when you require a quick-and-dirty way to stand up a registry. Simply use the Variable Override Format Option from the client details, and output similar to the following should appear: Warning Don't forget to configure the REGISTRY_AUTH_TOKEN_ROOTCERTBUNDLE override with the location of the Red Hat build of Keycloak realm's public key. The auth configuration will not work without this argument. 9.3. Docker Compose YAML File Warning This installation method is meant to be an easy way to get a Docker registry authenticating against a Red Hat build of Keycloak server. It is intended for development purposes only and should never be used in a production or production-like environment. The zip file installation mechanism provides a quickstart for developers who want to understand how the Red Hat build of Keycloak server can interact with the Docker registry. To configure it: Procedure From the desired realm, create a client configuration. At this point you will not have a Docker registry - the quickstart will take care of that part. Choose the Docker Compose YAML option from the Action menu and select the Download adapter config option to download the ZIP file. Unzip the archive to the desired location, and open the directory. Start the Docker registry with docker-compose up Note It is recommended that you configure the Docker registry client in a realm other than 'master', since the HTTP Basic auth flow will not present forms. Once the above configuration has taken place, and the Keycloak server and Docker registry are running, Docker authentication should be successful:
[ "auth: token: realm: http://localhost:8080/realms/master/protocol/docker-v2/auth service: docker-test issuer: http://localhost:8080/realms/master", "REGISTRY_AUTH_TOKEN_REALM: http://localhost:8080/realms/master/protocol/docker-v2/auth REGISTRY_AUTH_TOKEN_SERVICE: docker-test REGISTRY_AUTH_TOKEN_ISSUER: http://localhost:8080/realms/master", "docker login localhost:5000 -u USDusername Password: ******* Login Succeeded" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/26.0/html/securing_applications_and_services_guide/docker-registry-
Chapter 7. DaemonSet [apps/v1]
Chapter 7. DaemonSet [apps/v1] Description DaemonSet represents the configuration of a daemon set. Type object 7.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object DaemonSetSpec is the specification of a daemon set. status object DaemonSetStatus represents the current status of a daemon set. 7.1.1. .spec Description DaemonSetSpec is the specification of a daemon set. Type object Required selector template Property Type Description minReadySeconds integer The minimum number of seconds for which a newly created DaemonSet pod should be ready without any of its containers crashing, for it to be considered available. Defaults to 0 (pod will be considered available as soon as it is ready). revisionHistoryLimit integer The number of old history entries to retain to allow rollback. This is a pointer to distinguish between explicit zero and not specified. Defaults to 10. selector LabelSelector A label query over pods that are managed by the daemon set. Must match in order to be controlled. It must match the pod template's labels. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#label-selectors template PodTemplateSpec An object that describes the pod that will be created. The DaemonSet will create exactly one copy of this pod on every node that matches the template's node selector (or on every node if no node selector is specified). The only allowed template.spec.restartPolicy value is "Always". More info: https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller#pod-template updateStrategy object DaemonSetUpdateStrategy is a struct used to control the update strategy for a DaemonSet. 7.1.2. .spec.updateStrategy Description DaemonSetUpdateStrategy is a struct used to control the update strategy for a DaemonSet. Type object Property Type Description rollingUpdate object Spec to control the desired behavior of daemon set rolling update. type string Type of daemon set update. Can be "RollingUpdate" or "OnDelete". Default is RollingUpdate. Possible enum values: - "OnDelete" Replace the old daemons only when they are killed - "RollingUpdate" Replace the old daemons by new ones using a rolling update, i.e., replace them on each node one after the other. 7.1.3. .spec.updateStrategy.rollingUpdate Description Spec to control the desired behavior of daemon set rolling update. Type object Property Type Description maxSurge IntOrString The maximum number of nodes with an existing available DaemonSet pod that can have an updated DaemonSet pod during an update. Value can be an absolute number (ex: 5) or a percentage of desired pods (ex: 10%). This cannot be 0 if MaxUnavailable is 0. Absolute number is calculated from percentage by rounding up to a minimum of 1. Default value is 0.
Example: when this is set to 30%, at most 30% of the total number of nodes that should be running the daemon pod (i.e. status.desiredNumberScheduled) can have a new pod created before the old pod is marked as deleted. The update starts by launching new pods on 30% of nodes. Once an updated pod is available (Ready for at least minReadySeconds) the old DaemonSet pod on that node is marked deleted. If the old pod becomes unavailable for any reason (Ready transitions to false, is evicted, or is drained) an updated pod is immediately created on that node without considering surge limits. Allowing surge implies the possibility that the resources consumed by the daemonset on any given node can double if the readiness check fails, and so resource intensive daemonsets should take into account that they may cause evictions during disruption. maxUnavailable IntOrString The maximum number of DaemonSet pods that can be unavailable during the update. Value can be an absolute number (ex: 5) or a percentage of total number of DaemonSet pods at the start of the update (ex: 10%). Absolute number is calculated from percentage by rounding up. This cannot be 0 if MaxSurge is 0. Default value is 1. Example: when this is set to 30%, at most 30% of the total number of nodes that should be running the daemon pod (i.e. status.desiredNumberScheduled) can have their pods stopped for an update at any given time. The update starts by stopping at most 30% of those DaemonSet pods and then brings up new DaemonSet pods in their place. Once the new pods are available, it then proceeds onto other DaemonSet pods, thus ensuring that at least 70% of the original number of DaemonSet pods are available at all times during the update. 7.1.4. .status Description DaemonSetStatus represents the current status of a daemon set. Type object Required currentNumberScheduled numberMisscheduled desiredNumberScheduled numberReady Property Type Description collisionCount integer Count of hash collisions for the DaemonSet. The DaemonSet controller uses this field as a collision avoidance mechanism when it needs to create the name for the newest ControllerRevision. conditions array Represents the latest available observations of a DaemonSet's current state. conditions[] object DaemonSetCondition describes the state of a DaemonSet at a certain point. currentNumberScheduled integer The number of nodes that are running at least 1 daemon pod and are supposed to run the daemon pod. More info: https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/ desiredNumberScheduled integer The total number of nodes that should be running the daemon pod (including nodes correctly running the daemon pod). More info: https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/ numberAvailable integer The number of nodes that should be running the daemon pod and have one or more of the daemon pod running and available (ready for at least spec.minReadySeconds) numberMisscheduled integer The number of nodes that are running the daemon pod, but are not supposed to run the daemon pod. More info: https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/ numberReady integer numberReady is the number of nodes that should be running the daemon pod and have one or more of the daemon pod running with a Ready Condition.
numberUnavailable integer The number of nodes that should be running the daemon pod and have none of the daemon pod running and available (ready for at least spec.minReadySeconds) observedGeneration integer The most recent generation observed by the daemon set controller. updatedNumberScheduled integer The total number of nodes that are running an updated daemon pod 7.1.5. .status.conditions Description Represents the latest available observations of a DaemonSet's current state. Type array 7.1.6. .status.conditions[] Description DaemonSetCondition describes the state of a DaemonSet at a certain point. Type object Required type status Property Type Description lastTransitionTime Time Last time the condition transitioned from one status to another. message string A human readable message indicating details about the transition. reason string The reason for the condition's last transition. status string Status of the condition, one of True, False, Unknown. type string Type of DaemonSet condition. 7.2. API endpoints The following API endpoints are available: /apis/apps/v1/daemonsets GET : list or watch objects of kind DaemonSet /apis/apps/v1/watch/daemonsets GET : watch individual changes to a list of DaemonSet. deprecated: use the 'watch' parameter with a list operation instead. /apis/apps/v1/namespaces/{namespace}/daemonsets DELETE : delete collection of DaemonSet GET : list or watch objects of kind DaemonSet POST : create a DaemonSet /apis/apps/v1/watch/namespaces/{namespace}/daemonsets GET : watch individual changes to a list of DaemonSet. deprecated: use the 'watch' parameter with a list operation instead. /apis/apps/v1/namespaces/{namespace}/daemonsets/{name} DELETE : delete a DaemonSet GET : read the specified DaemonSet PATCH : partially update the specified DaemonSet PUT : replace the specified DaemonSet /apis/apps/v1/watch/namespaces/{namespace}/daemonsets/{name} GET : watch changes to an object of kind DaemonSet. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. /apis/apps/v1/namespaces/{namespace}/daemonsets/{name}/status GET : read status of the specified DaemonSet PATCH : partially update status of the specified DaemonSet PUT : replace status of the specified DaemonSet 7.2.1. /apis/apps/v1/daemonsets HTTP method GET Description list or watch objects of kind DaemonSet Table 7.1. HTTP responses HTTP code Response body 200 - OK DaemonSetList schema 401 - Unauthorized Empty 7.2.2. /apis/apps/v1/watch/daemonsets HTTP method GET Description watch individual changes to a list of DaemonSet. deprecated: use the 'watch' parameter with a list operation instead. Table 7.2. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty 7.2.3. /apis/apps/v1/namespaces/{namespace}/daemonsets HTTP method DELETE Description delete collection of DaemonSet Table 7.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 7.4. HTTP responses HTTP code Response body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind DaemonSet Table 7.5. HTTP responses HTTP code Response body 200 - OK DaemonSetList schema 401 - Unauthorized Empty HTTP method POST Description create a DaemonSet Table 7.6.
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.7. Body parameters Parameter Type Description body DaemonSet schema Table 7.8. HTTP responses HTTP code Response body 200 - OK DaemonSet schema 201 - Created DaemonSet schema 202 - Accepted DaemonSet schema 401 - Unauthorized Empty 7.2.4. /apis/apps/v1/watch/namespaces/{namespace}/daemonsets HTTP method GET Description watch individual changes to a list of DaemonSet. deprecated: use the 'watch' parameter with a list operation instead. Table 7.9. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty 7.2.5. /apis/apps/v1/namespaces/{namespace}/daemonsets/{name} Table 7.10. Global path parameters Parameter Type Description name string name of the DaemonSet HTTP method DELETE Description delete a DaemonSet Table 7.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 7.12. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified DaemonSet Table 7.13. HTTP responses HTTP code Response body 200 - OK DaemonSet schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified DaemonSet Table 7.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23.
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.15. HTTP responses HTTP code Response body 200 - OK DaemonSet schema 201 - Created DaemonSet schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified DaemonSet Table 7.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.17. Body parameters Parameter Type Description body DaemonSet schema Table 7.18. HTTP responses HTTP code Response body 200 - OK DaemonSet schema 201 - Created DaemonSet schema 401 - Unauthorized Empty 7.2.6. /apis/apps/v1/watch/namespaces/{namespace}/daemonsets/{name} Table 7.19. Global path parameters Parameter Type Description name string name of the DaemonSet HTTP method GET Description watch changes to an object of kind DaemonSet. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 7.20. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty 7.2.7. /apis/apps/v1/namespaces/{namespace}/daemonsets/{name}/status Table 7.21. Global path parameters Parameter Type Description name string name of the DaemonSet HTTP method GET Description read status of the specified DaemonSet Table 7.22. HTTP responses HTTP code Response body 200 - OK DaemonSet schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified DaemonSet Table 7.23. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request.
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.24. HTTP responses HTTP code Response body 200 - OK DaemonSet schema 201 - Created DaemonSet schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified DaemonSet Table 7.25. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.26. Body parameters Parameter Type Description body DaemonSet schema Table 7.27. HTTP responses HTTP code Response body 200 - OK DaemonSet schema 201 - Created DaemonSet schema 401 - Unauthorized Empty
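To make the schema above concrete, here is a minimal, hypothetical DaemonSet manifest that exercises the .spec and .spec.updateStrategy fields described in this chapter; the names and image are placeholders.

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: example-agent                # placeholder name
  namespace: example                 # placeholder namespace
spec:
  selector:
    matchLabels:
      app: example-agent             # must match the pod template's labels
  minReadySeconds: 10                # pod must stay Ready this long to count as available
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1              # default; cannot be 0 if maxSurge is 0
      maxSurge: 0                    # default
  template:
    metadata:
      labels:
        app: example-agent
    spec:
      restartPolicy: Always          # the only allowed value for a DaemonSet template
      containers:
      - name: agent
        image: registry.example.com/agent:latest   # placeholder image
```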
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/workloads_apis/daemonset-apps-v1
Chapter 7. Using image streams with Kubernetes resources
Chapter 7. Using image streams with Kubernetes resources Image streams, being OpenShift Container Platform native resources, work out of the box with all the rest of native resources available in OpenShift Container Platform, such as builds or deployments. It is also possible to make them work with native Kubernetes resources, such as jobs, replication controllers, replica sets or Kubernetes deployments. 7.1. Enabling image streams with Kubernetes resources When using image streams with Kubernetes resources, you can only reference image streams that reside in the same project as the resource. The image stream reference must consist of a single segment value, for example ruby:2.5 , where ruby is the name of an image stream that has a tag named 2.5 and resides in the same project as the resource making the reference. Note This feature cannot be used in the default namespace, nor in any openshift- or kube- namespace. There are two ways to enable image streams with Kubernetes resources: Enabling image stream resolution on a specific resource. This allows only this resource to use the image stream name in the image field. Enabling image stream resolution on an image stream. This allows all resources pointing to this image stream to use it in the image field. Procedure You can use oc set image-lookup to enable image stream resolution on a specific resource or image stream resolution on an image stream. To allow all resources to reference the image stream named mysql , enter the following command: USD oc set image-lookup mysql This sets the Imagestream.spec.lookupPolicy.local field to true. Imagestream with image lookup enabled apiVersion: image.openshift.io/v1 kind: ImageStream metadata: annotations: openshift.io/display-name: mysql name: mysql namespace: myproject spec: lookupPolicy: local: true When enabled, the behavior is enabled for all tags within the image stream. Then you can query the image streams and see if the option is set: USD oc set image-lookup imagestream --list You can enable image lookup on a specific resource. To allow the Kubernetes deployment named mysql to use image streams, run the following command: USD oc set image-lookup deploy/mysql This sets the alpha.image.policy.openshift.io/resolve-names annotation on the deployment. Deployment with image lookup enabled apiVersion: apps/v1 kind: Deployment metadata: name: mysql namespace: myproject spec: replicas: 1 template: metadata: annotations: alpha.image.policy.openshift.io/resolve-names: '*' spec: containers: - image: mysql:latest imagePullPolicy: Always name: mysql You can disable image lookup. To disable image lookup, pass --enabled=false : USD oc set image-lookup deploy/mysql --enabled=false
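Because the same annotation mechanism applies to the other Kubernetes resources mentioned at the start of this chapter, a job can reference an image stream in the same way. The following is a minimal, hypothetical sketch, assuming an image stream named mysql with a latest tag in the myproject project:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: mysql-task                   # placeholder name
  namespace: myproject
spec:
  template:
    metadata:
      annotations:
        # Opt this resource in to image stream name resolution.
        alpha.image.policy.openshift.io/resolve-names: '*'
    spec:
      restartPolicy: Never
      containers:
      - name: task
        # Single-segment reference, resolved against the project's image stream.
        image: mysql:latest
        command: ["mysql", "--version"]   # placeholder command
```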
[ "oc set image-lookup mysql", "apiVersion: image.openshift.io/v1 kind: ImageStream metadata: annotations: openshift.io/display-name: mysql name: mysql namespace: myproject spec: lookupPolicy: local: true", "oc set image-lookup imagestream --list", "oc set image-lookup deploy/mysql", "apiVersion: apps/v1 kind: Deployment metadata: name: mysql namespace: myproject spec: replicas: 1 template: metadata: annotations: alpha.image.policy.openshift.io/resolve-names: '*' spec: containers: - image: mysql:latest imagePullPolicy: Always name: mysql", "oc set image-lookup deploy/mysql --enabled=false" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/images/using-imagestreams-with-kube-resources
Preface
Preface Thank you for your interest in Red Hat Ansible Automation Platform. Ansible Automation Platform is a commercial offering that helps teams manage complex multi-tier deployments by adding control, knowledge, and delegation to Ansible-powered environments. This guide helps you to understand the installation, migration and upgrade requirements for deploying the Ansible Automation Platform Operator on OpenShift Container Platform.
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/deploying_the_red_hat_ansible_automation_platform_operator_on_openshift_container_platform/pr01
7.273. wpa_supplicant
7.273. wpa_supplicant 7.273.1. RHBA-2013:0431 - wpa_supplicant bug fix and enhancement update Updated wpa_supplicant packages that fix multiple bugs and add one enhancement are now available for Red Hat Enterprise Linux 6. The wpa_supplicant packages contain a WPA (Wi-Fi Protected Access) Supplicant utility for Linux, BSD, and Windows with support for WPA and WPA2 (IEEE 802.11i/RSN). The supplicant is an IEEE 802.1X/WPA component that is used in client workstations. It implements key negotiation with a WPA Authenticator and it controls the roaming and IEEE 802.11 authentication and association of the WLAN driver. Bug Fixes BZ#813579 When roaming from one Access Point (AP) to another and the connection was disrupted, NetworkManager did not always automatically reconnect. This update includes a number of backported upstream patches to improve Proactive Key Caching (PKC), also known as Opportunistic Key Caching (OKC). As a result, WPA connections now roam more reliably. BZ#837402 Previously, the supplicant would attempt to roam to slightly stronger access points, increasing the chance of a disconnection. This bug has been fixed and the supplicant now only attempts to roam to a stronger access point when the current signal is significantly degraded. Enhancement BZ#672976 The "wpa_gui" program was removed from "wpa_supplicant" in the 6.0 release as per BZ#553349; however, the man page was still being installed. This upgrade removes the man page. All users of wpa_supplicant are advised to upgrade to these updated packages, which fix these bugs and add this enhancement.
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/wpa_supplicant
2.2. File System Fragmentation
2.2. File System Fragmentation Red Hat Enterprise Linux 6.4 introduces improvements to file fragmentation management in GFS2. With Red Hat Enterprise Linux 6.4, simultaneous writes result in less file fragmentation and therefore better performance for these workloads. While there is no defragmentation tool for GFS2 on Red Hat Enterprise Linux, you can defragment individual files by identifying them with the filefrag tool, copying them to temporary files, and renaming the temporary files to replace the originals. (This procedure can also be done in versions prior to Red Hat Enterprise Linux 6.4 as long as the writing is done sequentially.)
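A minimal sketch of the copy-and-rename workaround described above; the file path is illustrative, and the file should not be in use while it is replaced.

```
# Check how fragmented the file currently is.
filefrag /mnt/gfs2/data.bin

# Copy it sequentially to a temporary file, then rename over the original.
cp /mnt/gfs2/data.bin /mnt/gfs2/data.bin.defrag
mv /mnt/gfs2/data.bin.defrag /mnt/gfs2/data.bin

# Confirm that the extent count has dropped.
filefrag /mnt/gfs2/data.bin
```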
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/global_file_system_2/s1-filefragment-gfs2
Chapter 30. Installing and configuring Smart Router
Chapter 30. Installing and configuring Smart Router Smart Router (KIE Server router) is a lightweight Java component that you can use as an integration layer between multiple KIE Servers, client applications, and other components. Depending on your deployment and execution environment, Smart Router can aggregate multiple independent KIE Server instances as though they are a single server. Smart Router provides the following features: Data aggregation Collects data from all KIE Server instances (one instance from each group) when there is a client application request and aggregates the results in a single response. Routing Functions as a single endpoint that receives calls from client applications to any of your services and routes each call automatically to the KIE Server that runs the specific service. This means that KIE Servers do not need to have the same services deployed. Load balancing Provides efficient load balancing. Load balancing requests for a Smart Router cluster must be managed externally with standard load balancing tools. Authentication Authenticates KIE Server instances by using a system property flag and can enable HTTPS traffic. Environment Management Manages the changing environment, for example, adding or removing server instances. 30.1. Load-balancing KIE Server instances with Smart Router You can use Smart Router to aggregate multiple independent KIE Server instances as though they are a single server. It performs the role of an intelligent load balancer because it can route requests to individual KIE Server instances and aggregate data from different KIE Server instances. Smart Router uses aliases to perform as a proxy. Prerequisites Multiple KIE Server instances are installed. Note You do not need to configure KIE Server as unmanaged for Smart Router. An unmanaged KIE Server instance does not connect to the controller. For example, if you connect an unmanaged KIE Server instance to Smart Router and register Smart Router with the controller, then Business Central contacts the unmanaged KIE Server instance by using Smart Router. Procedure Navigate to the Software Downloads page in the Red Hat Customer Portal (login required), and select the product and version from the drop-down options: PRODUCT: Process Automation Manager Version: 7.13.5 Download Red Hat Process Automation Manager 7.13.5 Add-Ons . Extract the downloaded rhpam-7.13.5-add-ons.zip file to a temporary directory. The rhpam-7.13.5-smart-router.jar file is in the extracted rhpam-7.13.5-add-ons directory. Copy the rhpam-7.13.5-smart-router.jar file to the location where you will run the file. Enter the following command to start Smart Router: java -Dorg.kie.server.router.host=<ROUTER_HOST> -Dorg.kie.server.router.port=<ROUTER_PORT> -Dorg.kie.server.controller=<CONTROLLER_URL> -Dorg.kie.server.controller.user=<CONTROLLER_USER> -Dorg.kie.server.controller.pwd=<CONTROLLER_PWD> -Dorg.kie.server.router.config.watcher.enabled=true -Dorg.kie.server.router.repo=<NFS_STORAGE> -jar rhpam-7.13.5-smart-router.jar The properties in the preceding command have the following default values: org.kie.server.controller is the URL of the server controller, for example: org.kie.server.router.config.watcher.enabled is an optional setting to enable the watcher service system property. Note Instead of specifying configuration properties in the command line, you can use a configuration file.
For information about configuring Smart Router using a file, see Section 30.5, "Configuring Smart Router settings using a configuration file" . On every KIE Server instance that must connect to the Smart Router, set the org.kie.server.router system property to the Smart Router URL. To access Smart Router from the client side, use the Smart Router URL instead of the KIE Server URL, for example: In this example, smartrouter.example.com is the Smart Router URL, and USERNAME and PASSWORD are the login credentials for the Smart Router configuration. To create a new container in an unmanaged KIE Server so that you can fill it with example data, send the following HTTP request: Review the contents of the create-container.xml file: A message about the deployed container is displayed in the Smart Router console. For example: To display a list of containers, enter the following command: The list of containers is displayed: To initiate a process using the Smart Router URL, enter the following command: 30.2. Configuring Smart Router for TLS support You can configure Smart Router (KIE Server Router) for Transport Layer Security (TLS) support to allow HTTPS traffic. In addition, you can disable insecure HTTP connections to Smart Router. Prerequisites KIE Server is installed on each node of a Red Hat JBoss EAP 7.4 cluster. Smart Router is installed and configured. For more information, see Section 30.1, "Load-balancing KIE Server instances with Smart Router" . Procedure To start Smart Router, use one of the following methods: To start Smart Router with TLS support and HTTPS enabled as well as allowing HTTP connections, enter the following command: java -Dorg.kie.server.router.tls.keystore = <KEYSTORE_PATH> -Dorg.kie.server.router.tls.keystore.password = <KEYSTORE_PASSWORD> -Dorg.kie.server.router.tls.keystore.keyalias = <KEYSTORE_ALIAS> -Dorg.kie.server.router.tls.port = <HTTPS_PORT> -jar rhpam-7.13.5-smart-router.jar In this example, replace the following variables: <KEYSTORE_PATH> : The path where the keystore will be stored. <KEYSTORE_PASSWORD> : The keystore password. <KEYSTORE_ALIAS> : The alias name used to store the certificate. <HTTPS_PORT> : The HTTPS port. The default HTTPS port is 9443 . To start Smart Router with TLS support and HTTPS enabled and with HTTP connections disabled, enter the following command: java -Dorg.kie.server.router.tls.keystore = <KEYSTORE_PATH> -Dorg.kie.server.router.tls.keystore.password = <KEYSTORE_PASSWORD> -Dorg.kie.server.router.tls.keystore.keyalias = <KEYSTORE_ALIAS> -Dorg.kie.server.router.tls.port = <HTTPS_PORT> -Dorg.kie.server.router.port=0 -jar rhpam-7.13.5-smart-router.jar When the org.kie.server.router.port system property is set to 0 , then the HTTP listener is not registered. If TLS is configured and the HTTP listener is not registered, then Smart Router listens only on the HTTPS port. Note If TLS is not configured and you disable HTTP by setting org.kie.server.router.port to 0 , then an error occurs and Smart Router stops. 30.3. Configuring Smart Router for endpoint authentication You can configure Smart Router (KIE Server Router) for endpoint authentication. Prerequisites KIE Server is installed on each node of a Red Hat JBoss EAP 7.4 cluster. Smart Router is installed and configured. For more information, see Section 30.1, "Load-balancing KIE Server instances with Smart Router" .
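Before moving on to the endpoint authentication procedure, note that the TLS steps in Section 30.2 assume an existing keystore. If you do not have one, a minimal, hypothetical keytool invocation can create a self-signed keystore for testing; the alias, password, path, and host name are placeholders.

```
keytool -genkeypair \
  -alias smartrouter \
  -keyalg RSA -keysize 2048 \
  -validity 365 \
  -keystore /opt/router/router.jks \
  -storepass changeit \
  -dname "CN=smartrouter.example.com"
```

The keystore path, password, and alias then map directly to the org.kie.server.router.tls.keystore, org.kie.server.router.tls.keystore.password, and org.kie.server.router.tls.keystore.keyalias system properties used above.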
Procedure To start Smart Router with endpoint authentication enabled, configure the management credentials: Add the following properties to your KIE Server configuration: The default username is the KIE Server ID. Add the following property to your Smart Router configuration: The password property values are true or false (default). Note Enabling endpoint authentication means that any operation that lists, adds, or removes containers must be authenticated. Optional: Add users to Smart Router. For example: java -jar rhpam-7.13.5-smart-router.jar -addUser <USERNAME> <PASSWORD> Optional: Remove users from Smart Router. For example: java -jar rhpam-7.13.5-smart-router.jar -removeUser <USERNAME> 30.4. Configuring Smart Router behavior In a clustered environment with multiple KIE Servers, the default behavior is to send requests to each KIE Server in parallel, and the request is sent to one host of each KIE Server using the "round-robin" method. In the following example environment, each KIE Server is deployed with the same KJAR but each KJAR version is different: Table 30.1. Example environment Server Name KJAR version Hosts kie-server1 kjar:1.0 (alias=kjar, group-id=com.example, artifact-id=sample-kjar, version=1.0) 129.0.1.1, 129.0.1.2, 129.0.1.3 kie-server2 kjar:2.0 (alias=kjar, group-id=com.example, artifact-id=sample-kjar, version=2.0) 129.0.2.1, 129.0.2.2, 129.0.2.3 kie-server3 kjar:3.0 (alias=kjar, group-id=com.example, artifact-id=sample-kjar, version=3.0) 129.0.3.1, 129.0.3.2, 129.0.3.3 If you send a request, the request is sent to kie-server1 (129.0.1.2) , kie-server2 (129.0.2.3) , and kie-server3 (129.0.3.1) . If you send a second request, that request is sent to the next host of each KIE Server. For example, kie-server1 (129.0.1.3) , kie-server2 (129.0.2.1) , and kie-server3 (129.0.3.2) . Smart Router has four components that you can modify to change this behavior: ContainerResolver The component responsible for finding the container id to use when interacting with servers. RestrictionPolicy The component responsible for disallowing Smart Router to use specific endpoints. ConfigRepository The component responsible for maintaining the Smart Router configuration. This is mainly related to the routing table. IdentityService The component responsible for allowing you to use your own identity provider. This is for KIE Server instances. Smart Router uses the ServiceLoader utility to implement these components: ContainerResolver META-INF/services/org.kie.server.router.spi.ContainerResolver RestrictionPolicy META-INF/services/org.kie.server.router.spi.RestrictionPolicy ConfigRepository META-INF/services/org.kie.server.router.spi.ConfigRepository IdentityService META-INF/services/org.kie.server.router.identity.IdentityService For example, for the above scenario, you can customize the ContainerResolver to make Smart Router search for the latest version of the KJAR process across all available KIE Servers and to always start with that process. This scenario would mean that each KIE Server hosts a single KJAR and each version will share the same alias. Since Smart Router is an executable jar, to include extensions, you need to modify the command. For example: java -cp LOCATION/router-ext-7.13.5.redhat-00002.jar:rhpam-7.13.5-smart-router.jar org.kie.server.router.KieServerRouter Once the service is started, you will see log output stating the implementation that is used for the components: 30.5.
Configuring Smart Router settings using a configuration file Instead of configuring Smart Router settings in the command line, you can use a configuration file. In this case, settings, including any passwords, are not visible in the command line terminal and server logs. Procedure Create a configuration file. This file can contain any number of lines in the property = value format. The file can include any of the following properties. All of the properties are optional. Table 30.2. Supported properties in the Smart Router configuration file Property name Description Default value org.kie.server.router.id Identifier of the Smart Router, for identification to other components. N/A org.kie.server.router.name Name of the Smart Router, for identification to other components. N/A org.kie.server.router.host The host name for the machine that runs the Smart Router localhost org.kie.server.router.port The port for incoming HTTP connections. If you configure TLS connections, you can set this property to 0 to disable HTTP connections. 9000 org.kie.server.router.url.external The external URL for access to the Smart Router N/A org.kie.server.router.tls.port The port for incoming TLS connections N/A org.kie.server.router.tls.keystore The keystore file for TLS connections N/A org.kie.server.router.tls.keystore.password The password for the keystore for TLS connections N/A org.kie.server.router.tls.keystore.keyalias The alias name that refers to the TLS certificate in the keystore N/A org.kie.server.router.repo The directory for storing the current repository The current working directory org.kie.router.identity.provider The optional custom provider class for authenticating KIE Server instances with Smart Router. This class must implement the org.kie.server.router.identity.IdentityManager interface. For the source code of this interface, see the GitHub repository . N/A org.kie.server.controller The URL for connecting to the controller N/A org.kie.server.controller.user The user name for connecting to the controller kieserver org.kie.server.controller.pwd The password for connecting to the controller kieserver1! org.kie.server.controller.token The authentication token for connecting to the controller N/A org.kie.server.controller.retry.interval The interval, in seconds, for retrying connection to the controller if it failed 10 org.kie.server.controller.retry.limit The maximum number of retries for connection to the controller if it failed infinite org.kie.server.router.config.watcher.enabled If set to true , Smart Router periodically scans the configuration file and applies any changes false org.kie.server.router.config.watcher.interval The interval, in seconds, for rescanning the configuration file 5 org.kie.server.router.management.password If set to true , Smart Router requires a password to authenticate a connection from KIE Server false Start Smart Router using the following command line: java -Dorg.kie.server.router.config.file=<CONFIG_FILE> -jar rhpam-7.13.5-smart-router.jar Replace <CONFIG_FILE> with the name of the configuration file.
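Tying the table above together, a sample configuration file might look like the following; every property shown is documented in Table 30.2, and the values are placeholders for illustration.

```
# smart-router.properties -- sample values only
org.kie.server.router.id = router-1
org.kie.server.router.name = Smart Router 1
org.kie.server.router.host = smartrouter.example.com
org.kie.server.router.port = 9000
org.kie.server.router.tls.port = 9443
org.kie.server.router.tls.keystore = /opt/router/router.jks
org.kie.server.router.tls.keystore.password = changeit
org.kie.server.router.tls.keystore.keyalias = smartrouter
org.kie.server.controller = http://controller.example.com:8080/controller/rest/controller
org.kie.server.controller.user = kieserver
org.kie.server.controller.pwd = kieserver1!
org.kie.server.router.config.watcher.enabled = true
org.kie.server.router.config.watcher.interval = 5
```

You would then start Smart Router with -Dorg.kie.server.router.config.file=smart-router.properties, as shown in the command above.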
[ "java -Dorg.kie.server.router.host=<ROUTER_HOST> -Dorg.kie.server.router.port=<ROUTER_PORT> -Dorg.kie.server.controller=<CONTROLLER_URL> -Dorg.kie.server.controller.user=<CONTROLLER_USER> -Dorg.kie.server.controller.pwd=<CONTROLLER_PWD> -Dorg.kie.server.router.config.watcher.enabled=true -Dorg.kie.server.router.repo=<NFS_STORAGE> -jar rhpam-7.13.5-smart-router.jar", "org.kie.server.router.host=localhost org.kie.server.router.port=9000 org.kie.server.controller= N/A org.kie.server.controller.user=kieserver org.kie.server.controller.pwd=kieserver1! org.kie.server.router.repo= <CURRENT_WORKING_DIR> org.kie.server.router.config.watcher.enabled=false", "org.kie.server.controller=http://<HOST>:<PORT>/controller/rest/controller", "KieServicesConfiguration config = KieServicesFactory.newRestConfiguration(\"http://smartrouter.example.com:9000\", \"USERNAME\", \"PASSWORD\");", "curl -v -X POST -H 'Content-type: application/xml' -H 'X-KIE-Content-Type: xstream' -d @create-container.xml -u USD{KIE_CRED} http://USD{KIE-SERVER-HOST}:USD{KIE-SERVER-PORT}/kie-server/services/rest/server/config/", "<script> <create-container> <container container-id=\"example:timer-test:1.1\"> <release-id> <group-id>example</group-id> <artifact-id>timer-test</artifact-id> <version>1.1</version> </release-id> <config-items> <itemName>RuntimeStrategy</itemName> <itemValue>PER_PROCESS_INSTANCE</itemValue> <itemType></itemType> </config-items> </container> </create-container> </script>", "INFO: Added http://localhost:8180/kie-server/services/rest/server as server location for container example:timer-test:1.1", "curl http://localhost:9000/mgmt/list", "{ \"containerInfo\": [{ \"alias\": \"timer-test\", \"containerId\": \"example:timer-test:1.1\", \"releaseId\": \"example:timer-test:1.1\" }], \"containers\": [ {\"example:timer-test:1.1\": [\"http://localhost:8180/kie-server/services/rest/server\"]}, {\"timer-test\": [\"http://localhost:8180/kie-server/services/rest/server\"]} ], \"servers\": [ {\"kieserver2\": []}, {\"kieserver1\": [\"http://localhost:8180/kie-server/services/rest/server\"]} ] }", "curl -s -X POST -H 'Content-type: application/json' -H 'X-KIE-Content-Type: json' -d '{\"timerDuration\":\"9s\"}' -u kieserver:kieserver1! 
http://localhost:9000/containers/example:timer-test:1.1/processes/timer-test.TimerProcess/instances", "java -Dorg.kie.server.router.tls.keystore = <KEYSTORE_PATH> -Dorg.kie.server.router.tls.keystore.password = <KEYSTORE_PASSWORD> -Dorg.kie.server.router.tls.keystore.keyalias = <KEYSTORE_ALIAS> -Dorg.kie.server.router.tls.port = <HTTPS_PORT> -jar rhpam-7.13.5-smart-router.jar", "java -Dorg.kie.server.router.tls.keystore = <KEYSTORE_PATH> -Dorg.kie.server.router.tls.keystore.password = <KEYSTORE_PASSWORD> -Dorg.kie.server.router.tls.keystore.keyalias = <KEYSTORE_ALIAS> -Dorg.kie.server.router.tls.port = <HTTPS_PORT> -Dorg.kie.server.router.port=0 -jar rhpam-7.13.5-smart-router.jar", "`org.kie.server.router.management.username` `org.kie.server.router.management.password`", "`org.kie.server.router.management.password`", "java -jar rhpam-7.13.5-smart-router.jar -addUser <USERNAME> <PASSWORD>", "java -jar rhpam-7.13.5-smart-router.jar -removeUser <USERNAME>", "java -cp LOCATION/router-ext-7.13.5.redhat-00002.jar:rhpam-7.13.5-smart-router.jar org.kie.server.router.KieServerRouter", "Mar 01, 2017 1:47:10 PM org.kie.server.router.KieServerRouter <init> INFO: KIE Server router repository implementation is InMemoryConfigRepository Mar 01, 2017 1:47:10 PM org.kie.server.router.proxy.KieServerProxyClient <init> INFO: Using 'LatestVersionContainerResolver' container resolver and restriction policy 'ByPassUserNotAllowedRestrictionPolicy' Mar 01, 2017 1:47:10 PM org.xnio.Xnio <clinit> INFO: XNIO version 3.3.6.Final Mar 01, 2017 1:47:10 PM org.xnio.nio.NioXnio <clinit> INFO: XNIO NIO Implementation Version 3.3.6.Final Mar 01, 2017 1:47:11 PM org.kie.server.router.KieServerRouter start INFO: KieServerRouter started on localhost:9000 at Wed Mar 01 13:47:11 CET 2017", "java -Dorg.kie.server.router.config.file=<CONFIG_FILE> -jar rhpam-7.13.5-smart-router.jar" ]
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/installing_and_configuring_red_hat_process_automation_manager/installing-and-configuring-smart-router-con_clustering-runtime-standalone
Chapter 72. Kubernetes Service Account
Chapter 72. Kubernetes Service Account Since Camel 2.17 Only producer is supported The Kubernetes Service Account component is one of the Kubernetes components, and provides a producer to execute Kubernetes Service Account operations. 72.1. Dependencies When using kubernetes-service-accounts with Red Hat build of Apache Camel for Spring Boot, use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-kubernetes-starter</artifactId> </dependency> 72.2. Configuring Options Camel components are configured on two separate levels: component level endpoint level 72.2.1. Configuring Component Options The component level is the highest level, which holds general and common configurations that are inherited by the endpoints. For example, a component may have security settings, credentials for authentication, URLs for network connection, and so forth. Some components only have a few options, and others may have many. Because components typically have preconfigured defaults that are commonly used, you may often only need to configure a few options on a component, or none at all. Configuring components can be done with the Component DSL , in a configuration file (application.properties|yaml), or directly with Java code. 72.2.2. Configuring Endpoint Options Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allow you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as a consumer (from) or as a producer (to), or used for both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints. A good practice when configuring options is to use Property Placeholders , which allow you to avoid hardcoding URLs, port numbers, sensitive information, and other settings. In other words, placeholders allow you to externalize the configuration from your code, giving you more flexibility and reuse. The following two sections list all the options, firstly for the component followed by the endpoint. 72.3. Component Options The Kubernetes Service Account component supports 3 options, which are listed below. Name Description Default Type kubernetesClient (producer) Autowired To use an existing kubernetes client. KubernetesClient lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean
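As a brief illustration of component-level configuration in a Spring Boot configuration file, the following application.properties entries use property names from the auto-configuration list later in this chapter; the values are arbitrary examples, not recommendations:

```
# application.properties (example values)
camel.component.kubernetes-service-accounts.enabled = true
camel.component.kubernetes-service-accounts.autowired-enabled = true
camel.component.kubernetes-service-accounts.lazy-start-producer = false
```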
72.4. Endpoint Options The Kubernetes Service Account endpoint is configured using URI syntax: with the following path and query parameters: 72.4.1. Path Parameters (1 parameter) Name Description Default Type masterUrl (producer) Required Kubernetes Master url. String 72.4.2. Query Parameters (21 parameters) Name Description Default Type apiVersion (producer) The Kubernetes API Version to use. String dnsDomain (producer) The dns domain, used for ServiceCall EIP. String kubernetesClient (producer) Default KubernetesClient to use if provided. KubernetesClient namespace (producer) The namespace. String operation (producer) Producer operation to do on Kubernetes. String portName (producer) The port name, used for ServiceCall EIP. String portProtocol (producer) The port protocol, used for ServiceCall EIP. tcp String lazyStartProducer (producer (advanced)) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean connectionTimeout (advanced) Connection timeout in milliseconds to use when making requests to the Kubernetes API server. Integer caCertData (security) The CA Cert Data. String caCertFile (security) The CA Cert File. String clientCertData (security) The Client Cert Data. String clientCertFile (security) The Client Cert File. String clientKeyAlgo (security) The Key Algorithm used by the client. String clientKeyData (security) The Client Key data. String clientKeyFile (security) The Client Key file. String clientKeyPassphrase (security) The Client Key Passphrase. String oauthToken (security) The Auth Token. String password (security) Password to connect to Kubernetes. String trustCerts (security) Define if the certs we used are trusted anyway or not. Boolean username (security) Username to connect to Kubernetes. String 72.5. Message Headers The Kubernetes Service Account component supports 5 message headers, which are listed below: Name Description Default Type CamelKubernetesOperation (producer) Constant: KUBERNETES_OPERATION The Producer operation. String CamelKubernetesNamespaceName (producer) Constant: KUBERNETES_NAMESPACE_NAME The namespace name. String CamelKubernetesServiceAccountsLabels (producer) Constant: KUBERNETES_SERVICE_ACCOUNTS_LABELS The service account labels. Map CamelKubernetesServiceAccountName (producer) Constant: KUBERNETES_SERVICE_ACCOUNT_NAME The service account name. String CamelKubernetesServiceAccount (producer) Constant: KUBERNETES_SERVICE_ACCOUNT A service account object. ServiceAccount 72.6. Supported producer operations listServiceAccounts listServiceAccountsByLabels getServiceAccount createServiceAccount updateServiceAccount deleteServiceAccount 72.7. Kubernetes ServiceAccounts Produce Examples listServiceAccounts: this operation lists the service accounts on a Kubernetes cluster. from("direct:list"). toF("kubernetes-service-accounts:///?kubernetesClient=#kubernetesClient&operation=listServiceAccounts"). to("mock:result"); This operation returns a list of service accounts from your cluster.
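The following route is an illustrative sketch of the getServiceAccount operation listed above, modeled on the surrounding examples; the namespace and service account names are placeholders, and the headers are the ones documented in the Message Headers section:

```java
from("direct:get").process(new Processor() {
    @Override
    public void process(Exchange exchange) throws Exception {
        // Both header names are documented in the Message Headers section above
        exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_NAMESPACE_NAME, "default");
        exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_SERVICE_ACCOUNT_NAME, "my-account");
    }
}).toF("kubernetes-service-accounts:///?kubernetesClient=#kubernetesClient&operation=getServiceAccount").
to("mock:result");
```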
listServiceAccountsByLabels: this operation lists the service accounts by labels on a Kubernetes cluster. from("direct:listByLabels").process(new Processor() { @Override public void process(Exchange exchange) throws Exception { Map<String, String> labels = new HashMap<>(); labels.put("key1", "value1"); labels.put("key2", "value2"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_SERVICE_ACCOUNTS_LABELS, labels); } }).toF("kubernetes-service-accounts:///?kubernetesClient=#kubernetesClient&operation=listServiceAccountsByLabels"). to("mock:result"); This operation returns a list of service accounts from your cluster, using a label selector (with key1 and key2, with value value1 and value2). 72.8. Spring Boot Auto-Configuration The component supports 102 options, which are listed below. Name Description Default Type camel.cluster.kubernetes.attributes Custom service attributes. Map camel.cluster.kubernetes.cluster-labels Set the labels used to identify the pods composing the cluster. Map camel.cluster.kubernetes.config-map-name Set the name of the ConfigMap used to do optimistic locking (defaults to 'leaders'). String camel.cluster.kubernetes.connection-timeout-millis Connection timeout in milliseconds to use when making requests to the Kubernetes API server. Integer camel.cluster.kubernetes.enabled Sets if the Kubernetes cluster service should be enabled or not, default is false. false Boolean camel.cluster.kubernetes.id Cluster Service ID. String camel.cluster.kubernetes.jitter-factor A jitter factor to apply in order to prevent all pods to call Kubernetes APIs in the same instant. Double camel.cluster.kubernetes.kubernetes-namespace Set the name of the Kubernetes namespace containing the pods and the configmap (autodetected by default). String camel.cluster.kubernetes.lease-duration-millis The default duration of the lease for the current leader. Long camel.cluster.kubernetes.master-url Set the URL of the Kubernetes master (read from Kubernetes client properties by default). String camel.cluster.kubernetes.order Service lookup order/priority. Integer camel.cluster.kubernetes.pod-name Set the name of the current pod (autodetected from container host name by default). String camel.cluster.kubernetes.renew-deadline-millis The deadline after which the leader must stop its services because it may have lost the leadership. Long camel.cluster.kubernetes.retry-period-millis The time between two subsequent attempts to check and acquire the leadership. It is randomized using the jitter factor. Long camel.component.kubernetes-config-maps.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-config-maps.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored.
false Boolean camel.component.kubernetes-config-maps.enabled Whether to enable auto configuration of the kubernetes-config-maps component. This is enabled by default. Boolean camel.component.kubernetes-config-maps.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-config-maps.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-custom-resources.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-custom-resources.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-custom-resources.enabled Whether to enable auto configuration of the kubernetes-custom-resources component. This is enabled by default. Boolean camel.component.kubernetes-custom-resources.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-custom-resources.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-deployments.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. 
true Boolean camel.component.kubernetes-deployments.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-deployments.enabled Whether to enable auto configuration of the kubernetes-deployments component. This is enabled by default. Boolean camel.component.kubernetes-deployments.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-deployments.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-events.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-events.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-events.enabled Whether to enable auto configuration of the kubernetes-events component. This is enabled by default. Boolean camel.component.kubernetes-events.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-events.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-hpa.autowired-enabled Whether autowiring is enabled. 
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-hpa.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-hpa.enabled Whether to enable auto configuration of the kubernetes-hpa component. This is enabled by default. Boolean camel.component.kubernetes-hpa.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-hpa.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-job.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-job.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-job.enabled Whether to enable auto configuration of the kubernetes-job component. This is enabled by default. Boolean camel.component.kubernetes-job.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-job.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-namespaces.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-namespaces.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-namespaces.enabled Whether to enable auto configuration of the kubernetes-namespaces component. This is enabled by default. Boolean camel.component.kubernetes-namespaces.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-namespaces.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-nodes.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-nodes.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-nodes.enabled Whether to enable auto configuration of the kubernetes-nodes component. This is enabled by default. Boolean camel.component.kubernetes-nodes.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-nodes.lazy-start-producer Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-persistent-volumes-claims.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-persistent-volumes-claims.enabled Whether to enable auto configuration of the kubernetes-persistent-volumes-claims component. This is enabled by default. Boolean camel.component.kubernetes-persistent-volumes-claims.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-persistent-volumes-claims.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-persistent-volumes.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-persistent-volumes.enabled Whether to enable auto configuration of the kubernetes-persistent-volumes component. This is enabled by default. Boolean camel.component.kubernetes-persistent-volumes.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-persistent-volumes.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-pods.autowired-enabled Whether autowiring is enabled. 
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-pods.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-pods.enabled Whether to enable auto configuration of the kubernetes-pods component. This is enabled by default. Boolean camel.component.kubernetes-pods.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-pods.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-replication-controllers.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-replication-controllers.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-replication-controllers.enabled Whether to enable auto configuration of the kubernetes-replication-controllers component. This is enabled by default. Boolean camel.component.kubernetes-replication-controllers.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-replication-controllers.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-resources-quota.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-resources-quota.enabled Whether to enable auto configuration of the kubernetes-resources-quota component. This is enabled by default. Boolean camel.component.kubernetes-resources-quota.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-resources-quota.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-secrets.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-secrets.enabled Whether to enable auto configuration of the kubernetes-secrets component. This is enabled by default. Boolean camel.component.kubernetes-secrets.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-secrets.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-service-accounts.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-service-accounts.enabled Whether to enable auto configuration of the kubernetes-service-accounts component. 
This is enabled by default. Boolean camel.component.kubernetes-service-accounts.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-service-accounts.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-services.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-services.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-services.enabled Whether to enable auto configuration of the kubernetes-services component. This is enabled by default. Boolean camel.component.kubernetes-services.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-services.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.openshift-build-configs.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.openshift-build-configs.enabled Whether to enable auto configuration of the openshift-build-configs component. This is enabled by default. Boolean camel.component.openshift-build-configs.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. 
KubernetesClient camel.component.openshift-build-configs.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.openshift-builds.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.openshift-builds.enabled Whether to enable auto configuration of the openshift-builds component. This is enabled by default. Boolean camel.component.openshift-builds.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.openshift-builds.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.openshift-deploymentconfigs.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.openshift-deploymentconfigs.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.openshift-deploymentconfigs.enabled Whether to enable auto configuration of the openshift-deploymentconfigs component. This is enabled by default. Boolean camel.component.openshift-deploymentconfigs.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.openshift-deploymentconfigs.lazy-start-producer Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean
[ "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-kubernetes-starter</artifactId> </dependency>", "kubernetes-service-accounts:masterUrl", "from(\"direct:list\"). toF(\"kubernetes-service-accounts:///?kubernetesClient=#kubernetesClient&operation=listServiceAccounts\"). to(\"mock:result\");", "from(\"direct:listByLabels\").process(new Processor() { @Override public void process(Exchange exchange) throws Exception { Map<String, String> labels = new HashMap<>(); labels.put(\"key1\", \"value1\"); labels.put(\"key2\", \"value2\"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_SERVICE_ACCOUNTS_LABELS, labels); } }); toF(\"kubernetes-service-accounts:///?kubernetesClient=#kubernetesClient&operation=listServiceAccountsByLabels\"). to(\"mock:result\");" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.0/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-kubernetes-service-account-component-starter
5.6. Resource Operations
5.6. Resource Operations To ensure that resources remain healthy, you can add a monitoring operation to a resource's definition. If you do not specify a monitoring operation for a resource, by default the pcs command will create a monitoring operation, with an interval that is determined by the resource agent. If the resource agent does not provide a default monitoring interval, the pcs command will create a monitoring operation with an interval of 60 seconds. Table 5.4, "Properties of an Operation" summarizes the properties of a resource monitoring operation. Table 5.4. Properties of an Operation Field Description id Unique name for the action. The system assigns this when you configure an operation. name The action to perform. Common values: monitor , start , stop interval How frequently (in seconds) to perform the operation. Default value: 0 , meaning never. timeout How long to wait before declaring the action has failed. If you find that your system includes a resource that takes a long time to start or stop or perform a non-recurring monitor action at startup, and requires more time than the system allows before declaring that the start action has failed, you can increase this value from the default of 20 or the value of timeout in "op defaults". on-fail The action to take if this action ever fails. Allowed values: * ignore - Pretend the resource did not fail * block - Do not perform any further operations on the resource * stop - Stop the resource and do not start it elsewhere * restart - Stop the resource and start it again (possibly on a different node) * fence - STONITH the node on which the resource failed * standby - Move all resources away from the node on which the resource failed The default for the stop operation is fence when STONITH is enabled and block otherwise. All other operations default to restart . enabled If false , the operation is treated as if it does not exist. Allowed values: true , false You can configure monitoring operations when you create a resource, using the following command. For example, the following command creates an IPaddr2 resource with a monitoring operation. The new resource is called VirtualIP with an IP address of 192.168.0.99 and a netmask of 24 on eth2 . A monitoring operation will be performed every 30 seconds. Alternately, you can add a monitoring operation to an existing resource with the following command. Use the following command to delete a configured resource operation. Note You must specify the exact operation properties to properly remove an existing operation. To change the values of a monitoring option, you remove the existing operation, then add the new operation. For example, you can create a VirtualIP with the following command. By default, this command creates these operations. To change the stop timeout operation, execute the following commands. To set global default values for monitoring operations, use the following command. For example, the following command sets a global default of a timeout value of 240s for all monitoring operations. To display the currently configured default values for monitoring operations, do not specify any options when you execute the pcs resource op defaults command. For example, the following command displays the default monitoring operation values for a cluster which has been configured with a timeout value of 240s.
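As an illustrative example of the on-fail property described in Table 5.4, the following hypothetical command adds a monitoring operation that restarts the resource when the monitor action fails (the resource name and interval are placeholders):

```
pcs resource op add VirtualIP monitor interval=30s on-fail=restart
```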
[ "pcs resource create resource_id standard:provider:type|type [ resource_options ] [op operation_action operation_options [ operation_type operation_options ]...]", "pcs resource create VirtualIP ocf:heartbeat:IPaddr2 ip=192.168.0.99 cidr_netmask=24 nic=eth2 op monitor interval=30s", "pcs resource op add resource_id operation_action [ operation_properties ]", "pcs resource op remove resource_id operation_name operation_properties", "pcs resource create VirtualIP ocf:heartbeat:IPaddr2 ip=192.168.0.99 cidr_netmask=24 nic=eth2", "Operations: start interval=0s timeout=20s (VirtualIP-start-timeout-20s) stop interval=0s timeout=20s (VirtualIP-stop-timeout-20s) monitor interval=10s timeout=20s (VirtualIP-monitor-interval-10s)", "pcs resource op remove VirtualIP stop interval=0s timeout=20s pcs resource op add VirtualIP stop interval=0s timeout=40s pcs resource show VirtualIP Resource: VirtualIP (class=ocf provider=heartbeat type=IPaddr2) Attributes: ip=192.168.0.99 cidr_netmask=24 nic=eth2 Operations: start interval=0s timeout=20s (VirtualIP-start-timeout-20s) monitor interval=10s timeout=20s (VirtualIP-monitor-interval-10s) stop interval=0s timeout=40s (VirtualIP-name-stop-interval-0s-timeout-40s)", "pcs resource op defaults [ options ]", "pcs resource op defaults timeout=240s", "pcs resource op defaults timeout: 240s" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/configuring_the_red_hat_high_availability_add-on_with_pacemaker/s1-resourceoperate-HAAR
Part III. Post-installation tasks
Part III. Post-installation tasks This part describes how to manage and secure RHEL systems across different platforms. It includes instructions for registering systems and configuring the system purpose. It also provides details on installing a 64k kernel on ARM and modifying subscription services to maintain system configuration and security.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/automatically_installing_rhel/post-installation-tasks
8.2. Create a local cache
8.2. Create a local cache Creating a local cache, using default configuration options as defined by the JCache API specification, is as simple as doing the following: Warning By default, the JCache API specifies that data should be stored as storeByValue , so that object state mutations outside of operations to the cache won't have an impact on the objects stored in the cache. JBoss Data Grid has so far implemented this using serialization/marshalling to make copies to store in the cache, and in that way adheres to the specification. Hence, if using the default JCache configuration with Infinispan, data stored must be marshallable. Alternatively, JCache can be configured to store data by reference. To do that, simply call: Library Mode With Library mode, a CacheManager may be configured by specifying the location of a configuration file via the URL parameter of CachingProvider.getCacheManager . This provides the opportunity to define clustered caches, to which a reference can later be obtained using the CacheManager.getCache method; otherwise, only local caches can be used, created from CacheManager.createCache . Client-Server Mode With Client-Server mode, the specific configuration of a remote CacheManager is performed by passing standard HotRod client properties via the properties parameter of CachingProvider.getCacheManager . The remote servers referenced must be running and able to receive the request. If not specified, the default address and port will be used (127.0.0.1:11222). In addition, contrary to Library mode, the first time a cache reference is obtained, CacheManager.createCache must be used so that the cache may be registered internally. Subsequent queries may be performed via CacheManager.getCache .
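Once created, the cache can be used through the standard javax.cache.Cache API. A minimal usage sketch (the key and value here are arbitrary examples):

```java
// Store and retrieve an entry using the JCache API
cache.put("hello", "world");
String value = cache.get("hello"); // returns "world"
```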
[ "import javax.cache.*; import javax.cache.configuration.*; // Retrieve the system wide cache manager CacheManager cacheManager = Caching.getCachingProvider().getCacheManager(); // Define a named cache with default JCache configuration Cache<String, String> cache = cacheManager.createCache(\"namedCache\", new MutableConfiguration<String, String>());", "Cache<String, String> cache = cacheManager.createCache(\"namedCache\", new MutableConfiguration<String, String>().setStoreByValue(false));" ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/developer_guide/create_a_local_cache
About
About OpenShift Container Platform 4.10 Introduction to OpenShift Container Platform Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/about/index
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/21/html/configuring_red_hat_build_of_openjdk_21_on_rhel/making-open-source-more-inclusive
3.6. Configuring IP Networking with ip Commands
3.6. Configuring IP Networking with ip Commands As a system administrator, you can configure a network interface using the ip command, but changes are not persistent across reboots; when you reboot, you will lose any changes. The commands for the ip utility, sometimes referred to as iproute2 after the upstream package name, are documented in the ip(8) man page. The package name in Red Hat Enterprise Linux 7 is iproute . If necessary, you can check that the ip utility is installed by checking its version number as follows: The ip commands can be used to add and remove addresses and routes to interfaces in parallel with NetworkManager , which will preserve them and recognize them in nmcli , nmtui , control-center , and the D-Bus API. To bring an interface down: Note The ip link set ifname command sets a network interface in IFF_UP state and enables it from the kernel's scope. This is different from the ifup ifname command for initscripts or NetworkManager 's activation state of a device. In fact, NetworkManager always sets an interface up even if it is currently disconnected. Disconnecting the device through the nmcli tool does not remove the IFF_UP flag. In this way, NetworkManager gets notifications about the carrier state. Note that the ip utility replaces the ifconfig utility because the net-tools package (which provides ifconfig ) does not support InfiniBand addresses. For information about available OBJECTs, use the ip help command. For example: ip link help and ip addr help . Note ip commands given on the command line will not persist after a system restart. Where persistence is required, make use of configuration files ( ifcfg files) or add the commands to a script. Examples of using the command line and configuration files for each task are included after nmtui and nmcli examples but before explaining the use of one of the graphical user interfaces to NetworkManager , namely, control-center and nm-connection-editor . The ip utility can be used to assign IP addresses to an interface with the following form: ip addr [ add | del ] address dev ifname Assigning a Static Address Using ip Commands To assign an IP address to an interface: Further examples and command options can be found in the ip-address(8) manual page. Configuring Multiple Addresses Using ip Commands As the ip utility supports assigning multiple addresses to the same interface, it is no longer necessary to use the alias interface method of binding multiple addresses to the same interface. The ip command to assign an address can be repeated multiple times in order to assign multiple addresses. For example: For more details on the commands for the ip utility, see the ip(8) manual page.
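For persistence across reboots, the equivalent static address can instead be placed in an ifcfg file, for example /etc/sysconfig/network-scripts/ifcfg-enp1s0 . This is a minimal sketch; adjust the device name and address to your environment:

```
DEVICE=enp1s0
BOOTPROTO=none
ONBOOT=yes
IPADDR=10.0.0.3
PREFIX=24
```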
[ "~]USD ip -V ip utility, iproute2-ss130716", "ip link set ifname down", "~]# ip address add 10.0.0.3/24 dev enp1s0 You can view the address assignment of a specific device: ~]# ip addr show dev enp1s0 2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether f0:de:f1:7b:6e:5f brd ff:ff:ff:ff:ff:ff inet 10.0.0.3/24 brd 10.0.0.255 scope global global enp1s0 valid_lft 58682sec preferred_lft 58682sec inet6 fe80::f2de:f1ff:fe7b:6e5f/64 scope link valid_lft forever preferred_lft forever", "~]# ip address add 192.168.2.223/24 dev enp1s0 ~]# ip address add 192.168.4.223/24 dev enp1s0 ~]# ip addr 3: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether 52:54:00:fb:77:9e brd ff:ff:ff:ff:ff:ff inet 192.168. 2 .223/24 scope global enp1s0 inet 192.168. 4 .223/24 scope global enp1s0" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/networking_guide/sec-Configuring_IP_Networking_with_ip_Commands
function::bytes_to_string
function::bytes_to_string Name function::bytes_to_string - Human readable string for given bytes Synopsis Arguments bytes Number of bytes to translate. Description Returns a string representing the number of bytes (up to 1024 bytes), the number of kilobytes (when less than 1024K) postfixed by 'K', the number of megabytes (when less than 1024M) postfixed by 'M', or the number of gigabytes postfixed by 'G'. If representing K, M or G, and the amount is less than 100, it includes a '.' plus the remainder. The returned string will be 5 characters wide (padding with whitespace at the front) unless negative or representing more than 9999G bytes.
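For illustration, the function can be exercised from a one-line SystemTap script. Given the description above, 1536 bytes should be rendered as a kilobyte value with a fractional part (the exact padding may vary):

```
stap -e 'probe begin { println(bytes_to_string(1536)) exit() }'
```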
[ "bytes_to_string:string(bytes:long)" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-bytes-to-string