Chapter 8. Upgrading Service Telemetry Framework to version 1.5 | Chapter 8. Upgrading Service Telemetry Framework to version 1.5 To upgrade Service Telemetry Framework (STF) 1.4 to STF 1.5, you must complete the following steps: Replace AMQ Certificate Manager with Certificate Manager. Remove the ClusterServiceVersion and Subscription objects for the Smart Gateway Operator and Service Telemetry Operator in the service-telemetry namespace on your Red Hat OpenShift Container Platform environment. Upgrade Red Hat OpenShift Container Platform from 4.8 to 4.10. Re-enable the Operators that you removed. Update the AMQ Interconnect CA Certificate on Red Hat OpenStack Platform (RHOSP). Prerequisites You have backed up your data. There is an outage during the Red Hat OpenShift Container Platform upgrade. You cannot reconfigure the ServiceTelemetry and SmartGateway objects during the Operator replacement. You have prepared your environment for upgrade from Red Hat OpenShift Container Platform 4.8 to the supported version, 4.10. The Red Hat OpenShift Container Platform cluster is fully-connected. STF does not support disconnected or restricted-network clusters. 8.1. Removing the Service Telemetry Framework 1.4 Operators Remove the Service Telemetry Framework (STF) 1.4 Operators and the AMQ Certificate Manager Operator from Red Hat OpenShift Container Platform 4.8. Procedure Remove the Service Telemetry Operator. Remove the Smart Gateway Operator. Remove the AMQ Certificate Manager Operator. Remove the Grafana Operator. Additional resources For more information about removing Operators from Red Hat OpenShift Container Platform, see Deleting Operators from a cluster. 8.1.1. Removing the Service Telemetry Operator As part of upgrading your Service Telemetry Framework (STF) installation, you must remove the Service Telemetry Operator in the service-telemetry namespace on your Red Hat OpenShift Container Platform environment. Procedure Change to the service-telemetry project: $ oc project service-telemetry Remove the Service Telemetry Operator Subscription: $ oc delete sub --selector=operators.coreos.com/service-telemetry-operator.service-telemetry subscription.operators.coreos.com "service-telemetry-operator" deleted Remove the Service Telemetry Operator ClusterServiceVersion: $ oc delete csv --selector=operators.coreos.com/service-telemetry-operator.service-telemetry clusterserviceversion.operators.coreos.com "service-telemetry-operator.v1.4.1669718959" deleted Verification Verify that the Service Telemetry Operator deployment is not running: $ oc get deploy --selector=operators.coreos.com/service-telemetry-operator.service-telemetry No resources found in service-telemetry namespace. Verify that the Service Telemetry Operator Subscription is absent: $ oc get sub --selector=operators.coreos.com/service-telemetry-operator.service-telemetry No resources found in service-telemetry namespace. Verify that the Service Telemetry Operator ClusterServiceVersion is absent: $ oc get csv --selector=operators.coreos.com/service-telemetry-operator.service-telemetry No resources found in service-telemetry namespace. 8.1.2. Removing the Smart Gateway Operator As part of upgrading your Service Telemetry Framework (STF) installation, you must remove the Smart Gateway Operator in the service-telemetry namespace on your Red Hat OpenShift Container Platform environment.
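Before you remove each Operator, you can review which Subscriptions and ClusterServiceVersions are currently present in the namespace. This is an optional check rather than part of the documented procedure; it is a standard OpenShift query and the output varies by environment: $ oc get sub,csv --namespace service-telemetry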
Procedure Change to the service-telemetry project: $ oc project service-telemetry Remove the Smart Gateway Operator Subscription: $ oc delete sub --selector=operators.coreos.com/smart-gateway-operator.service-telemetry subscription.operators.coreos.com "smart-gateway-operator-stable-1.4-redhat-operators-openshift-marketplace" deleted Remove the Smart Gateway Operator ClusterServiceVersion: $ oc delete csv --selector=operators.coreos.com/smart-gateway-operator.service-telemetry clusterserviceversion.operators.coreos.com "smart-gateway-operator.v4.0.1669718962" deleted Verification Verify that the Smart Gateway Operator deployment is not running: $ oc get deploy --selector=operators.coreos.com/smart-gateway-operator.service-telemetry No resources found in service-telemetry namespace. Verify that the Smart Gateway Operator Subscription is absent: $ oc get sub --selector=operators.coreos.com/smart-gateway-operator.service-telemetry No resources found in service-telemetry namespace. Verify that the Smart Gateway Operator ClusterServiceVersion is absent: $ oc get csv --selector=operators.coreos.com/smart-gateway-operator.service-telemetry No resources found in service-telemetry namespace. 8.1.3. Removing the AMQ Certificate Manager Operator Procedure Remove the AMQ Certificate Manager Operator Subscription: $ oc delete sub --namespace openshift-operators --selector=operators.coreos.com/amq7-cert-manager-operator.openshift-operators subscription.operators.coreos.com "amq7-cert-manager-operator" deleted Remove the AMQ Certificate Manager Operator ClusterServiceVersion: $ oc delete csv --namespace openshift-operators --selector=operators.coreos.com/amq7-cert-manager-operator.openshift-operators clusterserviceversion.operators.coreos.com "amq7-cert-manager.v1.0.11" deleted Verification Verify that the AMQ Certificate Manager Operator deployment is not running: $ oc get deploy --namespace openshift-operators --selector=operators.coreos.com/amq7-cert-manager-operator.openshift-operators No resources found in openshift-operators namespace. Verify that the AMQ Certificate Manager Operator Subscription is absent: $ oc get sub --namespace openshift-operators --selector=operators.coreos.com/amq7-cert-manager-operator.openshift-operators No resources found in openshift-operators namespace. Verify that the AMQ Certificate Manager Operator ClusterServiceVersion is absent: $ oc get csv --namespace openshift-operators --selector=operators.coreos.com/amq7-cert-manager-operator.openshift-operators No resources found in openshift-operators namespace. 8.1.4. Removing the Grafana Operator Procedure Remove the Grafana Operator Subscription: $ oc delete sub --selector=operators.coreos.com/grafana-operator.service-telemetry subscription.operators.coreos.com "grafana-operator" deleted Remove the Grafana Operator ClusterServiceVersion: $ oc delete csv --selector=operators.coreos.com/grafana-operator.service-telemetry clusterserviceversion.operators.coreos.com "grafana-operator.v3.10.3" deleted Verification Verify that the Grafana Operator deployment is not running: $ oc get deploy --selector=operators.coreos.com/grafana-operator.service-telemetry No resources found in service-telemetry namespace. Verify that the Grafana Operator Subscription is absent: $ oc get sub --selector=operators.coreos.com/grafana-operator.service-telemetry No resources found in service-telemetry namespace.
Verify that the Grafana Operator ClusterServiceVersion is absent: $ oc get csv --selector=operators.coreos.com/grafana-operator.service-telemetry No resources found in service-telemetry namespace. 8.2. Upgrading Red Hat OpenShift Container Platform to 4.10 Service Telemetry Framework (STF) 1.5 is only compatible with Red Hat OpenShift Container Platform 4.10. For more information about upgrading your Red Hat OpenShift Container Platform from 4.8 to 4.10, see Updating clusters overview. Prerequisites You removed the STF 1.4 Operators. You removed the AMQ Certificate Manager Operator and Grafana Operator. You must remove the Operators before you upgrade Red Hat OpenShift Container Platform because the Operator APIs are incompatible with 4.10. For more information about preparing your Red Hat OpenShift Container Platform for upgrade from 4.8 to 4.10, see Understanding OpenShift Container Platform updates. Verify the suitability of your Red Hat OpenShift Container Platform upgrade: $ oc adm upgrade You cannot upgrade the cluster if you encounter the following error message: Cluster operator operator-lifecycle-manager should not be upgraded between minor versions: ClusterServiceVersions blocking cluster upgrade: service-telemetry/grafana-operator.v3.10.3 is incompatible with OpenShift minor versions greater than 4.8,openshift-operators/amq7-cert-manager.v1.0.11 is incompatible with OpenShift minor versions greater than 4.8 8.3. Installing the Service Telemetry Framework 1.5 Operators Install the Service Telemetry Framework (STF) 1.5 Operators and the Certificate Manager for OpenShift Operator on your Red Hat OpenShift Container Platform environment. See Section 1.1, "Support for Service Telemetry Framework" for more information about STF support status and life cycle. Note After a successful STF 1.5 installation, you must retrieve and apply the AMQ Interconnect CA certificate to the Red Hat OpenStack Platform environment, or the transport layer and telemetry data become unavailable. For more information about updating the AMQ Interconnect CA certificate, see Section 8.4, "Updating the AMQ Interconnect CA Certificate on Red Hat OpenStack Platform". Prerequisites You have upgraded your Red Hat OpenShift Container Platform environment to 4.10. For more information about upgrading Red Hat OpenShift Container Platform, see Section 8.2, "Upgrading Red Hat OpenShift Container Platform to 4.10". Your Red Hat OpenShift Container Platform environment network is fully-connected. Procedure Change to the service-telemetry project: $ oc project service-telemetry Create a namespace for the cert-manager Operator: $ oc create -f - <<EOF apiVersion: project.openshift.io/v1 kind: Project metadata: name: openshift-cert-manager-operator spec: finalizers: - kubernetes EOF Create an OperatorGroup for the cert-manager Operator: $ oc create -f - <<EOF apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-cert-manager-operator namespace: openshift-cert-manager-operator spec: {} EOF Subscribe to the cert-manager Operator with the redhat-operators CatalogSource: $ oc create -f - <<EOF apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-cert-manager-operator namespace: openshift-cert-manager-operator spec: channel: tech-preview installPlanApproval: Automatic name: openshift-cert-manager-operator source: redhat-operators sourceNamespace: openshift-marketplace EOF Validate your ClusterServiceVersion.
Ensure that the phase of the cert-manager Operator is Succeeded: $ oc get csv --namespace openshift-cert-manager-operator --selector=operators.coreos.com/openshift-cert-manager-operator.openshift-cert-manager-operator NAME DISPLAY VERSION REPLACES PHASE openshift-cert-manager.v1.7.1 cert-manager Operator for Red Hat OpenShift 1.7.1-1 Succeeded Optional: Resubscribe to the Grafana Operator. For more information, see Section 5.1.1, "Configuring Grafana to host the dashboard". Create the Service Telemetry Operator subscription to manage the STF instances: $ oc create -f - <<EOF apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: service-telemetry-operator namespace: service-telemetry spec: channel: stable-1.5 installPlanApproval: Automatic name: service-telemetry-operator source: redhat-operators sourceNamespace: openshift-marketplace EOF Validate the Service Telemetry Operator and the dependent Operators: $ oc get csv --namespace service-telemetry NAME DISPLAY VERSION REPLACES PHASE amq7-interconnect-operator.v1.10.13 Red Hat Integration - AMQ Interconnect 1.10.13 amq7-interconnect-operator.v1.10.4 Succeeded elasticsearch-eck-operator-certified.v2.5.0 Elasticsearch (ECK) Operator 2.5.0 elasticsearch-eck-operator-certified.v2.4.0 Succeeded openshift-cert-manager.v1.7.1 cert-manager Operator for Red Hat OpenShift 1.7.1-1 Succeeded prometheusoperator.0.47.0 Prometheus Operator 0.47.0 prometheusoperator.0.37.0 Succeeded service-telemetry-operator.v1.5.1669950702 Service Telemetry Operator 1.5.1669950702 Succeeded smart-gateway-operator.v5.0.1669950681 Smart Gateway Operator 5.0.1669950681 Succeeded Verification Verify that the Service Telemetry Operator has successfully reconciled. $ oc logs -f --selector=name=service-telemetry-operator [...] ----- Ansible Task Status Event StdOut (infra.watch/v1beta1, Kind=ServiceTelemetry, default/service-telemetry) ----- PLAY RECAP ********************************************************************* localhost : ok=115 changed=0 unreachable=0 failed=0 skipped=21 rescued=0 ignored=0 $ oc get pods NAME READY STATUS RESTARTS AGE alertmanager-default-0 3/3 Running 0 20h default-cloud1-ceil-event-smartgateway-6d57ffbbdc-5mrj8 2/2 Running 1 (3m42s ago) 4m21s default-cloud1-ceil-meter-smartgateway-67684d88c8-62mp7 3/3 Running 1 (3m43s ago) 4m20s default-cloud1-coll-event-smartgateway-66cddd5567-qb6p2 2/2 Running 1 (3m42s ago) 4m19s default-cloud1-coll-meter-smartgateway-76d5ff6db5-z5ch8 3/3 Running 0 4m20s default-cloud1-sens-meter-smartgateway-7b59669fdd-c42zg 3/3 Running 1 (3m43s ago) 4m20s default-interconnect-845c4b647c-wwfcm 1/1 Running 0 4m10s elastic-operator-57b57964c5-6q88r 1/1 Running 8 (17h ago) 20h elasticsearch-es-default-0 1/1 Running 0 21h grafana-deployment-59c54f7d7c-zjnhm 2/2 Running 0 20h interconnect-operator-848889bf8b-bq2zx 1/1 Running 0 20h prometheus-default-0 3/3 Running 1 (20h ago) 20h prometheus-operator-5d7b69fd46-c2xlv 1/1 Running 0 20h service-telemetry-operator-79fb664dfb-nj8jn 1/1 Running 0 5m11s smart-gateway-operator-79557664f8-ql7xr 1/1 Running 0 5m7s 8.4. Updating the AMQ Interconnect CA Certificate on Red Hat OpenStack Platform After you upgrade to Service Telemetry Framework (STF) v1.5, the CA certificate for AMQ Interconnect regenerates. In STF v1.4, the CA certificate for AMQ Interconnect is valid for three months and must be updated periodically in Red Hat OpenStack Platform (RHOSP).
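If you want to inspect the validity period of the current CA certificate before you apply it to RHOSP, you can decode it from the cluster and print its expiry date. This is an illustrative check only; it assumes the CA is stored in a secret named default-interconnect-selfsigned in the service-telemetry project, which may differ in your deployment: $ oc get secret default-interconnect-selfsigned --namespace service-telemetry -o jsonpath='{.data.ca\.crt}' | base64 -d | openssl x509 -noout -enddate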
In STF v1.5, the generated certificates are valid for the duration of the RHOSP life cycle, 70080 hours by default. Prerequisites You have successfully installed STF v1.5 and updated the CA certificate for AMQ Interconnect. Procedure For more information about how to update the CA certificate in your RHOSP environment, see Chapter 6, Renewing the AMQ Interconnect certificate. | [
"oc project service-telemetry",
"oc delete sub --selector=operators.coreos.com/service-telemetry-operator.service-telemetry subscription.operators.coreos.com \"service-telemetry-operator\" deleted",
"oc delete csv --selector=operators.coreos.com/service-telemetry-operator.service-telemetry clusterserviceversion.operators.coreos.com \"service-telemetry-operator.v1.4.1669718959\" deleted",
"oc get deploy --selector=operators.coreos.com/service-telemetry-operator.service-telemetry No resources found in service-telemetry namespace.",
"oc get sub --selector=operators.coreos.com/service-telemetry-operator.service-telemetry No resources found in service-telemetry namespace.",
"oc get csv --selector=operators.coreos.com/service-telemetry-operator.service-telemetry No resources found in service-telemetry namespace.",
"oc project service-telemetry",
"oc delete sub --selector=operators.coreos.com/smart-gateway-operator.service-telemetry subscription.operators.coreos.com \"smart-gateway-operator-stable-1.4-redhat-operators-openshift-marketplace\" deleted",
"oc delete csv --selector=operators.coreos.com/smart-gateway-operator.service-telemetry clusterserviceversion.operators.coreos.com \"smart-gateway-operator.v4.0.1669718962\" deleted",
"oc get deploy --selector=operators.coreos.com/smart-gateway-operator.service-telemetry No resources found in service-telemetry namespace.",
"oc get sub --selector=operators.coreos.com/smart-gateway-operator.service-telemetry No resources found in service-telemetry namespace.",
"oc get csv --selector=operators.coreos.com/smart-gateway-operator.service-telemetry No resources found in service-telemetry namespace.",
"oc delete sub --namespace openshift-operators --selector=operators.coreos.com/amq7-cert-manager-operator.openshift-operators subscription.operators.coreos.com \"amq7-cert-manager-operator\" deleted",
"oc delete csv --namespace openshift-operators --selector=operators.coreos.com/amq7-cert-manager-operator.openshift-operators clusterserviceversion.operators.coreos.com \"amq7-cert-manager.v1.0.11\" deleted",
"oc get deploy --namespace openshift-operators --selector=operators.coreos.com/amq7-cert-manager-operator.openshift-operators No resources found in openshift-operators namespace.",
"oc get sub --namespace openshift-operators --selector=operators.coreos.com/amq7-cert-manager-operator.service-telemetry No resources found in openshift-operators namespace.",
"oc get csv --namespace openshift-operators --selector=operators.coreos.com/amq7-cert-manager-operator.openshift-operators No resources found in openshift-operators namespace.",
"oc delete sub --selector=operators.coreos.com/grafana-operator.service-telemetry subscription.operators.coreos.com \"grafana-operator\" deleted",
"oc delete csv --selector=operators.coreos.com/grafana-operator.service-telemetry clusterserviceversion.operators.coreos.com \"grafana-operator.v3.10.3\" deleted",
"oc get deploy --selector=operators.coreos.com/grafana-operator.service-telemetry No resources found in service-telemetry namespace.",
"oc get sub --selector=operators.coreos.com/grafana-operator.service-telemetry No resources found in service-telemetry namespace.",
"oc get csv --selector=operators.coreos.com/grafana-operator.service-telemetry No resources found in service-telemetry namespace.",
"oc adm upgrade",
"Cluster operator operator-lifecycle-manager should not be upgraded between minor versions: ClusterServiceVersions blocking cluster upgrade: service-telemetry/grafana-operator.v3.10.3 is incompatible with OpenShift minor versions greater than 4.8,openshift-operators/amq7-cert-manager.v1.0.11 is incompatible with OpenShift minor versions greater than 4.8",
"oc project service-telemetry",
"oc create -f - <<EOF apiVersion: project.openshift.io/v1 kind: Project metadata: name: openshift-cert-manager-operator spec: finalizers: - kubernetes EOF",
"oc create -f - <<EOF apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-cert-manager-operator namespace: openshift-cert-manager-operator spec: {} EOF",
"oc create -f - <<EOF apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-cert-manager-operator namespace: openshift-cert-manager-operator spec: channel: tech-preview installPlanApproval: Automatic name: openshift-cert-manager-operator source: redhat-operators sourceNamespace: openshift-marketplace EOF",
"oc get csv --namespace openshift-cert-manager-operator --selector=operators.coreos.com/openshift-cert-manager-operator.openshift-cert-manager-operator NAME DISPLAY VERSION REPLACES PHASE openshift-cert-manager.v1.7.1 cert-manager Operator for Red Hat OpenShift 1.7.1-1 Succeeded",
"oc create -f - <<EOF apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: service-telemetry-operator namespace: service-telemetry spec: channel: stable-1.5 installPlanApproval: Automatic name: service-telemetry-operator source: redhat-operators sourceNamespace: openshift-marketplace EOF",
"oc get csv --namespace service-telemetry NAME DISPLAY VERSION REPLACES PHASE amq7-interconnect-operator.v1.10.13 Red Hat Integration - AMQ Interconnect 1.10.13 amq7-interconnect-operator.v1.10.4 Succeeded elasticsearch-eck-operator-certified.v2.5.0 Elasticsearch (ECK) Operator 2.5.0 elasticsearch-eck-operator-certified.v2.4.0 Succeeded openshift-cert-manager.v1.7.1 cert-manager Operator for Red Hat OpenShift 1.7.1-1 Succeeded prometheusoperator.0.47.0 Prometheus Operator 0.47.0 prometheusoperator.0.37.0 Succeeded service-telemetry-operator.v1.5.1669950702 Service Telemetry Operator 1.5.1669950702 Succeeded smart-gateway-operator.v5.0.1669950681 Smart Gateway Operator 5.0.1669950681 Succeeded",
"oc logs -f --selector=name=service-telemetry-operator [...] ----- Ansible Task Status Event StdOut (infra.watch/v1beta1, Kind=ServiceTelemetry, default/service-telemetry) ----- PLAY RECAP ********************************************************************* localhost : ok=115 changed=0 unreachable=0 failed=0 skipped=21 rescued=0 ignored=0 oc get pods NAME READY STATUS RESTARTS AGE alertmanager-default-0 3/3 Running 0 20h default-cloud1-ceil-event-smartgateway-6d57ffbbdc-5mrj8 2/2 Running 1 (3m42s ago) 4m21s default-cloud1-ceil-meter-smartgateway-67684d88c8-62mp7 3/3 Running 1 (3m43s ago) 4m20s default-cloud1-coll-event-smartgateway-66cddd5567-qb6p2 2/2 Running 1 (3m42s ago) 4m19s default-cloud1-coll-meter-smartgateway-76d5ff6db5-z5ch8 3/3 Running 0 4m20s default-cloud1-sens-meter-smartgateway-7b59669fdd-c42zg 3/3 Running 1 (3m43s ago) 4m20s default-interconnect-845c4b647c-wwfcm 1/1 Running 0 4m10s elastic-operator-57b57964c5-6q88r 1/1 Running 8 (17h ago) 20h elasticsearch-es-default-0 1/1 Running 0 21h grafana-deployment-59c54f7d7c-zjnhm 2/2 Running 0 20h interconnect-operator-848889bf8b-bq2zx 1/1 Running 0 20h prometheus-default-0 3/3 Running 1 (20h ago) 20h prometheus-operator-5d7b69fd46-c2xlv 1/1 Running 0 20h service-telemetry-operator-79fb664dfb-nj8jn 1/1 Running 0 5m11s smart-gateway-operator-79557664f8-ql7xr 1/1 Running 0 5m7s"
]
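After the Red Hat OpenShift Container Platform upgrade described in Section 8.2 completes, and before you reinstall the Operators, you can confirm the cluster version from the command line. This is an optional, generic check rather than part of the documented procedure: $ oc get clusterversion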
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/service_telemetry_framework_1.5/upgrading-service-telemetry-framework-to-version-1-5_assembly |
5.15. Configuring Complex Firewall Rules with the "Rich Language" Syntax | 5.15. Configuring Complex Firewall Rules with the "Rich Language" Syntax With the " rich language " syntax, complex firewall rules can be created in a way that is easier to understand than the direct-interface method. In addition, the settings can be made permanent. The language uses keywords with values and is an abstract representation of iptables rules. Zones can be configured using this language; the current configuration method will still be supported. 5.15.1. Formatting of the Rich Language Commands All the commands in this section need to be run as root . The format of the command to add a rule is as follows: firewall-cmd [ --zone= zone ] --add-rich-rule=' rule ' [ --timeout= timeval ] This will add a rich language rule rule for zone zone . This option can be specified multiple times. If the zone is omitted, the default zone is used. If a timeout is supplied, the rule or rules only stay active for the amount of time specified and will be removed automatically afterwards. The time value can be followed by s (seconds), m (minutes), or h (hours) to specify the unit of time. The default is seconds. To remove a rule: firewall-cmd [ --zone= zone ] --remove-rich-rule=' rule ' This will remove a rich language rule rule for zone zone . This option can be specified multiple times. If the zone is omitted, the default zone is used. To check if a rule is present: firewall-cmd [ --zone= zone ] --query-rich-rule=' rule ' This will return whether a rich language rule rule has been added for the zone zone . The command prints yes with exit status 0 if enabled. It prints no with exit status 1 otherwise. If the zone is omitted, the default zone is used. For information about the rich language representation used in the zone configuration files, see the firewalld.zone (5) man page. 5.15.2. Understanding the Rich Rule Structure The format or structure of the rich rule commands is as follows: Note The structure of the rich rule in the file uses the NOT keyword to invert the sense of the source and destination address commands, but the command line uses the invert =" true " option. A rule is associated with a particular zone. A zone can have several rules. If some rules interact or contradict, the first rule that matches the packet applies. 5.15.3. Understanding the Rich Rule Command Options family If the rule family is provided, either ipv4 or ipv6 , it limits the rule to IPv4 or IPv6 , respectively. If the rule family is not provided, the rule is added for both IPv4 and IPv6 . If source or destination addresses are used in a rule, then the rule family needs to be provided. This is also the case for port forwarding. Source and Destination Addresses source By specifying the source address, the origin of a connection attempt can be limited to the source address. A source address or address range is either an IP address or a network IP address with a mask for IPv4 or IPv6 . For IPv4 , the mask can be a network mask or a plain number. For IPv6 , the mask is a plain number. The use of host names is not supported. It is possible to invert the sense of the source address command by adding the NOT keyword; all but the supplied address matches. A MAC address and also an IP set with type hash:mac can be added for IPv4 and IPv6 if no family is specified for the rule. Other IP sets need to match the family setting of the rule. destination By specifying the destination address, the target can be limited to the destination address. 
The destination address uses the same syntax as the source address for IP address or address ranges. The use of source and destination addresses is optional, and the use of a destination address is not possible with all elements. This depends on the use of destination addresses, for example, in service entries. You can combine destination and action . Elements The element can be only one of the following element types: service , port , protocol , masquerade , icmp-block , forward-port , and source-port . service The service element is one of the firewalld provided services. To get a list of the predefined services, enter the following command: If a service provides a destination address, it will conflict with a destination address in the rule and will result in an error. The services using destination addresses internally are mostly services using multicast. The command takes the following form: service name= service_name port The port element can either be a single port number or a port range, for example, 5060-5062 , followed by the protocol, either as tcp or udp . The command takes the following form: port port= number_or_range protocol= protocol protocol The protocol value can be either a protocol ID number or a protocol name. For allowed protocol entries, see /etc/protocols . The command takes the following form: protocol value= protocol_name_or_ID icmp-block Use this command to block one or more ICMP types. The ICMP type is one of the ICMP types firewalld supports. To get a listing of supported ICMP types, enter the following command: Specifying an action is not allowed here. icmp-block uses the action reject internally. The command takes the following form: icmp-block name= icmptype_name masquerade Turns on IP masquerading in the rule. A source address can be provided to limit masquerading to this area, but not a destination address. Specifying an action is not allowed here. forward-port Forward packets from a local port with protocol specified as tcp or udp to either another port locally, to another machine, or to another port on another machine. The port and to-port can either be a single port number or a port range. The destination address is a simple IP address. Specifying an action is not allowed here. The forward-port command uses the action accept internally. The command takes the following form: source-port Matches the source port of the packet - the port that is used on the origin of a connection attempt. To match a port on the current machine, use the port element. The source-port element can either be a single port number or a port range (for example, 5060-5062) followed by the protocol as tcp or udp . The command takes the following form: Logging log Log new connection attempts to the rule with kernel logging, for example, in syslog. You can define a prefix text that will be added to the log message as a prefix. Log level can be one of emerg , alert , crit , error , warning , notice , info , or debug . The use of log is optional. It is possible to limit logging as follows: log [ prefix= prefix text ] [ level= log level ] limit value= rate/duration The rate is a natural positive number [1, ..], with the duration of s , m , h , d . s means seconds, m means minutes, h means hours, and d days. The maximum limit value is 1/d , which means at maximum one log entry per day. audit Audit provides an alternative way for logging using audit records sent to the service auditd .
The audit type can be one of ACCEPT , REJECT , or DROP , but it is not specified after the command audit as the audit type will be automatically gathered from the rule action. Audit does not have its own parameters, but limit can be added optionally. The use of audit is optional. Action accept|reject|drop|mark An action can be one of accept , reject , drop , or mark . The rule can only contain an element or a source. If the rule contains an element, then new connections matching the element will be handled with the action. If the rule contains a source, then everything from the source address will be handled with the action specified. accept | reject [ type= reject type ] | drop | mark set=" mark [ / mask ]" With accept , all new connection attempts will be granted. With reject , they will be rejected and their source will get a reject message. The reject type can be set to use another value. With drop , all packets will be dropped immediately and no information is sent to the source. With mark all packets will be marked with the given mark and the optional mask . 5.15.4. Using the Rich Rule Log Command Logging can be done with the Netfilter log target and also with the audit target. A new chain is added to all zones with a name in the format " zone _log " , where zone is the zone name. This is processed before the deny chain to have the proper ordering. The rules or parts of them are placed in separate chains, according to the action of the rule, as follows: All logging rules will be placed in the " zone _log " chain, which will be parsed first. All reject and drop rules will be placed in the " zone _deny " chain, which will be parsed after the log chain. All accept rules will be placed in the " zone _allow " chain, which will be parsed after the deny chain. If a rule contains log and also deny or allow actions, the parts of the rule that specify these actions are placed in the matching chains. 5.15.4.1. Using the Rich Rule Log Command Example 1 Enable new IPv4 and IPv6 connections for authentication header protocol AH : 5.15.4.2. Using the Rich Rule Log Command Example 2 Allow new IPv4 and IPv6 connections for protocol FTP and log 1 per minute using audit: 5.15.4.3. Using the Rich Rule Log Command Example 3 Allow new IPv4 connections from address 192.168.0.0/24 for protocol TFTP and log 1 per minute using syslog: 5.15.4.4. Using the Rich Rule Log Command Example 4 New IPv6 connections from 1:2:3:4:6:: for protocol RADIUS are all rejected and logged at a rate of 3 per minute. New IPv6 connections from other sources are accepted: 5.15.4.5. Using the Rich Rule Log Command Example 5 Forward IPv6 packets received from 1:2:3:4:6:: on port 4011 with protocol TCP to 1::2:3:4:7 on port 4012. 5.15.4.6. Using the Rich Rule Log Command Example 6 Whitelist a source address to allow all connections from this source. See the firewalld.richlanguage(5) man page for more examples. | [
"rule [ family=\" rule family \" ] [ source [ NOT ] [ address=\" address \" ] [ mac=\" mac-address \" ] [ ipset=\" ipset \" ] ] [ destination [ NOT ] address=\" address \" ] [ element ] [ log [ prefix=\" prefix text \" ] [ level=\" log level \" ] [ limit value=\"rate/duration\" ] ] [ audit ] [ action ]",
"~]USD firewall-cmd --get-services",
"~]USD firewall-cmd --get-icmptypes",
"forward-port port= number_or_range protocol= protocol / to-port= number_or_range to-addr= address",
"source-port port= number_or_range protocol= protocol",
"zone _log zone _deny zone _allow",
"rule protocol value=\"ah\" accept",
"rule service name=\"ftp\" log limit value=\"1/m\" audit accept",
"rule family=\"ipv4\" source address=\"192.168.0.0/24\" service name=\"tftp\" log prefix=\"tftp\" level=\"info\" limit value=\"1/m\" accept",
"rule family=\"ipv6\" source address=\"1:2:3:4:6::\" service name=\"radius\" log prefix=\"dns\" level=\"info\" limit value=\"3/m\" reject rule family=\"ipv6\" service name=\"radius\" accept",
"rule family=\"ipv6\" source address=\"1:2:3:4:6::\" forward-port to-addr=\"1::2:3:4:7\" to-port=\"4012\" protocol=\"tcp\" port=\"4011\"",
"rule family=\"ipv4\" source address=\"192.168.2.2\" accept"
]
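To show how the rule strings above combine with the firewall-cmd options described in this section, the following is a sketch of applying and then querying the whitelist rule from Example 6 in a zone; the zone name work is only an example and should be replaced with your own zone: ~]# firewall-cmd --zone=work --add-rich-rule='rule family="ipv4" source address="192.168.2.2" accept' ~]# firewall-cmd --zone=work --query-rich-rule='rule family="ipv4" source address="192.168.2.2" accept'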
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/security_guide/Configuring_Complex_Firewall_Rules_with_the_Rich-Language_Syntax |
10.12. Troubleshooting Geo-replication | 10.12. Troubleshooting Geo-replication This section describes the most common troubleshooting scenarios related to geo-replication. 10.12.1. Tuning Geo-replication performance with Change Log There are options for the change log that can be configured to give better performance in a geo-replication environment. The rollover-time option sets the rate at which the change log is consumed. The default rollover time is 15 seconds, but it can be configured to a faster rate. A recommended rollover-time for geo-replication is 10-15 seconds. To change the rollover-time option, use the following command: The fsync-interval option determines the frequency at which updates to the change log are written to disk. The default interval is 5. A value of 0 writes updates to the change log synchronously as they occur, which may negatively impact performance in a geo-replication environment. Configuring fsync-interval to a non-zero value writes updates to disk asynchronously at the specified interval. To change the fsync-interval option, use the following command: 10.12.2. Triggering Explicit Sync on Entries Geo-replication provides an option to explicitly trigger the sync operation of files and directories. A virtual extended attribute glusterfs.geo-rep.trigger-sync is provided to accomplish this. Explicit triggering of sync is supported only for directories and regular files. 10.12.3. Synchronization Is Not Complete Situation The geo-replication status is displayed as Stable , but the data has not been completely synchronized. Solution A full synchronization of the data can be performed by erasing the index and restarting geo-replication. After restarting geo-replication, it will begin a synchronization of the data using checksums. This may be a long and resource intensive process on large data sets. If the issue persists, contact Red Hat Support. For more information about erasing the index, see Section 11.1, "Configuring Volume Options" . 10.12.4. Issues with File Synchronization Situation The geo-replication status is displayed as Stable , but only directories and symlinks are synchronized. Error messages similar to the following are in the logs: Solution Geo-replication requires rsync v3.0.0 or higher on the host and the remote machines. Verify that you have installed the required version of rsync . 10.12.5. Geo-replication Status is Often Faulty Situation The geo-replication status is often displayed as Faulty , with a backtrace similar to the following: Solution This usually indicates that RPC communication between the master gsyncd module and slave gsyncd module is broken. Make sure that the following prerequisites are met: Key-based SSH authentication is set up properly between the host and remote machines. FUSE is installed on the machines. The geo-replication module mounts Red Hat Gluster Storage volumes using FUSE to sync data. 10.12.6. Intermediate Master is in a Faulty State Situation In a cascading environment, the intermediate master is in a faulty state, and messages similar to the following are in the log: Solution In a cascading configuration, an intermediate master is loyal to its original primary master. The above log message indicates that the geo-replication module has detected that the primary master has changed. If this change was deliberate, delete the volume-id configuration option in the session that was initiated from the intermediate master. 10.12.7.
Remote gsyncd Not Found Situation The master is in a faulty state, and messages similar to the following are in the log: Solution The steps to configure an SSH connection for geo-replication have been updated. Use the steps as described in Section 10.3.4.1, "Setting Up your Environment for Geo-replication Session". | [
"gluster volume set VOLNAME rollover-time 15",
"gluster volume set VOLNAME fsync-interval 5",
"setfattr -n glusterfs.geo-rep.trigger-sync -v \"1\" <file-path>",
"[2011-05-02 13:42:13.467644] E [master:288:regjob] GMaster: failed to sync ./some_file`",
"012-09-28 14:06:18.378859] E [syncdutils:131:log_raise_exception] <top>: FAIL: Traceback (most recent call last): File \"/usr/local/libexec/glusterfs/python/syncdaemon/syncdutils.py\", line 152, in twraptf(*aa) File \"/usr/local/libexec/glusterfs/python/syncdaemon/repce.py\", line 118, in listen rid, exc, res = recv(self.inf) File \"/usr/local/libexec/glusterfs/python/syncdaemon/repce.py\", line 42, in recv return pickle.load(inf) EOFError",
"raise RuntimeError (\"aborting on uuid change from %s to %s\" % \\ RuntimeError: aborting on uuid change from af07e07c-427f-4586-ab9f- 4bf7d299be81 to de6b5040-8f4e-4575-8831-c4f55bd41154",
"[2012-04-04 03:41:40.324496] E [resource:169:errfail] Popen: ssh> bash: /usr/local/libexec/glusterfs/gsyncd: No such file or directory"
]
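For the file synchronization issue described in Section 10.12.4, a quick way to confirm the installed rsync version on the master and slave nodes is the standard rsync version query; this is a generic check, not a Red Hat Gluster Storage specific command: # rsync --version | head -1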
| https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/administration_guide/sect-troubleshooting_geo-replication |
Chapter 11. Configuring cluster resources | Chapter 11. Configuring cluster resources Create and delete cluster resources with the following commands. The format for the command to create a cluster resource is as follows: Key cluster resource creation options include the following: The --before and --after options specify the position of the added resource relative to a resource that already exists in a resource group. Specifying the --disabled option indicates that the resource is not started automatically. There is no limit to the number of resources you can create in a cluster. You can determine the behavior of a resource in a cluster by configuring constraints for that resource. Resource creation examples The following command creates a resource with the name VirtualIP of standard ocf , provider heartbeat , and type IPaddr2 . The floating address of this resource is 192.168.0.120, and the system will check whether the resource is running every 30 seconds. Alternately, you can omit the standard and provider fields and use the following command. This will default to a standard of ocf and a provider of heartbeat . Deleting a configured resource Delete a configured resource with the following command. For example, the following command deletes an existing resource with a resource ID of VirtualIP . 11.1. Resource agent identifiers The identifiers that you define for a resource tell the cluster which agent to use for the resource, where to find that agent and what standards it conforms to. The following table describes these properties of a resource agent. Table 11.1. Resource Agent Identifiers Field Description standard The standard the agent conforms to. Allowed values and their meaning: * ocf - The specified type is the name of an executable file conforming to the Open Cluster Framework Resource Agent API and located beneath /usr/lib/ocf/resource.d/ provider * lsb - The specified type is the name of an executable file conforming to Linux Standard Base Init Script Actions. If the type does not specify a full path, the system will look for it in the /etc/init.d directory. * systemd - The specified type is the name of an installed systemd unit * service - Pacemaker will search for the specified type , first as an lsb agent, then as a systemd agent * nagios - The specified type is the name of an executable file conforming to the Nagios Plugin API and located in the /usr/libexec/nagios/plugins directory, with OCF-style metadata stored separately in the /usr/share/nagios/plugins-metadata directory (available in the nagios-agents-metadata package for certain common plugins). type The name of the resource agent you wish to use, for example IPaddr or Filesystem provider The OCF spec allows multiple vendors to supply the same resource agent. Most of the agents shipped by Red Hat use heartbeat as the provider. The following table summarizes the commands that display the available resource properties. Table 11.2. Commands to Display Resource Properties pcs Display Command Output pcs resource list Displays a list of all available resources. pcs resource standards Displays a list of available resource agent standards. pcs resource providers Displays a list of available resource agent providers. pcs resource list string Displays a list of available resources filtered by the specified string. You can use this command to display resources filtered by the name of a standard, a provider, or a type. 11.2. 
Displaying resource-specific parameters For any individual resource, you can use the following command to display a description of the resource, the parameters you can set for that resource, and the default values that are set for the resource. For example, the following command displays information for a resource of type apache . 11.3. Configuring resource meta options In addition to the resource-specific parameters, you can configure additional resource options for any resource. These options are used by the cluster to decide how your resource should behave. The following table describes the resource meta options. Table 11.3. Resource Meta Options Field Default Description priority 0 If not all resources can be active, the cluster will stop lower priority resources in order to keep higher priority ones active. target-role Started Indicates what state the cluster should attempt to keep this resource in. Allowed values: * Stopped - Force the resource to be stopped * Started - Allow the resource to be started (and in the case of promotable clones, promoted to master role if appropriate) * Master - Allow the resource to be started and, if appropriate, promoted * Slave - Allow the resource to be started, but only in slave mode if the resource is promotable As of RHEL 8.5, the pcs command-line interface accepts Promoted and Unpromoted anywhere roles are specified in Pacemaker configuration. These role names are the functional equivalent of the Master and Slave Pacemaker roles. is-managed true Indicates whether the cluster is allowed to start and stop the resource. Allowed values: true , false resource-stickiness 0 Value to indicate how much the resource prefers to stay where it is. For information about this attribute, see Configuring a resource to prefer its current node . requires Calculated Indicates under what conditions the resource can be started. Defaults to fencing except under the conditions noted below. Possible values: * nothing - The cluster can always start the resource. * quorum - The cluster can only start this resource if a majority of the configured nodes are active. This is the default value if stonith-enabled is false or the resource's standard is stonith . * fencing - The cluster can only start this resource if a majority of the configured nodes are active and any failed or unknown nodes have been fenced. * unfencing - The cluster can only start this resource if a majority of the configured nodes are active and any failed or unknown nodes have been fenced and only on nodes that have been unfenced . This is the default value if the provides=unfencing stonith meta option has been set for a fencing device. migration-threshold INFINITY How many failures may occur for this resource on a node before this node is marked ineligible to host this resource. A value of 0 indicates that this feature is disabled (the node will never be marked ineligible); by contrast, the cluster treats INFINITY (the default) as a very large but finite number. This option has an effect only if the failed operation has on-fail=restart (the default), and additionally for failed start operations if the cluster property start-failure-is-fatal is false . failure-timeout 0 (disabled) Ignore previously failed resource actions after this much time has passed without new failures. This potentially allows the resource to move back to the node on which it failed, if it previously reached its migration threshold there. A value of 0 indicates that failures do not expire. 
WARNING: If this value is low, and pending cluster activity prevents the cluster from responding to a failure within that time, the failure is ignored completely and does not cause recovery of the resource, even if a recurring action continues to report failure. The value of this option should be at least greater than the longest action timeout for all resources in the cluster. A value in hours or days is reasonable. multiple-active stop_start Indicates what the cluster should do if it ever finds the resource active on more than one node. Allowed values: * block - mark the resource as unmanaged * stop_only - stop all active instances and leave them that way * stop_start - stop all active instances and start the resource in one location only * stop_unexpected - (RHEL 8.7 and later) stop only unexpected instances of the resource, without requiring a full restart. It is the user's responsibility to verify that the service and its resource agent can function with extra active instances without requiring a full restart. critical true (RHEL 8.4 and later) Sets the default value for the influence option for all colocation constraints involving the resource as a dependent resource ( target_resource ), including implicit colocation constraints created when the resource is part of a resource group. The influence colocation constraint option determines whether the cluster will move both the primary and dependent resources to another node when the dependent resource reaches its migration threshold for failure, or whether the cluster will leave the dependent resource offline without causing a service switch. The critical resource meta option can have a value of true or false , with a default value of true . allow-unhealthy-nodes false (RHEL 8.7 and later) When set to true , the resource is not forced off a node due to degraded node health. When health resources have this attribute set, the cluster can automatically detect if the node's health recovers and move resources back to it. A node's health is determined by a combination of the health attributes set by health resource agents based on local conditions, and the strategy-related options that determine how the cluster reacts to those conditions. 11.3.1. Changing the default value of a resource option As of Red Hat Enterprise Linux 8.3, you can change the default value of a resource option for all resources with the pcs resource defaults update command. The following command resets the default value of resource-stickiness to 100. The original pcs resource defaults name = value command, which set defaults for all resources in previous releases, remains supported unless there is more than one set of defaults configured. However, pcs resource defaults update is now the preferred version of the command. 11.3.2. Changing the default value of a resource option for sets of resources As of Red Hat Enterprise Linux 8.3, you can create multiple sets of resource defaults with the pcs resource defaults set create command, which allows you to specify a rule that contains resource expressions. In RHEL 8.3, only resource expressions, including and , or and parentheses, are allowed in rules that you specify with this command. In RHEL 8.4 and later, only resource and date expressions, including and , or and parentheses, are allowed in rules that you specify with this command. With the pcs resource defaults set create command, you can configure a default resource value for all resources of a particular type.
If, for example, you are running databases which take a long time to stop, you can increase the resource-stickiness default value for all resources of the database type to prevent those resources from moving to other nodes more often than you desire. The following command sets the default value of resource-stickiness to 100 for all resources of type pgsql . The id option, which names the set of resource defaults, is not mandatory. If you do not set this option, pcs will generate an ID automatically. Setting this value allows you to provide a more descriptive name. In this example, ::pgsql means a resource of any class, any provider, of type pgsql . Specifying ocf:heartbeat:pgsql would indicate class ocf , provider heartbeat , type pgsql . Specifying ocf:pacemaker: would indicate all resources of class ocf , provider pacemaker , of any type. To change the default values in an existing set, use the pcs resource defaults set update command. 11.3.3. Displaying currently configured resource defaults The pcs resource defaults command displays a list of currently configured default values for resource options, including any rules that you specified. The following example shows the output of this command after you have reset the default value of resource-stickiness to 100. The following example shows the output of this command after you have reset the default value of resource-stickiness to 100 for all resources of type pgsql and set the id option to id=pgsql-stickiness . 11.3.4. Setting meta options on resource creation Whether you have reset the default value of a resource meta option or not, you can set a resource option for a particular resource to a value other than the default when you create the resource. The following shows the format of the pcs resource create command you use when specifying a value for a resource meta option. For example, the following command creates a resource with a resource-stickiness value of 50. You can also set the value of a resource meta option for an existing resource, group, or cloned resource with the following command. In the following example, there is an existing resource named dummy_resource . This command sets the failure-timeout meta option to 20 seconds, so that the resource can attempt to restart on the same node in 20 seconds. After executing this command, you can display the values for the resource to verify that failure-timeout=20s is set. 11.4. Configuring resource groups One of the most common elements of a cluster is a set of resources that need to be located together, start sequentially, and stop in the reverse order. To simplify this configuration, Pacemaker supports the concept of resource groups. 11.4.1. Creating a resource group You create a resource group with the following command, specifying the resources to include in the group. If the group does not exist, this command creates the group. If the group exists, this command adds additional resources to the group. The resources will start in the order you specify them with this command, and will stop in the reverse order of their starting order. You can use the --before and --after options of this command to specify the position of the added resources relative to a resource that already exists in the group. You can also add a new resource to an existing group when you create the resource, using the following command. The resource you create is added to the group named group_name . If the group group_name does not exist, it will be created.
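For example, the following command is a sketch of creating a new resource and placing it directly into a group in one step; the resource and group names are illustrative only: pcs resource create WebSite ocf:heartbeat:apache --group webgroup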
There is no limit to the number of resources a group can contain. The fundamental properties of a group are as follows. Resources are colocated within a group. Resources are started in the order in which you specify them. If a resource in the group cannot run anywhere, then no resource specified after that resource is allowed to run. Resources are stopped in the reverse order in which you specify them. The following example creates a resource group named shortcut that contains the existing resources IPaddr and Email . In this example: The IPaddr is started first, then Email . The Email resource is stopped first, then IPAddr . If IPaddr cannot run anywhere, neither can Email . If Email cannot run anywhere, however, this does not affect IPaddr in any way. 11.4.2. Removing a resource group You remove a resource from a group with the following command. If there are no remaining resources in the group, this command removes the group itself. 11.4.3. Displaying resource groups The following command lists all currently configured resource groups. 11.4.4. Group options You can set the following options for a resource group, and they maintain the same meaning as when they are set for a single resource: priority , target-role , is-managed . For information about resource meta options, see Configuring resource meta options . 11.4.5. Group stickiness Stickiness, the measure of how much a resource wants to stay where it is, is additive in groups. Every active resource of the group will contribute its stickiness value to the group's total. So if the default resource-stickiness is 100, and a group has seven members, five of which are active, then the group as a whole will prefer its current location with a score of 500. 11.5. Determining resource behavior You can determine the behavior of a resource in a cluster by configuring constraints for that resource. You can configure the following categories of constraints: location constraints - A location constraint determines which nodes a resource can run on. For information about configuring location constraints, see Determining which nodes a resource can run on . order constraints - An ordering constraint determines the order in which the resources run. For information about configuring ordering constraints, see Determining the order in which cluster resources are run . colocation constraints - A colocation constraint determines where resources will be placed relative to other resources. For information about colocation constraints, see Colocating cluster resources . As a shorthand for configuring a set of constraints that will locate a set of resources together and ensure that the resources start sequentially and stop in reverse order, Pacemaker supports the concept of resource groups. After you have created a resource group, you can configure constraints on the group itself just as you configure constraints for individual resources. | [
"pcs resource create resource_id [ standard :[ provider :]] type [ resource_options ] [op operation_action operation_options [ operation_action operation options ]...] [meta meta_options ...] [clone [ clone_options ] | master [ master_options ] [--wait[= n ]]",
"pcs resource create VirtualIP ocf:heartbeat:IPaddr2 ip=192.168.0.120 cidr_netmask=24 op monitor interval=30s",
"pcs resource create VirtualIP IPaddr2 ip=192.168.0.120 cidr_netmask=24 op monitor interval=30s",
"pcs resource delete resource_id",
"pcs resource delete VirtualIP",
"pcs resource describe [ standard :[ provider :]] type",
"pcs resource describe ocf:heartbeat:apache This is the resource agent for the Apache Web server. This resource agent operates both version 1.x and version 2.x Apache servers.",
"pcs resource defaults update resource-stickiness=100",
"pcs resource defaults set create id=pgsql-stickiness meta resource-stickiness=100 rule resource ::pgsql",
"pcs resource defaults Meta Attrs: rsc_defaults-meta_attributes resource-stickiness=100",
"pcs resource defaults Meta Attrs: pgsql-stickiness resource-stickiness=100 Rule: boolean-op=and score=INFINITY Expression: resource ::pgsql",
"pcs resource create resource_id [ standard :[ provider :]] type [ resource options ] [meta meta_options ...]",
"pcs resource create VirtualIP ocf:heartbeat:IPaddr2 ip=192.168.0.120 meta resource-stickiness=50",
"pcs resource meta resource_id | group_id | clone_id meta_options",
"pcs resource meta dummy_resource failure-timeout=20s",
"pcs resource config dummy_resource Resource: dummy_resource (class=ocf provider=heartbeat type=Dummy) Meta Attrs: failure-timeout=20s",
"pcs resource group add group_name resource_id [ resource_id ] ... [ resource_id ] [--before resource_id | --after resource_id ]",
"pcs resource create resource_id [ standard :[ provider :]] type [resource_options] [op operation_action operation_options ] --group group_name",
"pcs resource group add shortcut IPaddr Email",
"pcs resource group remove group_name resource_id",
"pcs resource group list"
]
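As a brief illustration of the constraint types listed at the end of this chapter, the following commands sketch an ordering constraint and a colocation constraint for the VirtualIP and Email resources used in the earlier examples; treat them as examples of the syntax rather than a recommended configuration: pcs constraint order VirtualIP then Email pcs constraint colocation add Email with VirtualIP INFINITY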
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_high_availability_clusters/assembly_configuring-cluster-resources-configuring-and-managing-high-availability-clusters |
Chapter 2. Understanding Operators | Chapter 2. Understanding Operators 2.1. What are Operators? Conceptually, Operators take human operational knowledge and encode it into software that is more easily shared with consumers. Operators are pieces of software that ease the operational complexity of running another piece of software. They act like an extension of the software vendor's engineering team, monitoring a Kubernetes environment (such as OpenShift Container Platform) and using its current state to make decisions in real time. Advanced Operators are designed to handle upgrades seamlessly, react to failures automatically, and not take shortcuts, like skipping a software backup process to save time. More technically, Operators are a method of packaging, deploying, and managing a Kubernetes application. A Kubernetes application is an app that is both deployed on Kubernetes and managed using the Kubernetes APIs and kubectl or oc tooling. To be able to make the most of Kubernetes, you require a set of cohesive APIs to extend in order to service and manage your apps that run on Kubernetes. Think of Operators as the runtime that manages this type of app on Kubernetes. 2.1.1. Why use Operators? Operators provide: Repeatability of installation and upgrade. Constant health checks of every system component. Over-the-air (OTA) updates for OpenShift components and ISV content. A place to encapsulate knowledge from field engineers and spread it to all users, not just one or two. Why deploy on Kubernetes? Kubernetes (and by extension, OpenShift Container Platform) contains all of the primitives needed to build complex distributed systems - secret handling, load balancing, service discovery, autoscaling - that work across on-premises and cloud providers. Why manage your app with Kubernetes APIs and kubectl tooling? These APIs are feature rich, have clients for all platforms and plug into the cluster's access control/auditing. An Operator uses the Kubernetes extension mechanism, custom resource definitions (CRDs), so your custom object, for example MongoDB , looks and acts just like the built-in, native Kubernetes objects. How do Operators compare with service brokers? A service broker is a step towards programmatic discovery and deployment of an app. However, because it is not a long running process, it cannot execute Day 2 operations like upgrade, failover, or scaling. Customizations and parameterization of tunables are provided at install time, versus an Operator that is constantly watching the current state of your cluster. Off-cluster services are a good match for a service broker, although Operators exist for these as well. 2.1.2. Operator Framework The Operator Framework is a family of tools and capabilities to deliver on the customer experience described above. It is not just about writing code; testing, delivering, and updating Operators is just as important. The Operator Framework components consist of open source tools to tackle these problems: Operator SDK The Operator SDK assists Operator authors in bootstrapping, building, testing, and packaging their own Operator based on their expertise without requiring knowledge of Kubernetes API complexities. Operator Lifecycle Manager Operator Lifecycle Manager (OLM) controls the installation, upgrade, and role-based access control (RBAC) of Operators in a cluster. It is deployed by default in OpenShift Container Platform 4.15. 
Operator Registry The Operator Registry stores cluster service versions (CSVs) and custom resource definitions (CRDs) for creation in a cluster and stores Operator metadata about packages and channels. It runs in a Kubernetes or OpenShift cluster to provide this Operator catalog data to OLM. OperatorHub OperatorHub is a web console for cluster administrators to discover and select Operators to install on their cluster. It is deployed by default in OpenShift Container Platform. These tools are designed to be composable, so you can use any that are useful to you. 2.1.3. Operator maturity model The level of sophistication of the management logic encapsulated within an Operator can vary. This logic is also in general highly dependent on the type of the service represented by the Operator. One can however generalize the scale of the maturity of the encapsulated operations of an Operator for certain set of capabilities that most Operators can include. To this end, the following Operator maturity model defines five phases of maturity for generic Day 2 operations of an Operator: Figure 2.1. Operator maturity model The above model also shows how these capabilities can best be developed through the Helm, Go, and Ansible capabilities of the Operator SDK. 2.2. Operator Framework packaging format This guide outlines the packaging format for Operators supported by Operator Lifecycle Manager (OLM) in OpenShift Container Platform. 2.2.1. Bundle format The bundle format for Operators is a packaging format introduced by the Operator Framework. To improve scalability and to better enable upstream users hosting their own catalogs, the bundle format specification simplifies the distribution of Operator metadata. An Operator bundle represents a single version of an Operator. On-disk bundle manifests are containerized and shipped as a bundle image , which is a non-runnable container image that stores the Kubernetes manifests and Operator metadata. Storage and distribution of the bundle image is then managed using existing container tools like podman and docker and container registries such as Quay. Operator metadata can include: Information that identifies the Operator, for example its name and version. Additional information that drives the UI, for example its icon and some example custom resources (CRs). Required and provided APIs. Related images. When loading manifests into the Operator Registry database, the following requirements are validated: The bundle must have at least one channel defined in the annotations. Every bundle has exactly one cluster service version (CSV). If a CSV owns a custom resource definition (CRD), that CRD must exist in the bundle. 2.2.1.1. Manifests Bundle manifests refer to a set of Kubernetes manifests that define the deployment and RBAC model of the Operator. A bundle includes one CSV per directory and typically the CRDs that define the owned APIs of the CSV in its /manifests directory. 
Example bundle format layout etcd βββ manifests β βββ etcdcluster.crd.yaml β βββ etcdoperator.clusterserviceversion.yaml β βββ secret.yaml β βββ configmap.yaml βββ metadata βββ annotations.yaml βββ dependencies.yaml Additionally supported objects The following object types can also be optionally included in the /manifests directory of a bundle: Supported optional object types ClusterRole ClusterRoleBinding ConfigMap ConsoleCLIDownload ConsoleLink ConsoleQuickStart ConsoleYamlSample PodDisruptionBudget PriorityClass PrometheusRule Role RoleBinding Secret Service ServiceAccount ServiceMonitor VerticalPodAutoscaler When these optional objects are included in a bundle, Operator Lifecycle Manager (OLM) can create them from the bundle and manage their lifecycle along with the CSV: Lifecycle for optional objects When the CSV is deleted, OLM deletes the optional object. When the CSV is upgraded: If the name of the optional object is the same, OLM updates it in place. If the name of the optional object has changed between versions, OLM deletes and recreates it. 2.2.1.2. Annotations A bundle also includes an annotations.yaml file in its /metadata directory. This file defines higher level aggregate data that helps describe the format and package information about how the bundle should be added into an index of bundles: Example annotations.yaml annotations: operators.operatorframework.io.bundle.mediatype.v1: "registry+v1" 1 operators.operatorframework.io.bundle.manifests.v1: "manifests/" 2 operators.operatorframework.io.bundle.metadata.v1: "metadata/" 3 operators.operatorframework.io.bundle.package.v1: "test-operator" 4 operators.operatorframework.io.bundle.channels.v1: "beta,stable" 5 operators.operatorframework.io.bundle.channel.default.v1: "stable" 6 1 The media type or format of the Operator bundle. The registry+v1 format means it contains a CSV and its associated Kubernetes objects. 2 The path in the image to the directory that contains the Operator manifests. This label is reserved for future use and currently defaults to manifests/ . The value manifests.v1 implies that the bundle contains Operator manifests. 3 The path in the image to the directory that contains metadata files about the bundle. This label is reserved for future use and currently defaults to metadata/ . The value metadata.v1 implies that this bundle has Operator metadata. 4 The package name of the bundle. 5 The list of channels the bundle is subscribing to when added into an Operator Registry. 6 The default channel an Operator should be subscribed to when installed from a registry. Note In case of a mismatch, the annotations.yaml file is authoritative because the on-cluster Operator Registry that relies on these annotations only has access to this file. 2.2.1.3. Dependencies The dependencies of an Operator are listed in a dependencies.yaml file in the metadata/ folder of a bundle. This file is optional and currently only used to specify explicit Operator-version dependencies. The dependency list contains a type field for each item to specify what kind of dependency this is. The following types of Operator dependencies are supported: olm.package This type indicates a dependency for a specific Operator version. The dependency information must include the package name and the version of the package in semver format. For example, you can specify an exact version such as 0.5.2 or a range of versions such as >0.5.1 . 
olm.gvk With this type, the author can specify a dependency with group/version/kind (GVK) information, similar to existing CRD and API-based usage in a CSV. This is a path to enable Operator authors to consolidate all dependencies, API or explicit versions, to be in the same place. olm.constraint This type declares generic constraints on arbitrary Operator properties. In the following example, dependencies are specified for a Prometheus Operator and etcd CRDs: Example dependencies.yaml file dependencies: - type: olm.package value: packageName: prometheus version: ">0.27.0" - type: olm.gvk value: group: etcd.database.coreos.com kind: EtcdCluster version: v1beta2 Additional resources Operator Lifecycle Manager dependency resolution 2.2.1.4. About the opm CLI The opm CLI tool is provided by the Operator Framework for use with the Operator bundle format. This tool allows you to create and maintain catalogs of Operators from a list of Operator bundles that are similar to software repositories. The result is a container image which can be stored in a container registry and then installed on a cluster. A catalog contains a database of pointers to Operator manifest content that can be queried through an included API that is served when the container image is run. On OpenShift Container Platform, Operator Lifecycle Manager (OLM) can reference the image in a catalog source, defined by a CatalogSource object, which polls the image at regular intervals to enable frequent updates to installed Operators on the cluster. See CLI tools for steps on installing the opm CLI. 2.2.2. File-based catalogs File-based catalogs are the latest iteration of the catalog format in Operator Lifecycle Manager (OLM). It is a plain text-based (JSON or YAML) and declarative config evolution of the earlier SQLite database format, and it is fully backwards compatible. The goal of this format is to enable Operator catalog editing, composability, and extensibility. Editing With file-based catalogs, users interacting with the contents of a catalog are able to make direct changes to the format and verify that their changes are valid. Because this format is plain text JSON or YAML, catalog maintainers can easily manipulate catalog metadata by hand or with widely known and supported JSON or YAML tooling, such as the jq CLI. This editability enables the following features and user-defined extensions: Promoting an existing bundle to a new channel Changing the default channel of a package Custom algorithms for adding, updating, and removing upgrade edges Composability File-based catalogs are stored in an arbitrary directory hierarchy, which enables catalog composition. For example, consider two separate file-based catalog directories: catalogA and catalogB . A catalog maintainer can create a new combined catalog by making a new directory catalogC and copying catalogA and catalogB into it. This composability enables decentralized catalogs. The format permits Operator authors to maintain Operator-specific catalogs, and it permits maintainers to trivially build a catalog composed of individual Operator catalogs. File-based catalogs can be composed by combining multiple other catalogs, by extracting subsets of one catalog, or a combination of both of these. Note Duplicate packages and duplicate bundles within a package are not permitted. The opm validate command returns an error if any duplicates are found. 
Because Operator authors are most familiar with their Operator, its dependencies, and its upgrade compatibility, they are able to maintain their own Operator-specific catalog and have direct control over its contents. With file-based catalogs, Operator authors own the task of building and maintaining their packages in a catalog. Composite catalog maintainers, however, only own the task of curating the packages in their catalog and publishing the catalog to users. Extensibility The file-based catalog specification is a low-level representation of a catalog. While it can be maintained directly in its low-level form, catalog maintainers can build interesting extensions on top that can be used by their own custom tooling to make any number of mutations. For example, a tool could translate a high-level API, such as (mode=semver) , down to the low-level, file-based catalog format for upgrade edges. Or a catalog maintainer might need to customize all of the bundle metadata by adding a new property to bundles that meet a certain criteria. While this extensibility allows for additional official tooling to be developed on top of the low-level APIs for future OpenShift Container Platform releases, the major benefit is that catalog maintainers have this capability as well. Important As of OpenShift Container Platform 4.11, the default Red Hat-provided Operator catalog releases in the file-based catalog format. The default Red Hat-provided Operator catalogs for OpenShift Container Platform 4.6 through 4.10 released in the deprecated SQLite database format. The opm subcommands, flags, and functionality related to the SQLite database format are also deprecated and will be removed in a future release. The features are still supported and must be used for catalogs that use the deprecated SQLite database format. Many of the opm subcommands and flags for working with the SQLite database format, such as opm index prune , do not work with the file-based catalog format. For more information about working with file-based catalogs, see Managing custom catalogs and Mirroring images for a disconnected installation using the oc-mirror plugin . 2.2.2.1. Directory structure File-based catalogs can be stored and loaded from directory-based file systems. The opm CLI loads the catalog by walking the root directory and recursing into subdirectories. The CLI attempts to load every file it finds and fails if any errors occur. Non-catalog files can be ignored using .indexignore files, which have the same rules for patterns and precedence as .gitignore files. Example .indexignore file # Ignore everything except non-object .json and .yaml files **/* !*.json !*.yaml **/objects/*.json **/objects/*.yaml Catalog maintainers have the flexibility to choose their desired layout, but it is recommended to store each package's file-based catalog blobs in separate subdirectories. Each individual file can be either JSON or YAML; it is not necessary for every file in a catalog to use the same format. Basic recommended structure catalog βββ packageA β βββ index.yaml βββ packageB β βββ .indexignore β βββ index.yaml β βββ objects β βββ packageB.v0.1.0.clusterserviceversion.yaml βββ packageC βββ index.json βββ deprecations.yaml This recommended structure has the property that each subdirectory in the directory hierarchy is a self-contained catalog, which makes catalog composition, discovery, and navigation trivial file system operations. 
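To illustrate how small such a self-contained package catalog can be, the following is a minimal sketch of an index.yaml for the hypothetical packageA from the layout above; the package name, channel, bundle name, and image digest are placeholders rather than content from a real catalog:
---
# Package-level metadata: one olm.package blob per package
schema: olm.package
name: packageA
defaultChannel: stable
---
# At least one olm.channel blob listing the bundle entries in the channel
schema: olm.channel
package: packageA
name: stable
entries:
- name: packageA.v0.1.0
---
# One olm.bundle blob per bundle version, pointing at the bundle image
schema: olm.bundle
package: packageA
name: packageA.v0.1.0
image: quay.io/example-org/packagea-bundle@sha256:<digest>
properties:
- type: olm.package
  value:
    packageName: packageA
    version: 0.1.0
Running opm validate against the directory that contains a file like this is a quick way to confirm that the blobs parse and reference each other consistently.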
The catalog can also be included in a parent catalog by copying it into the parent catalog's root directory. 2.2.2.2. Schemas File-based catalogs use a format, based on the CUE language specification , that can be extended with arbitrary schemas. The following _Meta CUE schema defines the format that all file-based catalog blobs must adhere to: _Meta schema _Meta: { // schema is required and must be a non-empty string schema: string & !="" // package is optional, but if it's defined, it must be a non-empty string package?: string & !="" // properties is optional, but if it's defined, it must be a list of 0 or more properties properties?: [... #Property] } #Property: { // type is required type: string & !="" // value is required, and it must not be null value: !=null } Note No CUE schemas listed in this specification should be considered exhaustive. The opm validate command has additional validations that are difficult or impossible to express concisely in CUE. An Operator Lifecycle Manager (OLM) catalog currently uses three schemas ( olm.package , olm.channel , and olm.bundle ), which correspond to OLM's existing package and bundle concepts. Each Operator package in a catalog requires exactly one olm.package blob, at least one olm.channel blob, and one or more olm.bundle blobs. Note All olm.* schemas are reserved for OLM-defined schemas. Custom schemas must use a unique prefix, such as a domain that you own. 2.2.2.2.1. olm.package schema The olm.package schema defines package-level metadata for an Operator. This includes its name, description, default channel, and icon. Example 2.1. olm.package schema #Package: { schema: "olm.package" // Package name name: string & !="" // A description of the package description?: string // The package's default channel defaultChannel: string & !="" // An optional icon icon?: { base64data: string mediatype: string } } 2.2.2.2.2. olm.channel schema The olm.channel schema defines a channel within a package, the bundle entries that are members of the channel, and the upgrade edges for those bundles. If a bundle entry represents an edge in multiple olm.channel blobs, it can only appear once per channel. It is valid for an entry's replaces value to reference another bundle name that cannot be found in this catalog or another catalog. However, all other channel invariants must hold true, such as a channel not having multiple heads. Example 2.2. olm.channel schema #Channel: { schema: "olm.channel" package: string & !="" name: string & !="" entries: [...#ChannelEntry] } #ChannelEntry: { // name is required. It is the name of an `olm.bundle` that // is present in the channel. name: string & !="" // replaces is optional. It is the name of bundle that is replaced // by this entry. It does not have to be present in the entry list. replaces?: string & !="" // skips is optional. It is a list of bundle names that are skipped by // this entry. The skipped bundles do not have to be present in the // entry list. skips?: [...string & !=""] // skipRange is optional. It is the semver range of bundle versions // that are skipped by this entry. skipRange?: string & !="" } Warning When using the skipRange field, the skipped Operator versions are pruned from the update graph and are longer installable by users with the spec.startingCSV property of Subscription objects. You can update an Operator incrementally while keeping previously installed versions available to users for future installation by using both the skipRange and replaces field. 
Ensure that the replaces field points to the immediate version of the Operator version in question. 2.2.2.2.3. olm.bundle schema Example 2.3. olm.bundle schema #Bundle: { schema: "olm.bundle" package: string & !="" name: string & !="" image: string & !="" properties: [...#Property] relatedImages?: [...#RelatedImage] } #Property: { // type is required type: string & !="" // value is required, and it must not be null value: !=null } #RelatedImage: { // image is the image reference image: string & !="" // name is an optional descriptive name for an image that // helps identify its purpose in the context of the bundle name?: string & !="" } 2.2.2.2.4. olm.deprecations schema The optional olm.deprecations schema defines deprecation information for packages, bundles, and channels in a catalog. Operator authors can use this schema to provide relevant messages about their Operators, such as support status and recommended upgrade paths, to users running those Operators from a catalog. An olm.deprecations schema entry contains one or more of the following reference types, which indicates the deprecation scope. After the Operator is installed, any specified messages can be viewed as status conditions on the related Subscription object. Table 2.1. Deprecation reference types Type Scope Status condition olm.package Represents the entire package PackageDeprecated olm.channel Represents one channel ChannelDeprecated olm.bundle Represents one bundle version BundleDeprecated Each reference type has their own requirements, as detailed in the following example. Example 2.4. Example olm.deprecations schema with each reference type schema: olm.deprecations package: my-operator 1 entries: - reference: schema: olm.package 2 message: | 3 The 'my-operator' package is end of life. Please use the 'my-operator-new' package for support. - reference: schema: olm.channel name: alpha 4 message: | The 'alpha' channel is no longer supported. Please switch to the 'stable' channel. - reference: schema: olm.bundle name: my-operator.v1.68.0 5 message: | my-operator.v1.68.0 is deprecated. Uninstall my-operator.v1.68.0 and install my-operator.v1.72.0 for support. 1 Each deprecation schema must have a package value, and that package reference must be unique across the catalog. There must not be an associated name field. 2 The olm.package schema must not include a name field, because it is determined by the package field defined earlier in the schema. 3 All message fields, for any reference type, must be a non-zero length and represented as an opaque text blob. 4 The name field for the olm.channel schema is required. 5 The name field for the olm.bundle schema is required. Note The deprecation feature does not consider overlapping deprecation, for example package versus channel versus bundle. Operator authors can save olm.deprecations schema entries as a deprecations.yaml file in the same directory as the package's index.yaml file: Example directory structure for a catalog with deprecations my-catalog βββ my-operator βββ index.yaml βββ deprecations.yaml Additional resources Updating or filtering a file-based catalog image 2.2.2.3. Properties Properties are arbitrary pieces of metadata that can be attached to file-based catalog schemas. The type field is a string that effectively specifies the semantic and syntactic meaning of the value field. The value can be any arbitrary JSON or YAML. OLM defines a handful of property types, again using the reserved olm.* prefix. 2.2.2.3.1. 
olm.package property The olm.package property defines the package name and version. This is a required property on bundles, and there must be exactly one of these properties. The packageName field must match the bundle's first-class package field, and the version field must be a valid semantic version. Example 2.5. olm.package property #PropertyPackage: { type: "olm.package" value: { packageName: string & !="" version: string & !="" } } 2.2.2.3.2. olm.gvk property The olm.gvk property defines the group/version/kind (GVK) of a Kubernetes API that is provided by this bundle. This property is used by OLM to resolve a bundle with this property as a dependency for other bundles that list the same GVK as a required API. The GVK must adhere to Kubernetes GVK validations. Example 2.6. olm.gvk property #PropertyGVK: { type: "olm.gvk" value: { group: string & !="" version: string & !="" kind: string & !="" } } 2.2.2.3.3. olm.package.required The olm.package.required property defines the package name and version range of another package that this bundle requires. For every required package property a bundle lists, OLM ensures there is an Operator installed on the cluster for the listed package and in the required version range. The versionRange field must be a valid semantic version (semver) range. Example 2.7. olm.package.required property #PropertyPackageRequired: { type: "olm.package.required" value: { packageName: string & !="" versionRange: string & !="" } } 2.2.2.3.4. olm.gvk.required The olm.gvk.required property defines the group/version/kind (GVK) of a Kubernetes API that this bundle requires. For every required GVK property a bundle lists, OLM ensures there is an Operator installed on the cluster that provides it. The GVK must adhere to Kubernetes GVK validations. Example 2.8. olm.gvk.required property #PropertyGVKRequired: { type: "olm.gvk.required" value: { group: string & !="" version: string & !="" kind: string & !="" } } 2.2.2.4. Example catalog With file-based catalogs, catalog maintainers can focus on Operator curation and compatibility. Because Operator authors have already produced Operator-specific catalogs for their Operators, catalog maintainers can build their catalog by rendering each Operator catalog into a subdirectory of the catalog's root directory. 
There are many possible ways to build a file-based catalog; the following steps outline a simple approach: Maintain a single configuration file for the catalog, containing image references for each Operator in the catalog: Example catalog configuration file name: community-operators repo: quay.io/community-operators/catalog tag: latest references: - name: etcd-operator image: quay.io/etcd-operator/index@sha256:5891b5b522d5df086d0ff0b110fbd9d21bb4fc7163af34d08286a2e846f6be03 - name: prometheus-operator image: quay.io/prometheus-operator/index@sha256:e258d248fda94c63753607f7c4494ee0fcbe92f1a76bfdac795c9d84101eb317 Run a script that parses the configuration file and creates a new catalog from its references: Example script name=USD(yq eval '.name' catalog.yaml) mkdir "USDname" yq eval '.name + "/" + .references[].name' catalog.yaml | xargs mkdir for l in USD(yq e '.name as USDcatalog | .references[] | .image + "|" + USDcatalog + "/" + .name + "/index.yaml"' catalog.yaml); do image=USD(echo USDl | cut -d'|' -f1) file=USD(echo USDl | cut -d'|' -f2) opm render "USDimage" > "USDfile" done opm generate dockerfile "USDname" indexImage=USD(yq eval '.repo + ":" + .tag' catalog.yaml) docker build -t "USDindexImage" -f "USDname.Dockerfile" . docker push "USDindexImage" 2.2.2.5. Guidelines Consider the following guidelines when maintaining file-based catalogs. 2.2.2.5.1. Immutable bundles The general advice with Operator Lifecycle Manager (OLM) is that bundle images and their metadata should be treated as immutable. If a broken bundle has been pushed to a catalog, you must assume that at least one of your users has upgraded to that bundle. Based on that assumption, you must release another bundle with an upgrade edge from the broken bundle to ensure users with the broken bundle installed receive an upgrade. OLM will not reinstall an installed bundle if the contents of that bundle are updated in the catalog. However, there are some cases where a change in the catalog metadata is preferred: Channel promotion: If you already released a bundle and later decide that you would like to add it to another channel, you can add an entry for your bundle in another olm.channel blob. New upgrade edges: If you release a new 1.2.z bundle version, for example 1.2.4 , but 1.3.0 is already released, you can update the catalog metadata for 1.3.0 to skip 1.2.4 . 2.2.2.5.2. Source control Catalog metadata should be stored in source control and treated as the source of truth. Updates to catalog images should include the following steps: Update the source-controlled catalog directory with a new commit. Build and push the catalog image. Use a consistent tagging taxonomy, such as :latest or :<target_cluster_version> , so that users can receive updates to a catalog as they become available. 2.2.2.6. CLI usage For instructions about creating file-based catalogs by using the opm CLI, see Managing custom catalogs . For reference documentation about the opm CLI commands related to managing file-based catalogs, see CLI tools . 2.2.2.7. Automation Operator authors and catalog maintainers are encouraged to automate their catalog maintenance with CI/CD workflows. Catalog maintainers can further improve on this by building GitOps automation to accomplish the following tasks: Check that pull request (PR) authors are permitted to make the requested changes, for example by updating their package's image reference. Check that the catalog updates pass the opm validate command. 
Check that the updated bundle or catalog image references exist, the catalog images run successfully in a cluster, and Operators from that package can be successfully installed. Automatically merge PRs that pass the checks. Automatically rebuild and republish the catalog image. 2.2.3. RukPak (Technology Preview) Important RukPak is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenShift Container Platform 4.12 introduces the platform Operator type as a Technology Preview feature. The platform Operator mechanism relies on the RukPak component, also introduced in OpenShift Container Platform 4.12, and its resources to manage content. OpenShift Container Platform 4.14 introduces Operator Lifecycle Manager (OLM) 1.0 as a Technology Preview feature, which also relies on the RukPak component. RukPak is a pluggable solution for packaging and distributing cloud-native content. It supports advanced strategies for installation, updates, and policy. RukPak provides a content ecosystem for installing a variety of artifacts on a Kubernetes cluster. Artifact examples include Git repositories, Helm charts, and OLM bundles. RukPak can then manage, scale, and upgrade these artifacts in a safe way to enable powerful cluster extensions. At its core, RukPak is a small set of APIs and controllers. The APIs are packaged as custom resource definitions (CRDs) that express what content to install on a cluster and how to create a running deployment of the content. The controllers watch for the APIs. Common terminology Bundle A collection of Kubernetes manifests that define content to be deployed to a cluster Bundle image A container image that contains a bundle within its filesystem Bundle Git repository A Git repository that contains a bundle within a directory Provisioner Controllers that install and manage content on a Kubernetes cluster Bundle deployment Generates deployed instances of a bundle Additional resources Managing platform Operators Technology Preview restrictions for platform Operators About Operator Lifecycle Manager 1.0 (Technology Preview) 2.2.3.1. Bundle A RukPak Bundle object represents content to make available to other consumers in the cluster. Much like the contents of a container image must be pulled and unpacked in order for pod to start using them, Bundle objects are used to reference content that might need to be pulled and unpacked. In this sense, a bundle is a generalization of the image concept and can be used to represent any type of content. Bundles cannot do anything on their own; they require a provisioner to unpack and make their content available in the cluster. They can be unpacked to any arbitrary storage medium, such as a tar.gz file in a directory mounted into the provisioner pods. Each Bundle object has an associated spec.provisionerClassName field that indicates the Provisioner object that watches and unpacks that particular bundle type. 
Example Bundle object configured to work with the plain provisioner apiVersion: core.rukpak.io/v1alpha1 kind: Bundle metadata: name: my-bundle spec: source: type: image image: ref: my-bundle@sha256:xyz123 provisionerClassName: core-rukpak-io-plain Note Bundles are considered immutable after they are created. 2.2.3.1.1. Bundle immutability After a Bundle object is accepted by the API server, the bundle is considered an immutable artifact by the rest of the RukPak system. This behavior enforces the notion that a bundle represents some unique, static piece of content to source onto the cluster. A user can have confidence that a particular bundle is pointing to a specific set of manifests and cannot be updated without creating a new bundle. This property is true for both standalone bundles and dynamic bundles created by an embedded BundleTemplate object. Bundle immutability is enforced by the core RukPak webhook. This webhook watches Bundle object events and, for any update to a bundle, checks whether the spec field of the existing bundle is semantically equal to that in the proposed updated bundle. If they are not equal, the update is rejected by the webhook. Other Bundle object fields, such as metadata or status , are updated during the bundle's lifecycle; it is only the spec field that is considered immutable. Applying a Bundle object and then attempting to update its spec should fail. For example, the following example creates a bundle: USD oc apply -f -<<EOF apiVersion: core.rukpak.io/v1alpha1 kind: Bundle metadata: name: combo-tag-ref spec: source: type: git git: ref: tag: v0.0.2 repository: https://github.com/operator-framework/combo provisionerClassName: core-rukpak-io-plain EOF Example output bundle.core.rukpak.io/combo-tag-ref created Then, patching the bundle to point to a newer tag returns an error: USD oc patch bundle combo-tag-ref --type='merge' -p '{"spec":{"source":{"git":{"ref":{"tag":"v0.0.3"}}}}}' Example output Error from server (bundle.spec is immutable): admission webhook "vbundles.core.rukpak.io" denied the request: bundle.spec is immutable The core RukPak admission webhook rejected the patch because the spec of the bundle is immutable. The recommended method to change the content of a bundle is by creating a new Bundle object instead of updating it in-place. Further immutability considerations While the spec field of the Bundle object is immutable, it is still possible for a BundleDeployment object to pivot to a newer version of bundle content without changing the underlying spec field. This unintentional pivoting could occur in the following scenario: A user sets an image tag, a Git branch, or a Git tag in the spec.source field of the Bundle object. The image tag moves to a new digest, a user pushes changes to a Git branch, or a user deletes and re-pushes a Git tag on a different commit. A user does something to cause the bundle unpack pod to be re-created, such as deleting the unpack pod. If this scenario occurs, the new content from step 2 is unpacked as a result of step 3. The bundle deployment detects the changes and pivots to the newer version of the content. This is similar to pod behavior, where one of the pod's container images uses a tag, the tag is moved to a different digest, and then at some point in the future the existing pod is rescheduled on a different node. At that point, the node pulls the new image at the new digest and runs something different without the user explicitly asking for it. 
To be confident that the underlying Bundle spec content does not change, use a digest-based image or a Git commit reference when creating the bundle. 2.2.3.1.2. Plain bundle spec A plain bundle in RukPak is a collection of static, arbitrary, Kubernetes YAML manifests in a given directory. The currently implemented plain bundle format is the plain+v0 format. The name of the bundle format, plain+v0 , combines the type of bundle ( plain ) with the current schema version ( v0 ). Note The plain+v0 bundle format is at schema version v0 , which means it is an experimental format that is subject to change. For example, the following shows the file tree in a plain+v0 bundle. It must have a manifests/ directory containing the Kubernetes resources required to deploy an application. Example plain+v0 bundle file tree USD tree manifests manifests βββ namespace.yaml βββ service_account.yaml βββ cluster_role.yaml βββ cluster_role_binding.yaml βββ deployment.yaml The static manifests must be located in the manifests/ directory with at least one resource in it for the bundle to be a valid plain+v0 bundle that the provisioner can unpack. The manifests/ directory must also be flat; all manifests must be at the top-level with no subdirectories. Important Do not include any content in the manifests/ directory of a plain bundle that are not static manifests. Otherwise, a failure will occur when creating content on-cluster from that bundle. Any file that would not successfully apply with the oc apply command will result in an error. Multi-object YAML or JSON files are valid, as well. 2.2.3.1.3. Registry bundle spec A registry bundle, or registry+v1 bundle, contains a set of static Kubernetes YAML manifests organized in the legacy Operator Lifecycle Manager (OLM) bundle format. Additional resources Legacy OLM bundle format 2.2.3.2. BundleDeployment Warning A BundleDeployment object changes the state of a Kubernetes cluster by installing and removing objects. It is important to verify and trust the content that is being installed and limit access, by using RBAC, to the BundleDeployment API to only those who require those permissions. The RukPak BundleDeployment API points to a Bundle object and indicates that it should be active. This includes pivoting from older versions of an active bundle. A BundleDeployment object might also include an embedded spec for a desired bundle. Much like pods generate instances of container images, a bundle deployment generates a deployed version of a bundle. A bundle deployment can be seen as a generalization of the pod concept. The specifics of how a bundle deployment makes changes to a cluster based on a referenced bundle is defined by the provisioner that is configured to watch that bundle deployment. Example BundleDeployment object configured to work with the plain provisioner apiVersion: core.rukpak.io/v1alpha1 kind: BundleDeployment metadata: name: my-bundle-deployment spec: provisionerClassName: core-rukpak-io-plain template: metadata: labels: app: my-bundle spec: source: type: image image: ref: my-bundle@sha256:xyz123 provisionerClassName: core-rukpak-io-plain 2.2.3.3. About provisioners RukPak consists of a series of controllers, known as provisioners , that install and manage content on a Kubernetes cluster. RukPak also provides two primary APIs: Bundle and BundleDeployment . These components work together to bring content onto the cluster and install it, generating resources within the cluster. 
Two provisioners are currently implemented and bundled with RukPak: the plain provisioner that sources and unpacks plain+v0 bundles, and the registry provisioner that sources and unpacks Operator Lifecycle Manager (OLM) registry+v1 bundles. Each provisioner is assigned a unique ID and is responsible for reconciling Bundle and BundleDeployment objects with a spec.provisionerClassName field that matches that particular ID. For example, the plain provisioner is able to unpack a given plain+v0 bundle onto a cluster and then instantiate it, making the content of the bundle available in the cluster. A provisioner places a watch on both Bundle and BundleDeployment resources that refer to the provisioner explicitly. For a given bundle, the provisioner unpacks the contents of the Bundle resource onto the cluster. Then, given a BundleDeployment resource referring to that bundle, the provisioner installs the bundle contents and is responsible for managing the lifecycle of those resources. 2.3. Operator Framework glossary of common terms This topic provides a glossary of common terms related to the Operator Framework, including Operator Lifecycle Manager (OLM) and the Operator SDK. 2.3.1. Common Operator Framework terms 2.3.1.1. Bundle In the bundle format, a bundle is a collection of an Operator CSV, manifests, and metadata. Together, they form a unique version of an Operator that can be installed onto the cluster. 2.3.1.2. Bundle image In the bundle format, a bundle image is a container image that is built from Operator manifests and that contains one bundle. Bundle images are stored and distributed by Open Container Initiative (OCI) spec container registries, such as Quay.io or DockerHub. 2.3.1.3. Catalog source A catalog source represents a store of metadata that OLM can query to discover and install Operators and their dependencies. 2.3.1.4. Channel A channel defines a stream of updates for an Operator and is used to roll out updates for subscribers. The head points to the latest version of that channel. For example, a stable channel would have all stable versions of an Operator arranged from the earliest to the latest. An Operator can have several channels, and a subscription binding to a certain channel would only look for updates in that channel. 2.3.1.5. Channel head A channel head refers to the latest known update in a particular channel. 2.3.1.6. Cluster service version A cluster service version (CSV) is a YAML manifest created from Operator metadata that assists OLM in running the Operator in a cluster. It is the metadata that accompanies an Operator container image, used to populate user interfaces with information such as its logo, description, and version. It is also a source of technical information that is required to run the Operator, like the RBAC rules it requires and which custom resources (CRs) it manages or depends on. 2.3.1.7. Dependency An Operator may have a dependency on another Operator being present in the cluster. For example, the Vault Operator has a dependency on the etcd Operator for its data persistence layer. OLM resolves dependencies by ensuring that all specified versions of Operators and CRDs are installed on the cluster during the installation phase. This dependency is resolved by finding and installing an Operator in a catalog that satisfies the required CRD API, and is not related to packages or bundles. 2.3.1.8. 
Index image In the bundle format, an index image refers to an image of a database (a database snapshot) that contains information about Operator bundles including CSVs and CRDs of all versions. This index can host a history of Operators on a cluster and be maintained by adding or removing Operators using the opm CLI tool. 2.3.1.9. Install plan An install plan is a calculated list of resources to be created to automatically install or upgrade a CSV. 2.3.1.10. Multitenancy A tenant in OpenShift Container Platform is a user or group of users that share common access and privileges for a set of deployed workloads, typically represented by a namespace or project. You can use tenants to provide a level of isolation between different groups or teams. When a cluster is shared by multiple users or groups, it is considered a multitenant cluster. 2.3.1.11. Operator group An Operator group configures all Operators deployed in the same namespace as the OperatorGroup object to watch for their CR in a list of namespaces or cluster-wide. 2.3.1.12. Package In the bundle format, a package is a directory that encloses all released history of an Operator with each version. A released version of an Operator is described in a CSV manifest alongside the CRDs. 2.3.1.13. Registry A registry is a database that stores bundle images of Operators, each with all of its latest and historical versions in all channels. 2.3.1.14. Subscription A subscription keeps CSVs up to date by tracking a channel in a package. 2.3.1.15. Update graph An update graph links versions of CSVs together, similar to the update graph of any other packaged software. Operators can be installed sequentially, or certain versions can be skipped. The update graph is expected to grow only at the head with newer versions being added. 2.4. Operator Lifecycle Manager (OLM) 2.4.1. Operator Lifecycle Manager concepts and resources This guide provides an overview of the concepts that drive Operator Lifecycle Manager (OLM) in OpenShift Container Platform. 2.4.1.1. What is Operator Lifecycle Manager? Operator Lifecycle Manager (OLM) helps users install, update, and manage the lifecycle of Kubernetes native applications (Operators) and their associated services running across their OpenShift Container Platform clusters. It is part of the Operator Framework , an open source toolkit designed to manage Operators in an effective, automated, and scalable way. Figure 2.2. Operator Lifecycle Manager workflow OLM runs by default in OpenShift Container Platform 4.15, which aids cluster administrators in installing, upgrading, and granting access to Operators running on their cluster. The OpenShift Container Platform web console provides management screens for cluster administrators to install Operators, as well as grant specific projects access to use the catalog of Operators available on the cluster. For developers, a self-service experience allows provisioning and configuring instances of databases, monitoring, and big data services without having to be subject matter experts, because the Operator has that knowledge baked into it. 2.4.1.2. OLM resources The following custom resource definitions (CRDs) are defined and managed by Operator Lifecycle Manager (OLM): Table 2.2. CRDs managed by OLM and Catalog Operators Resource Short name Description ClusterServiceVersion (CSV) csv Application metadata. For example: name, version, icon, required resources. CatalogSource catsrc A repository of CSVs, CRDs, and packages that define an application. 
Subscription sub Keeps CSVs up to date by tracking a channel in a package. InstallPlan ip Calculated list of resources to be created to automatically install or upgrade a CSV. OperatorGroup og Configures all Operators deployed in the same namespace as the OperatorGroup object to watch for their custom resource (CR) in a list of namespaces or cluster-wide. OperatorConditions - Creates a communication channel between OLM and an Operator it manages. Operators can write to the Status.Conditions array to communicate complex states to OLM. 2.4.1.2.1. Cluster service version A cluster service version (CSV) represents a specific version of a running Operator on an OpenShift Container Platform cluster. It is a YAML manifest created from Operator metadata that assists Operator Lifecycle Manager (OLM) in running the Operator in the cluster. OLM requires this metadata about an Operator to ensure that it can be kept running safely on a cluster, and to provide information about how updates should be applied as new versions of the Operator are published. This is similar to packaging software for a traditional operating system; think of the packaging step for OLM as the stage at which you make your rpm , deb , or apk bundle. A CSV includes the metadata that accompanies an Operator container image, used to populate user interfaces with information such as its name, version, description, labels, repository link, and logo. A CSV is also a source of technical information required to run the Operator, such as which custom resources (CRs) it manages or depends on, RBAC rules, cluster requirements, and install strategies. This information tells OLM how to create required resources and set up the Operator as a deployment. 2.4.1.2.2. Catalog source A catalog source represents a store of metadata, typically by referencing an index image stored in a container registry. Operator Lifecycle Manager (OLM) queries catalog sources to discover and install Operators and their dependencies. OperatorHub in the OpenShift Container Platform web console also displays the Operators provided by catalog sources. Tip Cluster administrators can view the full list of Operators provided by an enabled catalog source on a cluster by using the Administration Cluster Settings Configuration OperatorHub page in the web console. The spec of a CatalogSource object indicates how to construct a pod or how to communicate with a service that serves the Operator Registry gRPC API. Example 2.9. 
Example CatalogSource object \ufeffapiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: generation: 1 name: example-catalog 1 namespace: openshift-marketplace 2 annotations: olm.catalogImageTemplate: 3 "quay.io/example-org/example-catalog:v{kube_major_version}.{kube_minor_version}.{kube_patch_version}" spec: displayName: Example Catalog 4 image: quay.io/example-org/example-catalog:v1 5 priority: -400 6 publisher: Example Org sourceType: grpc 7 grpcPodConfig: securityContextConfig: <security_mode> 8 nodeSelector: 9 custom_label: <label> priorityClassName: system-cluster-critical 10 tolerations: 11 - key: "key1" operator: "Equal" value: "value1" effect: "NoSchedule" updateStrategy: registryPoll: 12 interval: 30m0s status: connectionState: address: example-catalog.openshift-marketplace.svc:50051 lastConnect: 2021-08-26T18:14:31Z lastObservedState: READY 13 latestImageRegistryPoll: 2021-08-26T18:46:25Z 14 registryService: 15 createdAt: 2021-08-26T16:16:37Z port: 50051 protocol: grpc serviceName: example-catalog serviceNamespace: openshift-marketplace 1 Name for the CatalogSource object. This value is also used as part of the name for the related pod that is created in the requested namespace. 2 Namespace to create the catalog in. To make the catalog available cluster-wide in all namespaces, set this value to openshift-marketplace . The default Red Hat-provided catalog sources also use the openshift-marketplace namespace. Otherwise, set the value to a specific namespace to make the Operator only available in that namespace. 3 Optional: To avoid cluster upgrades potentially leaving Operator installations in an unsupported state or without a continued update path, you can enable automatically changing your Operator catalog's index image version as part of cluster upgrades. Set the olm.catalogImageTemplate annotation to your index image name and use one or more of the Kubernetes cluster version variables as shown when constructing the template for the image tag. The annotation overwrites the spec.image field at run time. See the "Image template for custom catalog sources" section for more details. 4 Display name for the catalog in the web console and CLI. 5 Index image for the catalog. Optionally, can be omitted when using the olm.catalogImageTemplate annotation, which sets the pull spec at run time. 6 Weight for the catalog source. OLM uses the weight for prioritization during dependency resolution. A higher weight indicates the catalog is preferred over lower-weighted catalogs. 7 Source types include the following: grpc with an image reference: OLM pulls the image and runs the pod, which is expected to serve a compliant API. grpc with an address field: OLM attempts to contact the gRPC API at the given address. This should not be used in most cases. configmap : OLM parses config map data and runs a pod that can serve the gRPC API over it. 8 Specify the value of legacy or restricted . If the field is not set, the default value is legacy . In a future OpenShift Container Platform release, it is planned that the default value will be restricted . If your catalog cannot run with restricted permissions, it is recommended that you manually set this field to legacy . 9 Optional: For grpc type catalog sources, overrides the default node selector for the pod serving the content in spec.image , if defined. 10 Optional: For grpc type catalog sources, overrides the default priority class name for the pod serving the content in spec.image , if defined. 
Kubernetes provides system-cluster-critical and system-node-critical priority classes by default. Setting the field to empty ( "" ) assigns the pod the default priority. Other priority classes can be defined manually. 11 Optional: For grpc type catalog sources, overrides the default tolerations for the pod serving the content in spec.image , if defined. 12 Automatically check for new versions at a given interval to stay up-to-date. 13 Last observed state of the catalog connection. For example: READY : A connection is successfully established. CONNECTING : A connection is attempting to establish. TRANSIENT_FAILURE : A temporary problem has occurred while attempting to establish a connection, such as a timeout. The state will eventually switch back to CONNECTING and try again. See States of Connectivity in the gRPC documentation for more details. 14 Latest time the container registry storing the catalog image was polled to ensure the image is up-to-date. 15 Status information for the catalog's Operator Registry service. Referencing the name of a CatalogSource object in a subscription instructs OLM where to search to find a requested Operator: Example 2.10. Example Subscription object referencing a catalog source apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: example-operator namespace: example-namespace spec: channel: stable name: example-operator source: example-catalog sourceNamespace: openshift-marketplace Additional resources Understanding OperatorHub Red Hat-provided Operator catalogs Adding a catalog source to a cluster Catalog priority Viewing Operator catalog source status by using the CLI Understanding and managing pod security admission Catalog source pod scheduling 2.4.1.2.2.1. Image template for custom catalog sources Operator compatibility with the underlying cluster can be expressed by a catalog source in various ways. One way, which is used for the default Red Hat-provided catalog sources, is to identify image tags for index images that are specifically created for a particular platform release, for example OpenShift Container Platform 4.15. During a cluster upgrade, the index image tag for the default Red Hat-provided catalog sources are updated automatically by the Cluster Version Operator (CVO) so that Operator Lifecycle Manager (OLM) pulls the updated version of the catalog. For example during an upgrade from OpenShift Container Platform 4.14 to 4.15, the spec.image field in the CatalogSource object for the redhat-operators catalog is updated from: registry.redhat.io/redhat/redhat-operator-index:v4.14 to: registry.redhat.io/redhat/redhat-operator-index:v4.15 However, the CVO does not automatically update image tags for custom catalogs. To ensure users are left with a compatible and supported Operator installation after a cluster upgrade, custom catalogs should also be kept updated to reference an updated index image. Starting in OpenShift Container Platform 4.9, cluster administrators can add the olm.catalogImageTemplate annotation in the CatalogSource object for custom catalogs to an image reference that includes a template. The following Kubernetes version variables are supported for use in the template: kube_major_version kube_minor_version kube_patch_version Note You must specify the Kubernetes cluster version and not an OpenShift Container Platform cluster version, as the latter is not currently available for templating. 
Provided that you have created and pushed an index image with a tag specifying the updated Kubernetes version, setting this annotation enables the index image versions in custom catalogs to be automatically changed after a cluster upgrade. The annotation value is used to set or update the image reference in the spec.image field of the CatalogSource object. This helps avoid cluster upgrades leaving Operator installations in unsupported states or without a continued update path. Important You must ensure that the index image with the updated tag, in whichever registry it is stored in, is accessible by the cluster at the time of the cluster upgrade. Example 2.11. Example catalog source with an image template apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: generation: 1 name: example-catalog namespace: openshift-marketplace annotations: olm.catalogImageTemplate: "quay.io/example-org/example-catalog:v{kube_major_version}.{kube_minor_version}" spec: displayName: Example Catalog image: quay.io/example-org/example-catalog:v1.28 priority: -400 publisher: Example Org Note If the spec.image field and the olm.catalogImageTemplate annotation are both set, the spec.image field is overwritten by the resolved value from the annotation. If the annotation does not resolve to a usable pull spec, the catalog source falls back to the set spec.image value. If the spec.image field is not set and the annotation does not resolve to a usable pull spec, OLM stops reconciliation of the catalog source and sets it into a human-readable error condition. For an OpenShift Container Platform 4.15 cluster, which uses Kubernetes 1.28, the olm.catalogImageTemplate annotation in the preceding example resolves to the following image reference: quay.io/example-org/example-catalog:v1.28 For future releases of OpenShift Container Platform, you can create updated index images for your custom catalogs that target the later Kubernetes version that is used by the later OpenShift Container Platform version. With the olm.catalogImageTemplate annotation set before the upgrade, upgrading the cluster to the later OpenShift Container Platform version would then automatically update the catalog's index image as well. 2.4.1.2.2.2. Catalog health requirements Operator catalogs on a cluster are interchangeable from the perspective of installation resolution; a Subscription object might reference a specific catalog, but dependencies are resolved using all catalogs on the cluster. For example, if Catalog A is unhealthy, a subscription referencing Catalog A could resolve a dependency in Catalog B, which the cluster administrator might not have been expecting, because B normally had a lower catalog priority than A. As a result, OLM requires that all catalogs with a given global namespace (for example, the default openshift-marketplace namespace or a custom global namespace) are healthy. When a catalog is unhealthy, all Operator installation or update operations within its shared global namespace will fail with a CatalogSourcesUnhealthy condition. If these operations were permitted in an unhealthy state, OLM might make resolution and installation decisions that were unexpected to the cluster administrator. As a cluster administrator, if you observe an unhealthy catalog and want to consider the catalog as invalid and resume Operator installations, see the "Removing custom catalogs" or "Disabling the default OperatorHub catalog sources" sections for information about removing the unhealthy catalog. 
Additional resources Removing custom catalogs Disabling the default OperatorHub catalog sources 2.4.1.2.3. Subscription A subscription , defined by a Subscription object, represents an intention to install an Operator. It is the custom resource that relates an Operator to a catalog source. Subscriptions describe which channel of an Operator package to subscribe to, and whether to perform updates automatically or manually. If set to automatic, the subscription ensures Operator Lifecycle Manager (OLM) manages and upgrades the Operator to ensure that the latest version is always running in the cluster. Example Subscription object apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: example-operator namespace: example-namespace spec: channel: stable name: example-operator source: example-catalog sourceNamespace: openshift-marketplace This Subscription object defines the name and namespace of the Operator, as well as the catalog from which the Operator data can be found. The channel, such as alpha , beta , or stable , helps determine which Operator stream should be installed from the catalog source. The names of channels in a subscription can differ between Operators, but the naming scheme should follow a common convention within a given Operator. For example, channel names might follow a minor release update stream for the application provided by the Operator ( 1.2 , 1.3 ) or a release frequency ( stable , fast ). In addition to being easily visible from the OpenShift Container Platform web console, it is possible to identify when there is a newer version of an Operator available by inspecting the status of the related subscription. The value associated with the currentCSV field is the newest version that is known to OLM, and installedCSV is the version that is installed on the cluster. Additional resources Multitenancy and Operator colocation Viewing Operator subscription status by using the CLI 2.4.1.2.4. Install plan An install plan , defined by an InstallPlan object, describes a set of resources that Operator Lifecycle Manager (OLM) creates to install or upgrade to a specific version of an Operator. The version is defined by a cluster service version (CSV). To install an Operator, a cluster administrator, or a user who has been granted Operator installation permissions, must first create a Subscription object. A subscription represents the intent to subscribe to a stream of available versions of an Operator from a catalog source. The subscription then creates an InstallPlan object to facilitate the installation of the resources for the Operator. The install plan must then be approved according to one of the following approval strategies: If the subscription's spec.installPlanApproval field is set to Automatic , the install plan is approved automatically. If the subscription's spec.installPlanApproval field is set to Manual , the install plan must be manually approved by a cluster administrator or user with proper permissions. After the install plan is approved, OLM creates the specified resources and installs the Operator in the namespace that is specified by the subscription. Example 2.12. Example InstallPlan object apiVersion: operators.coreos.com/v1alpha1 kind: InstallPlan metadata: name: install-abcde namespace: operators spec: approval: Automatic approved: true clusterServiceVersionNames: - my-operator.v1.0.1 generation: 1 status: ... 
catalogSources: [] conditions: - lastTransitionTime: '2021-01-01T20:17:27Z' lastUpdateTime: '2021-01-01T20:17:27Z' status: 'True' type: Installed phase: Complete plan: - resolving: my-operator.v1.0.1 resource: group: operators.coreos.com kind: ClusterServiceVersion manifest: >- ... name: my-operator.v1.0.1 sourceName: redhat-operators sourceNamespace: openshift-marketplace version: v1alpha1 status: Created - resolving: my-operator.v1.0.1 resource: group: apiextensions.k8s.io kind: CustomResourceDefinition manifest: >- ... name: webservers.web.servers.org sourceName: redhat-operators sourceNamespace: openshift-marketplace version: v1beta1 status: Created - resolving: my-operator.v1.0.1 resource: group: '' kind: ServiceAccount manifest: >- ... name: my-operator sourceName: redhat-operators sourceNamespace: openshift-marketplace version: v1 status: Created - resolving: my-operator.v1.0.1 resource: group: rbac.authorization.k8s.io kind: Role manifest: >- ... name: my-operator.v1.0.1-my-operator-6d7cbc6f57 sourceName: redhat-operators sourceNamespace: openshift-marketplace version: v1 status: Created - resolving: my-operator.v1.0.1 resource: group: rbac.authorization.k8s.io kind: RoleBinding manifest: >- ... name: my-operator.v1.0.1-my-operator-6d7cbc6f57 sourceName: redhat-operators sourceNamespace: openshift-marketplace version: v1 status: Created ... Additional resources Multitenancy and Operator colocation Allowing non-cluster administrators to install Operators 2.4.1.2.5. Operator groups An Operator group , defined by the OperatorGroup resource, provides multitenant configuration to OLM-installed Operators. An Operator group selects target namespaces in which to generate required RBAC access for its member Operators. The set of target namespaces is provided by a comma-delimited string stored in the olm.targetNamespaces annotation of a cluster service version (CSV). This annotation is applied to the CSV instances of member Operators and is projected into their deployments. Additional resources Operator groups 2.4.1.2.6. Operator conditions As part of its role in managing the lifecycle of an Operator, Operator Lifecycle Manager (OLM) infers the state of an Operator from the state of Kubernetes resources that define the Operator. While this approach provides some level of assurance that an Operator is in a given state, there are many instances where an Operator might need to communicate information to OLM that could not be inferred otherwise. This information can then be used by OLM to better manage the lifecycle of the Operator. OLM provides a custom resource definition (CRD) called OperatorCondition that allows Operators to communicate conditions to OLM. There are a set of supported conditions that influence management of the Operator by OLM when present in the Spec.Conditions array of an OperatorCondition resource. Note By default, the Spec.Conditions array is not present in an OperatorCondition object until it is either added by a user or as a result of custom Operator logic. Additional resources Operator conditions 2.4.2. Operator Lifecycle Manager architecture This guide outlines the component architecture of Operator Lifecycle Manager (OLM) in OpenShift Container Platform. 2.4.2.1. Component responsibilities Operator Lifecycle Manager (OLM) is composed of two Operators: the OLM Operator and the Catalog Operator. Each of these Operators is responsible for managing the custom resource definitions (CRDs) that are the basis for the OLM framework: Table 2.3. 
CRDs managed by OLM and Catalog Operators Resource Short name Owner Description ClusterServiceVersion (CSV) csv OLM Application metadata: name, version, icon, required resources, installation, and so on. InstallPlan ip Catalog Calculated list of resources to be created to automatically install or upgrade a CSV. CatalogSource catsrc Catalog A repository of CSVs, CRDs, and packages that define an application. Subscription sub Catalog Used to keep CSVs up to date by tracking a channel in a package. OperatorGroup og OLM Configures all Operators deployed in the same namespace as the OperatorGroup object to watch for their custom resource (CR) in a list of namespaces or cluster-wide. Each of these Operators is also responsible for creating the following resources: Table 2.4. Resources created by OLM and Catalog Operators Resource Owner Deployments OLM ServiceAccounts (Cluster)Roles (Cluster)RoleBindings CustomResourceDefinitions (CRDs) Catalog ClusterServiceVersions 2.4.2.2. OLM Operator The OLM Operator is responsible for deploying applications defined by CSV resources after the required resources specified in the CSV are present in the cluster. The OLM Operator is not concerned with the creation of the required resources; you can choose to manually create these resources using the CLI or using the Catalog Operator. This separation of concern allows users incremental buy-in in terms of how much of the OLM framework they choose to leverage for their application. The OLM Operator uses the following workflow: Watch for cluster service versions (CSVs) in a namespace and check that requirements are met. If requirements are met, run the install strategy for the CSV. Note A CSV must be an active member of an Operator group for the install strategy to run. 2.4.2.3. Catalog Operator The Catalog Operator is responsible for resolving and installing cluster service versions (CSVs) and the required resources they specify. It is also responsible for watching catalog sources for updates to packages in channels and upgrading them, automatically if desired, to the latest available versions. To track a package in a channel, you can create a Subscription object configuring the desired package, channel, and the CatalogSource object you want to use for pulling updates. When updates are found, an appropriate InstallPlan object is written into the namespace on behalf of the user. The Catalog Operator uses the following workflow: Connect to each catalog source in the cluster. Watch for unresolved install plans created by a user, and if found: Find the CSV matching the name requested and add the CSV as a resolved resource. For each managed or required CRD, add the CRD as a resolved resource. For each required CRD, find the CSV that manages it. Watch for resolved install plans and create all of the discovered resources for it, if approved by a user or automatically. Watch for catalog sources and subscriptions and create install plans based on them. 2.4.2.4. Catalog Registry The Catalog Registry stores CSVs and CRDs for creation in a cluster and stores metadata about packages and channels. A package manifest is an entry in the Catalog Registry that associates a package identity with sets of CSVs. Within a package, channels point to a particular CSV. Because CSVs explicitly reference the CSV that they replace, a package manifest provides the Catalog Operator with all of the information that is required to update a CSV to the latest version in a channel, stepping through each intermediate version. 2.4.3. 
Operator Lifecycle Manager workflow This guide outlines the workflow of Operator Lifecycle Manager (OLM) in OpenShift Container Platform. 2.4.3.1. Operator installation and upgrade workflow in OLM In the Operator Lifecycle Manager (OLM) ecosystem, the following resources are used to resolve Operator installations and upgrades: ClusterServiceVersion (CSV) CatalogSource Subscription Operator metadata, defined in CSVs, can be stored in a collection called a catalog source. OLM uses catalog sources, which use the Operator Registry API , to query for available Operators as well as upgrades for installed Operators. Figure 2.3. Catalog source overview Within a catalog source, Operators are organized into packages and streams of updates called channels , which should be a familiar update pattern from OpenShift Container Platform or other software on a continuous release cycle like web browsers. Figure 2.4. Packages and channels in a Catalog source A user indicates a particular package and channel in a particular catalog source in a subscription , for example an etcd package and its alpha channel. If a subscription is made to a package that has not yet been installed in the namespace, the latest Operator for that package is installed. Note OLM deliberately avoids version comparisons, so the "latest" or "newest" Operator available from a given catalog channel package path does not necessarily need to be the highest version number. It should be thought of more as the head reference of a channel, similar to a Git repository. Each CSV has a replaces parameter that indicates which Operator it replaces. This builds a graph of CSVs that can be queried by OLM, and updates can be shared between channels. Channels can be thought of as entry points into the graph of updates: Figure 2.5. OLM graph of available channel updates Example channels in a package packageName: example channels: - name: alpha currentCSV: example.v0.1.2 - name: beta currentCSV: example.v0.1.3 defaultChannel: alpha For OLM to successfully query for updates, given a catalog source, package, channel, and CSV, a catalog must be able to return, unambiguously and deterministically, a single CSV that replaces the input CSV. 2.4.3.1.1. Example upgrade path For an example upgrade scenario, consider an installed Operator corresponding to CSV version 0.1.1 . OLM queries the catalog source and detects an upgrade in the subscribed channel with new CSV version 0.1.3 that replaces an older but not-installed CSV version 0.1.2 , which in turn replaces the older and installed CSV version 0.1.1 . OLM walks back from the channel head to previous versions via the replaces field specified in the CSVs to determine the upgrade path 0.1.3 → 0.1.2 → 0.1.1 ; the direction of the arrow indicates that the former replaces the latter. OLM upgrades the Operator one version at a time until it reaches the channel head. For this given scenario, OLM installs Operator version 0.1.2 to replace the existing Operator version 0.1.1 . Then, it installs Operator version 0.1.3 to replace the previously installed Operator version 0.1.2 . At this point, the installed Operator version 0.1.3 matches the channel head and the upgrade is completed. 2.4.3.1.2. Skipping upgrades The basic path for upgrades in OLM is: A catalog source is updated with one or more updates to an Operator. OLM traverses every version of the Operator until reaching the latest version the catalog source contains. However, sometimes this is not a safe operation to perform.
There will be cases where a published version of an Operator should never be installed on a cluster if it has not already, for example because a version introduces a serious vulnerability. In those cases, OLM must consider two cluster states and provide an update graph that supports both: The "bad" intermediate Operator has been seen by the cluster and installed. The "bad" intermediate Operator has not yet been installed onto the cluster. By shipping a new catalog and adding a skipped release, OLM is ensured that it can always get a single unique update regardless of the cluster state and whether it has seen the bad update yet. Example CSV with skipped release apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: name: etcdoperator.v0.9.2 namespace: placeholder annotations: spec: displayName: etcd description: Etcd Operator replaces: etcdoperator.v0.9.0 skips: - etcdoperator.v0.9.1 Consider the following example of Old CatalogSource and New CatalogSource . Figure 2.6. Skipping updates This graph maintains that: Any Operator found in Old CatalogSource has a single replacement in New CatalogSource . Any Operator found in New CatalogSource has a single replacement in New CatalogSource . If the bad update has not yet been installed, it will never be. 2.4.3.1.3. Replacing multiple Operators Creating New CatalogSource as described requires publishing CSVs that replace one Operator, but can skip several. This can be accomplished using the skipRange annotation: olm.skipRange: <semver_range> where <semver_range> has the version range format supported by the semver library . When searching catalogs for updates, if the head of a channel has a skipRange annotation and the currently installed Operator has a version field that falls in the range, OLM updates to the latest entry in the channel. The order of precedence is: Channel head in the source specified by sourceName on the subscription, if the other criteria for skipping are met. The Operator that replaces the current one, in the source specified by sourceName . Channel head in another source that is visible to the subscription, if the other criteria for skipping are met. The Operator that replaces the current one in any source visible to the subscription. Example CSV with skipRange apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: name: elasticsearch-operator.v4.1.2 namespace: <namespace> annotations: olm.skipRange: '>=4.1.0 <4.1.2' 2.4.3.1.4. Z-stream support A z-stream , or patch release, must replace all z-stream releases for the same minor version. OLM does not consider major, minor, or patch versions, it just needs to build the correct graph in a catalog. In other words, OLM must be able to take a graph as in Old CatalogSource and, similar to before, generate a graph as in New CatalogSource : Figure 2.7. Replacing several Operators This graph maintains that: Any Operator found in Old CatalogSource has a single replacement in New CatalogSource . Any Operator found in New CatalogSource has a single replacement in New CatalogSource . Any z-stream release in Old CatalogSource will update to the latest z-stream release in New CatalogSource . Unavailable releases can be considered "virtual" graph nodes; their content does not need to exist, the registry just needs to respond as if the graph looks like this. 2.4.4. 
Operator Lifecycle Manager dependency resolution This guide outlines dependency resolution and custom resource definition (CRD) upgrade lifecycles with Operator Lifecycle Manager (OLM) in OpenShift Container Platform. 2.4.4.1. About dependency resolution Operator Lifecycle Manager (OLM) manages the dependency resolution and upgrade lifecycle of running Operators. In many ways, the problems OLM faces are similar to other system or language package managers, such as yum and rpm . However, there is one constraint that similar systems do not generally have that OLM does: because Operators are always running, OLM attempts to ensure that you are never left with a set of Operators that do not work with each other. As a result, OLM must never create the following scenarios: Install a set of Operators that require APIs that cannot be provided Update an Operator in a way that breaks another that depends upon it This is made possible with two types of data: Properties Typed metadata about the Operator that constitutes the public interface for it in the dependency resolver. Examples include the group/version/kind (GVK) of the APIs provided by the Operator and the semantic version (semver) of the Operator. Constraints or dependencies An Operator's requirements that should be satisfied by other Operators that might or might not have already been installed on the target cluster. These act as queries or filters over all available Operators and constrain the selection during dependency resolution and installation. Examples include requiring a specific API to be available on the cluster or expecting a particular Operator with a particular version to be installed. OLM converts these properties and constraints into a system of Boolean formulas and passes them to a SAT solver, a program that establishes Boolean satisfiability, which does the work of determining what Operators should be installed. 2.4.4.2. Operator properties All Operators in a catalog have the following properties: olm.package Includes the name of the package and the version of the Operator olm.gvk A single property for each provided API from the cluster service version (CSV) Additional properties can also be directly declared by an Operator author by including a properties.yaml file in the metadata/ directory of the Operator bundle. Example arbitrary property properties: - type: olm.kubeversion value: version: "1.16.0" 2.4.4.2.1. Arbitrary properties Operator authors can declare arbitrary properties in a properties.yaml file in the metadata/ directory of the Operator bundle. These properties are translated into a map data structure that is used as an input to the Operator Lifecycle Manager (OLM) resolver at runtime. These properties are opaque to the resolver as it does not understand the properties, but it can evaluate the generic constraints against those properties to determine if the constraints can be satisfied given the properties list. Example arbitrary properties properties: - property: type: color value: red - property: type: shape value: square - property: type: olm.gvk value: group: olm.coreos.io version: v1alpha1 kind: myresource This structure can be used to construct a Common Expression Language (CEL) expression for generic constraints. Additional resources Common Expression Language (CEL) constraints 2.4.4.3. Operator dependencies The dependencies of an Operator are listed in a dependencies.yaml file in the metadata/ folder of a bundle. 
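For orientation, the following sketch shows where these files typically sit in a bundle. The bundle name and the manifest file names are illustrative assumptions, while annotations.yaml is the standard bundle metadata file:

example-operator-bundle/
├── manifests/
│   ├── example-operator.clusterserviceversion.yaml
│   └── example-crd.yaml
└── metadata/
    ├── annotations.yaml
    ├── properties.yaml
    └── dependencies.yaml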
This file is optional and currently only used to specify explicit Operator-version dependencies. The dependency list contains a type field for each item to specify what kind of dependency this is. The following types of Operator dependencies are supported: olm.package This type indicates a dependency for a specific Operator version. The dependency information must include the package name and the version of the package in semver format. For example, you can specify an exact version such as 0.5.2 or a range of versions such as >0.5.1 . olm.gvk With this type, the author can specify a dependency with group/version/kind (GVK) information, similar to existing CRD and API-based usage in a CSV. This is a path to enable Operator authors to consolidate all dependencies, API or explicit versions, to be in the same place. olm.constraint This type declares generic constraints on arbitrary Operator properties. In the following example, dependencies are specified for a Prometheus Operator and etcd CRDs: Example dependencies.yaml file dependencies: - type: olm.package value: packageName: prometheus version: ">0.27.0" - type: olm.gvk value: group: etcd.database.coreos.com kind: EtcdCluster version: v1beta2 2.4.4.4. Generic constraints An olm.constraint property declares a dependency constraint of a particular type, differentiating non-constraint and constraint properties. Its value field is an object containing a failureMessage field holding a string-representation of the constraint message. This message is surfaced as an informative comment to users if the constraint is not satisfiable at runtime. The following keys denote the available constraint types: gvk Type whose value and interpretation is identical to the olm.gvk type package Type whose value and interpretation is identical to the olm.package type cel A Common Expression Language (CEL) expression evaluated at runtime by the Operator Lifecycle Manager (OLM) resolver over arbitrary bundle properties and cluster information all , any , not Conjunction, disjunction, and negation constraints, respectively, containing one or more concrete constraints, such as gvk or a nested compound constraint 2.4.4.4.1. Common Expression Language (CEL) constraints The cel constraint type supports Common Expression Language (CEL) as the expression language. The cel struct has a rule field which contains the CEL expression string that is evaluated against Operator properties at runtime to determine if the Operator satisfies the constraint. Example cel constraint type: olm.constraint value: failureMessage: 'require to have "certified"' cel: rule: 'properties.exists(p, p.type == "certified")' The CEL syntax supports a wide range of logical operators, such as AND and OR . As a result, a single CEL expression can have multiple rules for multiple conditions that are linked together by these logical operators. These rules are evaluated against a dataset of multiple different properties from a bundle or any given source, and the output is solved into a single bundle or Operator that satisfies all of those rules within a single constraint. Example cel constraint with multiple rules type: olm.constraint value: failureMessage: 'require to have "certified" and "stable" properties' cel: rule: 'properties.exists(p, p.type == "certified") && properties.exists(p, p.type == "stable")' 2.4.4.4.2. Compound constraints (all, any, not) Compound constraint types are evaluated following their logical definitions. 
The following is an example of a conjunctive constraint ( all ) of two packages and one GVK. That is, they must all be satisfied by installed bundles: Example all constraint schema: olm.bundle name: red.v1.0.0 properties: - type: olm.constraint value: failureMessage: All are required for Red because... all: constraints: - failureMessage: Package blue is needed for... package: name: blue versionRange: '>=1.0.0' - failureMessage: GVK Green/v1 is needed for... gvk: group: greens.example.com version: v1 kind: Green The following is an example of a disjunctive constraint ( any ) of three versions of the same GVK. That is, at least one must be satisfied by installed bundles: Example any constraint schema: olm.bundle name: red.v1.0.0 properties: - type: olm.constraint value: failureMessage: Any are required for Red because... any: constraints: - gvk: group: blues.example.com version: v1beta1 kind: Blue - gvk: group: blues.example.com version: v1beta2 kind: Blue - gvk: group: blues.example.com version: v1 kind: Blue The following is an example of a negation constraint ( not ) of one version of a GVK. That is, this GVK cannot be provided by any bundle in the result set: Example not constraint schema: olm.bundle name: red.v1.0.0 properties: - type: olm.constraint value: all: constraints: - failureMessage: Package blue is needed for... package: name: blue versionRange: '>=1.0.0' - failureMessage: Cannot be required for Red because... not: constraints: - gvk: group: greens.example.com version: v1alpha1 kind: greens The negation semantics might appear unclear in the not constraint context. To clarify, the negation is really instructing the resolver to remove any possible solution that includes a particular GVK, package at a version, or satisfies some child compound constraint from the result set. As a corollary, the not compound constraint should only be used within all or any constraints, because negating without first selecting a possible set of dependencies does not make sense. 2.4.4.4.3. Nested compound constraints A nested compound constraint, one that contains at least one child compound constraint along with zero or more simple constraints, is evaluated from the bottom up following the procedures for each previously described constraint type. The following is an example of a disjunction of conjunctions, where one, the other, or both can satisfy the constraint: Example nested compound constraint schema: olm.bundle name: red.v1.0.0 properties: - type: olm.constraint value: failureMessage: Required for Red because... any: constraints: - all: constraints: - package: name: blue versionRange: '>=1.0.0' - gvk: group: blues.example.com version: v1 kind: Blue - all: constraints: - package: name: blue versionRange: '<1.0.0' - gvk: group: blues.example.com version: v1beta1 kind: Blue Note The maximum raw size of an olm.constraint type is 64KB to limit resource exhaustion attacks. 2.4.4.5. Dependency preferences There can be many options that equally satisfy a dependency of an Operator. The dependency resolver in Operator Lifecycle Manager (OLM) determines which option best fits the requirements of the requested Operator. As an Operator author or user, it can be important to understand how these choices are made so that dependency resolution is clear. 2.4.4.5.1. Catalog priority On OpenShift Container Platform cluster, OLM reads catalog sources to know which Operators are available for installation. 
Example CatalogSource object apiVersion: "operators.coreos.com/v1alpha1" kind: "CatalogSource" metadata: name: "my-operators" namespace: "operators" spec: sourceType: grpc grpcPodConfig: securityContextConfig: <security_mode> 1 image: example.com/my/operator-index:v1 displayName: "My Operators" priority: 100 1 Specify the value of legacy or restricted . If the field is not set, the default value is legacy . In a future OpenShift Container Platform release, it is planned that the default value will be restricted . If your catalog cannot run with restricted permissions, it is recommended that you manually set this field to legacy . A CatalogSource object has a priority field, which is used by the resolver to know how to prefer options for a dependency. There are two rules that govern catalog preference: Options in higher-priority catalogs are preferred to options in lower-priority catalogs. Options in the same catalog as the dependent are preferred to any other catalogs. 2.4.4.5.2. Channel ordering An Operator package in a catalog is a collection of update channels that a user can subscribe to in an OpenShift Container Platform cluster. Channels can be used to provide a particular stream of updates for a minor release ( 1.2 , 1.3 ) or a release frequency ( stable , fast ). It is likely that a dependency might be satisfied by Operators in the same package, but different channels. For example, version 1.2 of an Operator might exist in both the stable and fast channels. Each package has a default channel, which is always preferred to non-default channels. If no option in the default channel can satisfy a dependency, options are considered from the remaining channels in lexicographic order of the channel name. 2.4.4.5.3. Order within a channel There are almost always multiple options to satisfy a dependency within a single channel. For example, Operators in one package and channel provide the same set of APIs. When a user creates a subscription, they indicate which channel to receive updates from. This immediately reduces the search to just that one channel. But within the channel, it is likely that many Operators satisfy a dependency. Within a channel, newer Operators that are higher up in the update graph are preferred. If the head of a channel satisfies a dependency, it will be tried first. 2.4.4.5.4. Other constraints In addition to the constraints supplied by package dependencies, OLM includes additional constraints to represent the desired user state and enforce resolution invariants. 2.4.4.5.4.1. Subscription constraint A subscription constraint filters the set of Operators that can satisfy a subscription. Subscriptions are user-supplied constraints for the dependency resolver. They declare the intent to either install a new Operator if it is not already on the cluster, or to keep an existing Operator updated. 2.4.4.5.4.2. Package constraint Within a namespace, no two Operators may come from the same package. 2.4.4.5.5. Additional resources Catalog health requirements 2.4.4.6. CRD upgrades OLM upgrades a custom resource definition (CRD) immediately if it is owned by a singular cluster service version (CSV). If a CRD is owned by multiple CSVs, then the CRD is upgraded when it has satisfied all of the following backward compatible conditions: All existing serving versions in the current CRD are present in the new CRD. All existing instances, or custom resources, that are associated with the serving versions of the CRD are valid when validated against the validation schema of the new CRD. 
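Before publishing a CRD change, you can check which versions the CRD on the cluster currently serves and stores. This sketch reuses the webservers.web.servers.org CRD name from the earlier install plan example and assumes that CRD is present on the cluster:

oc get crd webservers.web.servers.org -o jsonpath='{range .spec.versions[*]}{.name}{" served="}{.served}{" storage="}{.storage}{"\n"}{end}'

Every version reported as served must also be present in the new CRD for OLM to perform the upgrade.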
Additional resources Adding a new CRD version Deprecating or removing a CRD version 2.4.4.7. Dependency best practices When specifying dependencies, there are best practices you should consider. Depend on APIs or a specific version range of Operators Operators can add or remove APIs at any time; always specify an olm.gvk dependency on any APIs your Operator requires. The exception to this is if you are specifying olm.package constraints instead. Set a minimum version The Kubernetes documentation on API changes describes what changes are allowed for Kubernetes-style Operators. These versioning conventions allow an Operator to update an API without bumping the API version, as long as the API is backwards-compatible. For Operator dependencies, this means that knowing the API version of a dependency might not be enough to ensure the dependent Operator works as intended. For example: TestOperator v1.0.0 provides v1alpha1 API version of the MyObject resource. TestOperator v1.0.1 adds a new field spec.newfield to MyObject , but still at v1alpha1. Your Operator might require the ability to write spec.newfield into the MyObject resource. An olm.gvk constraint alone is not enough for OLM to determine that you need TestOperator v1.0.1 and not TestOperator v1.0.0. Whenever possible, if a specific Operator that provides an API is known ahead of time, specify an additional olm.package constraint to set a minimum. Omit a maximum version or allow a very wide range Because Operators provide cluster-scoped resources such as API services and CRDs, an Operator that specifies a small window for a dependency might unnecessarily constrain updates for other consumers of that dependency. Whenever possible, do not set a maximum version. Alternatively, set a very wide semantic range to prevent conflicts with other Operators. For example, >1.0.0 <2.0.0 . Unlike with conventional package managers, Operator authors explicitly encode that updates are safe through channels in OLM. If an update is available for an existing subscription, it is assumed that the Operator author is indicating that it can update from the previous version. Setting a maximum version for a dependency overrides the update stream of the author by unnecessarily truncating it at a particular upper bound. Note Cluster administrators cannot override dependencies set by an Operator author. However, maximum versions can and should be set if there are known incompatibilities that must be avoided. Specific versions can be omitted with the version range syntax, for example > 1.0.0 !1.2.1 . Additional resources Kubernetes documentation: Changing the API 2.4.4.8. Dependency caveats When specifying dependencies, there are caveats you should consider. No compound constraints (AND) There is currently no method for specifying an AND relationship between constraints. In other words, there is no way to specify that one Operator depends on another Operator that both provides a given API and has version >1.1.0 . This means that when specifying a dependency such as: dependencies: - type: olm.package value: packageName: etcd version: ">3.1.0" - type: olm.gvk value: group: etcd.database.coreos.com kind: EtcdCluster version: v1beta2 It would be possible for OLM to satisfy this with two Operators: one that provides EtcdCluster and one that has version >3.1.0 . Whether that happens, or whether an Operator is selected that satisfies both constraints, depends on the order in which potential options are visited.
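If the package that provides the required API is known ahead of time, one way to reduce this ambiguity is to express the requirement as a single, suitably narrowed olm.package entry rather than mixing the two mechanisms. This is a sketch, not a guaranteed resolution strategy, and it trades away the explicit API check that olm.gvk provides:

dependencies:
- type: olm.package
  value:
    packageName: etcd
    version: ">3.1.0"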
Dependency preferences and ordering options are well-defined and can be reasoned about, but to exercise caution, Operators should stick to one mechanism or the other. Cross-namespace compatibility OLM performs dependency resolution at the namespace scope. It is possible to get into an update deadlock if updating an Operator in one namespace would be an issue for an Operator in another namespace, and vice-versa. 2.4.4.9. Example dependency resolution scenarios In the following examples, a provider is an Operator which "owns" a CRD or API service. Example: Deprecating dependent APIs A and B are APIs (CRDs): The provider of A depends on B. The provider of B has a subscription. The provider of B updates to provide C but deprecates B. This results in: B no longer has a provider. A no longer works. This is a case OLM prevents with its upgrade strategy. Example: Version deadlock A and B are APIs: The provider of A requires B. The provider of B requires A. The provider of A updates to (provide A2, require B2) and deprecate A. The provider of B updates to (provide B2, require A2) and deprecate B. If OLM attempts to update A without simultaneously updating B, or vice-versa, it is unable to progress to new versions of the Operators, even though a new compatible set can be found. This is another case OLM prevents with its upgrade strategy. 2.4.5. Operator groups This guide outlines the use of Operator groups with Operator Lifecycle Manager (OLM) in OpenShift Container Platform. 2.4.5.1. About Operator groups An Operator group , defined by the OperatorGroup resource, provides multitenant configuration to OLM-installed Operators. An Operator group selects target namespaces in which to generate required RBAC access for its member Operators. The set of target namespaces is provided by a comma-delimited string stored in the olm.targetNamespaces annotation of a cluster service version (CSV). This annotation is applied to the CSV instances of member Operators and is projected into their deployments. 2.4.5.2. Operator group membership An Operator is considered a member of an Operator group if the following conditions are true: The CSV of the Operator exists in the same namespace as the Operator group. The install modes in the CSV of the Operator support the set of namespaces targeted by the Operator group. An install mode in a CSV consists of an InstallModeType field and a boolean Supported field. The spec of a CSV can contain a set of install modes of four distinct InstallModeTypes : Table 2.5. Install modes and supported Operator groups InstallModeType Description OwnNamespace The Operator can be a member of an Operator group that selects its own namespace. SingleNamespace The Operator can be a member of an Operator group that selects one namespace. MultiNamespace The Operator can be a member of an Operator group that selects more than one namespace. AllNamespaces The Operator can be a member of an Operator group that selects all namespaces (target namespace set is the empty string "" ). Note If the spec of a CSV omits an entry of InstallModeType , then that type is considered unsupported unless support can be inferred by an existing entry that implicitly supports it. 2.4.5.3. 
Target namespace selection You can explicitly name the target namespace for an Operator group using the spec.targetNamespaces parameter: apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: my-group namespace: my-namespace spec: targetNamespaces: - my-namespace You can alternatively specify a namespace using a label selector with the spec.selector parameter: apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: my-group namespace: my-namespace spec: selector: cool.io/prod: "true" Important Listing multiple namespaces via spec.targetNamespaces or use of a label selector via spec.selector is not recommended, as the support for more than one target namespace in an Operator group will likely be removed in a future release. If both spec.targetNamespaces and spec.selector are defined, spec.selector is ignored. Alternatively, you can omit both spec.selector and spec.targetNamespaces to specify a global Operator group, which selects all namespaces: apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: my-group namespace: my-namespace The resolved set of selected namespaces is shown in the status.namespaces parameter of an Operator group. The status.namespaces of a global Operator group contains the empty string ( "" ), which signals to a consuming Operator that it should watch all namespaces. 2.4.5.4. Operator group CSV annotations Member CSVs of an Operator group have the following annotations: Annotation Description olm.operatorGroup=<group_name> Contains the name of the Operator group. olm.operatorNamespace=<group_namespace> Contains the namespace of the Operator group. olm.targetNamespaces=<target_namespaces> Contains a comma-delimited string that lists the target namespace selection of the Operator group. Note All annotations except olm.targetNamespaces are included with copied CSVs. Omitting the olm.targetNamespaces annotation on copied CSVs prevents the duplication of target namespaces between tenants. 2.4.5.5. Provided APIs annotation A group/version/kind (GVK) is a unique identifier for a Kubernetes API. Information about what GVKs are provided by an Operator group is shown in an olm.providedAPIs annotation. The value of the annotation is a string consisting of <kind>.<version>.<group> delimited with commas. The GVKs of CRDs and API services provided by all active member CSVs of an Operator group are included. Review the following example of an OperatorGroup object with a single active member CSV that provides the PackageManifest resource: apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: annotations: olm.providedAPIs: PackageManifest.v1alpha1.packages.apps.redhat.com name: olm-operators namespace: local ... spec: selector: {} serviceAccountName: metadata: creationTimestamp: null targetNamespaces: - local status: lastUpdated: 2019-02-19T16:18:28Z namespaces: - local 2.4.5.6. Role-based access control When an Operator group is created, three cluster roles are generated.
Each contains a single aggregation rule with a cluster role selector set to match a label, as shown below: Cluster role Label to match olm.og.<operatorgroup_name>-admin-<hash_value> olm.opgroup.permissions/aggregate-to-admin: <operatorgroup_name> olm.og.<operatorgroup_name>-edit-<hash_value> olm.opgroup.permissions/aggregate-to-edit: <operatorgroup_name> olm.og.<operatorgroup_name>-view-<hash_value> olm.opgroup.permissions/aggregate-to-view: <operatorgroup_name> The following RBAC resources are generated when a CSV becomes an active member of an Operator group, as long as the CSV is watching all namespaces with the AllNamespaces install mode and is not in a failed state with reason InterOperatorGroupOwnerConflict : Cluster roles for each API resource from a CRD Cluster roles for each API resource from an API service Additional roles and role bindings Table 2.6. Cluster roles generated for each API resource from a CRD Cluster role Settings <kind>.<group>-<version>-admin Verbs on <kind> : * Aggregation labels: rbac.authorization.k8s.io/aggregate-to-admin: true olm.opgroup.permissions/aggregate-to-admin: <operatorgroup_name> <kind>.<group>-<version>-edit Verbs on <kind> : create update patch delete Aggregation labels: rbac.authorization.k8s.io/aggregate-to-edit: true olm.opgroup.permissions/aggregate-to-edit: <operatorgroup_name> <kind>.<group>-<version>-view Verbs on <kind> : get list watch Aggregation labels: rbac.authorization.k8s.io/aggregate-to-view: true olm.opgroup.permissions/aggregate-to-view: <operatorgroup_name> <kind>.<group>-<version>-view-crdview Verbs on apiextensions.k8s.io customresourcedefinitions <crd-name> : get Aggregation labels: rbac.authorization.k8s.io/aggregate-to-view: true olm.opgroup.permissions/aggregate-to-view: <operatorgroup_name> Table 2.7. Cluster roles generated for each API resource from an API service Cluster role Settings <kind>.<group>-<version>-admin Verbs on <kind> : * Aggregation labels: rbac.authorization.k8s.io/aggregate-to-admin: true olm.opgroup.permissions/aggregate-to-admin: <operatorgroup_name> <kind>.<group>-<version>-edit Verbs on <kind> : create update patch delete Aggregation labels: rbac.authorization.k8s.io/aggregate-to-edit: true olm.opgroup.permissions/aggregate-to-edit: <operatorgroup_name> <kind>.<group>-<version>-view Verbs on <kind> : get list watch Aggregation labels: rbac.authorization.k8s.io/aggregate-to-view: true olm.opgroup.permissions/aggregate-to-view: <operatorgroup_name> Additional roles and role bindings If the CSV defines exactly one target namespace that contains * , then a cluster role and corresponding cluster role binding are generated for each permission defined in the permissions field of the CSV. All resources generated are given the olm.owner: <csv_name> and olm.owner.namespace: <csv_namespace> labels. If the CSV does not define exactly one target namespace that contains * , then all roles and role bindings in the Operator namespace with the olm.owner: <csv_name> and olm.owner.namespace: <csv_namespace> labels are copied into the target namespace. 2.4.5.7. Copied CSVs OLM creates copies of all active member CSVs of an Operator group in each of the target namespaces of that Operator group. The purpose of a copied CSV is to tell users of a target namespace that a specific Operator is configured to watch resources created there. Copied CSVs have a status reason Copied and are updated to match the status of their source CSV. 
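You can list all CSVs together with their status reason to see which ones are copies. This is a minimal sketch:

oc get csv --all-namespaces -o custom-columns=NAMESPACE:.metadata.namespace,NAME:.metadata.name,REASON:.status.reason

Copied CSVs report Copied in the REASON column, while the source CSV in the Operator's installation namespace reports its own status reason.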
The olm.targetNamespaces annotation is stripped from copied CSVs before they are created on the cluster. Omitting the target namespace selection avoids the duplication of target namespaces between tenants. Copied CSVs are deleted when their source CSV no longer exists or the Operator group that their source CSV belongs to no longer targets the namespace of the copied CSV. Note By default, the disableCopiedCSVs field is disabled. When the disableCopiedCSVs field is enabled, OLM deletes the existing copied CSVs on the cluster. When the field is disabled again, OLM adds the copied CSVs back. Disable the disableCopiedCSVs field: $ cat << EOF | oc apply -f - apiVersion: operators.coreos.com/v1 kind: OLMConfig metadata: name: cluster spec: features: disableCopiedCSVs: false EOF Enable the disableCopiedCSVs field: $ cat << EOF | oc apply -f - apiVersion: operators.coreos.com/v1 kind: OLMConfig metadata: name: cluster spec: features: disableCopiedCSVs: true EOF 2.4.5.8. Static Operator groups An Operator group is static if its spec.staticProvidedAPIs field is set to true . As a result, OLM does not modify the olm.providedAPIs annotation of an Operator group, which means that it can be set in advance. This is useful when a user wants to use an Operator group to prevent resource contention in a set of namespaces but does not have active member CSVs that provide the APIs for those resources. Below is an example of an Operator group that protects Prometheus resources in all namespaces with the something.cool.io/cluster-monitoring: "true" label: apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: cluster-monitoring namespace: cluster-monitoring annotations: olm.providedAPIs: Alertmanager.v1.monitoring.coreos.com,Prometheus.v1.monitoring.coreos.com,PrometheusRule.v1.monitoring.coreos.com,ServiceMonitor.v1.monitoring.coreos.com spec: staticProvidedAPIs: true selector: matchLabels: something.cool.io/cluster-monitoring: "true" 2.4.5.9. Operator group intersection Two Operator groups are said to have intersecting provided APIs if the intersection of their target namespace sets is not an empty set and the intersection of their provided API sets, defined by olm.providedAPIs annotations, is not an empty set. A potential issue is that Operator groups with intersecting provided APIs can compete for the same resources in the set of intersecting namespaces. Note When checking intersection rules, an Operator group namespace is always included as part of its selected target namespaces. Rules for intersection Each time an active member CSV synchronizes, OLM queries the cluster for the set of intersecting provided APIs between the Operator group of the CSV and all others. OLM then checks if that set is an empty set: If true and the CSV's provided APIs are a subset of the Operator group's: Continue transitioning. If true and the CSV's provided APIs are not a subset of the Operator group's: If the Operator group is static: Clean up any deployments that belong to the CSV. Transition the CSV to a failed state with status reason CannotModifyStaticOperatorGroupProvidedAPIs . If the Operator group is not static: Replace the Operator group's olm.providedAPIs annotation with the union of itself and the CSV's provided APIs. If false and the CSV's provided APIs are not a subset of the Operator group's: Clean up any deployments that belong to the CSV. Transition the CSV to a failed state with status reason InterOperatorGroupOwnerConflict .
If false and the CSV's provided APIs are a subset of the Operator group's: If the Operator group is static: Clean up any deployments that belong to the CSV. Transition the CSV to a failed state with status reason CannotModifyStaticOperatorGroupProvidedAPIs . If the Operator group is not static: Replace the Operator group's olm.providedAPIs annotation with the difference between itself and the CSV's provided APIs. Note Failure states caused by Operator groups are non-terminal. The following actions are performed each time an Operator group synchronizes: The set of provided APIs from active member CSVs is calculated from the cluster. Note that copied CSVs are ignored. The cluster set is compared to olm.providedAPIs , and if olm.providedAPIs contains any extra APIs, then those APIs are pruned. All CSVs that provide the same APIs across all namespaces are requeued. This notifies conflicting CSVs in intersecting groups that their conflict has possibly been resolved, either through resizing or through deletion of the conflicting CSV. 2.4.5.10. Limitations for multitenant Operator management OpenShift Container Platform provides limited support for simultaneously installing different versions of an Operator on the same cluster. Operator Lifecycle Manager (OLM) installs Operators multiple times in different namespaces. One constraint of this is that the Operator's API versions must be the same. Operators are control plane extensions due to their usage of CustomResourceDefinition objects (CRDs), which are global resources in Kubernetes. Different major versions of an Operator often have incompatible CRDs. This makes them incompatible to install simultaneously in different namespaces on a cluster. All tenants, or namespaces, share the same control plane of a cluster. Therefore, tenants in a multitenant cluster also share global CRDs, which limits the scenarios in which different instances of the same Operator can be used in parallel on the same cluster. The supported scenarios include the following: Operators of different versions that ship the exact same CRD definition (in case of versioned CRDs, the exact same set of versions) Operators of different versions that do not ship a CRD, and instead have their CRD available in a separate bundle on the OperatorHub All other scenarios are not supported, because the integrity of the cluster data cannot be guaranteed if there are multiple competing or overlapping CRDs from different Operator versions to be reconciled on the same cluster. Additional resources Operator Lifecycle Manager (OLM) Multitenancy and Operator colocation Operators in multitenant clusters Allowing non-cluster administrators to install Operators 2.4.5.11. Troubleshooting Operator groups Membership An install plan's namespace must contain only one Operator group. When attempting to generate a cluster service version (CSV) in a namespace, an install plan considers an Operator group invalid in the following scenarios: No Operator groups exist in the install plan's namespace. Multiple Operator groups exist in the install plan's namespace. An incorrect or non-existent service account name is specified in the Operator group. If an install plan encounters an invalid Operator group, the CSV is not generated and the InstallPlan resource continues to install with a relevant message. 
For example, the following message is provided if more than one Operator group exists in the same namespace: attenuated service account query failed - more than one operator group(s) are managing this namespace count=2 where count= specifies the number of Operator groups in the namespace. If the install modes of a CSV do not support the target namespace selection of the Operator group in its namespace, the CSV transitions to a failure state with the reason UnsupportedOperatorGroup . CSVs in a failed state for this reason transition to pending after either the target namespace selection of the Operator group changes to a supported configuration, or the install modes of the CSV are modified to support the target namespace selection. 2.4.6. Multitenancy and Operator colocation This guide outlines multitenancy and Operator colocation in Operator Lifecycle Manager (OLM). 2.4.6.1. Colocation of Operators in a namespace Operator Lifecycle Manager (OLM) handles OLM-managed Operators that are installed in the same namespace, meaning their Subscription resources are colocated in the same namespace, as related Operators. Even if they are not actually related, OLM considers their states, such as their version and update policy, when any one of them is updated. This default behavior manifests in two ways: InstallPlan resources of pending updates include ClusterServiceVersion (CSV) resources of all other Operators that are in the same namespace. All Operators in the same namespace share the same update policy. For example, if one Operator is set to manual updates, all other Operators' update policies are also set to manual. These scenarios can lead to the following issues: It becomes hard to reason about install plans for Operator updates, because there are many more resources defined in them than just the updated Operator. It becomes impossible to have some Operators in a namespace update automatically while others are updated manually, which is a common desire for cluster administrators. These issues usually surface because, when installing Operators with the OpenShift Container Platform web console, the default behavior installs Operators that support the All namespaces install mode into the default openshift-operators global namespace. As a cluster administrator, you can bypass this default behavior manually by using the following workflow: Create a namespace for the installation of the Operator. Create a custom global Operator group , which is an Operator group that watches all namespaces. Associating this Operator group with the namespace you just created makes the installation namespace a global namespace, which makes Operators installed there available in all namespaces. Install the desired Operator in the installation namespace. If the Operator has dependencies, the dependencies are automatically installed in the pre-created namespace. As a result, it is then valid for the dependency Operators to have the same update policy and shared install plans. For a detailed procedure, see "Installing global Operators in custom namespaces". Additional resources Installing global Operators in custom namespaces Operators in multitenant clusters 2.4.7. Operator conditions This guide outlines how Operator Lifecycle Manager (OLM) uses Operator conditions. 2.4.7.1. About Operator conditions As part of its role in managing the lifecycle of an Operator, Operator Lifecycle Manager (OLM) infers the state of an Operator from the state of Kubernetes resources that define the Operator.
While this approach provides some level of assurance that an Operator is in a given state, there are many instances where an Operator might need to communicate information to OLM that could not be inferred otherwise. This information can then be used by OLM to better manage the lifecycle of the Operator. OLM provides a custom resource definition (CRD) called OperatorCondition that allows Operators to communicate conditions to OLM. There are a set of supported conditions that influence management of the Operator by OLM when present in the Spec.Conditions array of an OperatorCondition resource. Note By default, the Spec.Conditions array is not present in an OperatorCondition object until it is either added by a user or as a result of custom Operator logic. 2.4.7.2. Supported conditions Operator Lifecycle Manager (OLM) supports the following Operator conditions. 2.4.7.2.1. Upgradeable condition The Upgradeable Operator condition prevents an existing cluster service version (CSV) from being replaced by a newer version of the CSV. This condition is useful when: An Operator is about to start a critical process and should not be upgraded until the process is completed. An Operator is performing a migration of custom resources (CRs) that must be completed before the Operator is ready to be upgraded. Important Setting the Upgradeable Operator condition to the False value does not avoid pod disruption. If you must ensure your pods are not disrupted, see "Using pod disruption budgets to specify the number of pods that must be up" and "Graceful termination" in the "Additional resources" section. Example Upgradeable Operator condition apiVersion: operators.coreos.com/v1 kind: OperatorCondition metadata: name: my-operator namespace: operators spec: conditions: - type: Upgradeable 1 status: "False" 2 reason: "migration" message: "The Operator is performing a migration." lastTransitionTime: "2020-08-24T23:15:55Z" 1 Name of the condition. 2 A False value indicates the Operator is not ready to be upgraded. OLM prevents a CSV that replaces the existing CSV of the Operator from leaving the Pending phase. A False value does not block cluster upgrades. 2.4.7.3. Additional resources Managing Operator conditions Enabling Operator conditions Using pod disruption budgets to specify the number of pods that must be up Graceful termination 2.4.8. Operator Lifecycle Manager metrics 2.4.8.1. Exposed metrics Operator Lifecycle Manager (OLM) exposes certain OLM-specific resources for use by the Prometheus-based OpenShift Container Platform cluster monitoring stack. Table 2.8. Metrics exposed by OLM Name Description catalog_source_count Number of catalog sources. catalogsource_ready State of a catalog source. The value 1 indicates that the catalog source is in a READY state. The value of 0 indicates that the catalog source is not in a READY state. csv_abnormal When reconciling a cluster service version (CSV), present whenever a CSV version is in any state other than Succeeded , for example when it is not installed. Includes the name , namespace , phase , reason , and version labels. A Prometheus alert is created when this metric is present. csv_count Number of CSVs successfully registered. csv_succeeded When reconciling a CSV, represents whether a CSV version is in a Succeeded state (value 1 ) or not (value 0 ). Includes the name , namespace , and version labels. csv_upgrade_count Monotonic count of CSV upgrades. install_plan_count Number of install plans. 
installplan_warnings_total Monotonic count of warnings generated by resources, such as deprecated resources, included in an install plan. olm_resolution_duration_seconds The duration of a dependency resolution attempt. subscription_count Number of subscriptions. subscription_sync_total Monotonic count of subscription syncs. Includes the channel , installed CSV, and subscription name labels. 2.4.9. Webhook management in Operator Lifecycle Manager Webhooks allow Operator authors to intercept, modify, and accept or reject resources before they are saved to the object store and handled by the Operator controller. Operator Lifecycle Manager (OLM) can manage the lifecycle of these webhooks when they are shipped alongside your Operator. See Defining cluster service versions (CSVs) for details on how an Operator developer can define webhooks for their Operator, as well as considerations when running on OLM. 2.4.9.1. Additional resources Types of webhook admission plugins Kubernetes documentation: Validating admission webhooks Mutating admission webhooks Conversion webhooks 2.5. Understanding OperatorHub 2.5.1. About OperatorHub OperatorHub is the web console interface in OpenShift Container Platform that cluster administrators use to discover and install Operators. With one click, an Operator can be pulled from its off-cluster source, installed and subscribed on the cluster, and made ready for engineering teams to self-service manage the product across deployment environments using Operator Lifecycle Manager (OLM). Cluster administrators can choose from catalogs grouped into the following categories: Category Description Red Hat Operators Red Hat products packaged and shipped by Red Hat. Supported by Red Hat. Certified Operators Products from leading independent software vendors (ISVs). Red Hat partners with ISVs to package and ship. Supported by the ISV. Red Hat Marketplace Certified software that can be purchased from Red Hat Marketplace . Community Operators Optionally-visible software maintained by relevant representatives in the redhat-openshift-ecosystem/community-operators-prod/operators GitHub repository. No official support. Custom Operators Operators you add to the cluster yourself. If you have not added any custom Operators, the Custom category does not appear in the web console on your OperatorHub. Operators on OperatorHub are packaged to run on OLM. This includes a YAML file called a cluster service version (CSV) containing all of the CRDs, RBAC rules, deployments, and container images required to install and securely run the Operator. It also contains user-visible information like a description of its features and supported Kubernetes versions. The Operator SDK can be used to assist developers packaging their Operators for use on OLM and OperatorHub. If you have a commercial application that you want to make accessible to your customers, get it included using the certification workflow provided on the Red Hat Partner Connect portal at connect.redhat.com . 2.5.2. OperatorHub architecture The OperatorHub UI component is driven by the Marketplace Operator by default on OpenShift Container Platform in the openshift-marketplace namespace. 2.5.2.1. OperatorHub custom resource The Marketplace Operator manages an OperatorHub custom resource (CR) named cluster that manages the default CatalogSource objects provided with OperatorHub. 
You can modify this resource to enable or disable the default catalogs, which is useful when configuring OpenShift Container Platform in restricted network environments. Example OperatorHub custom resource apiVersion: config.openshift.io/v1 kind: OperatorHub metadata: name: cluster spec: disableAllDefaultSources: true 1 sources: [ 2 { name: "community-operators", disabled: false } ] 1 disableAllDefaultSources is an override that controls availability of all default catalogs that are configured by default during an OpenShift Container Platform installation. 2 Disable default catalogs individually by changing the disabled parameter value per source. 2.5.3. Additional resources Catalog source About the Operator SDK Defining cluster service versions (CSVs) Operator installation and upgrade workflow in OLM Red Hat Partner Connect Red Hat Marketplace 2.6. Red Hat-provided Operator catalogs Red Hat provides several Operator catalogs that are included with OpenShift Container Platform by default. Important As of OpenShift Container Platform 4.11, the default Red Hat-provided Operator catalog releases in the file-based catalog format. The default Red Hat-provided Operator catalogs for OpenShift Container Platform 4.6 through 4.10 released in the deprecated SQLite database format. The opm subcommands, flags, and functionality related to the SQLite database format are also deprecated and will be removed in a future release. The features are still supported and must be used for catalogs that use the deprecated SQLite database format. Many of the opm subcommands and flags for working with the SQLite database format, such as opm index prune , do not work with the file-based catalog format. For more information about working with file-based catalogs, see Managing custom catalogs , Operator Framework packaging format , and Mirroring images for a disconnected installation using the oc-mirror plugin . 2.6.1. About Operator catalogs An Operator catalog is a repository of metadata that Operator Lifecycle Manager (OLM) can query to discover and install Operators and their dependencies on a cluster. OLM always installs Operators from the latest version of a catalog. An index image, based on the Operator bundle format, is a containerized snapshot of a catalog. It is an immutable artifact that contains the database of pointers to a set of Operator manifest content. A catalog can reference an index image to source its content for OLM on the cluster. As catalogs are updated, the latest versions of Operators change, and older versions may be removed or altered. In addition, when OLM runs on an OpenShift Container Platform cluster in a restricted network environment, it is unable to access the catalogs directly from the internet to pull the latest content. As a cluster administrator, you can create your own custom index image, either based on a Red Hat-provided catalog or from scratch, which can be used to source the catalog content on the cluster. Creating and updating your own index image provides a method for customizing the set of Operators available on the cluster, while also avoiding the aforementioned restricted network environment issues. Important Kubernetes periodically deprecates certain APIs that are removed in subsequent releases. As a result, Operators are unable to use removed APIs starting with the version of OpenShift Container Platform that uses the Kubernetes version that removed the API. 
If your cluster is using custom catalogs, see Controlling Operator compatibility with OpenShift Container Platform versions for more details about how Operator authors can update their projects to help avoid workload issues and prevent incompatible upgrades. Note Support for the legacy package manifest format for Operators, including custom catalogs that were using the legacy format, is removed in OpenShift Container Platform 4.8 and later. When creating custom catalog images, versions of OpenShift Container Platform 4 required using the oc adm catalog build command, which was deprecated for several releases and is now removed. With the availability of Red Hat-provided index images starting in OpenShift Container Platform 4.6, catalog builders must use the opm index command to manage index images. Additional resources Managing custom catalogs Packaging format Using Operator Lifecycle Manager on restricted networks 2.6.2. About Red Hat-provided Operator catalogs The Red Hat-provided catalog sources are installed by default in the openshift-marketplace namespace, which makes the catalogs available cluster-wide in all namespaces. The following Operator catalogs are distributed by Red Hat: Catalog Index image Description redhat-operators registry.redhat.io/redhat/redhat-operator-index:v4.15 Red Hat products packaged and shipped by Red Hat. Supported by Red Hat. certified-operators registry.redhat.io/redhat/certified-operator-index:v4.15 Products from leading independent software vendors (ISVs). Red Hat partners with ISVs to package and ship. Supported by the ISV. redhat-marketplace registry.redhat.io/redhat/redhat-marketplace-index:v4.15 Certified software that can be purchased from Red Hat Marketplace . community-operators registry.redhat.io/redhat/community-operator-index:v4.15 Software maintained by relevant representatives in the redhat-openshift-ecosystem/community-operators-prod/operators GitHub repository. No official support. During a cluster upgrade, the index image tag for the default Red Hat-provided catalog sources are updated automatically by the Cluster Version Operator (CVO) so that Operator Lifecycle Manager (OLM) pulls the updated version of the catalog. For example during an upgrade from OpenShift Container Platform 4.8 to 4.9, the spec.image field in the CatalogSource object for the redhat-operators catalog is updated from: registry.redhat.io/redhat/redhat-operator-index:v4.8 to: registry.redhat.io/redhat/redhat-operator-index:v4.9 2.7. Operators in multitenant clusters The default behavior for Operator Lifecycle Manager (OLM) aims to provide simplicity during Operator installation. However, this behavior can lack flexibility, especially in multitenant clusters. In order for multiple tenants on a OpenShift Container Platform cluster to use an Operator, the default behavior of OLM requires that administrators install the Operator in All namespaces mode, which can be considered to violate the principle of least privilege. Consider the following scenarios to determine which Operator installation workflow works best for your environment and requirements. Additional resources Common terms: Multitenant Limitations for multitenant Operator management 2.7.1. 
Default Operator install modes and behavior When installing Operators with the web console as an administrator, you typically have two choices for the install mode, depending on the Operator's capabilities: Single namespace Installs the Operator in the chosen single namespace, and makes all permissions that the Operator requests available in that namespace. All namespaces Installs the Operator in the default openshift-operators namespace to watch and be made available to all namespaces in the cluster. Makes all permissions that the Operator requests available in all namespaces. In some cases, an Operator author can define metadata to give the user a second option for that Operator's suggested namespace. This choice also means that users in the affected namespaces get access to the Operators APIs, which can leverage the custom resources (CRs) they own, depending on their role in the namespace: The namespace-admin and namespace-edit roles can read/write to the Operator APIs, meaning they can use them. The namespace-view role can read CR objects of that Operator. For Single namespace mode, because the Operator itself installs in the chosen namespace, its pod and service account are also located there. For All namespaces mode, the Operator's privileges are all automatically elevated to cluster roles, meaning the Operator has those permissions in all namespaces. Additional resources Adding Operators to a cluster Install modes types Setting a suggested namespace 2.7.2. Recommended solution for multitenant clusters While a Multinamespace install mode does exist, it is supported by very few Operators. As a middle ground solution between the standard All namespaces and Single namespace install modes, you can install multiple instances of the same Operator, one for each tenant, by using the following workflow: Create a namespace for the tenant Operator that is separate from the tenant's namespace. Create an Operator group for the tenant Operator scoped only to the tenant's namespace. Install the Operator in the tenant Operator namespace. As a result, the Operator resides in the tenant Operator namespace and watches the tenant namespace, but neither the Operator's pod nor its service account are visible or usable by the tenant. This solution provides better tenant separation, least privilege principle at the cost of resource usage, and additional orchestration to ensure the constraints are met. For a detailed procedure, see "Preparing for multiple instances of an Operator for multitenant clusters". Limitations and considerations This solution only works when the following constraints are met: All instances of the same Operator must be the same version. The Operator cannot have dependencies on other Operators. The Operator cannot ship a CRD conversion webhook. Important You cannot use different versions of the same Operator on the same cluster. Eventually, the installation of another instance of the Operator would be blocked when it meets the following conditions: The instance is not the newest version of the Operator. The instance ships an older revision of the CRDs that lack information or versions that newer revisions have that are already in use on the cluster. Warning As an administrator, use caution when allowing non-cluster administrators to install Operators self-sufficiently, as explained in "Allowing non-cluster administrators to install Operators". These tenants should only have access to a curated catalog of Operators that are known to not have dependencies. 
These tenants must also be forced to use the same version line of an Operator, to ensure the CRDs do not change. This requires the use of namespace-scoped catalogs and likely disabling the global default catalogs. Additional resources Preparing for multiple instances of an Operator for multitenant clusters Allowing non-cluster administrators to install Operators Disabling the default OperatorHub catalog sources 2.7.3. Operator colocation and Operator groups Operator Lifecycle Manager (OLM) handles OLM-managed Operators that are installed in the same namespace, meaning their Subscription resources are colocated in the same namespace, as related Operators. Even if they are not actually related, OLM considers their states, such as their version and update policy, when any one of them is updated. For more information on Operator colocation and using Operator groups effectively, see Operator Lifecycle Manager (OLM) Multitenancy and Operator colocation . 2.8. CRDs 2.8.1. Extending the Kubernetes API with custom resource definitions Operators use the Kubernetes extension mechanism, custom resource definitions (CRDs), so that custom objects managed by the Operator look and act just like the built-in, native Kubernetes objects. This guide describes how cluster administrators can extend their OpenShift Container Platform cluster by creating and managing CRDs. 2.8.1.1. Custom resource definitions In the Kubernetes API, a resource is an endpoint that stores a collection of API objects of a certain kind. For example, the built-in Pods resource contains a collection of Pod objects. A custom resource definition (CRD) object defines a new, unique object type, called a kind , in the cluster and lets the Kubernetes API server handle its entire lifecycle. Custom resource (CR) objects are created from CRDs that have been added to the cluster by a cluster administrator, allowing all cluster users to add the new resource type into projects. When a cluster administrator adds a new CRD to the cluster, the Kubernetes API server reacts by creating a new RESTful resource path that can be accessed by the entire cluster or a single project (namespace) and begins serving the specified CR. Cluster administrators that want to grant access to the CRD to other users can use cluster role aggregation to grant access to users with the admin , edit , or view default cluster roles. Cluster role aggregation allows the insertion of custom policy rules into these cluster roles. This behavior integrates the new resource into the RBAC policy of the cluster as if it was a built-in resource. Operators in particular make use of CRDs by packaging them with any required RBAC policy and other software-specific logic. Cluster administrators can also add CRDs manually to the cluster outside of the lifecycle of an Operator, making them available to all users. Note While only cluster administrators can create CRDs, developers can create the CR from an existing CRD if they have read and write permission to it. 2.8.1.2. Creating a custom resource definition To create custom resource (CR) objects, cluster administrators must first create a custom resource definition (CRD). Prerequisites Access to an OpenShift Container Platform cluster with cluster-admin user privileges. 
Procedure To create a CRD: Create a YAML file that contains the following field types: Example YAML file for a CRD apiVersion: apiextensions.k8s.io/v1 1 kind: CustomResourceDefinition metadata: name: crontabs.stable.example.com 2 spec: group: stable.example.com 3 versions: - name: v1 4 served: true storage: true schema: openAPIV3Schema: type: object properties: spec: type: object properties: cronSpec: type: string image: type: string replicas: type: integer scope: Namespaced 5 names: plural: crontabs 6 singular: crontab 7 kind: CronTab 8 shortNames: - ct 9 1 Use the apiextensions.k8s.io/v1 API. 2 Specify a name for the definition. This must be in the <plural-name>.<group> format using the values from the group and plural fields. 3 Specify a group name for the API. An API group is a collection of objects that are logically related. For example, all batch objects like Job or ScheduledJob could be in the batch API group (such as batch.api.example.com ). A good practice is to use a fully-qualified-domain name (FQDN) of your organization. 4 Specify a version name to be used in the URL. Each API group can exist in multiple versions, for example v1alpha , v1beta , v1 . 5 Specify whether the custom objects are available to a project ( Namespaced ) or all projects in the cluster ( Cluster ). 6 Specify the plural name to use in the URL. The plural field is the same as a resource in an API URL. 7 Specify a singular name to use as an alias on the CLI and for display. 8 Specify the kind of objects that can be created. The type can be in CamelCase. 9 Specify a shorter string to match your resource on the CLI. Note By default, a CRD is cluster-scoped and available to all projects. Create the CRD object: USD oc create -f <file_name>.yaml A new RESTful API endpoint is created at: /apis/<spec:group>/<spec:version>/<scope>/*/<names-plural>/... For example, using the example file, the following endpoint is created: /apis/stable.example.com/v1/namespaces/*/crontabs/... You can now use this endpoint URL to create and manage CRs. The object kind is based on the spec.kind field of the CRD object you created. 2.8.1.3. Creating cluster roles for custom resource definitions Cluster administrators can grant permissions to existing cluster-scoped custom resource definitions (CRDs). If you use the admin , edit , and view default cluster roles, you can take advantage of cluster role aggregation for their rules. Important You must explicitly assign permissions to each of these roles. The roles with more permissions do not inherit rules from roles with fewer permissions. If you assign a rule to a role, you must also assign that verb to roles that have more permissions. For example, if you grant the get crontabs permission to the view role, you must also grant it to the edit and admin roles. The admin or edit role is usually assigned to the user that created a project through the project template. Prerequisites Create a CRD. Procedure Create a cluster role definition file for the CRD. The cluster role definition is a YAML file that contains the rules that apply to each cluster role. An OpenShift Container Platform controller adds the rules that you specify to the default cluster roles. 
Example YAML file for a cluster role definition kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 1 metadata: name: aggregate-cron-tabs-admin-edit 2 labels: rbac.authorization.k8s.io/aggregate-to-admin: "true" 3 rbac.authorization.k8s.io/aggregate-to-edit: "true" 4 rules: - apiGroups: ["stable.example.com"] 5 resources: ["crontabs"] 6 verbs: ["get", "list", "watch", "create", "update", "patch", "delete", "deletecollection"] 7 --- kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: aggregate-cron-tabs-view 8 labels: # Add these permissions to the "view" default role. rbac.authorization.k8s.io/aggregate-to-view: "true" 9 rbac.authorization.k8s.io/aggregate-to-cluster-reader: "true" 10 rules: - apiGroups: ["stable.example.com"] 11 resources: ["crontabs"] 12 verbs: ["get", "list", "watch"] 13 1 Use the rbac.authorization.k8s.io/v1 API. 2 8 Specify a name for the definition. 3 Specify this label to grant permissions to the admin default role. 4 Specify this label to grant permissions to the edit default role. 5 11 Specify the group name of the CRD. 6 12 Specify the plural name of the CRD that these rules apply to. 7 13 Specify the verbs that represent the permissions that are granted to the role. For example, apply read and write permissions to the admin and edit roles and only read permission to the view role. 9 Specify this label to grant permissions to the view default role. 10 Specify this label to grant permissions to the cluster-reader default role. Create the cluster role: USD oc create -f <file_name>.yaml 2.8.1.4. Creating custom resources from a file After a custom resource definition (CRD) has been added to the cluster, custom resources (CRs) can be created with the CLI from a file using the CR specification. Prerequisites CRD added to the cluster by a cluster administrator. Procedure Create a YAML file for the CR. In the following example definition, the cronSpec and image custom fields are set in a CR of Kind: CronTab . The Kind comes from the spec.kind field of the CRD object: Example YAML file for a CR apiVersion: "stable.example.com/v1" 1 kind: CronTab 2 metadata: name: my-new-cron-object 3 finalizers: 4 - finalizer.stable.example.com spec: 5 cronSpec: "* * * * /5" image: my-awesome-cron-image 1 Specify the group name and API version (name/version) from the CRD. 2 Specify the type in the CRD. 3 Specify a name for the object. 4 Specify the finalizers for the object, if any. Finalizers allow controllers to implement conditions that must be completed before the object can be deleted. 5 Specify conditions specific to the type of object. After you create the file, create the object: USD oc create -f <file_name>.yaml 2.8.1.5. Inspecting custom resources You can inspect custom resource (CR) objects that exist in your cluster using the CLI. Prerequisites A CR object exists in a namespace to which you have access. Procedure To get information on a specific kind of a CR, run: USD oc get <kind> For example: USD oc get crontab Example output NAME KIND my-new-cron-object CronTab.v1.stable.example.com Resource names are not case-sensitive, and you can use either the singular or plural forms defined in the CRD, as well as any short name. 
For example: USD oc get crontabs USD oc get crontab USD oc get ct You can also view the raw YAML data for a CR: USD oc get <kind> -o yaml For example: USD oc get ct -o yaml Example output apiVersion: v1 items: - apiVersion: stable.example.com/v1 kind: CronTab metadata: clusterName: "" creationTimestamp: 2017-05-31T12:56:35Z deletionGracePeriodSeconds: null deletionTimestamp: null name: my-new-cron-object namespace: default resourceVersion: "285" selfLink: /apis/stable.example.com/v1/namespaces/default/crontabs/my-new-cron-object uid: 9423255b-4600-11e7-af6a-28d2447dc82b spec: cronSpec: '* * * * /5' 1 image: my-awesome-cron-image 2 1 2 Custom data from the YAML that you used to create the object displays. 2.8.2. Managing resources from custom resource definitions This guide describes how developers can manage custom resources (CRs) that come from custom resource definitions (CRDs). 2.8.2.1. Custom resource definitions In the Kubernetes API, a resource is an endpoint that stores a collection of API objects of a certain kind. For example, the built-in Pods resource contains a collection of Pod objects. A custom resource definition (CRD) object defines a new, unique object type, called a kind , in the cluster and lets the Kubernetes API server handle its entire lifecycle. Custom resource (CR) objects are created from CRDs that have been added to the cluster by a cluster administrator, allowing all cluster users to add the new resource type into projects. Operators in particular make use of CRDs by packaging them with any required RBAC policy and other software-specific logic. Cluster administrators can also add CRDs manually to the cluster outside of the lifecycle of an Operator, making them available to all users. Note While only cluster administrators can create CRDs, developers can create the CR from an existing CRD if they have read and write permission to it. 2.8.2.2. Creating custom resources from a file After a custom resource definition (CRD) has been added to the cluster, custom resources (CRs) can be created with the CLI from a file using the CR specification. Prerequisites CRD added to the cluster by a cluster administrator. Procedure Create a YAML file for the CR. In the following example definition, the cronSpec and image custom fields are set in a CR of Kind: CronTab . The Kind comes from the spec.kind field of the CRD object: Example YAML file for a CR apiVersion: "stable.example.com/v1" 1 kind: CronTab 2 metadata: name: my-new-cron-object 3 finalizers: 4 - finalizer.stable.example.com spec: 5 cronSpec: "* * * * /5" image: my-awesome-cron-image 1 Specify the group name and API version (name/version) from the CRD. 2 Specify the type in the CRD. 3 Specify a name for the object. 4 Specify the finalizers for the object, if any. Finalizers allow controllers to implement conditions that must be completed before the object can be deleted. 5 Specify conditions specific to the type of object. After you create the file, create the object: USD oc create -f <file_name>.yaml 2.8.2.3. Inspecting custom resources You can inspect custom resource (CR) objects that exist in your cluster using the CLI. Prerequisites A CR object exists in a namespace to which you have access. Procedure To get information on a specific kind of a CR, run: USD oc get <kind> For example: USD oc get crontab Example output NAME KIND my-new-cron-object CronTab.v1.stable.example.com Resource names are not case-sensitive, and you can use either the singular or plural forms defined in the CRD, as well as any short name. 
For example: USD oc get crontabs USD oc get crontab USD oc get ct You can also view the raw YAML data for a CR: USD oc get <kind> -o yaml For example: USD oc get ct -o yaml Example output apiVersion: v1 items: - apiVersion: stable.example.com/v1 kind: CronTab metadata: clusterName: "" creationTimestamp: 2017-05-31T12:56:35Z deletionGracePeriodSeconds: null deletionTimestamp: null name: my-new-cron-object namespace: default resourceVersion: "285" selfLink: /apis/stable.example.com/v1/namespaces/default/crontabs/my-new-cron-object uid: 9423255b-4600-11e7-af6a-28d2447dc82b spec: cronSpec: '* * * * /5' 1 image: my-awesome-cron-image 2 1 2 Custom data from the YAML that you used to create the object displays. | [
"etcd βββ manifests β βββ etcdcluster.crd.yaml β βββ etcdoperator.clusterserviceversion.yaml β βββ secret.yaml β βββ configmap.yaml βββ metadata βββ annotations.yaml βββ dependencies.yaml",
"annotations: operators.operatorframework.io.bundle.mediatype.v1: \"registry+v1\" 1 operators.operatorframework.io.bundle.manifests.v1: \"manifests/\" 2 operators.operatorframework.io.bundle.metadata.v1: \"metadata/\" 3 operators.operatorframework.io.bundle.package.v1: \"test-operator\" 4 operators.operatorframework.io.bundle.channels.v1: \"beta,stable\" 5 operators.operatorframework.io.bundle.channel.default.v1: \"stable\" 6",
"dependencies: - type: olm.package value: packageName: prometheus version: \">0.27.0\" - type: olm.gvk value: group: etcd.database.coreos.com kind: EtcdCluster version: v1beta2",
"Ignore everything except non-object .json and .yaml files **/* !*.json !*.yaml **/objects/*.json **/objects/*.yaml",
"catalog βββ packageA β βββ index.yaml βββ packageB β βββ .indexignore β βββ index.yaml β βββ objects β βββ packageB.v0.1.0.clusterserviceversion.yaml βββ packageC βββ index.json βββ deprecations.yaml",
"_Meta: { // schema is required and must be a non-empty string schema: string & !=\"\" // package is optional, but if it's defined, it must be a non-empty string package?: string & !=\"\" // properties is optional, but if it's defined, it must be a list of 0 or more properties properties?: [... #Property] } #Property: { // type is required type: string & !=\"\" // value is required, and it must not be null value: !=null }",
"#Package: { schema: \"olm.package\" // Package name name: string & !=\"\" // A description of the package description?: string // The package's default channel defaultChannel: string & !=\"\" // An optional icon icon?: { base64data: string mediatype: string } }",
"#Channel: { schema: \"olm.channel\" package: string & !=\"\" name: string & !=\"\" entries: [...#ChannelEntry] } #ChannelEntry: { // name is required. It is the name of an `olm.bundle` that // is present in the channel. name: string & !=\"\" // replaces is optional. It is the name of bundle that is replaced // by this entry. It does not have to be present in the entry list. replaces?: string & !=\"\" // skips is optional. It is a list of bundle names that are skipped by // this entry. The skipped bundles do not have to be present in the // entry list. skips?: [...string & !=\"\"] // skipRange is optional. It is the semver range of bundle versions // that are skipped by this entry. skipRange?: string & !=\"\" }",
"#Bundle: { schema: \"olm.bundle\" package: string & !=\"\" name: string & !=\"\" image: string & !=\"\" properties: [...#Property] relatedImages?: [...#RelatedImage] } #Property: { // type is required type: string & !=\"\" // value is required, and it must not be null value: !=null } #RelatedImage: { // image is the image reference image: string & !=\"\" // name is an optional descriptive name for an image that // helps identify its purpose in the context of the bundle name?: string & !=\"\" }",
"schema: olm.deprecations package: my-operator 1 entries: - reference: schema: olm.package 2 message: | 3 The 'my-operator' package is end of life. Please use the 'my-operator-new' package for support. - reference: schema: olm.channel name: alpha 4 message: | The 'alpha' channel is no longer supported. Please switch to the 'stable' channel. - reference: schema: olm.bundle name: my-operator.v1.68.0 5 message: | my-operator.v1.68.0 is deprecated. Uninstall my-operator.v1.68.0 and install my-operator.v1.72.0 for support.",
"my-catalog βββ my-operator βββ index.yaml βββ deprecations.yaml",
"#PropertyPackage: { type: \"olm.package\" value: { packageName: string & !=\"\" version: string & !=\"\" } }",
"#PropertyGVK: { type: \"olm.gvk\" value: { group: string & !=\"\" version: string & !=\"\" kind: string & !=\"\" } }",
"#PropertyPackageRequired: { type: \"olm.package.required\" value: { packageName: string & !=\"\" versionRange: string & !=\"\" } }",
"#PropertyGVKRequired: { type: \"olm.gvk.required\" value: { group: string & !=\"\" version: string & !=\"\" kind: string & !=\"\" } }",
"name: community-operators repo: quay.io/community-operators/catalog tag: latest references: - name: etcd-operator image: quay.io/etcd-operator/index@sha256:5891b5b522d5df086d0ff0b110fbd9d21bb4fc7163af34d08286a2e846f6be03 - name: prometheus-operator image: quay.io/prometheus-operator/index@sha256:e258d248fda94c63753607f7c4494ee0fcbe92f1a76bfdac795c9d84101eb317",
"name=USD(yq eval '.name' catalog.yaml) mkdir \"USDname\" yq eval '.name + \"/\" + .references[].name' catalog.yaml | xargs mkdir for l in USD(yq e '.name as USDcatalog | .references[] | .image + \"|\" + USDcatalog + \"/\" + .name + \"/index.yaml\"' catalog.yaml); do image=USD(echo USDl | cut -d'|' -f1) file=USD(echo USDl | cut -d'|' -f2) opm render \"USDimage\" > \"USDfile\" done opm generate dockerfile \"USDname\" indexImage=USD(yq eval '.repo + \":\" + .tag' catalog.yaml) docker build -t \"USDindexImage\" -f \"USDname.Dockerfile\" . docker push \"USDindexImage\"",
"apiVersion: core.rukpak.io/v1alpha1 kind: Bundle metadata: name: my-bundle spec: source: type: image image: ref: my-bundle@sha256:xyz123 provisionerClassName: core-rukpak-io-plain",
"oc apply -f -<<EOF apiVersion: core.rukpak.io/v1alpha1 kind: Bundle metadata: name: combo-tag-ref spec: source: type: git git: ref: tag: v0.0.2 repository: https://github.com/operator-framework/combo provisionerClassName: core-rukpak-io-plain EOF",
"bundle.core.rukpak.io/combo-tag-ref created",
"oc patch bundle combo-tag-ref --type='merge' -p '{\"spec\":{\"source\":{\"git\":{\"ref\":{\"tag\":\"v0.0.3\"}}}}}'",
"Error from server (bundle.spec is immutable): admission webhook \"vbundles.core.rukpak.io\" denied the request: bundle.spec is immutable",
"tree manifests manifests βββ namespace.yaml βββ service_account.yaml βββ cluster_role.yaml βββ cluster_role_binding.yaml βββ deployment.yaml",
"apiVersion: core.rukpak.io/v1alpha1 kind: BundleDeployment metadata: name: my-bundle-deployment spec: provisionerClassName: core-rukpak-io-plain template: metadata: labels: app: my-bundle spec: source: type: image image: ref: my-bundle@sha256:xyz123 provisionerClassName: core-rukpak-io-plain",
"\\ufeffapiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: generation: 1 name: example-catalog 1 namespace: openshift-marketplace 2 annotations: olm.catalogImageTemplate: 3 \"quay.io/example-org/example-catalog:v{kube_major_version}.{kube_minor_version}.{kube_patch_version}\" spec: displayName: Example Catalog 4 image: quay.io/example-org/example-catalog:v1 5 priority: -400 6 publisher: Example Org sourceType: grpc 7 grpcPodConfig: securityContextConfig: <security_mode> 8 nodeSelector: 9 custom_label: <label> priorityClassName: system-cluster-critical 10 tolerations: 11 - key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoSchedule\" updateStrategy: registryPoll: 12 interval: 30m0s status: connectionState: address: example-catalog.openshift-marketplace.svc:50051 lastConnect: 2021-08-26T18:14:31Z lastObservedState: READY 13 latestImageRegistryPoll: 2021-08-26T18:46:25Z 14 registryService: 15 createdAt: 2021-08-26T16:16:37Z port: 50051 protocol: grpc serviceName: example-catalog serviceNamespace: openshift-marketplace",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: example-operator namespace: example-namespace spec: channel: stable name: example-operator source: example-catalog sourceNamespace: openshift-marketplace",
"registry.redhat.io/redhat/redhat-operator-index:v4.14",
"registry.redhat.io/redhat/redhat-operator-index:v4.15",
"apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: generation: 1 name: example-catalog namespace: openshift-marketplace annotations: olm.catalogImageTemplate: \"quay.io/example-org/example-catalog:v{kube_major_version}.{kube_minor_version}\" spec: displayName: Example Catalog image: quay.io/example-org/example-catalog:v1.28 priority: -400 publisher: Example Org",
"quay.io/example-org/example-catalog:v1.28",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: example-operator namespace: example-namespace spec: channel: stable name: example-operator source: example-catalog sourceNamespace: openshift-marketplace",
"apiVersion: operators.coreos.com/v1alpha1 kind: InstallPlan metadata: name: install-abcde namespace: operators spec: approval: Automatic approved: true clusterServiceVersionNames: - my-operator.v1.0.1 generation: 1 status: catalogSources: [] conditions: - lastTransitionTime: '2021-01-01T20:17:27Z' lastUpdateTime: '2021-01-01T20:17:27Z' status: 'True' type: Installed phase: Complete plan: - resolving: my-operator.v1.0.1 resource: group: operators.coreos.com kind: ClusterServiceVersion manifest: >- name: my-operator.v1.0.1 sourceName: redhat-operators sourceNamespace: openshift-marketplace version: v1alpha1 status: Created - resolving: my-operator.v1.0.1 resource: group: apiextensions.k8s.io kind: CustomResourceDefinition manifest: >- name: webservers.web.servers.org sourceName: redhat-operators sourceNamespace: openshift-marketplace version: v1beta1 status: Created - resolving: my-operator.v1.0.1 resource: group: '' kind: ServiceAccount manifest: >- name: my-operator sourceName: redhat-operators sourceNamespace: openshift-marketplace version: v1 status: Created - resolving: my-operator.v1.0.1 resource: group: rbac.authorization.k8s.io kind: Role manifest: >- name: my-operator.v1.0.1-my-operator-6d7cbc6f57 sourceName: redhat-operators sourceNamespace: openshift-marketplace version: v1 status: Created - resolving: my-operator.v1.0.1 resource: group: rbac.authorization.k8s.io kind: RoleBinding manifest: >- name: my-operator.v1.0.1-my-operator-6d7cbc6f57 sourceName: redhat-operators sourceNamespace: openshift-marketplace version: v1 status: Created",
"packageName: example channels: - name: alpha currentCSV: example.v0.1.2 - name: beta currentCSV: example.v0.1.3 defaultChannel: alpha",
"apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: name: etcdoperator.v0.9.2 namespace: placeholder annotations: spec: displayName: etcd description: Etcd Operator replaces: etcdoperator.v0.9.0 skips: - etcdoperator.v0.9.1",
"olm.skipRange: <semver_range>",
"apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: name: elasticsearch-operator.v4.1.2 namespace: <namespace> annotations: olm.skipRange: '>=4.1.0 <4.1.2'",
"properties: - type: olm.kubeversion value: version: \"1.16.0\"",
"properties: - property: type: color value: red - property: type: shape value: square - property: type: olm.gvk value: group: olm.coreos.io version: v1alpha1 kind: myresource",
"dependencies: - type: olm.package value: packageName: prometheus version: \">0.27.0\" - type: olm.gvk value: group: etcd.database.coreos.com kind: EtcdCluster version: v1beta2",
"type: olm.constraint value: failureMessage: 'require to have \"certified\"' cel: rule: 'properties.exists(p, p.type == \"certified\")'",
"type: olm.constraint value: failureMessage: 'require to have \"certified\" and \"stable\" properties' cel: rule: 'properties.exists(p, p.type == \"certified\") && properties.exists(p, p.type == \"stable\")'",
"schema: olm.bundle name: red.v1.0.0 properties: - type: olm.constraint value: failureMessage: All are required for Red because all: constraints: - failureMessage: Package blue is needed for package: name: blue versionRange: '>=1.0.0' - failureMessage: GVK Green/v1 is needed for gvk: group: greens.example.com version: v1 kind: Green",
"schema: olm.bundle name: red.v1.0.0 properties: - type: olm.constraint value: failureMessage: Any are required for Red because any: constraints: - gvk: group: blues.example.com version: v1beta1 kind: Blue - gvk: group: blues.example.com version: v1beta2 kind: Blue - gvk: group: blues.example.com version: v1 kind: Blue",
"schema: olm.bundle name: red.v1.0.0 properties: - type: olm.constraint value: all: constraints: - failureMessage: Package blue is needed for package: name: blue versionRange: '>=1.0.0' - failureMessage: Cannot be required for Red because not: constraints: - gvk: group: greens.example.com version: v1alpha1 kind: greens",
"schema: olm.bundle name: red.v1.0.0 properties: - type: olm.constraint value: failureMessage: Required for Red because any: constraints: - all: constraints: - package: name: blue versionRange: '>=1.0.0' - gvk: group: blues.example.com version: v1 kind: Blue - all: constraints: - package: name: blue versionRange: '<1.0.0' - gvk: group: blues.example.com version: v1beta1 kind: Blue",
"apiVersion: \"operators.coreos.com/v1alpha1\" kind: \"CatalogSource\" metadata: name: \"my-operators\" namespace: \"operators\" spec: sourceType: grpc grpcPodConfig: securityContextConfig: <security_mode> 1 image: example.com/my/operator-index:v1 displayName: \"My Operators\" priority: 100",
"dependencies: - type: olm.package value: packageName: etcd version: \">3.1.0\" - type: olm.gvk value: group: etcd.database.coreos.com kind: EtcdCluster version: v1beta2",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: my-group namespace: my-namespace spec: targetNamespaces: - my-namespace",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: my-group namespace: my-namespace spec: selector: cool.io/prod: \"true\"",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: my-group namespace: my-namespace",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: annotations: olm.providedAPIs: PackageManifest.v1alpha1.packages.apps.redhat.com name: olm-operators namespace: local spec: selector: {} serviceAccountName: metadata: creationTimestamp: null targetNamespaces: - local status: lastUpdated: 2019-02-19T16:18:28Z namespaces: - local",
"cat << EOF | oc apply -f - apiVersion: operators.coreos.com/v1 kind: OLMConfig metadata: name: cluster spec: features: disableCopiedCSVs: false EOF",
"cat << EOF | oc apply -f - apiVersion: operators.coreos.com/v1 kind: OLMConfig metadata: name: cluster spec: features: disableCopiedCSVs: true EOF",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: cluster-monitoring namespace: cluster-monitoring annotations: olm.providedAPIs: Alertmanager.v1.monitoring.coreos.com,Prometheus.v1.monitoring.coreos.com,PrometheusRule.v1.monitoring.coreos.com,ServiceMonitor.v1.monitoring.coreos.com spec: staticProvidedAPIs: true selector: matchLabels: something.cool.io/cluster-monitoring: \"true\"",
"attenuated service account query failed - more than one operator group(s) are managing this namespace count=2",
"apiVersion: operators.coreos.com/v1 kind: OperatorCondition metadata: name: my-operator namespace: operators spec: conditions: - type: Upgradeable 1 status: \"False\" 2 reason: \"migration\" message: \"The Operator is performing a migration.\" lastTransitionTime: \"2020-08-24T23:15:55Z\"",
"apiVersion: config.openshift.io/v1 kind: OperatorHub metadata: name: cluster spec: disableAllDefaultSources: true 1 sources: [ 2 { name: \"community-operators\", disabled: false } ]",
"registry.redhat.io/redhat/redhat-operator-index:v4.8",
"registry.redhat.io/redhat/redhat-operator-index:v4.9",
"apiVersion: apiextensions.k8s.io/v1 1 kind: CustomResourceDefinition metadata: name: crontabs.stable.example.com 2 spec: group: stable.example.com 3 versions: - name: v1 4 served: true storage: true schema: openAPIV3Schema: type: object properties: spec: type: object properties: cronSpec: type: string image: type: string replicas: type: integer scope: Namespaced 5 names: plural: crontabs 6 singular: crontab 7 kind: CronTab 8 shortNames: - ct 9",
"oc create -f <file_name>.yaml",
"/apis/<spec:group>/<spec:version>/<scope>/*/<names-plural>/",
"/apis/stable.example.com/v1/namespaces/*/crontabs/",
"kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 1 metadata: name: aggregate-cron-tabs-admin-edit 2 labels: rbac.authorization.k8s.io/aggregate-to-admin: \"true\" 3 rbac.authorization.k8s.io/aggregate-to-edit: \"true\" 4 rules: - apiGroups: [\"stable.example.com\"] 5 resources: [\"crontabs\"] 6 verbs: [\"get\", \"list\", \"watch\", \"create\", \"update\", \"patch\", \"delete\", \"deletecollection\"] 7 --- kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: aggregate-cron-tabs-view 8 labels: # Add these permissions to the \"view\" default role. rbac.authorization.k8s.io/aggregate-to-view: \"true\" 9 rbac.authorization.k8s.io/aggregate-to-cluster-reader: \"true\" 10 rules: - apiGroups: [\"stable.example.com\"] 11 resources: [\"crontabs\"] 12 verbs: [\"get\", \"list\", \"watch\"] 13",
"oc create -f <file_name>.yaml",
"apiVersion: \"stable.example.com/v1\" 1 kind: CronTab 2 metadata: name: my-new-cron-object 3 finalizers: 4 - finalizer.stable.example.com spec: 5 cronSpec: \"* * * * /5\" image: my-awesome-cron-image",
"oc create -f <file_name>.yaml",
"oc get <kind>",
"oc get crontab",
"NAME KIND my-new-cron-object CronTab.v1.stable.example.com",
"oc get crontabs",
"oc get crontab",
"oc get ct",
"oc get <kind> -o yaml",
"oc get ct -o yaml",
"apiVersion: v1 items: - apiVersion: stable.example.com/v1 kind: CronTab metadata: clusterName: \"\" creationTimestamp: 2017-05-31T12:56:35Z deletionGracePeriodSeconds: null deletionTimestamp: null name: my-new-cron-object namespace: default resourceVersion: \"285\" selfLink: /apis/stable.example.com/v1/namespaces/default/crontabs/my-new-cron-object uid: 9423255b-4600-11e7-af6a-28d2447dc82b spec: cronSpec: '* * * * /5' 1 image: my-awesome-cron-image 2",
"apiVersion: \"stable.example.com/v1\" 1 kind: CronTab 2 metadata: name: my-new-cron-object 3 finalizers: 4 - finalizer.stable.example.com spec: 5 cronSpec: \"* * * * /5\" image: my-awesome-cron-image",
"oc create -f <file_name>.yaml",
"oc get <kind>",
"oc get crontab",
"NAME KIND my-new-cron-object CronTab.v1.stable.example.com",
"oc get crontabs",
"oc get crontab",
"oc get ct",
"oc get <kind> -o yaml",
"oc get ct -o yaml",
"apiVersion: v1 items: - apiVersion: stable.example.com/v1 kind: CronTab metadata: clusterName: \"\" creationTimestamp: 2017-05-31T12:56:35Z deletionGracePeriodSeconds: null deletionTimestamp: null name: my-new-cron-object namespace: default resourceVersion: \"285\" selfLink: /apis/stable.example.com/v1/namespaces/default/crontabs/my-new-cron-object uid: 9423255b-4600-11e7-af6a-28d2447dc82b spec: cronSpec: '* * * * /5' 1 image: my-awesome-cron-image 2"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/operators/understanding-operators |
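The per-tenant installation workflow described in section 2.7.2 can be captured in three short manifests. The following is a minimal sketch rather than text from the product documentation: the team-a-operator and team-a namespace names are hypothetical, and the example-operator package and example-catalog source simply reuse the names from the earlier Subscription example. Create a dedicated namespace for the Operator, scope an Operator group to the tenant's own namespace, and then create the Subscription in the Operator namespace:

apiVersion: v1
kind: Namespace
metadata:
  name: team-a-operator   # hypothetical namespace that holds only the Operator
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: team-a-operatorgroup
  namespace: team-a-operator
spec:
  targetNamespaces:
  - team-a                # hypothetical tenant namespace that the Operator watches
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: example-operator
  namespace: team-a-operator
spec:
  channel: stable
  name: example-operator
  source: example-catalog
  sourceNamespace: openshift-marketplace

With this layout the Operator pod and its service account live in team-a-operator and only the Operator APIs are usable from team-a, which matches the tenant separation goals described in the multitenancy section.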
Chapter 27. dns | Chapter 27. dns This chapter describes the commands under the dns command. 27.1. dns quota list List quotas Usage: Table 27.1. Command arguments Value Summary -h, --help Show this help message and exit --all-projects Show results from all projects. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None --project-id PROJECT_ID Project id default: current project Table 27.2. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 27.3. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 27.4. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 27.5. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 27.2. dns quota reset Reset quotas Usage: Table 27.6. Command arguments Value Summary -h, --help Show this help message and exit --all-projects Show results from all projects. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None --project-id PROJECT_ID Project id 27.3. dns quota set Set quotas Usage: Table 27.7. Command arguments Value Summary -h, --help Show this help message and exit --all-projects Show results from all projects. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None --project-id PROJECT_ID Project id --api-export-size <api-export-size> New value for the api-export-size quota --recordset-records <recordset-records> New value for the recordset-records quota --zone-records <zone-records> New value for the zone-records quota --zone-recordsets <zone-recordsets> New value for the zone-recordsets quota --zones <zones> New value for the zones quota Table 27.8. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 27.9. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 27.10. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 27.11. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 27.4. dns service list List service statuses Usage: Table 27.12. Command arguments Value Summary -h, --help Show this help message and exit --hostname HOSTNAME Hostname --service_name SERVICE_NAME Service name --status STATUS Status --all-projects Show results from all projects. 
default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None Table 27.13. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 27.14. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 27.15. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 27.16. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 27.5. dns service show Show service status details Usage: Table 27.17. Positional arguments Value Summary id Service status id Table 27.18. Command arguments Value Summary -h, --help Show this help message and exit --all-projects Show results from all projects. default: false --sudo-project-id SUDO_PROJECT_ID Project id to impersonate for this command. default: None Table 27.19. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 27.20. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 27.21. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 27.22. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. | [
"openstack dns quota list [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--all-projects] [--sudo-project-id SUDO_PROJECT_ID] [--project-id PROJECT_ID]",
"openstack dns quota reset [-h] [--all-projects] [--sudo-project-id SUDO_PROJECT_ID] [--project-id PROJECT_ID]",
"openstack dns quota set [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--all-projects] [--sudo-project-id SUDO_PROJECT_ID] [--project-id PROJECT_ID] [--api-export-size <api-export-size>] [--recordset-records <recordset-records>] [--zone-records <zone-records>] [--zone-recordsets <zone-recordsets>] [--zones <zones>]",
"openstack dns service list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--hostname HOSTNAME] [--service_name SERVICE_NAME] [--status STATUS] [--all-projects] [--sudo-project-id SUDO_PROJECT_ID]",
"openstack dns service show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--all-projects] [--sudo-project-id SUDO_PROJECT_ID] id"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/command_line_interface_reference/dns |
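As a hedged illustration of the quota commands documented above, the following raises the zone-related limits for one project and then prints the resulting quotas; <project_id> is a placeholder and the numeric values are arbitrary examples, not recommended defaults:

USD openstack dns quota set --project-id <project_id> --zones 20 --zone-recordsets 1000
USD openstack dns quota list --project-id <project_id>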
function::returnstr | function::returnstr Name function::returnstr - Formats the return value as a string Synopsis Arguments format Variable to determine return type base value Description This function is used by the nd_syscall tapset, and returns a string. Set format equal to 1 for a decimal, 2 for hex, 3 for octal. Note that this function should only be used in dwarfless probes (i.e. 'kprobe.function("foo")'). Other probes should use return_str. | [
"returnstr:string(format:long)"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-returnstr |
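A minimal sketch of how returnstr might be called from a dwarfless return probe. The do_sys_open probe point is only an illustrative assumption about the running kernel, not part of the reference entry above:

# Print the process name and the return value formatted as a decimal string (format 1).
probe kprobe.function("do_sys_open").return {
    printf("%s -> %s\n", execname(), returnstr(1))
}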
12.2. About Host Entry Configuration Properties | 12.2. About Host Entry Configuration Properties A host entry can contain information about the host that is outside its system configuration, such as its physical location, MAC address, keys, and certificates. This information can be set when the host entry is created if it is created manually; otherwise, most of this information needs to be added to the host entry after the host is enrolled in the domain. Table 12.1. Host Configuration Properties UI Field Command-Line Option Description Description --desc = description A description of the host. Locality --locality = locality The geographic location of the host. Location --location = location The physical location of the host, such as its data center rack. Platform --platform = string The host hardware or architecture. Operating system --os = string The operating system and version for the host. MAC address --macaddress = address The MAC address for the host. This is a multi-valued attribute. The MAC address is used by the NIS plug-in to create a NIS ethers map for the host. SSH public keys --sshpubkey = string The full SSH public key for the host. This is a multi-valued attribute, so multiple keys can be set. Principal name (not editable) --principalname = principal The Kerberos principal name for the host. This defaults to the host name during the client installation, unless a different principal is explicitly set in the -p . This can be changed using the command-line tools, but cannot be changed in the UI. Set One-Time Password --password = string Sets a password for the host which can be used in bulk enrollment. - --random Generates a random password to be used in bulk enrollment. - --certificate = string A certificate blob for the host. - --updatedns This sets whether the host can dynamically update its DNS entries if its IP address changes. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/host-attr |
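A hedged example of setting several of these properties from the command line, assuming a host entry named client.example.com already exists; the location, operating system, and MAC address values are placeholders:

USD ipa host-mod client.example.com --location="lab1 rack12" --os="RHEL 7.9" --macaddress=00:1B:44:11:3A:B7

The same options can be supplied to ipa host-add when the host entry is created manually.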
function::ansi_cursor_hide | function::ansi_cursor_hide Name function::ansi_cursor_hide - Hides the cursor. Synopsis Arguments None Description Sends ansi code for hiding the cursor. | [
"ansi_cursor_hide()"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-ansi-cursor-hide |
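A small sketch of typical usage, pairing this call with ansi_cursor_show, which is assumed here to be the matching function in the same ansi tapset, so that the cursor is restored when the script exits:

# Hide the terminal cursor while the script runs and restore it on exit.
probe begin { ansi_cursor_hide() }
probe end { ansi_cursor_show() }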
8.27. certmonger | 8.27. certmonger 8.27.1. RHBA-2014:1512 - certmonger bug fix and enhancement update Updated certmonger packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The certmonger service monitors certificates, warning of their impending expiration, and optionally attempting to re-enroll with supported Certificate Authorities (CA). Note The certmonger packages have been upgraded to upstream version 0.75.13, which provides a number of bug fixes and enhancements over the previous version, including: - support for retrieving an IPA server's root certificate and optionally storing it to specified locations - improvements in how the certmonger daemon handles disconnection from the system message bus - improvements in how the certmonger daemon runs enrollment helpers and parses results returned by them - fixed bug causing unexpected termination if an attempt to save a certificate failed - fixed incorrect use of the libdbus library that triggered the _dbus_abort() function - fixed segmentation fault with incorrectly structured entries in the /var/lib/certmonger/cas/ directory (BZ#1098208, BZ#948993, BZ#1032760, BZ#1103090, BZ#1115831) This update also fixes the following bugs: Bug Fix BZ#1125342 This update fixes the implementation of the remove_known_ca dbus call in the certmonger package to prevent the certmonger daemon from terminating unexpectedly when called by remove_known_ca. In addition, this update adds the following enhancement: Enhancement BZ#1027265 This update adds the certmonger_selinux manual page to document the effect that SELinux has in limiting the allowed access to locations for the certmonger daemon. Also, the selinux.txt document has been added to the certmonger package to provide more details about interaction with SELinux. A reference to certmonger_selinux and selinux.txt has been added to other certmonger man pages. Users of certmonger are advised to upgrade to these updated packages, which fix these bugs and add these enhancements. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/certmonger |
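To see what the updated daemon is tracking after installing these packages, the getcert utility shipped with certmonger lists the monitored certificate requests, and the manual page added by this update can be read directly. This is a generic illustration rather than part of the erratum text:

USD getcert list
USD man certmonger_selinux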
Chapter 11. Measuring scheduling latency using timerlat in RHEL for Real Time | Chapter 11. Measuring scheduling latency using timerlat in RHEL for Real Time The rtla-timerlat tool is an interface for the timerlat tracer. The timerlat tracer finds sources of wake-up latencies for real-time threads. The timerlat tracer creates a kernel thread per CPU with a real-time priority, and these threads set a periodic timer to wake up and go back to sleep. On a wake up, timerlat finds and collects information, which is useful to debug operating system timer latencies. The timerlat tracer generates an output and prints the following two lines at every activation: The timerlat tracer periodically prints the timer latency seen at the timer interrupt request (IRQ) handler. This is the first output, seen at the hardirq context before a thread activation. The second output is the timer latency of a thread. The ACTIVATION ID field relates the interrupt request (IRQ) occurrence to its respective thread execution. 11.1. Configuring the timerlat tracer to measure scheduling latency You can configure the timerlat tracer by adding timerlat in the current_tracer file of the tracing system. The current_tracer file is located in the /sys/kernel/tracing directory, where the tracing file system is generally mounted. The timerlat tracer measures the interrupt requests (IRQs) and saves the trace output for analysis when a thread latency is more than 100 microseconds. Procedure List the current tracer: The no operation ( nop ) tracer is the default tracer. Add the timerlat tracer in the current_tracer file of the tracing system: Generate a tracing output: Verification Enter the following command to check if timerlat is enabled as the current tracer: 11.2. The timerlat tracer options The timerlat tracer is built on top of the osnoise tracer. Therefore, you can set the options in the /osnoise/config directory to trace and capture information for thread scheduling latencies. timerlat options cpus Sets CPUs for a timerlat thread to execute on. timerlat_period_us Sets the duration period of the timerlat thread in microseconds. stop_tracing_us Stops the system tracing if a timer latency at the irq context is more than the configured value. Writing 0 disables this option. stop_tracing_total_us Stops the system tracing if the total noise is more than the configured value. Writing 0 disables this option. print_stack Saves the stack of the interrupt requests (IRQs) occurrence. The stack is saved after the thread context event, or if the IRQ handler latency is more than the configured value. 11.3. Measuring timer latency with rtla-timerlat-top The rtla-timerlat-top tracer displays a summary of the periodic output from the timerlat tracer . The tracer output also provides information about each operating system noise occurrence and about events, such as osnoise and tracepoints . You can view this information by using the -t option. Procedure To measure timer latency: 11.4. The rtla timerlat top tracer options By using the rtla timerlat top --help command, you can view the help usage on options for the rtla-timerlat-top tracer . timerlat-top-tracer options -p, --period us Sets the timerlat tracer period in microseconds. -i, --irq us Stops the trace if the interrupt requests (IRQs) latency is more than the argument in microseconds. -T, --thread us Stops the trace if the thread latency is more than the argument in microseconds. -t, --trace Saves the stopped trace to the timerlat_trace.txt file.
-s, --stack us Saves the stack trace at the interrupt requests (IRQs), if a thread latency is more than the argument. | [
"cat /sys/kernel/tracing/current_tracer nop",
"cd /sys/kernel/tracing/ echo timerlat > current_tracer",
"cat trace tracer: timerlat",
"cat /sys/kernel/tracing/current_tracer timerlat",
"rtla timerlat top -s 30 -T 30 -t"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_real_time/9/html/optimizing_rhel_9_for_real_time_for_low_latency_operation/measuring-scheduling-latency-using-timerlat-in-rhel-for-real-time_optimizing-rhel9-for-real-time-for-low-latency-operation |
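The tracer options listed in section 11.2 are exposed as files under the osnoise directory of the tracing file system. A brief sketch, assuming the tracing file system is mounted at /sys/kernel/tracing and that the option files live under the osnoise directory as described above:
cd /sys/kernel/tracing
echo timerlat > current_tracer
echo 1 > osnoise/cpus                     # run the timerlat kernel thread on CPU 1 only
echo 1000 > osnoise/timerlat_period_us    # wake the thread every 1000 microseconds
echo 30 > osnoise/stop_tracing_total_us   # stop tracing when the total latency exceeds 30 us
cat trace                                 # inspect the collected output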
Chapter 5. Enabling disk encryption | Chapter 5. Enabling disk encryption You can enable encryption of installation disks using either the TPM v2 or Tang encryption modes. Note In some situations, when you enable TPM disk encryption in the firmware for a bare-metal host and then boot it from an ISO that you generate with the Assisted Installer, the cluster deployment can get stuck. This can happen if there are left-over TPM encryption keys from a previous installation on the host. For more information, see BZ#2011634 . If you experience this problem, contact Red Hat support. 5.1. Enabling TPM v2 encryption Prerequisites Check to see if TPM v2 encryption is enabled in the BIOS on each host. Most Dell systems require this. Check the manual for your computer. The Assisted Installer will also validate that TPM is enabled in the firmware. See the disk_encryption model in the Assisted Installer API for additional details. Important Verify that a TPM v2 encryption chip is installed on each node and enabled in the firmware. Procedure Optional: Using the web console, in the Cluster details step of the user interface wizard, choose to enable TPM v2 encryption on either the control plane nodes, workers, or both. Optional: Using the API, follow the "Modifying hosts" procedure. Set the disk_encryption.enable_on setting to all , masters , or workers . Set the disk_encryption.mode setting to tpmv2 . Refresh the API token: USD source refresh-token Enable TPM v2 encryption: USD curl https://api.openshift.com/api/assisted-install/v2/clusters/USD{CLUSTER_ID} \ -X PATCH \ -H "Authorization: Bearer USD{API_TOKEN}" \ -H "Content-Type: application/json" \ -d ' { "disk_encryption": { "enable_on": "none", "mode": "tpmv2" } } ' | jq Valid settings for enable_on are all , master , worker , or none . 5.2. Enabling Tang encryption Prerequisites You have access to a Red Hat Enterprise Linux (RHEL) 8 machine that can be used to generate a thumbprint of the Tang exchange key. Procedure Set up a Tang server or access an existing one. See Network-bound disk encryption for instructions. You can set multiple Tang servers, but the Assisted Installer must be able to connect to all of them during installation. On the Tang server, retrieve the thumbprint for the Tang server using tang-show-keys : USD tang-show-keys <port> Optional: Replace <port> with the port number. The default port number is 80 . Example thumbprint 1gYTN_LpU9ZMB35yn5IbADY5OQ0 Optional: Retrieve the thumbprint for the Tang server using jose . Ensure jose is installed on the Tang server: USD sudo dnf install jose On the Tang server, retrieve the thumbprint using jose : USD sudo jose jwk thp -i /var/db/tang/<public_key>.jwk Replace <public_key> with the public exchange key for the Tang server. Example thumbprint 1gYTN_LpU9ZMB35yn5IbADY5OQ0 Optional: In the Cluster details step of the user interface wizard, choose to enable Tang encryption on either the control plane nodes, workers, or both. You will be required to enter URLs and thumbprints for the Tang servers. Optional: Using the API, follow the "Modifying hosts" procedure. Refresh the API token: USD source refresh-token Set the disk_encryption.enable_on setting to all , masters , or workers . Set the disk_encryption.mode setting to tang .
Set disk_encryption.tang_servers to provide the URL and thumbprint details about one or more Tang servers: USD curl https://api.openshift.com/api/assisted-install/v2/clusters/USD{CLUSTER_ID} \ -X PATCH \ -H "Authorization: Bearer USD{API_TOKEN}" \ -H "Content-Type: application/json" \ -d ' { "disk_encryption": { "enable_on": "all", "mode": "tang", "tang_servers": "[{\"url\":\"http://tang.example.com:7500\",\"thumbprint\":\"PLjNyRdGw03zlRoGjQYMahSZGu9\"},{\"url\":\"http://tang2.example.com:7500\",\"thumbprint\":\"XYjNyRdGw03zlRoGjQYMahSZGu3\"}]" } } ' | jq Valid settings for enable_on are all , master , worker , or none . Within the tang_servers value, escape the quotes within the object(s), as shown in the example. 5.3. Additional resources Modifying hosts | [
"source refresh-token",
"curl https://api.openshift.com/api/assisted-install/v2/clusters/USD{CLUSTER_ID} -X PATCH -H \"Authorization: Bearer USD{API_TOKEN}\" -H \"Content-Type: application/json\" -d ' { \"disk_encryption\": { \"enable_on\": \"none\", \"mode\": \"tpmv2\" } } ' | jq",
"tang-show-keys <port>",
"1gYTN_LpU9ZMB35yn5IbADY5OQ0",
"sudo dnf install jose",
"sudo jose jwk thp -i /var/db/tang/<public_key>.jwk",
"1gYTN_LpU9ZMB35yn5IbADY5OQ0",
"source refresh-token",
"curl https://api.openshift.com/api/assisted-install/v2/clusters/USD{CLUSTER_ID} -X PATCH -H \"Authorization: Bearer USD{API_TOKEN}\" -H \"Content-Type: application/json\" -d ' { \"disk_encryption\": { \"enable_on\": \"all\", \"mode\": \"tang\", \"tang_servers\": \"[{\\\"url\\\":\\\"http://tang.example.com:7500\\\",\\\"thumbprint\\\":\\\"PLjNyRdGw03zlRoGjQYMahSZGu9\\\"},{\\\"url\\\":\\\"http://tang2.example.com:7500\\\",\\\"thumbprint\\\":\\\"XYjNyRdGw03zlRoGjQYMahSZGu3\\\"}]\" } } ' | jq"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.14/html/installing_openshift_container_platform_with_the_assisted_installer/assembly_enabling-disk-encryption |
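After patching the cluster, the stored settings can be read back through the same API to confirm the change. A minimal sketch, reusing the CLUSTER_ID and API_TOKEN variables from the steps above:
curl -s https://api.openshift.com/api/assisted-install/v2/clusters/${CLUSTER_ID} -H "Authorization: Bearer ${API_TOKEN}" | jq '.disk_encryption'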
Chapter 68. ListenerAddress schema reference | Chapter 68. ListenerAddress schema reference Used in: ListenerStatus Property Property type Description host string The DNS name or IP address of the Kafka bootstrap service. port integer The port of the Kafka bootstrap service. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-ListenerAddress-reference |
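These fields are populated under the status of the Kafka custom resource once the listeners are ready. As a sketch (the cluster name my-cluster and the namespace kafka are assumptions), the listener addresses can be read with a jsonpath query:
oc get kafka my-cluster -n kafka -o jsonpath='{range .status.listeners[*].addresses[*]}{.host}{":"}{.port}{"\n"}{end}'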
16.4.5. Augeas and libguestfs Scripting | 16.4.5. Augeas and libguestfs Scripting Combining libguestfs with Augeas can help when writing scripts to manipulate Linux guest virtual machine configuration. For example, the following script uses Augeas to parse the keyboard configuration of a guest virtual machine, and to print out the layout. Note that this example only works with guest virtual machines running Red Hat Enterprise Linux: Augeas can also be used to modify configuration files. You can modify the above script to change the keyboard layout: Note the three changes between the two scripts: The --ro option has been removed in the second example, giving the ability to write to the guest virtual machine. The aug-get command has been changed to aug-set to modify the value instead of fetching it. The new value will be "gb" (including the quotes). The aug-save command is used here so Augeas will write the changes out to disk. Note More information about Augeas can be found on the website http://augeas.net . guestfish can do much more than we can cover in this introductory document. For example, creating disk images from scratch: Or copying out whole directories from a disk image: For more information see the man page guestfish(1). | [
"#!/bin/bash - set -e guestname=\"USD1\" guestfish -d \"USD1\" -i --ro <<'EOF' aug-init / 0 aug-get /files/etc/sysconfig/keyboard/LAYOUT EOF",
"#!/bin/bash - set -e guestname=\"USD1\" guestfish -d \"USD1\" -i <<'EOF' aug-init / 0 aug-set /files/etc/sysconfig/keyboard/LAYOUT '\"gb\"' aug-save EOF",
"guestfish -N fs",
"><fs> copy-out /home /tmp/home"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sect-augeas-and-libguestfs-scripting |
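When the exact Augeas path is not known in advance, the nodes under a file can be listed first. The following read-only sketch follows the same pattern as the scripts above and assumes a Red Hat Enterprise Linux guest:
guestfish -d "$guestname" -i --ro <<'EOF'
aug-init / 0
aug-match /files/etc/sysconfig/keyboard/*
EOF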
Chapter 26. Storage Driver Updates | Chapter 26. Storage Driver Updates The md driver has been updated to the latest upstream version. The nvme driver has been updated to version 0.10. The O2Micro card reader driver, which specifically enables the SDHCI card reader to work on the O2Micro chips, has been updated to the latest upstream version. The ipr driver, used to enable new SAS VRAID adapters on POWER, has been updated to version 2.6.3. The tcm_fc.ko (FCoE fabric) driver has been updated to the latest upstream version. The qla2xxx driver has been updated to version 8.07.00.26.06.8-k. The LPFC (Avago Emulex Fibrechannel) driver has been updated to version 11.0.0.4. The megaraid_sas driver has been updated to version 06.810.09.00-rh1. The mpt2sas driver has been updated to version 20.102.00.00. The mpt3sas driver has been updated to version 09.102.00.00-rh. The hpsa (HP Smart Array SCSI driver) driver has been updated to version 3.4.10-0-RH1. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.8_technical_notes/storage_drivers |
Storage | Storage OpenShift Container Platform 4.17 Configuring and managing storage in OpenShift Container Platform Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/storage/index |
9.7.5. NFS over RDMA | 9.7.5. NFS over RDMA NFS over Remote Direct Memory Access (NFSoRDMA) is best suited for CPU-intensive workloads where a large amount of data needs to be transferred. NFSoRDMA is usually used over an InfiniBand fiber, which provides higher performance with lower latency. The data movement offload feature available with RDMA reduces the amount of data copied around. Procedure 9.2. Enabling RDMA transport in the NFS server Ensure the RDMA RPM is installed and the RDMA service is enabled: Ensure the package that provides the nfs-rdma service is installed and the service is enabled: Ensure that the RDMA port is set to the preferred port (default for Red Hat Enterprise Linux 6 is 2050 ): edit the /etc/rdma/rdma.conf file to set NFSoRDMA_LOAD=yes and NFSoRDMA_PORT to the desired port. Set up the exported file system as normal for NFS mounts. Procedure 9.3. Enabling RDMA from the client Ensure the RDMA RPM is installed and the RDMA service is enabled: Mount the NFS exported partition using the RDMA option on the mount call. The port option can optionally be added to the call. The following Red Hat Knowledgebase article provides an overview of cards that use kernel modules supported for NFSoRDMA: What RDMA hardware is supported in Red Hat Enterprise Linux?
"yum install rdma; chkconfig --level 2345 rdma on",
"yum install rdma; chkconfig --level 345 nfs-rdma on",
"yum install rdma; chkconfig --level 2345 rdma on",
"mount -t nfs -o rdma,port= port_number"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/storage_administration_guide/nfs-rdma |
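Putting the client-side steps together, a mount over RDMA with an explicit port could look like the following; the server name, export path, and mount point are placeholders:
mount -t nfs -o rdma,port=2050 nfs-server.example.com:/export/data /mnt/data
# Confirm that the rdma option is active for the mount
grep /mnt/data /proc/mounts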
Chapter 15. Encrypting etcd data | Chapter 15. Encrypting etcd data 15.1. About etcd encryption By default, etcd data is not encrypted in OpenShift Container Platform. You can enable etcd encryption for your cluster to provide an additional layer of data security. For example, it can help protect against the loss of sensitive data if an etcd backup is exposed to the incorrect parties. When you enable etcd encryption, the following OpenShift API server and Kubernetes API server resources are encrypted: Secrets Config maps Routes OAuth access tokens OAuth authorize tokens When you enable etcd encryption, encryption keys are created. You must have these keys to restore from an etcd backup. Note Etcd encryption only encrypts values, not keys. Resource types, namespaces, and object names are unencrypted. If etcd encryption is enabled during a backup, the static_kuberesources_<datetimestamp>.tar.gz file contains the encryption keys for the etcd snapshot. For security reasons, store this file separately from the etcd snapshot. However, this file is required to restore a state of etcd from the respective etcd snapshot. 15.2. Supported encryption types The following encryption types are supported for encrypting etcd data in OpenShift Container Platform: AES-CBC Uses AES-CBC with PKCS#7 padding and a 32 byte key to perform the encryption. The encryption keys are rotated weekly. AES-GCM Uses AES-GCM with a random nonce and a 32 byte key to perform the encryption. The encryption keys are rotated weekly. 15.3. Enabling etcd encryption You can enable etcd encryption to encrypt sensitive resources in your cluster. Warning Do not back up etcd resources until the initial encryption process is completed. If the encryption process is not completed, the backup might be only partially encrypted. After you enable etcd encryption, several changes can occur: The etcd encryption might affect the memory consumption of a few resources. You might notice a transient effect on backup performance because the leader must serve the backup. Disk I/O can affect the node that receives the backup state. You can encrypt the etcd database with either AES-GCM or AES-CBC encryption. Note To migrate your etcd database from one encryption type to the other, you can modify the API server's spec.encryption.type field. Migration of the etcd data to the new encryption type occurs automatically. Prerequisites Access to the cluster as a user with the cluster-admin role. Procedure Modify the APIServer object: USD oc edit apiserver Set the spec.encryption.type field to aesgcm or aescbc : spec: encryption: type: aesgcm 1 1 Set to aesgcm for AES-GCM encryption or aescbc for AES-CBC encryption. Save the file to apply the changes. The encryption process starts. It can take 20 minutes or longer for this process to complete, depending on the size of the etcd database. Verify that etcd encryption was successful. Review the Encrypted status condition for the OpenShift API server to verify that its resources were successfully encrypted: USD oc get openshiftapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="Encrypted")]}{.reason}{"\n"}{.message}{"\n"}' The output shows EncryptionCompleted upon successful encryption: EncryptionCompleted All resources encrypted: routes.route.openshift.io If the output shows EncryptionInProgress , encryption is still in progress. Wait a few minutes and try again.
Review the Encrypted status condition for the Kubernetes API server to verify that its resources were successfully encrypted: USD oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="Encrypted")]}{.reason}{"\n"}{.message}{"\n"}' The output shows EncryptionCompleted upon successful encryption: EncryptionCompleted All resources encrypted: secrets, configmaps If the output shows EncryptionInProgress , encryption is still in progress. Wait a few minutes and try again. Review the Encrypted status condition for the OpenShift OAuth API server to verify that its resources were successfully encrypted: USD oc get authentication.operator.openshift.io -o=jsonpath='{range .items[0].status.conditions[?(@.type=="Encrypted")]}{.reason}{"\n"}{.message}{"\n"}' The output shows EncryptionCompleted upon successful encryption: EncryptionCompleted All resources encrypted: oauthaccesstokens.oauth.openshift.io, oauthauthorizetokens.oauth.openshift.io If the output shows EncryptionInProgress , encryption is still in progress. Wait a few minutes and try again. 15.4. Disabling etcd encryption You can disable encryption of etcd data in your cluster. Prerequisites Access to the cluster as a user with the cluster-admin role. Procedure Modify the APIServer object: USD oc edit apiserver Set the encryption field type to identity : spec: encryption: type: identity 1 1 The identity type is the default value and means that no encryption is performed. Save the file to apply the changes. The decryption process starts. It can take 20 minutes or longer for this process to complete, depending on the size of your cluster. Verify that etcd decryption was successful. Review the Encrypted status condition for the OpenShift API server to verify that its resources were successfully decrypted: USD oc get openshiftapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="Encrypted")]}{.reason}{"\n"}{.message}{"\n"}' The output shows DecryptionCompleted upon successful decryption: DecryptionCompleted Encryption mode set to identity and everything is decrypted If the output shows DecryptionInProgress , decryption is still in progress. Wait a few minutes and try again. Review the Encrypted status condition for the Kubernetes API server to verify that its resources were successfully decrypted: USD oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="Encrypted")]}{.reason}{"\n"}{.message}{"\n"}' The output shows DecryptionCompleted upon successful decryption: DecryptionCompleted Encryption mode set to identity and everything is decrypted If the output shows DecryptionInProgress , decryption is still in progress. Wait a few minutes and try again. Review the Encrypted status condition for the OpenShift OAuth API server to verify that its resources were successfully decrypted: USD oc get authentication.operator.openshift.io -o=jsonpath='{range .items[0].status.conditions[?(@.type=="Encrypted")]}{.reason}{"\n"}{.message}{"\n"}' The output shows DecryptionCompleted upon successful decryption: DecryptionCompleted Encryption mode set to identity and everything is decrypted If the output shows DecryptionInProgress , decryption is still in progress. Wait a few minutes and try again. | [
"oc edit apiserver",
"spec: encryption: type: aesgcm 1",
"oc get openshiftapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'",
"EncryptionCompleted All resources encrypted: routes.route.openshift.io",
"oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'",
"EncryptionCompleted All resources encrypted: secrets, configmaps",
"oc get authentication.operator.openshift.io -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'",
"EncryptionCompleted All resources encrypted: oauthaccesstokens.oauth.openshift.io, oauthauthorizetokens.oauth.openshift.io",
"oc edit apiserver",
"spec: encryption: type: identity 1",
"oc get openshiftapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'",
"DecryptionCompleted Encryption mode set to identity and everything is decrypted",
"oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'",
"DecryptionCompleted Encryption mode set to identity and everything is decrypted",
"oc get authentication.operator.openshift.io -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'",
"DecryptionCompleted Encryption mode set to identity and everything is decrypted"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/security_and_compliance/encrypting-etcd |
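Because the encryption and decryption passes can take 20 minutes or longer, the verification commands above can be wrapped in a simple polling loop rather than re-run by hand. A sketch, assuming cluster-admin access with the oc client:
until oc get openshiftapiserver -o=jsonpath='{.items[0].status.conditions[?(@.type=="Encrypted")].reason}' | grep -q EncryptionCompleted; do
  echo "Encryption still in progress, waiting..."
  sleep 60
done
echo "OpenShift API server resources are encrypted"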
function::ipmib_filter_key | function::ipmib_filter_key Name function::ipmib_filter_key - Default filter function for ipmib.* probes Synopsis Arguments skb pointer to the struct sk_buff op value to be counted if skb passes the filter SourceIsLocal 1 is local operation and 0 is non-local operation Description This function is a default filter function. The user can replace this function with their own. The user-supplied filter function returns an index key based on the values in skb . A return value of 0 means this particular skb should not be counted. | [
"ipmib_filter_key:long(skb:long,op:long,SourceIsLocal:long)"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-ipmib-filter-key |
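The filter function is normally used indirectly through the ipmib probe aliases that count packets per key. As an illustration only (the ipmib.InReceives probe point is an assumption based on the standard networking tapset), a one-liner could count received IP packets for ten seconds:
stap -e 'global recv; probe ipmib.InReceives { recv <<< 1 } probe timer.s(10) { printf("IP packets received: %d\n", @count(recv)); exit() }'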
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate your input on our documentation. Do let us know how we can make it better. To give feedback, create a Bugzilla ticket: Go to the Bugzilla website. In the Component section, choose documentation . Fill in the Description field with your suggestion for improvement. Include a link to the relevant part(s) of documentation. Click Submit Bug . | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/deploying_openshift_data_foundation_in_external_mode/providing-feedback-on-red-hat-documentation_rhodf |
Chapter 6. MachineConfigPool [machineconfiguration.openshift.io/v1] | Chapter 6. MachineConfigPool [machineconfiguration.openshift.io/v1] Description MachineConfigPool describes a pool of MachineConfigs. Type object 6.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object MachineConfigPoolSpec is the spec for MachineConfigPool resource. status object MachineConfigPoolStatus is the status for MachineConfigPool resource. 6.1.1. .spec Description MachineConfigPoolSpec is the spec for MachineConfigPool resource. Type object Property Type Description configuration object The targeted MachineConfig object for the machine config pool. machineConfigSelector object machineConfigSelector specifies a label selector for MachineConfigs. Refer https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/ on how label and selectors work. maxUnavailable integer-or-string maxUnavailable defines either an integer number or percentage of nodes in the corresponding pool that can go Unavailable during an update. This includes nodes Unavailable for any reason, including user initiated cordons, failing nodes, etc. The default value is 1. A value larger than 1 will mean multiple nodes going unavailable during the update, which may affect your workload stress on the remaining nodes. You cannot set this value to 0 to stop updates (it will default back to 1); to stop updates, use the 'paused' property instead. Drain will respect Pod Disruption Budgets (PDBs) such as etcd quorum guards, even if maxUnavailable is greater than one. nodeSelector object nodeSelector specifies a label selector for Machines paused boolean paused specifies whether or not changes to this machine config pool should be stopped. This includes generating new desiredMachineConfig and update of machines. 6.1.2. .spec.configuration Description The targeted MachineConfig object for the machine config pool. Type object Property Type Description apiVersion string API version of the referent. fieldPath string If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future. kind string Kind of the referent. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names namespace string Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ resourceVersion string Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency source array source is the list of MachineConfig objects that were used to generate the single MachineConfig object specified in content . source[] object ObjectReference contains enough information to let you inspect or modify the referred object. uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids 6.1.3. .spec.configuration.source Description source is the list of MachineConfig objects that were used to generate the single MachineConfig object specified in content . Type array 6.1.4. .spec.configuration.source[] Description ObjectReference contains enough information to let you inspect or modify the referred object. Type object Property Type Description apiVersion string API version of the referent. fieldPath string If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names namespace string Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ resourceVersion string Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids 6.1.5. .spec.machineConfigSelector Description machineConfigSelector specifies a label selector for MachineConfigs. Refer https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/ on how label and selectors work. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". 
The requirements are ANDed. 6.1.6. .spec.machineConfigSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 6.1.7. .spec.machineConfigSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 6.1.8. .spec.nodeSelector Description nodeSelector specifies a label selector for Machines Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 6.1.9. .spec.nodeSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 6.1.10. .spec.nodeSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 6.1.11. .status Description MachineConfigPoolStatus is the status for MachineConfigPool resource. Type object Property Type Description certExpirys array The certificate expiry dates from the controller config certExpirys[] object the certExpiry contains specific information about a certificate stored in the controllerConfig. conditions array conditions represents the latest available observations of current state. conditions[] object MachineConfigPoolCondition contains condition information for an MachineConfigPool. configuration object configuration represents the MachineConfig object that last successfully rolled out to all nodes. degradedMachineCount integer degradedMachineCount represents the total number of machines marked degraded (or unreconcilable). A node is marked degraded if applying a configuration failed.. machineCount integer machineCount represents the total number of machines in the machine config pool. observedGeneration integer observedGeneration represents the generation observed by the controller. readyMachineCount integer readyMachineCount represents the total number of ready machines targeted by the pool. 
unavailableMachineCount integer unavailableMachineCount represents the total number of unavailable (non-ready) machines targeted by the pool. A node is marked unavailable if it is in updating state or NodeReady condition is false. updatedMachineCount integer updatedMachineCount represents the total number of machines targeted by the pool that have the CurrentMachineConfig as their config. 6.1.12. .status.certExpirys Description The certificate expiry dates from the controller config Type array 6.1.13. .status.certExpirys[] Description the certExpiry contains specific information about a certificate stored in the controllerConfig. Type object Property Type Description bundle string the bundle for which the expiry applies subject string the subject of the cert 6.1.14. .status.conditions Description conditions represents the latest available observations of current state. Type array 6.1.15. .status.conditions[] Description MachineConfigPoolCondition contains condition information for an MachineConfigPool. Type object Property Type Description lastTransitionTime `` lastTransitionTime is the timestamp corresponding to the last status change of this condition. message string message is a human readable description of the details of the last transition, complementing reason. reason string reason is a brief machine readable explanation for the condition's last transition. status string status of the condition, one of ('True', 'False', 'Unknown'). type string type of the condition, currently ('Done', 'Updating', 'Failed'). 6.1.16. .status.configuration Description configuration represents the MachineConfig object that last successfully rolled out to all nodes. Type object Property Type Description apiVersion string API version of the referent. fieldPath string If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names namespace string Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ resourceVersion string Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency source array source is the list of MachineConfig objects that were used to generate the single MachineConfig object specified in content . source[] object ObjectReference contains enough information to let you inspect or modify the referred object. uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids 6.1.17. 
.status.configuration.source Description source is the list of MachineConfig objects that were used to generate the single MachineConfig object specified in content . Type array 6.1.18. .status.configuration.source[] Description ObjectReference contains enough information to let you inspect or modify the referred object. Type object Property Type Description apiVersion string API version of the referent. fieldPath string If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names namespace string Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ resourceVersion string Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids 6.2. API endpoints The following API endpoints are available: /apis/machineconfiguration.openshift.io/v1/machineconfigpools DELETE : delete collection of MachineConfigPool GET : list objects of kind MachineConfigPool POST : create a MachineConfigPool /apis/machineconfiguration.openshift.io/v1/machineconfigpools/{name} DELETE : delete a MachineConfigPool GET : read the specified MachineConfigPool PATCH : partially update the specified MachineConfigPool PUT : replace the specified MachineConfigPool /apis/machineconfiguration.openshift.io/v1/machineconfigpools/{name}/status GET : read status of the specified MachineConfigPool PATCH : partially update status of the specified MachineConfigPool PUT : replace status of the specified MachineConfigPool 6.2.1. /apis/machineconfiguration.openshift.io/v1/machineconfigpools Table 6.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of MachineConfigPool Table 6.2. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. 
Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. 
The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 6.3. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind MachineConfigPool Table 6.4. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. 
If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 6.5. 
HTTP responses HTTP code Reponse body 200 - OK MachineConfigPoolList schema 401 - Unauthorized Empty HTTP method POST Description create a MachineConfigPool Table 6.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.7. Body parameters Parameter Type Description body MachineConfigPool schema Table 6.8. HTTP responses HTTP code Reponse body 200 - OK MachineConfigPool schema 201 - Created MachineConfigPool schema 202 - Accepted MachineConfigPool schema 401 - Unauthorized Empty 6.2.2. /apis/machineconfiguration.openshift.io/v1/machineconfigpools/{name} Table 6.9. Global path parameters Parameter Type Description name string name of the MachineConfigPool Table 6.10. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a MachineConfigPool Table 6.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. 
The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 6.12. Body parameters Parameter Type Description body DeleteOptions schema Table 6.13. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified MachineConfigPool Table 6.14. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 6.15. HTTP responses HTTP code Reponse body 200 - OK MachineConfigPool schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified MachineConfigPool Table 6.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 6.17. Body parameters Parameter Type Description body Patch schema Table 6.18. HTTP responses HTTP code Reponse body 200 - OK MachineConfigPool schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified MachineConfigPool Table 6.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.20. Body parameters Parameter Type Description body MachineConfigPool schema Table 6.21. HTTP responses HTTP code Response body 200 - OK MachineConfigPool schema 201 - Created MachineConfigPool schema 401 - Unauthorized Empty 6.2.3. /apis/machineconfiguration.openshift.io/v1/machineconfigpools/{name}/status Table 6.22. Global path parameters Parameter Type Description name string name of the MachineConfigPool Table 6.23. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified MachineConfigPool Table 6.24. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 6.25. HTTP responses HTTP code Response body 200 - OK MachineConfigPool schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified MachineConfigPool Table 6.26. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23.
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 6.27. Body parameters Parameter Type Description body Patch schema Table 6.28. HTTP responses HTTP code Response body 200 - OK MachineConfigPool schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified MachineConfigPool Table 6.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.30. Body parameters Parameter Type Description body MachineConfigPool schema Table 6.31. HTTP responses HTTP code Response body 200 - OK MachineConfigPool schema 201 - Created MachineConfigPool schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/machine_apis/machineconfigpool-machineconfiguration-openshift-io-v1 |
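For example, the machineconfigpools endpoints described above can be called with any HTTP client once a bearer token is available. The following is a minimal sketch using curl; the API server address api.example.com:6443, the worker pool name, and the use of the spec.paused field are illustrative assumptions rather than part of the reference above, and the same operations are more commonly performed with oc get machineconfigpool and oc patch machineconfigpool.

curl -k -H "Authorization: Bearer <token>" https://api.example.com:6443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/worker

curl -k -X PATCH -H "Authorization: Bearer <token>" -H "Content-Type: application/merge-patch+json" -d '{"spec":{"paused":true}}' "https://api.example.com:6443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/worker?dryRun=All"

The first call corresponds to the GET method for a named MachineConfigPool, and the second performs a partial update with the dryRun query parameter set to All so that nothing is persisted.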
6.3. RHEA-2013:0422 - new packages: libjpeg-turbo | 6.3. RHEA-2013:0422 - new packages: libjpeg-turbo New libjpeg-turbo packages are now available for Red Hat Enterprise Linux 6. The libjpeg-turbo packages contain a library of functions for manipulating JPEG images. They also contain simple client programs for accessing the libjpeg functions. These packages provide the same functionality and API as libjpeg but with better performance. This enhancement update adds the libjpeg-turbo packages to Red Hat Enterprise Linux 6. (BZ#788687) All users who require libjpeg-turbo are advised to install these new packages. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/rhea-2013-0422 |
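For example, on a Red Hat Enterprise Linux 6 system the new packages can be installed with yum; the package name below is taken from the advisory, and running the command as root is an assumption about your environment:

yum install libjpeg-turbo

Applications built against libjpeg continue to work unchanged, because the library provides the same API.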
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/red_hat_ansible_automation_platform_performance_considerations_for_operator_based_installations/making-open-source-more-inclusive |
Chapter 3. Usage | Chapter 3. Usage This chapter describes the necessary steps for using Red Hat Software Collections 3.5, and deploying applications that use Red Hat Software Collections. 3.1. Using Red Hat Software Collections 3.1.1. Running an Executable from a Software Collection To run an executable from a particular Software Collection, type the following command at a shell prompt: scl enable software_collection ... ' command ...' Or, alternatively, use the following command: scl enable software_collection ... -- command ... Replace software_collection with a space-separated list of Software Collections you want to use and command with the command you want to run. For example, to execute a Perl program stored in a file named hello.pl with the Perl interpreter from the perl526 Software Collection, type: You can execute any command using the scl utility, causing it to be run with the executables from a selected Software Collection in preference to their possible Red Hat Enterprise Linux system equivalents. For a complete list of Software Collections that are distributed with Red Hat Software Collections, see Table 1.1, "Red Hat Software Collections Components" . 3.1.2. Running a Shell Session with a Software Collection as Default To start a new shell session with executables from a selected Software Collection in preference to their Red Hat Enterprise Linux equivalents, type the following at a shell prompt: scl enable software_collection ... bash Replace software_collection with a space-separated list of Software Collections you want to use. For example, to start a new shell session with the python27 and rh-postgresql10 Software Collections as default, type: The list of Software Collections that are enabled in the current session is stored in the USDX_SCLS environment variable, for instance: For a complete list of Software Collections that are distributed with Red Hat Software Collections, see Table 1.1, "Red Hat Software Collections Components" . 3.1.3. Running a System Service from a Software Collection Running a System Service from a Software Collection in Red Hat Enterprise Linux 6 Software Collections that include system services install corresponding init scripts in the /etc/rc.d/init.d/ directory. To start such a service in the current session, type the following at a shell prompt as root : service software_collection - service_name start Replace software_collection with the name of the Software Collection and service_name with the name of the service you want to start. To configure this service to start automatically at boot time, type the following command as root : chkconfig software_collection - service_name on For example, to start the postgresql service from the rh-postgresql96 Software Collection and enable it in runlevels 2, 3, 4, and 5, type as root : For more information on how to manage system services in Red Hat Enterprise Linux 6, refer to the Red Hat Enterprise Linux 6 Deployment Guide . For a complete list of Software Collections that are distributed with Red Hat Software Collections, see Table 1.1, "Red Hat Software Collections Components" . Running a System Service from a Software Collection in Red Hat Enterprise Linux 7 In Red Hat Enterprise Linux 7, init scripts have been replaced by systemd service unit files, which end with the .service file extension and serve a similar purpose as init scripts. 
To start a service in the current session, execute the following command as root : systemctl start software_collection - service_name .service Replace software_collection with the name of the Software Collection and service_name with the name of the service you want to start. To configure this service to start automatically at boot time, type the following command as root : systemctl enable software_collection - service_name .service For example, to start the postgresql service from the rh-postgresql10 Software Collection and enable it at boot time, type as root : For more information on how to manage system services in Red Hat Enterprise Linux 7, refer to the Red Hat Enterprise Linux 7 System Administrator's Guide . For a complete list of Software Collections that are distributed with Red Hat Software Collections, see Table 1.1, "Red Hat Software Collections Components" . 3.2. Accessing a Manual Page from a Software Collection Every Software Collection contains a general manual page that describes the content of this component. Each manual page has the same name as the component and it is located in the /opt/rh directory. To read a manual page for a Software Collection, type the following command: scl enable software_collection 'man software_collection ' Replace software_collection with the particular Red Hat Software Collections component. For example, to display the manual page for rh-mariadb102 , type: 3.3. Deploying Applications That Use Red Hat Software Collections In general, you can use one of the following two approaches to deploy an application that depends on a component from Red Hat Software Collections in production: Install all required Software Collections and packages manually and then deploy your application, or Create a new Software Collection for your application and specify all required Software Collections and other packages as dependencies. For more information on how to manually install individual Red Hat Software Collections components, see Section 2.2, "Installing Red Hat Software Collections" . For further details on how to use Red Hat Software Collections, see Section 3.1, "Using Red Hat Software Collections" . For a detailed explanation of how to create a custom Software Collection or extend an existing one, read the Red Hat Software Collections Packaging Guide . 3.4. Red Hat Software Collections Container Images Container images based on Red Hat Software Collections include applications, daemons, and databases. The images can be run on Red Hat Enterprise Linux 7 Server and Red Hat Enterprise Linux Atomic Host. For information about their usage, see Using Red Hat Software Collections 3 Container Images . For details regarding container images based on Red Hat Software Collections versions 2.4 and earlier, see Using Red Hat Software Collections 2 Container Images . 
The following container images are available with Red Hat Software Collections 3.5: rhscl/perl-530-rhel7 rhscl/python-38-rhel7 rhscl/ruby-26-rhel7 rhscl/httpd-24-rhel7 rhscl/varnish-6-rhel7 rhscl/devtoolset-9-toolchain-rhel7 rhscl/devtoolset-9-perftools-rhel7 The following container images are based on Red Hat Software Collections 3.4: rhscl/nodejs-12-rhel7 rhscl/php-73-rhel7 rhscl/nginx-116-rhel7 rhscl/postgresql-12-rhel7 The following container images are based on Red Hat Software Collections 3.3: rhscl/mariadb-103-rhel7 rhscl/redis-5-rhel7 rhscl/ruby-26-rhel7 rhscl/devtoolset-8-toolchain-rhel7 rhscl/devtoolset-8-perftools-rhel7 The following container images are based on Red Hat Software Collections 3.2: rhscl/mysql-80-rhel7 rhscl/nginx-114-rhel7 rhscl/php-72-rhel7 rhscl/nodejs-10-rhel7 The following container images are based on Red Hat Software Collections 3.1: rhscl/mongodb-36-rhel7 rhscl/perl-526-rhel7 rhscl/postgresql-10-rhel7 rhscl/ruby-25-rhel7 rhscl/varnish-5-rhel7 The following container images are based on Red Hat Software Collections 3.0: rhscl/mariadb-102-rhel7 rhscl/mongodb-34-rhel7 rhscl/postgresql-96-rhel7 rhscl/python-36-rhel7 The following container images are based on Red Hat Software Collections 2: rhscl/python-27-rhel7 rhscl/s2i-base-rhel7 | [
"~]USD scl enable rh-perl526 'perl hello.pl' Hello, World!",
"~]USD scl enable python27 rh-postgresql10 bash",
"~]USD echo USDX_SCLS python27 rh-postgresql10",
"~]# service rh-postgresql96-postgresql start Starting rh-postgresql96-postgresql service: [ OK ] ~]# chkconfig rh-postgresql96-postgresql on",
"~]# systemctl start rh-postgresql10-postgresql.service ~]# systemctl enable rh-postgresql10-postgresql.service",
"~]USD scl enable rh-mariadb102 \"man rh-mariadb102\""
]
| https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/3.5_release_notes/chap-usage |
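To try one of the container images listed above, pull it from the Red Hat registry and start a shell in it. This is a minimal sketch: the registry host registry.redhat.io and the choice of the python-38-rhel7 image are illustrative, and it assumes you have already authenticated to the registry with podman login.

podman pull registry.redhat.io/rhscl/python-38-rhel7
podman run -it --rm registry.redhat.io/rhscl/python-38-rhel7 /bin/bash

Inside the container the Software Collection is typically enabled by default, so the collection's binaries can be used without calling scl enable.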
Chapter 7. alarm | Chapter 7. alarm This chapter describes the commands under the alarm command. 7.1. alarm create Create an alarm Usage: Table 7.1. Command arguments Value Summary -h, --help Show this help message and exit --name <NAME> Name of the alarm -t <TYPE>, --type <TYPE> Type of alarm, should be one of: event, composite, threshold, gnocchi_resources_threshold, gnocchi_aggregation_by_metrics_threshold, gnocchi_aggregation_by_resources_threshold, loadbalancer_member_health. --project-id <PROJECT_ID> Project to associate with alarm (configurable by admin users only) --user-id <USER_ID> User to associate with alarm (configurable by admin users only) --description <DESCRIPTION> Free text description of the alarm --state <STATE> State of the alarm, one of: [ ok , alarm , insufficient data ] --severity <SEVERITY> Severity of the alarm, one of: [ low , moderate , critical ] --enabled {True|False} True if alarm evaluation is enabled --alarm-action <Webhook URL> Url to invoke when state transitions to alarm. may be used multiple times --ok-action <Webhook URL> Url to invoke when state transitions to ok. may be used multiple times --insufficient-data-action <Webhook URL> Url to invoke when state transitions to insufficient data. May be used multiple times --time-constraint <Time Constraint> Only evaluate the alarm if the time at evaluation is within this time constraint. Start point(s) of the constraint are specified with a cron expression, whereas its duration is given in seconds. Can be specified multiple times for multiple time constraints, format is: name=<CONSTRAINT_NAME>;start=< CRON>;duration=<SECONDS>;[description=<DESCRIPTION>;[t imezone=<IANA Timezone>]] --repeat-actions {True|False} True if actions should be repeatedly notified while alarm remains in target state Table 7.2. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 7.3. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 7.4. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 7.5. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. Table 7.6. common alarm rules Value Summary --query <QUERY> For alarms of type threshold or event: key[op]data_type::value; list. data_type is optional, but if supplied must be string, integer, float, or boolean. For alarms of type gnocchi_aggregation_by_resources_threshold: need to specify a complex query json string, like: {"and": [{"=": {"ended_at": null}}, ... ]}. --comparison-operator <OPERATOR> Operator to compare with, one of: [ lt , le , eq , ne , ge , gt ] --evaluation-periods <EVAL_PERIODS> Number of periods to evaluate over --threshold <THRESHOLD> Threshold to evaluate against. Table 7.7. event alarm Value Summary --event-type <EVENT_TYPE> Event type to evaluate against Table 7.8. threshold alarm Value Summary -m <METER NAME>, --meter-name <METER NAME> Meter to evaluate against --period <PERIOD> Length of each period (seconds) to evaluate over. 
--statistic <STATISTIC> Statistic to evaluate, one of: [ max , min , avg , sum , count ] Table 7.9. common gnocchi alarm rules Value Summary --granularity <GRANULARITY> The time range in seconds over which to query. --aggregation-method <AGGR_METHOD> The aggregation_method to compare to the threshold. --metric <METRIC>, --metrics <METRIC> The metric id or name depending of the alarm type Table 7.10. gnocchi resource threshold alarm Value Summary --resource-type <RESOURCE_TYPE> The type of resource. --resource-id <RESOURCE_ID> The id of a resource. Table 7.11. composite alarm Value Summary --composite-rule <COMPOSITE_RULE> Composite threshold rule with json format, the form can be a nested dict which combine threshold/gnocchi rules by "and", "or". For example, the form is like: {"or":[RULE1, RULE2, {"and": [RULE3, RULE4]}]}, The RULEx can be basic threshold rules but must include a "type" field, like this: {"threshold": 0.8,"meter_name":"cpu_util","type":"threshold"} Table 7.12. loadbalancer member health alarm Value Summary --stack-id <STACK_NAME_OR_ID> Name or id of the root / top level heat stack containing the loadbalancer pool and members. An update will be triggered on the root Stack if an unhealthy member is detected in the loadbalancer pool. --pool-id <LOADBALANCER_POOL_NAME_OR_ID> Name or id of the loadbalancer pool for which the health of each member will be evaluated. --autoscaling-group-id <AUTOSCALING_GROUP_NAME_OR_ID> Id of the heat autoscaling group that contains the loadbalancer members. Unhealthy members will be marked as such before an update is triggered on the root stack. 7.2. alarm delete Delete an alarm Usage: Table 7.13. Positional arguments Value Summary <ALARM ID or NAME> Id or name of an alarm. Table 7.14. Command arguments Value Summary -h, --help Show this help message and exit --name <NAME> Name of the alarm 7.3. alarm-history search Show history for all alarms based on query Usage: Table 7.15. Command arguments Value Summary -h, --help Show this help message and exit --query QUERY Rich query supported by aodh, e.g. project_id!=my-id user_id=foo or user_id=bar Table 7.16. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 7.17. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 7.18. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 7.19. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 7.4. alarm-history show Show history for an alarm Usage: Table 7.20. Positional arguments Value Summary <alarm-id> Id of an alarm Table 7.21. 
Command arguments Value Summary -h, --help Show this help message and exit --limit <LIMIT> Number of resources to return (default is server default) --marker <MARKER> Last item of the listing. return the results after this value,the supported marker is event_id. --sort <SORT_KEY:SORT_DIR> Sort of resource attribute. e.g. timestamp:desc Table 7.22. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 7.23. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 7.24. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 7.25. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 7.5. alarm list List alarms Usage: Table 7.26. Command arguments Value Summary -h, --help Show this help message and exit --query QUERY Rich query supported by aodh, e.g. project_id!=my-id user_id=foo or user_id=bar --filter <KEY1=VALUE1;KEY2=VALUE2... > Filter parameters to apply on returned alarms. --limit <LIMIT> Number of resources to return (default is server default) --marker <MARKER> Last item of the listing. return the results after this value,the supported marker is alarm_id. --sort <SORT_KEY:SORT_DIR> Sort of resource attribute, e.g. name:asc Table 7.27. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 7.28. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 7.29. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 7.30. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 7.6. alarm quota set Command base class for displaying data about a single object. Usage: Table 7.31. Positional arguments Value Summary project Project id. Table 7.32. Command arguments Value Summary -h, --help Show this help message and exit --alarm ALARM New value for the alarm quota. 
value -1 means unlimited. Table 7.33. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 7.34. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 7.35. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 7.36. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 7.7. alarm quota show Show quota for a project Usage: Table 7.37. Command arguments Value Summary -h, --help Show this help message and exit --project PROJECT Project id. if not specified, get quota for the current project. Table 7.38. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 7.39. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 7.40. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 7.41. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 7.8. alarm show Show an alarm Usage: Table 7.42. Positional arguments Value Summary <ALARM ID or NAME> Id or name of an alarm. Table 7.43. Command arguments Value Summary -h, --help Show this help message and exit --name <NAME> Name of the alarm Table 7.44. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 7.45. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 7.46. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 7.47. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 7.9. alarm state get Get state of an alarm Usage: Table 7.48. Positional arguments Value Summary <ALARM ID or NAME> Id or name of an alarm. Table 7.49. Command arguments Value Summary -h, --help Show this help message and exit --name <NAME> Name of the alarm Table 7.50. 
Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 7.51. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 7.52. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 7.53. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 7.10. alarm state set Set state of an alarm Usage: Table 7.54. Positional arguments Value Summary <ALARM ID or NAME> Id or name of an alarm. Table 7.55. Command arguments Value Summary -h, --help Show this help message and exit --name <NAME> Name of the alarm --state <STATE> State of the alarm, one of: [ ok , alarm , insufficient data ] Table 7.56. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 7.57. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 7.58. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 7.59. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 7.11. alarm update Update an alarm Usage: Table 7.60. Positional arguments Value Summary <ALARM ID or NAME> Id or name of an alarm. Table 7.61. Command arguments Value Summary -h, --help Show this help message and exit --name <NAME> Name of the alarm -t <TYPE>, --type <TYPE> Type of alarm, should be one of: event, composite, threshold, gnocchi_resources_threshold, gnocchi_aggregation_by_metrics_threshold, gnocchi_aggregation_by_resources_threshold, loadbalancer_member_health. --project-id <PROJECT_ID> Project to associate with alarm (configurable by admin users only) --user-id <USER_ID> User to associate with alarm (configurable by admin users only) --description <DESCRIPTION> Free text description of the alarm --state <STATE> State of the alarm, one of: [ ok , alarm , insufficient data ] --severity <SEVERITY> Severity of the alarm, one of: [ low , moderate , critical ] --enabled {True|False} True if alarm evaluation is enabled --alarm-action <Webhook URL> Url to invoke when state transitions to alarm. may be used multiple times --ok-action <Webhook URL> Url to invoke when state transitions to ok. may be used multiple times --insufficient-data-action <Webhook URL> Url to invoke when state transitions to insufficient data. May be used multiple times --time-constraint <Time Constraint> Only evaluate the alarm if the time at evaluation is within this time constraint. 
Start point(s) of the constraint are specified with a cron expression, whereas its duration is given in seconds. Can be specified multiple times for multiple time constraints, format is: name=<CONSTRAINT_NAME>;start=< CRON>;duration=<SECONDS>;[description=<DESCRIPTION>;[t imezone=<IANA Timezone>]] --repeat-actions {True|False} True if actions should be repeatedly notified while alarm remains in target state Table 7.62. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 7.63. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 7.64. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 7.65. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. Table 7.66. common alarm rules Value Summary --query <QUERY> For alarms of type threshold or event: key[op]data_type::value; list. data_type is optional, but if supplied must be string, integer, float, or boolean. For alarms of type gnocchi_aggregation_by_resources_threshold: need to specify a complex query json string, like: {"and": [{"=": {"ended_at": null}}, ... ]}. --comparison-operator <OPERATOR> Operator to compare with, one of: [ lt , le , eq , ne , ge , gt ] --evaluation-periods <EVAL_PERIODS> Number of periods to evaluate over --threshold <THRESHOLD> Threshold to evaluate against. Table 7.67. event alarm Value Summary --event-type <EVENT_TYPE> Event type to evaluate against Table 7.68. threshold alarm Value Summary -m <METER NAME>, --meter-name <METER NAME> Meter to evaluate against --period <PERIOD> Length of each period (seconds) to evaluate over. --statistic <STATISTIC> Statistic to evaluate, one of: [ max , min , avg , sum , count ] Table 7.69. common gnocchi alarm rules Value Summary --granularity <GRANULARITY> The time range in seconds over which to query. --aggregation-method <AGGR_METHOD> The aggregation_method to compare to the threshold. --metric <METRIC>, --metrics <METRIC> The metric id or name depending of the alarm type Table 7.70. gnocchi resource threshold alarm Value Summary --resource-type <RESOURCE_TYPE> The type of resource. --resource-id <RESOURCE_ID> The id of a resource. Table 7.71. composite alarm Value Summary --composite-rule <COMPOSITE_RULE> Composite threshold rule with json format, the form can be a nested dict which combine threshold/gnocchi rules by "and", "or". For example, the form is like: {"or":[RULE1, RULE2, {"and": [RULE3, RULE4]}]}, The RULEx can be basic threshold rules but must include a "type" field, like this: {"threshold": 0.8,"meter_name":"cpu_util","type":"threshold"} Table 7.72. loadbalancer member health alarm Value Summary --stack-id <STACK_NAME_OR_ID> Name or id of the root / top level heat stack containing the loadbalancer pool and members. An update will be triggered on the root Stack if an unhealthy member is detected in the loadbalancer pool. 
--pool-id <LOADBALANCER_POOL_NAME_OR_ID> Name or id of the loadbalancer pool for which the health of each member will be evaluated. --autoscaling-group-id <AUTOSCALING_GROUP_NAME_OR_ID> Id of the heat autoscaling group that contains the loadbalancer members. Unhealthy members will be marked as such before an update is triggered on the root stack. | [
"openstack alarm create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] --name <NAME> -t <TYPE> [--project-id <PROJECT_ID>] [--user-id <USER_ID>] [--description <DESCRIPTION>] [--state <STATE>] [--severity <SEVERITY>] [--enabled {True|False}] [--alarm-action <Webhook URL>] [--ok-action <Webhook URL>] [--insufficient-data-action <Webhook URL>] [--time-constraint <Time Constraint>] [--repeat-actions {True|False}] [--query <QUERY>] [--comparison-operator <OPERATOR>] [--evaluation-periods <EVAL_PERIODS>] [--threshold <THRESHOLD>] [--event-type <EVENT_TYPE>] [-m <METER NAME>] [--period <PERIOD>] [--statistic <STATISTIC>] [--granularity <GRANULARITY>] [--aggregation-method <AGGR_METHOD>] [--metric <METRIC>] [--resource-type <RESOURCE_TYPE>] [--resource-id <RESOURCE_ID>] [--composite-rule <COMPOSITE_RULE>] [--stack-id <STACK_NAME_OR_ID>] [--pool-id <LOADBALANCER_POOL_NAME_OR_ID>] [--autoscaling-group-id <AUTOSCALING_GROUP_NAME_OR_ID>]",
"openstack alarm delete [-h] [--name <NAME>] [<ALARM ID or NAME>]",
"openstack alarm-history search [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--query QUERY]",
"openstack alarm-history show [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--limit <LIMIT>] [--marker <MARKER>] [--sort <SORT_KEY:SORT_DIR>] <alarm-id>",
"openstack alarm list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--query QUERY | --filter <KEY1=VALUE1;KEY2=VALUE2...>] [--limit <LIMIT>] [--marker <MARKER>] [--sort <SORT_KEY:SORT_DIR>]",
"openstack alarm quota set [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--alarm ALARM] project",
"openstack alarm quota show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--project PROJECT]",
"openstack alarm show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--name <NAME>] [<ALARM ID or NAME>]",
"openstack alarm state get [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--name <NAME>] [<ALARM ID or NAME>]",
"openstack alarm state set [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--name <NAME>] --state <STATE> [<ALARM ID or NAME>]",
"openstack alarm update [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--name <NAME>] [-t <TYPE>] [--project-id <PROJECT_ID>] [--user-id <USER_ID>] [--description <DESCRIPTION>] [--state <STATE>] [--severity <SEVERITY>] [--enabled {True|False}] [--alarm-action <Webhook URL>] [--ok-action <Webhook URL>] [--insufficient-data-action <Webhook URL>] [--time-constraint <Time Constraint>] [--repeat-actions {True|False}] [--query <QUERY>] [--comparison-operator <OPERATOR>] [--evaluation-periods <EVAL_PERIODS>] [--threshold <THRESHOLD>] [--event-type <EVENT_TYPE>] [-m <METER NAME>] [--period <PERIOD>] [--statistic <STATISTIC>] [--granularity <GRANULARITY>] [--aggregation-method <AGGR_METHOD>] [--metric <METRIC>] [--resource-type <RESOURCE_TYPE>] [--resource-id <RESOURCE_ID>] [--composite-rule <COMPOSITE_RULE>] [--stack-id <STACK_NAME_OR_ID>] [--pool-id <LOADBALANCER_POOL_NAME_OR_ID>] [--autoscaling-group-id <AUTOSCALING_GROUP_NAME_OR_ID>] [<ALARM ID or NAME>]"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/command_line_interface_reference/alarm |
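As a concrete illustration of the options documented above, the following creates a Gnocchi resource threshold alarm that fires when the cpu_util metric of one instance stays above 80 for three evaluation periods; the metric name, threshold, granularity, and the instance UUID placeholder are example values, not defaults:

openstack alarm create --name cpu-high --type gnocchi_resources_threshold --metric cpu_util --resource-type instance --resource-id <instance-uuid> --aggregation-method mean --granularity 300 --evaluation-periods 3 --comparison-operator gt --threshold 80 --alarm-action 'log://'

openstack alarm list
openstack alarm-history show <alarm-id>

The second and third commands list the alarms visible to the current project and show the state transitions recorded for one of them.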
Chapter 2. Understanding build configurations | Chapter 2. Understanding build configurations The following sections define the concept of a build, build configuration, and outline the primary build strategies available. 2.1. BuildConfigs A build configuration describes a single build definition and a set of triggers for when a new build is created. Build configurations are defined by a BuildConfig , which is a REST object that can be used in a POST to the API server to create a new instance. A build configuration, or BuildConfig , is characterized by a build strategy and one or more sources. The strategy determines the process, while the sources provide its input. Depending on how you choose to create your application using OpenShift Container Platform, a BuildConfig is typically generated automatically for you if you use the web console or CLI, and it can be edited at any time. Understanding the parts that make up a BuildConfig and their available options can help if you choose to manually change your configuration later. The following example BuildConfig results in a new build every time a container image tag or the source code changes: BuildConfig object definition kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: "ruby-sample-build" 1 spec: runPolicy: "Serial" 2 triggers: 3 - type: "GitHub" github: secret: "secret101" - type: "Generic" generic: secret: "secret101" - type: "ImageChange" source: 4 git: uri: "https://github.com/openshift/ruby-hello-world" strategy: 5 sourceStrategy: from: kind: "ImageStreamTag" name: "ruby-20-centos7:latest" output: 6 to: kind: "ImageStreamTag" name: "origin-ruby-sample:latest" postCommit: 7 script: "bundle exec rake test" 1 This specification creates a new BuildConfig named ruby-sample-build . 2 The runPolicy field controls whether builds created from this build configuration can be run simultaneously. The default value is Serial , which means new builds run sequentially, not simultaneously. 3 You can specify a list of triggers, which cause a new build to be created. 4 The source section defines the source of the build. The source type determines the primary source of input, and can be either Git , to point to a code repository location, Dockerfile , to build from an inline Dockerfile, or Binary , to accept binary payloads. It is possible to have multiple sources at once. See the documentation for each source type for details. 5 The strategy section describes the build strategy used to execute the build. You can specify a Source , Docker , or Custom strategy here. This example uses the ruby-20-centos7 container image that Source-to-image (S2I) uses for the application build. 6 After the container image is successfully built, it is pushed into the repository described in the output section. 7 The postCommit section defines an optional build hook. | [
"kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: \"ruby-sample-build\" 1 spec: runPolicy: \"Serial\" 2 triggers: 3 - type: \"GitHub\" github: secret: \"secret101\" - type: \"Generic\" generic: secret: \"secret101\" - type: \"ImageChange\" source: 4 git: uri: \"https://github.com/openshift/ruby-hello-world\" strategy: 5 sourceStrategy: from: kind: \"ImageStreamTag\" name: \"ruby-20-centos7:latest\" output: 6 to: kind: \"ImageStreamTag\" name: \"origin-ruby-sample:latest\" postCommit: 7 script: \"bundle exec rake test\""
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/builds_using_buildconfig/understanding-buildconfigs |
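With a BuildConfig such as ruby-sample-build in place, builds are usually inspected and started from the CLI. The commands below are a brief sketch; they assume the oc client is logged in to the project that contains the build configuration:

oc describe bc/ruby-sample-build
oc start-build ruby-sample-build --follow

oc start-build creates a new build from the configuration immediately, in addition to any builds created by the GitHub, Generic, or ImageChange triggers defined in the example above, and --follow streams the build log until the build completes.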
Camel Extensions for Quarkus Reference | Camel Extensions for Quarkus Reference Red Hat build of Apache Camel Extensions for Quarkus 2.13 Camel Extensions for Quarkus provided by Red Hat Camel Extensions for Quarkus Documentation Team [email protected] Camel Extensions for Quarkus Support Team http://access.redhat.com/support | null | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_extensions_for_quarkus/2.13/html/camel_extensions_for_quarkus_reference/index |
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate your input on our documentation. Tell us how we can make it better. Providing documentation feedback in Jira Use the Create Issue form to provide feedback on the documentation for Red Hat OpenStack Services on OpenShift (RHOSO) or earlier releases of {osp_long} ({osp_acro}). When you create an issue for RHOSO or {osp_acro} documents, the issue is recorded in the RHOSO Jira project, where you can track the progress of your feedback. To complete the Create Issue form, ensure that you are logged in to Jira. If you do not have a Red Hat Jira account, you can create an account at https://issues.redhat.com . Click the following link to open a Create Issue page: Create Issue Complete the Summary and Description fields. In the Description field, include the documentation URL, chapter or section number, and a detailed description of the issue. Do not modify any other fields in the form. Click Create . | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/introduction_to_red_hat_openstack_platform/proc_providing-feedback-on-red-hat-documentation |
Chapter 5. Infrastructure Security | Chapter 5. Infrastructure Security The scope of this guide is Red Hat Ceph Storage. However, a proper Red Hat Ceph Storage security plan requires consideration of the following prerequisites. 5.1. Prerequisites Review the Red Hat Enterprise Linux 8 Security Hardening Guide . Review the Red Hat Enterprise Linux 8 Using SELinux Guide . 5.2. Administration Administering a Red Hat Ceph Storage cluster involves using command line tools. The CLI tools require an administrator key for administrator access privileges to the cluster. By default, Ceph stores the administrator key in the /etc/ceph directory. The default file name is ceph.client.admin.keyring . Take steps to secure the keyring so that only a user with administrative privileges to the cluster may access the keyring. 5.3. Network Communication Red Hat Ceph Storage provides two networks: A public network. A cluster network. All Ceph daemons and Ceph clients require access to the public network, which is part of the storage access security zone . By contrast, ONLY the OSD daemons require access to the cluster network, which is part of the Ceph cluster security zone . The Ceph configuration contains public_network and cluster_network settings. For hardening purposes, specify the IP address and the netmask using CIDR notation. Specify multiple comma-delimited IP address and netmask entries if the cluster will have multiple subnets. See the Ceph network configuration section of the Red Hat Ceph Storage Configuration Guide for details. 5.4. Hardening the Network Service System administrators deploy Red Hat Ceph Storage clusters on Red Hat Enterprise Linux 8 Server. SELinux is on by default and the firewall blocks all inbound traffic except for the SSH service port 22 ; however, you MUST ensure that this is the case so that no other unauthorized ports are open or unnecessary services are enabled. On each server node, execute the following: Start the firewalld service, enable it to run on boot, and ensure that it is running: Take an inventory of all open ports. On a new installation, the sources: section should be blank indicating that no ports have been opened specifically. The services section should indicate ssh indicating that the SSH service (and port 22 ) and dhcpv6-client are enabled. Ensure SELinux is running and Enforcing . If SELinux is Permissive , set it to Enforcing . If SELinux is not running, enable it. See the Red Hat Enterprise Linux 8 Using SELinux Guide for details. Each Ceph daemon uses one or more ports to communicate with other daemons in the Red Hat Ceph Storage cluster. In some cases, you may change the default port settings. Administrators typically only change the default port with the Ceph Object Gateway or ceph-radosgw daemon. Table 5.1. Ceph Ports TCP/UDP Port Daemon Configuration Option 8080 ceph-radosgw rgw_frontends 6789, 3300 ceph-mon N/A 6800-7300 ceph-osd ms_bind_port_min to ms_bind_port_max 6800-7300 ceph-mgr ms_bind_port_min to ms_bind_port_max 6800 ceph-mds N/A The Ceph Storage Cluster daemons include ceph-mon , ceph-mgr , and ceph-osd . These daemons and their hosts comprise the Ceph cluster security zone, which should use its own subnet for hardening purposes. The Ceph clients include ceph-radosgw , ceph-mds , ceph-fuse , libcephfs , rbd , librbd , and librados . These daemons and their hosts comprise the storage access security zone, which should use its own subnet for hardening purposes. 
On the Ceph Storage Cluster zone's hosts, consider enabling only hosts running Ceph clients to connect to the Ceph Storage Cluster daemons. For example: Replace <zone-name> with the zone name, <ipaddress> with the IP address, <netmask> with the subnet mask in CIDR notation, and <port-number> with the port number or range. Repeat the process with the --permanent flag so that the changes persist after reboot. For example: 5.5. Reporting Red Hat Ceph Storage provides basic system monitoring and reporting with the ceph-mgr daemon plug-ins, namely, the RESTful API, the dashboard, and other plug-ins such as Prometheus and Zabbix . Ceph collects this information using collectd and sockets to retrieve settings, configuration details, and statistical information. In addition to default system behavior, system administrators may configure collectd to report on security matters, such as configuring the IP-Tables or ConnTrack plug-ins to track open ports and connections respectively. System administrators may also retrieve configuration settings at runtime. See Viewing the Ceph configuration at runtime . 5.6. Auditing Administrator Actions An important aspect of system security is to periodically audit administrator actions on the cluster. Red Hat Ceph Storage stores a history of administrator actions in the /var/log/ceph/CLUSTER_FSID/ceph.audit.log file. Run the following command on the monitor host. Example Each entry will contain: Timestamp: Indicates when the command was executed. Monitor Address: Identifies the monitor modified. Client Node: Identifies the client node initiating the change. Entity: Identifies the user making the change. Command: Identifies the command executed. The following is an output of the Ceph audit log: In distributed systems such as Ceph, actions may begin on one instance and get propagated to other nodes in the cluster. When the action begins, the log indicates dispatch . When the action ends, the log indicates finished . | [
"public_network = <public-network/netmask>[,<public-network/netmask>] cluster_network = <cluster-network/netmask>[,<cluster-network/netmask>]",
"systemctl enable firewalld systemctl start firewalld systemctl status firewalld",
"firewall-cmd --list-all",
"sources: services: ssh dhcpv6-client",
"getenforce Enforcing",
"setenforce 1",
"firewall-cmd --zone=<zone-name> --add-rich-rule=\"rule family=\"ipv4\" source address=\"<ip-address>/<netmask>\" port protocol=\"tcp\" port=\"<port-number>\" accept\"",
"firewall-cmd --zone=<zone-name> --add-rich-rule=\"rule family=\"ipv4\" source address=\"<ip-address>/<netmask>\" port protocol=\"tcp\" port=\"<port-number>\" accept\" --permanent",
"cat /var/log/ceph/6c58dfb8-4342-11ee-a953-fa163e843234/ceph.audit.log",
"2023-09-01T10:20:21.445990+0000 mon.host01 (mon.0) 122301 : audit [DBG] from='mgr.14189 10.0.210.22:0/1157748332' entity='mgr.host01.mcadea' cmd=[{\"prefix\": \"config generate-minimal-conf\"}]: dispatch 2023-09-01T10:20:21.446972+0000 mon.host01 (mon.0) 122302 : audit [INF] from='mgr.14189 10.0.210.22:0/1157748332' entity='mgr.host01.mcadea' cmd=[{\"prefix\": \"auth get\", \"entity\": \"client.admin\"}]: dispatch 2023-09-01T10:20:21.453790+0000 mon.host01 (mon.0) 122303 : audit [INF] from='mgr.14189 10.0.210.22:0/1157748332' entity='mgr.host01.mcadea' 2023-09-01T10:20:21.457119+0000 mon.host01 (mon.0) 122304 : audit [DBG] from='mgr.14189 10.0.210.22:0/1157748332' entity='mgr.host01.mcadea' cmd=[{\"prefix\": \"osd tree\", \"states\": [\"destroyed\"], \"format\": \"json\"}]: dispatch 2023-09-01T10:20:30.671816+0000 mon.host01 (mon.0) 122305 : audit [DBG] from='mgr.14189 10.0.210.22:0/1157748332' entity='mgr.host01.mcadea' cmd=[{\"prefix\": \"osd blocklist ls\", \"format\": \"json\"}]: dispatch"
]
| https://docs.redhat.com/en/documentation/red_hat_ceph_storage/5/html/data_security_and_hardening_guide/assembly-infrastructure-security |
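As a concrete instance of the rich rule template shown above, the following opens the Ceph OSD port range only to hosts on one subnet; the zone name public, the subnet 192.168.122.0/24, and the port range 6800-7300 (taken from Table 5.1) are example values that must be adapted to your own zones and networks:

firewall-cmd --zone=public --add-rich-rule="rule family=\"ipv4\" source address=\"192.168.122.0/24\" port protocol=\"tcp\" port=\"6800-7300\" accept"
firewall-cmd --zone=public --add-rich-rule="rule family=\"ipv4\" source address=\"192.168.122.0/24\" port protocol=\"tcp\" port=\"6800-7300\" accept" --permanent

The first command changes the runtime configuration and the second makes the same rule persistent across reboots, mirroring the two-step pattern used in the section above.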
Chapter 21. Configuring a Linux instance on 64-bit IBM Z | Chapter 21. Configuring a Linux instance on 64-bit IBM Z This section describes most of the common tasks for installing Red Hat Enterprise Linux on 64-bit IBM Z. 21.1. Adding DASDs to a z/VM system Direct Access Storage Devices (DASDs) are a type of storage commonly used with 64-bit IBM Z. For more information, see Working with DASDs in the IBM Knowledge Center. The following example is how to set a DASD online, format it, and make the change persistent. Verify that the device is attached or linked to the Linux system if running under z/VM. To link a mini disk to which you have access, run the following commands: 21.2. Dynamically setting DASDs online This section contains information about setting a DASD online. Procedure Use the cio_ignore utility to remove the DASD from the list of ignored devices and make it visible to Linux: Replace device_number with the device number of the DASD. For example: Set the device online. Use a command of the following form: Replace device_number with the device number of the DASD. For example: For instructions on how to set a DASD online persistently, see Persistently setting DASDs online . 21.3. Preparing a new DASD with low-level formatting Once the disk is online, change back to the /root directory and low-level format the device. This is only required once for a DASD during its entire lifetime: When the progress bar reaches the end and the format is complete, dasdfmt prints the following output: Now, use fdasd to partition the DASD. You can create up to three partitions on a DASD. In our example here, we create one partition spanning the whole disk: After a (low-level formatted) DASD is online, it can be used like any other disk under Linux. For example, you can create file systems, LVM physical volumes, or swap space on its partitions, for example /dev/disk/by-path/ccw-0.0.4b2e-part1 . Never use the full DASD device ( dev/dasdb ) for anything but the commands dasdfmt and fdasd . If you want to use the entire DASD, create one partition spanning the entire drive as in the fdasd example above. To add additional disks later without breaking existing disk entries in, for example, /etc/fstab , use the persistent device symbolic links under /dev/disk/by-path/ . 21.4. Persistently setting DASDs online The above instructions described how to activate DASDs dynamically in a running system. However, such changes are not persistent and do not survive a reboot. Making changes to the DASD configuration persistent in your Linux system depends on whether the DASDs belong to the root file system. Those DASDs required for the root file system need to be activated very early during the boot process by the initramfs to be able to mount the root file system. The cio_ignore commands are handled transparently for persistent device configurations and you do not need to free devices from the ignore list manually. 21.5. DASDs that are part of the root file system The file you have to modify to add DASDs that are part of the root file system has changed in Red Hat Enterprise Linux 8. Instead of editing the /etc/zipl.conf file, the new file to be edited, and its location, may be found by running the following commands: There is one boot option to activate DASDs early in the boot process: rd.dasd= . This option takes a Direct Access Storage Device (DASD) adapter device bus identifier. For multiple DASDs, specify the parameter multiple times, or use a comma separated list of bus IDs. 
To specify a range of DASDs, specify the first and the last bus ID. Below is an example of the /boot/loader/entries/4ab74e52867b4f998e73e06cf23fd761-4.18.0-80.el8.s390x.conf file for a system that uses physical volumes on partitions of two DASDs for an LVM volume group vg_devel1 that contains a logical volume lv_root for the root file system. To add another physical volume on a partition of a third DASD with device bus ID 0.0.202b , add rd.dasd=0.0.202b to the parameters line of your boot kernel in /boot/loader/entries/4ab74e52867b4f998e73e06cf23fd761-4.18.0-32.el8.s390x.conf : Warning Make sure the length of the kernel command line in the configuration file does not exceed 896 bytes. Otherwise, the boot loader cannot be saved, and the installation fails. Run zipl to apply the changes of the configuration file for the IPL: 21.6. DASDs that are not part of the root file system Direct Access Storage Devices (DASDs) that are not part of the root file system, that is, data disks , are persistently configured in the /etc/dasd.conf file. This file contains one DASD per line, where each line begins with the DASD's bus ID. When adding a DASD to the /etc/dasd.conf file, use key-value pairs to specify the options for each entry. Separate the key and its value with an equal (=) sign. When adding multiple options, use a space or a tab to separate each option. Example /etc/dasd.conf file Changes to the /etc/dasd.conf file take effect after a system reboot or after a new DASD is dynamically added by changing the system's I/O configuration (that is, the DASD is attached under z/VM). Alternatively, to activate a DASD that you have added to the /etc/dasd.conf file, complete the following steps: Remove the DASD from the list of ignored devices and make it visible using the cio_ignore utility: where device_number is the DASD device number. For example, if the device number is 021a , run: Activate the DASD by writing to the device's uevent attribute: where dasd-bus-ID is the DASD's bus ID. For example, if the bus ID is 0.0.021a , run: 21.7. FCP LUNs that are part of the root file system The only file you have to modify for adding FCP LUNs that are part of the root file system has changed in Red Hat Enterprise Linux 8. Instead of editing the /etc/zipl.conf file, the new file to be edited, and its location, may be found by running the following commands: Red Hat Enterprise Linux provides a parameter to activate FCP LUNs early in the boot process: rd.zfcp= . The value is a comma-separated list containing the FCP device bus ID, the target WWPN as a 16-digit hexadecimal number prefixed with 0x , and the FCP LUN prefixed with 0x and padded with zeroes to the right to have 16 hexadecimal digits. The WWPN and FCP LUN values are only necessary if the zFCP device is not configured in NPIV mode, when auto LUN scanning is disabled by the zfcp.allow_lun_scan=0 kernel module parameter or when installing RHEL-8.6 or older releases. Otherwise they can be omitted, for example, rd.zfcp=0.0.4000 . Below is an example of the /boot/loader/entries/4ab74e52867b4f998e73e06cf23fd761-4.18.0-80.el8.s390x.conf file for a system that uses physical volumes on partitions of two FCP LUNs for an LVM volume group vg_devel1 that contains a logical volume lv_root for the root file system. For simplicity, the example shows a configuration without multipathing.
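If you need to confirm the device bus ID, WWPN, and FCP LUN values to use with rd.zfcp= , you can list the zFCP units already attached to the running system; a minimal sketch, assuming the s390utils package that provides lszfcp is installed:

# List attached FCP LUNs as device-bus-ID/WWPN/LUN followed by the SCSI address
lszfcp -D

# Illustrative output only (placeholder values):
# 0.0.fc00/0x5105074308c212e9/0x401040a000000000 0:0:0:0
# 0.0.fc00/0x5105074308c212e9/0x401040a100000000 0:0:0:1

Each line corresponds to one rd.zfcp= entry of the form shown in the boot entry examples discussed next.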
To add another physical volume on a partition of a third FCP LUN with device bus ID 0.0.fc00, WWPN 0x5105074308c212e9 and FCP LUN 0x401040a300000000, add rd.zfcp=0.0.fc00,0x5105074308c212e9,0x401040a300000000 to the parameters line of your boot kernel in /boot/loader/entries/4ab74e52867b4f998e73e06cf23fd761-4.18.0-32.el8.s390x.conf . For example: Warning Make sure the length of the kernel command line in the configuration file does not exceed 896 bytes. Otherwise, the boot loader cannot be saved, and the installation fails. Run dracut -f to update the initial RAM disk of your target kernel. Run zipl to apply the changes of the configuration file for the IPL: 21.8. FCP LUNs that are not part of the root file system FCP LUNs that are not part of the root file system, such as data disks, are persistently configured in the file /etc/zfcp.conf . It contains one FCP LUN per line. Each line contains the device bus ID of the FCP adapter, the target WWPN as 16 digit hexadecimal number prefixed with 0x , and the FCP LUN prefixed with 0x and padded with zeroes to the right to have 16 hexadecimal digits, separated by a space or tab. The WWPN and FCP LUN values are only necessary if the zFCP device is not configured in NPIV mode, when auto LUN scanning is disabled by the zfcp.allow_lun_scan=0 kernel module parameter or when installing RHEL-8.6 or older releases. Otherwise they can be omitted and only the device bus ID is mandatory. Entries in /etc/zfcp.conf are activated and configured by udev when an FCP adapter is added to the system. At boot time, all FCP adapters visible to the system are added and trigger udev . Example content of /etc/zfcp.conf : Modifications of /etc/zfcp.conf only become effective after a reboot of the system or after the dynamic addition of a new FCP channel by changing the system's I/O configuration (for example, a channel is attached under z/VM). Alternatively, you can trigger the activation of a new entry in /etc/zfcp.conf for an FCP adapter which was previously not active, by executing the following commands: Use the cio_ignore utility to remove the FCP adapter from the list of ignored devices and make it visible to Linux: Replace device_number with the device number of the FCP adapter. For example: To trigger the uevent that activates the change, issue: For example: 21.9. Adding a qeth device The qeth network device driver supports 64-bit IBM Z OSA-Express features in QDIO mode, HiperSockets, z/VM guest LAN, and z/VM VSWITCH. For more information about the qeth device driver naming scheme, see Customizing boot parameters . 21.10. Dynamically adding a qeth device This section contains information about how to add a qeth device dynamically. Procedure Determine whether the qeth device driver modules are loaded. The following example shows loaded qeth modules: If the output of the lsmod command shows that the qeth modules are not loaded, run the modprobe command to load them: Use the cio_ignore utility to remove the network channels from the list of ignored devices and make them visible to Linux: Replace read_device_bus_id , write_device_bus_id , data_device_bus_id with the three device bus IDs representing a network device. 
For example, if the read_device_bus_id is 0.0.f500 , the write_device_bus_id is 0.0.f501 , and the data_device_bus_id is 0.0.f502 : Use the znetconf utility to sense and list candidate configurations for network devices: Select the configuration you want to work with and use znetconf to apply the configuration and to bring the configured group device online as network device. Optional: You can also pass arguments that are configured on the group device before it is set online: Now you can continue to configure the encf500 network interface. Alternatively, you can use sysfs attributes to set the device online as follows: Create a qeth group device: For example: , verify that the qeth group device was created properly by looking for the read channel: You can optionally set additional parameters and features, depending on the way you are setting up your system and the features you require, such as: portno layer2 portname Bring the device online by writing 1 to the online sysfs attribute: Then verify the state of the device: A return value of 1 indicates that the device is online, while a return value 0 indicates that the device is offline. Find the interface name that was assigned to the device: Now you can continue to configure the encf500 network interface. The following command from the s390utils package shows the most important settings of your qeth device: 21.11. Persistently adding a qeth device To make your new qeth device persistent, you need to create the configuration file for your new interface. The network interface configuration files are placed in the /etc/sysconfig/network-scripts/ directory. The network configuration files use the naming convention ifcfg- device , where device is the value found in the if_name file in the qeth group device that was created earlier, for example enc9a0 . The cio_ignore commands are handled transparently for persistent device configurations and you do not need to free devices from the ignore list manually. If a configuration file for another device of the same type already exists, the simplest way to add the config file is to copy it to the new name and then edit it: To learn IDs of your network devices, use the lsqeth utility: If you do not have a similar device defined, you must create a new file. Use this example of /etc/sysconfig/network-scripts/ifcfg-0.0.09a0 as a template: Edit the new ifcfg-0.0.0600 file as follows: Modify the DEVICE statement to reflect the contents of the if_name file from your ccw group. Modify the IPADDR statement to reflect the IP address of your new interface. Modify the NETMASK statement as needed. If the new interface is to be activated at boot time, then make sure ONBOOT is set to yes . Make sure the SUBCHANNELS statement matches the hardware addresses for your qeth device. Modify the PORTNAME statement or leave it out if it is not necessary in your environment. You can add any valid sysfs attribute and its value to the OPTIONS parameter. The Red Hat Enterprise Linux installation program currently uses this to configure the layer mode ( layer2 ) and the relative port number ( portno ) of qeth devices. The qeth device driver default for OSA devices is now layer 2 mode. To continue using old ifcfg definitions that rely on the default of layer 3 mode, add layer2=0 to the OPTIONS parameter. 
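For instance, a minimal sketch of what such a layer 3 OPTIONS line could look like in an ifcfg file (the port number is an assumed value):

OPTIONS='layer2=0 portno=0'

The complete ifcfg example that follows keeps the layer 2 default instead ( layer2=1 ).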
/etc/sysconfig/network-scripts/ifcfg-0.0.0600 Changes to an ifcfg file only become effective after rebooting the system or after the dynamic addition of new network device channels by changing the system's I/O configuration (for example, attaching under z/VM). Alternatively, you can trigger the activation of a ifcfg file for network channels which were previously not active yet, by executing the following commands: Use the cio_ignore utility to remove the network channels from the list of ignored devices and make them visible to Linux: Replace read_device_bus_id , write_device_bus_id , data_device_bus_id with the three device bus IDs representing a network device. For example, if the read_device_bus_id is 0.0.0600 , the write_device_bus_id is 0.0.0601 , and the data_device_bus_id is 0.0.0602 : To trigger the uevent that activates the change, issue: For example: Check the status of the network device: Now start the new interface: Check the status of the interface: Check the routing for the new interface: Verify your changes by using the ping utility to ping the gateway or another host on the subnet of the new device: If the default route information has changed, you must also update /etc/sysconfig/network accordingly. Additional resources nm-settings-keyfile man page on your system 21.12. Configuring an 64-bit IBM Z network device for network root file system To add a network device that is required to access the root file system, you only have to change the boot options. The boot options can be in a parameter file, however, the /etc/zipl.conf file no longer contains specifications of the boot records. The file that needs to be modified can be located using the following commands: Dracut , the mkinitrd successor that provides the functionality in the initramfs that in turn replaces initrd , provides a boot parameter to activate network devices on 64-bit IBM Z early in the boot process: rd.znet= . As input, this parameter takes a comma-separated list of the NETTYPE (qeth, lcs, ctc), two (lcs, ctc) or three (qeth) device bus IDs, and optional additional parameters consisting of key-value pairs corresponding to network device sysfs attributes. This parameter configures and activates the 64-bit IBM Z network hardware. The configuration of IP addresses and other network specifics works the same as for other platforms. See the dracut documentation for more details. The cio_ignore commands for the network channels are handled transparently on boot. Example boot options for a root file system accessed over the network through NFS: 21.13. Additional resources Device Drivers, Features, and Commands on RHEL . | [
"CP ATTACH EB1C TO *",
"CP LINK RHEL7X 4B2E 4B2E MR DASD 4B2E LINKED R/W",
"cio_ignore -r device_number",
"cio_ignore -r 4b2e",
"chccwdev -e device_number",
"chccwdev -e 4b2e",
"cd /root # dasdfmt -b 4096 -d cdl -p /dev/disk/by-path/ccw-0.0.4b2e Drive Geometry: 10017 Cylinders * 15 Heads = 150255 Tracks I am going to format the device /dev/disk/by-path/ccw-0.0.4b2e in the following way: Device number of device : 0x4b2e Labelling device : yes Disk label : VOL1 Disk identifier : 0X4B2E Extent start (trk no) : 0 Extent end (trk no) : 150254 Compatible Disk Layout : yes Blocksize : 4096 --->> ATTENTION! <<--- All data of that device will be lost. Type \"yes\" to continue, no will leave the disk untouched: yes cyl 97 of 3338 |#----------------------------------------------| 2%",
"Rereading the partition table Exiting",
"fdasd -a /dev/disk/by-path/ccw-0.0.4b2e reading volume label ..: VOL1 reading vtoc ..........: ok auto-creating one partition for the whole disk writing volume label writing VTOC rereading partition table",
"machine_id=USD(cat /etc/machine-id) kernel_version=USD(uname -r) ls /boot/loader/entries/USDmachine_id-USDkernel_version.conf",
"title Red Hat Enterprise Linux (4.18.0-80.el8.s390x) 8.0 (Ootpa) version 4.18.0-80.el8.s390x linux /boot/vmlinuz-4.18.0-80.el8.s390x initrd /boot/initramfs-4.18.0-80.el8.s390x.img options root=/dev/mapper/vg_devel1-lv_root crashkernel=auto rd.dasd=0.0.0200 rd.dasd=0.0.0207 rd.lvm.lv=vg_devel1/lv_root rd.lvm.lv=vg_devel1/lv_swap cio_ignore=all,!condev rd.znet=qeth,0.0.0a00,0.0.0a01,0.0.0a02,layer2=1,portno=0 id rhel-20181027190514-4.18.0-80.el8.s390x grub_users USDgrub_users grub_arg --unrestricted grub_class kernel",
"title Red Hat Enterprise Linux (4.18.0-80.el8.s390x) 8.0 (Ootpa) version 4.18.0-80.el8.s390x linux /boot/vmlinuz-4.18.0-80.el8.s390x initrd /boot/initramfs-4.18.0-80.el8.s390x.img options root=/dev/mapper/vg_devel1-lv_root crashkernel=auto rd.dasd=0.0.0200 rd.dasd=0.0.0207 rd.dasd=0.0.202b rd.lvm.lv=vg_devel1/lv_root rd.lvm.lv=vg_devel1/lv_swap cio_ignore=all,!condev rd.znet=qeth,0.0.0a00,0.0.0a01,0.0.0a02,layer2=1,portno=0 id rhel-20181027190514-4.18.0-80.el8.s390x grub_users USDgrub_users grub_arg --unrestricted grub_class kernel",
"zipl -V Using config file '/etc/zipl.conf' Using BLS config file '/boot/loader/entries/4ab74e52867b4f998e73e06cf23fd761-4.18.0-80.el8.s390x.conf' Target device information Device..........................: 5e:00 Partition.......................: 5e:01 Device name.....................: dasda Device driver name..............: dasd DASD device number..............: 0201 Type............................: disk partition Disk layout.....................: ECKD/compatible disk layout Geometry - heads................: 15 Geometry - sectors..............: 12 Geometry - cylinders............: 13356 Geometry - start................: 24 File system block size..........: 4096 Physical block size.............: 4096 Device size in physical blocks..: 262152 Building bootmap in '/boot' Building menu 'zipl-automatic-menu' Adding #1: IPL section '4.18.0-80.el8.s390x' (default) initial ramdisk...: /boot/initramfs-4.18.0-80.el8.s390x.img kernel image......: /boot/vmlinuz-4.18.0-80.el8.s390x kernel parmline...: 'root=/dev/mapper/vg_devel1-lv_root crashkernel=auto rd.dasd=0.0.0200 rd.dasd=0.0.0207 rd.dasd=0.0.202b rd.lvm.lv=vg_devel1/lv_root rd.lvm.lv=vg_devel1/lv_swap cio_ignore=all,!condev rd.znet=qeth,0.0.0a00,0.0.0a01,0.0.0a02,layer2=1,portno=0' component address: kernel image....: 0x00010000-0x0049afff parmline........: 0x0049b000-0x0049bfff initial ramdisk.: 0x004a0000-0x01a26fff internal loader.: 0x0000a000-0x0000cfff Preparing boot menu Interactive prompt......: enabled Menu timeout............: 5 seconds Default configuration...: '4.18.0-80.el8.s390x' Preparing boot device: dasda (0201). Syncing disks Done.",
"0.0.0207 0.0.0200 use_diag=1 readonly=1",
"cio_ignore -r device_number",
"cio_ignore -r 021a",
"echo add > /sys/bus/ccw/devices/ dasd-bus-ID /uevent",
"echo add > /sys/bus/ccw/devices/0.0.021a/uevent",
"machine_id=USD(cat /etc/machine-id) kernel_version=USD(uname -r) ls /boot/loader/entries/USDmachine_id-USDkernel_version.conf",
"title Red Hat Enterprise Linux (4.18.0-32.el8.s390x) 8.0 (Ootpa) version 4.18.0-32.el8.s390x linux /boot/vmlinuz-4.18.0-32.el8.s390x initrd /boot/initramfs-4.18.0-32.el8.s390x.img options root=/dev/mapper/vg_devel1-lv_root crashkernel=auto rd.zfcp=0.0.fc00,0x5105074308c212e9,0x401040a000000000 rd.zfcp=0.0.fc00,0x5105074308c212e9,0x401040a100000000 rd.lvm.lv=vg_devel1/lv_root rd.lvm.lv=vg_devel1/lv_swap cio_ignore=all,!condev rd.znet=qeth,0.0.0a00,0.0.0a01,0.0.0a02,layer2=1,portno=0 id rhel-20181027190514-4.18.0-32.el8.s390x grub_users USDgrub_users grub_arg --unrestricted grub_class kernel",
"title Red Hat Enterprise Linux (4.18.0-32.el8.s390x) 8.0 (Ootpa) version 4.18.0-32.el8.s390x linux /boot/vmlinuz-4.18.0-32.el8.s390x initrd /boot/initramfs-4.18.0-32.el8.s390x.img options root=/dev/mapper/vg_devel1-lv_root crashkernel=auto rd.zfcp=0.0.fc00,0x5105074308c212e9,0x401040a000000000 rd.zfcp=0.0.fc00,0x5105074308c212e9,0x401040a100000000 rd.zfcp=0.0.fc00,0x5105074308c212e9,0x401040a300000000 rd.lvm.lv=vg_devel1/lv_root rd.lvm.lv=vg_devel1/lv_swap cio_ignore=all,!condev rd.znet=qeth,0.0.0a00,0.0.0a01,0.0.0a02,layer2=1,portno=0 id rhel-20181027190514-4.18.0-32.el8.s390x grub_users USDgrub_users grub_arg --unrestricted grub_class kernel",
"zipl -V Using config file '/etc/zipl.conf' Using BLS config file '/boot/loader/entries/4ab74e52867b4f998e73e06cf23fd761-4.18.0-32.el8.s390x.conf' Target device information Device..........................: 08:00 Partition.......................: 08:01 Device name.....................: sda Device driver name..............: sd Type............................: disk partition Disk layout.....................: SCSI disk layout Geometry - start................: 2048 File system block size..........: 4096 Physical block size.............: 512 Device size in physical blocks..: 10074112 Building bootmap in '/boot/' Building menu 'rh-automatic-menu' Adding #1: IPL section '4.18.0-32.el8.s390x' (default) kernel image......: /boot/vmlinuz-4.18.0-32.el8.s390x kernel parmline...: 'root=/dev/mapper/vg_devel1-lv_root crashkernel=auto rd.zfcp=0.0.fc00,0x5105074308c212e9,0x401040a000000000 rd.zfcp=0.0.fc00,0x5105074308c212e9,0x401040a100000000 rd.zfcp=0.0.fc00,0x5105074308c212e9,0x401040a300000000 rd.lvm.lv=vg_devel1/lv_root rd.lvm.lv=vg_devel1/lv_swap cio_ignore=all,!condev rd.znet=qeth,0.0.0a00,0.0.0a01,0.0.0a02,layer2=1,portno=0' initial ramdisk...: /boot/initramfs-4.18.0-32.el8.s390x.img component address: kernel image....: 0x00010000-0x007a21ff parmline........: 0x00001000-0x000011ff initial ramdisk.: 0x02000000-0x028f63ff internal loader.: 0x0000a000-0x0000a3ff Preparing boot device: sda. Detected SCSI PCBIOS disk layout. Writing SCSI master boot record. Syncing disks Done.",
"0.0.fc00 0x5105074308c212e9 0x401040a000000000 0.0.fc00 0x5105074308c212e9 0x401040a100000000 0.0.fc00 0x5105074308c212e9 0x401040a300000000 0.0.fcd0 0x5105074308c2aee9 0x401040a000000000 0.0.fcd0 0x5105074308c2aee9 0x401040a100000000 0.0.fcd0 0x5105074308c2aee9 0x401040a300000000 0.0.4000 0.0.5000",
"cio_ignore -r device_number",
"cio_ignore -r fcfc",
"echo add > /sys/bus/ccw/devices/device-bus-ID/uevent",
"echo add > /sys/bus/ccw/devices/0.0.fcfc/uevent",
"lsmod | grep qeth qeth_l3 69632 0 qeth_l2 49152 1 qeth 131072 2 qeth_l3,qeth_l2 qdio 65536 3 qeth,qeth_l3,qeth_l2 ccwgroup 20480 1 qeth",
"modprobe qeth",
"cio_ignore -r read_device_bus_id,write_device_bus_id,data_device_bus_id",
"cio_ignore -r 0.0.f500,0.0.f501,0.0.f502",
"znetconf -u Scanning for network devices Device IDs Type Card Type CHPID Drv. ------------------------------------------------------------ 0.0.f500,0.0.f501,0.0.f502 1731/01 OSA (QDIO) 00 qeth 0.0.f503,0.0.f504,0.0.f505 1731/01 OSA (QDIO) 01 qeth 0.0.0400,0.0.0401,0.0.0402 1731/05 HiperSockets 02 qeth",
"znetconf -a f500 Scanning for network devices Successfully configured device 0.0.f500 (encf500)",
"znetconf -a f500 -o portname=myname Scanning for network devices Successfully configured device 0.0.f500 (encf500)",
"echo read_device_bus_id,write_device_bus_id,data_device_bus_id > /sys/bus/ccwgroup/drivers/qeth/group",
"echo 0.0.f500,0.0.f501,0.0.f502 > /sys/bus/ccwgroup/drivers/qeth/group",
"ls /sys/bus/ccwgroup/drivers/qeth/0.0.f500",
"echo 1 > /sys/bus/ccwgroup/drivers/qeth/0.0.f500/online",
"cat /sys/bus/ccwgroup/drivers/qeth/0.0.f500/online 1",
"cat /sys/bus/ccwgroup/drivers/qeth/0.0.f500/if_name encf500",
"lsqeth encf500 Device name : encf500 ------------------------------------------------- card_type : OSD_1000 cdev0 : 0.0.f500 cdev1 : 0.0.f501 cdev2 : 0.0.f502 chpid : 76 online : 1 portname : OSAPORT portno : 0 state : UP (LAN ONLINE) priority_queueing : always queue 0 buffer_count : 16 layer2 : 1 isolation : none",
"cd /etc/sysconfig/network-scripts # cp ifcfg-enc9a0 ifcfg-enc600",
"lsqeth -p devices CHPID interface cardtype port chksum prio-q'ing rtr4 rtr6 lay'2 cnt -------------------------- ----- ---------------- -------------- ---- ------ ---------- ---- ---- ----- ----- 0.0.09a0/0.0.09a1/0.0.09a2 x00 enc9a0 Virt.NIC QDIO 0 sw always_q_2 n/a n/a 1 64 0.0.0600/0.0.0601/0.0.0602 x00 enc600 Virt.NIC QDIO 0 sw always_q_2 n/a n/a 1 64",
"IBM QETH DEVICE=enc9a0 BOOTPROTO=static IPADDR=10.12.20.136 NETMASK=255.255.255.0 ONBOOT=yes NETTYPE=qeth SUBCHANNELS=0.0.09a0,0.0.09a1,0.0.09a2 PORTNAME=OSAPORT OPTIONS='layer2=1 portno=0' MACADDR=02:00:00:23:65:1a TYPE=Ethernet",
"IBM QETH DEVICE=enc600 BOOTPROTO=static IPADDR=192.168.70.87 NETMASK=255.255.255.0 ONBOOT=yes NETTYPE=qeth SUBCHANNELS=0.0.0600,0.0.0601,0.0.0602 PORTNAME=OSAPORT OPTIONS='layer2=1 portno=0' MACADDR=02:00:00:b3:84:ef TYPE=Ethernet",
"cio_ignore -r read_device_bus_id,write_device_bus_id,data_device_bus_id",
"cio_ignore -r 0.0.0600,0.0.0601,0.0.0602",
"echo add > /sys/bus/ccw/devices/read-channel/uevent",
"echo add > /sys/bus/ccw/devices/0.0.0600/uevent",
"lsqeth",
"ifup enc600",
"ip addr show enc600 3: enc600: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000 link/ether 3c:97:0e:51:38:17 brd ff:ff:ff:ff:ff:ff inet 10.85.1.245/24 brd 10.34.3.255 scope global dynamic enc600 valid_lft 81487sec preferred_lft 81487sec inet6 1574:12:5:1185:3e97:eff:fe51:3817/64 scope global noprefixroute dynamic valid_lft 2591994sec preferred_lft 604794sec inet6 fe45::a455:eff:d078:3847/64 scope link valid_lft forever preferred_lft forever",
"ip route default via 10.85.1.245 dev enc600 proto static metric 1024 12.34.4.95/24 dev enp0s25 proto kernel scope link src 12.34.4.201 12.38.4.128 via 12.38.19.254 dev enp0s25 proto dhcp metric 1 192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1",
"ping -c 1 192.168.70.8 PING 192.168.70.8 (192.168.70.8) 56(84) bytes of data. 64 bytes from 192.168.70.8: icmp_seq=0 ttl=63 time=8.07 ms",
"machine_id=USD(cat /etc/machine-id) kernel_version=USD(uname -r) ls /boot/loader/entries/USDmachine_id-USDkernel_version.conf",
"root=10.16.105.196:/nfs/nfs_root cio_ignore=all,!condev rd.znet=qeth,0.0.0a00,0.0.0a01,0.0.0a02,layer2=1,portno=0,portname=OSAPORT ip=10.16.105.197:10.16.105.196:10.16.111.254:255.255.248.0:nfsβserver.subdomain.domain:enc9a0:none rd_NO_LUKS rd_NO_LVM rd_NO_MD rd_NO_DM LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYTABLE=us"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/interactively_installing_rhel_over_the_network/configuring-a-linux-instance-on-ibm-z_rhel-installer |
Managing, monitoring, and updating the kernel | Managing, monitoring, and updating the kernel Red Hat Enterprise Linux 8 A guide to managing the Linux kernel on Red Hat Enterprise Linux 8 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/managing_monitoring_and_updating_the_kernel/index |
Chapter 51. TlsSidecar schema reference | Chapter 51. TlsSidecar schema reference Used in: CruiseControlSpec , EntityOperatorSpec Full list of TlsSidecar schema properties Configures a TLS sidecar, which is a container that runs in a pod, but serves a supporting purpose. In Streams for Apache Kafka, the TLS sidecar uses TLS to encrypt and decrypt communication between components and ZooKeeper. The TLS sidecar is used in the Entity Operator. The TLS sidecar is configured using the tlsSidecar property in Kafka.spec.entityOperator . The TLS sidecar supports the following additional options: image resources logLevel readinessProbe livenessProbe The resources property specifies the memory and CPU resources allocated for the TLS sidecar. The image property configures the container image which will be used. The readinessProbe and livenessProbe properties configure healthcheck probes for the TLS sidecar. The logLevel property specifies the logging level. The following logging levels are supported: emerg alert crit err warning notice info debug The default value is notice . Example TLS sidecar configuration apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: # ... entityOperator: # ... tlsSidecar: resources: requests: cpu: 200m memory: 64Mi limits: cpu: 500m memory: 128Mi # ... 51.1. TlsSidecar schema properties Property Property type Description image string The docker image for the container. livenessProbe Probe Pod liveness checking. logLevel string (one of [emerg, debug, crit, err, alert, warning, notice, info]) The log level for the TLS sidecar. Default value is notice . readinessProbe Probe Pod readiness checking. resources ResourceRequirements CPU and memory resources to reserve. | [
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: # entityOperator: # tlsSidecar: resources: requests: cpu: 200m memory: 64Mi limits: cpu: 500m memory: 128Mi #"
]
| https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-TlsSidecar-reference |
Chapter 1. Introduction to system authentication | Chapter 1. Introduction to system authentication One of the cornerstones of establishing a secure network environment is ensuring that access is restricted to authorized users. When access is allowed, users can authenticate to the system, verifying their identities. On any Red Hat Enterprise Linux system, various services are available to create and manage user identities. These can include local system files, services that connect to larger identity domains like Kerberos or Samba, or tools to create those domains. 1.1. Confirming user identities Authentication is the process of confirming an identity. For network interactions, authentication involves the identification of one party by another party. There are many ways to use authentication over networks, such as simple passwords, certificates, passwordless methods, one-time password (OTP) tokens, or biometric scans. Authorization defines what the authenticated party is allowed to do or access. Authentication requires that a user presents some kind of credential to verify their identity. The kind of credential that is required is defined by the authentication mechanism being used. There are several kinds of authentication for local users on a system: Password-based authentication Almost all software permits the user to authenticate by providing a recognized username and password. This is also called simple authentication. Certificate-based authentication Client authentication based on certificates is part of the Secure Sockets Layer (SSL) protocol. The client digitally signs a randomly generated piece of data and sends both the certificate and the signed data across the network. The server validates the signature and confirms the validity of the certificate. Kerberos authentication Kerberos establishes a system of short-lived credentials, called ticket-granting tickets (TGTs). The user presents credentials, that is, user name and password, that identify the user and indicate to the system that the user can be issued a ticket. The TGT can then be used repeatedly to request access tickets to other services, like websites and email. Authentication using Kerberos allows the user to undergo only a single authentication process in this way. Smart card-based authentication This is a variant of certificate-based authentication. The smart card (or token) stores user certificates; when a user inserts the token into a system, the system reads the certificates and grants access. Single sign-on using smart cards goes through three steps: A user inserts a smart card into the card reader. Pluggable authentication modules (PAMs) on Red Hat Enterprise Linux detect the inserted smart card. The system maps the certificate to the user entry and then compares the presented certificates on the smart card, which are encrypted with a private key as explained under certificate-based authentication, to the certificates stored in the user entry. If the certificate is successfully validated against the key distribution center (KDC), then the user is allowed to log in. Smart card-based authentication builds on the simple authentication layer established by Kerberos by adding certificates as additional identification mechanisms as well as by adding physical access requirements. For more information, see Managing smart card authentication . One-time password authentication One-time passwords bring an additional step to your authentication security.
The authentication uses your password in combination with an automatically generated one time password. For more information see One time password (OTP) authentication in Identity Management . Passkey authentication A passkey is a FIDO2 authentication device that is supported by the libfido2 library, such as Yubikey 5 and Nitrokey. It allows passwordless and multi-factor authentication. If your system is enrolled and connected to an IdM environment, this authentication method issues a Kerberos ticket automatically, which enables single sign-on (SSO) for an Identity Management (IdM) user. For more information see Enabling passkey authentication in IdM environment . External identity providers You can associate users with external identity providers (IdP) that support the OAuth 2 device authorization flow. When these users authenticate with the SSSD version available in RHEL 9.1 or later, they receive RHEL Identity Management (IdM) single sign-on capabilities with Kerberos tickets after performing authentication and authorization at the external IdP. For more information see Using external identity providers to authenticate to IdM . 1.2. Planning single sign-on Without a central identity store and every application maintaining its own set of users and credentials, a user has to enter a password for every single service or application they open. By configuring single sign-on, administrators create a single password store so that users can log in once, by using a single password, and be authenticated to all network resources. Red Hat Enterprise Linux supports single sign-on for several resources, including logging into workstations, unlocking screen savers, and accessing secured web pages using Mozilla Firefox. With other available system services such as Privileged Access Management (PAM), Name Service Switch (NSS), and Kerberos, other system applications can be configured to use those identity sources. Single sign-on is both a convenience to users and another layer of security for the server and the network. Single sign-on hinges on secure and effective authentication. Red Hat Enterprise Linux provides two authentication mechanisms which can be used to enable single sign-on: Kerberos-based authentication, through both Kerberos realms and Active Directory domains Smart card-based authentication Both of these methods create a centralized identity store (either through a Kerberos realm or a certificate authority in a public key infrastructure), and the local system services then use those identity domains rather than maintaining multiple local stores. 1.3. Services available for local user authentication All Red Hat Enterprise Linux systems have some services already available to configure authentication for local users on local systems. These include: Authentication setup The Authentication Configuration tool authselect sets up different identity back ends and means of authentication (such as passwords, fingerprints, or smart cards) for the system. Identity back end setup The Security System Services Daemon (SSSD) sets up multiple identity providers (primarily LDAP-based directories such as Microsoft Active Directory or Red Hat Enterprise Linux IdM) which can then be used by both the local system and applications for users. Passwords and tickets are cached, allowing both offline authentication and single sign-on by reusing credentials. The realmd service is a command-line utility that allows you to configure an authentication back end, which is SSSD for IdM. 
The realmd service detects available IdM domains based on the DNS records, configures SSSD, and then joins the system as an account to a domain. Name Service Switch (NSS) is a mechanism for low-level system calls that return information about users, groups, or hosts. NSS determines what source, that is, which modules, should be used to obtain the required information. For example, user information can be located in traditional UNIX files, such as the /etc/passwd file, or in LDAP-based directories, while host addresses can be read from files, such as the /etc/hosts file, or the DNS records; NSS locates where the information is stored. Authentication Mechanisms Pluggable Authentication Modules (PAM) provide a system to set up authentication policies. An application using PAM for authentication loads different modules that control different aspects of authentication; which PAM module an application uses is based on how the application is configured. The available PAM modules include Kerberos, Winbind, SSSD, or local UNIX file-based authentication. Other services and applications are also available, but these are common ones. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_authentication_and_authorization_in_rhel/introduction-to-system-authentication_configuring-authentication-and-authorization-in-rhel
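To make the NSS source selection described in the preceding chapter concrete, the following is a minimal sketch of the relevant lines of an /etc/nsswitch.conf file as SSSD-based setups commonly configure them; the exact defaults on your system may differ:

# Query SSSD first for users and groups, then fall back to local files
passwd:   sss files systemd
group:    sss files systemd
shadow:   files sss
hosts:    files dns myhostname

Each name on the right is an NSS module, and the order determines which source is consulted first.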
16.6. Language Selection | 16.6. Language Selection Using your mouse, select the language (for example, U.S. English) you would prefer to use for the installation and as the system default (refer to the figure below). Once you have made your selection, click to continue. Figure 16.2. Language Configuration | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/language-selection-ppc |
Chapter 18. Using a vault | Chapter 18. Using a vault Red Hat build of Keycloak provides two out-of-the-box implementations of the Vault SPI: a plain-text file-based vault and Java KeyStore-based vault. The file-based vault implementation is especially useful for Kubernetes/OpenShift secrets. You can mount Kubernetes secrets into the Red Hat build of Keycloak Container, and the data fields will be available in the mounted folder with a flat-file structure. The Java KeyStore-based vault implementation is useful for storing secrets in bare metal installations. You can use the KeyStore vault, which is encrypted using a password. 18.1. Available integrations Secrets stored in the vaults can be used at the following places of the Administration Console: Obtain the SMTP Mail server Password Obtain the LDAP Bind Credential when using LDAP-based User Federation Obtain the OIDC identity providers Client Secret when integrating external identity providers 18.2. Enabling a vault To enable the file-based vault, you need to build Red Hat build of Keycloak first using the following build option: bin/kc.[sh|bat] build --vault=file Similarly, for the Java KeyStore-based vault you need to specify the following build option: bin/kc.[sh|bat] build --vault=keystore 18.3. Configuring the file-based vault 18.3.1. Setting the base directory to look up secrets Kubernetes/OpenShift secrets are basically mounted files. To configure a directory where these files should be mounted, enter this command: bin/kc.[sh|bat] start --vault-dir=/my/path 18.3.2. Realm-specific secret files Kubernetes/OpenShift Secrets are used on a per-realm basis in Red Hat build of Keycloak, which requires a naming convention for the file in place: USD{vault.<realmname>_<secretname>} 18.3.3. Using underscores in the Name If the <realmname> or the <secretname> contains underscores, double each underscore so that the secret is processed correctly; the realm name and the secret name themselves are separated by a single underscore. Example Realm Name: sso_realm Desired Name: ldap_credential Resulting file Name: Note the doubled underscores between sso and realm and also between ldap and credential . 18.4. Configuring the Java KeyStore-based vault In order to use the Java KeyStore-based vault, you need to create a KeyStore file first. You can use the following command to do so: keytool -importpass -alias <realm-name>_<alias> -keystore keystore.p12 -storepass keystorepassword and then enter a value you want to store in the vault. Note that the format of the -alias parameter depends on the key resolver used. The default key resolver is REALM_UNDERSCORE_KEY . By default, this results in storing the value as a generic PBEKey (password-based encryption) within a SecretKeyEntry. You can then start Red Hat build of Keycloak using the following runtime options: bin/kc.[sh|bat] start --vault-file=/path/to/keystore.p12 --vault-pass=<value> --vault-type=<value> Note that the --vault-type parameter is optional and defaults to PKCS12 . Secrets stored in the vault can then be accessed in a realm via the following placeholder (assuming using the REALM_UNDERSCORE_KEY key resolver): USD{vault.realm-name_alias} . 18.5. Example: Use an LDAP bind credential secret in the Admin Console Example setup A realm named secrettest A desired Name ldapBc for the bind Credential Resulting file name: secrettest_ldapBc Usage in Admin Console You can then use this secret from the Admin Console by using USD{vault.ldapBc} as the value for the Bind Credential when configuring your LDAP User federation.
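As an illustration of how the secrettest_ldapBc file from this example could be supplied to the file-based vault, the following is a minimal sketch; the vault directory, the secret value, and the Kubernetes secret name are assumptions, not part of the product documentation:

# Bare metal or VM: write the secret into the directory configured with --vault-dir
echo -n 'myLdapBindPassword' > /my/path/secrettest_ldapBc

# Kubernetes/OpenShift: create a secret whose key matches the expected file name,
# then mount it at the vault directory of the Keycloak container
oc create secret generic keycloak-vault-secrets \
  --from-literal=secrettest_ldapBc='myLdapBindPassword'

Using echo -n avoids a trailing newline becoming part of the secret value.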
18.6. Relevant options
vault Enables a vault provider. CLI: --vault Env: KC_VAULT Value: file , keystore
vault-dir If set, secrets can be obtained by reading the content of files within the given directory. CLI: --vault-dir Env: KC_VAULT_DIR
vault-file Path to the keystore file. CLI: --vault-file Env: KC_VAULT_FILE
vault-pass Password for the vault keystore. CLI: --vault-pass Env: KC_VAULT_PASS
vault-type Specifies the type of the keystore file. CLI: --vault-type Env: KC_VAULT_TYPE Value: PKCS12 (default) | [
"bin/kc.[sh|bat] build --vault=file",
"bin/kc.[sh|bat] build --vault=keystore",
"bin/kc.[sh|bat] start --vault-dir=/my/path",
"USD{vault.<realmname>_<secretname>}",
"sso__realm_ldap__credential",
"keytool -importpass -alias <realm-name>_<alias> -keystore keystore.p12 -storepass keystorepassword",
"bin/kc.[sh|bat] start --vault-file=/path/to/keystore.p12 --vault-pass=<value> --vault-type=<value>"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/22.0/html/server_guide/vault- |
Chapter 8. ROSA CLI | Chapter 8. ROSA CLI 8.1. Getting started with the ROSA CLI 8.1.1. About the ROSA CLI Use the Red Hat OpenShift Service on AWS (ROSA) command-line interface (CLI), the rosa command, to create, update, manage, and delete ROSA clusters and resources. 8.1.2. Setting up the ROSA CLI Use the following steps to install and configure the ROSA CLI ( rosa ) on your installation host. Procedure Install and configure the latest AWS CLI ( aws ). Follow the AWS Command Line Interface documentation to install and configure the AWS CLI for your operating system. Specify your aws_access_key_id , aws_secret_access_key , and region in the .aws/credentials file. See AWS Configuration basics in the AWS documentation. Note You can optionally use the AWS_DEFAULT_REGION environment variable to set the default AWS region. Query the AWS API to verify if the AWS CLI is installed and configured correctly: USD aws sts get-caller-identity --output text Example output <aws_account_id> arn:aws:iam::<aws_account_id>:user/<username> <aws_user_id> Download the latest version of the ROSA CLI ( rosa ) for your operating system from the Downloads page on OpenShift Cluster Manager. Extract the rosa binary file from the downloaded archive. The following example extracts the binary from a Linux tar archive: USD tar xvf rosa-linux.tar.gz Add rosa to your path. In the following example, the /usr/local/bin directory is included in the path of the user: USD sudo mv rosa /usr/local/bin/rosa Verify if the ROSA CLI is installed correctly by querying the rosa version: USD rosa version Example output 1.2.15 Your ROSA CLI is up to date. Optional: Enable tab completion for the ROSA CLI. With tab completion enabled, you can press the Tab key twice to automatically complete subcommands and receive command suggestions: To enable persistent tab completion for Bash on a Linux host: Generate a rosa tab completion configuration file for Bash and save it to your /etc/bash_completion.d/ directory: # rosa completion bash > /etc/bash_completion.d/rosa Open a new terminal to activate the configuration. To enable persistent tab completion for Bash on a macOS host: Generate a rosa tab completion configuration file for Bash and save it to your /usr/local/etc/bash_completion.d/ directory: USD rosa completion bash > /usr/local/etc/bash_completion.d/rosa Open a new terminal to activate the configuration. To enable persistent tab completion for Zsh: If tab completion is not enabled for your Zsh environment, enable it by running the following command: USD echo "autoload -U compinit; compinit" >> ~/.zshrc Generate a rosa tab completion configuration file for Zsh and save it to the first directory in your functions path: USD rosa completion zsh > "USD{fpath[1]}/_rosa" Open a new terminal to activate the configuration. To enable persistent tab completion for fish: Generate a rosa tab completion configuration file for fish and save it to your ~/.config/fish/completions/ directory: USD rosa completion fish > ~/.config/fish/completions/rosa.fish Open a new terminal to activate the configuration. To enable persistent tab completion for PowerShell: Generate a rosa tab completion configuration file for PowerShell and save it to a file named rosa.ps1 : PS> rosa completion powershell | Out-String | Invoke-Expression Source the rosa.ps1 file from your PowerShell profile. Note For more information about configuring rosa tab completion, see the help menu by running the rosa completion --help command. 8.1.3. 
Configuring the ROSA CLI Use the following commands to configure the Red Hat OpenShift Service on AWS (ROSA) CLI, rosa . 8.1.3.1. login Log in to your Red Hat account, saving the credentials to the rosa configuration file. You must provide a token when logging in. You can copy your token from the ROSA token page . The ROSA CLI ( rosa ) looks for a token in the following priority order: Command-line arguments The ROSA_TOKEN environment variable The rosa configuration file Interactively from a command-line prompt Syntax USD rosa login [arguments] Table 8.1. Arguments Option Definition --client-id The OpenID client identifier (string). Default: cloud-services --client-secret The OpenID client secret (string). --insecure Enables insecure communication with the server. This disables verification of TLS certificates and host names. --scope The OpenID scope (string). If this option is used, it replaces the default scopes. This can be repeated multiple times to specify multiple scopes. Default: openid --token Accesses or refreshes the token (string). --token-url The OpenID token URL (string). Default: https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token Table 8.2. Optional arguments inherited from parent commands Option Definition --help Shows help for this command. --debug Enables debug mode. --profile Specifies an AWS profile (string) from your credentials file. 8.1.3.2. logout Log out of rosa . Logging out also removes the rosa configuration file. Syntax USD rosa logout [arguments] Table 8.3. Optional arguments inherited from parent commands Option Definition --help Shows help for this command. --debug Enables debug mode. --profile Specifies an AWS profile (string) from your credentials file. 8.1.3.3. verify permissions Verify that the AWS permissions required to create a ROSA cluster are configured correctly: Syntax USD rosa verify permissions [arguments] Note This command verifies permissions only for clusters that do not use the AWS Security Token Service (STS). Table 8.4. Optional arguments inherited from parent commands Option Definition --help Shows help for this command. --debug Enables debug mode. --region The AWS region (string) in which to run the command. This value overrides the AWS_REGION environment variable. --profile Specifies an AWS profile (string) from your credentials file. Examples Verify that the AWS permissions are configured correctly: USD rosa verify permissions Verify that the AWS permissions are configured correctly in a specific region: USD rosa verify permissions --region=us-west-2 8.1.3.4. verify quota Verifies that AWS quotas are configured correctly for your default region. Syntax USD rosa verify quota [arguments] Table 8.5. Optional arguments inherited from parent commands Option Definition --help Shows help for this command. --debug Enables debug mode. --region The AWS region (string) in which to run the command. This value overrides the AWS_REGION environment variable. --profile Specifies an AWS profile (string) from your credentials file. Examples Verify that the AWS quotas are configured correctly for the default region: USD rosa verify quota Verify that the AWS quotas are configured correctly in a specific region: USD rosa verify quota --region=us-west-2 8.1.3.5. download rosa Download the latest compatible version of the rosa CLI. After you download rosa , extract the contents of the archive and add it to your path. Syntax USD rosa download rosa [arguments] Table 8.6. 
Optional arguments inherited from parent commands Option Definition --help Shows help for this command. --debug Enables debug mode. 8.1.3.6. download oc Download the latest compatible version of the OpenShift Container Platform CLI ( oc ). After you download oc , you must extract the contents of the archive and add it to your path. Syntax USD rosa download oc [arguments] Table 8.7. Optional arguments inherited from parent commands Option Definition --help Shows help for this command. --debug Enables debug mode. Example Download oc client tools: USD rosa download oc 8.1.3.7. verify oc Verifies that the OpenShift Container Platform CLI ( oc ) is installed correctly. Syntax USD rosa verify oc [arguments] Table 8.8. Optional arguments inherited from parent commands Option Definition --help Shows help for this command. --debug Enables debug mode. Example Verify oc client tools: USD rosa verify oc Additional resources Setting up the ROSA CLI Getting started with the OpenShift CLI 8.1.4. Initializing ROSA Use the init command to initialize Red Hat OpenShift Service on AWS (ROSA) only if you are using non-STS. 8.1.4.1. init Perform a series of checks to verify that you are ready to deploy a ROSA cluster. The list of checks includes the following: Checks to see that you have logged in (see login ) Checks that your AWS credentials are valid Checks that your AWS permissions are valid (see verify permissions ) Checks that your AWS quota levels are high enough (see verify quota ) Runs a cluster simulation to ensure cluster creation will perform as expected Checks that the osdCcsAdmin user has been created in your AWS account Checks that the OpenShift Container Platform command-line tool is available on your system Syntax USD rosa init [arguments] Table 8.9. Arguments Option Definition --region The AWS region (string) in which to verify quota and permissions. This value overrides the AWS_REGION environment variable only when running the init command, but it does not change your AWS CLI configuration. --delete Deletes the stack template that is applied to your AWS account during the init command. --client-id The OpenID client identifier (string). Default: cloud-services --client-secret The OpenID client secret (string). --insecure Enables insecure communication with the server. This disables verification of TLS certificates and host names. --scope The OpenID scope (string). If this option is used, it completely replaces the default scopes. This can be repeated multiple times to specify multiple scopes. Default: openid --token Accesses or refreshes the token (string). --token-url The OpenID token URL (string). Default: https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token Table 8.10. Optional arguments inherited from parent commands Option Definition --help Shows help for this command. --debug Enables debug mode. --profile Specifies an AWS profile (string) from your credentials file. Examples Configure your AWS account to allow ROSA clusters: USD rosa init Configure a new AWS account using pre-existing OpenShift Cluster Manager credentials: USD rosa init --token=USDOFFLINE_ACCESS_TOKEN 8.1.5. Using a Bash script This is an example workflow of how to use a Bash script with the Red Hat OpenShift Service on AWS (ROSA) CLI, rosa . 
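The prerequisites and the individual commands are listed in the procedure that follows; as a single illustration, here is a hedged sketch that strings the documented steps together. The token variable, cluster name, IDP type, and user name are placeholder assumptions:

#!/bin/bash
set -euo pipefail

# Initialize rosa with an OpenShift Cluster Manager offline token
rosa init --token="${OFFLINE_ACCESS_TOKEN}"

# Create the ROSA cluster
rosa create cluster --cluster-name=mycluster

# Add an identity provider (the type shown is only an example)
rosa create idp --cluster=mycluster --type=github --interactive

# Grant a dedicated-admin user
rosa grant user dedicated-admin --user=myidpuser --cluster=mycluster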
Prerequisites Make sure that AWS credentials are available as one of the following options: AWS profile Environment variables ( AWS_ACCESS_KEY_ID , AWS_SECRET_ACCESS_KEY ) Procedure Initialize rosa using an Red Hat OpenShift Cluster Manager offline token from Red Hat : USD rosa init --token=<token> Create the ROSA cluster: USD rosa create cluster --cluster-name=<cluster_name> Add an identity provider (IDP): USD rosa create idp --cluster=<cluster_name> --type=<identity_provider> [arguments] Add a dedicated-admin user: USD rosa grant user dedicated-admin --user=<idp_user_name> --cluster=<cluster_name> 8.1.6. Updating the ROSA CLI Update to the latest compatible version of the ROSA CLI ( rosa ). Procedure Confirm that a new version of the ROSA CLI ( rosa ) is available: USD rosa version Example output 1.2.12 There is a newer release version '1.2.15', please consider updating: https://mirror.openshift.com/pub/openshift-v4/clients/rosa/latest/ Download the latest compatible version of the ROSA CLI: USD rosa download rosa This command downloads an archive called rosa-*.tar.gz into the current directory. The exact name of the file depends on your operating system and system architecture. Extract the contents of the archive: USD tar -xzf rosa-linux.tar.gz Install the new version of the ROSA CLI by moving the extracted file into your path. In the following example, the /usr/local/bin directory is included in the path of the user: USD sudo mv rosa /usr/local/bin/rosa Verification Verify that the new version of ROSA is installed. USD rosa version Example output 1.2.15 Your ROSA CLI is up to date. 8.2. Managing objects with the ROSA CLI Managing objects with the Red Hat OpenShift Service on AWS (ROSA) CLI, rosa , such as adding dedicated-admin users, managing clusters, and scheduling cluster upgrades. Note To access a cluster that is accessible only over an HTTP proxy server, you can set the HTTP_PROXY , HTTPS_PROXY , and NO_PROXY variables. These environment variables are respected by the rosa CLI so that all communication with the cluster goes through the HTTP proxy. 8.2.1. Common commands and arguments These common commands and arguments are available for the Red Hat OpenShift Service on AWS (ROSA) CLI, rosa . 8.2.1.1. debug Enables debug mode for the parent command to help with troubleshooting. Example USD rosa create cluster --cluster-name=<cluster_name> --debug 8.2.1.2. download Downloads the latest compatible version of the specified software to the current directory in an archive file. Extract the contents of the archive and add the contents to your path to use the software. To download the latest ROSA CLI, specify rosa . To download the latest OpenShift CLI, specify oc . Example USD rosa download <software> 8.2.1.3. help Displays general help information for the ROSA CLI ( rosa ) and a list of available commands. This option can also be used as an argument to display help information for a parent command, such as version or create . Examples Displays general help for the ROSA CLI. USD rosa --help Displays general help for version . USD rosa version --help 8.2.1.4. interactive Enables interactive mode. Example USD rosa create cluster --cluster-name=<cluster_name> --interactive 8.2.1.5. profile Specifies an AWS profile from your credential file. Example USD rosa create cluster --cluster-name=<cluster_name> --profile=myAWSprofile 8.2.1.6. version Displays the rosa version and checks whether a newer version is available. 
Example USD rosa version [arguments] Example output Displayed when a newer version of the ROSA CLI is available. 1.2.12 There is a newer release version '1.2.15', please consider updating: https://mirror.openshift.com/pub/openshift-v4/clients/rosa/latest/ 8.2.2. Parent commands The Red Hat OpenShift Service on AWS (ROSA) CLI, rosa , uses parent commands with child commands to manage objects. The parent commands are create , edit , delete , list , and describe . Not all parent commands can be used with all child commands. For more information, see the specific reference topics that describes the child commands. 8.2.2.1. create Creates an object or resource when paired with a child command. Example USD rosa create cluster --cluster-name=mycluster 8.2.2.2. edit Edits options for an object, such as making a cluster private. Example USD rosa edit cluster --cluster=mycluster --private 8.2.2.3. delete Deletes an object or resource when paired with a child command. Example USD rosa delete ingress --cluster=mycluster 8.2.2.4. list Lists clusters or resources for a specific cluster. Example USD rosa list users --cluster=mycluster 8.2.2.5. describe Shows the details for a cluster. Example USD rosa describe cluster --cluster=mycluster 8.2.3. Create objects This section describes the create commands for clusters and resources. 8.2.3.1. create account-roles Create the required account-wide role and policy resources for your cluster. Syntax USD rosa create account-roles [flags] Table 8.11. Flags Option Definition --debug Enable debug mode. -i, --interactive Enable interactive mode. -m, --mode string How to perform the operation. Valid options are: auto Resource changes will be automatically applied using the current AWS account. manual Commands necessary to modify AWS resources will be output to be run manually. --path string The Amazon Resource Name (ARN) path for the account-wide roles and policies, including the Operator policies. --permissions-boundary string The ARN of the policy that is used to set the permissions boundary for the account roles. --prefix string User-defined prefix for all generated AWS resources. The default is ManagedOpenShift . --profile string Use a specific AWS profile from your credential file. -y, --yes Automatically answer yes to confirm operations. 8.2.3.2. create admin Create a cluster administrator with an automatically generated password that can log in to a cluster. Syntax USD rosa create admin --cluster=<cluster_name>|<cluster_id> Table 8.12. Arguments Option Definition --cluster <cluster_name>|<cluster_id> Required. The name or ID (string) of the cluster to add to the identity provider (IDP). Table 8.13. Optional arguments inherited from parent commands Option Definition --help Shows help for this command. --debug Enables debug mode. --interactive Enables interactive mode. --profile string Specifies an AWS profile from your credentials file. Example Create a cluster administrator that can log in to a cluster named mycluster . USD rosa create admin --cluster=mycluster 8.2.3.3. create break glass credential Create a break glass credential for a hosted control plane cluster with external authentication enabled. Syntax USD rosa create break-glass-credential --cluster=<cluster_name> [arguments] Table 8.14. Arguments Option Definition --cluster <cluster_name>|<cluster_id> Required. The name or ID of the cluster to which the break glass credential will be added. --expiration Optional: How long a break glass credential can be used before expiring. 
The expiration duration must be a minimum of 10 minutes and a maximum of 24 hours. If you do not enter a value, the expiration duration defaults to 24 hours. --username Optional. The username for the break glass credential. If you do not enter a value, a random username is generated for you. Table 8.15. Optional arguments inherited from parent commands Option Definition --help Shows help for this command. --debug Enables debug mode. --interactive Enables interactive mode. --profile Specifies an AWS profile (string) from your credentials file. --region Specifies an AWS region, overriding the AWS_REGION environment variable. --yes Automatically answers yes to confirm the operation. Examples Add a break glass credential to a cluster named mycluster . Syntax USD rosa create break-glass-credential --cluster=mycluster Add a break glass credential to a cluster named mycluster using the interactive mode. Syntax USD rosa create break-glass-credential --cluster=mycluster -i 8.2.3.4. create cluster Create a new cluster. Syntax USD rosa create cluster --cluster-name=<cluster_name> [arguments] Table 8.16. Arguments Option Definition --additional-compute-security-group-ids <sec_group_id> The identifier of one or more additional security groups to use along with the default security groups that are used with the standard machine pool created alongside the cluster. For more information on additional security groups, see the requirements for Security groups under Additional resources . --additional-infra-security-group-ids <sec_group_id> The identifier of one or more additional security groups to use along with the default security groups that are used with the infra nodes created alongside the cluster. For more information on additional security groups, see the requirements for Security groups under Additional resources . --additional-control-plane-security-group-ids <sec_group_id> The identifier of one or more additional security groups to use along with the default security groups that are used with the control plane nodes created alongside the cluster. For more information on additional security groups, see the requirements for Security groups under Additional resources . --additional-allowed-principals <arn> A comma-separated list of additional allowed principal ARNs to be added to the hosted control plane's VPC endpoint service to enable additional VPC endpoint connection requests to be automatically accepted. --cluster-name <cluster_name> Required. The name of the cluster. When used with the create cluster command, this argument is used to set the cluster name and can hold up to 54 characters. The value for this argument must be unique within your organization. --compute-machine-type <instance_type> The instance type for compute nodes in the cluster. This determines the amount of memory and vCPU that is allocated to each compute node. For more information on valid instance types, see AWS Instance types in ROSA service definition . --controlplane-iam-role <arn> The ARN of the IAM role to attach to control plane instances. --create-cluster-admin Optional. As part of cluster creation, create a local administrator user ( cluster-admin ) for your cluster. This automatically configures an htpasswd identity provider for the cluster-admin user. Optionally, use the --cluster-admin-user and --cluster-admin-password options to specify the username and password for the administrator user. Omitting these options automatically generates the credentials and displays their values as terminal output. 
--cluster-admin-user Optional. Specifies the user name of the cluster administrator user created when used in conjunction with the --create-cluster-admin option. --cluster-admin-password Optional. Specifies the password of the cluster administrator user created when used in conjunction with the --create-cluster-admin option. --disable-scp-checks Indicates whether cloud permission checks are disabled when attempting to install a cluster. --dry-run Simulates creating the cluster. --domain-prefix Optional: When used with the create cluster command, this argument sets the subdomain for your cluster on *.openshiftapps.com . The value for this argument must be unique within your organization, cannot be longer than 15 characters, and cannot be changed after cluster creation. If the argument is not supplied, an autogenerated value is created that depends on the length of the cluster name. If the cluster name is fewer than or equal to 15 characters, that name is used for the domain prefix. If the cluster name is longer than 15 characters, the domain prefix is randomly generated to a 15-character string. --ec2-metadata-http-tokens string Configures the use of IMDSv2 for EC2 instances. Valid values are optional (default) or required . --enable-autoscaling Enables autoscaling of compute nodes. By default, autoscaling is set to 2 nodes. To set non-default node limits, use this argument with the --min-replicas and --max-replicas arguments. --etcd-encryption Enables encryption of ETCD key-values on Red Hat OpenShift Service on AWS (classic architecture) clusters. --etcd-encryption-kms-arn Enables encryption of ETCD storage using the customer-managed key managed in AWS Key Management Service. --external-id <arn_string> An optional unique identifier that might be required when you assume a role in another account. --host-prefix <subnet> The subnet prefix length to assign to each individual node, as an integer. For example, if host prefix is set to 23 , then each node is assigned a /23 subnet out of the given CIDR. --machine-cidr <address_block> Block of IP addresses (ipNet) used by ROSA while installing the cluster, for example, 10.0.0.0/16 . Important OVN-Kubernetes, the default network provider in ROSA 4.11 and later, uses the 100.64.0.0/16 IP address range internally. If your cluster uses OVN-Kubernetes, do not include the 100.64.0.0/16 IP address range in any other CIDR definitions in your cluster. --max-replicas <number_of_nodes> Specifies the maximum number of compute nodes when enabling autoscaling. Default: 2 --min-replicas <number_of_nodes> Specifies the minimum number of compute nodes when enabling autoscaling. Default: 2 --multi-az Deploys to multiple data centers. --no-cni Creates a cluster without a Container Network Interface (CNI) plugin. Customers can then bring their own CNI plugin and install it after cluster creation. --operator-roles-prefix <string> Prefix that is used for all IAM roles used by the operators needed in the OpenShift installer. A prefix is generated automatically if you do not specify one. --pod-cidr <address_block> Block of IP addresses (ipNet) from which pod IP addresses are allocated, for example, 10.128.0.0/14 . Important OVN-Kubernetes, the default network provider in ROSA 4.11 and later, uses the 100.64.0.0/16 IP address range internally. If your cluster uses OVN-Kubernetes, do not include the 100.64.0.0/16 IP address range in any other CIDR definitions in your cluster.
--private Restricts primary API endpoint and application routes to direct, private connectivity. --private-link Specifies to use AWS PrivateLink to provide private connectivity between VPCs and services. The --subnet-ids argument is required when using --private-link . --region <region_name> The name of the AWS region where your worker pool will be located, for example, us-east-1 . This argument overrides the AWS_REGION environment variable. --replicas n The number of worker nodes to provision per availability zone. Single-zone clusters require at least 2 nodes. Multi-zone clusters require at least 3 nodes. Default: 2 for single-zone clusters; 3 for multi-zone clusters. --role-arn <arn> The ARN of the installer role that OpenShift Cluster Manager uses to create the cluster. This is required if you have not already created account roles. --service-cidr <address_block> Block of IP addresses (ipNet) for services, for example, 172.30.0.0/16 . Important OVN-Kubernetes, the default network provider in ROSA 4.11 and later, uses the 100.64.0.0/16 IP address range internally. If your cluster uses OVN-Kubernetes, do not include the 100.64.0.0/16 IP address range in any other CIDR definitions in your cluster. --sts | --non-sts Specifies whether to use AWS Security Token Service (STS) or IAM credentials (non-STS) to deploy your cluster. --subnet-ids <aws_subnet_id> The AWS subnet IDs that are used when installing the cluster, for example, subnet-01abc234d5678ef9a . Subnet IDs must be in pairs with one private subnet ID and one public subnet ID per availability zone. Subnets are comma-delimited, for example, --subnet-ids=subnet-1,subnet-2 . Leave the value empty for installer-provisioned subnet IDs. When using --private-link , the --subnet-ids argument is required and only one private subnet is allowed per zone. --support-role-arn string The ARN of the role used by Red Hat Site Reliability Engineers (SREs) to enable access to the cluster account to provide support. --tags Tags that are used on resources created by Red Hat OpenShift Service on AWS in AWS. Tags can help you manage, identify, organize, search for, and filter resources within AWS. Tags are comma separated, for example: "key value, foo bar". Important Red Hat OpenShift Service on AWS only supports custom tags for Red Hat OpenShift resources during cluster creation. Once added, the tags cannot be removed or edited. Tags that are added by Red Hat are required for clusters to stay in compliance with Red Hat production service level agreements (SLAs). These tags must not be removed. Red Hat OpenShift Service on AWS does not support adding additional tags outside of ROSA cluster-managed resources. These tags can be lost when AWS resources are managed by the ROSA cluster. In these cases, you might need custom solutions or tools to reconcile the tags and keep them intact. --version string The version of ROSA that will be used to install the cluster or cluster resources. For clusters, use an X.Y.Z format, for example, 4.18.0 . For account roles, use an X.Y format, for example, 4.18 . --worker-iam-role string The ARN of the IAM role that will be attached to compute instances. Table 8.17. Optional arguments inherited from parent commands Option Definition --help Shows help for this command. --debug Enables debug mode. --interactive Enables interactive mode. --profile Specifies an AWS profile (string) from your credentials file. Examples Create a cluster named mycluster .
USD rosa create cluster --cluster-name=mycluster Create a cluster with a specific AWS region. USD rosa create cluster --cluster-name=mycluster --region=us-east-2 Create a cluster with autoscaling enabled on the default worker machine pool. USD rosa create cluster --cluster-name=mycluster --region=us-east-1 --enable-autoscaling --min-replicas=2 --max-replicas=5 8.2.3.5. create external-auth-provider Add an external identity provider instead of the OpenShift OAuth2 server. Important You can only use external authentication providers on ROSA with HCP clusters. Syntax USD rosa create external-auth-provider --cluster=<cluster_name> | <cluster_id> [arguments] Table 8.18. Arguments Option Definition --claim-mapping-groups-claim <string> Required. Describes rules on how to transform information from an ID token into a cluster identity. --claim-validation-rule <strings> Rules that are applied to validate token claims to authenticate users. The input will be in a <claim>:<required_value> format. To have multiple claim validation rules, you can separate the values by , . For example, <claim>:<required_value>,<claim>:<required_value> . --claim-mapping-username-claim <string> The name of the claim that should be used to construct user names for the cluster identity. --cluster <cluster_name>|<cluster_id> Required. The name or ID of the cluster to which the IDP will be added. --console-client-id <string> The identifier of the OIDC client from the OIDC provider for the OpenShift Cluster Manager web console. --console-client-secret <string> The secret that is associated with the console application registration. --issuer-audiences <strings> An array of audiences to check the incoming tokens against. Valid tokens must include at least one of these values in their audience claim. --issuer-ca-file <string> The path to the PEM-encoded certificate file to use when making requests to the server. --issuer-url <string> The serving URL of the token issuer. --name <string> A name that is used to refer to the external authentication provider. Table 8.19. Optional arguments inherited from parent commands Option Definition --help Shows help for this command. --debug Enables debug mode. --interactive Enables interactive mode. --profile Specifies an AWS profile string from your credentials file. Examples Add a Microsoft Entra ID identity provider to a cluster named mycluster . USD rosa create external-auth-provider --cluster=mycluster --name <provider_name> --issuer-audiences <audience_id> --issuer-url <issuing id> --claim-mapping-username-claim email --claim-mapping-groups-claim groups 8.2.3.6. create idp Add an identity provider (IDP) to define how users log in to a cluster. Syntax USD rosa create idp --cluster=<cluster_name> | <cluster_id> [arguments] Table 8.20. Arguments Option Definition --cluster <cluster_name>|<cluster_id> Required. The name or ID of the cluster to which the IDP will be added. --ca <path_to_file> The path to the PEM-encoded certificate file to use when making requests to the server, for example, /usr/share/cert.pem . --client-id The client ID (string) from the registered application. --client-secret The client secret (string) from the registered application. --mapping-method Specifies how new identities (string) are mapped to users when they log in. Default: claim --name The name (string) for the identity provider. --type The type (string) of identity provider. Options: github , gitlab , google , ldap , openid Table 8.21.
GitHub arguments Option Definition --hostname The optional domain (string) that is used with a hosted instance of GitHub Enterprise. --organizations Specifies the organizations for login access. Only users that are members of at least one of the listed organizations (string) are allowed to log in. --teams Specifies the teams for login access. Only users that are members of at least one of the listed teams (string) are allowed to log in. The format is <org>/<team> . Table 8.22. GitLab arguments Option Definition --host-url The host URL (string) of a GitLab provider. Default: https://gitlab.com Table 8.23. Google arguments Option Definition --hosted-domain Restricts users to a Google Apps domain (string). Table 8.24. LDAP arguments Option Definition --bind-dn The domain name (string) to bind with during the search phase. --bind-password The password (string) to bind with during the search phase. --email-attributes The list (string) of attributes whose values should be used as the email address. --id-attributes The list (string) of attributes whose values should be used as the user ID. Default: dn --insecure Does not make TLS connections to the server. --name-attributes The list (string) of attributes whose values should be used as the display name. Default: cn --url An RFC 2255 URL (string) which specifies the LDAP search parameters that are used. --username-attributes The list (string) of attributes whose values should be used as the preferred username. Default: uid Table 8.25. OpenID arguments Option Definition --email-claims The list (string) of claims that are used as the email address. --extra-scopes The list (string) of scopes to request, in addition to the openid scope, during the authorization token request. --issuer-url The URL (string) that the OpenID provider asserts as the issuer identifier. It must use the HTTPS scheme with no URL query parameters or fragment. --name-claims The list (string) of claims that are used as the display name. --username-claims The list (string) of claims that are used as the preferred username when provisioning a user. --groups-claims The list (string) of claims that are used as the group names. Table 8.26. Optional arguments inherited from parent commands Option Definition --help Shows help for this command. --debug Enables debug mode. --interactive Enables interactive mode. --profile Specifies an AWS profile (string) from your credentials file. Examples Add a GitHub identity provider to a cluster named mycluster . USD rosa create idp --type=github --cluster=mycluster Add an identity provider following interactive prompts. USD rosa create idp --cluster=mycluster --interactive 8.2.3.7. create ingress Add an ingress endpoint to enable API access to the cluster. Syntax USD rosa create ingress --cluster=<cluster_name> | <cluster_id> [arguments] Table 8.27. Arguments Option Definition --cluster <cluster_name>|<cluster_id> Required: The name or ID of the cluster to which the ingress will be added. --label-match The label match (string) for ingress. The format must be a comma-delimited list of key=value pairs. If no label is specified, all routes are exposed on both routers. --private Restricts the application route to direct, private connectivity. Table 8.28. Optional arguments inherited from parent commands Option Definition --help Shows help for this command. --debug Enables debug mode. --interactive Enables interactive mode. --profile Specifies an AWS profile (string) from your credentials file.
Examples Add an internal ingress to a cluster named mycluster . USD rosa create ingress --private --cluster=mycluster Add a public ingress to a cluster named mycluster . USD rosa create ingress --cluster=mycluster Add an ingress with a route selector label match. USD rosa create ingress --cluster=mycluster --label-match=foo=bar,bar=baz 8.2.3.8. create kubeletconfig Create a custom KubeletConfig object to allow custom configuration of nodes in a cluster. Syntax USD rosa create kubeletconfig --cluster=<cluster_name|cluster_id> --name=<kubeletconfig_name> --pod-pids-limit=<number> [flags] Table 8.29. Flags Option Definition --pod-pids-limit <number> Required. The maximum number of PIDs for each node in the cluster. -c, --cluster <cluster_name>|<cluster_id> Required. The name or ID of the cluster in which to create the KubeletConfig object. --name Specifies a name for the KubeletConfig object. -i, --interactive Enable interactive mode. -h, --help Shows help for this command. For more information about setting the PID limit for the cluster, see Configuring PID limits . 8.2.3.9. create machinepool Add a machine pool to an existing cluster. Syntax USD rosa create machinepool --cluster=<cluster_name> | <cluster_id> --replicas=<number> --name=<machinepool_name> [arguments] Table 8.30. Arguments Option Definition --additional-security-group-ids <sec_group_id> The identifier of one or more additional security groups to use along with the default security groups for this machine pool. For more information on additional security groups, see the requirements for Security groups under Additional resources . --cluster <cluster_name>|<cluster_id> Required: The name or ID of the cluster to which the machine pool will be added. --disk-size Set the disk volume size for the machine pool, in GiB or TiB. The default is 300 GiB. For ROSA (classic architecture) clusters version 4.13 or earlier, the minimum disk size is 128 GiB, and the maximum is 1 TiB. For cluster versions 4.14 and later, the minimum is 128 GiB, and the maximum is 16 TiB. For ROSA with HCP clusters, the minimum disk size is 75 GiB, and the maximum is 16,384 GiB. --enable-autoscaling Enable or disable autoscaling of compute nodes. To enable autoscaling, use this argument with the --min-replicas and --max-replicas arguments. To disable autoscaling, use --enable-autoscaling=false with the --replicas argument. --instance-type The instance type (string) that should be used. Default: m5.xlarge --kubelet-configs <kubeletconfig_name> For Red Hat OpenShift Service on AWS (ROSA) with hosted control planes (HCP) clusters, the names of any KubeletConfig objects to apply to nodes in a machine pool. --labels The labels (string) for the machine pool. The format must be a comma-delimited list of key=value pairs. This list overwrites any modifications made to node labels on an ongoing basis. --max-replicas Specifies the maximum number of compute nodes when enabling autoscaling. --min-replicas Specifies the minimum number of compute nodes when enabling autoscaling. --max-surge For Red Hat OpenShift Service on AWS (ROSA) with hosted control planes (HCP) clusters, the max-surge parameter defines the number of new nodes that can be provisioned in excess of the desired number of replicas for the machine pool, as configured using the --replicas parameter, or as determined by the autoscaler when autoscaling is enabled. This can be an absolute number (for example, 2 ) or a percentage of the machine pool size (for example, 20% ), but must use the same unit as the max-unavailable parameter.
The default value is 1 , meaning that the maximum number of nodes in the machine pool during an upgrade is 1 plus the desired number of replicas for the machine pool. In this situation, one excess node can be provisioned before existing nodes need to be made unavailable. The number of nodes that can be provisioned simultaneously during an upgrade is max-surge plus max-unavailable . --max-unavailable For Red Hat OpenShift Service on AWS (ROSA) with hosted control planes (HCP) clusters, the max-unavailable parameter defines the number of nodes that can be made unavailable in a machine pool during an upgrade, before new nodes are provisioned. This can be an absolute number (for example, 2 ) or a percentage of the current replica count in the machine pool (for example, 20% ), but must use the same unit as the max-surge parameter. The default value is 0 , meaning that no outdated nodes are removed before new nodes are provisioned. The valid range for this value is from 0 to the current machine pool size, or from 0% to 100% . The total number of nodes that can be upgraded simultaneously during an upgrade is max-surge plus max-unavailable . --name Required: The name (string) for the machine pool. --replicas Required when autoscaling is not configured. The number (integer) of machines for this machine pool. --tags Apply user defined tags to all resources created by ROSA in AWS. Tags are comma separated, for example: 'key value, foo bar' . --taints Taints for the machine pool. This string value should be formatted as a comma-separated list of key=value:ScheduleType . This list will overwrite any modifications made to Node taints on an ongoing basis. Table 8.31. Optional arguments inherited from parent commands Option Definition --help Shows help for this command. --debug Enables debug mode. --interactive Enables interactive mode. --profile Specifies an AWS profile (string) from your credentials file. Examples Interactively add a machine pool to a cluster named mycluster . USD rosa create machinepool --cluster=mycluster --interactive Add a machine pool that is named mp-1 to a cluster with autoscaling enabled. USD rosa create machinepool --cluster=mycluster --enable-autoscaling --min-replicas=2 --max-replicas=5 --name=mp-1 Add a machine pool that is named mp-1 with 3 replicas of m5.xlarge to a cluster. USD rosa create machinepool --cluster=mycluster --replicas=3 --instance-type=m5.xlarge --name=mp-1 Add a machine pool ( mp-1 ) to a Red Hat OpenShift Service on AWS (ROSA) with hosted control planes (HCP) cluster, configuring 6 replicas and the following upgrade behavior: Allow up to 2 excess nodes to be provisioned during an upgrade. Ensure that no more than 3 nodes are unavailable during an upgrade. USD rosa create machinepool --cluster=mycluster --replicas=6 --name=mp-1 --max-surge=2 --max-unavailable=3 Add a machine pool with labels to a cluster. USD rosa create machinepool --cluster=mycluster --replicas=2 --instance-type=r5.2xlarge --labels=foo=bar,bar=baz --name=mp-1 Add a machine pool with tags to a cluster. USD rosa create machinepool --cluster=mycluster --replicas=2 --instance-type=r5.2xlarge --tags='foo bar,bar baz' --name=mp-1 8.2.3.10. create network Create a network that creates any necessary AWS resources through AWS CloudFormation templates. This helper command is intended to help create and configure a VPC for use with ROSA with HCP. This command also supports zero egress clusters. Important Running this command creates resources within your AWS account. 
Note For custom or advanced configurations, it is highly recommended to use the AWS CLI directly using the aws cloudformation command or create a new custom template with the required configurations. If you use a custom CloudFormation template with the ROSA CLI, the minimum required version is 1.2.47 or later. Syntax USD rosa create network [flags] Table 8.32. Arguments Option Definition <template-name> Allows you to use a custom template. Templates must be in the template folder, structured as templates/<template-name>/cloudformation.yaml . If no template name is provided, the command uses the default template. For binary builds, this template directory must be referenced manually after it is downloaded. Default CloudFormation template AWSTemplateFormatVersion: '2010-09-09' Description: CloudFormation template to create a ROSA Quickstart default VPC. This CloudFormation template may not work with rosa CLI versions later than 1.2.47. Please ensure that you are using the compatible CLI version before deploying this template. Parameters: AvailabilityZoneCount: Type: Number Description: "Number of Availability Zones to use" Default: 1 MinValue: 1 MaxValue: 3 Region: Type: String Description: "AWS Region" Default: "us-west-2" Name: Type: String Description: "Name prefix for resources" VpcCidr: Type: String Description: CIDR block for the VPC Default: '10.0.0.0/16' Conditions: HasAZ1: !Equals [!Ref AvailabilityZoneCount, 1] HasAZ2: !Equals [!Ref AvailabilityZoneCount, 2] HasAZ3: !Equals [!Ref AvailabilityZoneCount, 3] One: Fn::Or: - Condition: HasAZ1 - Condition: HasAZ2 - Condition: HasAZ3 Two: Fn::Or: - Condition: HasAZ3 - Condition: HasAZ2 Resources: VPC: Type: AWS::EC2::VPC Properties: CidrBlock: !Ref VpcCidr EnableDnsSupport: true EnableDnsHostnames: true Tags: - Key: Name Value: !Ref Name - Key: 'rosa_managed_policies' Value: 'true' - Key: 'rosa_hcp_policies' Value: 'true' - Key: 'service' Value: 'ROSA' S3VPCEndpoint: Type: AWS::EC2::VPCEndpoint Properties: VpcId: !Ref VPC ServiceName: !Sub "com.amazonaws.USD{Region}.s3" VpcEndpointType: Gateway RouteTableIds: - !Ref PublicRouteTable - !Ref PrivateRouteTable SubnetPublic1: Condition: One Type: AWS::EC2::Subnet Properties: VpcId: !Ref VPC CidrBlock: !Select [0, !Cidr [!Ref VpcCidr, 6, 8]] AvailabilityZone: !Select [0, !GetAZs ''] MapPublicIpOnLaunch: true Tags: - Key: Name Value: !Sub "USD{Name}-Public-Subnet-1" - Key: 'rosa_managed_policies' Value: 'true' - Key: 'rosa_hcp_policies' Value: 'true' - Key: 'service' Value: 'ROSA' - Key: 'kubernetes.io/role/elb' Value: '1' SubnetPrivate1: Condition: One Type: AWS::EC2::Subnet Properties: VpcId: !Ref VPC CidrBlock: !Select [1, !Cidr [!Ref VpcCidr, 6, 8]] AvailabilityZone: !Select [0, !GetAZs ''] MapPublicIpOnLaunch: false Tags: - Key: Name Value: !Sub "USD{Name}-Private-Subnet-1" - Key: 'rosa_managed_policies' Value: 'true' - Key: 'rosa_hcp_policies' Value: 'true' - Key: 'service' Value: 'ROSA' - Key: 'kubernetes.io/role/internal-elb' Value: '1' SubnetPublic2: Condition: Two Type: AWS::EC2::Subnet Properties: VpcId: !Ref VPC CidrBlock: !Select [2, !Cidr [!Ref VpcCidr, 6, 8]] AvailabilityZone: !Select [1, !GetAZs ''] MapPublicIpOnLaunch: true Tags: - Key: Name Value: !Sub "USD{Name}-Public-Subnet-2" - Key: 'rosa_managed_policies' Value: 'true' - Key: 'rosa_hcp_policies' Value: 'true' - Key: 'service' Value: 'ROSA' - Key: 'kubernetes.io/role/elb' Value: '1' SubnetPrivate2: Condition: Two Type: AWS::EC2::Subnet Properties: VpcId: !Ref VPC CidrBlock: !Select [3, !Cidr [!Ref VpcCidr, 6, 8]] 
AvailabilityZone: !Select [1, !GetAZs ''] MapPublicIpOnLaunch: false Tags: - Key: Name Value: !Sub "USD{Name}-Private-Subnet-2" - Key: 'rosa_managed_policies' Value: 'true' - Key: 'rosa_hcp_policies' Value: 'true' - Key: 'service' Value: 'ROSA' - Key: 'kubernetes.io/role/internal-elb' Value: '1' SubnetPublic3: Condition: HasAZ3 Type: AWS::EC2::Subnet Properties: VpcId: !Ref VPC CidrBlock: !Select [4, !Cidr [!Ref VpcCidr, 6, 8]] AvailabilityZone: !Select [2, !GetAZs ''] MapPublicIpOnLaunch: true Tags: - Key: Name Value: !Sub "USD{Name}-Public-Subnet-3" - Key: 'rosa_managed_policies' Value: 'true' - Key: 'rosa_hcp_policies' Value: 'true' - Key: 'service' Value: 'ROSA' - Key: 'kubernetes.io/role/elb' Value: '1' SubnetPrivate3: Condition: HasAZ3 Type: AWS::EC2::Subnet Properties: VpcId: !Ref VPC CidrBlock: !Select [5, !Cidr [!Ref VpcCidr, 6, 8]] AvailabilityZone: !Select [2, !GetAZs ''] MapPublicIpOnLaunch: false Tags: - Key: Name Value: !Sub "USD{Name}-Private-Subnet-3" - Key: 'rosa_managed_policies' Value: 'true' - Key: 'rosa_hcp_policies' Value: 'true' - Key: 'service' Value: 'ROSA' - Key: 'kubernetes.io/role/internal-elb' Value: '1' InternetGateway: Type: AWS::EC2::InternetGateway Properties: Tags: - Key: Name Value: !Ref Name - Key: 'rosa_managed_policies' Value: 'true' - Key: 'rosa_hcp_policies' Value: 'true' - Key: 'service' Value: 'ROSA' AttachGateway: Type: AWS::EC2::VPCGatewayAttachment Properties: VpcId: !Ref VPC InternetGatewayId: !Ref InternetGateway ElasticIP1: Type: AWS::EC2::EIP Properties: Domain: vpc Tags: - Key: Name Value: !Ref Name - Key: 'rosa_managed_policies' Value: 'true' - Key: 'rosa_hcp_policies' Value: 'true' - Key: 'service' Value: 'ROSA' ElasticIP2: Type: AWS::EC2::EIP Properties: Domain: vpc Tags: - Key: Name Value: !Ref Name - Key: 'rosa_managed_policies' Value: 'true' - Key: 'rosa_hcp_policies' Value: 'true' - Key: 'service' Value: 'ROSA' ElasticIP3: Condition: HasAZ3 Type: AWS::EC2::EIP Properties: Domain: vpc Tags: - Key: Name Value: !Ref Name - Key: 'rosa_managed_policies' Value: 'true' - Key: 'rosa_hcp_policies' Value: 'true' - Key: 'service' Value: 'ROSA' NATGateway1: Condition: One Type: 'AWS::EC2::NatGateway' Properties: AllocationId: !GetAtt ElasticIP1.AllocationId SubnetId: !Ref SubnetPublic1 Tags: - Key: Name Value: !Sub "USD{Name}-NAT-1" - Key: 'rosa_managed_policies' Value: 'true' - Key: 'rosa_hcp_policies' Value: 'true' - Key: 'service' Value: 'ROSA' NATGateway2: Condition: Two Type: 'AWS::EC2::NatGateway' Properties: AllocationId: !GetAtt ElasticIP2.AllocationId SubnetId: !Ref SubnetPublic2 Tags: - Key: Name Value: !Sub "USD{Name}-NAT-2" - Key: 'rosa_managed_policies' Value: 'true' - Key: 'rosa_hcp_policies' Value: 'true' - Key: 'service' Value: 'ROSA' NATGateway3: Condition: HasAZ3 Type: 'AWS::EC2::NatGateway' Properties: AllocationId: !GetAtt ElasticIP3.AllocationId SubnetId: !Ref SubnetPublic3 Tags: - Key: Name Value: !Sub "USD{Name}-NAT-3" - Key: 'rosa_managed_policies' Value: 'true' - Key: 'rosa_hcp_policies' Value: 'true' - Key: 'service' Value: 'ROSA' PublicRouteTable: Type: AWS::EC2::RouteTable Properties: VpcId: !Ref VPC Tags: - Key: Name Value: !Ref Name - Key: 'rosa_managed_policies' Value: 'true' - Key: 'rosa_hcp_policies' Value: 'true' - Key: 'service' Value: 'ROSA' PublicRoute: Type: AWS::EC2::Route DependsOn: AttachGateway Properties: RouteTableId: !Ref PublicRouteTable DestinationCidrBlock: 0.0.0.0/0 GatewayId: !Ref InternetGateway PrivateRouteTable: Type: AWS::EC2::RouteTable Properties: VpcId: !Ref VPC Tags: - Key: Name Value: 
!Sub "USD{Name}-Private-Route-Table" - Key: 'rosa_managed_policies' Value: 'true' - Key: 'rosa_hcp_policies' Value: 'true' - Key: 'service' Value: 'ROSA' PrivateRoute: Type: AWS::EC2::Route Properties: RouteTableId: !Ref PrivateRouteTable DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: !If - One - !Ref NATGateway1 - !If - Two - !Ref NATGateway2 - !If - HasAZ3 - !Ref NATGateway3 - !Ref "AWS::NoValue" PublicSubnetRouteTableAssociation1: Condition: One Type: AWS::EC2::SubnetRouteTableAssociation Properties: SubnetId: !Ref SubnetPublic1 RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation2: Condition: Two Type: AWS::EC2::SubnetRouteTableAssociation Properties: SubnetId: !Ref SubnetPublic2 RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation3: Condition: HasAZ3 Type: AWS::EC2::SubnetRouteTableAssociation Properties: SubnetId: !Ref SubnetPublic3 RouteTableId: !Ref PublicRouteTable PrivateSubnetRouteTableAssociation1: Condition: One Type: AWS::EC2::SubnetRouteTableAssociation Properties: SubnetId: !Ref SubnetPrivate1 RouteTableId: !Ref PrivateRouteTable PrivateSubnetRouteTableAssociation2: Condition: Two Type: AWS::EC2::SubnetRouteTableAssociation Properties: SubnetId: !Ref SubnetPrivate2 RouteTableId: !Ref PrivateRouteTable PrivateSubnetRouteTableAssociation3: Condition: HasAZ3 Type: AWS::EC2::SubnetRouteTableAssociation Properties: SubnetId: !Ref SubnetPrivate3 RouteTableId: !Ref PrivateRouteTable SecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: "Authorize inbound VPC traffic" VpcId: !Ref VPC SecurityGroupIngress: - IpProtocol: -1 FromPort: 0 ToPort: 0 CidrIp: "10.0.0.0/16" SecurityGroupEgress: - IpProtocol: -1 FromPort: 0 ToPort: 0 CidrIp: 0.0.0.0/0 Tags: - Key: Name Value: !Ref Name - Key: 'service' Value: 'ROSA' - Key: 'rosa_managed_policies' Value: 'true' - Key: 'rosa_hcp_policies' Value: 'true' EC2VPCEndpoint: Type: AWS::EC2::VPCEndpoint Properties: VpcId: !Ref VPC ServiceName: !Sub "com.amazonaws.USD{Region}.ec2" PrivateDnsEnabled: true VpcEndpointType: Interface SubnetIds: - !If [One, !Ref SubnetPrivate1, !Ref "AWS::NoValue"] - !If [Two, !Ref SubnetPrivate2, !Ref "AWS::NoValue"] - !If [HasAZ3, !Ref SubnetPrivate3, !Ref "AWS::NoValue"] SecurityGroupIds: - !Ref SecurityGroup KMSVPCEndpoint: Type: AWS::EC2::VPCEndpoint Properties: VpcId: !Ref VPC ServiceName: !Sub "com.amazonaws.USD{Region}.kms" PrivateDnsEnabled: true VpcEndpointType: Interface SubnetIds: - !If [One, !Ref SubnetPrivate1, !Ref "AWS::NoValue"] - !If [Two, !Ref SubnetPrivate2, !Ref "AWS::NoValue"] - !If [HasAZ3, !Ref SubnetPrivate3, !Ref "AWS::NoValue"] SecurityGroupIds: - !Ref SecurityGroup STSVPCEndpoint: Type: AWS::EC2::VPCEndpoint Properties: VpcId: !Ref VPC ServiceName: !Sub "com.amazonaws.USD{Region}.sts" PrivateDnsEnabled: true VpcEndpointType: Interface SubnetIds: - !If [One, !Ref SubnetPrivate1, !Ref "AWS::NoValue"] - !If [Two, !Ref SubnetPrivate2, !Ref "AWS::NoValue"] - !If [HasAZ3, !Ref SubnetPrivate3, !Ref "AWS::NoValue"] SecurityGroupIds: - !Ref SecurityGroup EcrApiVPCEndpoint: Type: AWS::EC2::VPCEndpoint Properties: VpcId: !Ref VPC ServiceName: !Sub "com.amazonaws.USD{Region}.ecr.api" PrivateDnsEnabled: true VpcEndpointType: Interface SubnetIds: - !If [One, !Ref SubnetPrivate1, !Ref "AWS::NoValue"] - !If [Two, !Ref SubnetPrivate2, !Ref "AWS::NoValue"] - !If [HasAZ3, !Ref SubnetPrivate3, !Ref "AWS::NoValue"] SecurityGroupIds: - !Ref SecurityGroup EcrDkrVPCEndpoint: Type: AWS::EC2::VPCEndpoint Properties: VpcId: !Ref VPC ServiceName: !Sub 
"com.amazonaws.USD{Region}.ecr.dkr" PrivateDnsEnabled: true VpcEndpointType: Interface SubnetIds: - !If [One, !Ref SubnetPrivate1, !Ref "AWS::NoValue"] - !If [Two, !Ref SubnetPrivate2, !Ref "AWS::NoValue"] - !If [HasAZ3, !Ref SubnetPrivate3, !Ref "AWS::NoValue"] SecurityGroupIds: - !Ref SecurityGroup Outputs: VPCId: Description: "VPC Id" Value: !Ref VPC Export: Name: !Sub "USD{Name}-VPCId" VPCEndpointId: Description: The ID of the VPC Endpoint Value: !Ref S3VPCEndpoint Export: Name: !Sub "USD{Name}-VPCEndpointId" PublicSubnets: Description: "Public Subnet Ids" Value: !Join [",", [!If [One, !Ref SubnetPublic1, !Ref "AWS::NoValue"], !If [Two, !Ref SubnetPublic2, !Ref "AWS::NoValue"], !If [HasAZ3, !Ref SubnetPublic3, !Ref "AWS::NoValue"]]] Export: Name: !Sub "USD{Name}-PublicSubnets" PrivateSubnets: Description: "Private Subnet Ids" Value: !Join [",", [!If [One, !Ref SubnetPrivate1, !Ref "AWS::NoValue"], !If [Two, !Ref SubnetPrivate2, !Ref "AWS::NoValue"], !If [HasAZ3, !Ref SubnetPrivate3, !Ref "AWS::NoValue"]]] Export: Name: !Sub "USD{Name}-PrivateSubnets" EIP1AllocationId: Description: Allocation ID for ElasticIP1 Value: !GetAtt ElasticIP1.AllocationId Export: Name: !Sub "USD{Name}-EIP1-AllocationId" EIP2AllocationId: Description: Allocation ID for ElasticIP2 Value: !GetAtt ElasticIP2.AllocationId Export: Name: !Sub "USD{Name}-EIP2-AllocationId" EIP3AllocationId: Condition: HasAZ3 Description: Allocation ID for ElasticIP3 Value: !GetAtt ElasticIP3.AllocationId Export: Name: !Sub "USD{Name}-EIP3-AllocationId" NatGatewayId: Description: The NAT Gateway IDs Value: !Join [",", [!If [One, !Ref NATGateway1, !Ref "AWS::NoValue"], !If [Two, !Ref NATGateway2, !Ref "AWS::NoValue"], !If [HasAZ3, !Ref NATGateway3, !Ref "AWS::NoValue"]]] Export: Name: !Sub "USD{Name}-NatGatewayId" InternetGatewayId: Description: The ID of the Internet Gateway Value: !Ref InternetGateway Export: Name: !Sub "USD{Name}-InternetGatewayId" PublicRouteTableId: Description: The ID of the public route table Value: !Ref PublicRouteTable Export: Name: !Sub "USD{Name}-PublicRouteTableId" PrivateRouteTableId: Description: The ID of the private route table Value: !Ref PrivateRouteTable Export: Name: !Sub "USD{Name}-PrivateRouteTableId" EC2VPCEndpointId: Description: The ID of the EC2 VPC Endpoint Value: !Ref EC2VPCEndpoint Export: Name: !Sub "USD{Name}-EC2VPCEndpointId" KMSVPCEndpointId: Description: The ID of the KMS VPC Endpoint Value: !Ref KMSVPCEndpoint Export: Name: !Sub "USD{Name}-KMSVPCEndpointId" STSVPCEndpointId: Description: The ID of the STS VPC Endpoint Value: !Ref STSVPCEndpoint Export: Name: !Sub "USD{Name}-STSVPCEndpointId" EcrApiVPCEndpointId: Description: The ID of the ECR API VPC Endpoint Value: !Ref EcrApiVPCEndpoint Export: Name: !Sub "USD{Name}-EcrApiVPCEndpointId" EcrDkrVPCEndpointId: Description: The ID of the ECR DKR VPC Endpoint Value: !Ref EcrDkrVPCEndpoint Export: Name: !Sub "USD{Name}-EcrDkrVPCEndpointId" Table 8.33. Flags Option Definition --template-dir Allows you to specify the path to the template directory. Overrides the OCM_TEMPLATE_DIR environment variable. Required if not running the command inside the template directory. --param Name Define the name of your network. A required parameter when using a custom template file. --param Region Define the region of your network. A required parameter when using a custom template file. --param <various> Available parameters depend on the template. Use --help when in the template directory to find available parameters. 
--mode=manual Provides AWS commands to create the network stack. Example Create a basic network with regular arguments and flags. USD rosa create network rosa-quickstart-default-vpc --param Tags=key1=value1,key2=value2 --param Name=example-stack --param Region=us-west-2 8.2.3.11. create ocm-role Create the required ocm-role resources for your cluster. Syntax USD rosa create ocm-role [flags] Table 8.34. Flags Option Definition --admin Enable admin capabilities for the role. --debug Enable debug mode. -i, --interactive Enable interactive mode. -m, --mode string How to perform the operation. Valid options are: auto : Resource changes will be automatically applied using the current AWS account manual : Commands necessary to modify AWS resources will be output to be run manually --path string The ARN path for the OCM role and policies. --permissions-boundary string The ARN of the policy that is used to set the permissions boundary for the OCM role. --prefix string User-defined prefix for all generated AWS resources. The default is ManagedOpenShift . --profile string Use a specific AWS profile from your credential file. -y, --yes Automatically answer yes to confirm operation. For more information about the OCM role created with the rosa create ocm-role command, see Account-wide IAM role and policy reference . 8.2.3.12. create user-role Create the required user-role resources for your cluster. Syntax USD rosa create user-role [flags] Table 8.35. Flags Option Definition --debug Enable debug mode. -i, --interactive Enable interactive mode. -m, --mode string How to perform the operation. Valid options are: auto : Resource changes will be automatically applied using the current AWS account manual : Commands necessary to modify AWS resources will be output to be run manually --path string The ARN path for the user role and policies. --permissions-boundary string The ARN of the policy that is used to set the permissions boundary for the user role. --prefix string User-defined prefix for all generated AWS resources. The default is ManagedOpenShift . --profile string Use a specific AWS profile from your credential file. -y, --yes Automatically answer yes to confirm operation. For more information about the user role created with the rosa create user-role command, see Understanding AWS account association . 8.2.4. Additional resources See AWS Instance types for a list of supported instance types. See Account-wide IAM role and policy reference for a list of IAM roles needed for cluster creation. See Understanding AWS account association for more information about the OCM role and user role. See Additional custom security groups for information about security group requirements. 8.2.5. Edit objects This section describes the edit commands for clusters and resources. 8.2.5.1. edit cluster Allows edits to an existing cluster. Syntax USD rosa edit cluster --cluster=<cluster_name> | <cluster_id> [arguments] Table 8.36. Arguments Option Definition --additional-allowed-principals <arn> A comma-separated list of additional allowed principal ARNs to be added to the hosted control plane's VPC endpoint service to enable additional VPC endpoint connection requests to be automatically accepted. --cluster Required: The name or ID (string) of the cluster to edit. --private Restricts a primary API endpoint to direct, private connectivity. --enable-delete-protection=true Enables the delete protection feature. --enable-delete-protection=false Disables the delete protection feature. Table 8.37.
Optional arguments inherited from parent commands Option Definition --help Shows help for this command. --debug Enables debug mode. --interactive Enables interactive mode. --profile Specifies an AWS profile (string) from your credentials file. Examples Edit a cluster named mycluster to make it private. USD rosa edit cluster --cluster=mycluster --private Edit all cluster options interactively on a cluster named mycluster . USD rosa edit cluster --cluster=mycluster --interactive 8.2.5.2. edit ingress Edits the default application router for a cluster. Note For information about editing non-default application routers, see Additional resources . Syntax USD rosa edit ingress --cluster=<cluster_name> | <cluster_id> [arguments] Table 8.38. Arguments Option Definition --cluster Required: The name or ID (string) of the cluster on which the ingress will be edited. --cluster-routes-hostname Components route hostname for OAuth, console, and download. --cluster-routes-tls-secret-ref Components route TLS secret reference for OAuth, console, and download. --excluded-namespaces Excluded namespaces for ingress. Format is a comma-separated list value1, value2... . If no values are specified, all namespaces will be exposed. --label-match The label match (string) for ingress. The format must be a comma-delimited list of key=value pairs. If no label is specified, all routes are exposed on both routers. --lb-type Type of Load Balancer. Options are classic , nlb . --namespace-ownership-policy Namespace Ownership Policy for ingress. Options are Strict and InterNamespaceAllowed . Default is Strict . --private Restricts the application route to direct, private connectivity. --route-selector Route Selector for ingress. Format is a comma-separated list of key=value. If no label is specified, all routes will be exposed on both routers. For legacy ingress support, these are inclusion labels; otherwise, they are treated as exclusion labels. --wildcard-policy Wildcard Policy for ingress. Options are WildcardsDisallowed and WildcardsAllowed . Default is WildcardsDisallowed . Table 8.39. Optional arguments inherited from parent commands Option Definition --help Shows help for this command. --debug Enables debug mode. --interactive Enables interactive mode. --profile Specifies an AWS profile (string) from your credentials file. Examples Make an additional ingress with the ID a1b2 as a private connection on a cluster named mycluster . USD rosa edit ingress --private --cluster=mycluster a1b2 Update the router selectors for the additional ingress with the ID a1b2 on a cluster named mycluster . USD rosa edit ingress --label-match=foo=bar --cluster=mycluster a1b2 Update the default ingress using the sub-domain identifier apps on a cluster named mycluster . USD rosa edit ingress --private=false --cluster=mycluster apps Update the load balancer type of the apps2 ingress. USD rosa edit ingress --lb-type=nlb --cluster=mycluster apps2 8.2.5.3. edit kubeletconfig Edit a custom KubeletConfig object in a cluster. Syntax USD rosa edit kubeletconfig --cluster=<cluster_name|cluster_id> --name=<kubeletconfig_name> --pod-pids-limit=<number> [flags] Table 8.40. Flags Option Definition -c, --cluster <cluster_name>|<cluster_id> Required. The name or ID of the cluster for which the KubeletConfig object will be edited. -i, --interactive Enable interactive mode. --pod-pids-limit <number> Required. The maximum number of PIDs for each node in the cluster. --name Specifies a name for the KubeletConfig object. -h, --help Shows help for this command.
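For example, to raise the per-node PID limit on a cluster named mycluster , you might run a command such as the following, built from the syntax above; the KubeletConfig object name set-high-pids and the limit value 8192 are illustrative values only: USD rosa edit kubeletconfig --cluster=mycluster --name=set-high-pids --pod-pids-limit=8192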
For more information about setting the PID limit for the cluster, see Configuring PID limits . 8.2.5.4. edit machinepool Allows edits to the machine pool in a cluster. Syntax USD rosa edit machinepool --cluster=<cluster_name_or_id> <machinepool_name> [arguments] Table 8.41. Arguments Option Definition --cluster Required: The name or ID (string) of the cluster on which the machine pool will be edited. --enable-autoscaling Enable or disable autoscaling of compute nodes. To enable autoscaling, use this argument with the --min-replicas and --max-replicas arguments. To disable autoscaling, use --enable-autoscaling=false with the --replicas argument. --labels The labels (string) for the machine pool. The format must be a comma-delimited list of key=value pairs. Editing this value only affects newly created nodes of the machine pool, which are created by increasing the node number, and does not affect the existing nodes. This list overwrites any modifications made to node labels on an ongoing basis. --kubelet-configs <kubeletconfig_name> For Red Hat OpenShift Service on AWS (ROSA) with hosted control planes (HCP) clusters, the names of any KubeletConfig objects to apply to nodes in a machine pool. --max-replicas Specifies the maximum number of compute nodes when enabling autoscaling. --min-replicas Specifies the minimum number of compute nodes when enabling autoscaling. --max-surge For Red Hat OpenShift Service on AWS (ROSA) with hosted control planes (HCP) clusters, the max-surge parameter defines the number of new nodes that can be provisioned in excess of the desired number of replicas for the machine pool, as configured using the --replicas parameter, or as determined by the autoscaler when autoscaling is enabled. This can be an absolute number (for example, 2 ) or a percentage of the machine pool size (for example, 20% ), but must use the same unit as the max-unavailable parameter. The default value is 1 , meaning that the maximum number of nodes in the machine pool during an upgrade is 1 plus the desired number of replicas for the machine pool. In this situation, one excess node can be provisioned before existing nodes need to be made unavailable. The number of nodes that can be provisioned simultaneously during an upgrade is max-surge plus max-unavailable . --max-unavailable For Red Hat OpenShift Service on AWS (ROSA) with hosted control planes (HCP) clusters, the max-unavailable parameter defines the number of nodes that can be made unavailable in a machine pool during an upgrade, before new nodes are provisioned. This can be an absolute number (for example, 2 ) or a percentage of the current replica count in the machine pool (for example, 20% ), but must use the same unit as the max-surge parameter. The default value is 0 , meaning that no outdated nodes are removed before new nodes are provisioned. The valid range for this value is from 0 to the current machine pool size, or from 0% to 100% . The total number of nodes that can be upgraded simultaneously during an upgrade is max-surge plus max-unavailable . --node-drain-grace-period Specifies the node drain grace period when upgrading or replacing the machine pool. (This is for ROSA with HCP clusters only.) --replicas Required when autoscaling is not configured. The number (integer) of machines for this machine pool. --taints Taints for the machine pool. This string value should be formatted as a comma-separated list of key=value:ScheduleType .
Editing this value only affects newly created nodes of the machine pool, which are created by increasing the node number, and does not affect the existing nodes. This list overwrites any modifications made to Node taints on an ongoing basis. Table 8.42. Optional arguments inherited from parent commands Option Definition --help Shows help for this command. --debug Enables debug mode. --interactive Enables interactive mode. --profile Specifies an AWS profile (string) from your credentials file. Examples Set 4 replicas on a machine pool named mp1 on a cluster named mycluster . USD rosa edit machinepool --cluster=mycluster --replicas=4 mp1 Enable autoscaling on a machine pool named mp1 on a cluster named mycluster . USD rosa edit machinepool --cluster=mycluster --enable-autoscaling --min-replicas=3 --max-replicas=5 mp1 Disable autoscaling on a machine pool named mp1 on a cluster named mycluster . USD rosa edit machinepool --cluster=mycluster --enable-autoscaling=false --replicas=3 mp1 Modify the autoscaling range on a machine pool named mp1 on a cluster named mycluster . USD rosa edit machinepool --max-replicas=9 --cluster=mycluster mp1 On Red Hat OpenShift Service on AWS (ROSA) with hosted control planes (HCP) clusters, edit the mp1 machine pool to add the following behavior during upgrades: Allow up to 2 excess nodes to be provisioned during an upgrade. Ensure that no more than 3 nodes are unavailable during an upgrade. USD rosa edit machinepool --cluster=mycluster mp1 --max-surge=2 --max-unavailable=3 Associate a KubeletConfig object with an existing high-pid-pool machine pool on a ROSA with HCP cluster. USD rosa edit machinepool -c mycluster --kubelet-configs=set-high-pids high-pid-pool 8.2.6. Additional resources See Configuring the Ingress Controller for information regarding editing non-default application routers. 8.2.7. Delete objects This section describes the delete commands for clusters and resources. 8.2.7.1. delete admin Deletes a cluster administrator from a specified cluster. Syntax USD rosa delete admin --cluster=<cluster_name> | <cluster_id> Table 8.43. Arguments Option Definition --cluster Required: The name or ID (string) of the cluster from which the cluster administrator will be deleted. Table 8.44. Optional arguments inherited from parent commands Option Definition --help Shows help for this command. --debug Enables debug mode. --interactive Enables interactive mode. --profile Specifies an AWS profile (string) from your credentials file. Example Delete a cluster administrator from a cluster named mycluster . USD rosa delete admin --cluster=mycluster 8.2.7.2. delete cluster Deletes a cluster. Syntax USD rosa delete cluster --cluster=<cluster_name> | <cluster_id> [arguments] Table 8.45. Arguments Option Definition --cluster Required: The name or ID (string) of the cluster to delete. --watch Watches the cluster uninstallation logs. --best-effort Skips steps in the cluster destruction chain that are known to cause the cluster deletion process to fail. You should use this option with care and it is recommended that you manually check your AWS account for any resources that might be left over after using --best-effort . Table 8.46. Optional arguments inherited from parent commands Option Definition --help Shows help for this command. --debug Enables debug mode. --interactive Enables interactive mode. --profile Specifies an AWS profile (string) from your credentials file. --yes Automatically answers yes to confirm the operation. Examples Delete a cluster named mycluster .
USD rosa delete cluster --cluster=mycluster 8.2.7.3. delete external-auth-provider Deletes an external authentication provider from a cluster. Syntax USD rosa delete external-auth-provider <name_of_external_auth_provider> --cluster=<cluster_name> | <cluster_id> [arguments] Table 8.47. Arguments Option Definition --cluster Required. The name or ID string of the cluster the external auth provider will be deleted from. Table 8.48. Optional arguments inherited from parent commands Option Definition --help Shows help for this command. --debug Enables debug mode. --interactive Enables interactive mode. --profile Specifies an AWS profile string from your credentials file. --yes Automatically answers yes to confirm the operation. Example Delete an external authentication provider named exauth-1 from a cluster named mycluster . USD rosa delete external-auth-provider exauth-1 --cluster=mycluster 8.2.7.4. delete idp Deletes a specific identity provider (IDP) from a cluster. Syntax USD rosa delete idp --cluster=<cluster_name> | <cluster_id> [arguments] Table 8.49. Arguments Option Definition --cluster Required: The name or ID (string) of the cluster from which the IDP will be deleted. Table 8.50. Optional arguments inherited from parent commands Option Definition --help Shows help for this command. --debug Enables debug mode. --interactive Enables interactive mode. --profile Specifies an AWS profile (string) from your credentials file. --yes Automatically answers yes to confirm the operation. Example Delete an identity provider named github from a cluster named mycluster . USD rosa delete idp github --cluster=mycluster 8.2.7.5. delete ingress Deletes a non-default application router (ingress) from a cluster. Syntax USD rosa delete ingress --cluster=<cluster_name> | <cluster_id> [arguments] Table 8.51. Arguments Option Definition --cluster Required: The name or ID (string) of the cluster from which the ingress will be deleted. Table 8.52. Optional arguments inherited from parent commands Option Definition --help Shows help for this command. --debug Enables debug mode. --interactive Enables interactive mode. --profile Specifies an AWS profile (string) from your credentials file. --yes Automatically answers yes to confirm the operation. Examples Delete an ingress with the ID a1b2 from a cluster named mycluster . USD rosa delete ingress --cluster=mycluster a1b2 Delete a secondary ingress with the subdomain name apps2 from a cluster named mycluster . USD rosa delete ingress --cluster=mycluster apps2 8.2.7.6. delete kubeletconfig Delete a custom KubeletConfig object from a cluster. Syntax USD rosa delete kubeletconfig --cluster=<cluster_name|cluster_id> [flags] Table 8.53. Flags Option Definition -c, --cluster <cluster_name>|<cluster_id> Required. The name or ID of the cluster for which you want to delete the KubeletConfig object. -h, --help Shows help for this command. --name Specifies a name for the KubeletConfig object. -y, --yes Automatically answers yes to confirm the operation. 8.2.7.7. delete machinepool Deletes a machine pool from a cluster. Syntax USD rosa delete machinepool --cluster=<cluster_name> | <cluster_id> <machine_pool_id> Table 8.54. Arguments Option Definition --cluster Required: The name or ID (string) of the cluster that the machine pool will be deleted from. Table 8.55. Optional arguments inherited from parent commands Option Definition --help Shows help for this command. --debug Enables debug mode. --interactive Enables interactive mode.
--profile Specifies an AWS profile (string) from your credentials file. --yes Automatically answers yes to confirm the operation. Example Delete the machine pool with the ID mp-1 from a cluster named mycluster . USD rosa delete machinepool --cluster=mycluster mp-1 8.2.8. Install and uninstall add-ons This section describes how to install and uninstall Red Hat managed service add-ons to a cluster. 8.2.8.1. install addon Installs a managed service add-on on a cluster. Syntax USD rosa install addon --cluster=<cluster_name> | <cluster_id> [arguments] Table 8.56. Arguments Option Definition --cluster Required: The name or ID (string) of the cluster where the add-on will be installed. Table 8.57. Optional arguments inherited from parent commands Option Definition --help Shows help for this command. --debug Enables debug mode. --profile Uses a specific AWS profile (string) from your credentials file. --yes Automatically answers yes to confirm the operation. Example Add the dbaas-operator add-on installation to a cluster named mycluster . USD rosa install addon --cluster=mycluster dbaas-operator 8.2.8.2. uninstall addon Uninstalls a managed service add-on from a cluster. Syntax USD rosa uninstall addon --cluster=<cluster_name> | <cluster_id> [arguments] Table 8.58. Arguments Option Definition --cluster Required: The name or ID (string) of the cluster that the add-on will be uninstalled from. Table 8.59. Optional arguments inherited from parent commands Option Definition --help Shows help for this command. --debug Enables debug mode. --profile Uses a specific AWS profile (string) from your credentials file. --yes Automatically answers yes to confirm the operation. Example Remove the dbaas-operator add-on installation from a cluster named mycluster . USD rosa uninstall addon --cluster=mycluster dbaas-operator 8.2.9. List and describe objects This section describes the list and describe commands for clusters and resources. 8.2.9.1. list addon List the managed service add-on installations. Syntax USD rosa list addons --cluster=<cluster_name> | <cluster_id> Table 8.60. Arguments Option Definition --cluster Required: The name or ID (string) of the cluster to list the add-ons for. Table 8.61. Optional arguments inherited from parent commands Option Definition --help Shows help for this command. --debug Enables debug mode. --profile Specifies an AWS profile (string) from your credentials file. 8.2.9.2. List break glass credentials List all of the break glass credentials for a cluster. Syntax USD rosa list break-glass-credential [arguments] Table 8.62. Arguments Option Definition --cluster <cluster_name>|<cluster_id> Required. The name or ID of the cluster to which the break glass credentials have been added. Table 8.63. Optional arguments inherited from parent commands Option Definition --help Shows help for this command. --debug Enables debug mode. --profile Specifies an AWS profile (string) from your credentials file. Example List all of the break glass credentials for a cluster named mycluster . USD rosa list break-glass-credential --cluster=mycluster 8.2.9.3. list clusters List all of your clusters. Syntax USD rosa list clusters [arguments] Table 8.64. Arguments Option Definition --count The number (integer) of clusters to display. Default: 100 Table 8.65. Optional arguments inherited from parent commands Option Definition --help Shows help for this command. --debug Enables debug mode. --profile Specifies an AWS profile (string) from your credentials file. 8.2.9.4. 
list external-auth-provider List any external authentication providers for a cluster. Syntax USD rosa list external-auth-provider --cluster=<cluster_name> | <cluster_id> [arguments] Table 8.66. Arguments Option Definition --cluster Required: The name or ID (string) of the cluster that the external authentication providers will be listed for. Table 8.67. Optional arguments inherited from parent commands Option Definition --help Shows help for this command. --debug Enables debug mode. --profile Specifies an AWS profile (string) from your credentials file. Example List any external authentication providers for a cluster named mycluster . USD rosa list external-auth-provider --cluster=mycluster 8.2.9.5. list idps List all of the identity providers (IDPs) for a cluster. Syntax USD rosa list idps --cluster=<cluster_name> | <cluster_id> [arguments] Table 8.68. Arguments Option Definition --cluster Required: The name or ID (string) of the cluster that the IDPs will be listed for. Table 8.69. Optional arguments inherited from parent commands Option Definition --help Shows help for this command. --debug Enables debug mode. --profile Specifies an AWS profile (string) from your credentials file. Example List all identity providers (IDPs) for a cluster named mycluster . USD rosa list idps --cluster=mycluster 8.2.9.6. list ingresses List all of the API and ingress endpoints for a cluster. Syntax USD rosa list ingresses --cluster=<cluster_name> | <cluster_id> [arguments] Table 8.70. Arguments Option Definition --cluster Required: The name or ID (string) of the cluster that the ingress endpoints will be listed for. Table 8.71. Optional arguments inherited from parent commands Option Definition --help Shows help for this command. --debug Enables debug mode. --profile Specifies an AWS profile (string) from your credentials file. Example List all API and ingress endpoints for a cluster named mycluster . USD rosa list ingresses --cluster=mycluster 8.2.9.7. list instance-types List all of the available instance types for use with ROSA. Availability is based on the account's AWS quota. Syntax USD rosa list instance-types [arguments] Table 8.72. Optional arguments inherited from parent commands Option Definition --help Shows help for this command. --debug Enables debug mode. --output The output format. Allowed formats are json or yaml . --profile Specifies an AWS profile (string) from your credentials file. Example List all instance types. USD rosa list instance-types 8.2.9.8. list kubeletconfigs List the KubeletConfig objects configured on a cluster. Syntax USD rosa list kubeletconfigs --cluster=<cluster_name> | <cluster_id> [arguments] Table 8.73. Arguments Option Definition --cluster Required: The name or ID (string) of the cluster that the KubeletConfig objects will be listed for. Table 8.74. Optional arguments inherited from parent commands Option Definition --help Shows help for this command. --debug Enables debug mode. Example List all of the KubeletConfig objects on a cluster named mycluster . USD rosa list kubeletconfigs --cluster=mycluster 8.2.9.9. list machinepools List the machine pools configured on a cluster. Syntax USD rosa list machinepools --cluster=<cluster_name> | <cluster_id> [arguments] Table 8.75. Arguments Option Definition --cluster Required: The name or ID (string) of the cluster that the machine pools will be listed for. Table 8.76. Optional arguments inherited from parent commands Option Definition --help Shows help for this command. --debug Enables debug mode.
--profile Specifies an AWS profile (string) from your credentials file. Example List all of the machine pools on a cluster named mycluster . USD rosa list machinepools --cluster=mycluster 8.2.9.10. list regions List all of the available regions for the current AWS account. Syntax USD rosa list regions [arguments] Table 8.77. Arguments Option Definition --multi-az Lists regions that provide support for multiple availability zones. Table 8.78. Optional arguments inherited from parent commands Option Definition --help Shows help for this command. --debug Enables debug mode. --profile Specifies an AWS profile (string) from your credentials file. Example List all of the available regions. USD rosa list regions 8.2.9.11. list upgrades List all available and scheduled cluster version upgrades. Syntax USD rosa list upgrades --cluster=<cluster_name> | <cluster_id> [arguments] Table 8.79. Arguments Option Definition --cluster Required: The name or ID (string) of the cluster that the available upgrades will be listed for. Table 8.80. Optional arguments inherited from parent commands Option Definition --help Shows help for this command. --debug Enables debug mode. --profile Specifies an AWS profile (string) from your credentials file. Example List all of the available upgrades for a cluster named mycluster . USD rosa list upgrades --cluster=mycluster 8.2.9.12. list users List the cluster administrator and dedicated administrator users for a specified cluster. Syntax USD rosa list users --cluster=<cluster_name> | <cluster_id> [arguments] Table 8.81. Arguments Option Definition --cluster Required: The name or ID (string) of the cluster that the cluster administrators will be listed for. Table 8.82. Optional arguments inherited from parent commands Option Definition --help Shows help for this command. --debug Enables debug mode. --profile Specifies an AWS profile (string) from your credentials file. Example List all of the cluster administrators and dedicated administrators for a cluster named mycluster . USD rosa list users --cluster=mycluster 8.2.9.13. list versions List all of the OpenShift versions that are available for creating a cluster. Syntax USD rosa list versions [arguments] Table 8.83. Optional arguments inherited from parent commands Option Definition --help Shows help for this command. --debug Enables debug mode. --profile Specifies an AWS profile (string) from your credentials file. Example List all of the OpenShift Container Platform versions. USD rosa list versions 8.2.9.14. describe admin Show the details of a specified cluster-admin user and a command to log in to the cluster. Syntax USD rosa describe admin --cluster=<cluster_name> | <cluster_id> [arguments] Table 8.84. Arguments Option Definition --cluster Required: The name or ID (string) of the cluster to which the cluster-admin belongs. Table 8.85. Optional arguments inherited from parent commands Option Definition --help Shows help for this command. --debug Enables debug mode. --profile Specifies an AWS profile (string) from your credentials file. Example Describe the cluster-admin user for a cluster named mycluster . USD rosa describe admin --cluster=mycluster 8.2.9.15. describe addon Show the details of a managed service add-on. Syntax USD rosa describe addon <addon_id> | <addon_name> [arguments] Table 8.86. Optional arguments inherited from parent commands Option Definition --help Shows help for this command. --debug Enables debug mode. --profile Specifies an AWS profile (string) from your credentials file. 
Example Describe an add-on named dbaas-operator . USD rosa describe addon dbaas-operator 8.2.9.16. describe break glass credential Shows the details for a break glass credential for a specific cluster. Syntax USD rosa describe break-glass-credential --id=<break_glass_credential_id> --cluster=<cluster_name> | <cluster_id> [arguments] Table 8.87. Arguments Option Definition --cluster Required: The name or ID (string) of the cluster. --id Required: The ID (string) of the break glass credential. --kubeconfig Optional: Retrieves the kubeconfig from the break glass credential. Table 8.88. Optional arguments inherited from parent commands Option Definition --help Shows help for this command. --debug Enables debug mode. --profile Specifies an AWS profile (string) from your credentials file. 8.2.9.17. describe cluster Shows the details for a cluster. Syntax USD rosa describe cluster --cluster=<cluster_name> | <cluster_id> [arguments] Table 8.89. Arguments Option Definition --cluster Required: The name or ID (string) of the cluster. Table 8.90. Optional arguments inherited from parent commands Option Definition --help Shows help for this command. --debug Enables debug mode. --external-id <arn_string> An optional unique identifier that might be required when you assume a role in another account. --profile Specifies an AWS profile (string) from your credentials file. --get-role-policy-bindings Lists the policies that are attached to the STS roles assigned to the cluster. Example Describe a cluster named mycluster . USD rosa describe cluster --cluster=mycluster 8.2.9.18. describe kubeletconfig Show the details of a custom KubeletConfig object. Syntax USD rosa describe kubeletconfig --cluster=<cluster_name|cluster_id> [flags] Table 8.91. Flags Option Definition -c, --cluster <cluster_name>|<cluster_id> Required. The name or ID of the cluster for which you want to view the KubeletConfig object. -h, --help Shows help for this command. --name Optional. Specifies the name of the KubeletConfig object to describe. -o, --output Optional. The output format. Allowed formats are json or yaml . 8.2.9.19. describe machinepool Describes a specific machine pool configured on a cluster. Syntax USD rosa describe machinepool --cluster=[<cluster_name>|<cluster_id>] --machinepool=<machinepool_name> [arguments] Table 8.92. Arguments Option Definition --cluster Required: The name or ID (string) of the cluster. --machinepool Required: The name or ID (string) of the machinepool. Table 8.93. Optional arguments inherited from parent commands Option Definition --help Shows help for this command. --debug Enables debug mode. --profile Specifies an AWS profile (string) from your credentials file. Example Describe a machine pool named mymachinepool on a cluster named mycluster . USD rosa describe machinepool --cluster=mycluster --machinepool=mymachinepool 8.2.10. Revoke objects This section describes the revoke commands for clusters and resources. 8.2.10.1. revoke-break-glass-credential Revokes all break glass credentials from a specified hosted control plane cluster with external authentication enabled. Syntax USD rosa revoke break-glass-credential --cluster=<cluster_name> | <cluster_id> Table 8.94. Arguments Option Definition --cluster Required: The name or ID (string) of the cluster from which the break glass credentials will be deleted. Table 8.95. Optional arguments inherited from parent commands Option Definition --help Shows help for this command. --debug Enables debug mode. --profile Specifies an AWS profile (string) from your credentials file.
--yes Automatically answers yes to confirm the operation. Example Revoke the break glass credentials from a cluster named mycluster . USD rosa revoke break-glass-credential --cluster=mycluster 8.2.11. Upgrade and delete upgrade for objects This section describes the upgrade command usage for objects. 8.2.11.1. upgrade cluster Schedule a cluster upgrade. Syntax USD rosa upgrade cluster --cluster=<cluster_name> | <cluster_id> [arguments] Table 8.96. Arguments Option Definition --cluster Required: The name or ID (string) of the cluster that the upgrade will be scheduled for. --interactive Enables interactive mode. --version The version (string) of OpenShift Container Platform that the cluster will be upgraded to. --schedule-date The date (string) when the upgrade will run at the specified time in Coordinated Universal Time (UTC). Format: yyyy-mm-dd --schedule-time The time the upgrade will run on the specified date in Coordinated Universal Time (UTC). Format: HH:mm --node-drain-grace-period [1] Sets a grace period (string) for how long the pod disruption budget-protected workloads are respected during upgrades. After this grace period, any workloads protected by pod disruption budgets that have not been successfully drained from a node will be forcibly evicted. Default: 1 hour --control-plane [2] Upgrades the cluster's hosted control plane. [1] Classic clusters only. [2] ROSA with HCP clusters only. Table 8.97. Optional arguments inherited from parent commands Option Definition --help Shows help for this command. Examples Interactively schedule an upgrade on a cluster named mycluster . USD rosa upgrade cluster --cluster=mycluster --interactive Schedule a cluster upgrade to version 4.5.20 within the hour on a cluster named mycluster . USD rosa upgrade cluster --cluster=mycluster --version 4.5.20 8.2.11.2. delete cluster upgrade Cancel a scheduled cluster upgrade. Syntax USD rosa delete upgrade --cluster=<cluster_name> | <cluster_id> Table 8.98. Arguments Option Definition --cluster Required: The name or ID (string) of the cluster that the upgrade will be cancelled for. Table 8.99. Optional arguments inherited from parent commands Option Definition --help Shows help for this command. --debug Enables debug mode. --yes Automatically answers yes to confirm the operation. 8.2.11.3. upgrade machinepool Upgrades a specific machine pool configured on a ROSA with HCP cluster. Note The upgrade command for machinepools applies to ROSA with HCP clusters only. Syntax USD rosa upgrade machinepool --cluster=<cluster_name> <machinepool_name> Table 8.100. Arguments Option Definition --cluster Required: The name or ID (string) of the cluster. --schedule-date The date (string) when the upgrade will run at the specified time in Coordinated Universal Time (UTC). Format: yyyy-mm-dd --schedule-time The time the upgrade will run on the specified date in Coordinated Universal Time (UTC). Format: HH:mm Table 8.101. Optional arguments inherited from parent commands Option Definition --help Shows help for this command. --debug Enables debug mode. --profile Specifies an AWS profile (string) from your credentials file. Example Upgrade a machine pool on a cluster named mycluster . USD rosa upgrade machinepool --cluster=mycluster 8.2.11.4. delete machinepool upgrade Cancel a scheduled machinepool upgrade. Syntax USD rosa delete upgrade --cluster=<cluster_name> <machinepool_name> Table 8.102. Arguments Option Definition --cluster Required: The name or ID (string) of the cluster. Table 8.103.
Optional arguments inherited from parent commands Option Definition --help Shows help for this command. --debug Enables debug mode. --profile Specifies an AWS profile (string) from your credentials file. 8.2.11.5. upgrade roles Upgrades roles configured on a cluster. Syntax USD rosa upgrade roles --cluster=<cluster_id> Table 8.104. Arguments Option Definition --cluster Required: The name or ID (string) of the cluster. Table 8.105. Optional arguments inherited from parent commands Option Definition --help Shows help for this command. --debug Enables debug mode. --profile Specifies an AWS profile (string) from your credentials file. Example Upgrade roles on a cluster named mycluster . USD rosa upgrade roles --cluster=mycluster 8.3. Checking account and version information with the ROSA CLI Use the following commands to check your account and version information. 8.3.1. whoami Display information about your AWS and Red Hat accounts by using the following command syntax: Syntax USD rosa whoami [arguments] Table 8.106. Optional arguments inherited from parent commands Option Definition --help Shows help for this command. --debug Enables debug mode. --profile Specifies an AWS profile (string) from your credentials file. Example USD rosa whoami 8.3.2. version Display the version of your rosa CLI by using the following command syntax: Syntax USD rosa version [arguments] Table 8.107. Optional arguments inherited from parent commands Option Definition --help Shows help for this command. --debug Enables debug mode. --profile Specifies an AWS profile (string) from your credentials file. Example USD rosa version 8.4. Checking logs with the ROSA CLI Use the following commands to check your install and uninstall logs. 8.4.1. logs install Show the cluster install logs by using the following command syntax: Syntax USD rosa logs install --cluster=<cluster_name> | <cluster_id> [arguments] Table 8.108. Arguments Option Definition --cluster Required: The name or ID (string) of the cluster to get logs for. --tail The number (integer) of lines to get from the end of the log. Default: 2000 --watch Watches for changes after getting the logs. Table 8.109. Optional arguments inherited from parent commands Option Definition --help Shows help for this command. --debug Enables debug mode. --profile Specifies an AWS profile (string) from your credentials file. Examples Show the last 100 install log lines for a cluster named mycluster : USD rosa logs install mycluster --tail=100 Show the install logs for a cluster named mycluster : USD rosa logs install --cluster=mycluster 8.4.2. logs uninstall Show the cluster uninstall logs by using the following command syntax: Syntax USD rosa logs uninstall --cluster=<cluster_name> | <cluster_id> [arguments] Table 8.110. Arguments Option Definition --cluster The name or ID (string) of the cluster to get logs for. --tail The number (integer) of lines to get from the end of the log. Default: 2000 --watch Watches for changes after getting the logs. Table 8.111. Optional arguments inherited from parent commands Option Definition --help Shows help for this command. --debug Enables debug mode. --profile Specifies an AWS profile (string) from your credentials file. Example Show the last 100 uninstall logs for a cluster named mycluster : USD rosa logs uninstall --cluster=mycluster --tail=100 8.5. 
Least privilege permissions for ROSA CLI commands You can create roles with permissions that adhere to the principle of least privilege, in which the users assigned the roles have no other permissions assigned to them outside the scope of the specific action they need to perform. These policies contain only the minimum required permissions needed to perform specific actions by using the Red Hat OpenShift Service on AWS (ROSA) command line interface (CLI). Important Although the policies and commands presented in this topic will work in conjunction with one another, you might have other restrictions within your AWS environment that make the policies for these commands insufficient for your specific needs. Red Hat provides these examples as a baseline, assuming no other AWS Identity and Access Management (IAM) restrictions are present. For more information about configuring permissions, policies, and roles in the AWS console, see AWS Identity and Access Management in the AWS documentation. 8.5.1. Least privilege permissions for common ROSA CLI commands The following minimum required permissions for the listed ROSA CLI commands are applicable to both hosted control plane (HCP) and Classic clusters. 8.5.1.1. Create a managed OpenID Connect (OIDC) provider Run the following command with the specified permissions to create your managed OIDC provider by using auto mode. Input USD rosa create oidc-config --mode auto Policy { "Version": "2012-10-17", "Statement": [ { "Sid": "CreateOidcConfig", "Effect": "Allow", "Action": [ "iam:TagOpenIDConnectProvider", "iam:CreateOpenIDConnectProvider" ], "Resource": "*" } ] } 8.5.1.2. Create an unmanaged OpenID Connect provider Run the following command with the specified permissions to create your unmanaged OIDC provider by using auto mode. Input USD rosa create oidc-config --mode auto --managed=false Policy { "Version": "2012-10-17", "Statement": [ { "Sid": "VisualEditor0", "Effect": "Allow", "Action": [ "iam:GetRole", "iam:TagOpenIDConnectProvider", "iam:ListRoleTags", "iam:ListRoles", "iam:CreateOpenIDConnectProvider", "s3:CreateBucket", "s3:PutObject", "s3:PutBucketTagging", "s3:PutBucketPolicy", "s3:PutObjectTagging", "s3:PutBucketPublicAccessBlock", "secretsmanager:CreateSecret", "secretsmanager:TagResource" ], "Resource": "*" } ] } 8.5.1.3. List your account roles Run the following command with the specified permissions to list your account roles. Input USD rosa list account-roles Policy { "Version": "2012-10-17", "Statement": [ { "Sid": "ListAccountRoles", "Effect": "Allow", "Action": [ "iam:ListRoleTags", "iam:ListRoles" ], "Resource": "*" } ] } 8.5.1.4. List your Operator roles Run the following command with the specified permissions to list your Operator roles. Input USD rosa list operator-roles Policy { "Version": "2012-10-17", "Statement": [ { "Sid": "ListOperatorRoles", "Effect": "Allow", "Action": [ "iam:ListRoleTags", "iam:ListAttachedRolePolicies", "iam:ListRoles", "iam:ListPolicyTags" ], "Resource": "*" } ] } 8.5.1.5. List your OIDC providers Run the following command with the specified permissions to list your OIDC providers. Input USD rosa list oidc-providers Policy { "Version": "2012-10-17", "Statement": [ { "Sid": "ListOidcProviders", "Effect": "Allow", "Action": [ "iam:ListOpenIDConnectProviders", "iam:ListOpenIDConnectProviderTags" ], "Resource": "*" } ] } 8.5.1.6. Verify your quota Run the following command with the specified permissions to verify your quota.
Input USD rosa verify quota Policy { "Version": "2012-10-17", "Statement": [ { "Sid": "VerifyQuota", "Effect": "Allow", "Action": [ "elasticloadbalancing:DescribeAccountLimits", "servicequotas:ListServiceQuotas" ], "Resource": "*" } ] } 8.5.1.7. Delete your managed OIDC configuration Run the following command with the specified permissions to delete your managed OIDC configuration by using auto mode. Input USD rosa delete oidc-config --mode auto Policy { "Version": "2012-10-17", "Statement": [ { "Sid": "DeleteOidcConfig", "Effect": "Allow", "Action": [ "iam:ListOpenIDConnectProviders", "iam:DeleteOpenIDConnectProvider" ], "Resource": "*" } ] } 8.5.1.8. Delete your unmanaged OIDC configuration Run the following command with the specified permissions to delete your unmanaged OIDC configuration by using auto mode. Input USD rosa delete oidc-config --mode auto Policy { "Version": "2012-10-17", "Statement": [ { "Sid": "VisualEditor0", "Effect": "Allow", "Action": [ "iam:ListOpenIDConnectProviders", "iam:DeleteOpenIDConnectProvider", "secretsmanager:DeleteSecret", "s3:ListBucket", "s3:DeleteObject", "s3:DeleteBucket" ], "Resource": "*" } ] } 8.5.2. Least privilege permissions for common ROSA with HCP CLI commands The following examples show the least privilege permissions needed for the most common ROSA CLI commands when building ROSA with hosted control plane (HCP) clusters. 8.5.2.1. Create a cluster Run the following command with the specified permissions to create ROSA with HCP clusters. Input USD rosa create cluster --hosted-cp Policy { "Version": "2012-10-17", "Statement": [ { "Sid": "CreateCluster", "Effect": "Allow", "Action": [ "iam:GetRole", "iam:ListRoleTags", "iam:ListAttachedRolePolicies", "iam:ListRoles", "ec2:DescribeSubnets", "ec2:DescribeRouteTables", "ec2:DescribeAvailabilityZones" ], "Resource": "*" } ] } 8.5.2.2. Create your account roles and Operator roles Run the following command with the specified permissions to create account and Operator roles by using auto mode. Input USD rosa create account-roles --mode auto --hosted-cp Policy { "Version": "2012-10-17", "Statement": [ { "Sid": "CreateAccountRoles", "Effect": "Allow", "Action": [ "iam:GetRole", "iam:UpdateAssumeRolePolicy", "iam:ListRoleTags", "iam:GetPolicy", "iam:TagRole", "iam:ListRoles", "iam:CreateRole", "iam:AttachRolePolicy", "iam:ListPolicyTags" ], "Resource": "*" } ] } 8.5.2.3. Delete your account roles Run the following command with the specified permissions to delete the account roles in auto mode. Input USD rosa delete account-roles --mode auto Policy { "Version": "2012-10-17", "Statement": [ { "Sid": "DeleteAccountRoles", "Effect": "Allow", "Action": [ "iam:GetRole", "iam:ListInstanceProfilesForRole", "iam:DetachRolePolicy", "iam:ListAttachedRolePolicies", "iam:ListRoles", "iam:DeleteRole", "iam:ListRolePolicies" ], "Resource": "*" } ] } 8.5.2.4. Delete your Operator roles Run the following command with the specified permissions to delete your Operator roles in auto mode. Input USD rosa delete operator-roles --mode auto Policy { "Version": "2012-10-17", "Statement": [ { "Sid": "DeleteOperatorRoles", "Effect": "Allow", "Action": [ "iam:GetRole", "iam:DetachRolePolicy", "iam:ListAttachedRolePolicies", "iam:ListRoles", "iam:DeleteRole" ], "Resource": "*" } ] } 8.5.3. Least privilege permissions for common ROSA Classic CLI commands The following examples show the least privilege permissions needed for the most common ROSA CLI commands when building ROSA Classic clusters. 8.5.3.1. 
Create a cluster Run the following command with the specified permissions to create a ROSA Classic cluster with least privilege permissions. Input USD rosa create cluster Policy { "Version": "2012-10-17", "Statement": [ { "Sid": "CreateCluster", "Effect": "Allow", "Action": [ "iam:GetRole", "iam:ListRoleTags", "iam:ListRoles" ], "Resource": "*" } ] } 8.5.3.2. Create account roles and Operator roles Run the following command with the specified permissions to create account and Operator roles in `auto' mode. Input USD rosa create account-roles --mode auto --classic Policy { "Version": "2012-10-17", "Statement": [ { "Sid": "CreateAccountOperatorRoles", "Effect": "Allow", "Action": [ "iam:GetRole", "iam:UpdateAssumeRolePolicy", "iam:ListRoleTags", "iam:GetPolicy", "iam:TagRole", "iam:ListRoles", "iam:CreateRole", "iam:AttachRolePolicy", "iam:TagPolicy", "iam:CreatePolicy", "iam:ListPolicyTags" ], "Resource": "*" } ] } 8.5.3.3. Delete your account roles Run the following command with the specified permissions to delete the account roles in auto mode. Input USD rosa delete account-roles --mode auto Policy { "Version": "2012-10-17", "Statement": [ { "Sid": "VisualEditor0", "Effect": "Allow", "Action": [ "iam:GetRole", "iam:ListInstanceProfilesForRole", "iam:DetachRolePolicy", "iam:ListAttachedRolePolicies", "iam:ListRoles", "iam:DeleteRole", "iam:ListRolePolicies", "iam:GetPolicy", "iam:ListPolicyVersions", "iam:DeletePolicy" ], "Resource": "*" } ] } 8.5.3.4. Delete your Operator roles Run the following command with the specified permissions to delete the Operator roles in auto mode. Input USD rosa delete operator-roles --mode auto Policy { "Version": "2012-10-17", "Statement": [ { "Sid": "VisualEditor0", "Effect": "Allow", "Action": [ "iam:GetRole", "iam:ListInstanceProfilesForRole", "iam:DetachRolePolicy", "iam:ListAttachedRolePolicies", "iam:ListRoles", "iam:DeleteRole", "iam:ListRolePolicies", "iam:GetPolicy", "iam:ListPolicyVersions", "iam:DeletePolicy" ], "Resource": "*" } ] } 8.5.4. ROSA CLI commands with no required permissions The following ROSA CLI commands do not require permissions or policies to run. Instead, they require an access key and configured secret key or an attached role. Table 8.112. Commands Command Input list cluster USD rosa list cluster list versions USD rosa list versions describe cluster USD rosa describe cluster -c <cluster name> create admin USD rosa create admin -c <cluster name> list users USD rosa list users -c <cluster-name> list upgrades USD rosa list upgrades list OIDC configuration USD rosa list oidc-config list identity providers USD rosa list idps -c <cluster-name> list ingresses USD rosa list ingresses -c <cluster-name> 8.5.5. Additional resources For more information about AWS roles, see IAM roles . For more information about AWS policies and permissions, see Policies and permissions in IAM . | [
"aws sts get-caller-identity --output text",
"<aws_account_id> arn:aws:iam::<aws_account_id>:user/<username> <aws_user_id>",
"tar xvf rosa-linux.tar.gz",
"sudo mv rosa /usr/local/bin/rosa",
"rosa version",
"1.2.15 Your ROSA CLI is up to date.",
"rosa completion bash > /etc/bash_completion.d/rosa",
"rosa completion bash > /usr/local/etc/bash_completion.d/rosa",
"echo \"autoload -U compinit; compinit\" >> ~/.zshrc",
"rosa completion zsh > \"USD{fpath[1]}/_rosa\"",
"rosa completion fish > ~/.config/fish/completions/rosa.fish",
"PS> rosa completion powershell | Out-String | Invoke-Expression",
"rosa login [arguments]",
"rosa logout [arguments]",
"rosa verify permissions [arguments]",
"rosa verify permissions",
"rosa verify permissions --region=us-west-2",
"rosa verify quota [arguments]",
"rosa verify quota",
"rosa verify quota --region=us-west-2",
"rosa download rosa [arguments]",
"rosa download oc [arguments]",
"rosa download oc",
"rosa verify oc [arguments]",
"rosa verify oc",
"rosa init [arguments]",
"rosa init",
"rosa init --token=USDOFFLINE_ACCESS_TOKEN",
"rosa init --token=<token>",
"rosa create cluster --cluster-name=<cluster_name>",
"rosa create idp --cluster=<cluster_name> --type=<identity_provider> [arguments]",
"rosa grant user dedicated-admin --user=<idp_user_name> --cluster=<cluster_name>",
"rosa version",
"1.2.12 There is a newer release version '1.2.15', please consider updating: https://mirror.openshift.com/pub/openshift-v4/clients/rosa/latest/",
"rosa download rosa",
"tar -xzf rosa-linux.tar.gz",
"sudo mv rosa /usr/local/bin/rosa",
"rosa version",
"1.2.15 Your ROSA CLI is up to date.",
"rosa create cluster --cluster-name=<cluster_name> --debug",
"rosa download <software>",
"rosa --help",
"rosa version --help",
"rosa create cluster --cluster-name=<cluster_name> --interactive",
"rosa create cluster --cluster-name=<cluster_name> --profile=myAWSprofile",
"rosa version [arguments]",
"1.2.12 There is a newer release version '1.2.15', please consider updating: https://mirror.openshift.com/pub/openshift-v4/clients/rosa/latest/",
"rosa create cluster --cluster-name=mycluster",
"rosa edit cluster --cluster=mycluster --private",
"rosa delete ingress --cluster=mycluster",
"rosa list users --cluster=mycluster",
"rosa describe cluster --cluster=mycluster",
"rosa create account-roles [flags]",
"rosa create admin --cluster=<cluster_name>|<cluster_id>",
"rosa create admin --cluster=mycluster",
"rosa create break-glass-credential --cluster=<cluster_name> [arguments]",
"rosa create break-glass-credential --cluster=mycluster",
"rosa create break-glass-credential --cluster=mycluster -i",
"rosa create cluster --cluster-name=<cluster_name> [arguments]",
"rosa create cluster --cluster-name=mycluster",
"rosa create cluster --cluster-name=mycluster --region=us-east-2",
"rosa create cluster --cluster-name=mycluster -region=us-east-1 --enable-autoscaling --min-replicas=2 --max-replicas=5",
"rosa create external-auth-provider --cluster=<cluster_name> | <cluster_id> [arguments]",
"rosa create external-auth-provider --cluster=mycluster --name <provider_name> --issuer-audiences <audience_id> --issuer-url <issuing id> --claim-mapping-username-claim email --claim-mapping-groups-claim groups",
"rosa create idp --cluster=<cluster_name> | <cluster_id> [arguments]",
"rosa create idp --type=github --cluster=mycluster",
"rosa create idp --cluster=mycluster --interactive",
"rosa create ingress --cluster=<cluster_name> | <cluster_id> [arguments]",
"rosa create ingress --private --cluster=mycluster",
"rosa create ingress --cluster=mycluster",
"rosa create ingress --cluster=mycluster --label-match=foo=bar,bar=baz",
"rosa create kubeletconfig --cluster=<cluster_name|cluster_id> --name=<kubeletconfig_name> --pod-pids-limit=<number> [flags]",
"rosa create machinepool --cluster=<cluster_name> | <cluster_id> --replicas=<number> --name=<machinepool_name> [arguments]",
"rosa create machinepool --cluster=mycluster --interactive",
"rosa create machinepool --cluster=mycluster --enable-autoscaling --min-replicas=2 --max-replicas=5 --name=mp-1",
"rosa create machinepool --cluster=mycluster --replicas=3 --instance-type=m5.xlarge --name=mp-1",
"rosa create machinepool --cluster=mycluster --replicas=6 --name=mp-1 --max-surge=2 --max-unavailable=3",
"rosa create machinepool --cluster=mycluster --replicas=2 --instance-type=r5.2xlarge --labels=foo=bar,bar=baz --name=mp-1",
"rosa create machinepool --cluster=mycluster --replicas=2 --instance-type=r5.2xlarge --tags='foo bar,bar baz' --name=mp-1",
"rosa create network [flags]",
"AWSTemplateFormatVersion: '2010-09-09' Description: CloudFormation template to create a ROSA Quickstart default VPC. This CloudFormation template may not work with rosa CLI versions later than 1.2.47. Please ensure that you are using the compatible CLI version before deploying this template. Parameters: AvailabilityZoneCount: Type: Number Description: \"Number of Availability Zones to use\" Default: 1 MinValue: 1 MaxValue: 3 Region: Type: String Description: \"AWS Region\" Default: \"us-west-2\" Name: Type: String Description: \"Name prefix for resources\" VpcCidr: Type: String Description: CIDR block for the VPC Default: '10.0.0.0/16' Conditions: HasAZ1: !Equals [!Ref AvailabilityZoneCount, 1] HasAZ2: !Equals [!Ref AvailabilityZoneCount, 2] HasAZ3: !Equals [!Ref AvailabilityZoneCount, 3] One: Fn::Or: - Condition: HasAZ1 - Condition: HasAZ2 - Condition: HasAZ3 Two: Fn::Or: - Condition: HasAZ3 - Condition: HasAZ2 Resources: VPC: Type: AWS::EC2::VPC Properties: CidrBlock: !Ref VpcCidr EnableDnsSupport: true EnableDnsHostnames: true Tags: - Key: Name Value: !Ref Name - Key: 'rosa_managed_policies' Value: 'true' - Key: 'rosa_hcp_policies' Value: 'true' - Key: 'service' Value: 'ROSA' S3VPCEndpoint: Type: AWS::EC2::VPCEndpoint Properties: VpcId: !Ref VPC ServiceName: !Sub \"com.amazonaws.USD{Region}.s3\" VpcEndpointType: Gateway RouteTableIds: - !Ref PublicRouteTable - !Ref PrivateRouteTable SubnetPublic1: Condition: One Type: AWS::EC2::Subnet Properties: VpcId: !Ref VPC CidrBlock: !Select [0, !Cidr [!Ref VpcCidr, 6, 8]] AvailabilityZone: !Select [0, !GetAZs ''] MapPublicIpOnLaunch: true Tags: - Key: Name Value: !Sub \"USD{Name}-Public-Subnet-1\" - Key: 'rosa_managed_policies' Value: 'true' - Key: 'rosa_hcp_policies' Value: 'true' - Key: 'service' Value: 'ROSA' - Key: 'kubernetes.io/role/elb' Value: '1' SubnetPrivate1: Condition: One Type: AWS::EC2::Subnet Properties: VpcId: !Ref VPC CidrBlock: !Select [1, !Cidr [!Ref VpcCidr, 6, 8]] AvailabilityZone: !Select [0, !GetAZs ''] MapPublicIpOnLaunch: false Tags: - Key: Name Value: !Sub \"USD{Name}-Private-Subnet-1\" - Key: 'rosa_managed_policies' Value: 'true' - Key: 'rosa_hcp_policies' Value: 'true' - Key: 'service' Value: 'ROSA' - Key: 'kubernetes.io/role/internal-elb' Value: '1' SubnetPublic2: Condition: Two Type: AWS::EC2::Subnet Properties: VpcId: !Ref VPC CidrBlock: !Select [2, !Cidr [!Ref VpcCidr, 6, 8]] AvailabilityZone: !Select [1, !GetAZs ''] MapPublicIpOnLaunch: true Tags: - Key: Name Value: !Sub \"USD{Name}-Public-Subnet-2\" - Key: 'rosa_managed_policies' Value: 'true' - Key: 'rosa_hcp_policies' Value: 'true' - Key: 'service' Value: 'ROSA' - Key: 'kubernetes.io/role/elb' Value: '1' SubnetPrivate2: Condition: Two Type: AWS::EC2::Subnet Properties: VpcId: !Ref VPC CidrBlock: !Select [3, !Cidr [!Ref VpcCidr, 6, 8]] AvailabilityZone: !Select [1, !GetAZs ''] MapPublicIpOnLaunch: false Tags: - Key: Name Value: !Sub \"USD{Name}-Private-Subnet-2\" - Key: 'rosa_managed_policies' Value: 'true' - Key: 'rosa_hcp_policies' Value: 'true' - Key: 'service' Value: 'ROSA' - Key: 'kubernetes.io/role/internal-elb' Value: '1' SubnetPublic3: Condition: HasAZ3 Type: AWS::EC2::Subnet Properties: VpcId: !Ref VPC CidrBlock: !Select [4, !Cidr [!Ref VpcCidr, 6, 8]] AvailabilityZone: !Select [2, !GetAZs ''] MapPublicIpOnLaunch: true Tags: - Key: Name Value: !Sub \"USD{Name}-Public-Subnet-3\" - Key: 'rosa_managed_policies' Value: 'true' - Key: 'rosa_hcp_policies' Value: 'true' - Key: 'service' Value: 'ROSA' - Key: 'kubernetes.io/role/elb' Value: '1' SubnetPrivate3: 
Condition: HasAZ3 Type: AWS::EC2::Subnet Properties: VpcId: !Ref VPC CidrBlock: !Select [5, !Cidr [!Ref VpcCidr, 6, 8]] AvailabilityZone: !Select [2, !GetAZs ''] MapPublicIpOnLaunch: false Tags: - Key: Name Value: !Sub \"USD{Name}-Private-Subnet-3\" - Key: 'rosa_managed_policies' Value: 'true' - Key: 'rosa_hcp_policies' Value: 'true' - Key: 'service' Value: 'ROSA' - Key: 'kubernetes.io/role/internal-elb' Value: '1' InternetGateway: Type: AWS::EC2::InternetGateway Properties: Tags: - Key: Name Value: !Ref Name - Key: 'rosa_managed_policies' Value: 'true' - Key: 'rosa_hcp_policies' Value: 'true' - Key: 'service' Value: 'ROSA' AttachGateway: Type: AWS::EC2::VPCGatewayAttachment Properties: VpcId: !Ref VPC InternetGatewayId: !Ref InternetGateway ElasticIP1: Type: AWS::EC2::EIP Properties: Domain: vpc Tags: - Key: Name Value: !Ref Name - Key: 'rosa_managed_policies' Value: 'true' - Key: 'rosa_hcp_policies' Value: 'true' - Key: 'service' Value: 'ROSA' ElasticIP2: Type: AWS::EC2::EIP Properties: Domain: vpc Tags: - Key: Name Value: !Ref Name - Key: 'rosa_managed_policies' Value: 'true' - Key: 'rosa_hcp_policies' Value: 'true' - Key: 'service' Value: 'ROSA' ElasticIP3: Condition: HasAZ3 Type: AWS::EC2::EIP Properties: Domain: vpc Tags: - Key: Name Value: !Ref Name - Key: 'rosa_managed_policies' Value: 'true' - Key: 'rosa_hcp_policies' Value: 'true' - Key: 'service' Value: 'ROSA' NATGateway1: Condition: One Type: 'AWS::EC2::NatGateway' Properties: AllocationId: !GetAtt ElasticIP1.AllocationId SubnetId: !Ref SubnetPublic1 Tags: - Key: Name Value: !Sub \"USD{Name}-NAT-1\" - Key: 'rosa_managed_policies' Value: 'true' - Key: 'rosa_hcp_policies' Value: 'true' - Key: 'service' Value: 'ROSA' NATGateway2: Condition: Two Type: 'AWS::EC2::NatGateway' Properties: AllocationId: !GetAtt ElasticIP2.AllocationId SubnetId: !Ref SubnetPublic2 Tags: - Key: Name Value: !Sub \"USD{Name}-NAT-2\" - Key: 'rosa_managed_policies' Value: 'true' - Key: 'rosa_hcp_policies' Value: 'true' - Key: 'service' Value: 'ROSA' NATGateway3: Condition: HasAZ3 Type: 'AWS::EC2::NatGateway' Properties: AllocationId: !GetAtt ElasticIP3.AllocationId SubnetId: !Ref SubnetPublic3 Tags: - Key: Name Value: !Sub \"USD{Name}-NAT-3\" - Key: 'rosa_managed_policies' Value: 'true' - Key: 'rosa_hcp_policies' Value: 'true' - Key: 'service' Value: 'ROSA' PublicRouteTable: Type: AWS::EC2::RouteTable Properties: VpcId: !Ref VPC Tags: - Key: Name Value: !Ref Name - Key: 'rosa_managed_policies' Value: 'true' - Key: 'rosa_hcp_policies' Value: 'true' - Key: 'service' Value: 'ROSA' PublicRoute: Type: AWS::EC2::Route DependsOn: AttachGateway Properties: RouteTableId: !Ref PublicRouteTable DestinationCidrBlock: 0.0.0.0/0 GatewayId: !Ref InternetGateway PrivateRouteTable: Type: AWS::EC2::RouteTable Properties: VpcId: !Ref VPC Tags: - Key: Name Value: !Sub \"USD{Name}-Private-Route-Table\" - Key: 'rosa_managed_policies' Value: 'true' - Key: 'rosa_hcp_policies' Value: 'true' - Key: 'service' Value: 'ROSA' PrivateRoute: Type: AWS::EC2::Route Properties: RouteTableId: !Ref PrivateRouteTable DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: !If - One - !Ref NATGateway1 - !If - Two - !Ref NATGateway2 - !If - HasAZ3 - !Ref NATGateway3 - !Ref \"AWS::NoValue\" PublicSubnetRouteTableAssociation1: Condition: One Type: AWS::EC2::SubnetRouteTableAssociation Properties: SubnetId: !Ref SubnetPublic1 RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation2: Condition: Two Type: AWS::EC2::SubnetRouteTableAssociation Properties: SubnetId: !Ref SubnetPublic2 RouteTableId: 
!Ref PublicRouteTable PublicSubnetRouteTableAssociation3: Condition: HasAZ3 Type: AWS::EC2::SubnetRouteTableAssociation Properties: SubnetId: !Ref SubnetPublic3 RouteTableId: !Ref PublicRouteTable PrivateSubnetRouteTableAssociation1: Condition: One Type: AWS::EC2::SubnetRouteTableAssociation Properties: SubnetId: !Ref SubnetPrivate1 RouteTableId: !Ref PrivateRouteTable PrivateSubnetRouteTableAssociation2: Condition: Two Type: AWS::EC2::SubnetRouteTableAssociation Properties: SubnetId: !Ref SubnetPrivate2 RouteTableId: !Ref PrivateRouteTable PrivateSubnetRouteTableAssociation3: Condition: HasAZ3 Type: AWS::EC2::SubnetRouteTableAssociation Properties: SubnetId: !Ref SubnetPrivate3 RouteTableId: !Ref PrivateRouteTable SecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: \"Authorize inbound VPC traffic\" VpcId: !Ref VPC SecurityGroupIngress: - IpProtocol: -1 FromPort: 0 ToPort: 0 CidrIp: \"10.0.0.0/16\" SecurityGroupEgress: - IpProtocol: -1 FromPort: 0 ToPort: 0 CidrIp: 0.0.0.0/0 Tags: - Key: Name Value: !Ref Name - Key: 'service' Value: 'ROSA' - Key: 'rosa_managed_policies' Value: 'true' - Key: 'rosa_hcp_policies' Value: 'true' EC2VPCEndpoint: Type: AWS::EC2::VPCEndpoint Properties: VpcId: !Ref VPC ServiceName: !Sub \"com.amazonaws.USD{Region}.ec2\" PrivateDnsEnabled: true VpcEndpointType: Interface SubnetIds: - !If [One, !Ref SubnetPrivate1, !Ref \"AWS::NoValue\"] - !If [Two, !Ref SubnetPrivate2, !Ref \"AWS::NoValue\"] - !If [HasAZ3, !Ref SubnetPrivate3, !Ref \"AWS::NoValue\"] SecurityGroupIds: - !Ref SecurityGroup KMSVPCEndpoint: Type: AWS::EC2::VPCEndpoint Properties: VpcId: !Ref VPC ServiceName: !Sub \"com.amazonaws.USD{Region}.kms\" PrivateDnsEnabled: true VpcEndpointType: Interface SubnetIds: - !If [One, !Ref SubnetPrivate1, !Ref \"AWS::NoValue\"] - !If [Two, !Ref SubnetPrivate2, !Ref \"AWS::NoValue\"] - !If [HasAZ3, !Ref SubnetPrivate3, !Ref \"AWS::NoValue\"] SecurityGroupIds: - !Ref SecurityGroup STSVPCEndpoint: Type: AWS::EC2::VPCEndpoint Properties: VpcId: !Ref VPC ServiceName: !Sub \"com.amazonaws.USD{Region}.sts\" PrivateDnsEnabled: true VpcEndpointType: Interface SubnetIds: - !If [One, !Ref SubnetPrivate1, !Ref \"AWS::NoValue\"] - !If [Two, !Ref SubnetPrivate2, !Ref \"AWS::NoValue\"] - !If [HasAZ3, !Ref SubnetPrivate3, !Ref \"AWS::NoValue\"] SecurityGroupIds: - !Ref SecurityGroup EcrApiVPCEndpoint: Type: AWS::EC2::VPCEndpoint Properties: VpcId: !Ref VPC ServiceName: !Sub \"com.amazonaws.USD{Region}.ecr.api\" PrivateDnsEnabled: true VpcEndpointType: Interface SubnetIds: - !If [One, !Ref SubnetPrivate1, !Ref \"AWS::NoValue\"] - !If [Two, !Ref SubnetPrivate2, !Ref \"AWS::NoValue\"] - !If [HasAZ3, !Ref SubnetPrivate3, !Ref \"AWS::NoValue\"] SecurityGroupIds: - !Ref SecurityGroup EcrDkrVPCEndpoint: Type: AWS::EC2::VPCEndpoint Properties: VpcId: !Ref VPC ServiceName: !Sub \"com.amazonaws.USD{Region}.ecr.dkr\" PrivateDnsEnabled: true VpcEndpointType: Interface SubnetIds: - !If [One, !Ref SubnetPrivate1, !Ref \"AWS::NoValue\"] - !If [Two, !Ref SubnetPrivate2, !Ref \"AWS::NoValue\"] - !If [HasAZ3, !Ref SubnetPrivate3, !Ref \"AWS::NoValue\"] SecurityGroupIds: - !Ref SecurityGroup Outputs: VPCId: Description: \"VPC Id\" Value: !Ref VPC Export: Name: !Sub \"USD{Name}-VPCId\" VPCEndpointId: Description: The ID of the VPC Endpoint Value: !Ref S3VPCEndpoint Export: Name: !Sub \"USD{Name}-VPCEndpointId\" PublicSubnets: Description: \"Public Subnet Ids\" Value: !Join [\",\", [!If [One, !Ref SubnetPublic1, !Ref \"AWS::NoValue\"], !If [Two, !Ref SubnetPublic2, !Ref 
\"AWS::NoValue\"], !If [HasAZ3, !Ref SubnetPublic3, !Ref \"AWS::NoValue\"]]] Export: Name: !Sub \"USD{Name}-PublicSubnets\" PrivateSubnets: Description: \"Private Subnet Ids\" Value: !Join [\",\", [!If [One, !Ref SubnetPrivate1, !Ref \"AWS::NoValue\"], !If [Two, !Ref SubnetPrivate2, !Ref \"AWS::NoValue\"], !If [HasAZ3, !Ref SubnetPrivate3, !Ref \"AWS::NoValue\"]]] Export: Name: !Sub \"USD{Name}-PrivateSubnets\" EIP1AllocationId: Description: Allocation ID for ElasticIP1 Value: !GetAtt ElasticIP1.AllocationId Export: Name: !Sub \"USD{Name}-EIP1-AllocationId\" EIP2AllocationId: Description: Allocation ID for ElasticIP2 Value: !GetAtt ElasticIP2.AllocationId Export: Name: !Sub \"USD{Name}-EIP2-AllocationId\" EIP3AllocationId: Condition: HasAZ3 Description: Allocation ID for ElasticIP3 Value: !GetAtt ElasticIP3.AllocationId Export: Name: !Sub \"USD{Name}-EIP3-AllocationId\" NatGatewayId: Description: The NAT Gateway IDs Value: !Join [\",\", [!If [One, !Ref NATGateway1, !Ref \"AWS::NoValue\"], !If [Two, !Ref NATGateway2, !Ref \"AWS::NoValue\"], !If [HasAZ3, !Ref NATGateway3, !Ref \"AWS::NoValue\"]]] Export: Name: !Sub \"USD{Name}-NatGatewayId\" InternetGatewayId: Description: The ID of the Internet Gateway Value: !Ref InternetGateway Export: Name: !Sub \"USD{Name}-InternetGatewayId\" PublicRouteTableId: Description: The ID of the public route table Value: !Ref PublicRouteTable Export: Name: !Sub \"USD{Name}-PublicRouteTableId\" PrivateRouteTableId: Description: The ID of the private route table Value: !Ref PrivateRouteTable Export: Name: !Sub \"USD{Name}-PrivateRouteTableId\" EC2VPCEndpointId: Description: The ID of the EC2 VPC Endpoint Value: !Ref EC2VPCEndpoint Export: Name: !Sub \"USD{Name}-EC2VPCEndpointId\" KMSVPCEndpointId: Description: The ID of the KMS VPC Endpoint Value: !Ref KMSVPCEndpoint Export: Name: !Sub \"USD{Name}-KMSVPCEndpointId\" STSVPCEndpointId: Description: The ID of the STS VPC Endpoint Value: !Ref STSVPCEndpoint Export: Name: !Sub \"USD{Name}-STSVPCEndpointId\" EcrApiVPCEndpointId: Description: The ID of the ECR API VPC Endpoint Value: !Ref EcrApiVPCEndpoint Export: Name: !Sub \"USD{Name}-EcrApiVPCEndpointId\" EcrDkrVPCEndpointId: Description: The ID of the ECR DKR VPC Endpoint Value: !Ref EcrDkrVPCEndpoint Export: Name: !Sub \"USD{Name}-EcrDkrVPCEndpointId\"",
"rosa create network rosa-quickstart-default-vpc --param Tags=key1=value1,key2=value2 --param Name=example-stack --param Region=us-west-2",
"rosa create ocm-role [flags]",
"rosa create user-role [flags]",
"rosa edit cluster --cluster=<cluster_name> | <cluster_id> [arguments]",
"rosa edit cluster --cluster=mycluster --private",
"rosa edit cluster --cluster=mycluster --interactive",
"rosa edit ingress --cluster=<cluster_name> | <cluster_id> [arguments]",
"rosa edit ingress --private --cluster=mycluster a1b2",
"rosa edit ingress --label-match=foo=bar --cluster=mycluster a1b2",
"rosa edit ingress --private=false --cluster=mycluster apps",
"rosa edit ingress --lb-type=nlb --cluster=mycluster apps2",
"rosa edit kubeletconfig --cluster=<cluster_name|cluster_id> --name=<kubeletconfig_name> --pod-pids-limit=<number> [flags]",
"rosa edit machinepool --cluster=<cluster_name_or_id> <machinepool_name> [arguments]",
"rosa edit machinepool --cluster=mycluster --replicas=4 mp1",
"rosa edit machinepool --cluster=mycluster --enable-autoscaling --min-replicas=3 --max-replicas=5 mp1",
"rosa edit machinepool --cluster=mycluster --enable-autoscaling=false --replicas=3 mp1",
"rosa edit machinepool --max-replicas=9 --cluster=mycluster mp1",
"rosa edit machinepool --cluster=mycluster mp1 --max-surge=2 --max-unavailable=3",
"rosa edit machinepool -c mycluster --kubelet-configs=set-high-pids high-pid-pool",
"rosa delete admin --cluster=<cluster_name> | <cluster_id>",
"rosa delete admin --cluster=mycluster",
"rosa delete cluster --cluster=<cluster_name> | <cluster_id> [arguments]",
"rosa delete cluster --cluster=mycluster",
"rosa delete external-auth-provider <name_of_external_auth_provider> --cluster=<cluster_name> | <cluster_id> [arguments]",
"rosa delete external-auth-provider exauth-1 --cluster=mycluster",
"rosa delete idp --cluster=<cluster_name> | <cluster_id> [arguments]",
"rosa delete idp github --cluster=mycluster",
"rosa delete ingress --cluster=<cluster_name> | <cluster_id> [arguments]",
"rosa delete ingress --cluster=mycluster a1b2",
"rosa delete ingress --cluster=mycluster apps2",
"rosa delete kubeletconfig --cluster=<cluster_name|cluster_id> [flags]",
"rosa delete machinepool --cluster=<cluster_name> | <cluster_id> <machine_pool_id>",
"rosa delete machinepool --cluster=mycluster mp-1",
"rosa install addon --cluster=<cluster_name> | <cluster_id> [arguments]",
"rosa install addon --cluster=mycluster dbaas-operator",
"rosa uninstall addon --cluster=<cluster_name> | <cluster_id> [arguments]",
"rosa uninstall addon --cluster=mycluster dbaas-operator",
"rosa list addons --cluster=<cluster_name> | <cluster_id>",
"rosa list break-glass-credential [arguments]",
"rosa list break-glass-credential --cluster=mycluster",
"rosa list clusters [arguments]",
"rosa list external-auth-provider --cluster=<cluster_name> | <cluster_id> [arguments]",
"rosa list external-auth-provider --cluster=mycluster",
"rosa list idps --cluster=<cluster_name> | <cluster_id> [arguments]",
"rosa list idps --cluster=mycluster",
"rosa list ingresses --cluster=<cluster_name> | <cluster_id> [arguments]",
"rosa list ingresses --cluster=mycluster",
"rosa list instance-types [arguments]",
"rosa list instance-types",
"rosa list kubeletconfigs --cluster=<cluster_name> | <cluster_id> [arguments]",
"rosa list kubeletconfigs --cluster=mycluster",
"rosa list machinepools --cluster=<cluster_name> | <cluster_id> [arguments]",
"rosa list machinepools --cluster=mycluster",
"rosa list regions [arguments]",
"rosa list regions",
"rosa list upgrades --cluster=<cluster_name> | <cluster_id> [arguments]",
"rosa list upgrades --cluster=mycluster",
"rosa list users --cluster=<cluster_name> | <cluster_id> [arguments]",
"rosa list users --cluster=mycluster",
"rosa list versions [arguments]",
"rosa list versions",
"rosa describe admin --cluster=<cluster_name> | <cluster_id> [arguments]",
"rosa describe admin --cluster=mycluster",
"rosa describe addon <addon_id> | <addon_name> [arguments]",
"rosa describe addon dbaas-operator",
"rosa describe break-glass-credential --id=<break_glass_credential_id> --cluster=<cluster_name>| <cluster_id> [arguments]",
"rosa describe cluster --cluster=<cluster_name> | <cluster_id> [arguments]",
"rosa describe cluster --cluster=mycluster",
"rosa describe kubeletconfig --cluster=<cluster_name|cluster_id> [flags]",
"rosa describe machinepool --cluster=[<cluster_name>|<cluster_id>] --machinepool=<machinepool_name> [arguments]",
"rosa describe machinepool --cluster=mycluster --machinepool=mymachinepool",
"rosa revoke break-glass-credential --cluster=<cluster_name> | <cluster_id>",
"rosa revoke break-glass-credential --cluster=mycluster",
"rosa upgrade cluster --cluster=<cluster_name> | <cluster_id> [arguments]",
"rosa upgrade cluster --cluster=mycluster --interactive",
"rosa upgrade cluster --cluster=mycluster --version 4.5.20",
"rosa delete upgrade --cluster=<cluster_name> | <cluster_id>",
"rosa upgrade machinepool --cluster=<cluster_name> <machinepool_name>",
"rosa upgrade machinepool --cluster=mycluster",
"rosa delete upgrade --cluster=<cluster_name> <machinepool_name>",
"rosa upgrade roles --cluster=<cluster_id>",
"rosa upgrade roles --cluster=mycluster",
"rosa whoami [arguments]",
"rosa whoami",
"rosa version [arguments]",
"rosa version",
"rosa logs install --cluster=<cluster_name> | <cluster_id> [arguments]",
"rosa logs install mycluster --tail=100",
"rosa logs install --cluster=mycluster",
"rosa logs uninstall --cluster=<cluster_name> | <cluster_id> [arguments]",
"rosa logs uninstall --cluster=mycluster --tail=100",
"rosa create oidc-config --mode auto",
"{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Sid\": \"CreateOidcConfig\", \"Effect\": \"Allow\", \"Action\": [ \"iam:TagOpenIDConnectProvider\", \"iam:CreateOpenIDConnectProvider\" ], \"Resource\": \"*\" } ] }",
"rosa create oidc-config --mode auto --managed=false",
"{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Sid\": \"VisualEditor0\", \"Effect\": \"Allow\", \"Action\": [ \"iam:GetRole\", \"iam:TagOpenIDConnectProvider\", \"iam:ListRoleTags\", \"iam:ListRoles\", \"iam:CreateOpenIDConnectProvider\", \"s3:CreateBucket\", \"s3:PutObject\", \"s3:PutBucketTagging\", \"s3:PutBucketPolicy\", \"s3:PutObjectTagging\", \"s3:PutBucketPublicAccessBlock\", \"secretsmanager:CreateSecret\", \"secretsmanager:TagResource\" ], \"Resource\": \"*\" } ] }",
"rosa list account-roles",
"{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Sid\": \"ListAccountRoles\", \"Effect\": \"Allow\", \"Action\": [ \"iam:ListRoleTags\", \"iam:ListRoles\" ], \"Resource\": \"*\" } ] }",
"rosa list operator-roles",
"{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Sid\": \"ListOperatorRoles\", \"Effect\": \"Allow\", \"Action\": [ \"iam:ListRoleTags\", \"iam:ListAttachedRolePolicies\", \"iam:ListRoles\", \"iam:ListPolicyTags\" ], \"Resource\": \"*\" } ] }",
"rosa list oidc-providers",
"{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Sid\": \"ListOidcProviders\", \"Effect\": \"Allow\", \"Action\": [ \"iam:ListOpenIDConnectProviders\", \"iam:ListOpenIDConnectProviderTags\" ], \"Resource\": \"*\" } ] }",
"rosa verify quota",
"{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Sid\": \"VerifyQuota\", \"Effect\": \"Allow\", \"Action\": [ \"elasticloadbalancing:DescribeAccountLimits\", \"servicequotas:ListServiceQuotas\" ], \"Resource\": \"*\" } ] }",
"rosa delete oidc-config --mode auto",
"{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Sid\": \"DeleteOidcConfig\", \"Effect\": \"Allow\", \"Action\": [ \"iam:ListOpenIDConnectProviders\", \"iam:DeleteOpenIDConnectProvider\" ], \"Resource\": \"*\" } ] }",
"rosa delete oidc-config --mode auto",
"{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Sid\": \"VisualEditor0\", \"Effect\": \"Allow\", \"Action\": [ \"iam:ListOpenIDConnectProviders\", \"iam:DeleteOpenIDConnectProvider\", \"secretsmanager:DeleteSecret\", \"s3:ListBucket\", \"s3:DeleteObject\", \"s3:DeleteBucket\" ], \"Resource\": \"*\" } ] }",
"rosa create cluster --hosted-cp",
"{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Sid\": \"CreateCluster\", \"Effect\": \"Allow\", \"Action\": [ \"iam:GetRole\", \"iam:ListRoleTags\", \"iam:ListAttachedRolePolicies\", \"iam:ListRoles\", \"ec2:DescribeSubnets\", \"ec2:DescribeRouteTables\", \"ec2:DescribeAvailabilityZones\" ], \"Resource\": \"*\" } ] }",
"rosa create account-roles --mode auto --hosted-cp",
"{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Sid\": \"CreateAccountRoles\", \"Effect\": \"Allow\", \"Action\": [ \"iam:GetRole\", \"iam:UpdateAssumeRolePolicy\", \"iam:ListRoleTags\", \"iam:GetPolicy\", \"iam:TagRole\", \"iam:ListRoles\", \"iam:CreateRole\", \"iam:AttachRolePolicy\", \"iam:ListPolicyTags\" ], \"Resource\": \"*\" } ] }",
"rosa delete account-roles --mode auto",
"{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Sid\": \"DeleteAccountRoles\", \"Effect\": \"Allow\", \"Action\": [ \"iam:GetRole\", \"iam:ListInstanceProfilesForRole\", \"iam:DetachRolePolicy\", \"iam:ListAttachedRolePolicies\", \"iam:ListRoles\", \"iam:DeleteRole\", \"iam:ListRolePolicies\" ], \"Resource\": \"*\" } ] }",
"rosa delete operator-roles --mode auto",
"{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Sid\": \"DeleteOperatorRoles\", \"Effect\": \"Allow\", \"Action\": [ \"iam:GetRole\", \"iam:DetachRolePolicy\", \"iam:ListAttachedRolePolicies\", \"iam:ListRoles\", \"iam:DeleteRole\" ], \"Resource\": \"*\" } ] }",
"rosa create cluster",
"{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Sid\": \"CreateCluster\", \"Effect\": \"Allow\", \"Action\": [ \"iam:GetRole\", \"iam:ListRoleTags\", \"iam:ListRoles\" ], \"Resource\": \"*\" } ] }",
"rosa create account-roles --mode auto --classic",
"{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Sid\": \"CreateAccountOperatorRoles\", \"Effect\": \"Allow\", \"Action\": [ \"iam:GetRole\", \"iam:UpdateAssumeRolePolicy\", \"iam:ListRoleTags\", \"iam:GetPolicy\", \"iam:TagRole\", \"iam:ListRoles\", \"iam:CreateRole\", \"iam:AttachRolePolicy\", \"iam:TagPolicy\", \"iam:CreatePolicy\", \"iam:ListPolicyTags\" ], \"Resource\": \"*\" } ] }",
"rosa delete account-roles --mode auto",
"{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Sid\": \"VisualEditor0\", \"Effect\": \"Allow\", \"Action\": [ \"iam:GetRole\", \"iam:ListInstanceProfilesForRole\", \"iam:DetachRolePolicy\", \"iam:ListAttachedRolePolicies\", \"iam:ListRoles\", \"iam:DeleteRole\", \"iam:ListRolePolicies\", \"iam:GetPolicy\", \"iam:ListPolicyVersions\", \"iam:DeletePolicy\" ], \"Resource\": \"*\" } ] }",
"rosa delete operator-roles --mode auto",
"{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Sid\": \"VisualEditor0\", \"Effect\": \"Allow\", \"Action\": [ \"iam:GetRole\", \"iam:ListInstanceProfilesForRole\", \"iam:DetachRolePolicy\", \"iam:ListAttachedRolePolicies\", \"iam:ListRoles\", \"iam:DeleteRole\", \"iam:ListRolePolicies\", \"iam:GetPolicy\", \"iam:ListPolicyVersions\", \"iam:DeletePolicy\" ], \"Resource\": \"*\" } ] }"
]
| https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/cli_tools/rosa-cli |
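The worked example below is not part of the Red Hat reference above; it is a minimal sketch of how the listing, describing, and upgrade commands documented in this section might be combined in practice. The cluster name, target version, and schedule values are placeholders.
# Inspect the clusters in the account and look at one of them in detail.
rosa list clusters
rosa describe cluster --cluster=mycluster
# Check which upgrade versions are available, then schedule one for a quiet window (UTC).
rosa list upgrades --cluster=mycluster
rosa upgrade cluster --cluster=mycluster --version 4.14.10 --schedule-date 2024-07-01 --schedule-time 03:00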
Chapter 2. Power Management Auditing And Analysis | Chapter 2. Power Management Auditing And Analysis 2.1. Audit And Analysis Overview The detailed manual audit, analysis, and tuning of a single system is usually the exception because the time and cost spent to do so typically outweigh the benefits gained from these last pieces of system tuning. However, performing these tasks once for a large number of nearly identical systems where you can reuse the same settings for all systems can be very useful. For example, consider the deployment of thousands of desktop systems, or an HPC cluster where the machines are nearly identical. Another reason to do auditing and analysis is to provide a basis for comparison against which you can identify regressions or changes in system behavior in the future. The results of this analysis can be very helpful in cases where hardware, BIOS, or software updates happen regularly and you want to avoid any surprises with regard to power consumption. Generally, a thorough audit and analysis gives you a much better idea of what is really happening on a particular system. Auditing and analyzing a system with regard to power consumption is relatively hard, even with the most modern systems available. Most systems do not provide the necessary means to measure power use via software. Exceptions exist though: the ILO management console of Hewlett Packard server systems has a power management module that you can access through the web. IBM provides a similar solution in their BladeCenter power management module. On some Dell systems, the IT Assistant offers power monitoring capabilities as well. Other vendors are likely to offer similar capabilities for their server platforms, but as can be seen, there is no single solution available that is supported by all vendors. Direct measurements of power consumption are often only necessary to maximize savings as far as possible. Fortunately, other means are available to measure if changes are in effect or how the system is behaving. This chapter describes the necessary tools. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/power_management_guide/Audit_and_Analysis
Chapter 3. Using AMQ Management Console | Chapter 3. Using AMQ Management Console AMQ Management Console is a web console included in the AMQ Broker installation that enables you to use a web browser to manage AMQ Broker. AMQ Management Console is based on hawtio . 3.1. Overview AMQ Broker is a full-featured, message-oriented middleware broker. It offers specialized queueing behaviors, message persistence, and manageability. It supports multiple protocols and client languages, freeing you to use many of your application assets. AMQ Broker's key features allow you to: monitor your AMQ brokers and clients view the topology view network health at a glance manage AMQ brokers using: AMQ Management Console Command-line Interface (CLI) Management API The supported web browsers for AMQ Management Console are Firefox, Chrome, and Internet Explorer. For more information on supported browser versions, see AMQ 7 Supported Configurations . 3.2. Configuring local and remote access to AMQ Management Console The procedure in this section shows how to configure local and remote access to AMQ Management Console. Remote access to the console can take one of two forms: Within a console session on a local broker, you use the Connect tab to connect to another, remote broker From a remote host, you connect to the console for the local broker, using an externally-reachable IP address for the local broker Prerequisites You must upgrade to at least AMQ Broker 7.1.0. As part of this upgrade, an access-management configuration file named jolokia-access.xml is added to the broker instance. For more information about upgrading, see Upgrading a Broker instance from 7.0.x to 7.1.0 . Procedure Open the <broker-instance-dir> /etc/bootstrap.xml file. Within the web element, observe that the web port is bound only to localhost by default. <web bind="http://localhost:8161" path="web"> <app url="redhat-branding" war="redhat-branding.war"/> <app url="artemis-plugin" war="artemis-plugin.war"/> <app url="dispatch-hawtio-console" war="dispatch-hawtio-console.war"/> <app url="console" war="console.war"/> </web> To enable connection to the console for the local broker from a remote host, change the web port binding to a network-reachable interface. For example: <web bind="http://0.0.0.0:8161" path="web"> In the preceding example, by specifying 0.0.0.0 , you bind the web port to all interfaces on the local broker. Save the bootstrap.xml file. Open the <broker-instance-dir> /etc/jolokia-access.xml file. Within the <cors> (that is, Cross-Origin Resource Sharing ) element, add an allow-origin entry for each HTTP origin request header that you want to allow to access the console. For example: <cors> <allow-origin>*://localhost*</allow-origin> <allow-origin>*://192.168.0.49*</allow-origin> <allow-origin>*://192.168.0.51*</allow-origin> <!-- Check for the proper origin on the server side, too --> <strict-checking/> </cors> In the preceding configuration, you specify that the following connections are allowed: Connection from the local host (that is, the host machine for your local broker instance) to the console. The first asterisk ( * ) wildcard character allows either the http or https scheme to be specified in the connection request, based on whether you have configured the console for secure connections. The second asterisk wildcard character allows any port on the host machine to be used for the connection. Connection from a remote host to the console for the local broker, using the externally-reachable IP address of the local broker. 
In this case, the externally-reachable IP address of the local broker is 192.168.0.49 . Connection from within a console session opened on another, remote broker to the local broker. In this case, the IP address of the remote broker is 192.168.0.51 . Save the jolokia-access.xml file. Open the <broker-instance-dir> /etc/artemis.profile file. To enable the Connect tab in the console, set the value of the Dhawtio.disableProxy argument to false . -Dhawtio.disableProxy=false Important It is recommended that you enable remote connections from the console (that is, set the value of the Dhawtio.disableProxy argument to false ) only if the console is exposed to a secure network. Add a new argument, Dhawtio.proxyWhitelist , to the JAVA_ARGS list of Java system arguments. As a comma-separated list, specify IP addresses for any remote brokers that you want to connect to from the local broker (that is, by using the Connect tab within a console session running on the local broker). For example: -Dhawtio.proxyWhitelist=192.168.0.51 Based on the preceding configuration, you can use the Connect tab within a console session on the local broker to connect to another, remote broker with an IP address of 192.168.0.51 . Save the aretmis.profile file. Additional resources To learn how to access the console, see Section 3.3, "Accessing AMQ Management Console" . For more information about: Cross-Origin Resource Sharing, see W3C Recommendations . Jolokia security, see Jolokia Protocols . Securing connections to the console, see Section 3.4.2, "Securing network access to AMQ Management Console" . 3.3. Accessing AMQ Management Console The procedure in this section shows how to: Open AMQ Management Console from the local broker Connect to other brokers from within a console session on the local broker Open a console instance for the local broker from a remote host using the externally-reachable IP address of the local broker Prerequisites You must have already configured local and remote access to the console. For more information, see Section 3.2, "Configuring local and remote access to AMQ Management Console" . Procedure In your web browser, navigate to the console address for the local broker. The console address is http:// <host:port> /console/login . If you are using the default address, navigate to http://localhost:8161/console/login . Otherwise, use the values of host and port that are defined for the bind attribute of the web element in the <broker-instance-dir> /etc/bootstrap.xml configuration file. Figure 3.1. Console login page Log in to AMQ Management Console using the default user name and password that you created when you created the broker. To connect to another, remote broker from the console session of the local broker: In the left menu, click the Connect tab. In the main pane, on the Remote tab, click the Add connection button. In the Add Connection dialog box, specify the following details: Name Name for the remote connection, for example, my_other_broker . Scheme Protocol to use for the remote connection. Select http for a non-secured connection, or https for a secured connection. Host IP address of a remote broker. You must have already configured console access for this remote broker. Port Port on the local broker to use for the remote connection. Specify the port value that is defined for the bind attribute of the web element in the <broker-instance-dir> /etc/bootstrap.xml configuration file. The default value is 8161 . Path Path to use for console access. Specify console/jolokia . 
To test the connection, click the Test Connection button. If the connection test is successful, click the Add button. If the connection test fails, review and modify the connection details as needed. Test the connection again. On the Remote page, for a connection that you have added, click the Connect button. A new web browser tab opens for the console instance on the remote broker. In the Log In dialog box, enter the user name and password for the remote broker. Click Log In . The console instance for the remote broker opens. To connect to the console for the local broker from a remote host, specify the Jolokia endpoint for the local broker in a web browser. This endpoint includes the externally-reachable IP address that you specified for the local broker when configuring remote console access. For example: 3.4. Configuring AMQ Management Console Configure user access and request access to resources on the broker. 3.4.1. Setting up user access to AMQ Management Console You can access AMQ Management Console using the broker login credentials. The following table provides information about different methods to add additional broker users to access AMQ Management Console: Authentication Method Description Guest authentication Enables anonymous access. In this configuration, any user who connects without credentials or with the wrong credentials will be authenticated automatically and assigned a specific user and role. For more information, see Configuring guest access in Configuring AMQ Broker . Basic user and password authentication For each user, you must define a username and password and assign a security role. Users can only log into AMQ Management Console using these credentials. For more information, see Configuring basic user and password authentication in Configuring AMQ Broker . LDAP authentication Users are authenticated and authorized by checking the credentials against user data stored in a central X.500 directory server. For more information, see Configuring LDAP to authenticate clients in Configuring AMQ Broker . 3.4.2. Securing network access to AMQ Management Console To secure AMQ Management Console when the console is being accessed over a WAN or the internet, use SSL to specify that network access uses https instead of http . Prerequisites The following should be located in the <broker-instance-dir> /etc/ directory: Java key store Java trust store (needed only if you require client authentication) Procedure Open the <broker-instance-dir> /etc/bootstrap.xml file. In the <web> element, add the following attributes: <web bind="https://0.0.0.0:8161" path="web"> ... keyStorePath="<path_to_keystore>" keyStorePassword="<password>" clientAuth="<true/false>" trustStorePath="<path_to_truststore>" trustStorePassword="<password>"> ... </web> bind For secure connections to the console, change the URI scheme to https . keyStorePath Path of the keystore file. For example: keyStorePath="< broker-instance-dir> /etc/keystore.jks" keyStorePassword Key store password. This password can be encrypted. clientAuth Specifies whether client authentication is required. The default value is false . trustStorePath Path of the trust store file. You need to define this attribute only if clientAuth is set to true . trustStorePassword Trust store password. This password can be encrypted. Additional resources For more information about encrypting passwords in broker configuration files, including bootstrap.xml , see Encrypting Passwords in Configuration Files . 3.5. 
Managing brokers using AMQ Management Console You can use AMQ Management Console to view information about a running broker and manage the following resources: Incoming network connections (acceptors) Addresses Queues 3.5.1. Viewing details about the broker To see how the broker is configured, in the left menu, click Artemis . In the folder tree, the local broker is selected by default. In the main pane, the following tabs are available: Status Displays information about the current status of the broker, such as uptime and cluster information. Also displays the amount of address memory that the broker is currently using. The graph shows this value as a proportion of the global-max-size configuration parameter. Figure 3.2. Status tab Connections Displays information about broker connections, including client, cluster, and bridge connections. Sessions Displays information about all sessions currently open on the broker. Consumers Displays information about all consumers currently open on the broker. Producers Displays information about producers currently open on the broker. Addresses Displays information about addresses on the broker. This includes internal addresses, such as store-and-forward addresses. Queues Displays information about queues on the broker. This includes internal queues, such as store-and-forward queues. Attributes Displays detailed information about attributes configured on the broker. Operations Displays JMX operations that you can execute on the broker from the console. When you click an operation, a dialog box opens that enables you to specify parameter values for the operation. Chart Displays real-time data for attributes configured on the broker. You can edit the chart to specify the attributes that are included in the chart. Broker diagram Displays a diagram of the cluster topology. This includes all brokers in the cluster and any addresses and queues on the local broker. 3.5.2. Viewing the broker diagram You can view a diagram of all AMQ Broker resources in your topology, including brokers (live and backup brokers), producers and consumers, addresses, and queues. Procedure In the left menu, click Artemis . In the main pane, click the Broker diagram tab. The console displays a diagram of the cluster topology. This includes all brokers in the cluster and any addresses and queues on the local broker, as shown in the figure. Figure 3.3. Broker diagram tab To change what items are displayed on the diagram, use the check boxes at the top of the diagram. Click Refresh . To show attributes for the local broker or an address or queue that is connected to it, click that node in the diagram. For example, the following figure shows a diagram that also includes attributes for the local broker. Figure 3.4. Broker diagram tab, including attributes 3.5.3. Viewing acceptors You can view details about the acceptors configured for the broker. Procedure In the left menu, click Artemis . In the folder tree, click acceptors . To view details about how an acceptor is configured, click the acceptor. The console shows the corresponding attributes on the Attributes tab, as shown in the figure. Figure 3.5. AMQP acceptor attributes To see complete details for an attribute, click the attribute. An additional window opens to show the details. 3.5.4. Managing addresses and queues An address represents a messaging endpoint. Within the configuration, a typical address is given a unique name. A queue is associated with an address. There can be multiple queues per address. 
Once an incoming message is matched to an address, the message is sent on to one or more of its queues, depending on the routing type configured. Queues can be configured to be automatically created and deleted. 3.5.4.1. Creating addresses A typical address is given a unique name, zero or more queues, and a routing type. A routing type determines how messages are sent to the queues associated with an address. Addresses can be configured with two different routing types. If you want your messages routed to... Use this routing type... A single queue within the matching address, in a point-to-point manner. Anycast Every queue within the matching address, in a publish-subscribe manner. Multicast You can create and configure addresses and queues, and then delete them when they are no longer in use. Procedure In the left menu, click Artemis . In the folder tree, click addresses . In the main pane, click the Create address tab. A page appears for you to create an address, as shown in the figure. Figure 3.6. Create Address page Complete the following fields: Address name The routing name of the address. Routing type Select one of the following options: Multicast : Messages sent to the address will be distributed to all subscribers in a publish-subscribe manner. Anycast : Messages sent to this address will be distributed to only one subscriber in a point-to-point manner. Both : Enables you to define more than one routing type per address. This typically results in an anti-pattern and is not recommended. Note If an address does use both routing types, and the client does not show a preference for either one, the broker defaults to the anycast routing type. The one exception is when the client uses the MQTT protocol. In that case, the default routing type is multicast . Click Create Address . 3.5.4.2. Sending messages to an address The following procedure shows how to use the console to send a message to an address. Procedure In the left menu, click Artemis . In the folder tree, select an address. On the navigation bar in the main pane, click More Send message . A page appears for you to create a message, as shown in the figure. Figure 3.7. Send Message page If necessary, click the Add Header button to add message header information. Enter the message body. In the Format drop-down menu, select an option for the format of the message body, and then click Format . The message body is formatted in a human-readable style for the format you selected. Click Send message . The message is sent. To send additional messages, change any of the information you entered, and then click Send message . 3.5.4.3. Creating queues Queues provide a channel between a producer and a consumer. Prerequisites The address to which you want to bind the queue must exist. To learn how to use the console to create an address, see Section 3.5.4.1, "Creating addresses" . Procedure In the left menu, click Artemis . In the folder tree, select the address to which you want to bind the queue. In the main pane, click the Create queue tab. A page appears for you to create a queue, as shown in the figure. Figure 3.8. Create Queue page Complete the following fields: Queue name A unique name for the queue. Routing type Select one of the following options: Multicast : Messages sent to the parent address will be distributed to all queues bound to the address. Anycast : Only one queue bound to the parent address will receive a copy of the message. Messages will be distributed evenly among all of the queues bound to the address. 
Durable If you select this option, the queue and its messages will be persistent. Filter An optional filter expression for the queue. If you specify a filter, only messages that match the filter expression are routed to the queue. Max Consumers The maximum number of consumers that can access the queue at a given time. Purge when no consumers If selected, the queue will be purged when no consumers are connected. Click Create Queue . 3.5.4.4. Checking the status of a queue Charts provide a real-time view of the status of a queue on a broker. Procedure In the left menu, click Artemis . In the folder tree, navigate to a queue. In the main pane, click the Chart tab. The console displays a chart that shows real-time data for all of the queue attributes. Figure 3.9. Chart tab for a queue Note To view a chart for multiple queues on an address, select the anycast or multicast folder that contains the queues. If necessary, select different criteria for the chart: In the main pane, click Edit . On the Attributes list, select one or more attributes that you want to include in the chart. To select multiple attributes, press and hold the Ctrl key and select each attribute. Click the View Chart button. The chart is updated based on the attributes that you selected. 3.5.4.5. Browsing queues Browsing a queue displays all of the messages in the queue. You can also filter and sort the list to find specific messages. Procedure In the left menu, click Artemis . In the folder tree, navigate to a queue. Queues are located within the addresses to which they are bound. On the navigation bar in the main pane, click More Browse queue . The messages in the queue are displayed. By default, the first 200 messages are displayed. Figure 3.10. Browse Queue page To browse for a specific message or group of messages, do one of the following: To... Do this... Filter the list of messages In the Filter... text field, enter filter criteria. Click the search (that is, magnifying glass) icon. Sort the list of messages In the list of messages, click a column header. To sort the messages in descending order, click the header a second time. To view the content of a message, click the Show button. You can view the message header, properties, and body. 3.5.4.6. Sending messages to a queue After creating a queue, you can send a message to it. The following procedure outlines the steps required to send a message to an existing queue. Procedure In the left menu, click Artemis . In the folder tree, navigate to a queue. In the main pane, click the Send message tab. A page appears for you to compose the message. Figure 3.11. Send Message page for a queue If necessary, click the Add Header button to add message header information. Enter the message body. In the Format drop-down menu, select an option for the format of the message body, and then click Format . The message body is formatted in a human-readable style for the format you selected. Click Send message . The message is sent. To send additional messages, change any of the information you entered, and click Send message . 3.5.4.7. Resending messages to a queue You can resend previously sent messages. Procedure Browse for the message you want to resend . Click the check box next to the message that you want to resend. Click the Resend button. The message is displayed. Update the message header and body as needed, and then click Send message . 3.5.4.8. Moving messages to a different queue You can move one or more messages in a queue to a different queue. Procedure Browse for the messages you want to move . Click the check box next to each message that you want to move.
In the navigation bar, click Move Messages . A confirmation dialog box appears. From the drop-down menu, select the name of the queue to which you want to move the messages. Click Move . 3.5.4.9. Deleting messages or queues You can delete a queue or purge all of the messages from a queue. Procedure Browse for the queue you want to delete or purge . Do one of the following: To... Do this... Delete a message from the queue Click the check box to each message that you want to delete. Click the Delete button. Purge all messages from the queue On the navigation bar in the main pane, click Delete queue . Click the Purge Queue button. Delete the queue On the navigation bar in the main pane, click Delete queue . Click the Delete Queue button. | [
"<web bind=\"http://localhost:8161\" path=\"web\"> <app url=\"redhat-branding\" war=\"redhat-branding.war\"/> <app url=\"artemis-plugin\" war=\"artemis-plugin.war\"/> <app url=\"dispatch-hawtio-console\" war=\"dispatch-hawtio-console.war\"/> <app url=\"console\" war=\"console.war\"/> </web>",
"<web bind=\"http://0.0.0.0:8161\" path=\"web\">",
"<cors> <allow-origin>*://localhost*</allow-origin> <allow-origin>*://192.168.0.49*</allow-origin> <allow-origin>*://192.168.0.51*</allow-origin> <!-- Check for the proper origin on the server side, too --> <strict-checking/> </cors>",
"-Dhawtio.disableProxy=false",
"-Dhawtio.proxyWhitelist=192.168.0.51",
"http://192.168.0.49/console/jolokia",
"<web bind=\"https://0.0.0.0:8161\" path=\"web\"> keyStorePath=\"<path_to_keystore>\" keyStorePassword=\"<password>\" clientAuth=\"<true/false>\" trustStorePath=\"<path_to_truststore>\" trustStorePassword=\"<password>\"> </web>",
"keyStorePath=\"< broker-instance-dir> /etc/keystore.jks\""
]
| https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/managing_amq_broker/assembly-using-AMQ-console-managing |
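Section 3.4.2 assumes that a Java keystore (and, when client authentication is enabled, a truststore) already exists in <broker-instance-dir>/etc/. One way to produce them for testing is with keytool; in the sketch below the alias, passwords, distinguished name, and the client.cer file are placeholders, and a self-signed certificate is only suitable for non-production use.
# Self-signed keystore for the console (test use only; all values are placeholders).
keytool -genkeypair -alias broker-console -keyalg RSA -keysize 2048 -validity 365 \
  -dname "CN=broker.example.com" \
  -keystore <broker-instance-dir>/etc/keystore.jks -storepass changeit
# Only needed when clientAuth="true": import a client certificate (exported from the
# client's own keystore) into the broker's truststore.
keytool -importcert -alias client -file client.cer \
  -keystore <broker-instance-dir>/etc/truststore.jks -storepass changeit -noprompt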
12.2. LVM Partition Management | 12.2. LVM Partition Management The following commands can be found by issuing lvm help at a command prompt. Table 12.2. LVM commands Command Description dumpconfig Dump the active configuration formats List the available metadata formats help Display the help commands lvchange Change the attributes of logical volume(s) lvcreate Create a logical volume lvdisplay Display information about a logical volume lvextend Add space to a logical volume lvmchange Due to use of the device mapper, this command has been deprecated lvmdiskscan List devices that may be used as physical volumes lvmsadc Collect activity data lvmsar Create activity report lvreduce Reduce the size of a logical volume lvremove Remove logical volume(s) from the system lvrename Rename a logical volume lvresize Resize a logical volume lvs Display information about logical volumes lvscan List all logical volumes in all volume groups pvchange Change attributes of physical volume(s) pvcreate Initialize physical volume(s) for use by LVM pvdata Display the on-disk metadata for physical volume(s) pvdisplay Display various attributes of physical volume(s) pvmove Move extents from one physical volume to another pvremove Remove LVM label(s) from physical volume(s) pvresize Resize a physical volume in use by a volume group pvs Display information about physical volumes pvscan List all physical volumes segtypes List available segment types vgcfgbackup Backup volume group configuration vgcfgrestore Restore volume group configuration vgchange Change volume group attributes vgck Check the consistency of a volume group vgconvert Change volume group metadata format vgcreate Create a volume group vgdisplay Display volume group information vgexport Unregister a volume group from the system vgextend Add physical volumes to a volume group vgimport Register exported volume group with system vgmerge Merge volume groups vgmknodes Create the special files for volume group devices in /dev/ vgreduce Remove a physical volume from a volume group vgremove Remove a volume group vgrename Rename a volume group vgs Display information about volume groups vgscan Search for all volume groups vgsplit Move physical volumes into a new volume group version Display software and driver version information | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/Managing_Disk_Storage-LVM_Partition_Management |
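The table lists each command in isolation; in practice a handful of them are chained together. The following sketch shows a typical create-and-grow sequence; the device name and sizes are placeholders.
pvcreate /dev/sdb1                      # initialize the partition as a physical volume
vgcreate vg_data /dev/sdb1              # create a volume group on it
lvcreate -n lv_data -L 10G vg_data      # carve out a 10 GB logical volume
lvextend -L +5G /dev/vg_data/lv_data    # later, grow the logical volume by 5 GB (then grow the file system)
vgs && lvs                              # report volume group and logical volume status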
Chapter 7. Known issues | Chapter 7. Known issues This section describes known issues in AMQ Broker 7.8. ENTMQBR-17 - AMQ222117: Unable to start cluster connection A broker cluster may fail to initialize properly in environments that support IPv6. The failure is due to a SocketException that is indicated by the log message Can't assign requested address . To work around this issue, set the java.net.preferIPv4Stack system property to true . ENTMQBR-463 - Attributes in clustering settings have order restrictions. Would be nice to either have better error message or simply ignore the order Currently the sequence of the elements in the cluster connection configuration has to be in a specific order. The workaround is to adhere to the order in the configuration schema. ENTMQBR-520 - Receiving from address named the same as a queue bound to another address should not be allowed A queue with the same name as an address must only be assigned to address. Creating a queue with the same name as an existing address, but bound to an address with a different name, is an invalid configuration. Doing so can result in incorrect messages being routed to the queue. ENTMQBR-522 - Broker running on windows write problems with remove temp files when shutting down On Windows, the broker does not successfully clean up temporary files when it shuts down. This issue causes the shutdown process to be slow. In addition, temporary files not deleted by the broker accumulate over time. ENTMQBR-569 - Conversion of IDs from OpenWire to AMQP results in sending IDs as binary When communicating cross-protocol from an A-MQ 6 OpenWire client to an AMQP client, additional information is encoded in the application message properties. This is benign information used internally by the broker and can be ignored. ENTMQBR-599 - Define truststore and keystore by Artemis cli Creating a broker instance by using the --ssl-key , --ssl-key-password , --ssl-trust , and --ssl-trust-password parameters does not work. To work around this issue, set the corresponding properties manually in bootstrap.xml after creating the broker. ENTMQBR-636 - Journal breaks, causing JavaNullPointerException , under perf load (mpt) To prevent IO-related issues from occurring when the broker is managing heavy loads, verify that the JVM is allocated with enough memory and heap space. See the section titled "Tuning the VM" in the Performance Tuning chapter of the ActiveMQ Artemis documentation. ENTMQBR-648 - JMS Openwire client is unable to send messages to queue with defined purgeOnNoConsumer or queue filter Using an A-MQ 6 JMS client to send messages to an address that has a queue with purgeOnNoConsumer set to true fails if the queue has no consumers. It is recommended that you do not set the purgeOnNoConsumer option when using A-MQ 6 JMS clients. ENTMQBR-652 - List of known amq-jon-plugin bugs This version of amq-jon-plugin has known issues with the MBeans for broker and queue. 
Issues with the broker MBean: Closing a connection throws java.net.SocketTimeoutException exception listSessions() throws java.lang.ClassCastException Adding address settings throws java.lang.IllegalArgumentException getConnectorServices() operation cannot be found listConsumersAsJSON() operation cannot be found getDivertNames() operation cannot be found Listing network topology throws IllegalArgumentException Remove address settings has wrong parameter name Issues with the queue MBean: expireMessage() throws argument type mismatch exception listDeliveringMessages() throws IllegalArgumentException listMessages() throws java.lang.Exception moveMessages() throws IllegalArgumentException with error message argument type mismatch removeMessage() throws IllegalArgumentException with error message argument type mismatch removeMessages() throws exception with error Can't find operation removeMessage with 2 arguments retryMessage() throws argument type mismatch IllegalArgumentException ENTMQBR-655 - [AMQP] Unable to send message when populate-validated-user is enabled The configuration option populate-validated-user is not supported for messages produced using the AMQP protocol. ENTMQBR-738 - Unable to build AMQ 7 examples offline with provided offline repo You cannot build the examples included with AMQ Broker in an offline environment. This issue is caused by missing dependencies in the provided offline Maven repository. ENTMQBR-897 - Openwire client/protocol issues with special characters in destination name Currently AMQ OpenWire JMS clients cannot access queues and addresses that include the following characters in their name: comma (','), hash ('#'), greater than ('>'), and whitespace. ENTMQBR-944 - [A-MQ7, Hawtio, RBAC] User gets no feedback if operation access was denied by RBAC The console can indicate that an operation attempted by an unauthorized user was successful when it was not. ENTMQBR-1498 - Diagram in management console for HA (replication, sharedstore) does not reflect real topology If you configure a broker cluster with some extra, passive slaves, the cluster diagram in the web console does not show these passive slaves. ENTMQBR-1848 - "javax.jms.JMSException: Incorrect Routing Type for queue, expecting: ANYCAST" occurs when qpid-jms client consumes a message from a multicast queue as javax.jms.Queue object with FQQN Currently, sending a message by using the Qpid JMS client to a multicast queue by using FQQN (fully qualified queue name) to an address that has multiple queues configured generates an error message on the client, and the message cannot be sent. To work around this issue, modify the broker configuration to resolve the error and unblock the client. ENTMQBR-1875 - [AMQ 7, ha, replicated store] backup broker appear not to go "live" or shutdown after - ActiveMQIllegalStateException errorType=ILLEGAL_STATE message=AMQ119026: Backup Server was not yet in sync with live Removing the paging disk of a master broker while a backup broker is trying to sync with the master broker causes the master to fail. In addition, the backup broker cannot become live because it continues trying to sync with the master. ENTMQBR-2068 - some messages received but not delivered during HA fail-over, fail-back scenario Currently, if a broker fails over to its slave while an OpenWire client is sending messages, messages being delivered to the broker when failover occurs could be lost. To work around this issue, ensure that the broker persists the messages before acknowledging them. 
ENTMQBR-2452 - Upgraded broker AMQ 7.3.0 from AMQ 7.2.4 on Windows cannot log If you intend to upgrade a broker instance from 7.2.4 to 7.3.0 on Windows, logging will not work unless you specify the correct log manager version during your upgrade process. For more information, see Upgrading from 7.2.x to 7.3.0 on Windows . ENTMQBR-2470 - [AMQ7, openwire,redelivery] redelivery counter for message increasing, if consumer is closed without consuming any messages If a broker sends a message to an Openwire consumer, but the consumer is closed before consuming the message, the broker wrongly increments the redelivery count for the pending message. If the number of occurrences of this behavior exceeds the value of the max-delivery-attempts configuration parameter, the broker sends the message to the dead letter queue (DLQ) or drops the message, based on your configuration. This issue does not affect other protocols, such as the Core protocol. ENTMQBR-2593 - broker does not set message ID header on cross protocol consumption A Qpid JMS client successfully retrieves a message ID only if the message was produced by another Qpid JMS client. If the message was produced by a Core JMS or OpenWire client, the Qpid JMS client cannot read the message ID. ENTMQBR-2678 - After isolated master is live again it is unable to connect to the cluster In a cluster of three or more live-backup groups that is using the replication high availability (HA) policy, the live broker shuts down when its replication connection fails. However, when the replication connection is restored and the original live broker is restarted, the broker is sometimes unable to rejoin the broker cluster. To enable the original live broker to rejoin the cluster, first stop the new live (original backup) broker, restart the original live broker, and then restart the original backup broker. ENTMQBR-2928 - Broker Operator unable to recover from CR changes causing erroneous state If the AMQ Broker Operator encounters an error when applying a Custom Resource (CR) update, the Operator does not recover. Specifically, the Operator stops responding as expected to further updates to your CRs. For example, say that a misspelling in the value of the image attribute in your main broker CR causes broker Pods to fail to deploy, with an associated error message of ImagePullBackOff . If you then fix the misspelling and apply the CR changes, the Operator does not deploy the specified number of broker Pods. In addition, the Operator does not respond to any further CR changes. To work around this issue, you must delete the CRs that you originally deployed, before redeploying them. To delete an existing CR, use a command such as oc delete -f <CR name> . ENTMQBR-2942 - Pod #0 tries to contact non-existent Pods If you change the size attribute of your Custom Resource (CR) instance to scale down a broker deployment, the first broker Pod in the cluster can make repeated attempts to connect to the drainer Pods that started up to migrate messages from the brokers that shut down, before they shut down themselves. To work around this issue, follow these steps: 1) Scale your deployment to a single broker Pod. 2) Wait for all drainer Pods to start, complete message migration, and then shut down. 3) If the single remaining broker Pod has log entries for an "unknown host exception", scale the deployment down to zero broker Pods, and then back to one. 
4) When you have verified that the single remaining broker Pod is not recording exception-based log entries, scale your deployment back to its original size. ENTMQBR-3131 - Topology Fails to Update correctly for Backup Brokers when Master is Killed When a live broker fails in a cluster with more than four live-backup pairs, the live brokers, including the newly-elected live broker, all correctly report the updated topology. However, the remaining backup brokers might show the wrong topology in the following ways: If a backup broker has failed over in place of the failed live broker, the remaining backup brokers show this backup broker twice in the topology. If a backup broker has not yet failed over in place of the failed live broker, the remaining backup brokers still show the failed live broker in the topology. To work around this issue, ensure that the first connector-ref element in the cluster-connection > static-connectors configuration of each backup broker specifies the expected live broker. ENTMQBR-3604 - Enabling Pooling for the LDAP Login Module Causes Shutdown to Hang If you enable connection pooling for an LDAP provider (that is, by setting connectionPool to true in the LDAPLoginModule section of the login.config configuration file), this can cause connections to the LDAP provider to remain open indefinitely, even when you stop the broker clients. As a result, if you try to shut down the broker in the normal way, the broker does not shut down. Instead, you need to use a Linux command such as SIGKILL to terminate the broker process. This situation occurs even if you specify a pool timeout in the JVM arguments for the broker (for example, -Dcom.sun.jndi.ldap.connect.pool.timeout=30000 ) and there are no active clients when you try to shut down the broker. To work around this issue, set a value for the connectionTimeout property in the LDAPLoginModule section of the login.config configuration file. When connection pooling has been requested for a connection, the connectionTimeout property specifies the maximum time that the broker waits for a connection when the maximum pool size has already been reached and all connections in the pool are in use. For more information, see Using LDAP for Authentication in Configuring AMQ Broker . ENTMQBR-3653 - NPE thrown if metrics plugin is not configured and the metrics web context is invoked If the /metrics web context on a broker is invoked, but the metrics plugin has not yet been configured, the broker displays a null pointer exception. For more information about configuring the Prometheus metrics plugin for AMQ Broker, see Enabling the Prometheus plugin for AMQ Broker (on-premise broker deployments) or Enabling the Prometheus plugin for a running broker deployment (OpenShift broker deployments). ENTMQBR-3724 - OperatorHub displays inappropriate variant of AMQ Broker Operator If you use OperatorHub to deploy the AMQ Broker Operator on OpenShift Container Platform 4.5 or earlier, OperatorHub displays a variant of the Operator that is not appropriate for your host platform. This makes it possible to select the incorrect Operator variant. In particular, regardless of your host platform, OperatorHub displays both the Red Hat Integration - AMQ Broker Operator (the Operator for OpenShift Container Platform) and the AMQ Broker Operator (the Operator for OpenShift Container Platform on IBM Z). To work around this issue, select the Operator variant that is appropriate to your platform, as described above. 
Alternatively, install the Operator using the OpenShift command-line interface (CLI). In OpenShift Container Platform 4.6, this issue is resolved. OperatorHub displays only the Operator variant that corresponds to your host platform. ENTMQBR-3846 - MQTT client does not reconnect on broker restart When you restart a broker, or a broker fails over, the active broker does not restore connections for previously-connected MQTT clients. To work around this issue, to reconnect an MQTT client, you need to manually call the subscribe() method on the client. ENTMQBR-4023 - AMQ Broker Operator: Pod Status pod names do not reflect the reality For an Operator-based broker deployment in a given OpenShift project, if you use the oc get pod command to list the broker Pods, the ordinal values for the Pods start at 0 , for example, amq-operator-test-broker-ss-0 . However, if you use the oc describe command to get the status of broker Pods created from the activemqartmises Custom Resource (that is, oc describe activemqartemises ), the Pod ordinal values incorrectly start at 1 , for example, amq-operator-test-broker-ss-1 . There is no way to work around this issue. ENTMQBR-4127 - AMQ Broker Operator: Route name generated by Operator might be too long for OpenShift For each broker Pod in an Operator-based deployment, the default name of the Route that the Operator creates for access to the AMQ Broker management console includes the name of the Custom Resource (CR) instance, the name of the OpenShift project, and the name of the OpenShift cluster. For example, my-broker-deployment-wconsj-0-svc-rte-my-openshift-project.my-openshift-domain . If some of these names are long, the default Route name might exceed the limit of 63 characters that OpenShift enforces. In this case, in the OpenShift Container Platform web console, the Route shows a status of Rejected . To work around this issue, use the OpenShift Container Platform web console to manually edit the name of the Route. In the console, click the Route. On the Actions drop-down menu in the top-right corner, select Edit Route . In the YAML editor, find the spec.host property and edit the value. ENTMQBR-4140 - AMQ Broker Operator: Installation becomes unusable if storage.size is improperly specified If you configure the storage.size property of a Custom Resource (CR) instance to specify the size of the Persistent Volume Claim (PVC) required by brokers in a deployment for persistent storage, the Operator installation becomes unusable if you do not specify this value properly. For example, suppose that you set the value of storage.size to 1 (that is, without specifying a unit). In this case, the Operator cannot use the CR to create a broker deployment. In addition, even if you remove the CR and deploy a new version with storage.size specified correctly, the Operator still cannot use this CR to create a deployment as expected. To work around this issue, first stop the Operator. In the OpenShift Container Platform web console, click Deployments . For the Pod that corresponds to the AMQ Broker Operator, click the More options menu (three vertical dots). Click Edit Pod Count and set the value to 0 . When the Operator Pod has stopped, create a new version of the CR with storage.size correctly specified. Then, to restart the Operator, click Edit Pod Count again and set the value back to 1 . 
ENTMQBR-4141 - AMQ Broker Operator: Increasing Persistent Volume size requires manual involvement even after recreating Stateful Set If you try to increase the size of the Persistent Volume Claim (PVC) required by brokers in a deployment for persistent storage, the change does not take effect without further manual steps. For example, suppose that you configure the storage.size property of a Custom Resource (CR) instance to specify an initial size for the PVC. If you modify the CR to specify a different value of storage.size , the existing brokers continue to use the original PVC size. This is the case even if you scale the deployment down to zero brokers and then back up to the original number. However, if you scale the size of the deployment up to add additional brokers, the new brokers use the new PVC size. To work around this issue, and ensure that all brokers in the deployment use the same PVC size, use the OpenShift Container Platform web console to expand the PVC size used by the deployment. In the console, click Storage Persistent Volume Claims . Click your deployment. On the Actions drop-down menu in the top-right corner, select Expand PVC and enter a new value. | null | https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/release_notes_for_red_hat_amq_broker_7.8/known |
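For ENTMQBR-17, the workaround is to set the java.net.preferIPv4Stack system property to true. On a standalone broker instance, one possible way to apply it is through the JVM arguments in <broker-instance-dir>/etc/artemis.profile; the variable name and service script shown below reflect a typical instance layout and should be checked against your own installation before editing.
# <broker-instance-dir>/etc/artemis.profile (sketch; confirm the layout of your instance)
JAVA_ARGS="$JAVA_ARGS -Djava.net.preferIPv4Stack=true"
# Restart the broker so the new JVM argument takes effect.
<broker-instance-dir>/bin/artemis-service restart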
Preface | Preface Thank you for your interest in Red Hat Ansible Automation Platform. Ansible Automation Platform is a commercial offering that helps teams manage complex multi-tier deployments by adding control, knowledge, and delegation to Ansible-powered environments. This guide helps you understand the requirements and processes behind setting up an automation mesh on an operator-based installation of Red Hat Ansible Automation Platform. This document has been updated to include information for the latest release of Ansible Automation Platform. | null | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/red_hat_ansible_automation_platform_automation_mesh_for_operator-based_installations/pr01
Chapter 19. Renaming IdM client systems | Chapter 19. Renaming IdM client systems You can change the host name of an Identity Management (IdM) client system. Warning Renaming a client is a manual procedure. Do not perform it unless changing the host name is absolutely required. 19.1. Preparing an IdM client for its renaming Before uninstalling the current client, make note of certain settings for the client. You will apply this configuration after re-enrolling the machine with a new host name. Identify which services are running on the machine: Use the ipa service-find command, and identify services with certificates in the output: In addition, each host has a default host service which does not appear in the ipa service-find output. The service principal for the host service, also called a host principal , is host/ old-client-name.example.com . For all service principals displayed by ipa service-find old-client-name.example.com , determine the location of the corresponding keytabs on the old-client-name.example.com system: Each service on the client system has a Kerberos principal in the form service_name/host_name@REALM , such as ldap/ [email protected] . Identify all host groups to which the machine belongs. 19.2. Uninstalling an IdM client Uninstalling a client removes the client from the Identity Management (IdM) domain, along with all of the specific IdM configuration of system services, such as System Security Services Daemon (SSSD). This restores the configuration of the client system. Procedure Enter the ipa-client-install --uninstall command: Optional: Check that you cannot obtain a Kerberos ticket-granting ticket (TGT) for an IdM user: If a Kerberos TGT ticket has been returned successfully, follow the additional uninstallation steps in Uninstalling an IdM client: additional steps after multiple past installations . On the client, remove old Kerberos principals from each identified keytab other than /etc/krb5.keytab : On an IdM server, remove all DNS entries for the client host from IdM: On the IdM server, remove the client host entry from the IdM LDAP server. This removes all services and revokes all certificates issued for that host: Important Removing the client host entry from the IdM LDAP server is crucial if you think you might re-enroll the client in the future, with a different IP address or a different hostname. 19.3. Uninstalling an IdM client: additional steps after multiple past installations If you install and uninstall a host as an Identity Management (IdM) client multiple times, the uninstallation procedure might not restore the pre-IdM Kerberos configuration. In this situation, you must manually remove the IdM Kerberos configuration. In extreme cases, you must reinstall the operating system. Prerequisites You have used the ipa-client-install --uninstall command to uninstall the IdM client configuration from the host. However, you can still obtain a Kerberos ticket-granting ticket (TGT) for an IdM user from the IdM server. You have checked that the /var/lib/ipa-client/sysrestore directory is empty and hence you cannot restore the prior-to-IdM-client configuration of the system using the files in the directory. 
Procedure Check the /etc/krb5.conf.ipa file: If the contents of the /etc/krb5.conf.ipa file are the same as the contents of the krb5.conf file prior to the installation of the IdM client, you can: Remove the /etc/krb5.conf file: Rename the /etc/krb5.conf.ipa file to /etc/krb5.conf : If the contents of the /etc/krb5.conf.ipa file are not the same as the contents of the krb5.conf file prior to the installation of the IdM client, you can at least restore the Kerberos configuration to the state directly after the installation of the operating system: Re-install the krb5-libs package: As a dependency, this command will also re-install the krb5-workstation package and the original version of the /etc/krb5.conf file. Remove the /var/log/ipaclient-install.log file if present. Verification Try to obtain IdM user credentials. This should fail: The /etc/krb5.conf file is now restored to its factory state. As a result, you cannot obtain a Kerberos TGT for an IdM user on the host. 19.4. Renaming the host system Rename the machine as required. For example: You can now re-install the Identity Management (IdM) client to the IdM domain with the new host name. 19.5. Re-installing an IdM client Install a client on your renamed host following the procedure described in Installing a client . 19.6. Re-adding services, re-generating certificates, and re-adding host groups Procedure You can re-add services, re-generate certificates, and re-add host groups on your Identity Management (IdM) server. On the Identity Management server, add a new keytab for every service identified in the Preparing an IdM client for its renaming . Generate certificates for services that had a certificate assigned in the Preparing an IdM client for its renaming . You can do this: Using the IdM administration tools Using the certmonger utility Re-add the client to the host groups identified in the Preparing an IdM client for its renaming . | [
"ipa service-find old-client-name.example.com",
"find / -name \"*.keytab\"",
"ipa hostgroup-find old-client-name.example.com",
"ipa-client-install --uninstall",
"kinit admin kinit: Client '[email protected]' not found in Kerberos database while getting initial credentials",
"ipa-rmkeytab -k /path/to/keytab -r EXAMPLE.COM",
"ipa dnsrecord-del Record name: old-client-name Zone name: idm.example.com No option to delete specific record provided. Delete all? Yes/No (default No): true ------------------------ Deleted record \"old-client-name\"",
"ipa host-del client.idm.example.com",
"rm /etc/krb5.conf",
"mv /etc/krb5.conf.ipa /etc/krb5.conf",
"yum reinstall krb5-libs",
"kinit admin kinit: Client '[email protected]' not found in Kerberos database while getting initial credentials",
"hostnamectl set-hostname new-client-name.example.com",
"ipa service-add service_name/new-client-name"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/installing_identity_management/renaming-ipa-client-systems_installing-identity-management |
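The client-side steps of the renaming workflow can be strung together as in the sketch below. The host names are placeholders, the inventory is written to files under /root, and the final command assumes enrollment with an IdM administrator principal with a placeholder password; check ipa-client-install --help for the options your version supports.
# On the client, before uninstalling: record services, keytabs, and host groups.
ipa service-find old-client-name.example.com > /root/services.txt
find / -name "*.keytab" > /root/keytabs.txt
ipa hostgroup-find old-client-name.example.com > /root/hostgroups.txt
# Uninstall, rename, and re-enroll under the new name.
ipa-client-install --uninstall
hostnamectl set-hostname new-client-name.example.com
ipa-client-install --principal admin --password 'PlaceholderPassword1' --unattended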
function::kernel_string2_utf16 | function::kernel_string2_utf16 Name function::kernel_string2_utf16 - Retrieves UTF-16 string from kernel memory with alternative error string Synopsis Arguments addr The kernel address to retrieve the string from err_msg The error message to return when data isn't available Description This function returns a null terminated UTF-8 string converted from the UTF-16 string at a given kernel memory address. Reports the given error message on string copy fault or conversion error. | [
"kernel_string2_utf16:string(addr:long,err_msg:string)"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-kernel-string2-utf16 |
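A minimal usage sketch: because this is one of the *2 variants, a copy fault does not abort the probe; the function simply returns the alternative error string. Passing a null address is an artificial way to trigger that path.
# Prints "<unavailable>" because address 0 cannot be copied from kernel memory.
stap -e 'probe begin { printf("%s\n", kernel_string2_utf16(0, "<unavailable>")); exit() }'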
2.8.6. Malicious Software and Spoofed IP Addresses | 2.8.6. Malicious Software and Spoofed IP Addresses More elaborate rules can be created that control access to specific subnets, or even specific nodes, within a LAN. You can also restrict certain dubious applications or programs such as Trojans, worms, and other client/server viruses from contacting their server. For example, some Trojans scan networks for services on ports from 31337 to 31340 (called the elite ports in cracking terminology). Since there are no legitimate services that communicate via these non-standard ports, blocking them can effectively diminish the chances that potentially infected nodes on your network independently communicate with their remote master servers. The following rules drop all TCP traffic that attempts to use port 31337: You can also block outside connections that attempt to spoof private IP address ranges to infiltrate your LAN. For example, if your LAN uses the 192.168.1.0/24 range, you can design a rule that instructs the Internet-facing network device (for example, eth0) to drop any packets to that device with an address in your LAN IP range. Because it is recommended to reject forwarded packets as a default policy, any other spoofed IP address to the external-facing device (eth0) is rejected automatically. Note There is a distinction between the DROP and REJECT targets when dealing with appended rules. The REJECT target denies access and returns a connection refused error to users who attempt to connect to the service. The DROP target, as the name implies, drops the packet without any warning. Administrators can use their own discretion when using these targets. | [
"~]# iptables -A OUTPUT -o eth0 -p tcp --dport 31337 --sport 31337 -j DROP ~]# iptables -A FORWARD -o eth0 -p tcp --dport 31337 --sport 31337 -j DROP",
"~]# iptables -A FORWARD -s 192.168.1.0/24 -i eth0 -j DROP"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security_guide/sect-security_guide-firewalls-malicious_software_and_spoofed_ip_addresses |
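If you prefer an audit trail over silently discarding spoofed traffic, a LOG rule can precede the DROP rule. The following sketch reuses the 192.168.1.0/24 example range and rate-limits log entries so that a flood cannot fill the logs.
~]# iptables -A FORWARD -s 192.168.1.0/24 -i eth0 -m limit --limit 5/min -j LOG --log-prefix "SPOOFED SRC: "
~]# iptables -A FORWARD -s 192.168.1.0/24 -i eth0 -j DROP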
4.3.2. Creating a Virtual Machine with Virtual Machine Manager | 4.3.2. Creating a Virtual Machine with Virtual Machine Manager Follow these steps to create a Red Hat Enterprise Linux 7 virtual machine on Virtual Machine Manager . Procedure 4.2. Creating a guest virtual machine with Virtual Machine Manager Open Virtual Machine Manager Click Applications System Tools Virtual Machine Manager or Open the terminal and use the virt-manager . Create a new virtual machine Click to open the New VM wizard. Specify name and installation method In Step 1, type in a virtual machine name and choose an installation type to install the guest virtual machine's operating system. Figure 4.2. Name virtual machine and select installation method For this tutorial, select Local install media (ISO image) . This installation method uses an image of an installation disk (in this case an .iso file). Click Forward to continue to the step. Locate installation media Select the Use ISO Image option. Click Browse Browse Local buttons. Locate the ISO downloaded in Procedure 4.1, "Installing the virtualization packages with yum " on your machine. Select the ISO file and click Open . Ensure that Virtual Machine Manager correctly detected the OS type. If not, uncheck Automatically detect operating system based on install media and select Linux from the OS type drop-down and Red Hat Enterprise Linux 7 from the Version drop-down. Figure 4.3. Local ISO image installation Configure memory and CPU You can use step 3 of the wizard to configure the amount of memory and the number of CPUs to allocate to the virtual machine. The wizard shows the number of CPUs and amount of memory available to allocate. For this tutorial, leave the default settings and click Forward . Figure 4.4. Configuring CPU and Memory Configure storage Using step 4 of the wizard, you can assign storage to the guest virtual machine. The wizard shows options for storage, including where to store the virtual machine on the host machine. For this tutorial, leave the default settings and click Forward . Figure 4.5. Configuring storage Review the configuration Using step 5 of the wizard, you can configure the virtualization type, and guest architecture, and networking settings. For this tutorial, verify the settings, and click Finish . Virtual Machine Manager will create a virtual machine with the specified hardware settings. Figure 4.6. Verifying the configuration After Virtual Machine Manager creates your Red Hat Enterprise Linux 7 virtual machine, the virtual machine's window will open, and the installation of the selected operating system will begin in it. Follow the instructions in the Red Hat Enterprise Linux 7 installer to complete the installation of the virtual machine's operating system. Note For help with Red Hat Enterprise Linux 7 installation, refer to the Red Hat Enterprise Linux 7 Installation Guide . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_getting_started_guide/sec-virtualization_getting_started-quickstart_virt-manager-create_vm |
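The same guest can be created without the graphical wizard. The following virt-install sketch mirrors the wizard's steps; the ISO path, disk path, sizes, and the --os-variant value are placeholders and may need adjusting for your environment and osinfo database.
virt-install --name rhel7-guest \
  --ram 2048 --vcpus 2 \
  --cdrom /var/lib/libvirt/images/rhel-server-7.0-x86_64-dvd.iso \
  --disk path=/var/lib/libvirt/images/rhel7-guest.img,size=10 \
  --os-variant rhel7.0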
Chapter 6. Services | Chapter 6. Services This section enumerates all the services that are available in the API. 6.1. AffinityGroup This service manages a single affinity group. Table 6.1. Methods summary Name Summary get Retrieve the affinity group details. remove Remove the affinity group. update Update the affinity group. 6.1.1. get GET Retrieve the affinity group details. <affinity_group id="00000000-0000-0000-0000-000000000000"> <name>AF_GROUP_001</name> <cluster id="00000000-0000-0000-0000-000000000000"/> <positive>true</positive> <enforcing>true</enforcing> </affinity_group> Table 6.2. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . group AffinityGroup Out The affinity group. 6.1.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.1.2. remove DELETE Remove the affinity group. Table 6.3. Parameters summary Name Type Direction Summary async Boolean In Indicates if the removal should be performed asynchronously. 6.1.3. update PUT Update the affinity group. Table 6.4. Parameters summary Name Type Direction Summary async Boolean In Indicates if the update should be performed asynchronously. group AffinityGroup In/Out The affinity group. 6.2. AffinityGroupVm This service manages a single virtual machine to affinity group assignment. Table 6.5. Methods summary Name Summary remove Remove this virtual machine from the affinity group. 6.2.1. remove DELETE Remove this virtual machine from the affinity group. Table 6.6. Parameters summary Name Type Direction Summary async Boolean In Indicates if the removal should be performed asynchronously. 6.3. AffinityGroupVms This service manages a collection of all the virtual machines assigned to an affinity group. Table 6.7. Methods summary Name Summary add Adds a virtual machine to the affinity group. list List all virtual machines assigned to this affinity group. 6.3.1. add POST Adds a virtual machine to the affinity group. For example, to add the virtual machine 789 to the affinity group 456 of cluster 123 , send a request like this: With the following body: <vm id="789"/> Table 6.8. Parameters summary Name Type Direction Summary vm Vm In/Out 6.3.2. list GET List all virtual machines assigned to this affinity group. The order of the returned virtual machines isn't guaranteed. Table 6.9. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of virtual machines to return. vms Vm[] Out 6.3.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.3.2.2. max Sets the maximum number of virtual machines to return. If not specified, all the virtual machines are returned. 6.4. AffinityGroups The affinity groups service manages virtual machine relationships and dependencies. Table 6.10. Methods summary Name Summary add Create a new affinity group. list List existing affinity groups. 6.4.1. add POST Create a new affinity group. Post a request like in the example below to create a new affinity group: And use the following example in its body: <affinity_group> <name>AF_GROUP_001</name> <hosts_rule> <enforcing>true</enforcing> <positive>true</positive> </hosts_rule> <vms_rule> <enabled>false</enabled> </vms_rule> </affinity_group> Table 6.11. 
Parameters summary Name Type Direction Summary group AffinityGroup In/Out The affinity group object to create. 6.4.2. list GET List existing affinity groups. The order of the affinity groups results isn't guaranteed. Table 6.12. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . groups AffinityGroup[] Out The list of existing affinity groups. max Integer In Sets the maximum number of affinity groups to return. 6.4.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.4.2.2. max Sets the maximum number of affinity groups to return. If not specified all the affinity groups are returned. 6.5. AffinityLabel The details of a single affinity label. Table 6.13. Methods summary Name Summary get Retrieves the details of a label. remove Removes a label from the system and clears all assignments of the removed label. update Updates a label. 6.5.1. get GET Retrieves the details of a label. Table 6.14. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . label AffinityLabel Out 6.5.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.5.2. remove DELETE Removes a label from the system and clears all assignments of the removed label. 6.5.3. update PUT Updates a label. This call will update all metadata, such as the name or description. Table 6.15. Parameters summary Name Type Direction Summary label AffinityLabel In/Out 6.6. AffinityLabelHost This service represents a host that has a specific label when accessed through the affinitylabels/hosts subcollection. Table 6.16. Methods summary Name Summary get Retrieves details about a host that has this label assigned. remove Remove a label from a host. 6.6.1. get GET Retrieves details about a host that has this label assigned. Table 6.17. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . host Host Out 6.6.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.6.2. remove DELETE Remove a label from a host. 6.7. AffinityLabelHosts This service represents list of hosts that have a specific label when accessed through the affinitylabels/hosts subcollection. Table 6.18. Methods summary Name Summary add Add a label to a host. list List all hosts with the label. 6.7.1. add POST Add a label to a host. Table 6.19. Parameters summary Name Type Direction Summary host Host In/Out 6.7.2. list GET List all hosts with the label. The order of the returned hosts isn't guaranteed. Table 6.20. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . hosts Host[] Out 6.7.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.8. AffinityLabelVm This service represents a vm that has a specific label when accessed through the affinitylabels/vms subcollection. Table 6.21. Methods summary Name Summary get Retrieves details about a vm that has this label assigned. remove Remove a label from a vm. 6.8.1. get GET Retrieves details about a vm that has this label assigned. Table 6.22. 
Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . vm Vm Out 6.8.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.8.2. remove DELETE Remove a label from a vm. 6.9. AffinityLabelVms This service represents list of vms that have a specific label when accessed through the affinitylabels/vms subcollection. Table 6.23. Methods summary Name Summary add Add a label to a vm. list List all virtual machines with the label. 6.9.1. add POST Add a label to a vm. Table 6.24. Parameters summary Name Type Direction Summary vm Vm In/Out 6.9.2. list GET List all virtual machines with the label. The order of the returned virtual machines isn't guaranteed. Table 6.25. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . vms Vm[] Out 6.9.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.10. AffinityLabels Manages the affinity labels available in the system. Table 6.26. Methods summary Name Summary add Creates a new label. list Lists all labels present in the system. 6.10.1. add POST Creates a new label. The label is automatically attached to all entities mentioned in the vms or hosts lists. Table 6.27. Parameters summary Name Type Direction Summary label AffinityLabel In/Out 6.10.2. list GET Lists all labels present in the system. The order of the returned labels isn't guaranteed. Table 6.28. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . labels AffinityLabel[] Out max Integer In Sets the maximum number of labels to return. 6.10.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.10.2.2. max Sets the maximum number of labels to return. If not specified all the labels are returned. 6.11. Area This annotation is intended to specify what oVirt area is the annotated concept related to. Currently the following areas are in use, and they are closely related to the oVirt teams, but not necessarily the same: Infrastructure Network SLA Storage Virtualization A concept may be associated to more than one area, or to no area. The value of this annotation is intended for reporting only, and it doesn't affect at all the generated code or the validity of the model 6.12. AssignedAffinityLabel This service represents one label to entity assignment when accessed using the entities/affinitylabels subcollection. Table 6.29. Methods summary Name Summary get Retrieves details about the attached label. remove Removes the label from an entity. 6.12.1. get GET Retrieves details about the attached label. Table 6.30. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . label AffinityLabel Out 6.12.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.12.2. remove DELETE Removes the label from an entity. Does not touch the label itself. 6.13. AssignedAffinityLabels This service is used to list and manipulate affinity labels that are assigned to supported entities when accessed using entities/affinitylabels. Table 6.31. 
Methods summary Name Summary add Attaches a label to an entity. list Lists all labels that are attached to an entity. 6.13.1. add POST Attaches a label to an entity. Table 6.32. Parameters summary Name Type Direction Summary label AffinityLabel In/Out 6.13.2. list GET Lists all labels that are attached to an entity. The order of the returned entities isn't guaranteed. Table 6.33. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . label AffinityLabel[] Out 6.13.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.14. AssignedCpuProfile Table 6.34. Methods summary Name Summary get remove 6.14.1. get GET Table 6.35. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . profile CpuProfile Out 6.14.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.14.2. remove DELETE Table 6.36. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.15. AssignedCpuProfiles Table 6.37. Methods summary Name Summary add Add a new cpu profile for the cluster. list List the CPU profiles assigned to the cluster. 6.15.1. add POST Add a new cpu profile for the cluster. Table 6.38. Parameters summary Name Type Direction Summary profile CpuProfile In/Out 6.15.2. list GET List the CPU profiles assigned to the cluster. The order of the returned CPU profiles isn't guaranteed. Table 6.39. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of profiles to return. profiles CpuProfile[] Out 6.15.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.15.2.2. max Sets the maximum number of profiles to return. If not specified all the profiles are returned. 6.16. AssignedDiskProfile Table 6.40. Methods summary Name Summary get remove 6.16.1. get GET Table 6.41. Parameters summary Name Type Direction Summary disk_profile DiskProfile Out follow String In Indicates which inner links should be followed . 6.16.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.16.2. remove DELETE Table 6.42. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.17. AssignedDiskProfiles Table 6.43. Methods summary Name Summary add Add a new disk profile for the storage domain. list Returns the list of disk profiles assigned to the storage domain. 6.17.1. add POST Add a new disk profile for the storage domain. Table 6.44. Parameters summary Name Type Direction Summary profile DiskProfile In/Out 6.17.2. list GET Returns the list of disk profiles assigned to the storage domain. The order of the returned disk profiles isn't guaranteed. Table 6.45. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of profiles to return. profiles DiskProfile[] Out 6.17.2.1. follow Indicates which inner links should be followed . 
The objects referenced by these links will be fetched as part of the current request. See here for details. 6.17.2.2. max Sets the maximum number of profiles to return. If not specified all the profiles are returned. 6.18. AssignedPermissions Represents a permission sub-collection, scoped by user, group or some entity type. Table 6.46. Methods summary Name Summary add Assign a new permission to a user or group for specific entity. list List all the permissions of the specific entity. 6.18.1. add POST Assign a new permission to a user or group for specific entity. For example, to assign the UserVmManager role to the virtual machine with id 123 to the user with id 456 send a request like this: With a request body like this: <permission> <role> <name>UserVmManager</name> </role> <user id="456"/> </permission> To assign the SuperUser role to the system to the user with id 456 send a request like this: With a request body like this: <permission> <role> <name>SuperUser</name> </role> <user id="456"/> </permission> If you want to assign permission to the group instead of the user please replace the user element with the group element with proper id of the group. For example to assign the UserRole role to the cluster with id 123 to the group with id 789 send a request like this: With a request body like this: <permission> <role> <name>UserRole</name> </role> <group id="789"/> </permission> Table 6.47. Parameters summary Name Type Direction Summary permission Permission In/Out The permission. 6.18.2. list GET List all the permissions of the specific entity. For example to list all the permissions of the cluster with id 123 send a request like this: <permissions> <permission id="456"> <cluster id="123"/> <role id="789"/> <user id="451"/> </permission> <permission id="654"> <cluster id="123"/> <role id="789"/> <group id="127"/> </permission> </permissions> The order of the returned permissions isn't guaranteed. Table 6.48. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . permissions Permission[] Out The list of permissions. 6.18.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.19. AssignedRoles Represents a roles sub-collection, for example scoped by user. Table 6.49. Methods summary Name Summary list Returns the roles assigned to the permission. 6.19.1. list GET Returns the roles assigned to the permission. The order of the returned roles isn't guaranteed. Table 6.50. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of roles to return. roles Role[] Out 6.19.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.19.1.2. max Sets the maximum number of roles to return. If not specified all the roles are returned. 6.20. AssignedTag A service to manage assignment of specific tag to specific entities in system. Table 6.51. Methods summary Name Summary get Gets the information about the assigned tag. remove Unassign tag from specific entity in the system. 6.20.1. get GET Gets the information about the assigned tag. 
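The request line for the example below is not reproduced in this text. As an illustrative, hedged reconstruction based on the vm and tag identifiers shown in the example response, a tag assigned to a virtual machine is typically read through the vms/.../tags sub-collection, for example: GET /ovirt-engine/api/vms/123/tags/456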
For example to retrieve the information about the tag with the id 456 which is assigned to virtual machine with id 123 send a request like this: <tag href="/ovirt-engine/api/tags/456" id="456"> <name>root</name> <description>root</description> <vm href="/ovirt-engine/api/vms/123" id="123"/> </tag> Table 6.52. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . tag Tag Out The assigned tag. 6.20.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.20.2. remove DELETE Unassign tag from specific entity in the system. For example to unassign the tag with id 456 from virtual machine with id 123 send a request like this: Table 6.53. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.21. AssignedTags A service to manage collection of assignment of tags to specific entities in system. Table 6.54. Methods summary Name Summary add Assign tag to specific entity in the system. list List all tags assigned to the specific entity. 6.21.1. add POST Assign tag to specific entity in the system. For example to assign tag mytag to virtual machine with the id 123 send a request like this: With a request body like this: <tag> <name>mytag</name> </tag> Table 6.55. Parameters summary Name Type Direction Summary tag Tag In/Out The assigned tag. 6.21.2. list GET List all tags assigned to the specific entity. For example to list all the tags of the virtual machine with id 123 send a request like this: <tags> <tag href="/ovirt-engine/api/tags/222" id="222"> <name>mytag</name> <description>mytag</description> <vm href="/ovirt-engine/api/vms/123" id="123"/> </tag> </tags> The order of the returned tags isn't guaranteed. Table 6.56. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of tags to return. tags Tag[] Out The list of assigned tags. 6.21.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.21.2.2. max Sets the maximum number of tags to return. If not specified all the tags are returned. 6.22. AssignedVnicProfile Table 6.57. Methods summary Name Summary get remove 6.22.1. get GET Table 6.58. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . profile VnicProfile Out 6.22.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.22.2. remove DELETE Table 6.59. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.23. AssignedVnicProfiles Table 6.60. Methods summary Name Summary add Add a new virtual network interface card profile for the network. list Returns the list of VNIC profiles assifned to the network. 6.23.1. add POST Add a new virtual network interface card profile for the network. Table 6.61. Parameters summary Name Type Direction Summary profile VnicProfile In/Out 6.23.2. list GET Returns the list of VNIC profiles assifned to the network. The order of the returned VNIC profiles isn't guaranteed. Table 6.62. 
Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of profiles to return. profiles VnicProfile[] Out 6.23.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.23.2.2. max Sets the maximum number of profiles to return. If not specified all the profiles are returned. 6.24. AttachedStorageDomain Table 6.63. Methods summary Name Summary activate This operation activates an attached storage domain. deactivate This operation deactivates an attached storage domain. get remove 6.24.1. activate POST This operation activates an attached storage domain. Once the storage domain is activated it is ready for use with the data center. The activate action does not take any action-specific parameters, so the request body should contain an empty action : <action/> Table 6.64. Parameters summary Name Type Direction Summary async Boolean In Indicates if the activation should be performed asynchronously. 6.24.2. deactivate POST This operation deactivates an attached storage domain. Once the storage domain is deactivated it will not be used with the data center. For example, to deactivate storage domain 456 , send the following request: With a request body like this: <action/> If the force parameter is true then the operation will succeed, even if the OVF update which takes place before the deactivation of the storage domain failed. If the force parameter is false and the OVF update failed, the deactivation of the storage domain will also fail. Table 6.65. Parameters summary Name Type Direction Summary async Boolean In Indicates if the deactivation should be performed asynchronously. force Boolean In Indicates if the operation should succeed and the storage domain should be moved to a deactivated state, even if the OVF update for the storage domain failed. 6.24.2.1. force Indicates if the operation should succeed and the storage domain should be moved to a deactivated state, even if the OVF update for the storage domain failed. For example, to deactivate storage domain 456 using the force flag, send the following request: With a request body like this: <action> <force>true</force> </action> This parameter is optional, and the default value is false . 6.24.3. get GET Table 6.66. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . storage_domain StorageDomain Out 6.24.3.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.24.4. remove DELETE Table 6.67. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.25. AttachedStorageDomainDisk Manages a single disk available in a storage domain attached to a data center. Important Since version 4.2 of the engine this service is intended only to list disks available in the storage domain, and to register unregistered disks. All the other operations, like copying a disk, moving a disk, etc., have been deprecated and will be removed in the future. To perform those operations use the service that manages all the disks of the system , or the service that manages a specific disk . Table 6.68. Methods summary Name Summary copy Copies a disk to the specified storage domain. export Exports a disk to an export storage domain.
get Retrieves the description of the disk. move Moves a disk to another storage domain. register Registers an unregistered disk. remove Removes a disk. sparsify Sparsify the disk. update Updates the disk. 6.25.1. copy POST Copies a disk to the specified storage domain. Important Since version 4.2 of the engine this operation is deprecated, and preserved only for backwards compatibility. It will be removed in the future. To copy a disk use the copy operation of the service that manages that disk. Table 6.69. Parameters summary Name Type Direction Summary disk Disk In Description of the resulting disk. storage_domain StorageDomain In The storage domain where the new disk will be created. 6.25.2. export POST Exports a disk to an export storage domain. Important Since version 4.2 of the engine this operation is deprecated, and preserved only for backwards compatibility. It will be removed in the future. To export a disk use the export operation of the service that manages that disk. Table 6.70. Parameters summary Name Type Direction Summary storage_domain StorageDomain In The export storage domain where the disk should be exported to. 6.25.3. get GET Retrieves the description of the disk. Table 6.71. Parameters summary Name Type Direction Summary disk Disk Out The description of the disk. follow String In Indicates which inner links should be followed . 6.25.3.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.25.4. move POST Moves a disk to another storage domain. Important Since version 4.2 of the engine this operation is deprecated, and preserved only for backwards compatibility. It will be removed in the future. To move a disk use the move operation of the service that manages that disk. Table 6.72. Parameters summary Name Type Direction Summary async Boolean In Indicates if the move should be performed asynchronously. filter Boolean In Indicates if the results should be filtered according to the permissions of the user. storage_domain StorageDomain In The storage domain where the disk will be moved to. 6.25.5. register POST Registers an unregistered disk. 6.25.6. remove DELETE Removes a disk. Important Since version 4.2 of the engine this operation is deprecated, and preserved only for backwards compatibility. It will be removed in the future. To remove a disk use the remove operation of the service that manages that disk. 6.25.7. sparsify POST Sparsify the disk. Important Since version 4.2 of the engine this operation is deprecated, and preserved only for backwards compatibility. It will be removed in the future. To remove a disk use the remove operation of the service that manages that disk. 6.25.8. update PUT Updates the disk. Important Since version 4.2 of the engine this operation is deprecated, and preserved only for backwards compatibility. It will be removed in the future. To update a disk use the update operation of the service that manages that disk. Table 6.73. Parameters summary Name Type Direction Summary disk Disk In/Out The update to apply to the disk. 6.26. AttachedStorageDomainDisks Manages the collection of disks available inside an storage domain that is attached to a data center. Table 6.74. Methods summary Name Summary add Adds or registers a disk. list Retrieve the list of disks that are available in the storage domain. 6.26.1. add POST Adds or registers a disk. 
Important Since version 4.2 of the engine this operation is deprecated, and preserved only for backwards compatibility. It will be removed in the future. To add a new disk use the add operation of the service that manages the disks of the system. To register an unregistered disk use the register operation of the service that manages that disk. Table 6.75. Parameters summary Name Type Direction Summary disk Disk In/Out The disk to add or register. unregistered Boolean In Indicates if a new disk should be added or if an existing unregistered disk should be registered. 6.26.1.1. unregistered Indicates if a new disk should be added or if an existing unregistered disk should be registered. If the value is true then the identifier of the disk to register needs to be provided. For example, to register the disk with id 456 send a request like this: With a request body like this: <disk id="456"/> If the value is false then a new disk will be created in the storage domain. In that case the provisioned_size , format and name attributes are mandatory. For example, to create a new copy on write disk of 1 GiB, send a request like this: With a request body like this: <disk> <name>mydisk</name> <format>cow</format> <provisioned_size>1073741824</provisioned_size> </disk> The default value is false . 6.26.2. list GET Retrieve the list of disks that are available in the storage domain. Table 6.76. Parameters summary Name Type Direction Summary disks Disk[] Out List of retrieved disks. follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of disks to return. 6.26.2.1. disks List of retrieved disks. The order of the returned disks isn't guaranteed. 6.26.2.2. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.26.2.3. max Sets the maximum number of disks to return. If not specified all the disks are returned. 6.27. AttachedStorageDomains Manages the storage domains attached to a data center. Table 6.77. Methods summary Name Summary add Attaches an existing storage domain to the data center. list Returns the list of storage domains attached to the data center. 6.27.1. add POST Attaches an existing storage domain to the data center. Table 6.78. Parameters summary Name Type Direction Summary storage_domain StorageDomain In/Out The storage domain to attach to the data center. 6.27.2. list GET Returns the list of storage domains attached to the data center. The order of the returned storage domains isn't guaranteed. Table 6.79. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of storage domains to return. storage_domains StorageDomain[] Out A list of storage domains that are attached to the data center. 6.27.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.27.2.2. max Sets the maximum number of storage domains to return. If not specified all the storage domains are returned. 6.28. Balance Table 6.80. Methods summary Name Summary get remove 6.28.1. get GET Table 6.81. Parameters summary Name Type Direction Summary balance Balance Out filter Boolean In Indicates if the results should be filtered according to the permissions of the user. follow String In Indicates which inner links should be followed . 6.28.1.1. 
follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.28.2. remove DELETE Table 6.82. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.29. Balances Table 6.83. Methods summary Name Summary add Add a balance module to a specified user defined scheduling policy. list Returns the list of balance modules used by the scheduling policy. 6.29.1. add POST Add a balance module to a specified user defined scheduling policy. Table 6.84. Parameters summary Name Type Direction Summary balance Balance In/Out 6.29.2. list GET Returns the list of balance modules used by the scheduling policy. The order of the returned balance modules isn't guaranteed. Table 6.85. Parameters summary Name Type Direction Summary balances Balance[] Out filter Boolean In Indicates if the results should be filtered according to the permissions of the user. follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of balances to return. 6.29.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.29.2.2. max Sets the maximum number of balances to return. If not specified all the balances are returned. 6.30. Bookmark A service to manage a bookmark. Table 6.86. Methods summary Name Summary get Get a bookmark. remove Remove a bookmark. update Update a bookmark. 6.30.1. get GET Get a bookmark. An example for getting a bookmark: <bookmark href="/ovirt-engine/api/bookmarks/123" id="123"> <name>example_vm</name> <value>vm: name=example*</value> </bookmark> Table 6.87. Parameters summary Name Type Direction Summary bookmark Bookmark Out The requested bookmark. follow String In Indicates which inner links should be followed . 6.30.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.30.2. remove DELETE Remove a bookmark. An example for removing a bookmark: Table 6.88. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.30.3. update PUT Update a bookmark. An example for updating a bookmark: With the request body: <bookmark> <name>new_example_vm</name> <value>vm: name=new_example*</value> </bookmark> Table 6.89. Parameters summary Name Type Direction Summary async Boolean In Indicates if the update should be performed asynchronously. bookmark Bookmark In/Out The updated bookmark. 6.31. Bookmarks A service to manage bookmarks. Table 6.90. Methods summary Name Summary add Adding a new bookmark. list Listing all the available bookmarks. 6.31.1. add POST Adding a new bookmark. Example of adding a bookmark: <bookmark> <name>new_example_vm</name> <value>vm: name=new_example*</value> </bookmark> Table 6.91. Parameters summary Name Type Direction Summary bookmark Bookmark In/Out The added bookmark. 6.31.2. list GET Listing all the available bookmarks. Example of listing bookmarks: <bookmarks> <bookmark href="/ovirt-engine/api/bookmarks/123" id="123"> <name>database</name> <value>vm: name=database*</value> </bookmark> <bookmark href="/ovirt-engine/api/bookmarks/456" id="456"> <name>example</name> <value>vm: name=example*</value> </bookmark> </bookmarks> The order of the returned bookmarks isn't guaranteed. 
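The request lines for the bookmark examples above are not reproduced in this text. As a hedged illustration, based on the /ovirt-engine/api/bookmarks paths visible in the href attributes of the example responses, the listing could be performed with curl as follows; the engine host name, CA certificate path, and credentials are placeholders in the same style as the data center listing example later in this chapter:
curl --request GET --cacert /etc/pki/ovirt-engine/ca.pem --header "Version: 4" --header "Accept: application/xml" --user "admin@internal:mypassword" https://myengine.example.com/ovirt-engine/api/bookmarks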
Table 6.92. Parameters summary Name Type Direction Summary bookmarks Bookmark[] Out The list of available bookmarks. follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of bookmarks to return. 6.31.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.31.2.2. max Sets the maximum number of bookmarks to return. If not specified all the bookmarks are returned. 6.32. Cluster A service to manage a specific cluster. Table 6.93. Methods summary Name Summary get Gets information about the cluster. remove Removes the cluster from the system. resetemulatedmachine syncallnetworks Synchronizes all networks on the cluster. update Updates information about the cluster. 6.32.1. get GET Gets information about the cluster. An example of getting a cluster: <cluster href="/ovirt-engine/api/clusters/123" id="123"> <actions> <link href="/ovirt-engine/api/clusters/123/resetemulatedmachine" rel="resetemulatedmachine"/> </actions> <name>Default</name> <description>The default server cluster</description> <link href="/ovirt-engine/api/clusters/123/networks" rel="networks"/> <link href="/ovirt-engine/api/clusters/123/permissions" rel="permissions"/> <link href="/ovirt-engine/api/clusters/123/glustervolumes" rel="glustervolumes"/> <link href="/ovirt-engine/api/clusters/123/glusterhooks" rel="glusterhooks"/> <link href="/ovirt-engine/api/clusters/123/affinitygroups" rel="affinitygroups"/> <link href="/ovirt-engine/api/clusters/123/cpuprofiles" rel="cpuprofiles"/> <ballooning_enabled>false</ballooning_enabled> <cpu> <architecture>x86_64</architecture> <type>Intel Penryn Family</type> </cpu> <error_handling> <on_error>migrate</on_error> </error_handling> <fencing_policy> <enabled>true</enabled> <skip_if_connectivity_broken> <enabled>false</enabled> <threshold>50</threshold> </skip_if_connectivity_broken> <skip_if_sd_active> <enabled>false</enabled> </skip_if_sd_active> </fencing_policy> <gluster_service>false</gluster_service> <ha_reservation>false</ha_reservation> <ksm> <enabled>true</enabled> <merge_across_nodes>true</merge_across_nodes> </ksm> <maintenance_reason_required>false</maintenance_reason_required> <memory_policy> <over_commit> <percent>100</percent> </over_commit> <transparent_hugepages> <enabled>true</enabled> </transparent_hugepages> </memory_policy> <migration> <auto_converge>inherit</auto_converge> <bandwidth> <assignment_method>auto</assignment_method> </bandwidth> <compressed>inherit</compressed> </migration> <optional_reason>false</optional_reason> <required_rng_sources> <required_rng_source>random</required_rng_source> </required_rng_sources> <scheduling_policy href="/ovirt-engine/api/schedulingpolicies/456" id="456"/> <threads_as_cores>false</threads_as_cores> <trusted_service>false</trusted_service> <tunnel_migration>false</tunnel_migration> <version> <major>4</major> <minor>0</minor> </version> <virt_service>true</virt_service> <data_center href="/ovirt-engine/api/datacenters/111" id="111"/> </cluster> Table 6.94. Parameters summary Name Type Direction Summary cluster Cluster Out filter Boolean In Indicates if the results should be filtered according to the permissions of the user. follow String In Indicates which inner links should be followed . 6.32.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.32.2. 
remove DELETE Removes the cluster from the system. Table 6.95. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.32.3. resetemulatedmachine POST Table 6.96. Parameters summary Name Type Direction Summary async Boolean In Indicates if the reset should be performed asynchronously. 6.32.4. syncallnetworks POST Synchronizes all networks on the cluster. With a request body like this: <action/> Table 6.97. Parameters summary Name Type Direction Summary async Boolean In Indicates if the action should be performed asynchronously. 6.32.5. update PUT Updates information about the cluster. Only the specified fields are updated; others remain unchanged. For example, to update the cluster's CPU: With a request body like this: <cluster> <cpu> <type>Intel Haswell-noTSX Family</type> </cpu> </cluster> Table 6.98. Parameters summary Name Type Direction Summary async Boolean In Indicates if the update should be performed asynchronously. cluster Cluster In/Out 6.33. ClusterEnabledFeature Represents a feature enabled for the cluster. Table 6.99. Methods summary Name Summary get Provides the information about the cluster feature enabled. remove Disables a cluster feature. 6.33.1. get GET Provides the information about the cluster feature enabled. For example, to find details of the enabled feature 456 for cluster 123 , send a request like this: That will return a ClusterFeature object containing the name: <cluster_feature id="456"> <name>libgfapi_supported</name> </cluster_feature> Table 6.100. Parameters summary Name Type Direction Summary feature ClusterFeature Out Retrieved cluster feature that's enabled. follow String In Indicates which inner links should be followed . 6.33.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.33.2. remove DELETE Disables a cluster feature. For example, to disable the feature 456 of cluster 123 send a request like this: 6.34. ClusterEnabledFeatures Provides information about the additional features that are enabled for this cluster. The features that are enabled are the available features for the cluster level Table 6.101. Methods summary Name Summary add Enable an additional feature for a cluster. list Lists the additional features enabled for the cluster. 6.34.1. add POST Enable an additional feature for a cluster. For example, to enable a feature 456 on cluster 123 , send a request like this: The request body should look like this: <cluster_feature id="456"/> Table 6.102. Parameters summary Name Type Direction Summary feature ClusterFeature In/Out 6.34.2. list GET Lists the additional features enabled for the cluster. For example, to get the features enabled for cluster 123 send a request like this: This will return a list of features: <enabled_features> <cluster_feature id="123"> <name>test_feature</name> </cluster_feature> ... </enabled_features> Table 6.103. Parameters summary Name Type Direction Summary features ClusterFeature[] Out Retrieved features. follow String In Indicates which inner links should be followed . 6.34.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.35. ClusterExternalProviders This service lists external providers. Table 6.104. Methods summary Name Summary list Returns the list of external providers. 6.35.1. 
list GET Returns the list of external providers. The order of the returned list of providers is not guaranteed. Table 6.105. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . providers ExternalProvider[] Out 6.35.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.36. ClusterFeature Represents a feature enabled for the cluster level. Table 6.106. Methods summary Name Summary get Provides the information about a cluster feature supported by a cluster level. 6.36.1. get GET Provides the information about a cluster feature supported by a cluster level. For example, to find details of the cluster feature 456 for cluster level 4.1, send a request like this: That will return a ClusterFeature object containing the name: <cluster_feature id="456"> <name>libgfapi_supported</name> </cluster_feature> Table 6.107. Parameters summary Name Type Direction Summary feature ClusterFeature Out Retrieved cluster feature. follow String In Indicates which inner links should be followed . 6.36.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.37. ClusterFeatures Provides information about the cluster features that are supported by a cluster level. Table 6.108. Methods summary Name Summary list Lists the cluster features supported by the cluster level. 6.37.1. list GET Lists the cluster features supported by the cluster level. This will return a list of cluster features supported by the cluster level: <cluster_features> <cluster_feature id="123"> <name>test_feature</name> </cluster_feature> ... </cluster_features> Table 6.109. Parameters summary Name Type Direction Summary features ClusterFeature[] Out Retrieved features. follow String In Indicates which inner links should be followed . 6.37.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.38. ClusterLevel Provides information about a specific cluster level. See the ClusterLevels service for more information. Table 6.110. Methods summary Name Summary get Provides the information about the capabilities of the specific cluster level managed by this service. 6.38.1. get GET Provides the information about the capabilities of the specific cluster level managed by this service. For example, to find what CPU types are supported by level 3.6 you can send a request like this: That will return a ClusterLevel object containing the supported CPU types, and other information which describes the cluster level: <cluster_level id="3.6"> <cpu_types> <cpu_type> <name>Intel Conroe Family</name> <level>3</level> <architecture>x86_64</architecture> </cpu_type> ... </cpu_types> <permits> <permit id="1"> <name>create_vm</name> <administrative>false</administrative> </permit> ... </permits> </cluster_level> Table 6.111. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . level ClusterLevel Out Retrieved cluster level. 6.38.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.39. ClusterLevels Provides information about the capabilities of different cluster levels supported by the engine.
Version 4.0 of the engine supports levels 4.0 and 3.6. Each of these levels support different sets of CPU types, for example. This service provides that information. Table 6.112. Methods summary Name Summary list Lists the cluster levels supported by the system. 6.39.1. list GET Lists the cluster levels supported by the system. This will return a list of available cluster levels. <cluster_levels> <cluster_level id="4.0"> ... </cluster_level> ... </cluster_levels> The order of the returned cluster levels isn't guaranteed. Table 6.113. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . levels ClusterLevel[] Out Retrieved cluster levels. 6.39.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.40. ClusterNetwork A service to manage a specific cluster network. Table 6.114. Methods summary Name Summary get Retrieves the cluster network details. remove Unassigns the network from a cluster. update Updates the network in the cluster. 6.40.1. get GET Retrieves the cluster network details. Table 6.115. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . network Network Out The cluster network. 6.40.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.40.2. remove DELETE Unassigns the network from a cluster. 6.40.3. update PUT Updates the network in the cluster. Table 6.116. Parameters summary Name Type Direction Summary network Network In/Out The cluster network. 6.41. ClusterNetworks A service to manage cluster networks. Table 6.117. Methods summary Name Summary add Assigns the network to a cluster. list Lists the networks that are assigned to the cluster. 6.41.1. add POST Assigns the network to a cluster. Post a request like in the example below to assign the network to a cluster: Use the following example in its body: <network id="123" /> Table 6.118. Parameters summary Name Type Direction Summary network Network In/Out The network object to be assigned to the cluster. 6.41.2. list GET Lists the networks that are assigned to the cluster. The order of the returned clusters isn't guaranteed. Table 6.119. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of networks to return. networks Network[] Out The list of networks that are assigned to the cluster. 6.41.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.41.2.2. max Sets the maximum number of networks to return. If not specified, all the networks are returned. 6.42. Clusters A service to manage clusters. Table 6.120. Methods summary Name Summary add Creates a new cluster. list Returns the list of clusters of the system. 6.42.1. add POST Creates a new cluster. This requires the name , cpu.type , and data_center attributes. Identify the data center with either the id or name attribute. 
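The request line for this example is omitted above; assuming the standard clusters collection path that appears in href attributes elsewhere in this chapter, the call would be: POST /ovirt-engine/api/clusters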
With a request body like this: <cluster> <name>mycluster</name> <cpu> <type>Intel Penryn Family</type> </cpu> <data_center id="123"/> </cluster> To create a cluster with an external network provider to be deployed on every host that is added to the cluster, send a request like this: With a request body containing a reference to the desired provider: <cluster> <name>mycluster</name> <cpu> <type>Intel Penryn Family</type> </cpu> <data_center id="123"/> <external_network_providers> <external_provider name="ovirt-provider-ovn"/> </external_network_providers> </cluster> Table 6.121. Parameters summary Name Type Direction Summary cluster Cluster In/Out 6.42.2. list GET Returns the list of clusters of the system. The order of the returned clusters is guaranteed only if the sortby clause is included in the search parameter. Table 6.122. Parameters summary Name Type Direction Summary case_sensitive Boolean In Indicates if the search should be performed taking case into account. clusters Cluster[] Out filter Boolean In Indicates if the results should be filtered according to the permissions of the user. follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of clusters to return. search String In A query string used to restrict the returned clusters. 6.42.2.1. case_sensitive Indicates if the search should be performed taking case into account. The default value is true , which means that case is taken into account. To search ignoring case, set it to false . 6.42.2.2. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.42.2.3. max Sets the maximum number of clusters to return. If not specified, all the clusters are returned. 6.43. Copyable Table 6.123. Methods summary Name Summary copy 6.43.1. copy POST Table 6.124. Parameters summary Name Type Direction Summary async Boolean In Indicates if the copy should be performed asynchronously. 6.44. CpuProfile Table 6.125. Methods summary Name Summary get remove update Update the specified cpu profile in the system. 6.44.1. get GET Table 6.126. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . profile CpuProfile Out 6.44.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.44.2. remove DELETE Table 6.127. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.44.3. update PUT Update the specified cpu profile in the system. Table 6.128. Parameters summary Name Type Direction Summary async Boolean In Indicates if the update should be performed asynchronously. profile CpuProfile In/Out 6.45. CpuProfiles Table 6.129. Methods summary Name Summary add Add a new cpu profile to the system. list Returns the list of CPU profiles of the system. 6.45.1. add POST Add a new cpu profile to the system. Table 6.130. Parameters summary Name Type Direction Summary profile CpuProfile In/Out 6.45.2. list GET Returns the list of CPU profiles of the system. The order of the returned list of CPU profiles isn't guranteed. Table 6.131. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of profiles to return. profile CpuProfile[] Out 6.45.2.1. 
follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.45.2.2. max Sets the maximum number of profiles to return. If not specified all the profiles are returned. 6.46. DataCenter A service to manage a data center. Table 6.132. Methods summary Name Summary get Get a data center. remove Removes the data center. update Updates the data center. 6.46.1. get GET Get a data center. An example of getting a data center: <data_center href="/ovirt-engine/api/datacenters/123" id="123"> <name>Default</name> <description>The default Data Center</description> <link href="/ovirt-engine/api/datacenters/123/clusters" rel="clusters"/> <link href="/ovirt-engine/api/datacenters/123/storagedomains" rel="storagedomains"/> <link href="/ovirt-engine/api/datacenters/123/permissions" rel="permissions"/> <link href="/ovirt-engine/api/datacenters/123/networks" rel="networks"/> <link href="/ovirt-engine/api/datacenters/123/quotas" rel="quotas"/> <link href="/ovirt-engine/api/datacenters/123/qoss" rel="qoss"/> <link href="/ovirt-engine/api/datacenters/123/iscsibonds" rel="iscsibonds"/> <local>false</local> <quota_mode>disabled</quota_mode> <status>up</status> <storage_format>v3</storage_format> <supported_versions> <version> <major>4</major> <minor>0</minor> </version> </supported_versions> <version> <major>4</major> <minor>0</minor> </version> <mac_pool href="/ovirt-engine/api/macpools/456" id="456"/> </data_center> Table 6.133. Parameters summary Name Type Direction Summary data_center DataCenter Out filter Boolean In Indicates if the results should be filtered according to the permissions of the user. follow String In Indicates which inner links should be followed . 6.46.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.46.2. remove DELETE Removes the data center. Without any special parameters, the storage domains attached to the data center are detached and then removed from the storage. If something fails when performing this operation, for example if there is no host available to remove the storage domains from the storage, the complete operation will fail. If the force parameter is true then the operation will always succeed, even if something fails while removing one storage domain, for example. The failure is just ignored and the data center is removed from the database anyway. Table 6.134. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. force Boolean In Indicates if the operation should succeed, and the storage domain removed from the database, even if something fails during the operation. 6.46.2.1. force Indicates if the operation should succeed, and the storage domain removed from the database, even if something fails during the operation. This parameter is optional, and the default value is false . 6.46.3. update PUT Updates the data center. The name , description , storage_type , version , storage_format and mac_pool elements are updatable post-creation. For example, to change the name and description of data center 123 send a request like this: With a request body like this: <data_center> <name>myupdatedname</name> <description>An updated description for the data center</description> </data_center> Table 6.135. 
Parameters summary Name Type Direction Summary async Boolean In Indicates if the update should be performed asynchronously. data_center DataCenter In/Out The data center that is being updated. 6.47. DataCenterNetwork A service to manage a specific data center network. Table 6.136. Methods summary Name Summary get Retrieves the data center network details. remove Removes the network. update Updates the network in the data center. 6.47.1. get GET Retrieves the data center network details. Table 6.137. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . network Network Out The data center network. 6.47.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.47.2. remove DELETE Removes the network. 6.47.3. update PUT Updates the network in the data center. Table 6.138. Parameters summary Name Type Direction Summary network Network In/Out The data center network. 6.48. DataCenterNetworks A service to manage data center networks. Table 6.139. Methods summary Name Summary add Create a new network in a data center. list Lists networks in the data center. 6.48.1. add POST Create a new network in a data center. Post a request like in the example below to create a new network in a data center with an ID of 123 . Use the following example in its body: <network> <name>mynetwork</name> </network> Table 6.140. Parameters summary Name Type Direction Summary network Network In/Out The network object to be created in the data center. 6.48.2. list GET Lists networks in the data center. The order of the returned list of networks isn't guaranteed. Table 6.141. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of networks to return. networks Network[] Out The list of networks which are in the data center. 6.48.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.48.2.2. max Sets the maximum number of networks to return. If not specified, all the networks are returned. 6.49. DataCenters A service to manage data centers. Table 6.142. Methods summary Name Summary add Creates a new data center. list Lists the data centers. 6.49.1. add POST Creates a new data center. Creation of a new data center requires the name and local elements. For example, to create a data center named mydc that uses shared storage (NFS, iSCSI or Fibre Channel) send a request like this: With a request body like this: <data_center> <name>mydc</name> <local>false</local> </data_center> Table 6.143. Parameters summary Name Type Direction Summary data_center DataCenter In/Out The data center that is being added. For more information, see Data Center Properties and Changing the Data Center Storage Type in the Administration Guide. 6.49.2. list GET Lists the data centers. 
The following request retrieves a representation of the data centers: The above request performed with curl : curl \ --request GET \ --cacert /etc/pki/ovirt-engine/ca.pem \ --header "Version: 4" \ --header "Accept: application/xml" \ --user "admin@internal:mypassword" \ https://myengine.example.com/ovirt-engine/api/datacenters This is what an example response could look like: <data_center href="/ovirt-engine/api/datacenters/123" id="123"> <name>Default</name> <description>The default Data Center</description> <link href="/ovirt-engine/api/datacenters/123/networks" rel="networks"/> <link href="/ovirt-engine/api/datacenters/123/storagedomains" rel="storagedomains"/> <link href="/ovirt-engine/api/datacenters/123/permissions" rel="permissions"/> <link href="/ovirt-engine/api/datacenters/123/clusters" rel="clusters"/> <link href="/ovirt-engine/api/datacenters/123/qoss" rel="qoss"/> <link href="/ovirt-engine/api/datacenters/123/iscsibonds" rel="iscsibonds"/> <link href="/ovirt-engine/api/datacenters/123/quotas" rel="quotas"/> <local>false</local> <quota_mode>disabled</quota_mode> <status>up</status> <supported_versions> <version> <major>4</major> <minor>0</minor> </version> </supported_versions> <version> <major>4</major> <minor>0</minor> </version> </data_center> Note the id code of your Default data center. This code identifies this data center in relation to other resources of your virtual environment. The data center also contains a link to the storage domains collection. The data center uses this collection to attach storage domains from the storage domains main collection. The order of the returned list of data centers is guaranteed only if the sortby clause is included in the search parameter. Table 6.144. Parameters summary Name Type Direction Summary case_sensitive Boolean In Indicates if the search performed using the search parameter should be performed taking case into account. data_centers DataCenter[] Out filter Boolean In Indicates if the results should be filtered according to the permissions of the user. follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of data centers to return. search String In A query string used to restrict the returned data centers. 6.49.2.1. case_sensitive Indicates if the search performed using the search parameter should be performed taking case into account. The default value is true , which means that case is taken into account. If you want to search ignoring case set it to false . 6.49.2.2. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.49.2.3. max Sets the maximum number of data centers to return. If not specified all the data centers are returned. 6.50. Disk Manages a single disk. Table 6.145. Methods summary Name Summary copy This operation copies a disk to the specified storage domain. export Exports a disk to an export storage domain. get Retrieves the description of the disk. move Moves a disk to another storage domain. reduce Reduces the size of the disk image. refreshlun Refreshes a direct LUN disk with up-to-date information from the storage. remove Removes a disk. sparsify Sparsify the disk. update This operation updates the disk with the appropriate parameters. 6.50.1. copy POST This operation copies a disk to the specified storage domain. 
For example, copy of a disk can be facilitated using the following request: With a request body like this: <action> <storage_domain id="456"/> <disk> <name>mydisk</name> </disk> </action> If the disk profile or the quota used currently by the disk aren't defined for the new storage domain, then they can be explicitly specified. If they aren't then the first available disk profile and the default quota are used. For example, to explicitly use disk profile 987 and quota 753 send a request body like this: <action> <storage_domain id="456"/> <disk_profile id="987"/> <quota id="753"/> </action> Table 6.146. Parameters summary Name Type Direction Summary async Boolean In Indicates if the copy should be performed asynchronously. disk Disk In disk_profile DiskProfile In Disk profile for the disk in the new storage domain. filter Boolean In Indicates if the results should be filtered according to the permissions of the user. quota Quota In Quota for the disk in the new storage domain. storage_domain StorageDomain In The storage domain where the new disk will be created. 6.50.1.1. disk_profile Disk profile for the disk in the new storage domain. Disk profiles are defined for storage domains, so the old disk profile will not exist in the new storage domain. If this parameter is not used, the first disk profile from the new storage domain to which the user has permissions will be assigned to the disk. 6.50.1.2. quota Quota for the disk in the new storage domain. This optional parameter can be used to specify new quota for the disk, because the current quota may not be defined for the new storage domain. If this parameter is not used and the old quota is not defined for the new storage domain, the default (unlimited) quota will be assigned to the disk. 6.50.1.3. storage_domain The storage domain where the new disk will be created. Can be specified using the id or name attributes. For example, to copy a disk to the storage domain named mydata send a request like this: With a request body like this: <action> <storage_domain> <name>mydata</name> </storage_domain> </action> 6.50.2. export POST Exports a disk to an export storage domain. Table 6.147. Parameters summary Name Type Direction Summary async Boolean In Indicates if the export should be performed asynchronously. filter Boolean In Indicates if the results should be filtered according to the permissions of the user. storage_domain StorageDomain In 6.50.3. get GET Retrieves the description of the disk. Table 6.148. Parameters summary Name Type Direction Summary all_content Boolean In Indicates if all of the attributes of the disk should be included in the response. disk Disk Out The description of the disk. follow String In Indicates which inner links should be followed . 6.50.3.1. all_content Indicates if all of the attributes of the disk should be included in the response. By default the following disk attributes are excluded: vms For example, to retrieve the complete representation of disk '123': 6.50.3.2. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.50.4. move POST Moves a disk to another storage domain. 
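As with copy , the move action in the example that follows is a POST to the disk's move sub-resource; the disk identifier 123 below is illustrative:

POST /ovirt-engine/api/disks/123/move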
For example, to move the disk with identifier 123 to a storage domain with identifier 456 send the following request: With the following request body: <action> <storage_domain id="456"/> </action> If the disk profile or the quota used currently by the disk aren't defined for the new storage domain, then they can be explicitly specified. If they aren't then the first available disk profile and the default quota are used. For example, to explicitly use disk profile 987 and quota 753 send a request body like this: <action> <storage_domain id="456"/> <disk_profile id="987"/> <quota id="753"/> </action> Table 6.149. Parameters summary Name Type Direction Summary async Boolean In Indicates if the move should be performed asynchronously. disk_profile DiskProfile In Disk profile for the disk in the new storage domain. filter Boolean In Indicates if the results should be filtered according to the permissions of the user. quota Quota In Quota for the disk in the new storage domain. storage_domain StorageDomain In 6.50.4.1. disk_profile Disk profile for the disk in the new storage domain. Disk profiles are defined for storage domains, so the old disk profile will not exist in the new storage domain. If this parameter is not used, the first disk profile from the new storage domain to which the user has permissions will be assigned to the disk. 6.50.4.2. quota Quota for the disk in the new storage domain. This optional parameter can be used to specify new quota for the disk, because the current quota may not be defined for the new storage domain. If this parameter is not used and the old quota is not defined for the new storage domain, the default (unlimited) quota will be assigned to the disk. 6.50.5. reduce POST Reduces the size of the disk image. Invokes reduce on the logical volume (i.e. this is only applicable for block storage domains). This is applicable for floating disks and disks attached to non-running virtual machines. There is no need to specify the size as the optimal size is calculated automatically. Table 6.150. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.50.6. refreshlun POST Refreshes a direct LUN disk with up-to-date information from the storage. Refreshing a direct LUN disk is useful when: The LUN was added using the API without the host parameter, and therefore does not contain any information from the storage (see DisksService::add ). New information about the LUN is available on the storage and you want to update the LUN with it. To refresh direct LUN disk 123 using host 456 , send the following request: With the following request body: <action> <host id='456'/> </action> Table 6.151. Parameters summary Name Type Direction Summary host Host In The host that will be used to refresh the direct LUN disk. 6.50.7. remove DELETE Removes a disk. Table 6.152. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.50.8. sparsify POST Sparsify the disk. Sparsification frees space in the disk image that is not used by its filesystem. As a result, the image will occupy less space on the storage. Currently sparsification works only on disks without snapshots. Disks having derived disks are also not allowed. 6.50.9. update PUT This operation updates the disk with the appropriate parameters. The only field that can be updated is qcow_version . 
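A sketch of the update request described in the next paragraph, assuming a disk with the illustrative identifier 123 ; the new qcow_version is sent in the <disk> body shown below it:

PUT /ovirt-engine/api/disks/123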
For example, update disk can be facilitated using the following request: With a request body like this: <disk> <qcow_version>qcow2_v3</qcow_version> </disk> Since the backend operation is asynchronous the disk element which will be returned to the user might not be synced with the changed properties. Table 6.153. Parameters summary Name Type Direction Summary disk Disk In/Out The update to apply to the disk. 6.51. DiskAttachment This service manages the attachment of a disk to a virtual machine. Table 6.154. Methods summary Name Summary get Returns the details of the attachment, including the bootable flag and link to the disk. remove Removes the disk attachment. update Update the disk attachment and the disk properties within it. 6.51.1. get GET Returns the details of the attachment, including the bootable flag and link to the disk. An example of getting a disk attachment: <disk_attachment href="/ovirt-engine/api/vms/123/diskattachments/456" id="456"> <active>true</active> <bootable>true</bootable> <interface>virtio</interface> <disk href="/ovirt-engine/api/disks/456" id="456"/> <vm href="/ovirt-engine/api/vms/123" id="123"/> </disk_attachment> Table 6.155. Parameters summary Name Type Direction Summary attachment DiskAttachment Out follow String In Indicates which inner links should be followed . 6.51.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.51.2. remove DELETE Removes the disk attachment. This will only detach the disk from the virtual machine, but won't remove it from the system, unless the detach_only parameter is false . An example of removing a disk attachment: Table 6.156. Parameters summary Name Type Direction Summary detach_only Boolean In Indicates if the disk should only be detached from the virtual machine, but not removed from the system. 6.51.2.1. detach_only Indicates if the disk should only be detached from the virtual machine, but not removed from the system. The default value is true , which won't remove the disk from the system. 6.51.3. update PUT Update the disk attachment and the disk properties within it. Table 6.157. Parameters summary Name Type Direction Summary disk_attachment DiskAttachment In/Out 6.52. DiskAttachments This service manages the set of disks attached to a virtual machine. Each attached disk is represented by a DiskAttachment , containing the bootable flag, the disk interface and the reference to the disk. Table 6.158. Methods summary Name Summary add Adds a new disk attachment to the virtual machine. list List the disk that are attached to the virtual machine. 6.52.1. add POST Adds a new disk attachment to the virtual machine. The attachment parameter can contain just a reference, if the disk already exists: <disk_attachment> <bootable>true</bootable> <pass_discard>true</pass_discard> <interface>ide</interface> <active>true</active> <disk id="123"/> </disk_attachment> Or it can contain the complete representation of the disk, if the disk doesn't exist yet: <disk_attachment> <bootable>true</bootable> <pass_discard>true</pass_discard> <interface>ide</interface> <active>true</active> <disk> <name>mydisk</name> <provisioned_size>1024</provisioned_size> ... </disk> </disk_attachment> In this case the disk will be created and then attached to the virtual machine. 
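Both forms of the attachment body above are POSTed to the virtual machine's diskattachments collection. A sketch, using the illustrative virtual machine identifier 345 referred to in the next sentence:

POST /ovirt-engine/api/vms/345/diskattachments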
In both cases, for a virtual machine with an id 345 , the request is sent to the diskattachments URL shown in the sketch above. Important The server accepts requests that don't contain the active attribute, but the effect is undefined. In some cases the disk will be automatically activated and in other cases it won't. To avoid issues it is strongly recommended to always include the active attribute with the desired value. Table 6.159. Parameters summary Name Type Direction Summary attachment DiskAttachment In/Out The disk attachment to add to the virtual machine. 6.52.2. list GET Lists the disks that are attached to the virtual machine. The order of the returned list of disk attachments isn't guaranteed. Table 6.160. Parameters summary Name Type Direction Summary attachments DiskAttachment[] Out A list of disk attachments that are attached to the virtual machine. follow String In Indicates which inner links should be followed . 6.52.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.53. DiskProfile Table 6.161. Methods summary Name Summary get remove update Update the specified disk profile in the system. 6.53.1. get GET Table 6.162. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . profile DiskProfile Out 6.53.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.53.2. remove DELETE Table 6.163. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.53.3. update PUT Update the specified disk profile in the system. Table 6.164. Parameters summary Name Type Direction Summary async Boolean In Indicates if the update should be performed asynchronously. profile DiskProfile In/Out 6.54. DiskProfiles Table 6.165. Methods summary Name Summary add Add a new disk profile to the system. list Returns the list of disk profiles of the system. 6.54.1. add POST Add a new disk profile to the system. Table 6.166. Parameters summary Name Type Direction Summary profile DiskProfile In/Out 6.54.2. list GET Returns the list of disk profiles of the system. The order of the returned list of disk profiles isn't guaranteed. Table 6.167. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of profiles to return. profile DiskProfile[] Out 6.54.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.54.2.2. max Sets the maximum number of profiles to return. If not specified, all the profiles are returned. 6.55. DiskSnapshot Table 6.168. Methods summary Name Summary get remove 6.55.1. get GET Table 6.169. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . snapshot DiskSnapshot Out 6.55.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.55.2. remove DELETE Table 6.170. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.56. DiskSnapshots Manages the collection of disk snapshots available in a storage domain. Table 6.171.
Methods summary Name Summary list Returns the list of disk snapshots of the storage domain. 6.56.1. list GET Returns the list of disk snapshots of the storage domain. The order of the returned list of disk snapshots isn't guaranteed. Table 6.172. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of snapshots to return. snapshots DiskSnapshot[] Out 6.56.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.56.1.2. max Sets the maximum number of snapshots to return. If not specified all the snapshots are returned. 6.57. Disks Manages the collection of disks available in the system. Table 6.173. Methods summary Name Summary add Adds a new floating disk. list Get list of disks. 6.57.1. add POST Adds a new floating disk. There are three types of disks that can be added - disk image, direct LUN and Cinder disk. Adding a new image disk: When creating a new floating image Disk , the API requires the storage_domain , provisioned_size and format attributes. Note that block storage domains (i.e., storage domains with the storage type of iSCSI or FCP) don't support the combination of the raw format with sparse=true , so sparse=false must be stated explicitly. To create a new floating image disk with specified provisioned_size , format and name on a storage domain with an id 123 , send a request as follows: With a request body as follows: <disk> <storage_domains> <storage_domain id="123"/> </storage_domains> <name>mydisk</name> <provisioned_size>1048576</provisioned_size> <format>cow</format> </disk> Adding a new direct LUN disk: When adding a new floating direct LUN via the API, there are two flavors that can be used: With a host element - in this case, the host is used for sanity checks (e.g., that the LUN is visible) and to retrieve basic information about the LUN (e.g., size and serial). Without a host element - in this case, the operation is a database-only operation, and the storage is never accessed. To create a new floating direct LUN disk with a host element with an id 123 , specified alias , type and logical_unit with an id 456 (that has the attributes address , port and target ), send a request as follows: With a request body as follows: <disk> <alias>mylun</alias> <lun_storage> <host id="123"/> <type>iscsi</type> <logical_units> <logical_unit id="456"> <address>10.35.10.20</address> <port>3260</port> <target>iqn.2017-01.com.myhost:444</target> </logical_unit> </logical_units> </lun_storage> </disk> To create a new floating direct LUN disk without using a host, remove the host element. Adding a new Cinder disk: To create a new floating Cinder disk, send a request as follows: With a request body as follows: <disk> <openstack_volume_type> <name>myceph</name> </openstack_volume_type> <storage_domains> <storage_domain> <name>cinderDomain</name> </storage_domain> </storage_domains> <provisioned_size>1073741824</provisioned_size> <interface>virtio</interface> <format>raw</format> </disk> Adding a floating disks in order to upload disk snapshots: Since version 4.2 of the engine it is possible to upload disks with snapshots. This request should be used to create the base image of the images chain (The consecutive disk snapshots (images), should be created using disk-attachments element when creating a snapshot). 
The disk has to be created with the same disk identifier and image identifier of the uploaded image. I.e. the identifiers should be saved as part of the backup process. The image identifier can be also fetched using the qemu-img info command. For example, if the disk image is stored into a file named b7a4c6c5-443b-47c5-967f-6abc79675e8b/myimage.img : USD qemu-img info b7a4c6c5-443b-47c5-967f-6abc79675e8b/myimage.img image: b548366b-fb51-4b41-97be-733c887fe305 file format: qcow2 virtual size: 1.0G (1073741824 bytes) disk size: 196K cluster_size: 65536 backing file: ad58716a-1fe9-481f-815e-664de1df04eb backing file format: raw To create a disk with with the disk identifier and image identifier obtained with the qemu-img info command shown above, send a request like this: With a request body as follows: <disk id="b7a4c6c5-443b-47c5-967f-6abc79675e8b"> <image_id>b548366b-fb51-4b41-97be-733c887fe305</image_id> <storage_domains> <storage_domain id="123"/> </storage_domains> <name>mydisk</name> <provisioned_size>1048576</provisioned_size> <format>cow</format> </disk> Table 6.174. Parameters summary Name Type Direction Summary disk Disk In/Out The disk. 6.57.2. list GET Get list of disks. You will get a XML response which will look like this one: <disks> <disk id="123"> <actions>...</actions> <name>MyDisk</name> <description>MyDisk description</description> <link href="/ovirt-engine/api/disks/123/permissions" rel="permissions"/> <link href="/ovirt-engine/api/disks/123/statistics" rel="statistics"/> <actual_size>5345845248</actual_size> <alias>MyDisk alias</alias> ... <status>ok</status> <storage_type>image</storage_type> <wipe_after_delete>false</wipe_after_delete> <disk_profile id="123"/> <quota id="123"/> <storage_domains>...</storage_domains> </disk> ... </disks> The order of the returned list of disks is guaranteed only if the sortby clause is included in the search parameter. Table 6.175. Parameters summary Name Type Direction Summary case_sensitive Boolean In Indicates if the search performed using the search parameter should be performed taking case into account. disks Disk[] Out List of retrieved disks. follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of disks to return. search String In A query string used to restrict the returned disks. 6.57.2.1. case_sensitive Indicates if the search performed using the search parameter should be performed taking case into account. The default value is true , which means that case is taken into account. If you want to search ignoring case set it to false . 6.57.2.2. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.57.2.3. max Sets the maximum number of disks to return. If not specified all the disks are returned. 6.58. Domain A service to view details of an authentication domain in the system. Table 6.176. Methods summary Name Summary get Gets the authentication domain information. 6.58.1. get GET Gets the authentication domain information. 
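A sketch of the request, using the illustrative domain identifier 5678 that appears in the response below:

GET /ovirt-engine/api/domains/5678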
Usage: Will return the domain information: <domain href="/ovirt-engine/api/domains/5678" id="5678"> <name>internal-authz</name> <link href="/ovirt-engine/api/domains/5678/users" rel="users"/> <link href="/ovirt-engine/api/domains/5678/groups" rel="groups"/> <link href="/ovirt-engine/api/domains/5678/users?search={query}" rel="users/search"/> <link href="/ovirt-engine/api/domains/5678/groups?search={query}" rel="groups/search"/> </domain> Table 6.177. Parameters summary Name Type Direction Summary domain Domain Out The authentication domain. follow String In Indicates which inner links should be followed . 6.58.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.59. DomainGroup Table 6.178. Methods summary Name Summary get 6.59.1. get GET Table 6.179. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . get Group Out 6.59.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.60. DomainGroups Table 6.180. Methods summary Name Summary list Returns the list of groups. 6.60.1. list GET Returns the list of groups. The order of the returned list of groups isn't guaranteed. Table 6.181. Parameters summary Name Type Direction Summary case_sensitive Boolean In Indicates if the search performed using the search parameter should be performed taking case into account. follow String In Indicates which inner links should be followed . groups Group[] Out max Integer In Sets the maximum number of groups to return. search String In A query string used to restrict the returned groups. 6.60.1.1. case_sensitive Indicates if the search performed using the search parameter should be performed taking case into account. The default value is true , which means that case is taken into account. If you want to search ignoring case set it to false . 6.60.1.2. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.60.1.3. max Sets the maximum number of groups to return. If not specified all the groups are returned. 6.61. DomainUser A service to view a domain user in the system. Table 6.182. Methods summary Name Summary get Gets the domain user information. 6.61.1. get GET Gets the domain user information. Usage: Will return the domain user information: <user href="/ovirt-engine/api/users/1234" id="1234"> <name>admin</name> <namespace>*</namespace> <principal>admin</principal> <user_name>admin@internal-authz</user_name> <domain href="/ovirt-engine/api/domains/5678" id="5678"> <name>internal-authz</name> </domain> <groups/> </user> Table 6.183. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . user User Out The domain user. 6.61.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.62. DomainUserGroups A service that shows a user's group membership in the AAA extension. Table 6.184. Methods summary Name Summary list Returns the list of groups that the user is a member of. 6.62.1. list GET Returns the list of groups that the user is a member of. Table 6.185. 
Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . groups Group[] Out The list of groups that the user is a member of. 6.62.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.63. DomainUsers A service to list all domain users in the system. Table 6.186. Methods summary Name Summary list List all the users in the domain. 6.63.1. list GET List all the users in the domain. Usage: Will return the list of users in the domain: <users> <user href="/ovirt-engine/api/domains/5678/users/1234" id="1234"> <name>admin</name> <namespace>*</namespace> <principal>admin</principal> <user_name>admin@internal-authz</user_name> <domain href="/ovirt-engine/api/domains/5678" id="5678"> <name>internal-authz</name> </domain> <groups/> </user> </users> The order of the returned list of users isn't guaranteed. Table 6.187. Parameters summary Name Type Direction Summary case_sensitive Boolean In Indicates if the search performed using the search parameter should be performed taking case into account. follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of users to return. search String In A query string used to restrict the returned users. users User[] Out The list of users in the domain. 6.63.1.1. case_sensitive Indicates if the search performed using the search parameter should be performed taking case into account. The default value is true , which means that case is taken into account. If you want to search ignoring case set it to false . 6.63.1.2. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.63.1.3. max Sets the maximum number of users to return. If not specified all the users are returned. 6.64. Domains A service to list all authentication domains in the system. Table 6.188. Methods summary Name Summary list List all the authentication domains in the system. 6.64.1. list GET List all the authentication domains in the system. Usage: Will return the list of domains: <domains> <domain href="/ovirt-engine/api/domains/5678" id="5678"> <name>internal-authz</name> <link href="/ovirt-engine/api/domains/5678/users" rel="users"/> <link href="/ovirt-engine/api/domains/5678/groups" rel="groups"/> <link href="/ovirt-engine/api/domains/5678/users?search={query}" rel="users/search"/> <link href="/ovirt-engine/api/domains/5678/groups?search={query}" rel="groups/search"/> </domain> </domains> The order of the returned list of domains isn't guaranteed. Table 6.189. Parameters summary Name Type Direction Summary domains Domain[] Out The list of domains. follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of domains to return. 6.64.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.64.1.2. max Sets the maximum number of domains to return. If not specified all the domains are returned. 6.65. EngineKatelloErrata A service to manage Katello errata assigned to the engine. The information is retrieved from Katello. Table 6.190. Methods summary Name Summary list Retrieves the representation of the Katello errata. 6.65.1. list GET Retrieves the representation of the Katello errata. 
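A sketch of the request; the errata assigned to the engine are exposed as a top-level collection, as the href attributes in the response below indicate:

GET /ovirt-engine/api/katelloerrata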
You will receive response in XML like this one: <katello_errata> <katello_erratum href="/ovirt-engine/api/katelloerrata/123" id="123"> <name>RHBA-2013:XYZ</name> <description>The description of the erratum</description> <title>some bug fix update</title> <type>bugfix</type> <issued>2013-11-20T02:00:00.000+02:00</issued> <solution>Few guidelines regarding the solution</solution> <summary>Updated packages that fix one bug are now available for XYZ</summary> <packages> <package> <name>libipa_hbac-1.9.2-82.11.el6_4.i686</name> </package> ... </packages> </katello_erratum> ... </katello_errata> The order of the returned list of erratum isn't guaranteed. Table 6.191. Parameters summary Name Type Direction Summary errata KatelloErratum[] Out A representation of Katello errata. follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of errata to return. 6.65.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.65.1.2. max Sets the maximum number of errata to return. If not specified all the errata are returned. 6.66. Event A service to manage an event in the system. Table 6.192. Methods summary Name Summary get Get an event. remove Removes an event from internal audit log. 6.66.1. get GET Get an event. An example of getting an event: <event href="/ovirt-engine/api/events/123" id="123"> <description>Host example.com was added by admin@internal-authz.</description> <code>42</code> <correlation_id>135</correlation_id> <custom_id>-1</custom_id> <flood_rate>30</flood_rate> <origin>oVirt</origin> <severity>normal</severity> <time>2016-12-11T11:13:44.654+02:00</time> <cluster href="/ovirt-engine/api/clusters/456" id="456"/> <host href="/ovirt-engine/api/hosts/789" id="789"/> <user href="/ovirt-engine/api/users/987" id="987"/> </event> Note that the number of fields changes according to the information that resides on the event. For example, for storage domain related events you will get the storage domain reference, as well as the reference for the data center this storage domain resides in. Table 6.193. Parameters summary Name Type Direction Summary event Event Out follow String In Indicates which inner links should be followed . 6.66.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.66.2. remove DELETE Removes an event from internal audit log. An event can be removed by sending following request Table 6.194. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.67. Events A service to manage events in the system. Table 6.195. Methods summary Name Summary add Adds an external event to the internal audit log. list Get list of events. undelete 6.67.1. add POST Adds an external event to the internal audit log. This is intended for integration with external systems that detect or produce events relevant for the administrator of the system. For example, an external monitoring tool may be able to detect that a file system is full inside the guest operating system of a virtual machine. This event can be added to the internal audit log sending a request like this: Events can also be linked to specific objects. 
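A sketch of such a request, with purely illustrative values; the second body shows the optional vm link that the next paragraph describes:

POST /ovirt-engine/api/events

<event>
  <description>File system /home is full</description>
  <severity>alert</severity>
  <origin>mymonitor</origin>
  <custom_id>1467879754</custom_id>
</event>

Linked to a specific virtual machine (the identifier 123 is illustrative):

<event>
  <description>File system /home is full</description>
  <severity>alert</severity>
  <origin>mymonitor</origin>
  <custom_id>1467879755</custom_id>
  <vm id="123"/>
</event>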
For example, the above event could be linked to the specific virtual machine where it happened by including a vm link, as in the second body of the sketch above. Note When using links, like the vm in the example, only the id attribute is accepted. The name attribute, if provided, is simply ignored. Table 6.196. Parameters summary Name Type Direction Summary event Event In/Out 6.67.2. list GET Get list of events. A GET request to the events collection returns a response like the following: <events> <event href="/ovirt-engine/api/events/2" id="2"> <description>User admin@internal-authz logged out.</description> <code>31</code> <correlation_id>1e892ea9</correlation_id> <custom_id>-1</custom_id> <flood_rate>30</flood_rate> <origin>oVirt</origin> <severity>normal</severity> <time>2016-09-14T12:14:34.541+02:00</time> <user href="/ovirt-engine/api/users/57d91d48-00da-0137-0138-000000000244" id="57d91d48-00da-0137-0138-000000000244"/> </event> <event href="/ovirt-engine/api/events/1" id="1"> <description>User admin logged in.</description> <code>30</code> <correlation_id>1fbd81f4</correlation_id> <custom_id>-1</custom_id> <flood_rate>30</flood_rate> <origin>oVirt</origin> <severity>normal</severity> <time>2016-09-14T11:54:35.229+02:00</time> <user href="/ovirt-engine/api/users/57d91d48-00da-0137-0138-000000000244" id="57d91d48-00da-0137-0138-000000000244"/> </event> </events> The following events occur: id="1" - The API logs in the admin user account. id="2" - The API logs out of the admin user account. The order of the returned list of events is always guaranteed. If the sortby clause is included in the search parameter, then the events will be ordered according to that clause. If the sortby clause isn't included, then the events will be sorted by the numeric value of the id attribute, starting with the highest value. This, combined with the max parameter, simplifies obtaining the most recent event (for example, a request with max=1 returns only the most recent event). Table 6.197. Parameters summary Name Type Direction Summary case_sensitive Boolean In Indicates if the search performed using the search parameter should be performed taking case into account. events Event[] Out follow String In Indicates which inner links should be followed . from Integer In Indicates the event index after which events should be returned. max Integer In Sets the maximum number of events to return. search String In The events service provides search queries similar to other resource services. 6.67.2.1. case_sensitive Indicates if the search performed using the search parameter should be performed taking case into account. The default value is true , which means that case is taken into account. If you want to search ignoring case, set it to false . 6.67.2.2. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.67.2.3. from Indicates the event index after which events should be returned. The indexes of events are strictly increasing, so when this parameter is used only the events with greater indexes will be returned. For example, a request with from=123 will return only the events with indexes greater than 123 . This parameter is optional, and if not specified then the first event returned will be the most recently generated. 6.67.2.4. max Sets the maximum number of events to return. If not specified all the events are returned. 6.67.2.5. search The events service provides search queries similar to other resource services. We can search by providing a specific severity.
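A sketch of such a search request; the query string is the URL-encoded form of severity=normal :

GET /ovirt-engine/api/events?search=severity%3Dnormal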
A request like the sketch above returns a list of events whose severity is equal to normal : <events> <event href="/ovirt-engine/api/events/2" id="2"> <description>User admin@internal-authz logged out.</description> <code>31</code> <correlation_id>1fbd81f4</correlation_id> <custom_id>-1</custom_id> <flood_rate>30</flood_rate> <origin>oVirt</origin> <severity>normal</severity> <time>2016-09-14T11:54:35.229+02:00</time> <user href="/ovirt-engine/api/users/57d91d48-00da-0137-0138-000000000244" id="57d91d48-00da-0137-0138-000000000244"/> </event> <event href="/ovirt-engine/api/events/1" id="1"> <description>Affinity Rules Enforcement Manager started.</description> <code>10780</code> <custom_id>-1</custom_id> <flood_rate>30</flood_rate> <origin>oVirt</origin> <severity>normal</severity> <time>2016-09-14T11:52:18.861+02:00</time> </event> </events> A virtualization environment generates a large number of events over time. However, the API only displays a default number of events for one search query. To display more than the default, the API separates results into pages with the page command in a search query. A search query that combines a sortby clause with a page value tells the API to paginate the results (the query must be URL-encoded when it is sent). Increase the page value to view the next page of results. 6.67.3. undelete POST Table 6.198. Parameters summary Name Type Direction Summary async Boolean In Indicates if the un-delete should be performed asynchronously. 6.68. ExternalComputeResource Manages a single external compute resource. Compute resource is a term used by external host providers. The external provider also needs to know where the provisioned host needs to register. The login details of the engine are saved as a compute resource on the external provider side. Table 6.199. Methods summary Name Summary get Retrieves external compute resource details. 6.68.1. get GET Retrieves external compute resource details. For example, to get the details of compute resource 234 of provider 123 , send a GET request for that compute resource. It will return a response like this: <external_compute_resource href="/ovirt-engine/api/externalhostproviders/123/computeresources/234" id="234"> <name>hostname</name> <provider>oVirt</provider> <url>https://hostname/api</url> <user>admin@internal</user> <external_host_provider href="/ovirt-engine/api/externalhostproviders/123" id="123"/> </external_compute_resource> Table 6.200. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . resource ExternalComputeResource Out External compute resource information. 6.68.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.69. ExternalComputeResources Manages a collection of external compute resources. Compute resource is a term used by external host providers. The external provider also needs to know where the provisioned host needs to register. The login details of the engine are saved as a compute resource on the external provider side. Table 6.201. Methods summary Name Summary list Retrieves a list of external compute resources. 6.69.1. list GET Retrieves a list of external compute resources.
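A sketch of the request, using the illustrative provider identifier 123 ; the path follows the href attributes shown in the response below:

GET /ovirt-engine/api/externalhostproviders/123/computeresources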
For example, to retrieve the compute resources of external host provider 123 , send a request like this: It will return a response like this: <external_compute_resources> <external_compute_resource href="/ovirt-engine/api/externalhostproviders/123/computeresources/234" id="234"> <name>hostname</name> <provider>oVirt</provider> <url>https://address/api</url> <user>admin@internal</user> <external_host_provider href="/ovirt-engine/api/externalhostproviders/123" id="123"/> </external_compute_resource> ... </external_compute_resources> The order of the returned list of compute resources isn't guaranteed. Table 6.202. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of resources to return. resources ExternalComputeResource[] Out List of external computer resources. 6.69.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.69.1.2. max Sets the maximum number of resources to return. If not specified all the resources are returned. 6.70. ExternalDiscoveredHost This service manages a single discovered host. Table 6.203. Methods summary Name Summary get Get discovered host info. 6.70.1. get GET Get discovered host info. Retrieves information about an host that is managed in external provider management system, such as Foreman. The information includes hostname, address, subnet, base image and more. For example, to get the details of host 234 from provider 123 , send a request like this: The result will be like this: <external_discovered_host href="/ovirt-engine/api/externalhostproviders/123/discoveredhosts/234" id="234"> <name>mac001a4ad04040</name> <ip>10.34.67.43</ip> <last_report>2017-04-24 11:05:41 UTC</last_report> <mac>00:1a:4a:d0:40:40</mac> <subnet_name>sat0</subnet_name> <external_host_provider href="/ovirt-engine/api/externalhostproviders/123" id="123"/> </external_discovered_host> Table 6.204. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . host ExternalDiscoveredHost Out Host's hardware and config information. 6.70.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.71. ExternalDiscoveredHosts This service manages external discovered hosts. Table 6.205. Methods summary Name Summary list Get list of discovered hosts' information. 6.71.1. list GET Get list of discovered hosts' information. Discovered hosts are fetched from third-party providers such as Foreman. 
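A sketch of the request, using the illustrative provider identifier 123 :

GET /ovirt-engine/api/externalhostproviders/123/discoveredhosts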
To list all discovered hosts for provider 123 send the following: <external_discovered_hosts> <external_discovered_host href="/ovirt-engine/api/externalhostproviders/123/discoveredhosts/456" id="456"> <name>mac001a4ad04031</name> <ip>10.34.67.42</ip> <last_report>2017-04-24 11:05:41 UTC</last_report> <mac>00:1a:4a:d0:40:31</mac> <subnet_name>sat0</subnet_name> <external_host_provider href="/ovirt-engine/api/externalhostproviders/123" id="123"/> </external_discovered_host> <external_discovered_host href="/ovirt-engine/api/externalhostproviders/123/discoveredhosts/789" id="789"> <name>mac001a4ad04040</name> <ip>10.34.67.43</ip> <last_report>2017-04-24 11:05:41 UTC</last_report> <mac>00:1a:4a:d0:40:40</mac> <subnet_name>sat0</subnet_name> <external_host_provider href="/ovirt-engine/api/externalhostproviders/123" id="123"/> </external_discovered_host> ... </external_discovered_hosts> The order of the returned list of hosts isn't guaranteed. Table 6.206. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . hosts ExternalDiscoveredHost[] Out List of discovered hosts max Integer In Sets the maximum number of hosts to return. 6.71.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.71.1.2. max Sets the maximum number of hosts to return. If not specified all the hosts are returned. 6.72. ExternalHost Table 6.207. Methods summary Name Summary get 6.72.1. get GET Table 6.208. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . host ExternalHost Out 6.72.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.73. ExternalHostGroup This service manages a single host group information. Host group is a term of host provider - the host group includes provision details that are applied to new discovered host. Information such as subnet, operating system, domain, etc. Table 6.209. Methods summary Name Summary get Get host group information. 6.73.1. get GET Get host group information. For example, to get the details of hostgroup 234 of provider 123 , send a request like this: It will return a response like this: <external_host_group href="/ovirt-engine/api/externalhostproviders/123/hostgroups/234" id="234"> <name>rhel7</name> <architecture_name>x86_64</architecture_name> <domain_name>s.com</domain_name> <operating_system_name>RedHat 7.3</operating_system_name> <subnet_name>sat0</subnet_name> <external_host_provider href="/ovirt-engine/api/externalhostproviders/123" id="123"/> </external_host_group> Table 6.210. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . group ExternalHostGroup Out Host group information. 6.73.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.74. ExternalHostGroups This service manages hostgroups. Table 6.211. Methods summary Name Summary list Get host groups list from external host provider. 6.74.1. list GET Get host groups list from external host provider. Host group is a term of host providers - the host group includes provision details. This API returns all possible hostgroups exposed by the external provider. 
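A sketch of the request, using the illustrative provider identifier 123 :

GET /ovirt-engine/api/externalhostproviders/123/hostgroups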
For example, to get the details of all host groups of provider 123 , send a request like this: The response will be like this: <external_host_groups> <external_host_group href="/ovirt-engine/api/externalhostproviders/123/hostgroups/234" id="234"> <name>rhel7</name> <architecture_name>x86_64</architecture_name> <domain_name>example.com</domain_name> <operating_system_name>RedHat 7.3</operating_system_name> <subnet_name>sat0</subnet_name> <external_host_provider href="/ovirt-engine/api/externalhostproviders/123" id="123"/> </external_host_group> ... </external_host_groups> The order of the returned list of host groups isn't guaranteed. Table 6.212. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . groups ExternalHostGroup[] Out List of all hostgroups available for external host provider max Integer In Sets the maximum number of groups to return. 6.74.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.74.1.2. max Sets the maximum number of groups to return. If not specified all the groups are returned. 6.75. ExternalHostProvider Represents an external host provider, such as Foreman or Satellite. See https://www.theforeman.org/ for more details on Foreman. See https://access.redhat.com/products/red-hat-satellite for more details on Red Hat Satellite. Table 6.213. Methods summary Name Summary get Get external host provider information Host provider, Foreman or Satellite, can be set as an external provider in ovirt. importcertificates Import the SSL certificates of the external host provider. remove testconnectivity In order to test connectivity for external provider we need to run following request where 123 is an id of a provider. update Update the specified external host provider in the system. 6.75.1. get GET Get external host provider information Host provider, Foreman or Satellite, can be set as an external provider in ovirt. To see details about specific host providers attached to ovirt use this API. For example, to get the details of host provider 123 , send a request like this: The response will be like this: <external_host_provider href="/ovirt-engine/api/externalhostproviders/123" id="123"> <name>mysatellite</name> <requires_authentication>true</requires_authentication> <url>https://mysatellite.example.com</url> <username>admin</username> </external_host_provider> Table 6.214. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . provider ExternalHostProvider Out 6.75.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.75.2. importcertificates POST Import the SSL certificates of the external host provider. Table 6.215. Parameters summary Name Type Direction Summary certificates Certificate[] In 6.75.3. remove DELETE Table 6.216. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.75.4. testconnectivity POST In order to test connectivity for external provider we need to run following request where 123 is an id of a provider. Table 6.217. Parameters summary Name Type Direction Summary async Boolean In Indicates if the test should be performed asynchronously. 6.75.5. update PUT Update the specified external host provider in the system. Table 6.218. 
Parameters summary Name Type Direction Summary async Boolean In Indicates if the update should be performed asynchronously. provider ExternalHostProvider In/Out 6.76. ExternalHostProviders Table 6.219. Methods summary Name Summary add Adds a new external host provider to the system. list Returns the list of external host providers. 6.76.1. add POST Adds a new external host provider to the system. Table 6.220. Parameters summary Name Type Direction Summary provider ExternalHostProvider In/Out 6.76.2. list GET Returns the list of external host providers. The order of the returned list of host providers is not guaranteed. Table 6.221. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of providers to return. providers ExternalHostProvider[] Out search String In A query string used to restrict the returned external host providers. 6.76.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.76.2.2. max Sets the maximum number of providers to return. If not specified, all the providers are returned. 6.77. ExternalHosts Table 6.222. Methods summary Name Summary list Return the list of external hosts. 6.77.1. list GET Return the list of external hosts. The order of the returned list of hosts isn't guaranteed. Table 6.223. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . hosts ExternalHost[] Out max Integer In Sets the maximum number of hosts to return. 6.77.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.77.1.2. max Sets the maximum number of hosts to return. If not specified all the hosts are returned. 6.78. ExternalNetworkProviderConfiguration Describes how an external network provider is provisioned by the system on the host. Table 6.224. Methods summary Name Summary get Returns the information about an external network provider on the host. 6.78.1. get GET Returns the information about an external network provider on the host. Table 6.225. Parameters summary Name Type Direction Summary configuration ExternalNetworkProviderConfiguration Out follow String In Indicates which inner links should be followed . 6.78.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.79. ExternalNetworkProviderConfigurations A service to list all external network providers provisioned by the system on the host. Table 6.226. Methods summary Name Summary list Returns the list of all external network providers on the host. 6.79.1. list GET Returns the list of all external network providers on the host. The order of the returned list of networks is not guaranteed. Table 6.227. Parameters summary Name Type Direction Summary configurations ExternalNetworkProviderConfiguration[] Out follow String In Indicates which inner links should be followed . 6.79.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.80. ExternalProvider Provides capability to manage external providers. Table 6.228. Methods summary Name Summary importcertificates Import the SSL certificates of the external host provider. 
testconnectivity In order to test connectivity for external provider we need to run following request where 123 is an id of a provider. 6.80.1. importcertificates POST Import the SSL certificates of the external host provider. Table 6.229. Parameters summary Name Type Direction Summary certificates Certificate[] In 6.80.2. testconnectivity POST In order to test connectivity for external provider we need to run following request where 123 is an id of a provider. Table 6.230. Parameters summary Name Type Direction Summary async Boolean In Indicates if the test should be performed asynchronously. 6.81. ExternalProviderCertificate A service to view specific certificate for external provider. Table 6.231. Methods summary Name Summary get Get specific certificate. 6.81.1. get GET Get specific certificate. And here is sample response: <certificate id="0"> <organization>provider.example.com</organization> <subject>CN=provider.example.com</subject> <content>...</content> </certificate> Table 6.232. Parameters summary Name Type Direction Summary certificate Certificate Out The details of the certificate. follow String In Indicates which inner links should be followed . 6.81.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.82. ExternalProviderCertificates A service to view certificates for external provider. Table 6.233. Methods summary Name Summary list Returns the chain of certificates presented by the external provider. 6.82.1. list GET Returns the chain of certificates presented by the external provider. And here is sample response: <certificates> <certificate id="789">...</certificate> ... </certificates> The order of the returned certificates is always guaranteed to be the sign order: the first is the certificate of the server itself, the second the certificate of the CA that signs the first, so on. Table 6.234. Parameters summary Name Type Direction Summary certificates Certificate[] Out List containing certificate details. follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of certificates to return. 6.82.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.82.1.2. max Sets the maximum number of certificates to return. If not specified all the certificates are returned. 6.83. ExternalVmImports Provides capability to import external virtual machines. Table 6.235. Methods summary Name Summary add This operation is used to import a virtual machine from external hypervisor, such as KVM, XEN or VMware. 6.83.1. add POST This operation is used to import a virtual machine from external hypervisor, such as KVM, XEN or VMware. For example import of a virtual machine from VMware can be facilitated using the following request: With request body of type ExternalVmImport , for example: <external_vm_import> <vm> <name>my_vm</name> </vm> <cluster id="360014051136c20574f743bdbd28177fd" /> <storage_domain id="8bb5ade5-e988-4000-8b93-dbfc6717fe50" /> <name>vm_name_as_is_in_vmware</name> <sparse>true</sparse> <username>vmware_user</username> <password>123456</password> <provider>VMWARE</provider> <url>vpx://wmware_user@vcenter-host/DataCenter/Cluster/esxi-host?no_verify=1</url> <drivers_iso id="virtio-win-1.6.7.iso" /> </external_vm_import> Table 6.236. 
Parameters summary Name Type Direction Summary import ExternalVmImport In/Out 6.84. FenceAgent A service to manage fence agent for a specific host. Table 6.237. Methods summary Name Summary get Gets details of this fence agent. remove Removes a fence agent for a specific host. update Update a fencing-agent. 6.84.1. get GET Gets details of this fence agent. And here is sample response: <agent id="0"> <type>apc</type> <order>1</order> <ip>192.168.1.101</ip> <user>user</user> <password>xxx</password> <port>9</port> <options>name1=value1, name2=value2</options> </agent> Table 6.238. Parameters summary Name Type Direction Summary agent Agent Out Fence agent details. follow String In Indicates which inner links should be followed . 6.84.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.84.2. remove DELETE Removes a fence agent for a specific host. Table 6.239. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.84.3. update PUT Update a fencing-agent. Table 6.240. Parameters summary Name Type Direction Summary agent Agent In/Out Fence agent details. async Boolean In Indicates if the update should be performed asynchronously. 6.85. FenceAgents A service to manage fence agents for a specific host. Table 6.241. Methods summary Name Summary add Add a new fencing-agent to the host. list Returns the list of fencing agents configured for the host. 6.85.1. add POST Add a new fencing-agent to the host. Table 6.242. Parameters summary Name Type Direction Summary agent Agent In/Out 6.85.2. list GET Returns the list of fencing agents configured for the host. And here is sample response: <agents> <agent id="0"> <type>apc</type> <order>1</order> <ip>192.168.1.101</ip> <user>user</user> <password>xxx</password> <port>9</port> <options>name1=value1, name2=value2</options> </agent> </agents> The order of the returned list of fencing agents isn't guaranteed. Table 6.243. Parameters summary Name Type Direction Summary agents Agent[] Out List of fence agent details. follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of agents to return. 6.85.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.85.2.2. max Sets the maximum number of agents to return. If not specified all the agents are returned. 6.86. File Table 6.244. Methods summary Name Summary get 6.86.1. get GET Table 6.245. Parameters summary Name Type Direction Summary file File Out follow String In Indicates which inner links should be followed . 6.86.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.87. Files Provides a way for clients to list available files. This service is specifically targeted to ISO storage domains, which contain ISO images and virtual floppy disks (VFDs) that an administrator uploads. The addition of a CD-ROM device to a virtual machine requires an ISO image from the files of an ISO storage domain. Table 6.246. Methods summary Name Summary list Returns the list of ISO images and virtual floppy disks available in the storage domain. 6.87.1. list GET Returns the list of ISO images and virtual floppy disks available in the storage domain. 
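A sketch of the list request, assuming an ISO storage domain with the illustrative identifier 123 ; refresh and max are optional query parameters described below:

GET /ovirt-engine/api/storagedomains/123/files

GET /ovirt-engine/api/storagedomains/123/files?refresh=false&max=100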
The order of the returned list is not guaranteed. If the refresh parameter is false , the returned list may not reflect recent changes to the storage domain; for example, it may not contain a new ISO file that was recently added. This is because the server caches the list of files to improve performance. To get the very latest results, set the refresh parameter to true . The default value of the refresh parameter is true , but it can be changed using the configuration value ForceRefreshDomainFilesByDefault : Important Setting the value of the refresh parameter to true has an impact on the performance of the server. Use it only if necessary. Table 6.247. Parameters summary Name Type Direction Summary case_sensitive Boolean In Indicates if the search performed using the search parameter should take case into account. file File[] Out follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of files to return. refresh Boolean In Indicates whether the list of files should be refreshed from the storage domain, rather than showing cached results that are updated at certain intervals. search String In A query string used to restrict the returned files. 6.87.1.1. case_sensitive Indicates if the search performed using the search parameter should take case into account. The default value is true . 6.87.1.2. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.87.1.3. max Sets the maximum number of files to return. If not specified, all the files are returned. 6.88. Filter Table 6.248. Methods summary Name Summary get remove 6.88.1. get GET Table 6.249. Parameters summary Name Type Direction Summary filter Boolean In Indicates if the results should be filtered according to the permissions of the user. follow String In Indicates which inner links should be followed . result Filter Out 6.88.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.88.2. remove DELETE Table 6.250. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.89. Filters Manages the filters used by an scheduling policy. Table 6.251. Methods summary Name Summary add Add a filter to a specified user defined scheduling policy. list Returns the list of filters used by the scheduling policy. 6.89.1. add POST Add a filter to a specified user defined scheduling policy. Table 6.252. Parameters summary Name Type Direction Summary filter Filter In/Out 6.89.2. list GET Returns the list of filters used by the scheduling policy. The order of the returned list of filters isn't guaranteed. Table 6.253. Parameters summary Name Type Direction Summary filter Boolean In Indicates if the results should be filtered according to the permissions of the user. filters Filter[] Out follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of filters to return. 6.89.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.89.2.2. max Sets the maximum number of filters to return. If not specified all the filters are returned. 6.90. Follow 6.91. GlusterBrick This service manages a single gluster brick. Table 6.254. 
Methods summary Name Summary get Get details of a brick. remove Removes a brick. replace Replaces this brick with a new one. 6.91.1. get GET Get details of a brick. Retrieves status details of brick from underlying gluster volume with header All-Content set to true . This is the equivalent of running gluster volume status <volumename> <brickname> detail . For example, to get the details of brick 234 of gluster volume 123 , send a request like this: Which will return a response body like this: <brick id="234"> <name>host1:/rhgs/data/brick1</name> <brick_dir>/rhgs/data/brick1</brick_dir> <server_id>111</server_id> <status>up</status> <device>/dev/mapper/RHGS_vg1-lv_vmaddldisks</device> <fs_name>xfs</fs_name> <gluster_clients> <gluster_client> <bytes_read>2818417648</bytes_read> <bytes_written>1384694844</bytes_written> <client_port>1011</client_port> <host_name>client2</host_name> </gluster_client> </gluster_clients> <memory_pools> <memory_pool> <name>data-server:fd_t</name> <alloc_count>1626348</alloc_count> <cold_count>1020</cold_count> <hot_count>4</hot_count> <max_alloc>23</max_alloc> <max_stdalloc>0</max_stdalloc> <padded_size>140</padded_size> <pool_misses>0</pool_misses> </memory_pool> </memory_pools> <mnt_options>rw,seclabel,noatime,nodiratime,attr2,inode64,sunit=512,swidth=2048,noquota</mnt_options> <pid>25589</pid> <port>49155</port> </brick> Table 6.255. Parameters summary Name Type Direction Summary brick GlusterBrick Out follow String In Indicates which inner links should be followed . 6.91.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.91.2. remove DELETE Removes a brick. Removes a brick from the underlying gluster volume and deletes entries from database. This can be used only when removing a single brick without data migration. To remove multiple bricks and with data migration, use migrate instead. For example, to delete brick 234 from gluster volume 123 , send a request like this: Table 6.256. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.91.3. replace POST Replaces this brick with a new one. Important This operation has been deprecated since version 3.5 of the engine and will be removed in the future. Use add brick(s) and migrate brick(s) instead. Table 6.257. Parameters summary Name Type Direction Summary async Boolean In Indicates if the replacement should be performed asynchronously. force Boolean In 6.92. GlusterBricks This service manages the gluster bricks in a gluster volume Table 6.258. Methods summary Name Summary activate Activate the bricks post data migration of remove brick operation. add Adds a list of bricks to gluster volume. list Lists the bricks of a gluster volume. migrate Start migration of data prior to removing bricks. remove Removes bricks from gluster volume. stopmigrate Stops migration of data from bricks for a remove brick operation. 6.92.1. activate POST Activate the bricks post data migration of remove brick operation. Used to activate brick(s) once the data migration from bricks is complete but user no longer wishes to remove bricks. The bricks that were previously marked for removal will now be used as normal bricks. 
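For readers using the Python SDK rather than raw HTTP, a minimal sketch of the same activation follows. The service accessors and method names ( clusters_service , gluster_volumes_service , gluster_bricks_service , activate ) are assumed to follow the SDK's usual generated naming, and the connection details are placeholders, so treat this as an illustration rather than definitive client code.

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Connect to the engine; the URL, credentials and CA file are placeholders.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='...',
    ca_file='ca.pem',
)

# Locate the bricks service of gluster volume 123 in cluster 456.
cluster_service = connection.system_service().clusters_service().cluster_service('456')
volume_service = cluster_service.gluster_volumes_service().volume_service('123')
bricks_service = volume_service.gluster_bricks_service()

# Re-activate the brick whose data was already migrated but which should be kept.
bricks_service.activate(
    bricks=[
        types.GlusterBrick(name='host1:/rhgs/brick1'),
    ],
)

connection.close()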
For example, to retain the bricks on gluster volume 123 from which data was migrated, send a request like this: With a request body like this: <action> <bricks> <brick> <name>host1:/rhgs/brick1</name> </brick> </bricks> </action> Table 6.259. Parameters summary Name Type Direction Summary async Boolean In Indicates if the activation should be performed asynchronously. bricks GlusterBrick[] In The list of bricks that need to be re-activated. 6.92.2. add POST Adds a list of bricks to a gluster volume. Used to expand a gluster volume by adding bricks. For replicated volume types, the parameter replica_count needs to be passed. If the replica count is being increased, the number of bricks needs to be equivalent to the number of replica sets. For example, to add bricks to gluster volume 123 , send a request like this: With a request body like this: <bricks> <brick> <server_id>111</server_id> <brick_dir>/export/data/brick3</brick_dir> </brick> </bricks> Table 6.260. Parameters summary Name Type Direction Summary bricks GlusterBrick[] In/Out The list of bricks to be added to the volume. replica_count Integer In Replica count of volume post add operation. stripe_count Integer In Stripe count of volume post add operation. 6.92.3. list GET Lists the bricks of a gluster volume. For example, to list bricks of gluster volume 123 , send a request like this: The response looks like this: <bricks> <brick id="234"> <name>host1:/rhgs/data/brick1</name> <brick_dir>/rhgs/data/brick1</brick_dir> <server_id>111</server_id> <status>up</status> </brick> <brick id="233"> <name>host2:/rhgs/data/brick1</name> <brick_dir>/rhgs/data/brick1</brick_dir> <server_id>222</server_id> <status>up</status> </brick> </bricks> The order of the returned list is based on the brick order provided at gluster volume creation. Table 6.261. Parameters summary Name Type Direction Summary bricks GlusterBrick[] Out follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of bricks to return. 6.92.3.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.92.3.2. max Sets the maximum number of bricks to return. If not specified all the bricks are returned. 6.92.4. migrate POST Start migration of data prior to removing bricks. Removing bricks is a two-step process, where the data on the bricks to be removed is first migrated to the remaining bricks. Once migration is completed, the removal of bricks is confirmed via the API remove . If at any point the action needs to be cancelled, stopmigrate has to be called. For instance, to delete a brick from a gluster volume with id 123 , send a request: With a request body like this: <action> <bricks> <brick> <name>host1:/rhgs/brick1</name> </brick> </bricks> </action> The migration process can be tracked from the job id returned from the API, using job , and the steps in the job using step . Table 6.262. Parameters summary Name Type Direction Summary async Boolean In Indicates if the migration should be performed asynchronously. bricks GlusterBrick[] In List of bricks for which data migration needs to be started. 6.92.5. remove DELETE Removes bricks from a gluster volume. The recommended way to remove bricks without data loss is to first migrate the data using migrate and then remove them. If migrate was not called on the bricks prior to remove , the bricks are removed without data migration, which may lead to data loss.
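The same two-step removal can be sketched with the Python SDK, reusing the connection and bricks_service handles from the activation sketch above; the migrate , remove and stop_migrate method names are again assumed from the SDK's generated naming and are illustrative only.

# Bricks that should be removed from the volume.
bricks = [types.GlusterBrick(name='host1:/rhgs/brick1')]

# Step 1: start migrating data off the bricks that are to be removed.
bricks_service.migrate(bricks=bricks)

# ... wait until the data migration completes (it can be tracked through the
# job returned by the API, as described above) ...

# Step 2: confirm the removal once the migration has finished.
bricks_service.remove(bricks=bricks)

# If the removal has to be cancelled while migration is still running,
# bricks_service.stop_migrate(bricks=bricks) reverts the bricks to normal use.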
For example, to delete the bricks from gluster volume 123 , send a request like this: With a request body like this: <bricks> <brick> <name>host:brick_directory</name> </brick> </bricks> Table 6.263. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. bricks GlusterBrick[] In The list of bricks to be removed. replica_count Integer In Replica count of volume post add operation. 6.92.6. stopmigrate POST Stops migration of data from bricks for a remove brick operation. Use this to cancel data migration that was started as part of the two-step remove brick process if the user wishes to continue using the bricks. The bricks that were marked for removal will function as normal bricks after this operation. For example, to stop migration of data from the bricks of gluster volume 123 , send a request like this: With a request body like this: <bricks> <brick> <name>host:brick_directory</name> </brick> </bricks> Table 6.264. Parameters summary Name Type Direction Summary async Boolean In Indicates if the action should be performed asynchronously. bricks GlusterBrick[] In List of bricks for which data migration needs to be stopped. 6.92.6.1. bricks List of bricks for which data migration needs to be stopped. This list should match the arguments passed to migrate . 6.93. GlusterHook Table 6.265. Methods summary Name Summary disable Resolves the status conflict of a hook among servers in the cluster by disabling the Gluster hook in all servers of the cluster. enable Resolves the status conflict of a hook among servers in the cluster by enabling the Gluster hook in all servers of the cluster. get remove Removes this Gluster hook from all servers in the cluster and deletes it from the database. resolve Resolves a missing hook conflict depending on the resolution type. 6.93.1. disable POST Resolves the status conflict of a hook among servers in the cluster by disabling the Gluster hook in all servers of the cluster. This updates the hook status to DISABLED in the database. Table 6.266. Parameters summary Name Type Direction Summary async Boolean In Indicates if the action should be performed asynchronously. 6.93.2. enable POST Resolves the status conflict of a hook among servers in the cluster by enabling the Gluster hook in all servers of the cluster. This updates the hook status to ENABLED in the database. Table 6.267. Parameters summary Name Type Direction Summary async Boolean In Indicates if the action should be performed asynchronously. 6.93.3. get GET Table 6.268. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . hook GlusterHook Out 6.93.3.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.93.4. remove DELETE Removes this Gluster hook from all servers in the cluster and deletes it from the database. Table 6.269. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.93.5. resolve POST Resolves a missing hook conflict depending on the resolution type. For ADD , the conflict is resolved by copying the hook stored in the engine database to all servers where the hook is missing. The engine maintains a list of all servers where the hook is missing. For COPY , the conflict in hook content is resolved by copying the hook stored in the engine database to all servers where the hook is missing. The engine maintains a list of all servers where the content is conflicting.
If a host id is passed as parameter, the hook content from the server is used as the master to copy to other servers in cluster. Table 6.270. Parameters summary Name Type Direction Summary async Boolean In Indicates if the action should be performed asynchronously. host Host In resolution_type String In 6.94. GlusterHooks Table 6.271. Methods summary Name Summary list Returns the list of hooks. 6.94.1. list GET Returns the list of hooks. The order of the returned list of hooks isn't guaranteed. Table 6.272. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . hooks GlusterHook[] Out max Integer In Sets the maximum number of hooks to return. 6.94.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.94.1.2. max Sets the maximum number of hooks to return. If not specified all the hooks are returned. 6.95. GlusterVolume This service manages a single gluster volume. Table 6.273. Methods summary Name Summary get Get the gluster volume details. getprofilestatistics Get gluster volume profile statistics. rebalance Rebalance the gluster volume. remove Removes the gluster volume. resetalloptions Resets all the options set in the gluster volume. resetoption Resets a particular option in the gluster volume. setoption Sets a particular option in the gluster volume. start Starts the gluster volume. startprofile Start profiling the gluster volume. stop Stops the gluster volume. stopprofile Stop profiling the gluster volume. stoprebalance Stop rebalancing the gluster volume. 6.95.1. get GET Get the gluster volume details. For example, to get details of a gluster volume with identifier 123 in cluster 456 , send a request like this: This GET request will return the following output: <gluster_volume id="123"> <name>data</name> <link href="/ovirt-engine/api/clusters/456/glustervolumes/123/glusterbricks" rel="glusterbricks"/> <disperse_count>0</disperse_count> <options> <option> <name>storage.owner-gid</name> <value>36</value> </option> <option> <name>performance.io-cache</name> <value>off</value> </option> <option> <name>cluster.data-self-heal-algorithm</name> <value>full</value> </option> </options> <redundancy_count>0</redundancy_count> <replica_count>3</replica_count> <status>up</status> <stripe_count>0</stripe_count> <transport_types> <transport_type>tcp</transport_type> </transport_types> <volume_type>replicate</volume_type> </gluster_volume> Table 6.274. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . volume GlusterVolume Out Representation of the gluster volume. 6.95.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.95.2. getprofilestatistics POST Get gluster volume profile statistics. For example, to get profile statistics for a gluster volume with identifier 123 in cluster 456 , send a request like this: Table 6.275. Parameters summary Name Type Direction Summary details GlusterVolumeProfileDetails Out Gluster volume profiling information returned from the action. 6.95.3. rebalance POST Rebalance the gluster volume. Rebalancing a gluster volume helps to distribute the data evenly across all the bricks. After expanding or shrinking a gluster volume (without migrating data), we need to rebalance the data among the bricks. 
In a non-replicated volume, all bricks should be online to perform the rebalance operation. In a replicated volume, at least one of the bricks in the replica should be online. For example, to rebalance a gluster volume with identifier 123 in cluster 456 , send a request like this: Table 6.276. Parameters summary Name Type Direction Summary async Boolean In Indicates if the rebalance should be performed asynchronously. fix_layout Boolean In If set to true, rebalance will only fix the layout so that new data added to the volume is distributed across all the hosts. force Boolean In Indicates if the rebalance should be force started. 6.95.3.1. fix_layout If set to true, rebalance will only fix the layout so that new data added to the volume is distributed across all the hosts. But it will not migrate/rebalance the existing data. Default is false . 6.95.3.2. force Indicates if the rebalance should be force started. The rebalance command can be executed with the force option even when the older clients are connected to the cluster. However, this could lead to a data loss situation. Default is false . 6.95.4. remove DELETE Removes the gluster volume. For example, to remove a volume with identifier 123 in cluster 456 , send a request like this: Table 6.277. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.95.5. resetalloptions POST Resets all the options set in the gluster volume. For example, to reset all options in a gluster volume with identifier 123 in cluster 456 , send a request like this: Table 6.278. Parameters summary Name Type Direction Summary async Boolean In Indicates if the reset should be performed asynchronously. 6.95.6. resetoption POST Resets a particular option in the gluster volume. For example, to reset a particular option option1 in a gluster volume with identifier 123 in cluster 456 , send a request like this: With the following request body: <action> <option name="option1"/> </action> Table 6.279. Parameters summary Name Type Direction Summary async Boolean In Indicates if the reset should be performed asynchronously. force Boolean In option Option In Option to reset. 6.95.7. setoption POST Sets a particular option in the gluster volume. For example, to set option1 with value value1 in a gluster volume with identifier 123 in cluster 456 , send a request like this: With the following request body: <action> <option name="option1" value="value1"/> </action> Table 6.280. Parameters summary Name Type Direction Summary async Boolean In Indicates if the action should be performed asynchronously. option Option In Option to set. 6.95.8. start POST Starts the gluster volume. A Gluster Volume should be started to read/write data. For example, to start a gluster volume with identifier 123 in cluster 456 , send a request like this: Table 6.281. Parameters summary Name Type Direction Summary async Boolean In Indicates if the action should be performed asynchronously. force Boolean In Indicates if the volume should be force started. 6.95.8.1. force Indicates if the volume should be force started. If a gluster volume is started already but few/all bricks are down then force start can be used to bring all the bricks up. Default is false . 6.95.9. startprofile POST Start profiling the gluster volume. For example, to start profiling a gluster volume with identifier 123 in cluster 456 , send a request like this: Table 6.282. 
Parameters summary Name Type Direction Summary async Boolean In Indicates if the action should be performed asynchronously. 6.95.10. stop POST Stops the gluster volume. Stopping a volume will make its data inaccessible. For example, to stop a gluster volume with identifier 123 in cluster 456 , send a request like this: Table 6.283. Parameters summary Name Type Direction Summary async Boolean In Indicates if the action should be performed asynchronously. force Boolean In 6.95.11. stopprofile POST Stop profiling the gluster volume. For example, to stop profiling a gluster volume with identifier 123 in cluster 456 , send a request like this: Table 6.284. Parameters summary Name Type Direction Summary async Boolean In Indicates if the action should be performed asynchronously. 6.95.12. stoprebalance POST Stop rebalancing the gluster volume. For example, to stop rebalancing a gluster volume with identifier 123 in cluster 456 , send a request like this: Table 6.285. Parameters summary Name Type Direction Summary async Boolean In Indicates if the action should be performed asynchronously. 6.96. GlusterVolumes This service manages a collection of gluster volumes available in a cluster. Table 6.286. Methods summary Name Summary add Creates a new gluster volume. list Lists all gluster volumes in the cluster. 6.96.1. add POST Creates a new gluster volume. The volume is created based on the properties of the volume parameter. The properties name , volume_type and bricks are required. For example, to add a volume with name myvolume to the cluster 123 , send the following request: With the following request body: <gluster_volume> <name>myvolume</name> <volume_type>replicate</volume_type> <replica_count>3</replica_count> <bricks> <brick> <server_id>server1</server_id> <brick_dir>/exp1</brick_dir> </brick> <brick> <server_id>server2</server_id> <brick_dir>/exp1</brick_dir> </brick> <brick> <server_id>server3</server_id> <brick_dir>/exp1</brick_dir> </brick> </bricks> </gluster_volume> Table 6.287. Parameters summary Name Type Direction Summary volume GlusterVolume In/Out The gluster volume definition from which to create the volume is passed as input and the newly created volume is returned. 6.96.2. list GET Lists all gluster volumes in the cluster. For example, to list all gluster volumes in cluster 456 , send a request like this: The order of the returned list of volumes isn't guaranteed. Table 6.288. Parameters summary Name Type Direction Summary case_sensitive Boolean In Indicates if the search performed using the search parameter should be performed taking case into account. follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of volumes to return. search String In A query string used to restrict the returned volumes. volumes GlusterVolume[] Out 6.96.2.1. case_sensitive Indicates if the search performed using the search parameter should be performed taking case into account. The default value is true , which means that case is taken into account. If you want to search ignoring case, set it to false . 6.96.2.2. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.96.2.3. max Sets the maximum number of volumes to return. If not specified all the volumes are returned. 6.97. Group Manages a group of users. Use this service to either get group details or remove groups.
In order to add new groups please use service that manages the collection of groups. Table 6.289. Methods summary Name Summary get Gets the system group information. remove Removes the system group. 6.97.1. get GET Gets the system group information. Usage: Will return the group information: <group href="/ovirt-engine/api/groups/123" id="123"> <name>mygroup</name> <link href="/ovirt-engine/api/groups/123/roles" rel="roles"/> <link href="/ovirt-engine/api/groups/123/permissions" rel="permissions"/> <link href="/ovirt-engine/api/groups/123/tags" rel="tags"/> <domain_entry_id>476652557A382F67696B6D2B32762B37796E46476D513D3D</domain_entry_id> <namespace>DC=example,DC=com</namespace> <domain href="/ovirt-engine/api/domains/ABCDEF" id="ABCDEF"> <name>myextension-authz</name> </domain> </group> Table 6.290. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . get Group Out The system group. 6.97.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.97.2. remove DELETE Removes the system group. Usage: Table 6.291. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.98. Groups Manages the collection of groups of users. Table 6.292. Methods summary Name Summary add Add group from a directory service. list List all the groups in the system. 6.98.1. add POST Add group from a directory service. Please note that domain name is name of the authorization provider. For example, to add the Developers group from the internal-authz authorization provider send a request like this: With a request body like this: <group> <name>Developers</name> <domain> <name>internal-authz</name> </domain> </group> Table 6.293. Parameters summary Name Type Direction Summary group Group In/Out The group to be added. 6.98.2. list GET List all the groups in the system. Usage: Will return the list of groups: <groups> <group href="/ovirt-engine/api/groups/123" id="123"> <name>mygroup</name> <link href="/ovirt-engine/api/groups/123/roles" rel="roles"/> <link href="/ovirt-engine/api/groups/123/permissions" rel="permissions"/> <link href="/ovirt-engine/api/groups/123/tags" rel="tags"/> <domain_entry_id>476652557A382F67696B6D2B32762B37796E46476D513D3D</domain_entry_id> <namespace>DC=example,DC=com</namespace> <domain href="/ovirt-engine/api/domains/ABCDEF" id="ABCDEF"> <name>myextension-authz</name> </domain> </group> ... </groups> The order of the returned list of groups isn't guaranteed. Table 6.294. Parameters summary Name Type Direction Summary case_sensitive Boolean In Indicates if the search performed using the search parameter should be performed taking case into account. follow String In Indicates which inner links should be followed . groups Group[] Out The list of groups. max Integer In Sets the maximum number of groups to return. search String In A query string used to restrict the returned groups. 6.98.2.1. case_sensitive Indicates if the search performed using the search parameter should be performed taking case into account. The default value is true , which means that case is taken into account. If you want to search ignoring case set it to false . 6.98.2.2. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.98.2.3. max Sets the maximum number of groups to return. 
If not specified all the groups are returned. 6.99. Host A service to manage a host. Table 6.295. Methods summary Name Summary activate Activates the host for use, for example to run virtual machines. approve Approve a pre-installed Hypervisor host for usage in the virtualization environment. commitnetconfig Marks the network configuration as good and persists it inside the host. deactivate Deactivates the host to perform maintenance tasks. enrollcertificate Enrolls the certificate of the host. fence Controls the host's power management device. forceselectspm To manually set a host as the storage pool manager (SPM). get Gets the host details. install Installs the latest version of VDSM and related software on the host. iscsidiscover Discovers iSCSI targets on the host, using the initiator details. iscsilogin Login to iSCSI targets on the host, using the target details. refresh Refresh the host devices and capabilities. remove Remove the host from the system. setupnetworks This method is used to change the configuration of the network interfaces of a host. syncallnetworks To synchronize all networks on the host, send a POST request to /ovirt-engine/api/hosts/123/syncallnetworks with a request body like this: <action/> unregisteredstoragedomainsdiscover Discovers the block Storage Domains which are candidates to be imported to the setup. update Update the host properties. upgrade Upgrades VDSM and selected software on the host. upgradecheck Check if there are upgrades available for the host. 6.99.1. activate POST Activates the host for use, for example to run virtual machines. Table 6.296. Parameters summary Name Type Direction Summary async Boolean In Indicates if the activation should be performed asynchronously. 6.99.2. approve POST Approve a pre-installed Hypervisor host for usage in the virtualization environment. This action also accepts an optional cluster element to define the target cluster for this host. Table 6.297. Parameters summary Name Type Direction Summary async Boolean In Indicates if the approval should be performed asynchronously. cluster Cluster In The cluster where the host will be added after it is approved. host Host In The host to approve. 6.99.3. commitnetconfig POST Marks the network configuration as good and persists it inside the host. An API user commits the network configuration to persist a host network interface attachment or detachment, or persist the creation and deletion of a bonded interface. Important Networking configuration is only committed after the engine has established that host connectivity is not lost as a result of the configuration changes. If host connectivity is lost, the host requires a reboot and automatically reverts to the previous networking configuration. For example, to commit the network configuration of the host with id 123 , send a request like this: With a request body like this: <action/> Table 6.298. Parameters summary Name Type Direction Summary async Boolean In Indicates if the action should be performed asynchronously. 6.99.4. deactivate POST Deactivates the host to perform maintenance tasks. Table 6.299. Parameters summary Name Type Direction Summary async Boolean In Indicates if the deactivation should be performed asynchronously. reason String In stop_gluster_service Boolean In Indicates if the gluster service should be stopped as part of deactivating the host. 6.99.4.1. stop_gluster_service Indicates if the gluster service should be stopped as part of deactivating the host.
It can be used while performing maintenance operations on the gluster host. The default value for this variable is false . 6.99.5. enrollcertificate POST Enrolls the certificate of the host. This is useful if you get a warning that the certificate is about to expire or has already expired. Table 6.300. Parameters summary Name Type Direction Summary async Boolean In Indicates if the enrollment should be performed asynchronously. 6.99.6. fence POST Controls the host's power management device. For example, to start the host, this can be done via: Table 6.301. Parameters summary Name Type Direction Summary async Boolean In Indicates if the fencing should be performed asynchronously. fence_type String In power_management PowerManagement Out 6.99.7. forceselectspm POST To manually set a host as the storage pool manager (SPM), send a request with a request body like this: <action/> Table 6.302. Parameters summary Name Type Direction Summary async Boolean In Indicates if the action should be performed asynchronously. 6.99.8. get GET Gets the host details. Table 6.303. Parameters summary Name Type Direction Summary all_content Boolean In Indicates if all of the attributes of the host should be included in the response. filter Boolean In Indicates if the results should be filtered according to the permissions of the user. follow String In Indicates which inner links should be followed . host Host Out The queried host. 6.99.8.1. all_content Indicates if all of the attributes of the host should be included in the response. By default the following attributes are excluded: hosted_engine For example, to retrieve the complete representation of host '123': Note These attributes are not included by default because retrieving them impacts performance. They are seldom used and require additional queries to the database. Use this parameter with caution and only when specifically required. 6.99.8.2. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.99.9. install POST Installs the latest version of VDSM and related software on the host. The action also performs every configuration step on the host that is done when adding a host to the engine: kdump configuration, hosted-engine deployment, kernel options changes, and so on. The host type defines additional parameters for the action. Example of installing a host, using curl and JSON, plain: curl \ --verbose \ --cacert /etc/pki/ovirt-engine/ca.pem \ --request PUT \ --header "Content-Type: application/json" \ --header "Accept: application/json" \ --header "Version: 4" \ --user "admin@internal:..." \ --data ' { "root_password": "myrootpassword" } ' \ "https://engine.example.com/ovirt-engine/api/hosts/123" Example of installing a host, using curl and JSON, with hosted engine components: curl \ --verbose \ --cacert /etc/pki/ovirt-engine/ca.pem \ --request PUT \ --header "Content-Type: application/json" \ --header "Accept: application/json" \ --header "Version: 4" \ --user "admin@internal:..." \ --data ' { "root_password": "myrootpassword" } ' \ "https://engine.example.com/ovirt-engine/api/hosts/123?deploy_hosted_engine=true" Important Since version 4.1.2 of the engine, when a host is reinstalled the host firewall definitions are overridden by default. Table 6.304. Parameters summary Name Type Direction Summary async Boolean In Indicates if the installation should be performed asynchronously.
deploy_hosted_engine Boolean In When set to true it means this host should also deploy the self-hosted engine components. host Host In The override_iptables property is used to indicate if the firewall configuration should be replaced by the default one. image String In When installing Red Hat Virtualization Host, an ISO image file is required. root_password String In The password of of the root user, used to connect to the host via SSH. ssh Ssh In The SSH details used to connect to the host. undeploy_hosted_engine Boolean In When set to true it means this host should un-deploy the self-hosted engine components and this host will not function as part of the High Availability cluster. 6.99.9.1. deploy_hosted_engine When set to true it means this host should also deploy the self-hosted engine components. A missing value is treated as true i.e deploy. Omitting this parameter means false and will perform no operation in the self-hosted engine area. 6.99.9.2. undeploy_hosted_engine When set to true it means this host should un-deploy the self-hosted engine components and this host will not function as part of the High Availability cluster. A missing value is treated as true i.e un-deploy Omitting this parameter means false and will perform no operation in the self-hosted engine area. 6.99.10. iscsidiscover POST Discovers iSCSI targets on the host, using the initiator details. For example, to discover iSCSI targets available in myiscsi.example.com , from host 123 , send a request like this: With a request body like this: <action> <iscsi> <address>myiscsi.example.com</address> </iscsi> </action> The result will be like this: <discovered_targets> <iscsi_details> <address>10.35.1.72</address> <port>3260</port> <portal>10.35.1.72:3260,1</portal> <target>iqn.2015-08.com.tgt:444</target> </iscsi_details> </discovered_targets> Table 6.305. Parameters summary Name Type Direction Summary async Boolean In Indicates if the discovery should be performed asynchronously. discovered_targets IscsiDetails[] Out The discovered targets including all connection information. iscsi IscsiDetails In The target iSCSI device. iscsi_targets String[] Out The iSCSI targets. 6.99.10.1. iscsi_targets The iSCSI targets. Since version 4.2 of the engine, this parameter is deprecated, use discovered_targets instead. 6.99.11. iscsilogin POST Login to iSCSI targets on the host, using the target details. Table 6.306. Parameters summary Name Type Direction Summary async Boolean In Indicates if the login should be performed asynchronously. iscsi IscsiDetails In The target iSCSI device. 6.99.12. refresh POST Refresh the host devices and capabilities. Table 6.307. Parameters summary Name Type Direction Summary async Boolean In Indicates if the refresh should be performed asynchronously. 6.99.13. remove DELETE Remove the host from the system. Table 6.308. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.99.14. setupnetworks POST This method is used to change the configuration of the network interfaces of a host. For example, if you have a host with three network interfaces eth0 , eth1 and eth2 and you want to configure a new bond using eth0 and eth1 , and put a VLAN on top of it. Using a simple shell script and the curl command line HTTP client that can be done as follows: Note This is valid for version 4 of the API. In versions some elements were represented as XML attributes instead of XML elements. 
In particular the options and ip elements were represented as follows: <options name="mode" value="4"/> <options name="miimon" value="100"/> <ip address="192.168.122.10" netmask="255.255.255.0"/> Using the Python SDK the same can be done with the following code: # Find the service that manages the collection of hosts: hosts_service = connection.system_service().hosts_service() # Find the host: host = hosts_service.list(search='name=myhost')[0] # Find the service that manages the host: host_service = hosts_service.host_service(host.id) # Configure the network adding a bond with two slaves and attaching it to a # network with an static IP address: host_service.setup_networks( modified_bonds=[ types.HostNic( name='bond0', bonding=types.Bonding( options=[ types.Option( name='mode', value='4', ), types.Option( name='miimon', value='100', ), ], slaves=[ types.HostNic( name='eth1', ), types.HostNic( name='eth2', ), ], ), ), ], modified_network_attachments=[ types.NetworkAttachment( network=types.Network( name='myvlan', ), host_nic=types.HostNic( name='bond0', ), ip_address_assignments=[ types.IpAddressAssignment( assignment_method=types.BootProtocol.STATIC, ip=types.Ip( address='192.168.122.10', netmask='255.255.255.0', ), ), ], dns_resolver_configuration=types.DnsResolverConfiguration( name_servers=[ '1.1.1.1', '2.2.2.2', ], ), ), ], ) # After modifying the network configuration it is very important to make it # persistent: host_service.commit_net_config() Important To make sure that the network configuration has been saved in the host, and that it will be applied when the host is rebooted, remember to call commitnetconfig . Table 6.309. Parameters summary Name Type Direction Summary async Boolean In Indicates if the action should be performed asynchronously. check_connectivity Boolean In connectivity_timeout Integer In modified_bonds HostNic[] In modified_labels NetworkLabel[] In modified_network_attachments NetworkAttachment[] In removed_bonds HostNic[] In removed_labels NetworkLabel[] In removed_network_attachments NetworkAttachment[] In synchronized_network_attachments NetworkAttachment[] In A list of network attachments that will be synchronized. 6.99.15. syncallnetworks POST To synchronize all networks on the host, send a request like this: With a request body like this: <action/> Table 6.310. Parameters summary Name Type Direction Summary async Boolean In Indicates if the action should be performed asynchronously. 6.99.16. unregisteredstoragedomainsdiscover POST Discovers the block Storage Domains which are candidates to be imported to the setup. For FCP no arguments are required. Table 6.311. Parameters summary Name Type Direction Summary async Boolean In Indicates if the discovery should be performed asynchronously. iscsi IscsiDetails In storage_domains StorageDomain[] Out 6.99.17. update PUT Update the host properties. For example, to update a the kernel command line of a host send a request like this: With request body like this: <host> <os> <custom_kernel_cmdline>vfio_iommu_type1.allow_unsafe_interrupts=1</custom_kernel_cmdline> </os> </host> Table 6.312. Parameters summary Name Type Direction Summary async Boolean In Indicates if the update should be performed asynchronously. host Host In/Out 6.99.18. upgrade POST Upgrades VDSM and selected software on the host. Table 6.313. Parameters summary Name Type Direction Summary async Boolean In Indicates if the upgrade should be performed asynchronously. 
image String In The image parameter specifies path to image, which is used for upgrade. reboot Boolean In Indicates if the host should be rebooted after upgrade. 6.99.18.1. image The image parameter specifies path to image, which is used for upgrade. This parameter is used only to upgrade Vintage Node hosts and it is not relevant for other hosts types. 6.99.18.2. reboot Indicates if the host should be rebooted after upgrade. By default the host is rebooted. Note This parameter is ignored for Red Hat Virtualization Host, which is always rebooted after upgrade. 6.99.19. upgradecheck POST Check if there are upgrades available for the host. If there are upgrades available an icon will be displayed to host status icon in the Administration Portal. Audit log messages are also added to indicate the availability of upgrades. The upgrade can be started from the webadmin or by using the upgrade host action. 6.100. HostDevice A service to access a particular device of a host. Table 6.314. Methods summary Name Summary get Retrieve information about a particular host's device. 6.100.1. get GET Retrieve information about a particular host's device. An example of getting a host device: <host_device href="/ovirt-engine/api/hosts/123/devices/456" id="456"> <name>usb_1_9_1_1_0</name> <capability>usb</capability> <host href="/ovirt-engine/api/hosts/123" id="123"/> <parent_device href="/ovirt-engine/api/hosts/123/devices/789" id="789"> <name>usb_1_9_1</name> </parent_device> </host_device> Table 6.315. Parameters summary Name Type Direction Summary device HostDevice Out follow String In Indicates which inner links should be followed . 6.100.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.101. HostDevices A service to access host devices. Table 6.316. Methods summary Name Summary list List the devices of a host. 6.101.1. list GET List the devices of a host. The order of the returned list of devices isn't guaranteed. Table 6.317. Parameters summary Name Type Direction Summary devices HostDevice[] Out follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of devices to return. 6.101.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.101.1.2. max Sets the maximum number of devices to return. If not specified all the devices are returned. 6.102. HostHook Table 6.318. Methods summary Name Summary get 6.102.1. get GET Table 6.319. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . hook Hook Out 6.102.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.103. HostHooks Table 6.320. Methods summary Name Summary list Returns the list of hooks configured for the host. 6.103.1. list GET Returns the list of hooks configured for the host. The order of the returned list of hooks isn't guranteed. Table 6.321. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . hooks Hook[] Out max Integer In Sets the maximum number of hooks to return. 6.103.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. 
See here for details. 6.103.1.2. max Sets the maximum number of hooks to return. If not specified all the hooks are returned. 6.104. HostNic A service to manage a network interface of a host. Table 6.322. Methods summary Name Summary get updatevirtualfunctionsconfiguration The action updates virtual function configuration in case the current resource represents an SR-IOV enabled NIC. 6.104.1. get GET Table 6.323. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . nic HostNic Out 6.104.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.104.2. updatevirtualfunctionsconfiguration POST The action updates virtual function configuration in case the current resource represents an SR-IOV enabled NIC. The input should be consisted of at least one of the following properties: allNetworksAllowed numberOfVirtualFunctions Please see the HostNicVirtualFunctionsConfiguration type for the meaning of the properties. Table 6.324. Parameters summary Name Type Direction Summary async Boolean In Indicates if the update should be performed asynchronously. virtual_functions_configuration HostNicVirtualFunctionsConfiguration In 6.105. HostNics A service to manage the network interfaces of a host. Table 6.325. Methods summary Name Summary list Returns the list of network interfaces of the host. 6.105.1. list GET Returns the list of network interfaces of the host. The order of the returned list of network interfaces isn't guaranteed. Table 6.326. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of NICs to return. nics HostNic[] Out 6.105.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.105.1.2. max Sets the maximum number of NICs to return. If not specified all the NICs are returned. 6.106. HostNumaNode Table 6.327. Methods summary Name Summary get 6.106.1. get GET Table 6.328. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . node NumaNode Out 6.106.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.107. HostNumaNodes Table 6.329. Methods summary Name Summary list Returns the list of NUMA nodes of the host. 6.107.1. list GET Returns the list of NUMA nodes of the host. The order of the returned list of NUMA nodes isn't guaranteed. Table 6.330. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of nodes to return. nodes NumaNode[] Out 6.107.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.107.1.2. max Sets the maximum number of nodes to return. If not specified all the nodes are returned. 6.108. HostStorage A service to manage host storages. Table 6.331. Methods summary Name Summary list Get list of storages. 6.108.1. list GET Get list of storages. The XML response you get will be like this one: <host_storages> <host_storage id="123"> ... </host_storage> ... 
</host_storages> The order of the returned list of storages isn't guaranteed. Table 6.332. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . report_status Boolean In Indicates if the status of the LUNs in the storage should be checked. storages HostStorage[] Out Retrieved list of storages. 6.108.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.108.1.2. report_status Indicates if the status of the LUNs in the storage should be checked. Checking the status of the LUN is an heavy weight operation and this data is not always needed by the user. This parameter will give the option to not perform the status check of the LUNs. The default is true for backward compatibility. Here an example with the LUN status : <host_storage id="123"> <logical_units> <logical_unit id="123"> <lun_mapping>0</lun_mapping> <paths>1</paths> <product_id>lun0</product_id> <serial>123</serial> <size>10737418240</size> <status>used</status> <vendor_id>LIO-ORG</vendor_id> <volume_group_id>123</volume_group_id> </logical_unit> </logical_units> <type>iscsi</type> <host id="123"/> </host_storage> Here an example without the LUN status : <host_storage id="123"> <logical_units> <logical_unit id="123"> <lun_mapping>0</lun_mapping> <paths>1</paths> <product_id>lun0</product_id> <serial>123</serial> <size>10737418240</size> <vendor_id>LIO-ORG</vendor_id> <volume_group_id>123</volume_group_id> </logical_unit> </logical_units> <type>iscsi</type> <host id="123"/> </host_storage> 6.109. Hosts A service that manages hosts. Table 6.333. Methods summary Name Summary add Creates a new host. list Get a list of all available hosts. 6.109.1. add POST Creates a new host. The host is created based on the attributes of the host parameter. The name , address and root_password properties are required. For example, to add a host send the following request: With the following request body: <host> <name>myhost</name> <address>myhost.example.com</address> <root_password>myrootpassword</root_password> </host> Note The root_password element is only included in the client-provided initial representation and is not exposed in the representations returned from subsequent requests. Important Since version 4.1.2 of the engine when a host is newly added we override the host firewall definitions by default. To add a hosted engine host, use the optional deploy_hosted_engine parameter: If the cluster has a default external network provider which is supported for automatic deployment, the external network provider is deployed when adding the host. Only external network providers for OVN are supported for the automatic deployment. To deploy an external network provider that differs to what is defined in the clusters, overwrite the external network provider when adding hosts by sending a request like this: With a request body that contains a reference to the desired provider in the external_network_provider_configuration : <host> <name>myhost</name> <address>myhost.example.com</address> <root_password>123456</root_password> <external_network_provider_configurations> <external_network_provider_configuration> <external_network_provider name="ovirt-provider-ovn"/> </external_network_provider_configuration> </external_network_provider_configurations> </host> Table 6.334. 
Parameters summary Name Type Direction Summary deploy_hosted_engine Boolean In When set to true , it means this host should also deploy the hosted engine components. host Host In/Out The host definition from which to create the new host is passed as a parameter, and the newly created host is returned. undeploy_hosted_engine Boolean In When set to true , it means this host should un-deploy the hosted engine components and will not function as part of the High Availability cluster. 6.109.1.1. deploy_hosted_engine When set to true , it means this host should also deploy the hosted engine components. A missing value is treated as true , that is, deploy. Omitting this parameter means false and will perform no operation in the hosted engine area. 6.109.1.2. undeploy_hosted_engine When set to true , it means this host should un-deploy the hosted engine components and will not function as part of the High Availability cluster. A missing value is treated as true , that is, un-deploy. Omitting this parameter means false and will perform no operation in the hosted engine area. 6.109.2. list GET Get a list of all available hosts. For example, to list the hosts, send the following request: The response body will be something like this: <hosts> <host href="/ovirt-engine/api/hosts/123" id="123"> ... </host> <host href="/ovirt-engine/api/hosts/456" id="456"> ... </host> ... </hosts> The order of the returned list of hosts is guaranteed only if the sortby clause is included in the search parameter. Table 6.335. Parameters summary Name Type Direction Summary all_content Boolean In Indicates if all of the attributes of the hosts should be included in the response. case_sensitive Boolean In Indicates if the search performed using the search parameter should be performed taking case into account. filter Boolean In Indicates if the results should be filtered according to the permissions of the user. follow String In Indicates which inner links should be followed . hosts Host[] Out max Integer In Sets the maximum number of hosts to return. search String In A query string used to restrict the returned hosts. 6.109.2.1. all_content Indicates if all of the attributes of the hosts should be included in the response. By default the following host attributes are excluded: hosted_engine For example, to retrieve the complete representation of the hosts: Note These attributes are not included by default because retrieving them impacts performance. They are seldom used and require additional queries to the database. Use this parameter with caution and only when specifically required. 6.109.2.2. case_sensitive Indicates if the search performed using the search parameter should be performed taking case into account. The default value is true , which means that case is taken into account. If you want to search ignoring case, set it to false . 6.109.2.3. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.109.2.4. max Sets the maximum number of hosts to return. If not specified all the hosts are returned. 6.110. Icon A service to manage an icon (read-only). Table 6.336. Methods summary Name Summary get Get an icon. 6.110.1. get GET Get an icon. You will get an XML response like this one: <icon id="123"> <data>Some binary data here</data> <media_type>image/png</media_type> </icon> Table 6.337. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . icon Icon Out Retrieved icon. 6.110.1.1.
follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.111. Icons A service to manage icons. Table 6.338. Methods summary Name Summary list Get a list of icons. 6.111.1. list GET Get a list of icons. You will get a XML response which is similar to this one: <icons> <icon id="123"> <data>...</data> <media_type>image/png</media_type> </icon> ... </icons> The order of the returned list of icons isn't guaranteed. Table 6.339. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . icons Icon[] Out Retrieved list of icons. max Integer In Sets the maximum number of icons to return. 6.111.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.111.1.2. max Sets the maximum number of icons to return. If not specified all the icons are returned. 6.112. Image Table 6.340. Methods summary Name Summary get import Imports an image. 6.112.1. get GET Table 6.341. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . image Image Out 6.112.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.112.2. import POST Imports an image. If the import_as_template parameter is true then the image will be imported as a template, otherwise it will be imported as a disk. When imported as a template, the name of the template can be specified by the optional template.name parameter. If that parameter is not specified, then the name of the template will be automatically assigned by the engine as GlanceTemplate-x (where x will be seven random hexadecimal characters). When imported as a disk, the name of the disk can be specified by the optional disk.name parameter. If that parameter is not specified, then the name of the disk will be automatically assigned by the engine as GlanceDisk-x (where x will be the seven hexadecimal characters of the image identifier). It is recommended to always explicitly specify the template or disk name, to avoid these automatic names generated by the engine. Table 6.342. Parameters summary Name Type Direction Summary async Boolean In Indicates if the import should be performed asynchronously. cluster Cluster In The cluster to which the image should be imported if the import_as_template parameter is set to true . disk Disk In The disk to import. import_as_template Boolean In Specifies if a template should be created from the imported disk. storage_domain StorageDomain In The storage domain to which the disk should be imported. template Template In The name of the template being created if the import_as_template parameter is set to true . 6.113. ImageTransfer This service provides a mechanism to control an image transfer. The client will have to create a transfer by using add of the Section 6.114, "ImageTransfers" service, stating the image to transfer data to/from. After doing that, the transfer is managed by this service. 
Using oVirt's Python's SDK: Uploading a disk with id 123 (on a random host in the data center): transfers_service = system_service.image_transfers_service() transfer = transfers_service.add( types.ImageTransfer( disk=types.Disk( id='123' ) ) ) Uploading a disk with id 123 on host id 456 : transfers_service = system_service.image_transfers_service() transfer = transfers_service.add( types.ImageTransfer( disk=types.Disk( id='123' ), host=types.Host( id='456' ) ) ) If the user wishes to download a disk rather than upload, he/she should specify download as the direction attribute of the transfer. This will grant a read permission from the image, instead of a write permission. E.g: transfers_service = system_service.image_transfers_service() transfer = transfers_service.add( types.ImageTransfer( disk=types.Disk( id='123' ), direction=types.ImageTransferDirection.DOWNLOAD ) ) Transfers have phases, which govern the flow of the upload/download. A client implementing such a flow should poll/check the transfer's phase and act accordingly. All the possible phases can be found in ImageTransferPhase . After adding a new transfer, its phase will be initializing . The client will have to poll on the transfer's phase until it changes. When the phase becomes transferring , the session is ready to start the transfer. For example: transfer_service = transfers_service.image_transfer_service(transfer.id) while transfer.phase == types.ImageTransferPhase.INITIALIZING: time.sleep(3) transfer = transfer_service.get() At that stage, if the transfer's phase is paused_system , then the session was not successfully established. One possible reason for that is that the ovirt-imageio-daemon is not running in the host that was selected for transfer. The transfer can be resumed by calling resume of the service that manages it. If the session was successfully established - the returned transfer entity will contain the proxy_url and signed_ticket attributes, which the client needs to use in order to transfer the required data. The client can choose whatever technique and tool for sending the HTTPS request with the image's data. proxy_url is the address of a proxy server to the image, to do I/O to. signed_ticket is the content that needs to be added to the Authentication header in the HTTPS request, in order to perform a trusted communication. For example, Python's HTTPSConnection can be used in order to perform a transfer, so an transfer_headers dict is set for the upcoming transfer: transfer_headers = { 'Authorization' : transfer.signed_ticket, } Using Python's HTTPSConnection , a new connection is established: # Extract the URI, port, and path from the transfer's proxy_url. url = urlparse.urlparse(transfer.proxy_url) # Create a new instance of the connection. proxy_connection = HTTPSConnection( url.hostname, url.port, context=ssl.SSLContext(ssl.PROTOCOL_SSLv23) ) For upload, the specific content range being sent must be noted in the Content-Range HTTPS header. This can be used in order to split the transfer into several requests for a more flexible process. For doing that, the client will have to repeatedly extend the transfer session to keep the channel open. Otherwise, the session will terminate and the transfer will get into paused_system phase, and HTTPS requests to the server will be rejected. 
E.g., the client can iterate on chunks of the file, and send them to the proxy server while asking the service to extend the session: path = "/path/to/image" MB_per_request = 32 with open(path, "rb") as disk: size = os.path.getsize(path) chunk_size = 1024*1024*MB_per_request pos = 0 while (pos < size): transfer_service.extend() transfer_headers['Content-Range'] = "bytes %d-%d/%d" % (pos, min(pos + chunk_size, size)-1, size) proxy_connection.request( 'PUT', url.path, disk.read(chunk_size), headers=transfer_headers ) r = proxy_connection.getresponse() print r.status, r.reason, "Completed", "{:.0%}".format(pos/ float(size)) pos += chunk_size Similarly, for a download transfer, a Range header must be sent, making the download process more easily managed by downloading the disk in chunks. E.g., the client will again iterate on chunks of the disk image, but this time he/she will download it to a local file, rather than uploading its own file to the image: output_file = "/home/user/downloaded_image" MiB_per_request = 32 chunk_size = 1024*1024*MiB_per_request total = disk_size with open(output_file, "wb") as disk: pos = 0 while pos < total: transfer_service.extend() transfer_headers['Range'] = "bytes=%d-%d" % (pos, min(total, pos + chunk_size) - 1) proxy_connection.request('GET', proxy_url.path, headers=transfer_headers) r = proxy_connection.getresponse() disk.write(r.read()) print "Completed", "{:.0%}".format(pos/ float(total)) pos += chunk_size When finishing the transfer, the user should call finalize . This will make the final adjustments and verifications for finishing the transfer process. For example: transfer_service.finalize() In case of an error, the transfer's phase will be changed to finished_failure , and the disk's status will be changed to Illegal . Otherwise it will be changed to finished_success , and the disk will be ready to be used. In both cases, the transfer entity will be removed shortly after. Using HTTP and cURL calls: For upload, create a new disk first: Specify 'initial_size' and 'provisioned_size' in bytes. 'initial_size' must be bigger or the same as the size of the uploaded data. With a request body as follows: <disk> <storage_domains> <storage_domain id="123"/> </storage_domains> <alias>mydisk</alias> <initial_size>1073741824</initial_size> <provisioned_size>1073741824</provisioned_size> <format>raw</format> </disk> Create a new image transfer for downloading/uploading a disk with id 456 : With a request body as follows: <image_transfer> <disk id="456"/> <direction>upload|download</direction> </image_transfer> Will respond: <image_transfer id="123"> <direction>download|upload</direction> <phase>initializing|transferring</phase> <proxy_url>https://proxy_fqdn:54323/images/41c732d4-2210-4e7b-9e5c-4e2805baadbb</proxy_url> <transfer_url>https://daemon_fqdn:54322/images/41c732d4-2210-4e7b-9e5c-4e2805baadbb</transfer_url> ... </image_transfer> Note: If the phase is 'initializing', poll the image_transfer till its phase changes to 'transferring'. Use the 'transfer_url' or 'proxy_url' to invoke a curl command: use 'transfer_url' for transferring directly from/to ovirt-imageio-daemon, or, use 'proxy_url' for transferring from/to ovirt-imageio-proxy. Note: using the proxy would mitigate scenarios where there's no direct connectivity to the daemon machine, e.g. vdsm machines are on a different network than the engine. 
- Download: $ curl --cacert /etc/pki/ovirt-engine/ca.pem https://daemon_fqdn:54322/images/41c732d4-2210-4e7b-9e5c-4e2805baadbb -o <output_file> - Upload: $ curl --cacert /etc/pki/ovirt-engine/ca.pem --upload-file <file_to_upload> -X PUT https://daemon_fqdn:54322/images/41c732d4-2210-4e7b-9e5c-4e2805baadbb Finalize the image transfer by invoking the action: With a request body as follows: <action /> Table 6.343. Methods summary Name Summary cancel Cancel the image transfer session. extend Extend the image transfer session. finalize After finishing the data transfer, finalize the transfer. get Get the image transfer entity. pause Pause the image transfer session. resume Resume the image transfer session. 6.113.1. cancel POST Cancel the image transfer session. This terminates the transfer operation and removes the partial image. 6.113.2. extend POST Extend the image transfer session. 6.113.3. finalize POST After finishing the data transfer, finalize the transfer. This will make sure that the data being transferred is valid and fits the image entity that was targeted in the transfer. Specifically, it will verify that if the image entity is a QCOW disk, the data uploaded is indeed a QCOW file, and that the image doesn't have a backing file. 6.113.4. get GET Get the image transfer entity. Table 6.344. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . image_transfer ImageTransfer Out 6.113.4.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.113.5. pause POST Pause the image transfer session. 6.113.6. resume POST Resume the image transfer session. The client will need to poll the transfer's phase until it is different than resuming . For example: transfer_service = transfers_service.image_transfer_service(transfer.id) transfer_service.resume() transfer = transfer_service.get() while transfer.phase == types.ImageTransferPhase.RESUMING: time.sleep(1) transfer = transfer_service.get() 6.114. ImageTransfers This service manages image transfers, for using the Image I/O API in Red Hat Virtualization. Please refer to image transfer for further documentation. Table 6.345. Methods summary Name Summary add Add a new image transfer. list Retrieves the list of image transfers that are currently being performed. 6.114.1. add POST Add a new image transfer. An image, disk or disk snapshot needs to be specified in order to make a new transfer. Important The image attribute is deprecated since version 4.2 of the engine. Use the disk or snapshot attributes instead. Creating a new image transfer for downloading or uploading a disk : To create an image transfer to download or upload a disk with id 123 , send the following request: With a request body like this: <image_transfer> <disk id="123"/> <direction>upload|download</direction> </image_transfer> Creating a new image transfer for downloading or uploading a disk_snapshot : To create an image transfer to download or upload a disk_snapshot with id 456 , send the following request: With a request body like this: <image_transfer> <snapshot id="456"/> <direction>download|upload</direction> </image_transfer> Table 6.346. Parameters summary Name Type Direction Summary image_transfer ImageTransfer In/Out The image transfer to add. 6.114.2. list GET Retrieves the list of image transfers that are currently being performed.
The order of the returned list of image transfers is not guaranteed. Table 6.347. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . image_transfer ImageTransfer[] Out A list of image transfers that are currently being performed. 6.114.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.115. Images Manages the set of images available in a storage domain or in an OpenStack image provider. Table 6.348. Methods summary Name Summary list Returns the list of images available in the storage domain or provider. 6.115.1. list GET Returns the list of images available in the storage domain or provider. The order of the returned list of images isn't guaranteed. Table 6.349. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . images Image[] Out max Integer In Sets the maximum number of images to return. 6.115.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.115.1.2. max Sets the maximum number of images to return. If not specified all the images are returned. 6.116. InstanceType Table 6.350. Methods summary Name Summary get Get a specific instance type and its attributes. remove Removes a specific instance type from the system. update Update a specific instance type and its attributes. 6.116.1. get GET Get a specific instance type and its attributes. Table 6.351. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . instance_type InstanceType Out 6.116.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.116.2. remove DELETE Removes a specific instance type from the system. If a virtual machine was created using an instance type X, then after removal of the instance type the virtual machine's instance type will be set to custom . Table 6.352. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.116.3. update PUT Update a specific instance type and its attributes. All the attributes are editable after creation. If a virtual machine was created using an instance type X and some configuration in instance type X was updated, the virtual machine's configuration will be updated automatically by the engine. For example, to update the memory of instance type 123 to 1 GiB and set the cpu topology to 2 sockets and 1 core, send a request like this: <instance_type> <memory>1073741824</memory> <cpu> <topology> <cores>1</cores> <sockets>2</sockets> <threads>1</threads> </topology> </cpu> </instance_type> Table 6.353. Parameters summary Name Type Direction Summary async Boolean In Indicates if the update should be performed asynchronously. instance_type InstanceType In/Out 6.117. InstanceTypeGraphicsConsole Table 6.354. Methods summary Name Summary get Gets graphics console configuration of the instance type. remove Remove the graphics console from the instance type. 6.117.1. get GET Gets graphics console configuration of the instance type. Table 6.355. Parameters summary Name Type Direction Summary console GraphicsConsole Out The information about the graphics console of the instance type.
follow String In Indicates which inner links should be followed . 6.117.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.117.2. remove DELETE Remove the graphics console from the instance type. Table 6.356. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.118. InstanceTypeGraphicsConsoles Table 6.357. Methods summary Name Summary add Add a new graphics console to the instance type. list Lists all the configured graphics consoles of the instance type. 6.118.1. add POST Add a new graphics console to the instance type. Table 6.358. Parameters summary Name Type Direction Summary console GraphicsConsole In/Out 6.118.2. list GET Lists all the configured graphics consoles of the instance type. The order of the returned list of graphics consoles isn't guaranteed. Table 6.359. Parameters summary Name Type Direction Summary consoles GraphicsConsole[] Out The list of graphics consoles of the instance type. follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of consoles to return. 6.118.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.118.2.2. max Sets the maximum number of consoles to return. If not specified all the consoles are returned. 6.119. InstanceTypeNic Table 6.360. Methods summary Name Summary get Gets network interface configuration of the instance type. remove Remove the network interface from the instance type. update Updates the network interface configuration of the instance type. 6.119.1. get GET Gets network interface configuration of the instance type. Table 6.361. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . nic Nic Out 6.119.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.119.2. remove DELETE Remove the network interface from the instance type. Table 6.362. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.119.3. update PUT Updates the network interface configuration of the instance type. Table 6.363. Parameters summary Name Type Direction Summary async Boolean In Indicates if the update should be performed asynchronously. nic Nic In/Out 6.120. InstanceTypeNics Table 6.364. Methods summary Name Summary add Add a new network interface to the instance type. list Lists all the configured network interfaces of the instance type. 6.120.1. add POST Add a new network interface to the instance type. Table 6.365. Parameters summary Name Type Direction Summary nic Nic In/Out 6.120.2. list GET Lists all the configured network interfaces of the instance type. The order of the returned list of network interfaces isn't guaranteed. Table 6.366. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of NICs to return. nics Nic[] Out search String In A query string used to restrict the returned NICs. 6.120.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details.
6.120.2.2. max Sets the maximum number of NICs to return. If not specified all the NICs are returned. 6.121. InstanceTypeWatchdog Table 6.367. Methods summary Name Summary get Gets watchdog configuration of the instance type. remove Remove a watchdog from the instance type. update Updates the watchdog configuration of the instance type. 6.121.1. get GET Gets watchdog configuration of the instance type. Table 6.368. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . watchdog Watchdog Out 6.121.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.121.2. remove DELETE Remove a watchdog from the instance type. Table 6.369. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.121.3. update PUT Updates the watchdog configuration of the instance type. Table 6.370. Parameters summary Name Type Direction Summary async Boolean In Indicates if the update should be performed asynchronously. watchdog Watchdog In/Out 6.122. InstanceTypeWatchdogs Table 6.371. Methods summary Name Summary add Add new watchdog to the instance type. list Lists all the configured watchdogs of the instance type. 6.122.1. add POST Add new watchdog to the instance type. Table 6.372. Parameters summary Name Type Direction Summary watchdog Watchdog In/Out 6.122.2. list GET Lists all the configured watchdogs of the instance type. The order of the returned list of watchdogs isn't guaranteed. Table 6.373. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of watchdogs to return. search String In A query string used to restrict the returned templates. watchdogs Watchdog[] Out 6.122.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.122.2.2. max Sets the maximum number of watchdogs to return. If not specified all the watchdogs are returned. 6.123. InstanceTypes Table 6.374. Methods summary Name Summary add Creates a new instance type. list Lists all existing instance types in the system. 6.123.1. add POST Creates a new instance type. This requires only a name attribute and can include all hardware configurations of the virtual machine. 
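As an illustration only, a minimal sketch of the same operation using the oVirt Python SDK, assuming the connection object from the setup sketch earlier in this chapter; the instance type name is a placeholder:

import ovirtsdk4.types as types

# 'connection' is an ovirtsdk4.Connection, as in the earlier setup sketch.
instance_types_service = connection.system_service().instance_types_service()

# Create an instance type with only the mandatory name attribute.
instance_type = instance_types_service.add(
    types.InstanceType(
        name='myinstancetype',
    ),
)

Over the plain REST API, the equivalent is a POST to the instance types collection.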
With a request body like this: <instance_type> <name>myinstancetype</name> </instance_type> Creating an instance type with all hardware configurations, with a request body like this: <instance_type> <name>myinstancetype</name> <console> <enabled>true</enabled> </console> <cpu> <topology> <cores>2</cores> <sockets>2</sockets> <threads>1</threads> </topology> </cpu> <custom_cpu_model>AMD Opteron_G2</custom_cpu_model> <custom_emulated_machine>q35</custom_emulated_machine> <display> <monitors>1</monitors> <single_qxl_pci>true</single_qxl_pci> <smartcard_enabled>true</smartcard_enabled> <type>spice</type> </display> <high_availability> <enabled>true</enabled> <priority>1</priority> </high_availability> <io> <threads>2</threads> </io> <memory>4294967296</memory> <memory_policy> <ballooning>true</ballooning> <guaranteed>268435456</guaranteed> </memory_policy> <migration> <auto_converge>inherit</auto_converge> <compressed>inherit</compressed> <policy id="00000000-0000-0000-0000-000000000000"/> </migration> <migration_downtime>2</migration_downtime> <os> <boot> <devices> <device>hd</device> </devices> </boot> </os> <rng_device> <rate> <bytes>200</bytes> <period>2</period> </rate> <source>urandom</source> </rng_device> <soundcard_enabled>true</soundcard_enabled> <usb> <enabled>true</enabled> <type>native</type> </usb> <virtio_scsi> <enabled>true</enabled> </virtio_scsi> </instance_type> Table 6.375. Parameters summary Name Type Direction Summary instance_type InstanceType In/Out 6.123.2. list GET Lists all existing instance types in the system. The order of the returned list of instance types isn't guaranteed. Table 6.376. Parameters summary Name Type Direction Summary case_sensitive Boolean In Indicates if the search performed using the search parameter should be performed taking case into account. follow String In Indicates which inner links should be followed . instance_type InstanceType[] Out max Integer In Sets the maximum number of instance types to return. search String In A query string used to restrict the returned instance types. 6.123.2.1. case_sensitive Indicates if the search performed using the search parameter should be performed taking case into account. The default value is true , which means that case is taken into account. If you want to search ignoring case set it to false . 6.123.2.2. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.123.2.3. max Sets the maximum number of instance types to return. If not specified all the instance types are returned. 6.124. IscsiBond Table 6.377. Methods summary Name Summary get remove Removes an existing iSCSI bond. update Updates an iSCSI bond. 6.124.1. get GET Table 6.378. Parameters summary Name Type Direction Summary bond IscsiBond Out The iSCSI bond. follow String In Indicates which inner links should be followed . 6.124.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.124.2. remove DELETE Removes an existing iSCSI bond. For example, to remove the iSCSI bond 456 send a request like this: Table 6.379. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.124.3. update PUT Updates an iSCSI bond. Updating of an iSCSI bond can be done on the name and the description attributes only.
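As an illustration only, a minimal sketch of such an update with the oVirt Python SDK, assuming the connection object from the setup sketch earlier in this chapter; the data center and bond identifiers are the same placeholders used in the REST example that follows:

import ovirtsdk4.types as types

# 'connection' is an ovirtsdk4.Connection, as in the earlier setup sketch.
dc_service = connection.system_service().data_centers_service().data_center_service('123')
bond_service = dc_service.iscsi_bonds_service().iscsi_bond_service('456')

# Only the name and description attributes of the bond can be updated.
bond_service.update(
    types.IscsiBond(
        name='mybond',
        description='My iSCSI bond',
    ),
)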
For example, to update the iSCSI bond 456 of data center 123 , send a request like this: The request body should look like this: <iscsi_bond> <name>mybond</name> <description>My iSCSI bond</description> </iscsi_bond> Table 6.380. Parameters summary Name Type Direction Summary async Boolean In Indicates if the update should be performed asynchronously. bond IscsiBond In/Out The iSCSI bond to update. 6.125. IscsiBonds Table 6.381. Methods summary Name Summary add Create a new iSCSI bond on a data center. list Returns the list of iSCSI bonds configured in the data center. 6.125.1. add POST Create a new iSCSI bond on a data center. For example, to create a new iSCSI bond on data center 123 using storage connections 456 and 789 , send a request like this: The request body should look like this: <iscsi_bond> <name>mybond</name> <storage_connections> <storage_connection id="456"/> <storage_connection id="789"/> </storage_connections> <networks> <network id="abc"/> </networks> </iscsi_bond> Table 6.382. Parameters summary Name Type Direction Summary bond IscsiBond In/Out 6.125.2. list GET Returns the list of iSCSI bonds configured in the data center. The order of the returned list of iSCSI bonds isn't guaranteed. Table 6.383. Parameters summary Name Type Direction Summary bonds IscsiBond[] Out follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of bonds to return. 6.125.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.125.2.2. max Sets the maximum number of bonds to return. If not specified all the bonds are returned. 6.126. Job A service to manage a job. Table 6.384. Methods summary Name Summary clear Set an external job execution to be cleared by the system. end Marks an external job execution as ended. get Retrieves a job. 6.126.1. clear POST Set an external job execution to be cleared by the system. For example, to set a job with identifier 123 send the following request: With the following request body: <action/> Table 6.385. Parameters summary Name Type Direction Summary async Boolean In Indicates if the action should be performed asynchronously. 6.126.2. end POST Marks an external job execution as ended. For example, to terminate a job with identifier 123 send the following request: With the following request body: <action> <force>true</force> <status>finished</status> </action> Table 6.386. Parameters summary Name Type Direction Summary async Boolean In Indicates if the action should be performed asynchronously. force Boolean In Indicates if the job should be forcibly terminated. succeeded Boolean In Indicates if the job should be marked as successfully finished or as failed. 6.126.2.1. succeeded Indicates if the job should be marked as successfully finished or as failed. This parameter is optional, and the default value is true . 6.126.3. get GET Retrieves a job. 
You will receive response in XML like this one: <job href="/ovirt-engine/api/jobs/123" id="123"> <actions> <link href="/ovirt-engine/api/jobs/123/clear" rel="clear"/> <link href="/ovirt-engine/api/jobs/123/end" rel="end"/> </actions> <description>Adding Disk</description> <link href="/ovirt-engine/api/jobs/123/steps" rel="steps"/> <auto_cleared>true</auto_cleared> <end_time>2016-12-12T23:07:29.758+02:00</end_time> <external>false</external> <last_updated>2016-12-12T23:07:29.758+02:00</last_updated> <start_time>2016-12-12T23:07:26.593+02:00</start_time> <status>failed</status> <owner href="/ovirt-engine/api/users/456" id="456"/> </job> Table 6.387. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . job Job Out Retrieves the representation of the job. 6.126.3.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.127. Jobs A service to manage jobs. Table 6.388. Methods summary Name Summary add Add an external job. list Retrieves the representation of the jobs. 6.127.1. add POST Add an external job. For example, to add a job with the following request: With the following request body: <job> <description>Doing some work</description> <auto_cleared>true</auto_cleared> </job> The response should look like: <job href="/ovirt-engine/api/jobs/123" id="123"> <actions> <link href="/ovirt-engine/api/jobs/123/clear" rel="clear"/> <link href="/ovirt-engine/api/jobs/123/end" rel="end"/> </actions> <description>Doing some work</description> <link href="/ovirt-engine/api/jobs/123/steps" rel="steps"/> <auto_cleared>true</auto_cleared> <external>true</external> <last_updated>2016-12-13T02:15:42.130+02:00</last_updated> <start_time>2016-12-13T02:15:42.130+02:00</start_time> <status>started</status> <owner href="/ovirt-engine/api/users/456" id="456"/> </job> Table 6.389. Parameters summary Name Type Direction Summary job Job In/Out Job that will be added. 6.127.2. list GET Retrieves the representation of the jobs. You will receive response in XML like this one: <jobs> <job href="/ovirt-engine/api/jobs/123" id="123"> <actions> <link href="/ovirt-engine/api/jobs/123/clear" rel="clear"/> <link href="/ovirt-engine/api/jobs/123/end" rel="end"/> </actions> <description>Adding Disk</description> <link href="/ovirt-engine/api/jobs/123/steps" rel="steps"/> <auto_cleared>true</auto_cleared> <end_time>2016-12-12T23:07:29.758+02:00</end_time> <external>false</external> <last_updated>2016-12-12T23:07:29.758+02:00</last_updated> <start_time>2016-12-12T23:07:26.593+02:00</start_time> <status>failed</status> <owner href="/ovirt-engine/api/users/456" id="456"/> </job> ... </jobs> The order of the returned list of jobs isn't guaranteed. Table 6.390. Parameters summary Name Type Direction Summary case_sensitive Boolean In Indicates if the search performed using the search parameter should be performed taking case into account. follow String In Indicates which inner links should be followed . jobs Job[] Out A representation of jobs. max Integer In Sets the maximum number of jobs to return. search String In A query string used to restrict the returned jobs. 6.127.2.1. case_sensitive Indicates if the search performed using the search parameter should be performed taking case into account. The default value is true , which means that case is taken into account. If you want to search ignoring case set it to false . 6.127.2.2. 
follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.127.2.3. max Sets the maximum number of jobs to return. If not specified all the jobs are returned. 6.128. KatelloErrata A service to manage Katello errata. The information is retrieved from Katello. Table 6.391. Methods summary Name Summary list Retrieves the representation of the Katello errata. 6.128.1. list GET Retrieves the representation of the Katello errata. You will receive response in XML like this one: <katello_errata> <katello_erratum href="/ovirt-engine/api/katelloerrata/123" id="123"> <name>RHBA-2013:XYZ</name> <description>The description of the erratum</description> <title>some bug fix update</title> <type>bugfix</type> <issued>2013-11-20T02:00:00.000+02:00</issued> <solution>Few guidelines regarding the solution</solution> <summary>Updated packages that fix one bug are now available for XYZ</summary> <packages> <package> <name>libipa_hbac-1.9.2-82.11.el6_4.i686</name> </package> ... </packages> </katello_erratum> ... </katello_errata> The order of the returned list of erratum isn't guaranteed. Table 6.392. Parameters summary Name Type Direction Summary errata KatelloErratum[] Out A representation of Katello errata. follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of errata to return. 6.128.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.128.1.2. max Sets the maximum number of errata to return. If not specified all the errata are returned. 6.129. KatelloErratum A service to manage a Katello erratum. Table 6.393. Methods summary Name Summary get Retrieves a Katello erratum. 6.129.1. get GET Retrieves a Katello erratum. You will receive response in XML like this one: <katello_erratum href="/ovirt-engine/api/katelloerrata/123" id="123"> <name>RHBA-2013:XYZ</name> <description>The description of the erratum</description> <title>some bug fix update</title> <type>bugfix</type> <issued>2013-11-20T02:00:00.000+02:00</issued> <solution>Few guidelines regarding the solution</solution> <summary>Updated packages that fix one bug are now available for XYZ</summary> <packages> <package> <name>libipa_hbac-1.9.2-82.11.el6_4.i686</name> </package> ... </packages> </katello_erratum> Table 6.394. Parameters summary Name Type Direction Summary erratum KatelloErratum Out Retrieves the representation of the Katello erratum. follow String In Indicates which inner links should be followed . 6.129.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.130. LinkLayerDiscoveryProtocol A service to fetch information elements received by Link Layer Discovery Protocol (LLDP). Table 6.395. Methods summary Name Summary list Fetches information elements received by LLDP. 6.130.1. list GET Fetches information elements received by LLDP. Table 6.396. Parameters summary Name Type Direction Summary elements LinkLayerDiscoveryProtocolElement[] Out Retrieves a list of information elements received by LLDP. follow String In Indicates which inner links should be followed . 6.130.1.1. elements Retrieves a list of information elements received by LLDP. 
For example, to retrieve the information elements received on the NIC 321 on host 123 , send a request like this: It will return a response like this: <link_layer_discovery_protocol_elements> ... <link_layer_discovery_protocol_element> <name>Port Description</name> <properties> <property> <name>port description</name> <value>Summit300-48-Port 1001</value> </property> </properties> <type>4</type> </link_layer_discovery_protocol_element> ... </link_layer_discovery_protocol_elements> 6.130.1.2. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.131. MacPool Table 6.397. Methods summary Name Summary get remove Removes a MAC address pool. update Updates a MAC address pool. 6.131.1. get GET Table 6.398. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . pool MacPool Out 6.131.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.131.2. remove DELETE Removes a MAC address pool. For example, to remove the MAC address pool having id 123 send a request like this: Table 6.399. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.131.3. update PUT Updates a MAC address pool. The name , description , allow_duplicates , and ranges attributes can be updated. For example, to update the MAC address pool of id 123 send a request like this: With a request body like this: <mac_pool> <name>UpdatedMACPool</name> <description>An updated MAC address pool</description> <allow_duplicates>false</allow_duplicates> <ranges> <range> <from>00:1A:4A:16:01:51</from> <to>00:1A:4A:16:01:e6</to> </range> <range> <from>02:1A:4A:01:00:00</from> <to>02:1A:4A:FF:FF:FF</to> </range> </ranges> </mac_pool> Table 6.400. Parameters summary Name Type Direction Summary async Boolean In Indicates if the update should be performed asynchronously. pool MacPool In/Out 6.132. MacPools Table 6.401. Methods summary Name Summary add Creates a new MAC address pool. list Returns the list of MAC address pools of the system. 6.132.1. add POST Creates a new MAC address pool. Creation of a MAC address pool requires values for the name and ranges attributes. For example, to create a MAC address pool send a request like this: With a request body like this: <mac_pool> <name>MACPool</name> <description>A MAC address pool</description> <allow_duplicates>true</allow_duplicates> <default_pool>false</default_pool> <ranges> <range> <from>00:1A:4A:16:01:51</from> <to>00:1A:4A:16:01:e6</to> </range> </ranges> </mac_pool> Table 6.402. Parameters summary Name Type Direction Summary pool MacPool In/Out 6.132.2. list GET Returns the list of MAC address pools of the system. The order of the returned list of MAC address pools isn't guaranteed. Table 6.403. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of pools to return. pools MacPool[] Out 6.132.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.132.2.2. max Sets the maximum number of pools to return. If not specified all the pools are returned. 6.133. Measurable 6.134. Moveable Table 6.404. Methods summary Name Summary move 6.134.1.
move POST Table 6.405. Parameters summary Name Type Direction Summary async Boolean In Indicates if the move should be performed asynchronously. 6.135. Network A service managing a network Table 6.406. Methods summary Name Summary get Gets a logical network. remove Removes a logical network, or the association of a logical network to a data center. update Updates a logical network. 6.135.1. get GET Gets a logical network. For example: Will respond: <network href="/ovirt-engine/api/networks/123" id="123"> <name>ovirtmgmt</name> <description>Default Management Network</description> <link href="/ovirt-engine/api/networks/123/permissions" rel="permissions"/> <link href="/ovirt-engine/api/networks/123/vnicprofiles" rel="vnicprofiles"/> <link href="/ovirt-engine/api/networks/123/networklabels" rel="networklabels"/> <mtu>0</mtu> <stp>false</stp> <usages> <usage>vm</usage> </usages> <data_center href="/ovirt-engine/api/datacenters/456" id="456"/> </network> Table 6.407. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . network Network Out 6.135.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.135.2. remove DELETE Removes a logical network, or the association of a logical network to a data center. For example, to remove the logical network 123 send a request like this: Each network is bound exactly to one data center. So if we disassociate network with data center it has the same result as if we would just remove that network. However it might be more specific to say we're removing network 456 of data center 123 . For example, to remove the association of network 456 to data center 123 send a request like this: Note To remove an external logical network, the network has to be removed directly from its provider by OpenStack Networking API . The entity representing the external network inside Red Hat Virtualization is removed automatically, if auto_sync is enabled for the provider, otherwise the entity has to be removed using this method. Table 6.408. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.135.3. update PUT Updates a logical network. The name , description , ip , vlan , stp and display attributes can be updated. For example, to update the description of the logical network 123 send a request like this: With a request body like this: <network> <description>My updated description</description> </network> The maximum transmission unit of a network is set using a PUT request to specify the integer value of the mtu attribute. For example, to set the maximum transmission unit send a request like this: With a request body like this: <network> <mtu>1500</mtu> </network> Table 6.409. Parameters summary Name Type Direction Summary async Boolean In Indicates if the update should be performed asynchronously. network Network In/Out 6.136. NetworkAttachment Table 6.410. Methods summary Name Summary get remove update Update the specified network attachment on the host. 6.136.1. get GET Table 6.411. Parameters summary Name Type Direction Summary attachment NetworkAttachment Out follow String In Indicates which inner links should be followed . 6.136.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.136.2. remove DELETE Table 6.412. 
Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.136.3. update PUT Update the specified network attachment on the host. Table 6.413. Parameters summary Name Type Direction Summary async Boolean In Indicates if the update should be performed asynchronously. attachment NetworkAttachment In/Out 6.137. NetworkAttachments Manages the set of network attachments of a host or host NIC. Table 6.414. Methods summary Name Summary add Add a new network attachment to the network interface. list Returns the list of network attachments of the host or host NIC. 6.137.1. add POST Add a new network attachment to the network interface. Table 6.415. Parameters summary Name Type Direction Summary attachment NetworkAttachment In/Out 6.137.2. list GET Returns the list of network attachments of the host or host NIC. The order of the returned list of network attachments isn't guaranteed. Table 6.416. Parameters summary Name Type Direction Summary attachments NetworkAttachment[] Out follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of attachments to return. 6.137.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.137.2.2. max Sets the maximum number of attachments to return. If not specified all the attachments are returned. 6.138. NetworkFilter Manages a network filter. <network_filter id="00000019-0019-0019-0019-00000000026b"> <name>example-network-filter-b</name> <version> <major>4</major> <minor>0</minor> <build>-1</build> <revision>-1</revision> </version> </network_filter> Please note that version refers to the minimal supported version for the specific filter. Table 6.417. Methods summary Name Summary get Retrieves a representation of the network filter. 6.138.1. get GET Retrieves a representation of the network filter. Table 6.418. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . network_filter NetworkFilter Out 6.138.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.139. NetworkFilters Represents a read-only sub-collection of network filters. A network filter enables filtering of packets sent to and from the VM's NIC according to defined rules. For more information, please refer to the NetworkFilter service documentation. Network filters are supported in different versions, starting from version 3.0. A network filter is defined for each vnic profile. A vnic profile is defined for a specific network. A network can be assigned to several different clusters. In the future, each network will be defined at cluster level. Currently, each network is defined at data center level. Potential network filters for each network are determined by the network's data center compatibility version V. V must be >= the network filter version in order to configure this network filter for a specific network. Please note that if a network is assigned to a cluster whose version supports a network filter, the filter may still not be available because the data center version is smaller than the network filter's version.
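As an illustration only, a minimal Python SDK sketch of listing the network filters supported by a cluster; the cluster identifier is a placeholder, the connection object from the setup sketch earlier in this chapter is assumed, and it is also assumed that the SDK exposes the cluster's networkfilters sub-collection as network_filters_service():

# 'connection' is an ovirtsdk4.Connection, as in the earlier setup sketch.
clusters_service = connection.system_service().clusters_service()
cluster_service = clusters_service.cluster_service('123')

# List the network filters supported by this cluster.
filters = cluster_service.network_filters_service().list()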
Example of listing all of the supported network filters for a specific cluster: Output: <network_filters> <network_filter id="00000019-0019-0019-0019-00000000026c"> <name>example-network-filter-a</name> <version> <major>4</major> <minor>0</minor> <build>-1</build> <revision>-1</revision> </version> </network_filter> <network_filter id="00000019-0019-0019-0019-00000000026b"> <name>example-network-filter-b</name> <version> <major>4</major> <minor>0</minor> <build>-1</build> <revision>-1</revision> </version> </network_filter> <network_filter id="00000019-0019-0019-0019-00000000026a"> <name>example-network-filter-a</name> <version> <major>3</major> <minor>0</minor> <build>-1</build> <revision>-1</revision> </version> </network_filter> </network_filters> Table 6.419. Methods summary Name Summary list Retrieves the representations of the network filters. 6.139.1. list GET Retrieves the representations of the network filters. The order of the returned list of network filters isn't guaranteed. Table 6.420. Parameters summary Name Type Direction Summary filters NetworkFilter[] Out follow String In Indicates which inner links should be followed . 6.139.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.140. NetworkLabel Table 6.421. Methods summary Name Summary get remove Removes a label from a logical network. 6.140.1. get GET Table 6.422. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . label NetworkLabel Out 6.140.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.140.2. remove DELETE Removes a label from a logical network. For example, to remove the label exemplary from a logical network having id 123 send the following request: Table 6.423. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.141. NetworkLabels Manages the set of labels attached to a network or to a host NIC. Table 6.424. Methods summary Name Summary add Attaches a label to a logical network. list Returns the list of labels attached to the network or host NIC. 6.141.1. add POST Attaches a label to a logical network. You can attach labels to a logical network to automate the association of that logical network with physical host network interfaces to which the same label has been attached. For example, to attach the label mylabel to a logical network having id 123 send a request like this: With a request body like this: <label id="mylabel"/> Table 6.425. Parameters summary Name Type Direction Summary label NetworkLabel In/Out 6.141.2. list GET Returns the list of labels attached to the network or host NIC. The order of the returned list of labels isn't guaranteed. Table 6.426. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . labels NetworkLabel[] Out max Integer In Sets the maximum number of labels to return. 6.141.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.141.2.2. max Sets the maximum number of labels to return. If not specified all the labels are returned. 6.142. Networks Manages logical networks. The engine creates a default ovirtmgmt network on installation.
This network acts as the management network for access to hypervisor hosts. This network is associated with the Default cluster and is a member of the Default data center. Table 6.427. Methods summary Name Summary add Creates a new logical network, or associates an existing network with a data center. list List logical networks. 6.142.1. add POST Creates a new logical network, or associates an existing network with a data center. Creation of a new network requires the name and data_center elements. For example, to create a network named mynetwork for data center 123 send a request like this: With a request body like this: <network> <name>mynetwork</name> <data_center id="123"/> </network> To associate the existing network 456 with the data center 123 send a request like this: With a request body like this: <network> <name>ovirtmgmt</name> </network> To create a network named exnetwork on top of an external OpenStack network provider 456 send a request like this: <network> <name>exnetwork</name> <external_provider id="456"/> <data_center id="123"/> </network> Table 6.428. Parameters summary Name Type Direction Summary network Network In/Out 6.142.2. list GET List logical networks. For example: Will respond: <networks> <network href="/ovirt-engine/api/networks/123" id="123"> <name>ovirtmgmt</name> <description>Default Management Network</description> <link href="/ovirt-engine/api/networks/123/permissions" rel="permissions"/> <link href="/ovirt-engine/api/networks/123/vnicprofiles" rel="vnicprofiles"/> <link href="/ovirt-engine/api/networks/123/networklabels" rel="networklabels"/> <mtu>0</mtu> <stp>false</stp> <usages> <usage>vm</usage> </usages> <data_center href="/ovirt-engine/api/datacenters/456" id="456"/> </network> ... </networks> The order of the returned list of networks is guaranteed only if the sortby clause is included in the search parameter. Table 6.429. Parameters summary Name Type Direction Summary case_sensitive Boolean In Indicates if the search performed using the search parameter should be performed taking case into account. follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of networks to return. networks Network[] Out search String In A query string used to restrict the returned networks. 6.142.2.1. case_sensitive Indicates if the search performed using the search parameter should be performed taking case into account. The default value is true , which means that case is taken into account. If you want to search ignoring case set it to false . 6.142.2.2. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.142.2.3. max Sets the maximum number of networks to return. If not specified all the networks are returned. 6.143. NicNetworkFilterParameter This service manages a parameter for a network filter. Table 6.430. Methods summary Name Summary get Retrieves a representation of the network filter parameter. remove Removes the filter parameter. update Updates the network filter parameter. 6.143.1. get GET Retrieves a representation of the network filter parameter. Table 6.431. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . parameter NetworkFilterParameter Out The representation of the network filter parameter. 6.143.1.1. follow Indicates which inner links should be followed . 
The objects referenced by these links will be fetched as part of the current request. See here for details. 6.143.2. remove DELETE Removes the filter parameter. For example, to remove the filter parameter with id 123 on NIC 456 of virtual machine 789 send a request like this: 6.143.3. update PUT Updates the network filter parameter. For example, to update the network filter parameter with id 123 on NIC 456 of virtual machine 789 send a request like this: With a request body like this: <network_filter_parameter> <name>updatedName</name> <value>updatedValue</value> </network_filter_parameter> Table 6.432. Parameters summary Name Type Direction Summary parameter NetworkFilterParameter In/Out The network filter parameter that is being updated. 6.144. NicNetworkFilterParameters This service manages a collection of parameters for network filters. Table 6.433. Methods summary Name Summary add Add a network filter parameter. list Retrieves the representations of the network filter parameters. 6.144.1. add POST Add a network filter parameter. For example, to add the parameter for the network filter on NIC 456 of virtual machine 789 send a request like this: With a request body like this: <network_filter_parameter> <name>IP</name> <value>10.0.1.2</value> </network_filter_parameter> Table 6.434. Parameters summary Name Type Direction Summary parameter NetworkFilterParameter In/Out The network filter parameter that is being added. 6.144.2. list GET Retrieves the representations of the network filter parameters. The order of the returned list of network filter parameters isn't guaranteed. Table 6.435. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . parameters NetworkFilterParameter[] Out The list of the network filter parameters. 6.144.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.145. OpenstackImage Table 6.436. Methods summary Name Summary get import Imports a virtual machine from a Glance image storage domain. 6.145.1. get GET Table 6.437. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . image OpenStackImage Out 6.145.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.145.2. import POST Imports a virtual machine from a Glance image storage domain. For example, to import the image with identifier 456 from the storage domain with identifier 123 send a request like this: With a request body like this: <action> <storage_domain> <name>images0</name> </storage_domain> <cluster> <name>images0</name> </cluster> </action> Table 6.438. Parameters summary Name Type Direction Summary async Boolean In Indicates if the import should be performed asynchronously. cluster Cluster In This parameter is mandatory when using import_as_template and indicates which cluster should be used for importing the Glance image as a template. disk Disk In import_as_template Boolean In Indicates whether the image should be imported as a template. storage_domain StorageDomain In template Template In 6.146. OpenstackImageProvider Table 6.439. Methods summary Name Summary get importcertificates Import the SSL certificates of the external host provider.
remove testconnectivity In order to test connectivity for external provider we need to run following request where 123 is an id of a provider. update Update the specified OpenStack image provider in the system. 6.146.1. get GET Table 6.440. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . provider OpenStackImageProvider Out 6.146.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.146.2. importcertificates POST Import the SSL certificates of the external host provider. Table 6.441. Parameters summary Name Type Direction Summary certificates Certificate[] In 6.146.3. remove DELETE Table 6.442. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.146.4. testconnectivity POST In order to test connectivity for external provider we need to run following request where 123 is an id of a provider. Table 6.443. Parameters summary Name Type Direction Summary async Boolean In Indicates if the test should be performed asynchronously. 6.146.5. update PUT Update the specified OpenStack image provider in the system. Table 6.444. Parameters summary Name Type Direction Summary async Boolean In Indicates if the update should be performed asynchronously. provider OpenStackImageProvider In/Out 6.147. OpenstackImageProviders Table 6.445. Methods summary Name Summary add Adds a new OpenStack image provider to the system. list Returns the list of providers. 6.147.1. add POST Adds a new OpenStack image provider to the system. Table 6.446. Parameters summary Name Type Direction Summary provider OpenStackImageProvider In/Out 6.147.2. list GET Returns the list of providers. The order of the returned list of providers is not guaranteed. Table 6.447. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of providers to return. providers OpenStackImageProvider[] Out search String In A query string used to restrict the returned OpenStack image providers. 6.147.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.147.2.2. max Sets the maximum number of providers to return. If not specified, all the providers are returned. 6.148. OpenstackImages Table 6.448. Methods summary Name Summary list Lists the images of a Glance image storage domain. 6.148.1. list GET Lists the images of a Glance image storage domain. The order of the returned list of images isn't guaranteed. Table 6.449. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . images OpenStackImage[] Out max Integer In Sets the maximum number of images to return. 6.148.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.148.1.2. max Sets the maximum number of images to return. If not specified all the images are returned. 6.149. OpenstackNetwork Table 6.450. Methods summary Name Summary get import This operation imports an external network into Red Hat Virtualization. 6.149.1. get GET Table 6.451. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . 
6.149. OpenstackNetwork Table 6.450. Methods summary Name Summary get import This operation imports an external network into Red Hat Virtualization. 6.149.1. get GET Table 6.451. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . network OpenStackNetwork Out 6.149.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.149.2. import POST This operation imports an external network into Red Hat Virtualization. The network will be added to the specified data center. Table 6.452. Parameters summary Name Type Direction Summary async Boolean In Indicates if the import should be performed asynchronously. data_center DataCenter In The data center into which the network is to be imported. 6.149.2.1. data_center The data center into which the network is to be imported. The data center is mandatory, and can be specified using the id or name attributes. The rest of the attributes will be ignored. Note If auto_sync is enabled for the provider, the network might be imported automatically. To prevent this, automatic import can be disabled by setting auto_sync to false, and enabling it again after importing the network.
6.150. OpenstackNetworkProvider This service manages the OpenStack network provider. Table 6.453. Methods summary Name Summary get Returns the representation of the object managed by this service. importcertificates Import the SSL certificates of the external host provider. remove Removes the provider. testconnectivity To test connectivity of an external provider, send a request like the following, where 123 is the identifier of the provider. update Updates the provider. 6.150.1. get GET Returns the representation of the object managed by this service. For example, to get the OpenStack network provider with identifier 1234 , send a request like this: Table 6.454. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . provider OpenStackNetworkProvider Out 6.150.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.150.2. importcertificates POST Import the SSL certificates of the external host provider. Table 6.455. Parameters summary Name Type Direction Summary certificates Certificate[] In 6.150.3. remove DELETE Removes the provider. For example, to remove the OpenStack network provider with identifier 1234 , send a request like this: Table 6.456. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.150.4. testconnectivity POST To test connectivity of an external provider, send a request like the following, where 123 is the identifier of the provider. Table 6.457. Parameters summary Name Type Direction Summary async Boolean In Indicates if the test should be performed asynchronously. 6.150.5. update PUT Updates the provider. For example, to update the provider_name , requires_authentication , url , tenant_name and type properties, for the OpenStack network provider with identifier 1234 , send a request like this: With a request body like this: <openstack_network_provider> <name>ovn-network-provider</name> <requires_authentication>false</requires_authentication> <url>http://some_server_url.domain.com:9696</url> <tenant_name>oVirt</tenant_name> <type>external</type> </openstack_network_provider> Table 6.458. Parameters summary Name Type Direction Summary async Boolean In Indicates if the update should be performed asynchronously. provider OpenStackNetworkProvider In/Out The provider to update.
6.151. OpenstackNetworkProviders This service manages OpenStack network providers.
Table 6.459. Methods summary Name Summary add Adds a new network provider to the system. list Returns the list of providers. 6.151.1. add POST Adds a new network provider to the system. If the type property is not present, a default value of NEUTRON will be used. Table 6.460. Parameters summary Name Type Direction Summary provider OpenStackNetworkProvider In/Out 6.151.2. list GET Returns the list of providers. The order of the returned list of providers is not guaranteed. Table 6.461. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of providers to return. providers OpenStackNetworkProvider[] Out search String In A query string used to restrict the returned OpenStack network providers. 6.151.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.151.2.2. max Sets the maximum number of providers to return. If not specified, all the providers are returned. 6.152. OpenstackNetworks Table 6.462. Methods summary Name Summary list Returns the list of networks. 6.152.1. list GET Returns the list of networks. The order of the returned list of networks isn't guaranteed. Table 6.463. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of networks to return. networks OpenStackNetwork[] Out 6.152.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.152.1.2. max Sets the maximum number of networks to return. If not specified all the networks are returned. 6.153. OpenstackSubnet Table 6.464. Methods summary Name Summary get remove 6.153.1. get GET Table 6.465. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . subnet OpenStackSubnet Out 6.153.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.153.2. remove DELETE Table 6.466. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.154. OpenstackSubnets Table 6.467. Methods summary Name Summary add list Returns the list of sub-networks. 6.154.1. add POST Table 6.468. Parameters summary Name Type Direction Summary subnet OpenStackSubnet In/Out 6.154.2. list GET Returns the list of sub-networks. The order of the returned list of sub-networks isn't guaranteed. Table 6.469. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of sub-networks to return. subnets OpenStackSubnet[] Out 6.154.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.154.2.2. max Sets the maximum number of sub-networks to return. If not specified all the sub-networks are returned. 6.155. OpenstackVolumeAuthenticationKey Table 6.470. Methods summary Name Summary get remove update Update the specified authentication key. 6.155.1. get GET Table 6.471. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . 
key OpenstackVolumeAuthenticationKey Out 6.155.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.155.2. remove DELETE Table 6.472. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.155.3. update PUT Update the specified authentication key. Table 6.473. Parameters summary Name Type Direction Summary key OpenstackVolumeAuthenticationKey In/Out
6.156. OpenstackVolumeAuthenticationKeys Table 6.474. Methods summary Name Summary add Add a new authentication key to the OpenStack volume provider. list Returns the list of authentication keys. 6.156.1. add POST Add a new authentication key to the OpenStack volume provider. Table 6.475. Parameters summary Name Type Direction Summary key OpenstackVolumeAuthenticationKey In/Out 6.156.2. list GET Returns the list of authentication keys. The order of the returned list of authentication keys isn't guaranteed. Table 6.476. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . keys OpenstackVolumeAuthenticationKey[] Out max Integer In Sets the maximum number of keys to return. 6.156.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.156.2.2. max Sets the maximum number of keys to return. If not specified all the keys are returned.
6.157. OpenstackVolumeProvider Table 6.477. Methods summary Name Summary get importcertificates Import the SSL certificates of the external host provider. remove testconnectivity To test connectivity of an external provider, send a request like the following, where 123 is the identifier of the provider. update Update the specified OpenStack volume provider in the system. 6.157.1. get GET Table 6.478. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . provider OpenStackVolumeProvider Out 6.157.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.157.2. importcertificates POST Import the SSL certificates of the external host provider. Table 6.479. Parameters summary Name Type Direction Summary certificates Certificate[] In 6.157.3. remove DELETE Table 6.480. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. force Boolean In Indicates if the operation should succeed, and the provider removed from the database, even if something fails during the operation. 6.157.3.1. force Indicates if the operation should succeed, and the provider removed from the database, even if something fails during the operation. This parameter is optional, and the default value is false . 6.157.4. testconnectivity POST To test connectivity of an external provider, send a request like the following, where 123 is the identifier of the provider. Table 6.481. Parameters summary Name Type Direction Summary async Boolean In Indicates if the test should be performed asynchronously. 6.157.5. update PUT Update the specified OpenStack volume provider in the system. Table 6.482. Parameters summary Name Type Direction Summary async Boolean In Indicates if the update should be performed asynchronously. provider OpenStackVolumeProvider In/Out
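The remove operation in section 6.157.3 accepts the force flag described above. A plausible request, assuming the collection path /ovirt-engine/api/openstackvolumeproviders and that the flag is passed as a URL parameter (in the same way as the storage domain format example later in this chapter), is:
DELETE /ovirt-engine/api/openstackvolumeproviders/123?force=true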
6.158. OpenstackVolumeProviders Table 6.483. Methods summary Name Summary add Adds a new volume provider. list Retrieves the list of volume providers. 6.158.1. add POST Adds a new volume provider. For example: With a request body like this: <openstack_volume_provider> <name>mycinder</name> <url>https://mycinder.example.com:8776</url> <data_center> <name>mydc</name> </data_center> <requires_authentication>true</requires_authentication> <username>admin</username> <password>mypassword</password> <tenant_name>mytenant</tenant_name> </openstack_volume_provider> Table 6.484. Parameters summary Name Type Direction Summary provider OpenStackVolumeProvider In/Out 6.158.2. list GET Retrieves the list of volume providers. The order of the returned list of volume providers is not guaranteed. Table 6.485. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of providers to return. providers OpenStackVolumeProvider[] Out search String In A query string used to restrict the returned volume providers. 6.158.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.158.2.2. max Sets the maximum number of providers to return. If not specified, all the providers are returned.
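The add example in section 6.158.1 shows only the request body. A plausible request line, assuming the collection is exposed at /ovirt-engine/api/openstackvolumeproviders, is:
POST /ovirt-engine/api/openstackvolumeproviders
followed by the <openstack_volume_provider> body shown above.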
6.159. OpenstackVolumeType Table 6.486. Methods summary Name Summary get 6.159.1. get GET Table 6.487. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . type OpenStackVolumeType Out 6.159.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details.
6.160. OpenstackVolumeTypes Table 6.488. Methods summary Name Summary list Returns the list of volume types. 6.160.1. list GET Returns the list of volume types. The order of the returned list of volume types isn't guaranteed. Table 6.489. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of volume types to return. types OpenStackVolumeType[] Out 6.160.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.160.1.2. max Sets the maximum number of volume types to return. If not specified all the volume types are returned.
6.161. OperatingSystem Table 6.490. Methods summary Name Summary get 6.161.1. get GET Table 6.491. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . operating_system OperatingSystemInfo Out 6.161.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details.
6.162. OperatingSystems Manages the set of types of operating systems available in the system. Table 6.492. Methods summary Name Summary list Returns the list of types of operating system available in the system. 6.162.1. list GET Returns the list of types of operating system available in the system. The order of the returned list of operating systems isn't guaranteed. Table 6.493. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of operating systems to return. operating_system OperatingSystemInfo[] Out 6.162.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.162.1.2. max Sets the maximum number of operating systems to return. If not specified all the operating systems are returned.
6.163. Permission Table 6.494. Methods summary Name Summary get remove 6.163.1. get GET Table 6.495. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . permission Permission Out 6.163.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.163.2. remove DELETE Table 6.496. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously.
6.164. Permit A service to manage a specific permit of the role. Table 6.497. Methods summary Name Summary get Gets the information about the permit of the role. remove Removes the permit from the role. 6.164.1. get GET Gets the information about the permit of the role. For example, to retrieve the information about the permit with the id 456 of the role with the id 123, send a request like this: <permit href="/ovirt-engine/api/roles/123/permits/456" id="456"> <name>change_vm_cd</name> <administrative>false</administrative> <role href="/ovirt-engine/api/roles/123" id="123"/> </permit> Table 6.498. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . permit Permit Out The permit of the role. 6.164.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.164.2. remove DELETE Removes the permit from the role. For example, to remove the permit with id 456 from the role with id 123, send a request like this: Table 6.499. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously.
6.165. Permits Represents a permits sub-collection of the specific role. Table 6.500. Methods summary Name Summary add Adds a permit to the role. list List the permits of the role. 6.165.1. add POST Adds a permit to the role. The permit name can be retrieved from the Section 6.39, "ClusterLevels" service. For example, to assign a permit create_vm to the role with id 123, send a request like this: With a request body like this: <permit> <name>create_vm</name> </permit> Table 6.501. Parameters summary Name Type Direction Summary permit Permit In/Out The permit to add. 6.165.2. list GET List the permits of the role. For example, to list the permits of the role with the id 123, send a request like this: <permits> <permit href="/ovirt-engine/api/roles/123/permits/5" id="5"> <name>change_vm_cd</name> <administrative>false</administrative> <role href="/ovirt-engine/api/roles/123" id="123"/> </permit> <permit href="/ovirt-engine/api/roles/123/permits/7" id="7"> <name>connect_to_vm</name> <administrative>false</administrative> <role href="/ovirt-engine/api/roles/123" id="123"/> </permit> </permits> The order of the returned list of permits isn't guaranteed. Table 6.502. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of permits to return. permits Permit[] Out List of permits. 6.165.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.165.2.2. max Sets the maximum number of permits to return. If not specified all the permits are returned.
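The request lines for the permit examples above are not included in this excerpt. Based on the permit hrefs shown in the responses ( /ovirt-engine/api/roles/123/permits/456 ), plausible forms are:
DELETE /ovirt-engine/api/roles/123/permits/456 for the remove operation in section 6.164.2, and
POST /ovirt-engine/api/roles/123/permits for the add operation in section 6.165.1, sent with the <permit> body shown above.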
6.166. Qos Table 6.503. Methods summary Name Summary get Get the specified QoS in the data center. remove Remove the specified QoS from the data center. update Update the specified QoS in the data center. 6.166.1. get GET Get the specified QoS in the data center. You will get a response like this: <qos href="/ovirt-engine/api/datacenters/123/qoss/123" id="123"> <name>123</name> <description>123</description> <max_iops>1</max_iops> <max_throughput>1</max_throughput> <type>storage</type> <data_center href="/ovirt-engine/api/datacenters/123" id="123"/> </qos> Table 6.504. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . qos Qos Out Queried QoS object. 6.166.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.166.2. remove DELETE Remove the specified QoS from the data center. Table 6.505. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.166.3. update PUT Update the specified QoS in the data center. For example with curl: You will receive a response like this: <qos href="/ovirt-engine/api/datacenters/123/qoss/123" id="123"> <name>321</name> <description>321</description> <max_iops>10</max_iops> <max_throughput>1</max_throughput> <type>storage</type> <data_center href="/ovirt-engine/api/datacenters/123" id="123"/> </qos> Table 6.506. Parameters summary Name Type Direction Summary async Boolean In Indicates if the update should be performed asynchronously. qos Qos In/Out Updated QoS object.
6.167. Qoss Manages the set of quality of service configurations available in a data center. Table 6.507. Methods summary Name Summary add Add a new QoS to the data center. list Returns the list of quality of service configurations available in the data center. 6.167.1. add POST Add a new QoS to the data center. The response will look as follows: <qos href="/ovirt-engine/api/datacenters/123/qoss/123" id="123"> <name>123</name> <description>123</description> <max_iops>10</max_iops> <type>storage</type> <data_center href="/ovirt-engine/api/datacenters/123" id="123"/> </qos> Table 6.508. Parameters summary Name Type Direction Summary qos Qos In/Out Added QoS object. 6.167.2. list GET Returns the list of quality of service configurations available in the data center. You will get a response that looks like this: <qoss> <qos href="/ovirt-engine/api/datacenters/123/qoss/1" id="1">...</qos> <qos href="/ovirt-engine/api/datacenters/123/qoss/2" id="2">...</qos> <qos href="/ovirt-engine/api/datacenters/123/qoss/3" id="3">...</qos> </qoss> The order of the returned list of quality of service configurations isn't guaranteed. Table 6.509. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of QoS descriptors to return. qoss Qos[] Out List of queried QoS objects. 6.167.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.167.2.2. max Sets the maximum number of QoS descriptors to return. If not specified all the descriptors are returned.
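The add example in section 6.167.1 shows only the response. A plausible request, inferred from that response and from the qoss hrefs ( /ovirt-engine/api/datacenters/123/qoss/... ), is:
POST /ovirt-engine/api/datacenters/123/qoss
With a request body like this: <qos> <name>123</name> <description>123</description> <max_iops>10</max_iops> <type>storage</type> </qos>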
6.168. Quota Table 6.510. Methods summary Name Summary get Retrieves a quota. remove Delete a quota. update Updates a quota. 6.168.1. get GET Retrieves a quota. An example of retrieving a quota: <quota id="456"> <name>myquota</name> <description>My new quota for virtual machines</description> <cluster_hard_limit_pct>20</cluster_hard_limit_pct> <cluster_soft_limit_pct>80</cluster_soft_limit_pct> <storage_hard_limit_pct>20</storage_hard_limit_pct> <storage_soft_limit_pct>80</storage_soft_limit_pct> </quota> Table 6.511. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . quota Quota Out 6.168.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.168.2. remove DELETE Delete a quota. An example of deleting a quota: Table 6.512. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.168.3. update PUT Updates a quota. An example of updating a quota: <quota> <cluster_hard_limit_pct>30</cluster_hard_limit_pct> <cluster_soft_limit_pct>70</cluster_soft_limit_pct> <storage_hard_limit_pct>20</storage_hard_limit_pct> <storage_soft_limit_pct>80</storage_soft_limit_pct> </quota> Table 6.513. Parameters summary Name Type Direction Summary async Boolean In Indicates if the update should be performed asynchronously. quota Quota In/Out
6.169. QuotaClusterLimit Table 6.514. Methods summary Name Summary get remove 6.169.1. get GET Table 6.515. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . limit QuotaClusterLimit Out 6.169.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.169.2. remove DELETE Table 6.516. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously.
6.170. QuotaClusterLimits Manages the set of quota limits configured for a cluster. Table 6.517. Methods summary Name Summary add Add a cluster limit to a specified Quota. list Returns the set of quota limits configured for the cluster. 6.170.1. add POST Add a cluster limit to a specified Quota. Table 6.518. Parameters summary Name Type Direction Summary limit QuotaClusterLimit In/Out 6.170.2. list GET Returns the set of quota limits configured for the cluster. The order of the returned list of quota limits isn't guaranteed. Table 6.519. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . limits QuotaClusterLimit[] Out max Integer In Sets the maximum number of limits to return. 6.170.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.170.2.2. max Sets the maximum number of limits to return. If not specified all the limits are returned.
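The add operation in section 6.170.1 has no example in this excerpt. A minimal sketch, assuming quotas are nested under the data center (here data center 123 and quota 456) and that vcpu_limit and memory_limit are the limits being set, is:
POST /ovirt-engine/api/datacenters/123/quotas/456/quotaclusterlimits
With a request body like this: <quota_cluster_limit> <vcpu_limit>20</vcpu_limit> <memory_limit>65536</memory_limit> </quota_cluster_limit>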
6.171. QuotaStorageLimit Table 6.520. Methods summary Name Summary get remove 6.171.1. get GET Table 6.521. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . limit QuotaStorageLimit Out 6.171.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.171.2. remove DELETE Table 6.522. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously.
6.172. QuotaStorageLimits Manages the set of storage limits configured for a quota. Table 6.523. Methods summary Name Summary add Adds a storage limit to a specified quota. list Returns the list of storage limits configured for the quota. 6.172.1. add POST Adds a storage limit to a specified quota. To create a 100GiB storage limit for all storage domains in a data center, send a request like this: With a request body like this: <quota_storage_limit> <limit>100</limit> </quota_storage_limit> To create a 50GiB storage limit for a storage domain with the ID 000 , send a request like this: With a request body like this: <quota_storage_limit> <limit>50</limit> <storage_domain id="000"/> </quota_storage_limit> Table 6.524. Parameters summary Name Type Direction Summary limit QuotaStorageLimit In/Out 6.172.2. list GET Returns the list of storage limits configured for the quota. The order of the returned list of storage limits is not guaranteed. Table 6.525. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . limits QuotaStorageLimit[] Out max Integer In Sets the maximum number of limits to return. 6.172.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.172.2.2. max Sets the maximum number of limits to return. If not specified, all the limits are returned.
6.173. Quotas Manages the set of quotas configured for a data center. Table 6.526. Methods summary Name Summary add Creates a new quota. list Lists quotas of a data center. 6.173.1. add POST Creates a new quota. An example of creating a new quota: <quota> <name>myquota</name> <description>My new quota for virtual machines</description> </quota> Table 6.527. Parameters summary Name Type Direction Summary quota Quota In/Out 6.173.2. list GET Lists quotas of a data center. The order of the returned list of quotas isn't guaranteed. Table 6.528. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of quota descriptors to return. quotas Quota[] Out 6.173.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.173.2.2. max Sets the maximum number of quota descriptors to return. If not specified all the descriptors are returned.
6.174. Role Table 6.529. Methods summary Name Summary get Get the role. remove Removes the role. update Updates a role. 6.174.1. get GET Get the role. You will receive an XML response like this: <role id="123"> <name>MyRole</name> <description>MyRole description</description> <link href="/ovirt-engine/api/roles/123/permits" rel="permits"/> <administrative>true</administrative> <mutable>false</mutable> </role> Table 6.530. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . role Role Out Retrieved role. 6.174.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.174.2. remove DELETE Removes the role. To remove the role you need to know its id, then send a request like this:
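A plausible form of that request, using the role identifier 123 from the get example above, is:
DELETE /ovirt-engine/api/roles/123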
Table 6.531. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.174.3. update PUT Updates a role. You are allowed to update the name , description and administrative attributes after the role is created. Within this endpoint you cannot add or remove role permits; to do that, use the service that manages the permits of the role. For example, to update the role's name , description and administrative attributes, send a request like this: With a request body like this: <role> <name>MyNewRoleName</name> <description>My new description of the role</description> <administrative>true</administrative> </role> Table 6.532. Parameters summary Name Type Direction Summary async Boolean In Indicates if the update should be performed asynchronously. role Role In/Out Updated role.
6.175. Roles Provides read-only access to the global set of roles. Table 6.533. Methods summary Name Summary add Create a new role. list List roles. 6.175.1. add POST Create a new role. The role can be administrative or non-administrative and can have different permits. For example, to add the MyRole non-administrative role with permits to log in and create virtual machines, send a request like this (note that you have to pass the permit ids): With a request body like this: <role> <name>MyRole</name> <description>My custom role to create virtual machines</description> <administrative>false</administrative> <permits> <permit id="1"/> <permit id="1300"/> </permits> </role> Table 6.534. Parameters summary Name Type Direction Summary role Role In/Out Role that will be added. 6.175.2. list GET List roles. You will receive a response in XML like this: <roles> <role id="123"> <name>SuperUser</name> <description>Roles management administrator</description> <link href="/ovirt-engine/api/roles/123/permits" rel="permits"/> <administrative>true</administrative> <mutable>false</mutable> </role> ... </roles> The order of the returned list of roles isn't guaranteed. Table 6.535. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of roles to return. roles Role[] Out Retrieved list of roles. 6.175.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.175.2.2. max Sets the maximum number of roles to return. If not specified all the roles are returned.
6.176. SchedulingPolicies Manages the set of scheduling policies available in the system. Table 6.536. Methods summary Name Summary add Add a new scheduling policy to the system. list Returns the list of scheduling policies available in the system. 6.176.1. add POST Add a new scheduling policy to the system. Table 6.537. Parameters summary Name Type Direction Summary policy SchedulingPolicy In/Out 6.176.2. list GET Returns the list of scheduling policies available in the system. The order of the returned list of scheduling policies isn't guaranteed. Table 6.538. Parameters summary Name Type Direction Summary filter Boolean In Indicates if the results should be filtered according to the permissions of the user. follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of policies to return. policies SchedulingPolicy[] Out 6.176.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.176.2.2.
max Sets the maximum number of policies to return. If not specified all the policies are returned. 6.177. SchedulingPolicy Table 6.539. Methods summary Name Summary get remove update Update the specified user defined scheduling policy in the system. 6.177.1. get GET Table 6.540. Parameters summary Name Type Direction Summary filter Boolean In Indicates if the results should be filtered according to the permissions of the user. follow String In Indicates which inner links should be followed . policy SchedulingPolicy Out 6.177.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.177.2. remove DELETE Table 6.541. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.177.3. update PUT Update the specified user defined scheduling policy in the system. Table 6.542. Parameters summary Name Type Direction Summary async Boolean In Indicates if the update should be performed asynchronously. policy SchedulingPolicy In/Out 6.178. SchedulingPolicyUnit Table 6.543. Methods summary Name Summary get remove 6.178.1. get GET Table 6.544. Parameters summary Name Type Direction Summary filter Boolean In Indicates if the results should be filtered according to the permissions of the user. follow String In Indicates which inner links should be followed . unit SchedulingPolicyUnit Out 6.178.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.178.2. remove DELETE Table 6.545. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.179. SchedulingPolicyUnits Manages the set of scheduling policy units available in the system. Table 6.546. Methods summary Name Summary list Returns the list of scheduling policy units available in the system. 6.179.1. list GET Returns the list of scheduling policy units available in the system. The order of the returned list of scheduling policy units isn't guaranteed. Table 6.547. Parameters summary Name Type Direction Summary filter Boolean In Indicates if the results should be filtered according to the permissions of the user. follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of policy units to return. units SchedulingPolicyUnit[] Out 6.179.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.179.1.2. max Sets the maximum number of policy units to return. If not specified all the policy units are returned. 6.180. Snapshot Table 6.548. Methods summary Name Summary get remove restore Restores a virtual machine snapshot. 6.180.1. get GET Table 6.549. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . snapshot Snapshot Out 6.180.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.180.2. remove DELETE Table 6.550. Parameters summary Name Type Direction Summary all_content Boolean In Indicates if all the attributes of the virtual machine snapshot should be included in the response. async Boolean In Indicates if the remove should be performed asynchronously. 6.180.2.1. 
all_content Indicates if all the attributes of the virtual machine snapshot should be included in the response. By default the attribute initialization.configuration.data is excluded. For example, to retrieve the complete representation of the snapshot with id 456 of the virtual machine with id 123 send a request like this: 6.180.3. restore POST Restores a virtual machine snapshot. For example, to restore the snapshot with identifier 456 of virtual machine with identifier 123 send a request like this: With an empty action in the body: <action/> Note Confirm that the commit operation is finished and the virtual machine is down before running the virtual machine. Table 6.551. Parameters summary Name Type Direction Summary async Boolean In Indicates if the restore should be performed asynchronously. disks Disk[] In Specify the disks included in the snapshot's restore. restore_memory Boolean In 6.180.3.1. disks Specify the disks included in the snapshot's restore. For each disk parameter, it is also required to specify its image_id . For example, to restore a snapshot with an identifier 456 of a virtual machine with identifier 123 , including a disk with identifier 111 and image_id of 222 , send a request like this: Request body: <action> <disks> <disk id="111"> <image_id>222</image_id> </disk> </disks> </action> 6.181. SnapshotCdrom Table 6.552. Methods summary Name Summary get 6.181.1. get GET Table 6.553. Parameters summary Name Type Direction Summary cdrom Cdrom Out follow String In Indicates which inner links should be followed . 6.181.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.182. SnapshotCdroms Manages the set of CD-ROM devices of a virtual machine snapshot. Table 6.554. Methods summary Name Summary list Returns the list of CD-ROM devices of the snapshot. 6.182.1. list GET Returns the list of CD-ROM devices of the snapshot. The order of the returned list of CD-ROM devices isn't guaranteed. Table 6.555. Parameters summary Name Type Direction Summary cdroms Cdrom[] Out follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of CDROMS to return. 6.182.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.182.1.2. max Sets the maximum number of CDROMS to return. If not specified all the CDROMS are returned. 6.183. SnapshotDisk Table 6.556. Methods summary Name Summary get 6.183.1. get GET Table 6.557. Parameters summary Name Type Direction Summary disk Disk Out follow String In Indicates which inner links should be followed . 6.183.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.184. SnapshotDisks Manages the set of disks of an snapshot. Table 6.558. Methods summary Name Summary list Returns the list of disks of the snapshot. 6.184.1. list GET Returns the list of disks of the snapshot. The order of the returned list of disks isn't guaranteed. Table 6.559. Parameters summary Name Type Direction Summary disks Disk[] Out follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of disks to return. 6.184.1.1. follow Indicates which inner links should be followed . 
The objects referenced by these links will be fetched as part of the current request. See here for details. 6.184.1.2. max Sets the maximum number of disks to return. If not specified all the disks are returned. 6.185. SnapshotNic Table 6.560. Methods summary Name Summary get 6.185.1. get GET Table 6.561. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . nic Nic Out 6.185.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.186. SnapshotNics Manages the set of NICs of an snapshot. Table 6.562. Methods summary Name Summary list Returns the list of NICs of the snapshot. 6.186.1. list GET Returns the list of NICs of the snapshot. The order of the returned list of NICs isn't guaranteed. Table 6.563. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of NICs to return. nics Nic[] Out 6.186.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.186.1.2. max Sets the maximum number of NICs to return. If not specified all the NICs are returned. 6.187. Snapshots Manages the set of snapshots of a storage domain or virtual machine. Table 6.564. Methods summary Name Summary add Creates a virtual machine snapshot. list Returns the list of snapshots of the storage domain or virtual machine. 6.187.1. add POST Creates a virtual machine snapshot. For example, to create a new snapshot for virtual machine 123 send a request like this: With a request body like this: <snapshot> <description>My snapshot</description> </snapshot> For including only a sub-set of disks in the snapshots, add disk_attachments element to the request body. Note that disks which are not specified in disk_attachments element will not be a part of the snapshot. If an empty disk_attachments element is passed, the snapshot will include only the virtual machine configuration. If no disk_attachments element is passed, then all the disks will be included in the snapshot. For each disk, image_id element can be specified for setting the new active image id. This is used in order to restore a chain of images from backup. I.e. when restoring a disk with snapshots, the relevant image_id should be specified for each snapshot (so the identifiers of the disk snapshots are identical to the backup). <snapshot> <description>My snapshot</description> <disk_attachments> <disk_attachment> <disk id="123"> <image_id>456</image_id> </disk> </disk_attachment> </disk_attachments> </snapshot> Important When a snapshot is created the default value for the persist_memorystate attribute is true . That means that the content of the memory of the virtual machine will be included in the snapshot, and it also means that the virtual machine will be paused for a longer time. That can negatively affect applications that are very sensitive to timing (NTP servers, for example). In those cases make sure that you set the attribute to false : <snapshot> <description>My snapshot</description> <persist_memorystate>false</persist_memorystate> </snapshot> Table 6.565. Parameters summary Name Type Direction Summary snapshot Snapshot In/Out 6.187.2. list GET Returns the list of snapshots of the storage domain or virtual machine. The order of the returned list of snapshots isn't guaranteed. 
Table 6.566. Parameters summary Name Type Direction Summary all_content Boolean In Indicates if all the attributes of the virtual machine snapshot should be included in the response. follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of snapshots to return. snapshots Snapshot[] Out 6.187.2.1. all_content Indicates if all the attributes of the virtual machine snapshot should be included in the response. By default the attribute initialization.configuration.data is excluded. For example, to retrieve the complete representation of the virtual machine with id 123 snapshots send a request like this: 6.187.2.2. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.187.2.3. max Sets the maximum number of snapshots to return. If not specified all the snapshots are returned. 6.188. SshPublicKey Table 6.567. Methods summary Name Summary get remove update 6.188.1. get GET Table 6.568. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . key SshPublicKey Out 6.188.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.188.2. remove DELETE Table 6.569. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.188.3. update PUT Table 6.570. Parameters summary Name Type Direction Summary async Boolean In Indicates if the update should be performed asynchronously. key SshPublicKey In/Out 6.189. SshPublicKeys Table 6.571. Methods summary Name Summary add list Returns a list of SSH public keys of the user. 6.189.1. add POST Table 6.572. Parameters summary Name Type Direction Summary key SshPublicKey In/Out 6.189.2. list GET Returns a list of SSH public keys of the user. For example, to retrieve the list of SSH keys of user with identifier 123 , send a request like this: The result will be the following XML document: <ssh_public_keys> <ssh_public_key href="/ovirt-engine/api/users/123/sshpublickeys/456" id="456"> <content>ssh-rsa ...</content> <user href="/ovirt-engine/api/users/123" id="123"/> </ssh_public_key> </ssh_public_keys> Or the following JSON object { "ssh_public_key": [ { "content": "ssh-rsa ...", "user": { "href": "/ovirt-engine/api/users/123", "id": "123" }, "href": "/ovirt-engine/api/users/123/sshpublickeys/456", "id": "456" } ] } The order of the returned list of keys is not guaranteed. Table 6.573. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . keys SshPublicKey[] Out max Integer In Sets the maximum number of keys to return. 6.189.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.189.2.2. max Sets the maximum number of keys to return. If not specified all the keys are returned. 6.190. Statistic Table 6.574. Methods summary Name Summary get 6.190.1. get GET Table 6.575. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . statistic Statistic Out 6.190.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.191. Statistics Table 6.576. 
Methods summary Name Summary list Retrieves a list of statistics. 6.191.1. list GET Retrieves a list of statistics. For example, to retrieve the statistics for virtual machine 123 send a request like this: The result will be like this: <statistics> <statistic href="/ovirt-engine/api/vms/123/statistics/456" id="456"> <name>memory.installed</name> <description>Total memory configured</description> <kind>gauge</kind> <type>integer</type> <unit>bytes</unit> <values> <value> <datum>1073741824</datum> </value> </values> <vm href="/ovirt-engine/api/vms/123" id="123"/> </statistic> ... </statistics> Just a single part of the statistics can be retrieved by specifying its id at the end of the URI. That means: Outputs: <statistic href="/ovirt-engine/api/vms/123/statistics/456" id="456"> <name>memory.installed</name> <description>Total memory configured</description> <kind>gauge</kind> <type>integer</type> <unit>bytes</unit> <values> <value> <datum>1073741824</datum> </value> </values> <vm href="/ovirt-engine/api/vms/123" id="123"/> </statistic> The order of the returned list of statistics isn't guaranteed. Table 6.577. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of statistics to return. statistics Statistic[] Out 6.191.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.191.1.2. max Sets the maximum number of statistics to return. If not specified all the statistics are returned. 6.192. Step A service to manage a step. Table 6.578. Methods summary Name Summary end Marks an external step execution as ended. get Retrieves a step. 6.192.1. end POST Marks an external step execution as ended. For example, to terminate a step with identifier 456 which belongs to a job with identifier 123 send the following request: With the following request body: <action> <force>true</force> <succeeded>true</succeeded> </action> Table 6.579. Parameters summary Name Type Direction Summary async Boolean In Indicates if the action should be performed asynchronously. force Boolean In Indicates if the step should be forcibly terminated. succeeded Boolean In Indicates if the step should be marked as successfully finished or as failed. 6.192.1.1. succeeded Indicates if the step should be marked as successfully finished or as failed. This parameter is optional, and the default value is true . 6.192.2. get GET Retrieves a step. You will receive response in XML like this one: <step href="/ovirt-engine/api/jobs/123/steps/456" id="456"> <actions> <link href="/ovirt-engine/api/jobs/123/steps/456/end" rel="end"/> </actions> <description>Validating</description> <end_time>2016-12-12T23:07:26.627+02:00</end_time> <external>false</external> <number>0</number> <start_time>2016-12-12T23:07:26.605+02:00</start_time> <status>finished</status> <type>validating</type> <job href="/ovirt-engine/api/jobs/123" id="123"/> </step> Table 6.580. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . step Step Out Retrieves the representation of the step. 6.192.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.193. Steps A service to manage steps. Table 6.581. Methods summary Name Summary add Add an external step to an existing job or to an existing step. 
list Retrieves the representation of the steps. 6.193.1. add POST Add an external step to an existing job or to an existing step. For example, to add a step to the job with identifier 123 , send the following request: With the following request body: <step> <description>Validating</description> <start_time>2016-12-12T23:07:26.605+02:00</start_time> <status>started</status> <type>validating</type> </step> The response should look like: <step href="/ovirt-engine/api/jobs/123/steps/456" id="456"> <actions> <link href="/ovirt-engine/api/jobs/123/steps/456/end" rel="end"/> </actions> <description>Validating</description> <link href="/ovirt-engine/api/jobs/123/steps/456/statistics" rel="statistics"/> <external>true</external> <number>2</number> <start_time>2016-12-13T01:06:15.380+02:00</start_time> <status>started</status> <type>validating</type> <job href="/ovirt-engine/api/jobs/123" id="123"/> </step> Table 6.582. Parameters summary Name Type Direction Summary step Step In/Out Step that will be added. 6.193.2. list GET Retrieves the representation of the steps. You will receive a response in XML like this: <steps> <step href="/ovirt-engine/api/jobs/123/steps/456" id="456"> <actions> <link href="/ovirt-engine/api/jobs/123/steps/456/end" rel="end"/> </actions> <description>Validating</description> <link href="/ovirt-engine/api/jobs/123/steps/456/statistics" rel="statistics"/> <external>true</external> <number>2</number> <start_time>2016-12-13T01:06:15.380+02:00</start_time> <status>started</status> <type>validating</type> <job href="/ovirt-engine/api/jobs/123" id="123"/> </step> ... </steps> The order of the returned list of steps isn't guaranteed. Table 6.583. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of steps to return. steps Step[] Out A representation of steps. 6.193.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.193.2.2. max Sets the maximum number of steps to return. If not specified all the steps are returned.
6.194. Storage Table 6.584. Methods summary Name Summary get 6.194.1. get GET Table 6.585. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . report_status Boolean In Indicates if the status of the LUNs in the storage should be checked. storage HostStorage Out 6.194.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.194.1.2. report_status Indicates if the status of the LUNs in the storage should be checked. Checking the status of the LUN is a heavyweight operation and this data is not always needed by the user. This parameter gives the option of not performing the status check of the LUNs. The default is true for backward compatibility.
Here an example with the LUN status : <host_storage id="360014051136c20574f743bdbd28177fd"> <logical_units> <logical_unit id="360014051136c20574f743bdbd28177fd"> <lun_mapping>0</lun_mapping> <paths>1</paths> <product_id>lun0</product_id> <serial>SLIO-ORG_lun0_1136c205-74f7-43bd-bd28-177fd5ce6993</serial> <size>10737418240</size> <status>used</status> <vendor_id>LIO-ORG</vendor_id> <volume_group_id>O9Du7I-RahN-ECe1-dZ1w-nh0b-64io-MNzIBZ</volume_group_id> </logical_unit> </logical_units> <type>iscsi</type> <host id="8bb5ade5-e988-4000-8b93-dbfc6717fe50"/> </host_storage> Here an example without the LUN status : <host_storage id="360014051136c20574f743bdbd28177fd"> <logical_units> <logical_unit id="360014051136c20574f743bdbd28177fd"> <lun_mapping>0</lun_mapping> <paths>1</paths> <product_id>lun0</product_id> <serial>SLIO-ORG_lun0_1136c205-74f7-43bd-bd28-177fd5ce6993</serial> <size>10737418240</size> <vendor_id>LIO-ORG</vendor_id> <volume_group_id>O9Du7I-RahN-ECe1-dZ1w-nh0b-64io-MNzIBZ</volume_group_id> </logical_unit> </logical_units> <type>iscsi</type> <host id="8bb5ade5-e988-4000-8b93-dbfc6717fe50"/> </host_storage> 6.195. StorageDomain Table 6.586. Methods summary Name Summary get Retrieves the description of the storage domain. isattached Used for querying if the storage domain is already attached to a data center using the is_attached boolean field, which is part of the storage server. reduceluns This operation reduces logical units from the storage domain. refreshluns This operation refreshes the LUN size. remove Removes the storage domain. update Updates a storage domain. updateovfstore This operation forces the update of the OVF_STORE of this storage domain. 6.195.1. get GET Retrieves the description of the storage domain. Table 6.587. Parameters summary Name Type Direction Summary filter Boolean In Indicates if the results should be filtered according to the permissions of the user. follow String In Indicates which inner links should be followed . storage_domain StorageDomain Out The description of the storage domain. 6.195.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.195.2. isattached POST Used for querying if the storage domain is already attached to a data center using the is_attached boolean field, which is part of the storage server. IMPORTANT: Executing this API will cause the host to disconnect from the storage domain. Table 6.588. Parameters summary Name Type Direction Summary async Boolean In Indicates if the action should be performed asynchronously. host Host In Indicates the data center's host. is_attached Boolean Out Indicates whether the storage domain is attached to the data center. 6.195.3. reduceluns POST This operation reduces logical units from the storage domain. In order to do so the data stored on the provided logical units will be moved to other logical units of the storage domain and only then they will be reduced from the storage domain. For example, in order to reduce two logical units from a storage domain send a request like this: With a request body like this: <action> <logical_units> <logical_unit id="1IET_00010001"/> <logical_unit id="1IET_00010002"/> </logical_units> </action> Table 6.589. Parameters summary Name Type Direction Summary logical_units LogicalUnit[] In The logical units that need to be reduced from the storage domain. 6.195.4. refreshluns POST This operation refreshes the LUN size. 
After increasing the size of the underlying LUN on the storage server, the user can refresh the LUN size. This action forces a rescan of the provided LUNs and updates the database with the new size, if required. For example, to refresh the size of two LUNs, send a request like this: With a request body like this: <action> <logical_units> <logical_unit id="1IET_00010001"/> <logical_unit id="1IET_00010002"/> </logical_units> </action> Table 6.590. Parameters summary Name Type Direction Summary async Boolean In Indicates if the refresh should be performed asynchronously. logical_units LogicalUnit[] In The LUNs that need to be refreshed. 6.195.5. remove DELETE Removes the storage domain. Without any special parameters, the storage domain is detached from the system and removed from the database. The storage domain can then be imported to the same or to a different setup, with all the data on it. If the storage is not accessible the operation will fail. If the destroy parameter is true then the operation will always succeed, even if the storage is not accessible; the failure is just ignored and the storage domain is removed from the database anyway. If the format parameter is true then the actual storage is formatted, and the metadata is removed from the LUN or directory, so it can no longer be imported to the same or to a different setup. Table 6.591. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. destroy Boolean In Indicates if the operation should succeed, and the storage domain removed from the database, even if the storage is not accessible. format Boolean In Indicates if the actual storage should be formatted, removing all the metadata from the underlying LUN or directory, for example: DELETE /ovirt-engine/api/storagedomains/123?format=true This parameter is optional, and the default value is false . host String In Indicates which host should be used to remove the storage domain. 6.195.5.1. destroy Indicates if the operation should succeed, and the storage domain removed from the database, even if the storage is not accessible. This parameter is optional, and the default value is false . When the value of destroy is true the host parameter will be ignored. 6.195.5.2. host Indicates which host should be used to remove the storage domain. This parameter is mandatory, except if the destroy parameter is included and its value is true ; in that case the host parameter will be ignored. The value should contain the name or the identifier of the host. For example, to use the host named myhost to remove the storage domain with identifier 123 send a request like this: 6.195.6. update PUT Updates a storage domain. Not all of the StorageDomain 's attributes are updatable after creation. Those that can be updated are: name , description , comment , warning_low_space_indicator , critical_space_action_blocker and wipe_after_delete . (Note that changing the wipe_after_delete attribute will not change the wipe after delete property of disks that already exist.) To update the name and wipe_after_delete attributes of a storage domain with an identifier 123 , send a request as follows: With a request body as follows: <storage_domain> <name>data2</name> <wipe_after_delete>true</wipe_after_delete> </storage_domain> Table 6.592. Parameters summary Name Type Direction Summary async Boolean In Indicates if the update should be performed asynchronously. storage_domain StorageDomain In/Out The updated storage domain.
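The request lines for the remove and update examples above are not included in this excerpt. Plausible forms, using the storage domain identifier 123 and passing the host as a URL parameter in the same way as the format example above, are:
DELETE /ovirt-engine/api/storagedomains/123?host=myhost for the remove described in section 6.195.5.2, and
PUT /ovirt-engine/api/storagedomains/123 for the update in section 6.195.6, sent with the <storage_domain> body shown above.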
updateovfstore POST This operation forces the update of the OVF_STORE of this storage domain. The OVF_STORE is a disk image that contains the metadata of virtual machines and disks that reside in the storage domain. This metadata is used in case the domain is imported or exported to or from a different data center or a different installation. By default the OVF_STORE is updated periodically (set by default to 60 minutes) but users might want to force an update after an important change, or when the they believe the OVF_STORE is corrupt. When initiated by the user, OVF_STORE update will be performed whether an update is needed or not. Table 6.593. Parameters summary Name Type Direction Summary async Boolean In Indicates if the OVF_STORE update should be performed asynchronously. 6.196. StorageDomainContentDisk Table 6.594. Methods summary Name Summary get 6.196.1. get GET Table 6.595. Parameters summary Name Type Direction Summary disk Disk Out filter Boolean In Indicates if the results should be filtered according to the permissions of the user. follow String In Indicates which inner links should be followed . 6.196.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.197. StorageDomainContentDisks Manages the set of disks available in a storage domain. Table 6.596. Methods summary Name Summary list Returns the list of disks available in the storage domain. 6.197.1. list GET Returns the list of disks available in the storage domain. The order of the returned list of disks is guaranteed only if the sortby clause is included in the search parameter. Table 6.597. Parameters summary Name Type Direction Summary case_sensitive Boolean In Indicates if the search performed using the search parameter should be performed taking case into account. disks Disk[] Out follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of disks to return. search String In A query string used to restrict the returned disks. 6.197.1.1. case_sensitive Indicates if the search performed using the search parameter should be performed taking case into account. The default value is true , which means that case is taken into account. If you want to search ignoring case set it to false . 6.197.1.2. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.197.1.3. max Sets the maximum number of disks to return. If not specified all the disks are returned. 6.198. StorageDomainDisk Manages a single disk available in a storage domain. Important Since version 4.2 of the engine this service is intended only to list disks available in the storage domain, and to register unregistered disks. All the other operations, like copying a disk, moving a disk, etc, have been deprecated and will be removed in the future. To perform those operations use the service that manages all the disks of the system , or the service that manages an specific disk . Table 6.598. Methods summary Name Summary copy Copies a disk to the specified storage domain. export Exports a disk to an export storage domain. get Retrieves the description of the disk. move Moves a disk to another storage domain. reduce Reduces the size of the disk image. remove Removes a disk. sparsify Sparsify the disk. update Updates the disk. 6.198.1. copy POST Copies a disk to the specified storage domain. 
Important Since version 4.2 of the engine this operation is deprecated, and preserved only for backwards compatibility. It will be removed in the future. To copy a disk use the copy operation of the service that manages that disk. Table 6.599. Parameters summary Name Type Direction Summary disk Disk In Description of the resulting disk. storage_domain StorageDomain In The storage domain where the new disk will be created. 6.198.2. export POST Exports a disk to an export storage domain. Important Since version 4.2 of the engine this operation is deprecated, and preserved only for backwards compatibility. It will be removed in the future. To export a disk use the export operation of the service that manages that disk. Table 6.600. Parameters summary Name Type Direction Summary storage_domain StorageDomain In The export storage domain where the disk should be exported to. 6.198.3. get GET Retrieves the description of the disk. Table 6.601. Parameters summary Name Type Direction Summary disk Disk Out The description of the disk. follow String In Indicates which inner links should be followed . 6.198.3.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.198.4. move POST Moves a disk to another storage domain. Important Since version 4.2 of the engine this operation is deprecated, and preserved only for backwards compatibility. It will be removed in the future. To move a disk use the move operation of the service that manages that disk. Table 6.602. Parameters summary Name Type Direction Summary async Boolean In Indicates if the move should be performed asynchronously. filter Boolean In Indicates if the results should be filtered according to the permissions of the user. storage_domain StorageDomain In The storage domain where the disk will be moved to. 6.198.5. reduce POST Reduces the size of the disk image. Invokes reduce on the logical volume (i.e. this is only applicable for block storage domains). This is applicable for floating disks and disks attached to non-running virtual machines. There is no need to specify the size as the optimal size is calculated automatically. Table 6.603. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.198.6. remove DELETE Removes a disk. Important Since version 4.2 of the engine this operation is deprecated, and preserved only for backwards compatibility. It will be removed in the future. To remove a disk use the remove operation of the service that manages that disk. 6.198.7. sparsify POST Sparsify the disk. Important Since version 4.2 of the engine this operation is deprecated, and preserved only for backwards compatibility. It will be removed in the future. To remove a disk use the remove operation of the service that manages that disk. 6.198.8. update PUT Updates the disk. Important Since version 4.2 of the engine this operation is deprecated, and preserved only for backwards compatibility. It will be removed in the future. To update a disk use the update operation of the service that manages that disk. Table 6.604. Parameters summary Name Type Direction Summary disk Disk In/Out The update to apply to the disk. 6.199. StorageDomainDisks Manages the collection of disks available inside a specific storage domain. Table 6.605. Methods summary Name Summary add Adds or registers a disk. list Retrieves the list of disks that are available in the storage domain. 6.199.1. 
add POST Adds or registers a disk. Important Since version 4.2 of the Red Hat Virtualization Manager this operation is deprecated, and preserved only for backwards compatibility. It will be removed in the future. To add a new disk use the add operation of the service that manages the disks of the system. To register an unregistered disk use the register operation of the service that manages that disk. Table 6.606. Parameters summary Name Type Direction Summary disk Disk In/Out The disk to add or register. unregistered Boolean In Indicates if a new disk should be added or if an existing unregistered disk should be registered. 6.199.1.1. unregistered Indicates if a new disk should be added or if an existing unregistered disk should be registered. If the value is true then the identifier of the disk to register needs to be provided. For example, to register the disk with ID 456 send a request like this: With a request body like this: <disk id="456"/> If the value is false then a new disk will be created in the storage domain. In that case the provisioned_size , format , and name attributes are mandatory. For example, to create a new copy on write disk of 1 GiB, send a request like this: With a request body like this: <disk> <name>mydisk</name> <format>cow</format> <provisioned_size>1073741824</provisioned_size> </disk> The default value is false . This parameter has been deprecated since version 4.2 of the Red Hat Virtualization Manager. 6.199.2. list GET Retrieves the list of disks that are available in the storage domain. The order of the returned list of disks is not guaranteed. Table 6.607. Parameters summary Name Type Direction Summary disks Disk[] Out The list of retrieved disks. follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of disks to return. unregistered Boolean In Indicates whether to retrieve a list of registered or unregistered disks in the storage domain. 6.199.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.199.2.2. max Sets the maximum number of disks to return. If not specified, all the disks are returned. 6.199.2.3. unregistered Indicates whether to retrieve a list of registered or unregistered disks in the storage domain. To get a list of unregistered disks in the storage domain the call should indicate the unregistered flag. For example, to get a list of unregistered disks the REST API call should look like this: The default value of the unregistered flag is false . The request only applies to storage domains that are attached. 6.200. StorageDomainServerConnection Table 6.608. Methods summary Name Summary get remove Detaches a storage connection from storage. 6.200.1. get GET Table 6.609. Parameters summary Name Type Direction Summary connection StorageConnection Out follow String In Indicates which inner links should be followed . 6.200.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.200.2. remove DELETE Detaches a storage connection from storage. Table 6.610. Parameters summary Name Type Direction Summary async Boolean In Indicates if the action should be performed asynchronously. 6.201. StorageDomainServerConnections Manages the set of connections to storage servers that exist in a storage domain. Table 6.611. 
Methods summary Name Summary add list Returns the list of connections to storage servers that exist in the storage domain. 6.201.1. add POST Table 6.612. Parameters summary Name Type Direction Summary connection StorageConnection In/Out 6.201.2. list GET Returns the list of connections to storage servers that exist in the storage domain. The order of the returned list of connections isn't guaranteed. Table 6.613. Parameters summary Name Type Direction Summary connections StorageConnection[] Out follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of connections to return. 6.201.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.201.2.2. max Sets the maximum number of connections to return. If not specified all the connections are returned. 6.202. StorageDomainTemplate Table 6.614. Methods summary Name Summary get import Action to import a template from an export storage domain. register Registering the Template means importing the Template from the data domain by inserting the configuration of the Template and disks into the database without the copy process. remove 6.202.1. get GET Table 6.615. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . template Template Out 6.202.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.202.2. import POST Action to import a template from an export storage domain. For example, to import the template 456 from the storage domain 123 send the following request: With the following request body: <action> <storage_domain> <name>myexport</name> </storage_domain> <cluster> <name>mycluster</name> </cluster> </action> If you register an entity without specifying the cluster ID or name, the cluster name from the entity's OVF will be used (unless the register request also includes the cluster mapping). Table 6.616. Parameters summary Name Type Direction Summary async Boolean In Indicates if the import should be performed asynchronously. clone Boolean In Use the optional clone parameter to generate new UUIDs for the imported template and its entities. cluster Cluster In exclusive Boolean In storage_domain StorageDomain In template Template In vm Vm In 6.202.2.1. clone Use the optional clone parameter to generate new UUIDs for the imported template and its entities. You can import a template with the clone parameter set to false when importing a template from an export domain, with templates that were exported by a different Red Hat Virtualization environment. 6.202.3. register POST Registering the Template means importing the Template from the data domain by inserting the configuration of the Template and disks into the database without the copy process. Table 6.617. Parameters summary Name Type Direction Summary allow_partial_import Boolean In Indicates whether a template is allowed to be registered with only some of its disks. async Boolean In Indicates if the registration should be performed asynchronously. clone Boolean In cluster Cluster In exclusive Boolean In registration_configuration RegistrationConfiguration In This parameter describes how the template should be registered.
template Template In vnic_profile_mappings VnicProfileMapping[] In Deprecated attribute describing mapping rules for virtual NIC profiles that will be applied during the import\register process. 6.202.3.1. allow_partial_import Indicates whether a template is allowed to be registered with only some of its disks. If this flag is true , the system will not fail in the validation process if an image is not found, but instead it will allow the template to be registered without the missing disks. This is mainly used during registration of a template when some of the storage domains are not available. The default value is false . 6.202.3.2. registration_configuration This parameter describes how the template should be registered. This parameter is optional. If the parameter is not specified, the template will be registered with the same configuration that it had in the original environment where it was created. 6.202.3.3. vnic_profile_mappings Deprecated attribute describing mapping rules for virtual NIC profiles that will be applied during the import\register process. Warning Please note that this attribute has been deprecated since version 4.2.1 of the engine, and preserved only for backward compatibility. It will be removed in the future. To specify vnic_profile_mappings use the vnic_profile_mappings attribute inside the RegistrationConfiguration type. 6.202.4. remove DELETE Table 6.618. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.203. StorageDomainTemplates Manages the set of templates available in a storage domain. Table 6.619. Methods summary Name Summary list Returns the list of templates available in the storage domain. 6.203.1. list GET Returns the list of templates available in the storage domain. The order of the returned list of templates isn't guaranteed. Table 6.620. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of templates to return. templates Template[] Out unregistered Boolean In Indicates whether to retrieve a list of registered or unregistered templates which contain disks on the storage domain. 6.203.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.203.1.2. max Sets the maximum number of templates to return. If not specified all the templates are returned. 6.203.1.3. unregistered Indicates whether to retrieve a list of registered or unregistered templates which contain disks on the storage domain. To get a list of unregistered templates the call should indicate the unregistered flag. For example, to get a list of unregistered templates the REST API call should look like this: The default value of the unregistered flag is false . The request only applies to storage domains that are attached. 6.204. StorageDomainVm Table 6.621. Methods summary Name Summary get import Imports a virtual machine from an export storage domain. register remove Deletes a virtual machine from an export storage domain. 6.204.1. get GET Table 6.622. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . vm Vm Out 6.204.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details.
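As a hedged illustration of the get operation above, the following minimal Python sketch retrieves a virtual machine that resides in an export storage domain, using the requests library. The engine URL, credentials, and CA file path are placeholders; the /storagedomains/123/vms/456 path follows the href form shown in the StorageDomainVms listing later in this chapter.

import requests

# Placeholder connection details -- adjust for your engine.
API = "https://engine.example.com/ovirt-engine/api"
AUTH = ("admin@internal", "secret")
CA_FILE = "/etc/pki/ovirt-engine/ca.pem"

# Fetch virtual machine 456 from export storage domain 123.
response = requests.get(
    f"{API}/storagedomains/123/vms/456",
    headers={"Accept": "application/xml"},
    auth=AUTH,
    verify=CA_FILE,
)
response.raise_for_status()
print(response.text)   # the <vm> representation, including the import action link

6.204.2.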
import POST Imports a virtual machine from an export storage domain. For example, send a request like this: With a request body like this: <action> <storage_domain> <name>mydata</name> </storage_domain> <cluster> <name>mycluster</name> </cluster> </action> To import a virtual machine as a new entity add the clone parameter: <action> <storage_domain> <name>mydata</name> </storage_domain> <cluster> <name>mycluster</name> </cluster> <clone>true</clone> <vm> <name>myvm</name> </vm> </action> Include an optional disks parameter to choose which disks to import. For example, to import the disks of the template that have the identifiers 123 and 456 send the following request body: <action> <cluster> <name>mycluster</name> </cluster> <vm> <name>myvm</name> </vm> <disks> <disk id="123"/> <disk id="456"/> </disks> </action> If you register an entity without specifying the cluster ID or name, the cluster name from the entity's OVF will be used (unless the register request also includes the cluster mapping). Table 6.623. Parameters summary Name Type Direction Summary async Boolean In Indicates if the import should be performed asynchronously. clone Boolean In Indicates if the identifiers of the imported virtual machine should be regenerated. cluster Cluster In collapse_snapshots Boolean In Indicates if the snapshots of the virtual machine that is imported should be collapsed, so that the result will be a virtual machine without snapshots. exclusive Boolean In storage_domain StorageDomain In vm Vm In 6.204.2.1. clone Indicates if the identifiers of the imported virtual machine should be regenerated. By default when a virtual machine is imported the identifiers are preserved. This means that the same virtual machine can't be imported multiple times, as the identifiers need to be unique. To allow importing the same machine multiple times set this parameter to true , as the default is false . 6.204.2.2. collapse_snapshots Indicates if the snapshots of the virtual machine that is imported should be collapsed, so that the result will be a virtual machine without snapshots. This parameter is optional, and if it isn't explicitly specified the default value is false . 6.204.3. register POST Table 6.624. Parameters summary Name Type Direction Summary allow_partial_import Boolean In Indicates whether a virtual machine is allowed to be registered with only some of its disks. async Boolean In Indicates if the registration should be performed asynchronously. clone Boolean In cluster Cluster In reassign_bad_macs Boolean In Indicates if the problematic MAC addresses should be re-assigned during the import process by the engine. registration_configuration RegistrationConfiguration In This parameter describes how the virtual machine should be registered. vm Vm In vnic_profile_mappings VnicProfileMapping[] In Deprecated attribute describing mapping rules for virtual NIC profiles that will be applied during the import\register process. 6.204.3.1. allow_partial_import Indicates whether a virtual machine is allowed to be registered with only some of its disks. If this flag is true , the engine will not fail in the validation process if an image is not found, but instead it will allow the virtual machine to be registered without the missing disks. This is mainly used during registration of a virtual machine when some of the storage domains are not available. The default value is false . 6.204.3.2. reassign_bad_macs Indicates if the problematic MAC addresses should be re-assigned during the import process by the engine.
A MAC address would be considered as a problematic one if one of the following is true: It conflicts with a MAC address that is already allocated to a virtual machine in the target environment. It's out of the range of the target MAC address pool. 6.204.3.3. registration_configuration This parameter describes how the virtual machine should be registered. This parameter is optional. If the parameter is not specified, the virtual machine will be registered with the same configuration that it had in the original environment where it was created. 6.204.3.4. vnic_profile_mappings Deprecated attribute describing mapping rules for virtual NIC profiles that will be applied during the import\register process. Warning Please note that this attribute has been deprecated since version 4.2.1 of the engine, and preserved only for backward compatibility. It will be removed in the future. To specify vnic_profile_mappings use the vnic_profile_mappings attribute inside the RegistrationConfiguration type. 6.204.4. remove DELETE Deletes a virtual machine from an export storage domain. For example, to delete the virtual machine 456 from the storage domain 123 , send a request like this: Table 6.625. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.205. StorageDomainVmDiskAttachment Returns the details of the disks attached to a virtual machine in the export domain. Table 6.626. Methods summary Name Summary get Returns the details of the attachment with all its properties and a link to the disk. 6.205.1. get GET Returns the details of the attachment with all its properties and a link to the disk. Table 6.627. Parameters summary Name Type Direction Summary attachment DiskAttachment Out The disk attachment. follow String In Indicates which inner links should be followed . 6.205.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.206. StorageDomainVmDiskAttachments Returns the details of a disk attached to a virtual machine in the export domain. Table 6.628. Methods summary Name Summary list List the disks that are attached to the virtual machine. 6.206.1. list GET List the disks that are attached to the virtual machine. The order of the returned list of disk attachments isn't guaranteed. Table 6.629. Parameters summary Name Type Direction Summary attachments DiskAttachment[] Out follow String In Indicates which inner links should be followed . 6.206.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.207. StorageDomainVms Lists the virtual machines of an export storage domain. For example, to retrieve the virtual machines that are available in the storage domain with identifier 123 send the following request: This will return the following response body: <vms> <vm id="456" href="/api/storagedomains/123/vms/456"> <name>vm1</name> ... <storage_domain id="123" href="/api/storagedomains/123"/> <actions> <link rel="import" href="/api/storagedomains/123/vms/456/import"/> </actions> </vm> </vms> Virtual machines and templates in these collections have a similar representation to their counterparts in the top-level Vm and Template collections, except they also contain a StorageDomain reference and an import action. Table 6.630. 
Methods summary Name Summary list Returns the list of virtual machines of the export storage domain. 6.207.1. list GET Returns the list of virtual machines of the export storage domain. The order of the returned list of virtual machines isn't guaranteed. Table 6.631. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of virtual machines to return. unregistered Boolean In Indicates whether to retrieve a list of registered or unregistered virtual machines which contain disks on the storage domain. vm Vm[] Out 6.207.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.207.1.2. max Sets the maximum number of virtual machines to return. If not specified all the virtual machines are returned. 6.207.1.3. unregistered Indicates whether to retrieve a list of registered or unregistered virtual machines which contain disks on the storage domain. To get a list of unregistered virtual machines the call should indicate the unregistered flag. For example, to get a list of unregistered virtual machines the REST API call should look like this: The default value of the unregisterd flag is false . The request only apply to storage domains that are attached. 6.208. StorageDomains Manages the set of storage domains in the system. Table 6.632. Methods summary Name Summary add Adds a new storage domain. list Returns the list of storage domains in the system. 6.208.1. add POST Adds a new storage domain. Creation of a new StorageDomain requires the name , type , host , and storage attributes. Identify the host attribute with the id or name attributes. In Red Hat Virtualization 3.6 and later you can enable the wipe after delete option by default on the storage domain. To configure this, specify wipe_after_delete in the POST request. This option can be edited after the domain is created, but doing so will not change the wipe after delete property of disks that already exist. To add a new storage domain with specified name , type , storage.type , storage.address , and storage.path , and using a host with an id 123 , send a request like this: With a request body like this: <storage_domain> <name>mydata</name> <type>data</type> <storage> <type>nfs</type> <address>mynfs.example.com</address> <path>/exports/mydata</path> </storage> <host> <name>myhost</name> </host> </storage_domain> To create a new NFS ISO storage domain send a request like this: <storage_domain> <name>myisos</name> <type>iso</type> <storage> <type>nfs</type> <address>mynfs.example.com</address> <path>/export/myisos</path> </storage> <host> <name>myhost</name> </host> </storage_domain> To create a new iSCSI storage domain send a request like this: <storage_domain> <name>myiscsi</name> <type>data</type> <storage> <type>iscsi</type> <logical_units> <logical_unit id="3600144f09dbd050000004eedbd340001"/> <logical_unit id="3600144f09dbd050000004eedbd340002"/> </logical_units> </storage> <host> <name>myhost</name> </host> </storage_domain> Table 6.633. Parameters summary Name Type Direction Summary storage_domain StorageDomain In/Out The storage domain to add. 6.208.2. list GET Returns the list of storage domains in the system. The order of the returned list of storage domains is guaranteed only if the sortby clause is included in the search parameter. Table 6.634. 
Parameters summary Name Type Direction Summary case_sensitive Boolean In Indicates if the search should be performed taking case into account. filter Boolean In Indicates if the results should be filtered according to the permissions of the user. follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of storage domains to return. search String In A query string used to restrict the returned storage domains. storage_domains StorageDomain[] Out A list of the storage domains in the system. 6.208.2.1. case_sensitive Indicates if the search should be performed taking case into account. The default value is true , which means that case is taken into account. If you want to search ignoring case, set it to false . 6.208.2.2. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.208.2.3. max Sets the maximum number of storage domains to return. If not specified, all the storage domains are returned. 6.209. StorageServerConnection Table 6.635. Methods summary Name Summary get remove Removes a storage connection. update Updates the storage connection. 6.209.1. get GET Table 6.636. Parameters summary Name Type Direction Summary conection StorageConnection Out follow String In Indicates which inner links should be followed . 6.209.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.209.2. remove DELETE Removes a storage connection. A storage connection can only be deleted if neither storage domain nor LUN disks reference it. The host name or id is optional; providing it disconnects (unmounts) the connection from that host. Table 6.637. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. host String In The name or identifier of the host from which the connection would be unmounted (disconnected). 6.209.2.1. host The name or identifier of the host from which the connection would be unmounted (disconnected). If not provided, no host will be disconnected. For example, to use the host with identifier 456 to delete the storage connection with identifier 123 send a request like this: 6.209.3. update PUT Updates the storage connection. For example, to change the address of an NFS storage server, send a request like this: PUT /ovirt-engine/api/storageconnections/123 With a request body like this: <storage_connection> <address>mynewnfs.example.com</address> </storage_connection> To change the connection of an iSCSI storage server, send a request like this: PUT /ovirt-engine/api/storageconnections/123 With a request body like this: <storage_connection> <port>3260</port> <target>iqn.2017-01.com.myhost:444</target> </storage_connection> Table 6.638. Parameters summary Name Type Direction Summary async Boolean In Indicates if the update should be performed asynchronously. connection StorageConnection In/Out force Boolean In Indicates if the operation should succeed regardless to the relevant storage domain's status (i. 6.209.3.1. force Indicates if the operation should succeed regardless to the relevant storage domain's status (i.e. updating is also applicable when storage domain's status is not maintenance). This parameter is optional, and the default value is false . 6.210. StorageServerConnectionExtension Table 6.639. 
Methods summary Name Summary get remove update Update a storage server connection extension for the given host. 6.210.1. get GET Table 6.640. Parameters summary Name Type Direction Summary extension StorageConnectionExtension Out follow String In Indicates which inner links should be followed . 6.210.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.210.2. remove DELETE Table 6.641. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.210.3. update PUT Update a storage server connection extension for the given host. To update the storage connection 456 of host 123 send a request like this: With a request body like this: <storage_connection_extension> <target>iqn.2016-01.com.example:mytarget</target> <username>myuser</username> <password>mypassword</password> </storage_connection_extension> Table 6.642. Parameters summary Name Type Direction Summary async Boolean In Indicates if the update should be performed asynchronously. extension StorageConnectionExtension In/Out 6.211. StorageServerConnectionExtensions Table 6.643. Methods summary Name Summary add Creates a new storage server connection extension for the given host. list Returns the list of storage connection extensions. 6.211.1. add POST Creates a new storage server connection extension for the given host. The extension lets the user define credentials for an iSCSI target for a specific host. For example, to use myuser and mypassword as the credentials when connecting to the iSCSI target from host 123 send a request like this: With a request body like this: <storage_connection_extension> <target>iqn.2016-01.com.example:mytarget</target> <username>myuser</username> <password>mypassword</password> </storage_connection_extension> Table 6.644. Parameters summary Name Type Direction Summary extension StorageConnectionExtension In/Out 6.211.2. list GET Returns the list of storage connection extensions. The order of the returned list of storage connections isn't guaranteed. Table 6.645. Parameters summary Name Type Direction Summary extensions StorageConnectionExtension[] Out follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of extensions to return. 6.211.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.211.2.2. max Sets the maximum number of extensions to return. If not specified all the extensions are returned. 6.212. StorageServerConnections Table 6.646. Methods summary Name Summary add Creates a new storage connection. list Returns the list of storage connections. 6.212.1. add POST Creates a new storage connection. For example, to create a new storage connection for the NFS server mynfs.example.com and NFS share /export/mydata send a request like this: With a request body like this: <storage_connection> <type>nfs</type> <address>mynfs.example.com</address> <path>/export/mydata</path> <host> <name>myhost</name> </host> </storage_connection> Table 6.647. Parameters summary Name Type Direction Summary connection StorageConnection In/Out 6.212.2. list GET Returns the list of storage connections. The order of the returned list of connections isn't guaranteed.
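As a hedged illustration, the following minimal Python sketch lists the storage connections and prints the address of each one, using the requests library and the element names shown in the add example above. The engine URL, credentials, and CA file path are placeholders.

import requests
import xml.etree.ElementTree as ET

# Placeholder connection details -- adjust for your engine.
API = "https://engine.example.com/ovirt-engine/api"
AUTH = ("admin@internal", "secret")
CA_FILE = "/etc/pki/ovirt-engine/ca.pem"

response = requests.get(
    f"{API}/storageconnections",
    headers={"Accept": "application/xml"},
    auth=AUTH,
    verify=CA_FILE,
)
response.raise_for_status()

# Print the identifier and address of every returned connection.
root = ET.fromstring(response.text)
for connection in root.findall("storage_connection"):
    print(connection.get("id"), connection.findtext("address"))

Table 6.648.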
Parameters summary Name Type Direction Summary connections StorageConnection[] Out follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of connections to return. 6.212.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.212.2.2. max Sets the maximum number of connections to return. If not specified all the connections are returned. 6.213. System Table 6.649. Methods summary Name Summary get Returns basic information describing the API, like the product name, the version number and a summary of the number of relevant objects. reloadconfigurations 6.213.1. get GET Returns basic information describing the API, like the product name, the version number and a summary of the number of relevant objects. We get following response: <api> <link rel="capabilities" href="/api/capabilities"/> <link rel="clusters" href="/api/clusters"/> <link rel="clusters/search" href="/api/clusters?search={query}"/> <link rel="datacenters" href="/api/datacenters"/> <link rel="datacenters/search" href="/api/datacenters?search={query}"/> <link rel="events" href="/api/events"/> <link rel="events/search" href="/api/events?search={query}"/> <link rel="hosts" href="/api/hosts"/> <link rel="hosts/search" href="/api/hosts?search={query}"/> <link rel="networks" href="/api/networks"/> <link rel="roles" href="/api/roles"/> <link rel="storagedomains" href="/api/storagedomains"/> <link rel="storagedomains/search" href="/api/storagedomains?search={query}"/> <link rel="tags" href="/api/tags"/> <link rel="templates" href="/api/templates"/> <link rel="templates/search" href="/api/templates?search={query}"/> <link rel="users" href="/api/users"/> <link rel="groups" href="/api/groups"/> <link rel="domains" href="/api/domains"/> <link rel="vmpools" href="/api/vmpools"/> <link rel="vmpools/search" href="/api/vmpools?search={query}"/> <link rel="vms" href="/api/vms"/> <link rel="vms/search" href="/api/vms?search={query}"/> <product_info> <name>oVirt Engine</name> <vendor>ovirt.org</vendor> <version> <build>4</build> <full_version>4.0.4</full_version> <major>4</major> <minor>0</minor> <revision>0</revision> </version> </product_info> <special_objects> <blank_template href="/ovirt-engine/api/templates/00000000-0000-0000-0000-000000000000" id="00000000-0000-0000-0000-000000000000"/> <root_tag href="/ovirt-engine/api/tags/00000000-0000-0000-0000-000000000000" id="00000000-0000-0000-0000-000000000000"/> </special_objects> <summary> <hosts> <active>0</active> <total>0</total> </hosts> <storage_domains> <active>0</active> <total>1</total> </storage_domains> <users> <active>1</active> <total>1</total> </users> <vms> <active>0</active> <total>0</total> </vms> </summary> <time>2016-09-14T12:00:48.132+02:00</time> </api> The entry point provides a user with links to the collections in a virtualization environment. The rel attribute of each collection link provides a reference point for each link. The entry point also contains other data such as product_info , special_objects and summary . Table 6.650. Parameters summary Name Type Direction Summary api Api Out follow String In Indicates which inner links should be followed . 6.213.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.213.2. reloadconfigurations POST Table 6.651. 
Parameters summary Name Type Direction Summary async Boolean In Indicates if the reload should be performed asynchronously. 6.214. SystemOption A service that provides values of specific configuration option of the system. Table 6.652. Methods summary Name Summary get Get the values of specific configuration option. 6.214.1. get GET Get the values of specific configuration option. For example to retrieve the values of configuration option MigrationPoliciesSupported send a request like this: The response to that request will be the following: <system_option href="/ovirt-engine/api/options/MigrationPoliciesSupported" id="MigrationPoliciesSupported"> <name>MigrationPoliciesSupported</name> <values> <system_option_value> <value>true</value> <version>4.0</version> </system_option_value> <system_option_value> <value>true</value> <version>4.1</version> </system_option_value> <system_option_value> <value>true</value> <version>4.2</version> </system_option_value> <system_option_value> <value>false</value> <version>3.6</version> </system_option_value> </values> </system_option> Note The appropriate permissions are required to query configuration options. Some options can be queried only by users with administrator permissions. Important There is NO backward compatibility and no guarantee about the names or values of the options. Options may be removed and their meaning can be changed at any point. We strongly discourage the use of this service for applications other than the ones that are released simultaneously with the engine. Usage by other applications is not supported. Therefore there will be no documentation listing accessible configuration options. Table 6.653. Parameters summary Name Type Direction Summary option SystemOption Out The returned configuration option of the system. version String In Optional version parameter that specifies that only particular version of the configuration option should be returned. 6.214.1.1. version Optional version parameter that specifies that only particular version of the configuration option should be returned. If this parameter isn't used then all the versions will be returned. For example, to get the value of the MigrationPoliciesSupported option but only for version 4.2 send a request like this: The response to that request will be like this: <system_option href="/ovirt-engine/api/options/MigrationPoliciesSupported" id="MigrationPoliciesSupported"> <name>MigrationPoliciesSupported</name> <values> <system_option_value> <value>true</value> <version>4.2</version> </system_option_value> </values> </system_option> 6.215. SystemOptions Service that provides values of configuration options of the system. 6.216. SystemPermissions This service doesn't add any new methods, it is just a placeholder for the annotation that specifies the path of the resource that manages the permissions assigned to the system object. Table 6.654. Methods summary Name Summary add Assign a new permission to a user or group for specific entity. list List all the permissions of the specific entity. 6.216.1. add POST Assign a new permission to a user or group for specific entity. 
For example, to assign the UserVmManager role to the virtual machine with id 123 to the user with id 456 send a request like this: With a request body like this: <permission> <role> <name>UserVmManager</name> </role> <user id="456"/> </permission> To assign the SuperUser role to the system to the user with id 456 send a request like this: With a request body like this: <permission> <role> <name>SuperUser</name> </role> <user id="456"/> </permission> If you want to assign permission to the group instead of the user please replace the user element with the group element with proper id of the group. For example to assign the UserRole role to the cluster with id 123 to the group with id 789 send a request like this: With a request body like this: <permission> <role> <name>UserRole</name> </role> <group id="789"/> </permission> Table 6.655. Parameters summary Name Type Direction Summary permission Permission In/Out The permission. 6.216.2. list GET List all the permissions of the specific entity. For example to list all the permissions of the cluster with id 123 send a request like this: <permissions> <permission id="456"> <cluster id="123"/> <role id="789"/> <user id="451"/> </permission> <permission id="654"> <cluster id="123"/> <role id="789"/> <group id="127"/> </permission> </permissions> The order of the returned permissions isn't guaranteed. Table 6.656. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . permissions Permission[] Out The list of permissions. 6.216.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.217. Tag A service to manage a specific tag in the system. Table 6.657. Methods summary Name Summary get Gets the information about the tag. remove Removes the tag from the system. update Updates the tag entity. 6.217.1. get GET Gets the information about the tag. For example to retrieve the information about the tag with the id 123 send a request like this: <tag href="/ovirt-engine/api/tags/123" id="123"> <name>root</name> <description>root</description> </tag> Table 6.658. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . tag Tag Out The tag. 6.217.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.217.2. remove DELETE Removes the tag from the system. For example to remove the tag with id 123 send a request like this: Table 6.659. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.217.3. update PUT Updates the tag entity. For example to update parent tag to tag with id 456 of the tag with id 123 send a request like this: With request body like: <tag> <parent id="456"/> </tag> You may also specify a tag name instead of id. For example to update parent tag to tag with name mytag of the tag with id 123 send a request like this: <tag> <parent> <name>mytag</name> </parent> </tag> Table 6.660. Parameters summary Name Type Direction Summary async Boolean In Indicates if the update should be performed asynchronously. tag Tag In/Out The updated tag. 6.218. Tags Represents a service to manage collection of the tags in the system. Table 6.661. Methods summary Name Summary add Add a new tag to the system. list List the tags in the system. 6.218.1. 
add POST Add a new tag to the system. For example, to add a new tag with name mytag to the system send a request like this: With a request body like this: <tag> <name>mytag</name> </tag> Note The root tag is a special pseudo-tag assumed as the default parent tag if no parent tag is specified. The root tag cannot be deleted nor assigned a parent tag. To create a new tag with a specific parent tag send a request body like this: <tag> <name>mytag</name> <parent> <name>myparenttag</name> </parent> </tag> Table 6.662. Parameters summary Name Type Direction Summary tag Tag In/Out The added tag. 6.218.2. list GET List the tags in the system. For example to list the full hierarchy of the tags in the system send a request like this: <tags> <tag href="/ovirt-engine/api/tags/222" id="222"> <name>root2</name> <description>root2</description> <parent href="/ovirt-engine/api/tags/111" id="111"/> </tag> <tag href="/ovirt-engine/api/tags/333" id="333"> <name>root3</name> <description>root3</description> <parent href="/ovirt-engine/api/tags/222" id="222"/> </tag> <tag href="/ovirt-engine/api/tags/111" id="111"> <name>root</name> <description>root</description> </tag> </tags> In the XML output you can see the following hierarchy of the tags: The order of the returned list of tags isn't guaranteed. Table 6.663. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of tags to return. tags Tag[] Out List of all tags in the system. 6.218.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.218.2.2. max Sets the maximum number of tags to return. If not specified all the tags are returned. 6.219. Template Manages the virtual machine template and template versions. Table 6.664. Methods summary Name Summary export Exports a template to the data center export domain. get Returns the information about this template or template version. remove Removes a virtual machine template. update Updates the template. 6.219.1. export POST Exports a template to the data center export domain. For example, send the following request: With a request body like this: <action> <storage_domain id="456"/> <exclusive>true</exclusive> </action> Table 6.665. Parameters summary Name Type Direction Summary exclusive Boolean In Indicates if the existing templates with the same name should be overwritten. storage_domain StorageDomain In Specifies the destination export storage domain. 6.219.1.1. exclusive Indicates if the existing templates with the same name should be overwritten. The export action reports a failed action if a template of the same name exists in the destination domain. Set this parameter to true to change this behavior and overwrite any existing template. 6.219.2. get GET Returns the information about this template or template version. Table 6.666. Parameters summary Name Type Direction Summary filter Boolean In Indicates if the results should be filtered according to the permissions of the user. follow String In Indicates which inner links should be followed . template Template Out The information about the template or template version. 6.219.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.219.3. remove DELETE Removes a virtual machine template.
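As a hedged illustration of the remove operation, the following minimal Python sketch deletes a template through the REST API with the requests library. The engine URL, credentials, and CA file path are placeholders; the /templates/123 path follows the template href form shown in the TemplateCdrom section below, and async is passed as a query parameter in the same way as the format parameter of the storage domain remove operation.

import requests

# Placeholder connection details -- adjust for your engine.
API = "https://engine.example.com/ovirt-engine/api"
AUTH = ("admin@internal", "secret")
CA_FILE = "/etc/pki/ovirt-engine/ca.pem"

# Remove template 123 and wait for the operation to complete.
response = requests.delete(
    f"{API}/templates/123",
    params={"async": "false"},
    headers={"Accept": "application/xml"},
    auth=AUTH,
    verify=CA_FILE,
)
response.raise_for_status()

Table 6.667.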
Parameters summary Name Type Direction Summary async Boolean In Indicates if the removal should be performed asynchronously. 6.219.4. update PUT Updates the template. The name , description , type , memory , cpu , topology , os , high_availability , display , stateless , usb , and timezone elements can be updated after a template has been created. For example, to update a template so that it has 1 GiB of memory send a request like this: With the following request body: <template> <memory>1073741824</memory> </template> The version_name name attribute is the only one that can be updated within the version attribute used for template versions: <template> <version> <version_name>mytemplate_2</version_name> </version> </template> Table 6.668. Parameters summary Name Type Direction Summary async Boolean In Indicates if the update should be performed asynchronously. template Template In/Out 6.220. TemplateCdrom A service managing a CD-ROM device on templates. Table 6.669. Methods summary Name Summary get Returns the information about this CD-ROM device. 6.220.1. get GET Returns the information about this CD-ROM device. For example, to get information about the CD-ROM device of template 123 send a request like: Table 6.670. Parameters summary Name Type Direction Summary cdrom Cdrom Out The information about the CD-ROM device. follow String In Indicates which inner links should be followed . 6.220.1.1. cdrom The information about the CD-ROM device. The information consists of cdrom attribute containing reference to the CD-ROM device, the template, and optionally the inserted disk. If there is a disk inserted then the file attribute will contain a reference to the ISO image: <cdrom href="..." id="00000000-0000-0000-0000-000000000000"> <template href="/ovirt-engine/api/templates/123" id="123"/> <file id="mycd.iso"/> </cdrom> If there is no disk inserted then the file attribute won't be reported: <cdrom href="..." id="00000000-0000-0000-0000-000000000000"> <template href="/ovirt-engine/api/templates/123" id="123"/> </cdrom> 6.220.1.2. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.221. TemplateCdroms Lists the CD-ROM devices of a template. Table 6.671. Methods summary Name Summary list Returns the list of CD-ROM devices of the template. 6.221.1. list GET Returns the list of CD-ROM devices of the template. The order of the returned list of CD-ROM devices isn't guaranteed. Table 6.672. Parameters summary Name Type Direction Summary cdroms Cdrom[] Out The list of CD-ROM devices of the template. follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of CD-ROMs to return. 6.221.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.221.1.2. max Sets the maximum number of CD-ROMs to return. If not specified all the CD-ROMs are returned. 6.222. TemplateDisk Table 6.673. Methods summary Name Summary copy Copy the specified disk attached to the template to a specific storage domain. export get remove 6.222.1. copy POST Copy the specified disk attached to the template to a specific storage domain. Table 6.674. Parameters summary Name Type Direction Summary async Boolean In Indicates if the copy should be performed asynchronously. 
filter Boolean In Indicates if the results should be filtered according to the permissions of the user. storage_domain StorageDomain In 6.222.2. export POST Table 6.675. Parameters summary Name Type Direction Summary async Boolean In Indicates if the export should be performed asynchronously. filter Boolean In Indicates if the results should be filtered according to the permissions of the user. storage_domain StorageDomain In 6.222.3. get GET Table 6.676. Parameters summary Name Type Direction Summary disk Disk Out follow String In Indicates which inner links should be followed . 6.222.3.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.222.4. remove DELETE Table 6.677. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.223. TemplateDiskAttachment This service manages the attachment of a disk to a template. Table 6.678. Methods summary Name Summary get Returns the details of the attachment. remove Removes the disk from the template. 6.223.1. get GET Returns the details of the attachment. Table 6.679. Parameters summary Name Type Direction Summary attachment DiskAttachment Out follow String In Indicates which inner links should be followed . 6.223.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.223.2. remove DELETE Removes the disk from the template. The disk will only be removed if there are other existing copies of the disk on other storage domains. A storage domain has to be specified to determine which of the copies should be removed (template disks can have copies on multiple storage domains). Table 6.680. Parameters summary Name Type Direction Summary force Boolean In storage_domain String In Specifies the identifier of the storage domain the image to be removed resides on. 6.224. TemplateDiskAttachments This service manages the set of disks attached to a template. Each attached disk is represented by a DiskAttachment . Table 6.681. Methods summary Name Summary list List the disks that are attached to the template. 6.224.1. list GET List the disks that are attached to the template. The order of the returned list of attachments isn't guaranteed. Table 6.682. Parameters summary Name Type Direction Summary attachments DiskAttachment[] Out follow String In Indicates which inner links should be followed . 6.224.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.225. TemplateDisks Table 6.683. Methods summary Name Summary list Returns the list of disks of the template. 6.225.1. list GET Returns the list of disks of the template. The order of the returned list of disks isn't guaranteed. Table 6.684. Parameters summary Name Type Direction Summary disks Disk[] Out follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of disks to return. 6.225.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.225.1.2. max Sets the maximum number of disks to return. If not specified all the disks are returned. 6.226. TemplateGraphicsConsole Table 6.685. 
Methods summary Name Summary get Gets graphics console configuration of the template. remove Remove the graphics console from the template. 6.226.1. get GET Gets graphics console configuration of the template. Table 6.686. Parameters summary Name Type Direction Summary console GraphicsConsole Out The information about the graphics console of the template. follow String In Indicates which inner links should be followed . 6.226.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.226.2. remove DELETE Remove the graphics console from the template. Table 6.687. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.227. TemplateGraphicsConsoles Table 6.688. Methods summary Name Summary add Add new graphics console to the template. list Lists all the configured graphics consoles of the template. 6.227.1. add POST Add new graphics console to the template. Table 6.689. Parameters summary Name Type Direction Summary console GraphicsConsole In/Out 6.227.2. list GET Lists all the configured graphics consoles of the template. The order of the returned list of graphics consoles isn't guaranteed. Table 6.690. Parameters summary Name Type Direction Summary consoles GraphicsConsole[] Out The list of graphics consoles of the template. follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of consoles to return. 6.227.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.227.2.2. max Sets the maximum number of consoles to return. If not specified all the consoles are returned. 6.228. TemplateNic Table 6.691. Methods summary Name Summary get remove update Update the specified network interface card attached to the template. 6.228.1. get GET Table 6.692. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . nic Nic Out 6.228.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.228.2. remove DELETE Table 6.693. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.228.3. update PUT Update the specified network interface card attached to the template. Table 6.694. Parameters summary Name Type Direction Summary async Boolean In Indicates if the update should be performed asynchronously. nic Nic In/Out 6.229. TemplateNics Table 6.695. Methods summary Name Summary add Add a new network interface card to the template. list Returns the list of NICs of the template. 6.229.1. add POST Add a new network interface card to the template. Table 6.696. Parameters summary Name Type Direction Summary nic Nic In/Out 6.229.2. list GET Returns the list of NICs of the template. The order of the returned list of NICs isn't guaranteed. Table 6.697. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of NICs to return. nics Nic[] Out 6.229.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.229.2.2. 
max Sets the maximum number of NICs to return. If not specified all the NICs are returned. 6.230. TemplateWatchdog Table 6.698. Methods summary Name Summary get remove update Update the watchdog for the template identified by the given id. 6.230.1. get GET Table 6.699. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . watchdog Watchdog Out 6.230.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.230.2. remove DELETE Table 6.700. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.230.3. update PUT Update the watchdog for the template identified by the given id. Table 6.701. Parameters summary Name Type Direction Summary async Boolean In Indicates if the update should be performed asynchronously. watchdog Watchdog In/Out 6.231. TemplateWatchdogs Table 6.702. Methods summary Name Summary add Add a watchdog to the template identified by the given id. list Returns the list of watchdogs. 6.231.1. add POST Add a watchdog to the template identified by the given id. Table 6.703. Parameters summary Name Type Direction Summary watchdog Watchdog In/Out 6.231.2. list GET Returns the list of watchdogs. The order of the returned list of watchdogs isn't guaranteed. Table 6.704. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of watchdogs to return. watchdogs Watchdog[] Out 6.231.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.231.2.2. max Sets the maximum number of watchdogs to return. If not specified all the watchdogs are returned. 6.232. Templates This service manages the virtual machine templates available in the system. Table 6.705. Methods summary Name Summary add Creates a new template. list Returns the list of virtual machine templates. 6.232.1. add POST Creates a new template. This requires the name and vm elements. To identify the virtual machine use the vm.id or vm.name attributes. For example, to create a template from a virtual machine with the identifier 123 send a request like this: With a request body like this: <template> <name>mytemplate</name> <vm id="123"/> </template> The disks of the template can be customized, making some of their characteristics different from the disks of the original virtual machine. To do so use the vm.disk_attachments attribute, specifying the identifier of the disk of the original virtual machine and the characteristics that you want to change. For example, if the original virtual machine has a disk with the identifier 456 , and, for that disk, you want to change the name to mydisk the format to Copy On Write and make it sparse , send a request body like this: <template> <name>mytemplate</name> <vm id="123"> <disk_attachments> <disk_attachment> <disk id="456"> <name>mydisk</name> <format>cow</format> <sparse>true</sparse> </disk> </disk_attachment> </disk_attachments> </vm> </template> The template can be created as a sub-version of an existing template. This requires the name and vm attributes for the new template, and the base_template and version_name attributes for the new template version. 
The base_template and version_name attributes must be specified within a version section enclosed in the template section. Identify the virtual machine with the id or name attributes. <template> <name>mytemplate</name> <vm id="123"/> <version> <base_template id="456"/> <version_name>mytemplate_001</version_name> </version> </template> The destination storage domain of the template can be customized, in one of two ways: Globally, at the request level. The request must list the desired disk attachments to be created on the storage domain. If the disk attachments are not listed, the global storage domain parameter will be ignored. <template> <name>mytemplate</name> <storage_domain id="123"/> <vm id="456"> <disk_attachments> <disk_attachment> <disk id="789"> <format>cow</format> <sparse>true</sparse> </disk> </disk_attachment> </disk_attachments> </vm> </template> Per each disk attachment. Specify the desired storage domain for each disk attachment. Specifying the global storage definition will override the storage domain per disk attachment specification. <template> <name>mytemplate</name> <vm id="123"> <disk_attachments> <disk_attachment> <disk id="456"> <format>cow</format> <sparse>true</sparse> <storage_domains> <storage_domain id="789"/> </storage_domains> </disk> </disk_attachment> </disk_attachments> </vm> </template> Table 6.706. Parameters summary Name Type Direction Summary clone_permissions Boolean In Specifies if the permissions of the virtual machine should be copied to the template. seal Boolean In Seals the template. template Template In/Out The information about the template or template version. 6.232.1.1. clone_permissions Specifies if the permissions of the virtual machine should be copied to the template. If this optional parameter is provided, and its value is true , then the permissions of the virtual machine (only the direct ones, not the inherited ones) will be copied to the created template. For example, to create a template from the myvm virtual machine copying its permissions, send a request like this: With a request body like this: <template> <name>mytemplate</name> <vm> <name>myvm</name> </vm> </template> 6.232.1.2. seal Seals the template. If this optional parameter is provided and its value is true , then the template is sealed after creation. Sealing erases all host-specific configuration from the filesystem: SSH keys, UDEV rules, MAC addresses, system ID, hostname, and so on, thus making it easier to use the template to create multiple virtual machines without manual intervention. Currently, sealing is supported only for Linux operating systems. 6.232.2. list GET Returns the list of virtual machine templates. For example: Will return the list of virtual machines and virtual machine templates. The order of the returned list of templates is not guaranteed. Table 6.707. Parameters summary Name Type Direction Summary case_sensitive Boolean In Indicates if the search performed using the search parameter should be performed taking case into account. filter Boolean In Indicates if the results should be filtered according to the permissions of the user. follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of templates to return. search String In A query string used to restrict the returned templates. templates Template[] Out The list of virtual machine templates. 6.232.2.1. case_sensitive Indicates if the search performed using the search parameter should be performed taking case into account.
The default value is true , which means that case is taken into account. If you want to search ignoring case set it to false . 6.232.2.2. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.232.2.3. max Sets the maximum number of templates to return. If not specified, all the templates are returned. 6.233. UnmanagedNetwork Table 6.708. Methods summary Name Summary get remove 6.233.1. get GET Table 6.709. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . network UnmanagedNetwork Out 6.233.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.233.2. remove DELETE Table 6.710. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.234. UnmanagedNetworks Table 6.711. Methods summary Name Summary list Returns the list of unmanaged networks of the host. 6.234.1. list GET Returns the list of unmanaged networks of the host. The order of the returned list of networks isn't guaranteed. Table 6.712. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of networks to return. networks UnmanagedNetwork[] Out 6.234.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.234.1.2. max Sets the maximum number of networks to return. If not specified all the networks are returned. 6.235. User A service to manage a user in the system. Use this service to either get users details or remove users. In order to add new users please use Section 6.236, "Users" . Table 6.713. Methods summary Name Summary get Gets the system user information. remove Removes the system user. 6.235.1. get GET Gets the system user information. Usage: Will return the user information: <user href="/ovirt-engine/api/users/1234" id="1234"> <name>admin</name> <link href="/ovirt-engine/api/users/1234/sshpublickeys" rel="sshpublickeys"/> <link href="/ovirt-engine/api/users/1234/roles" rel="roles"/> <link href="/ovirt-engine/api/users/1234/permissions" rel="permissions"/> <link href="/ovirt-engine/api/users/1234/tags" rel="tags"/> <department></department> <domain_entry_id>23456</domain_entry_id> <email>[email protected]</email> <last_name>Lastname</last_name> <namespace>*</namespace> <principal>user1</principal> <user_name>user1@domain-authz</user_name> <domain href="/ovirt-engine/api/domains/45678" id="45678"> <name>domain-authz</name> </domain> </user> Table 6.714. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . user User Out The system user. 6.235.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.235.2. remove DELETE Removes the system user. Usage: Table 6.715. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.236. Users A service to manage the users in the system. Table 6.716. Methods summary Name Summary add Add user from a directory service. list List all the users in the system. 6.236.1. 
add POST Add user from a directory service. For example, to add the myuser user from the myextension-authz authorization provider send a request like this: With a request body like this: <user> <user_name>myuser@myextension-authz</user_name> <domain> <name>myextension-authz</name> </domain> </user> In case you are working with Active Directory you have to pass user principal name (UPN) as username , followed by authorization provider name. Due to bug 1147900 you need to provide also principal parameter set to UPN of the user. For example, to add the user with UPN [email protected] from the myextension-authz authorization provider send a request body like this: <user> <principal>[email protected]</principal> <user_name>[email protected]@myextension-authz</user_name> <domain> <name>myextension-authz</name> </domain> </user> Table 6.717. Parameters summary Name Type Direction Summary user User In/Out 6.236.2. list GET List all the users in the system. Usage: Will return the list of users: <users> <user href="/ovirt-engine/api/users/1234" id="1234"> <name>admin</name> <link href="/ovirt-engine/api/users/1234/sshpublickeys" rel="sshpublickeys"/> <link href="/ovirt-engine/api/users/1234/roles" rel="roles"/> <link href="/ovirt-engine/api/users/1234/permissions" rel="permissions"/> <link href="/ovirt-engine/api/users/1234/tags" rel="tags"/> <domain_entry_id>23456</domain_entry_id> <namespace>*</namespace> <principal>user1</principal> <user_name>user1@domain-authz</user_name> <domain href="/ovirt-engine/api/domains/45678" id="45678"> <name>domain-authz</name> </domain> </user> </users> The order of the returned list of users isn't guaranteed. Table 6.718. Parameters summary Name Type Direction Summary case_sensitive Boolean In Indicates if the search performed using the search parameter should be performed taking case into account. follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of users to return. search String In A query string used to restrict the returned users. users User[] Out The list of users. 6.236.2.1. case_sensitive Indicates if the search performed using the search parameter should be performed taking case into account. The default value is true , which means that case is taken into account. If you want to search ignoring case set it to false . 6.236.2.2. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.236.2.3. max Sets the maximum number of users to return. If not specified all the users are returned. 6.237. VirtualFunctionAllowedNetwork Table 6.719. Methods summary Name Summary get remove 6.237.1. get GET Table 6.720. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . network Network Out 6.237.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.237.2. remove DELETE Table 6.721. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.238. VirtualFunctionAllowedNetworks Table 6.722. Methods summary Name Summary add list Returns the list of networks. 6.238.1. add POST Table 6.723. Parameters summary Name Type Direction Summary network Network In/Out 6.238.2. list GET Returns the list of networks. The order of the returned list of networks isn't guaranteed. Table 6.724. 
Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of networks to return. networks Network[] Out 6.238.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.238.2.2. max Sets the maximum number of networks to return. If not specified all the networks are returned. 6.239. Vm Table 6.725. Methods summary Name Summary cancelmigration This operation stops any migration of a virtual machine to another physical host. clone commitsnapshot Permanently restores the virtual machine to the state of the previewed snapshot. detach Detaches a virtual machine from a pool. export Exports the virtual machine. freezefilesystems Freezes virtual machine file systems. get Retrieves the description of the virtual machine. logon Initiates the automatic user logon to access a virtual machine from an external console. maintenance Sets the global maintenance mode on the hosted engine virtual machine. migrate Migrates a virtual machine to another physical host. previewsnapshot Temporarily restores the virtual machine to the state of a snapshot. reboot Sends a reboot request to a virtual machine. remove Removes the virtual machine, including the virtual disks attached to it. reordermacaddresses shutdown This operation sends a shutdown request to a virtual machine. start Starts the virtual machine. stop This operation forces a virtual machine to power-off. suspend This operation saves the virtual machine state to disk and stops it. thawfilesystems Thaws virtual machine file systems. ticket Generates a time-sensitive authentication token for accessing a virtual machine's display. undosnapshot Restores the virtual machine to the state it had before previewing the snapshot. update Update the virtual machine in the system for the given virtual machine id. 6.239.1. cancelmigration POST This operation stops any migration of a virtual machine to another physical host. The cancel migration action does not take any action specific parameters; therefore, the request body should contain an empty action : <action/> Table 6.726. Parameters summary Name Type Direction Summary async Boolean In Indicates if the migration should be cancelled asynchronously. 6.239.2. clone POST Table 6.727. Parameters summary Name Type Direction Summary async Boolean In Indicates if the clone should be performed asynchronously. vm Vm In 6.239.3. commitsnapshot POST Permanently restores the virtual machine to the state of the previewed snapshot. See the preview_snapshot operation for details. Table 6.728. Parameters summary Name Type Direction Summary async Boolean In Indicates if the snapshots should be committed asynchronously. 6.239.4. detach POST Detaches a virtual machine from a pool. The detach action does not take any action specific parameters; therefore, the request body should contain an empty action : <action/> Table 6.729. Parameters summary Name Type Direction Summary async Boolean In Indicates if the detach action should be performed asynchronously. 6.239.5. export POST Exports the virtual machine. A virtual machine can be exported to an export domain.
For example, to export virtual machine 123 to the export domain myexport : With a request body like this: <action> <storage_domain> <name>myexport</name> </storage_domain> <exclusive>true</exclusive> <discard_snapshots>true</discard_snapshots> </action> Since version 4.2 of the engine it is also possible to export a virtual machine as a virtual appliance (OVA). For example, to export virtual machine 123 as an OVA file named myvm.ova that is placed in the directory /home/ovirt/ on host myhost : With a request body like this: <action> <host> <name>myhost</name> </host> <directory>/home/ovirt</directory> <filename>myvm.ova</filename> </action> Note Confirm that the export operation has completed before attempting any actions on the export domain. Table 6.730. Parameters summary Name Type Direction Summary async Boolean In Indicates if the export should be performed asynchronously. discard_snapshots Boolean In Use the discard_snapshots parameter when the virtual machine should be exported with all of its snapshots collapsed. exclusive Boolean In Use the exclusive parameter when the virtual machine should be exported even if another copy of it already exists in the export domain (override). storage_domain StorageDomain In The (export) storage domain to export the virtual machine to. 6.239.6. freezefilesystems POST Freezes virtual machine file systems. This operation freezes a virtual machine's file systems using the QEMU guest agent when taking a live snapshot of a running virtual machine. Normally, this is done automatically by Manager, but this must be executed manually with the API for virtual machines using OpenStack Volume (Cinder) disks. Example: <action/> Table 6.731. Parameters summary Name Type Direction Summary async Boolean In Indicates if the freeze should be performed asynchronously. 6.239.7. get GET Retrieves the description of the virtual machine. Table 6.732. Parameters summary Name Type Direction Summary all_content Boolean In Indicates if all of the attributes of the virtual machine should be included in the response. filter Boolean In Indicates if the results should be filtered according to the permissions of the user. follow String In Indicates which inner links should be followed . next_run Boolean In Indicates if the returned result describes the virtual machine as it is currently running or if it describes the virtual machine with the modifications that have already been performed but that will only come into effect when the virtual machine is restarted. vm Vm Out Description of the virtual machine. 6.239.7.1. all_content Indicates if all of the attributes of the virtual machine should be included in the response. By default the following attributes are excluded: console initialization.configuration.data - The OVF document describing the virtual machine. rng_source soundcard virtio_scsi For example, to retrieve the complete representation of the virtual machine '123': Note These attributes are not included by default as they reduce performance. These attributes are seldom used and require additional queries to the database. Only use this parameter when required as it will reduce performance. 6.239.7.2. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.239.7.3.
next_run Indicates if the returned result describes the virtual machine as it is currently running or if it describes the virtual machine with the modifications that have already been performed but that will only come into effect when the virtual machine is restarted. By default the value is false . If the parameter is included in the request, but without a value, it is assumed that the value is true . The following request: Is equivalent to using the value true : 6.239.8. logon POST Initiates the automatic user logon to access a virtual machine from an external console. This action requires the ovirt-guest-agent-gdm-plugin and the ovirt-guest-agent-pam-module packages to be installed and the ovirt-guest-agent service to be running on the virtual machine. Users require the appropriate user permissions for the virtual machine in order to access the virtual machine from an external console. For example: Request body: <action/> Table 6.733. Parameters summary Name Type Direction Summary async Boolean In Indicates if the logon should be performed asynchronously. 6.239.9. maintenance POST Sets the global maintenance mode on the hosted engine virtual machine. This action has no effect on other virtual machines. Example: <action> <maintenance_enabled>true</maintenance_enabled> </action> Table 6.734. Parameters summary Name Type Direction Summary async Boolean In Indicates if the global maintenance action should be performed asynchronously. maintenance_enabled Boolean In Indicates if global maintenance should be enabled or disabled. 6.239.10. migrate POST Migrates a virtual machine to another physical host. Example: To specify a specific host to migrate the virtual machine to: <action> <host id="2ab5e1da-b726-4274-bbf7-0a42b16a0fc3"/> </action> Table 6.735. Parameters summary Name Type Direction Summary async Boolean In Indicates if the migration should be performed asynchronously. cluster Cluster In Specifies the cluster the virtual machine should migrate to. force Boolean In Specifies that the virtual machine should migrate even if the virtual machine is defined as non-migratable. host Host In Specifies a specific host that the virtual machine should migrate to. 6.239.10.1. cluster Specifies the cluster the virtual machine should migrate to. This is an optional parameter. By default, the virtual machine is migrated to another host within the same cluster. Warning Live migration to another cluster is not supported. Strongly consider the target cluster's hardware architecture and network architecture before attempting a migration. 6.239.10.2. force Specifies that the virtual machine should migrate even if the virtual machine is defined as non-migratable. This is an optional parameter. By default, it is set to false . 6.239.10.3. host Specifies a specific host that the virtual machine should migrate to. This is an optional parameter. By default, the Red Hat Virtualization Manager automatically selects a default host for migration within the same cluster. If an API user requires a specific host, the user can specify the host with either an id or name parameter. 6.239.11. previewsnapshot POST Temporarily restores the virtual machine to the state of a snapshot. The snapshot is indicated with the snapshot.id parameter. It is restored temporarily, so that the content can be inspected. Once that inspection is finished, the state of the virtual machine can be made permanent, using the commit_snapshot method, or discarded using the undo_snapshot method. Table 6.736.
Parameters summary Name Type Direction Summary async Boolean In Indicates if the preview should be performed asynchronously. disks Disk[] In Specify the disks included in the snapshot's preview. lease StorageDomainLease In Specify the lease storage domain ID to use in the preview of the snapshot. restore_memory Boolean In snapshot Snapshot In vm Vm In 6.239.11.1. disks Specify the disks included in the snapshot's preview. For each disk parameter, it is also required to specify its image_id . For example, to preview a snapshot with identifier 456 which includes a disk with identifier 111 and its image_id as 222 , send a request like this: Request body: <action> <disks> <disk id="111"> <image_id>222</image_id> </disk> </disks> <snapshot id="456"/> </action> 6.239.11.2. lease Specify the lease storage domain ID to use in the preview of the snapshot. If lease parameter is not passed, then the previewed snapshot lease storage domain will be used. If lease parameter is passed with empty storage domain parameter, then no lease will be used for the snapshot preview. If lease parameter is passed with storage domain parameter then the storage domain ID can be only one of the leases domain IDs that belongs to one of the virtual machine snapshots. This is an optional parameter, set by default to null 6.239.12. reboot POST Sends a reboot request to a virtual machine. For example: The reboot action does not take any action specific parameters; therefore, the request body should contain an empty action : <action/> Table 6.737. Parameters summary Name Type Direction Summary async Boolean In Indicates if the reboot should be performed asynchronously. 6.239.13. remove DELETE Removes the virtual machine, including the virtual disks attached to it. For example, to remove the virtual machine with identifier 123 : Table 6.738. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. detach_only Boolean In Indicates if the attached virtual disks should be detached first and preserved instead of being removed. force Boolean In Indicates if the virtual machine should be forcibly removed. 6.239.13.1. force Indicates if the virtual machine should be forcibly removed. Locked virtual machines and virtual machines with locked disk images cannot be removed without this flag set to true. 6.239.14. reordermacaddresses POST Table 6.739. Parameters summary Name Type Direction Summary async Boolean In Indicates if the action should be performed asynchronously. 6.239.15. shutdown POST This operation sends a shutdown request to a virtual machine. For example: The shutdown action does not take any action specific parameters; therefore, the request body should contain an empty action : <action/> Table 6.740. Parameters summary Name Type Direction Summary async Boolean In Indicates if the shutdown should be performed asynchronously. 6.239.16. start POST Starts the virtual machine. If the virtual environment is complete and the virtual machine contains all necessary components to function, it can be started. This example starts the virtual machine: With a request body: <action/> Table 6.741. Parameters summary Name Type Direction Summary async Boolean In Indicates if the start action should be performed asynchronously. authorized_key AuthorizedKey In filter Boolean In Indicates if the results should be filtered according to the permissions of the user. pause Boolean In If set to true , start the virtual machine in paused mode. 
use_cloud_init Boolean In If set to true , the initialization type is set to cloud-init . use_sysprep Boolean In If set to true , the initialization type is set to Sysprep . vm Vm In The definition of the virtual machine for this specific run. volatile Boolean In Indicates that this run configuration will be discarded even in the case of guest-initiated reboot. 6.239.16.1. pause If set to true , start the virtual machine in paused mode. The default is false . 6.239.16.2. use_cloud_init If set to true , the initialization type is set to cloud-init . The default value is false . See this for details. 6.239.16.3. use_sysprep If set to true , the initialization type is set to Sysprep . The default value is false . See this for details. 6.239.16.4. vm The definition of the virtual machine for this specific run. For example: <action> <vm> <os> <boot> <devices> <device>cdrom</device> </devices> </boot> </os> </vm> </action> This will set the boot device to the CDROM only for this specific start. After the virtual machine is powered off, this definition will be reverted. 6.239.16.5. volatile Indicates that this run configuration will be discarded even in the case of guest-initiated reboot. The default value is false . 6.239.17. stop POST This operation forces a virtual machine to power-off. For example: The stop action does not take any action specific parameters; therefore, the request body should contain an empty action : <action/> Table 6.742. Parameters summary Name Type Direction Summary async Boolean In Indicates if the stop action should be performed asynchronously. 6.239.18. suspend POST This operation saves the virtual machine state to disk and stops it. Start a suspended virtual machine and restore the virtual machine state with the start action. For example: The suspend action does not take any action specific parameters; therefore, the request body should contain an empty action : <action/> Table 6.743. Parameters summary Name Type Direction Summary async Boolean In Indicates if the suspend action should be performed asynchronously. 6.239.19. thawfilesystems POST Thaws virtual machine file systems. This operation thaws a virtual machine's file systems using the QEMU guest agent when taking a live snapshot of a running virtual machine. Normally, this is done automatically by Manager, but this must be executed manually with the API for virtual machines using OpenStack Volume (Cinder) disks. Example: <action/> Table 6.744. Parameters summary Name Type Direction Summary async Boolean In Indicates if the thaw file systems action should be performed asynchronously. 6.239.20. ticket POST Generates a time-sensitive authentication token for accessing a virtual machine's display. For example: The client-provided action optionally includes a desired ticket value and/or an expiry time in seconds. The response specifies the actual ticket value and expiry used. <action> <ticket> <value>abcd12345</value> <expiry>120</expiry> </ticket> </action> Important If the virtual machine is configured to support only one graphics protocol then the generated authentication token will be valid for that protocol. But if the virtual machine is configured to support multiple protocols, VNC and SPICE, then the authentication token will only be valid for the SPICE protocol. In order to obtain an authentication token for a specific protocol, for example for VNC, use the ticket method of the service , which manages the graphics consoles of the virtual machine, by sending a request: Table 6.745. 
Parameters summary Name Type Direction Summary async Boolean In Indicates if the generation of the ticket should be performed asynchronously. ticket Ticket In/Out 6.239.21. undosnapshot POST Restores the virtual machine to the state it had before previewing the snapshot. See the preview_snapshot operation for details. Table 6.746. Parameters summary Name Type Direction Summary async Boolean In Indicates if the undo snapshot action should be performed asynchronously. 6.239.22. update PUT Update the virtual machine in the system for the given virtual machine id. Table 6.747. Parameters summary Name Type Direction Summary async Boolean In Indicates if the update should be performed asynchronously. next_run Boolean In Indicates if the update should be applied to the virtual machine immediately or if it should be applied only when the virtual machine is restarted. vm Vm In/Out 6.239.22.1. next_run Indicates if the update should be applied to the virtual machine immediately or if it should be applied only when the virtual machine is restarted. The default value is false , so by default changes are applied immediately. 6.240. VmApplication A service that provides information about an application installed in a virtual machine. Table 6.748. Methods summary Name Summary get Returns the information about the application. 6.240.1. get GET Returns the information about the application. Table 6.749. Parameters summary Name Type Direction Summary application Application Out The information about the application. filter Boolean In Indicates if the results should be filtered according to the permissions of the user. follow String In Indicates which inner links should be followed . 6.240.1.1. application The information about the application. The information consists of name attribute containing the name of the application (which is an arbitrary string that may also contain additional information such as version) and vm attribute identifying the virtual machine. For example, a request like this: May return information like this: <application href="/ovirt-engine/api/vms/123/applications/789" id="789"> <name>ovirt-guest-agent-common-1.0.12-3.el7</name> <vm href="/ovirt-engine/api/vms/123" id="123"/> </application> 6.240.1.2. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.241. VmApplications A service that provides information about applications installed in a virtual machine. Table 6.750. Methods summary Name Summary list Returns a list of applications installed in the virtual machine. 6.241.1. list GET Returns a list of applications installed in the virtual machine. The order of the returned list of applications isn't guaranteed. Table 6.751. Parameters summary Name Type Direction Summary applications Application[] Out A list of applications installed in the virtual machine. filter Boolean In Indicates if the results should be filtered according to the permissions of the user. follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of applications to return. 6.241.1.1. applications A list of applications installed in the virtual machine. 
For example, a request like this: May return a list like this: <applications> <application href="/ovirt-engine/api/vms/123/applications/456" id="456"> <name>kernel-3.10.0-327.36.1.el7</name> <vm href="/ovirt-engine/api/vms/123" id="123"/> </application> <application href="/ovirt-engine/api/vms/123/applications/789" id="789"> <name>ovirt-guest-agent-common-1.0.12-3.el7</name> <vm href="/ovirt-engine/api/vms/123" id="123"/> </application> </applications> 6.241.1.2. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.241.1.3. max Sets the maximum number of applications to return. If not specified all the applications are returned. 6.242. VmCdrom Manages a CDROM device of a virtual machine. Changing and ejecting the disk is always done using the update method, by changing the value of the file attribute. Table 6.752. Methods summary Name Summary get Returns the information about this CDROM device. update Updates the information about this CDROM device. 6.242.1. get GET Returns the information about this CDROM device. The information consists of cdrom attribute containing reference to the CDROM device, the virtual machine, and optionally the inserted disk. If there is a disk inserted then the file attribute will contain a reference to the ISO image: <cdrom href="..." id="00000000-0000-0000-0000-000000000000"> <file id="mycd.iso"/> <vm href="/ovirt-engine/api/vms/123" id="123"/> </cdrom> If there is no disk inserted then the file attribute won't be reported: <cdrom href="..." id="00000000-0000-0000-0000-000000000000"> <vm href="/ovirt-engine/api/vms/123" id="123"/> </cdrom> Table 6.753. Parameters summary Name Type Direction Summary cdrom Cdrom Out The information about the CDROM device. current Boolean In Indicates if the operation should return the information for the currently running virtual machine. follow String In Indicates which inner links should be followed . 6.242.1.1. current Indicates if the operation should return the information for the currently running virtual machine. This parameter is optional, and the default value is false . 6.242.1.2. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.242.2. update PUT Updates the information about this CDROM device. It allows you to change or eject the disk by changing the value of the file attribute. For example, to insert or change the disk send a request like this: The body should contain the new value for the file attribute: <cdrom> <file id="mycd.iso"/> </cdrom> The value of the id attribute, mycd.iso in this example, should correspond to a file available in an attached ISO storage domain. To eject the disk use a file with an empty id : <cdrom> <file id=""/> </cdrom> By default the above operations permanently change the disk that will be visible to the virtual machine after the boot, but they don't have any effect on the currently running virtual machine. If you want to change the disk that is visible to the currently running virtual machine, add the current=true parameter. For example, to eject the current disk send a request like this: With a request body like this: <cdrom> <file id=""/> </cdrom> Important The changes made with the current=true parameter are never persisted, so they won't have any effect after the virtual machine is rebooted. Table 6.754.
Parameters summary Name Type Direction Summary cdrom Cdrom In/Out The information about the CDROM device. current Boolean In Indicates if the update should apply to the currently running virtual machine, or to the virtual machine after the boot. 6.242.2.1. current Indicates if the update should apply to the currently running virtual machine, or to the virtual machine after the boot. This parameter is optional, and the default value is false , which means that by default the update will have effect only after the boot. 6.243. VmCdroms Manages the CDROM devices of a virtual machine. Currently virtual machines have exactly one CDROM device. No new devices can be added, and the existing one can't be removed, thus there are no add or remove methods. Changing and ejecting CDROM disks is done with the update method of the service that manages the CDROM device. Table 6.755. Methods summary Name Summary add Add a cdrom to a virtual machine identified by the given id. list Returns the list of CDROM devices of the virtual machine. 6.243.1. add POST Add a cdrom to a virtual machine identified by the given id. Table 6.756. Parameters summary Name Type Direction Summary cdrom Cdrom In/Out 6.243.2. list GET Returns the list of CDROM devices of the virtual machine. The order of the returned list of CD-ROM devices isn't guaranteed. Table 6.757. Parameters summary Name Type Direction Summary cdroms Cdrom[] Out The list of CDROM devices of the virtual machine. follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of CDROMs to return. 6.243.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.243.2.2. max Sets the maximum number of CDROMs to return. If not specified all the CDROMs are returned. 6.244. VmDisk Table 6.758. Methods summary Name Summary activate deactivate export get move reduce Reduces the size of the disk image. remove Detach the disk from the virtual machine. update 6.244.1. activate POST Table 6.759. Parameters summary Name Type Direction Summary async Boolean In Indicates if the activation should be performed asynchronously. 6.244.2. deactivate POST Table 6.760. Parameters summary Name Type Direction Summary async Boolean In Indicates if the deactivation should be performed asynchronously. 6.244.3. export POST Table 6.761. Parameters summary Name Type Direction Summary async Boolean In Indicates if the export should be performed asynchronously. filter Boolean In Indicates if the results should be filtered according to the permissions of the user. 6.244.4. get GET Table 6.762. Parameters summary Name Type Direction Summary disk Disk Out follow String In Indicates which inner links should be followed . 6.244.4.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.244.5. move POST Table 6.763. Parameters summary Name Type Direction Summary async Boolean In Indicates if the move should be performed asynchronously. filter Boolean In Indicates if the results should be filtered according to the permissions of the user. 6.244.6. reduce POST Reduces the size of the disk image. Invokes reduce on the logical volume (i.e. this is only applicable for block storage domains). This is applicable for floating disks and disks attached to non-running virtual machines. 
There is no need to specify the size as the optimal size is calculated automatically. Table 6.764. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.244.7. remove DELETE Detach the disk from the virtual machine. Note In version 3 of the API this used to also remove the disk completely from the system, but starting with version 4 it doesn't. If you need to remove it completely use the remove method of the top level disk service . Table 6.765. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.244.8. update PUT Table 6.766. Parameters summary Name Type Direction Summary async Boolean In Indicates if the update should be performed asynchronously. disk Disk In/Out 6.245. VmDisks Table 6.767. Methods summary Name Summary add list Returns the list of disks of the virtual machine. 6.245.1. add POST Table 6.768. Parameters summary Name Type Direction Summary disk Disk In/Out 6.245.2. list GET Returns the list of disks of the virtual machine. The order of the returned list of disks isn't guaranteed. Table 6.769. Parameters summary Name Type Direction Summary disks Disk[] Out follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of disks to return. 6.245.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.245.2.2. max Sets the maximum number of disks to return. If not specified all the disks are returned. 6.246. VmGraphicsConsole Table 6.770. Methods summary Name Summary get Retrieves the graphics console configuration of the virtual machine. proxyticket remoteviewerconnectionfile Generates the file which is compatible with remote-viewer client. remove Remove the graphics console from the virtual machine. ticket Generates a time-sensitive authentication token for accessing this virtual machine's console. 6.246.1. get GET Retrieves the graphics console configuration of the virtual machine. Important By default, when the current parameter is not specified, the data returned corresponds to the next execution of the virtual machine. In the current implementation of the system this means that the address and port attributes will not be populated because the system does not know what address and port will be used for the next execution. Since in most cases those attributes are needed, it is strongly advised to always explicitly include the current parameter with the value true . Table 6.771. Parameters summary Name Type Direction Summary console GraphicsConsole Out The information about the graphics console of the virtual machine. current Boolean In Specifies if the data returned should correspond to the next execution of the virtual machine, or to the current execution. follow String In Indicates which inner links should be followed . 6.246.1.1. current Specifies if the data returned should correspond to the next execution of the virtual machine, or to the current execution. Important The address and port attributes will not be populated unless the value is true . For example, to get data for the current execution of the virtual machine, including the address and port attributes, send a request like this: The default value is false . 6.246.1.2. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details.
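As a minimal, illustrative sketch (not part of this reference), the same request can be made with the oVirt Python SDK 4 (ovirtsdk4), in the style of the SDK example shown later for the remote viewer connection file; the connection details and the myvm name are placeholders, and it is assumed that the SDK exposes the current parameter as a keyword argument of the get method:

import ovirtsdk4 as sdk

# Placeholder connection details for this sketch.
connection = sdk.Connection(
    url='https://myengine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='mypassword',
    ca_file='/etc/pki/ovirt-engine/ca.pem',
)

# Find the virtual machine and its first graphics console:
vms_service = connection.system_service().vms_service()
vm = vms_service.list(search='name=myvm')[0]
consoles_service = vms_service.vm_service(vm.id).graphics_consoles_service()
console = consoles_service.list()[0]

# Ask for the data of the current execution so that the address and
# port attributes are populated:
console = consoles_service.console_service(console.id).get(current=True)
print(console.address, console.port)

connection.close()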
6.246.2. proxyticket POST Table 6.772. Parameters summary Name Type Direction Summary async Boolean In Indicates if the generation of the ticket should be performed asynchronously. proxy_ticket ProxyTicket Out 6.246.3. remoteviewerconnectionfile POST Generates the file which is compatible with remote-viewer client. Use the following request to generate the remote viewer connection file of the graphics console. Note that this action generates the file only if the virtual machine is running. The remoteviewerconnectionfile action does not take any action specific parameters, so the request body should contain an empty action : <action/> The response contains the file, which can be used with the remote-viewer client. <action> <remote_viewer_connection_file> [virt-viewer] type=spice host=192.168.1.101 port=-1 password=123456789 delete-this-file=1 fullscreen=0 toggle-fullscreen=shift+f11 release-cursor=shift+f12 secure-attention=ctrl+alt+end tls-port=5900 enable-smartcard=0 enable-usb-autoshare=0 usb-filter=null tls-ciphers=DEFAULT host-subject=O=local,CN=example.com ca=... </remote_viewer_connection_file> </action> For example, to fetch the content of the remote viewer connection file and save it into a temporary file, you can use the oVirt Python SDK as follows: # Find the virtual machine: vm = vms_service.list(search='name=myvm')[0] # Locate the service that manages the virtual machine, as that is where # the locators are defined: vm_service = vms_service.vm_service(vm.id) # Find the graphic console of the virtual machine: graphics_consoles_service = vm_service.graphics_consoles_service() graphics_console = graphics_consoles_service.list()[0] # Generate the remote viewer connection file: console_service = graphics_consoles_service.console_service(graphics_console.id) remote_viewer_connection_file = console_service.remote_viewer_connection_file() # Write the content to file "/tmp/remote_viewer_connection_file.vv" path = "/tmp/remote_viewer_connection_file.vv" with open(path, "w") as f: f.write(remote_viewer_connection_file) Once you have created the remote viewer connection file, you can connect to the virtual machine graphics console as follows: #!/bin/sh -ex remote-viewer --ovirt-ca-file=/etc/pki/ovirt-engine/ca.pem /tmp/remote_viewer_connection_file.vv Table 6.773. Parameters summary Name Type Direction Summary remote_viewer_connection_file String Out Contains the file which is compatible with remote-viewer client. 6.246.3.1. remote_viewer_connection_file Contains the file which is compatible with remote-viewer client. You can use the content of this attribute to create a file, which can be passed to the remote-viewer client to connect to the virtual machine graphics console. 6.246.4. remove DELETE Remove the graphics console from the virtual machine. Table 6.774. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.246.5. ticket POST Generates a time-sensitive authentication token for accessing this virtual machine's console. The client-provided action optionally includes a desired ticket value and/or an expiry time in seconds. In any case, the response specifies the actual ticket value and expiry used. <action> <ticket> <value>abcd12345</value> <expiry>120</expiry> </ticket> </action> Table 6.775. Parameters summary Name Type Direction Summary ticket Ticket In/Out The generated ticket that can be used to access this console.
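The ticket action can be driven the same way from the SDK. The following sketch assumes that the SDK exposes the action as a ticket method taking a Ticket object; the expiry value, the myvm name and the connection details are illustrative only:

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://myengine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='mypassword',
    ca_file='/etc/pki/ovirt-engine/ca.pem',
)

vms_service = connection.system_service().vms_service()
vm = vms_service.list(search='name=myvm')[0]
consoles_service = vms_service.vm_service(vm.id).graphics_consoles_service()
console = consoles_service.list()[0]
console_service = consoles_service.console_service(console.id)

# Request a ticket valid for 120 seconds; the response carries the
# actual value and expiry that the engine decided to use.
ticket = console_service.ticket(ticket=types.Ticket(expiry=120))
print(ticket.value, ticket.expiry)

connection.close()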
6.247. VmGraphicsConsoles Table 6.776. Methods summary Name Summary add Add new graphics console to the virtual machine. list Lists all the configured graphics consoles of the virtual machine. 6.247.1. add POST Add new graphics console to the virtual machine. Table 6.777. Parameters summary Name Type Direction Summary console GraphicsConsole In/Out 6.247.2. list GET Lists all the configured graphics consoles of the virtual machine. Important By default, when the current parameter is not specified, the data returned corresponds to the next execution of the virtual machine. In the current implementation of the system this means that the address and port attributes will not be populated because the system does not know what address and port will be used for the next execution. Since in most cases those attributes are needed, it is strongly advised to always explicitly include the current parameter with the value true . The order of the returned list of graphics consoles is not guaranteed. Table 6.778. Parameters summary Name Type Direction Summary consoles GraphicsConsole[] Out The list of graphics consoles of the virtual machine. current Boolean In Specifies if the data returned should correspond to the next execution of the virtual machine, or to the current execution. follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of consoles to return. 6.247.2.1. current Specifies if the data returned should correspond to the next execution of the virtual machine, or to the current execution. Important The address and port attributes will not be populated unless the value is true . For example, to get data for the current execution of the virtual machine, including the address and port attributes, send a request like this: The default value is false . 6.247.2.2. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.247.2.3. max Sets the maximum number of consoles to return. If not specified all the consoles are returned. 6.248. VmHostDevice A service to manage individual host device attached to a virtual machine. Table 6.779. Methods summary Name Summary get Retrieve information about particular host device attached to given virtual machine. remove Remove the attachment of this host device from given virtual machine. 6.248.1. get GET Retrieve information about particular host device attached to given virtual machine. Example: <host_device href="/ovirt-engine/api/hosts/543/devices/456" id="456"> <name>pci_0000_04_00_0</name> <capability>pci</capability> <iommu_group>30</iommu_group> <placeholder>true</placeholder> <product id="0x13ba"> <name>GM107GL [Quadro K2200]</name> </product> <vendor id="0x10de"> <name>NVIDIA Corporation</name> </vendor> <host href="/ovirt-engine/api/hosts/543" id="543"/> <parent_device href="/ovirt-engine/api/hosts/543/devices/456" id="456"> <name>pci_0000_00_03_0</name> </parent_device> <vm href="/ovirt-engine/api/vms/123" id="123"/> </host_device> Table 6.780. Parameters summary Name Type Direction Summary device HostDevice Out Retrieved information about the host device attached to given virtual machine. follow String In Indicates which inner links should be followed . 6.248.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.248.2. remove DELETE Remove the attachment of this host device from given virtual machine.
Note In case this device serves as an IOMMU placeholder, it cannot be removed (remove will result only in setting its placeholder flag to true ). Note that all IOMMU placeholder devices will be removed automatically as soon as there are no more non-placeholder devices (all devices from given IOMMU group are detached). Table 6.781. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.249. VmHostDevices A service to manage host devices attached to a virtual machine. Table 6.782. Methods summary Name Summary add Attach target device to given virtual machine. list List the host devices assigned to given virtual machine. 6.249.1. add POST Attach target device to given virtual machine. Example: With request body of type HostDevice , for example <host_device id="123" /> Note A necessary precondition for a successful host device attachment is that the virtual machine must be pinned to exactly one host. The device ID is then taken relative to this host. Note Attachment of a PCI device that is part of a bigger IOMMU group will result in attachment of the remaining devices from that IOMMU group as "placeholders". These devices are then identified using the placeholder attribute of the HostDevice type set to true . In case you want to attach a device that already serves as an IOMMU placeholder, simply issue an explicit Add operation for it, and its placeholder flag will be cleared, and the device will be accessible to the virtual machine. Table 6.783. Parameters summary Name Type Direction Summary device HostDevice In/Out The host device to be attached to given virtual machine. 6.249.2. list GET List the host devices assigned to given virtual machine. The order of the returned list of devices isn't guaranteed. Table 6.784. Parameters summary Name Type Direction Summary device HostDevice[] Out Retrieved list of host devices attached to given virtual machine. follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of devices to return. 6.249.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.249.2.2. max Sets the maximum number of devices to return. If not specified all the devices are returned. 6.250. VmNic Table 6.785. Methods summary Name Summary activate deactivate get remove Removes the NIC. update Updates the NIC. 6.250.1. activate POST Table 6.786. Parameters summary Name Type Direction Summary async Boolean In Indicates if the activation should be performed asynchronously. 6.250.2. deactivate POST Table 6.787. Parameters summary Name Type Direction Summary async Boolean In Indicates if the deactivation should be performed asynchronously. 6.250.3. get GET Table 6.788. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . nic Nic Out 6.250.3.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.250.4. remove DELETE Removes the NIC. For example, to remove the NIC with id 456 from the virtual machine with id 123 send a request like this: Important The hotplugging feature only supports virtual machine operating systems with hotplugging operations. Example operating systems include: Red Hat Enterprise Linux 6 Red Hat Enterprise Linux 5 Windows Server 2008 and Windows Server 2003 Table 6.789.
Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.250.5. update PUT Updates the NIC. For example, to update the NIC with id 456 belonging to the virtual machine with id 123 send a request like this: With a request body like this: <nic> <name>mynic</name> <interface>e1000</interface> <vnic_profile id='789'/> </nic> Important The hotplugging feature only supports virtual machine operating systems with hotplugging operations. Example operating systems include: Red Hat Enterprise Linux 6 Red Hat Enterprise Linux 5 Windows Server 2008 and Windows Server 2003 Table 6.790. Parameters summary Name Type Direction Summary async Boolean In Indicates if the update should be performed asynchronously. nic Nic In/Out 6.251. VmNics Table 6.791. Methods summary Name Summary add Adds a NIC to the virtual machine. list Returns the list of NICs of the virtual machine. 6.251.1. add POST Adds a NIC to the virtual machine. The following example adds to the virtual machine 123 a network interface named mynic using virtio and the NIC profile 456 . <nic> <name>mynic</name> <interface>virtio</interface> <vnic_profile id="456"/> </nic> The following example sends that request using curl : curl \ --request POST \ --header "Version: 4" \ --header "Content-Type: application/xml" \ --header "Accept: application/xml" \ --user "admin@internal:mypassword" \ --cacert /etc/pki/ovirt-engine/ca.pem \ --data ' <nic> <name>mynic</name> <interface>virtio</interface> <vnic_profile id="456"/> </nic> ' \ https://myengine.example.com/ovirt-engine/api/vms/123/nics Important The hotplugging feature only supports virtual machine operating systems with hotplugging operations. Example operating systems include: Red Hat Enterprise Linux 6 Red Hat Enterprise Linux 5 Windows Server 2008 and Windows Server 2003 Table 6.792. Parameters summary Name Type Direction Summary nic Nic In/Out 6.251.2. list GET Returns the list of NICs of the virtual machine. The order of the returned list of NICs isn't guaranteed. Table 6.793. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of NICs to return. nics Nic[] Out 6.251.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.251.2.2. max Sets the maximum number of NICs to return. If not specified all the NICs are returned. 6.252. VmNumaNode Table 6.794. Methods summary Name Summary get remove Removes a virtual NUMA node. update Updates a virtual NUMA node. 6.252.1. get GET Table 6.795. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . node VirtualNumaNode Out 6.252.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.252.2. remove DELETE Removes a virtual NUMA node. An example of removing a virtual NUMA node: Note It's required to remove the NUMA nodes from the highest index first. Table 6.796. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously.
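The note above about removal order can be illustrated with a short, hypothetical sketch that removes the virtual NUMA nodes starting from the highest index using the oVirt Python SDK 4; the numa_nodes_service and node_service locators, the myvm name and the connection details are assumptions in the same style as the earlier SDK example:

import ovirtsdk4 as sdk

connection = sdk.Connection(
    url='https://myengine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='mypassword',
    ca_file='/etc/pki/ovirt-engine/ca.pem',
)

vms_service = connection.system_service().vms_service()
vm = vms_service.list(search='name=myvm')[0]
nodes_service = vms_service.vm_service(vm.id).numa_nodes_service()

# Remove the virtual NUMA nodes starting from the highest index, as
# required by the engine.
for node in sorted(nodes_service.list(), key=lambda n: n.index, reverse=True):
    nodes_service.node_service(node.id).remove()

connection.close()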
6.252.3. update PUT Updates a virtual NUMA node. An example of pinning a virtual NUMA node to a physical NUMA node on the host: The request body should contain the following: <vm_numa_node> <numa_node_pins> <numa_node_pin> <index>0</index> </numa_node_pin> </numa_node_pins> </vm_numa_node> Table 6.797. Parameters summary Name Type Direction Summary async Boolean In Indicates if the update should be performed asynchronously. node VirtualNumaNode In/Out 6.253. VmNumaNodes Table 6.798. Methods summary Name Summary add Creates a new virtual NUMA node for the virtual machine. list Lists virtual NUMA nodes of a virtual machine. 6.253.1. add POST Creates a new virtual NUMA node for the virtual machine. An example of creating a NUMA node: The request body can contain the following: <vm_numa_node> <cpu> <cores> <core> <index>0</index> </core> </cores> </cpu> <index>0</index> <memory>1024</memory> </vm_numa_node> Table 6.799. Parameters summary Name Type Direction Summary node VirtualNumaNode In/Out 6.253.2. list GET Lists virtual NUMA nodes of a virtual machine. The order of the returned list of NUMA nodes isn't guaranteed. Table 6.800. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of nodes to return. nodes VirtualNumaNode[] Out 6.253.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.253.2.2. max Sets the maximum number of nodes to return. If not specified all the nodes are returned. 6.254. VmPool A service to manage a virtual machine pool. Table 6.801. Methods summary Name Summary allocatevm This operation allocates a virtual machine in the virtual machine pool. get Get the virtual machine pool. remove Removes a virtual machine pool. update Update the virtual machine pool. 6.254.1. allocatevm POST This operation allocates a virtual machine in the virtual machine pool. The allocate virtual machine action does not take any action-specific parameters, so the request body should contain an empty action : <action/> Table 6.802. Parameters summary Name Type Direction Summary async Boolean In Indicates if the allocation should be performed asynchronously. 6.254.2. get GET Get the virtual machine pool. You will get an XML response like this one: <vm_pool id="123"> <actions>...</actions> <name>MyVmPool</name> <description>MyVmPool description</description> <link href="/ovirt-engine/api/vmpools/123/permissions" rel="permissions"/> <max_user_vms>1</max_user_vms> <prestarted_vms>0</prestarted_vms> <size>100</size> <stateful>false</stateful> <type>automatic</type> <use_latest_template_version>false</use_latest_template_version> <cluster id="123"/> <template id="123"/> <vm id="123">...</vm> ... </vm_pool> Table 6.803. Parameters summary Name Type Direction Summary filter Boolean In Indicates if the results should be filtered according to the permissions of the user. follow String In Indicates which inner links should be followed . pool VmPool Out Retrieved virtual machine pool. 6.254.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.254.3. remove DELETE Removes a virtual machine pool. Table 6.804. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.254.4. update PUT Update the virtual machine pool.
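For example, an update of pool 123 might be sent with a request like the following; the request line is illustrative, assuming the /vmpools collection path:

PUT /ovirt-engine/api/vmpools/123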
The name , description , size , prestarted_vms and max_user_vms attributes can be updated after the virtual machine pool has been created. <vmpool> <name>VM_Pool_B</name> <description>Virtual Machine Pool B</description> <size>3</size> <prestarted_vms>1</prestarted_vms> <max_user_vms>2</max_user_vms> </vmpool> Table 6.805. Parameters summary Name Type Direction Summary async Boolean In Indicates if the update should be performed asynchronously. pool VmPool In/Out The virtual machine pool that is being updated. 6.255. VmPools Provides read-write access to virtual machine pools. Table 6.806. Methods summary Name Summary add Creates a new virtual machine pool. list Get a list of available virtual machine pools. 6.255.1. add POST Creates a new virtual machine pool. A new pool requires the name , cluster and template attributes. Identify the cluster and template with the id or name nested attributes, using a request body like the following: <vmpool> <name>mypool</name> <cluster id="123"/> <template id="456"/> </vmpool> Table 6.807. Parameters summary Name Type Direction Summary pool VmPool In/Out Pool to add. 6.255.2. list GET Get a list of available virtual machine pools. You will receive the following response: <vm_pools> <vm_pool id="123"> ... </vm_pool> ... </vm_pools> The order of the returned list of pools is guaranteed only if the sortby clause is included in the search parameter. Table 6.808. Parameters summary Name Type Direction Summary case_sensitive Boolean In Indicates if the search performed using the search parameter should be performed taking case into account. filter Boolean In Indicates if the results should be filtered according to the permissions of the user. follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of pools to return. pools VmPool[] Out Retrieved pools. search String In A query string used to restrict the returned pools. 6.255.2.1. case_sensitive Indicates if the search performed using the search parameter should be performed taking case into account. The default value is true , which means that case is taken into account. If you want to search ignoring case, set it to false . 6.255.2.2. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.255.2.3. max Sets the maximum number of pools to return. If this value is not specified, all of the pools are returned. 6.256. VmReportedDevice Table 6.809. Methods summary Name Summary get 6.256.1. get GET Table 6.810. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . reported_device ReportedDevice Out 6.256.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.257. VmReportedDevices Table 6.811. Methods summary Name Summary list Returns the list of reported devices of the virtual machine. 6.257.1. list GET Returns the list of reported devices of the virtual machine. The order of the returned list of devices isn't guaranteed. Table 6.812. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of devices to return. reported_device ReportedDevice[] Out 6.257.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details.
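As an illustration of the list operation in section 6.257.1, a request to the reported devices sub-collection might look like the following (the path assumes the usual /vms/{vm:id}/reporteddevices layout):

GET /ovirt-engine/api/vms/123/reporteddevices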
6.257.1.2. max Sets the maximum number of devices to return. If not specified all the devices are returned. 6.258. VmSession Table 6.813. Methods summary Name Summary get 6.258.1. get GET Table 6.814. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . session Session Out 6.258.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.259. VmSessions Provides information about virtual machine user sessions. Table 6.815. Methods summary Name Summary list Lists all user sessions for this virtual machine. 6.259.1. list GET Lists all user sessions for this virtual machine. For example, to retrieve the session information for virtual machine 123 send a request like this: The response body will contain something like this: <sessions> <session href="/ovirt-engine/api/vms/123/sessions/456" id="456"> <console_user>true</console_user> <ip> <address>192.168.122.1</address> </ip> <user href="/ovirt-engine/api/users/789" id="789"/> <vm href="/ovirt-engine/api/vms/123" id="123"/> </session> ... </sessions> The order of the returned list of sessions isn't guaranteed. Table 6.816. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of sessions to return. sessions Session[] Out 6.259.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.259.1.2. max Sets the maximum number of sessions to return. If not specified all the sessions are returned. 6.260. VmWatchdog A service managing a watchdog on virtual machines. Table 6.817. Methods summary Name Summary get Returns the information about the watchdog. remove Removes the watchdog from the virtual machine. update Updates the information about the watchdog. 6.260.1. get GET Returns the information about the watchdog. Table 6.818. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . watchdog Watchdog Out The information about the watchdog. 6.260.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.260.1.2. watchdog The information about the watchdog. The information consists of model element, action element and the reference to the virtual machine. It may look like this: <watchdogs> <watchdog href="/ovirt-engine/api/vms/123/watchdogs/00000000-0000-0000-0000-000000000000" id="00000000-0000-0000-0000-000000000000"> <vm href="/ovirt-engine/api/vms/123" id="123"/> <action>poweroff</action> <model>i6300esb</model> </watchdog> </watchdogs> 6.260.2. remove DELETE Removes the watchdog from the virtual machine. For example, to remove a watchdog from a virtual machine, send a request like this: Table 6.819. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.260.3. update PUT Updates the information about the watchdog. You can update the information using action and model elements. 
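An update request for section 6.260.3 might look like the following; the request line and body are illustrative, using the watchdog identifier shown in the examples of this section and the /vms/{vm:id}/watchdogs sub-collection:

PUT /ovirt-engine/api/vms/123/watchdogs/00000000-0000-0000-0000-000000000000

<watchdog> <action>reset</action> </watchdog>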
For example, updating a watchdog with a request like the one above returns a response body like this: <watchdog href="/ovirt-engine/api/vms/123/watchdogs/00000000-0000-0000-0000-000000000000" id="00000000-0000-0000-0000-000000000000"> <vm href="/ovirt-engine/api/vms/123" id="123"/> <action>reset</action> <model>i6300esb</model> </watchdog> Table 6.820. Parameters summary Name Type Direction Summary async Boolean In Indicates if the update should be performed asynchronously. watchdog Watchdog In/Out The information about the watchdog. 6.260.3.1. watchdog The information about the watchdog. The request data must contain at least one of the model and action elements. The response data contains complete information about the updated watchdog. 6.261. VmWatchdogs Lists the watchdogs of a virtual machine. Table 6.821. Methods summary Name Summary add Adds a new watchdog to the virtual machine. list The list of watchdogs of the virtual machine. 6.261.1. add POST Adds a new watchdog to the virtual machine. For example, adding a watchdog to a virtual machine with a request body that specifies its model and action returns a response body like this: <watchdog href="/ovirt-engine/api/vms/123/watchdogs/00000000-0000-0000-0000-000000000000" id="00000000-0000-0000-0000-000000000000"> <vm href="/ovirt-engine/api/vms/123" id="123"/> <action>poweroff</action> <model>i6300esb</model> </watchdog> Table 6.822. Parameters summary Name Type Direction Summary watchdog Watchdog In/Out The information about the watchdog. 6.261.1.1. watchdog The information about the watchdog. The request data must contain a model element (such as i6300esb ) and an action element (one of none , reset , poweroff , dump , pause ). The response data additionally contains references to the added watchdog and to the virtual machine. 6.261.2. list GET The list of watchdogs of the virtual machine. The order of the returned list of watchdogs isn't guaranteed. Table 6.823. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of watchdogs to return. watchdogs Watchdog[] Out The information about the watchdog. 6.261.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.261.2.2. max Sets the maximum number of watchdogs to return. If not specified all the watchdogs are returned. 6.261.2.3. watchdogs The information about the watchdog. The information consists of model element, action element and the reference to the virtual machine. It may look like this: <watchdogs> <watchdog href="/ovirt-engine/api/vms/123/watchdogs/00000000-0000-0000-0000-000000000000" id="00000000-0000-0000-0000-000000000000"> <vm href="/ovirt-engine/api/vms/123" id="123"/> <action>poweroff</action> <model>i6300esb</model> </watchdog> </watchdogs> 6.262. Vms Table 6.824. Methods summary Name Summary add Creates a new virtual machine. list Returns the list of virtual machines of the system. 6.262.1. add POST Creates a new virtual machine. The virtual machine can be created in different ways: From a template. In this case the identifier or name of the template must be provided. For example, using a plain shell script and XML: #!/bin/sh -ex url="https://engine.example.com/ovirt-engine/api" user="admin@internal" password="..."
curl \ --verbose \ --cacert /etc/pki/ovirt-engine/ca.pem \ --user "${user}:${password}" \ --request POST \ --header "Version: 4" \ --header "Content-Type: application/xml" \ --header "Accept: application/xml" \ --data ' <vm> <name>myvm</name> <template> <name>Blank</name> </template> <cluster> <name>mycluster</name> </cluster> </vm> ' \ "${url}/vms" From a snapshot. In this case the identifier of the snapshot has to be provided. For example, using a plain shell script and XML: #!/bin/sh -ex url="https://engine.example.com/ovirt-engine/api" user="admin@internal" password="..." curl \ --verbose \ --cacert /etc/pki/ovirt-engine/ca.pem \ --user "${user}:${password}" \ --request POST \ --header "Content-Type: application/xml" \ --header "Accept: application/xml" \ --data ' <vm> <name>myvm</name> <snapshots> <snapshot id="266742a5-6a65-483c-816d-d2ce49746680"/> </snapshots> <cluster> <name>mycluster</name> </cluster> </vm> ' \ "${url}/vms" When creating a virtual machine from a template or from a snapshot, it is usually useful to explicitly indicate in what storage domain to create the disks for the virtual machine. If the virtual machine is created from a template, then this is achieved by passing a set of disk_attachment elements that indicate the mapping: <vm> ... <disk_attachments> <disk_attachment> <disk id="8d4bd566-6c86-4592-a4a7-912dbf93c298"> <storage_domains> <storage_domain id="9cb6cb0a-cf1d-41c2-92ca-5a6d665649c9"/> </storage_domains> </disk> </disk_attachment> </disk_attachments> </vm> When the virtual machine is created from a snapshot this set of disks is slightly different: it uses the image_id attribute instead of id . <vm> ... <disk_attachments> <disk_attachment> <disk> <image_id>8d4bd566-6c86-4592-a4a7-912dbf93c298</image_id> <storage_domains> <storage_domain id="9cb6cb0a-cf1d-41c2-92ca-5a6d665649c9"/> </storage_domains> </disk> </disk_attachment> </disk_attachments> </vm> It is possible to specify additional virtual machine parameters in the XML description; for example, a virtual machine of desktop type, with 2 GiB of RAM and an additional description, can be added by sending a request body like the following: <vm> <name>myvm</name> <description>My Desktop Virtual Machine</description> <type>desktop</type> <memory>2147483648</memory> ... </vm> A bootable CDROM device can be set like this: <vm> ... <os> <boot dev="cdrom"/> </os> </vm> In order to boot from CDROM, you first need to insert a disk, as described in the CDROM service . Then booting from that CDROM can be specified using the os.boot.devices attribute: <vm> ... <os> <boot> <devices> <device>cdrom</device> </devices> </boot> </os> </vm> In all cases the name or identifier of the cluster where the virtual machine will be created is mandatory. Table 6.825. Parameters summary Name Type Direction Summary clone Boolean In Specifies if the virtual machine should be independent of the template. clone_permissions Boolean In Specifies if the permissions of the template should be copied to the virtual machine. vm Vm In/Out 6.262.1.1. clone Specifies if the virtual machine should be independent of the template. When a virtual machine is created from a template, by default the disks of the virtual machine depend on the disks of the template; they use the copy-on-write mechanism so that only the differences from the template take up real storage space. If this parameter is specified and the value is true , then the disks of the created virtual machine will be cloned, and independent of the template.
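The clone flag is passed as a query parameter on the creation request; for instance (illustrative request line):

POST /ovirt-engine/api/vms?clone=true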
For example, to create an independent virtual machine, send a request like the one above with a request body like this: <vm> <name>myvm</name> <template> <name>mytemplate</name> </template> <cluster> <name>mycluster</name> </cluster> </vm> Note When this parameter is true , the permissions of the template will also be copied, as when using clone_permissions=true . 6.262.1.2. clone_permissions Specifies if the permissions of the template should be copied to the virtual machine. If this optional parameter is provided and its value is true , then the permissions of the template (only the direct ones, not the inherited ones) will be copied to the created virtual machine. For example, to create a virtual machine from the mytemplate template copying its permissions, add the clone_permissions=true query parameter to the creation request and use a request body like this: <vm> <name>myvm</name> <template> <name>mytemplate</name> </template> <cluster> <name>mycluster</name> </cluster> </vm> 6.262.2. list GET Returns the list of virtual machines of the system. The order of the returned list of virtual machines is guaranteed only if the sortby clause is included in the search parameter. Table 6.826. Parameters summary Name Type Direction Summary all_content Boolean In Indicates if all the attributes of the virtual machines should be included in the response. case_sensitive Boolean In Indicates if the search performed using the search parameter should be performed taking case into account. filter Boolean In Indicates if the results should be filtered according to the permissions of the user. follow String In Indicates which inner links should be followed . max Integer In The maximum number of results to return. search String In A query string used to restrict the returned virtual machines. vms Vm[] Out 6.262.2.1. all_content Indicates if all the attributes of the virtual machines should be included in the response. By default the following attributes are excluded: console initialization.configuration.data - The OVF document describing the virtual machine. rng_source soundcard virtio_scsi For example, to retrieve the complete representation of the virtual machines, set the all_content parameter to true on the list request. Note The reason for not including these attributes is performance: they are seldom used and they require additional queries to the database. So try to use this parameter only when it is really needed. 6.262.2.2. case_sensitive Indicates if the search performed using the search parameter should be performed taking case into account. The default value is true , which means that case is taken into account. If you want to search ignoring case, set it to false . 6.262.2.3. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.263. VnicProfile This service manages a vNIC profile. Table 6.827. Methods summary Name Summary get Retrieves details about a vNIC profile. remove Removes the vNIC profile. update Updates details of a vNIC profile. 6.263.1. get GET Retrieves details about a vNIC profile. Table 6.828. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . profile VnicProfile Out 6.263.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.263.2. remove DELETE Removes the vNIC profile.
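For instance, removing a vNIC profile (section 6.263.2) might be done with a request like the following; the path assumes the top-level /vnicprofiles collection and a placeholder identifier:

DELETE /ovirt-engine/api/vnicprofiles/123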
Table 6.829. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.263.3. update PUT Updates details of a vNIC profile. Table 6.830. Parameters summary Name Type Direction Summary async Boolean In Indicates if the update should be performed asynchronously. profile VnicProfile In/Out The vNIC profile that is being updated. 6.264. VnicProfiles This service manages the collection of all vNIC profiles. Table 6.831. Methods summary Name Summary add Add a vNIC profile. list List all vNIC profiles. 6.264.1. add POST Add a vNIC profile. For example, to add vNIC profile 123 to network 456 , send a request to the network's vNIC profiles sub-collection (see the example at the end of this section) with the following body: <vnic_profile id="123"> <name>new_vNIC_name</name> <pass_through> <mode>disabled</mode> </pass_through> <port_mirroring>false</port_mirroring> </vnic_profile> Please note that there is a default network filter for each vNIC profile. For more details on how the default network filter is calculated, please refer to the documentation in NetworkFilters . Note The vNIC profile that is created automatically for an external network has no network filter. The output of creating a new vNIC profile depends on the body arguments that were given. If no network filter was given, the default network filter will be configured. For example: <vnic_profile href="/ovirt-engine/api/vnicprofiles/123" id="123"> <name>new_vNIC_name</name> <link href="/ovirt-engine/api/vnicprofiles/123/permissions" rel="permissions"/> <pass_through> <mode>disabled</mode> </pass_through> <port_mirroring>false</port_mirroring> <network href="/ovirt-engine/api/networks/456" id="456"/> <network_filter href="/ovirt-engine/api/networkfilters/789" id="789"/> </vnic_profile> If an empty network filter was given, no network filter will be configured for the specific vNIC profile, regardless of the vNIC profile's default network filter. For example: <vnic_profile> <name>no_network_filter</name> <network_filter/> </vnic_profile> If a specific valid network filter id was given, the vNIC profile will be configured with the given network filter, regardless of the vNIC profile's default network filter. For example: <vnic_profile> <name>user_choice_network_filter</name> <network_filter id="0000001b-001b-001b-001b-0000000001d5"/> </vnic_profile> Table 6.832. Parameters summary Name Type Direction Summary profile VnicProfile In/Out The vNIC profile that is being added. 6.264.2. list GET List all vNIC profiles. The order of the returned list of vNIC profiles isn't guaranteed. Table 6.833. Parameters summary Name Type Direction Summary follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of profiles to return. profiles VnicProfile[] Out The list of all vNIC profiles. 6.264.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.264.2.2. max Sets the maximum number of profiles to return. If not specified all the profiles are returned.
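The add request from section 6.264.1 is directed to the network's vNIC profiles sub-collection; for example (illustrative request line with placeholder identifiers):

POST /ovirt-engine/api/networks/456/vnicprofiles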
6.265. Weight Table 6.834. Methods summary Name Summary get remove 6.265.1. get GET Table 6.835. Parameters summary Name Type Direction Summary filter Boolean In Indicates if the results should be filtered according to the permissions of the user. follow String In Indicates which inner links should be followed . weight Weight Out 6.265.1.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.265.2. remove DELETE Table 6.836. Parameters summary Name Type Direction Summary async Boolean In Indicates if the remove should be performed asynchronously. 6.266. Weights Table 6.837. Methods summary Name Summary add Add a weight to a specified user defined scheduling policy. list Returns the list of weights. 6.266.1. add POST Add a weight to a specified user defined scheduling policy. Table 6.838. Parameters summary Name Type Direction Summary weight Weight In/Out 6.266.2. list GET Returns the list of weights. The order of the returned list of weights isn't guaranteed. Table 6.839. Parameters summary Name Type Direction Summary filter Boolean In Indicates if the results should be filtered according to the permissions of the user. follow String In Indicates which inner links should be followed . max Integer In Sets the maximum number of weights to return. weights Weight[] Out 6.266.2.1. follow Indicates which inner links should be followed . The objects referenced by these links will be fetched as part of the current request. See here for details. 6.266.2.2. max Sets the maximum number of weights to return. If not specified all the weights are returned.
"GET /ovirt-engine/api/hosts/123/storage",
"<host_storages> <host_storage id=\"123\"> </host_storage> </host_storages>",
"<host_storage id=\"123\"> <logical_units> <logical_unit id=\"123\"> <lun_mapping>0</lun_mapping> <paths>1</paths> <product_id>lun0</product_id> <serial>123</serial> <size>10737418240</size> <status>used</status> <vendor_id>LIO-ORG</vendor_id> <volume_group_id>123</volume_group_id> </logical_unit> </logical_units> <type>iscsi</type> <host id=\"123\"/> </host_storage>",
"<host_storage id=\"123\"> <logical_units> <logical_unit id=\"123\"> <lun_mapping>0</lun_mapping> <paths>1</paths> <product_id>lun0</product_id> <serial>123</serial> <size>10737418240</size> <vendor_id>LIO-ORG</vendor_id> <volume_group_id>123</volume_group_id> </logical_unit> </logical_units> <type>iscsi</type> <host id=\"123\"/> </host_storage>",
"POST /ovirt-engine/api/hosts",
"<host> <name>myhost</name> <address>myhost.example.com</address> <root_password>myrootpassword</root_password> </host>",
"POST /ovirt-engine/api/hosts?deploy_hosted_engine=true",
"POST /ovirt-engine/api/hosts",
"<host> <name>myhost</name> <address>myhost.example.com</address> <root_password>123456</root_password> <external_network_provider_configurations> <external_network_provider_configuration> <external_network_provider name=\"ovirt-provider-ovn\"/> </external_network_provider_configuration> </external_network_provider_configurations> </host>",
"GET /ovirt-engine/api/hosts",
"<hosts> <host href=\"/ovirt-engine/api/hosts/123\" id=\"123\"> </host> <host href=\"/ovirt-engine/api/hosts/456\" id=\"456\"> </host> </host>",
"GET /ovirt-engine/api/hosts?all_content=true",
"GET /ovirt-engine/api/icons/123",
"<icon id=\"123\"> <data>Some binary data here</data> <media_type>image/png</media_type> </icon>",
"GET /ovirt-engine/api/icons",
"<icons> <icon id=\"123\"> <data>...</data> <media_type>image/png</media_type> </icon> </icons>",
"transfers_service = system_service.image_transfers_service() transfer = transfers_service.add( types.ImageTransfer( disk=types.Disk( id='123' ) ) )",
"transfers_service = system_service.image_transfers_service() transfer = transfers_service.add( types.ImageTransfer( disk=types.Disk( id='123' ), host=types.Host( id='456' ) ) )",
"transfers_service = system_service.image_transfers_service() transfer = transfers_service.add( types.ImageTransfer( disk=types.Disk( id='123' ), direction=types.ImageTransferDirection.DOWNLOAD ) )",
"transfer_service = transfers_service.image_transfer_service(transfer.id) while transfer.phase == types.ImageTransferPhase.INITIALIZING: time.sleep(3) transfer = transfer_service.get()",
"transfer_headers = { 'Authorization' : transfer.signed_ticket, }",
"Extract the URI, port, and path from the transfer's proxy_url. url = urlparse.urlparse(transfer.proxy_url) Create a new instance of the connection. proxy_connection = HTTPSConnection( url.hostname, url.port, context=ssl.SSLContext(ssl.PROTOCOL_SSLv23) )",
"path = \"/path/to/image\" MB_per_request = 32 with open(path, \"rb\") as disk: size = os.path.getsize(path) chunk_size = 1024*1024*MB_per_request pos = 0 while (pos < size): transfer_service.extend() transfer_headers['Content-Range'] = \"bytes %d-%d/%d\" % (pos, min(pos + chunk_size, size)-1, size) proxy_connection.request( 'PUT', url.path, disk.read(chunk_size), headers=transfer_headers ) r = proxy_connection.getresponse() print r.status, r.reason, \"Completed\", \"{:.0%}\".format(pos/ float(size)) pos += chunk_size",
"output_file = \"/home/user/downloaded_image\" MiB_per_request = 32 chunk_size = 1024*1024*MiB_per_request total = disk_size with open(output_file, \"wb\") as disk: pos = 0 while pos < total: transfer_service.extend() transfer_headers['Range'] = \"bytes=%d-%d\" % (pos, min(total, pos + chunk_size) - 1) proxy_connection.request('GET', proxy_url.path, headers=transfer_headers) r = proxy_connection.getresponse() disk.write(r.read()) print \"Completed\", \"{:.0%}\".format(pos/ float(total)) pos += chunk_size",
"transfer_service.finalize()",
"POST /ovirt-engine/api/disks",
"<disk> <storage_domains> <storage_domain id=\"123\"/> </storage_domains> <alias>mydisk</alias> <initial_size>1073741824</initial_size> <provisioned_size>1073741824</provisioned_size> <format>raw</format> </disk>",
"POST /ovirt-engine/api/imagetransfers",
"<image_transfer> <disk id=\"456\"/> <direction>upload|download</direction> </image_transfer>",
"<image_transfer id=\"123\"> <direction>download|upload</direction> <phase>initializing|transferring</phase> <proxy_url>https://proxy_fqdn:54323/images/41c732d4-2210-4e7b-9e5c-4e2805baadbb</proxy_url> <transfer_url>https://daemon_fqdn:54322/images/41c732d4-2210-4e7b-9e5c-4e2805baadbb</transfer_url> </image_transfer>",
"curl --cacert /etc/pki/ovirt-engine/ca.pem https://daemon_fqdn:54322/images/41c732d4-2210-4e7b-9e5c-4e2805baadbb -o <output_file>",
"curl --cacert /etc/pki/ovirt-engine/ca.pem --upload-file <file_to_upload> -X PUT https://daemon_fqdn:54322/images/41c732d4-2210-4e7b-9e5c-4e2805baadbb",
"POST /ovirt-engine/api/imagetransfers/123/finalize",
"<action />",
"transfer_service = transfers_service.image_transfer_service(transfer.id) transfer_service.resume() transfer = transfer_service.get() while transfer.phase == types.ImageTransferPhase.RESUMING: time.sleep(1) transfer = transfer_service.get()",
"POST /ovirt-engine/api/imagetransfers",
"<image_transfer> <disk id=\"123\"/> <direction>upload|download</direction> </image_transfer>",
"POST /ovirt-engine/api/imagetransfers",
"<image_transfer> <snapshot id=\"456\"/> <direction>download|upload</direction> </image_transfer>",
"GET /ovirt-engine/api/instancetypes/123",
"DELETE /ovirt-engine/api/instancetypes/123",
"PUT /ovirt-engine/api/instancetypes/123",
"<instance_type> <memory>1073741824</memory> <cpu> <topology> <cores>1</cores> <sockets>2</sockets> <threads>1</threads> </topology> </cpu> </instance_type>",
"POST /ovirt-engine/api/instancetypes",
"<instance_type> <name>myinstancetype</name> </template>",
"<instance_type> <name>myinstancetype</name> <console> <enabled>true</enabled> </console> <cpu> <topology> <cores>2</cores> <sockets>2</sockets> <threads>1</threads> </topology> </cpu> <custom_cpu_model>AMD Opteron_G2</custom_cpu_model> <custom_emulated_machine>q35</custom_emulated_machine> <display> <monitors>1</monitors> <single_qxl_pci>true</single_qxl_pci> <smartcard_enabled>true</smartcard_enabled> <type>spice</type> </display> <high_availability> <enabled>true</enabled> <priority>1</priority> </high_availability> <io> <threads>2</threads> </io> <memory>4294967296</memory> <memory_policy> <ballooning>true</ballooning> <guaranteed>268435456</guaranteed> </memory_policy> <migration> <auto_converge>inherit</auto_converge> <compressed>inherit</compressed> <policy id=\"00000000-0000-0000-0000-000000000000\"/> </migration> <migration_downtime>2</migration_downtime> <os> <boot> <devices> <device>hd</device> </devices> </boot> </os> <rng_device> <rate> <bytes>200</bytes> <period>2</period> </rate> <source>urandom</source> </rng_device> <soundcard_enabled>true</soundcard_enabled> <usb> <enabled>true</enabled> <type>native</type> </usb> <virtio_scsi> <enabled>true</enabled> </virtio_scsi> </instance_type>",
"DELETE /ovirt-engine/api/datacenters/123/iscsibonds/456",
"PUT /ovirt-engine/api/datacenters/123/iscsibonds/1234",
"<iscsi_bond> <name>mybond</name> <description>My iSCSI bond</description> </iscsi_bond>",
"POST /ovirt-engine/api/datacenters/123/iscsibonds",
"<iscsi_bond> <name>mybond</name> <storage_connections> <storage_connection id=\"456\"/> <storage_connection id=\"789\"/> </storage_connections> <networks> <network id=\"abc\"/> </networks> </iscsi_bond>",
"POST /ovirt-engine/api/jobs/clear",
"<action/>",
"POST /ovirt-engine/api/jobs/end",
"<action> <force>true</force> <status>finished</status> </action>",
"GET /ovirt-engine/api/jobs/123",
"<job href=\"/ovirt-engine/api/jobs/123\" id=\"123\"> <actions> <link href=\"/ovirt-engine/api/jobs/123/clear\" rel=\"clear\"/> <link href=\"/ovirt-engine/api/jobs/123/end\" rel=\"end\"/> </actions> <description>Adding Disk</description> <link href=\"/ovirt-engine/api/jobs/123/steps\" rel=\"steps\"/> <auto_cleared>true</auto_cleared> <end_time>2016-12-12T23:07:29.758+02:00</end_time> <external>false</external> <last_updated>2016-12-12T23:07:29.758+02:00</last_updated> <start_time>2016-12-12T23:07:26.593+02:00</start_time> <status>failed</status> <owner href=\"/ovirt-engine/api/users/456\" id=\"456\"/> </job>",
"POST /ovirt-engine/api/jobs",
"<job> <description>Doing some work</description> <auto_cleared>true</auto_cleared> </job>",
"<job href=\"/ovirt-engine/api/jobs/123\" id=\"123\"> <actions> <link href=\"/ovirt-engine/api/jobs/123/clear\" rel=\"clear\"/> <link href=\"/ovirt-engine/api/jobs/123/end\" rel=\"end\"/> </actions> <description>Doing some work</description> <link href=\"/ovirt-engine/api/jobs/123/steps\" rel=\"steps\"/> <auto_cleared>true</auto_cleared> <external>true</external> <last_updated>2016-12-13T02:15:42.130+02:00</last_updated> <start_time>2016-12-13T02:15:42.130+02:00</start_time> <status>started</status> <owner href=\"/ovirt-engine/api/users/456\" id=\"456\"/> </job>",
"GET /ovirt-engine/api/jobs",
"<jobs> <job href=\"/ovirt-engine/api/jobs/123\" id=\"123\"> <actions> <link href=\"/ovirt-engine/api/jobs/123/clear\" rel=\"clear\"/> <link href=\"/ovirt-engine/api/jobs/123/end\" rel=\"end\"/> </actions> <description>Adding Disk</description> <link href=\"/ovirt-engine/api/jobs/123/steps\" rel=\"steps\"/> <auto_cleared>true</auto_cleared> <end_time>2016-12-12T23:07:29.758+02:00</end_time> <external>false</external> <last_updated>2016-12-12T23:07:29.758+02:00</last_updated> <start_time>2016-12-12T23:07:26.593+02:00</start_time> <status>failed</status> <owner href=\"/ovirt-engine/api/users/456\" id=\"456\"/> </job> </jobs>",
"GET /ovirt-engine/api/katelloerrata",
"<katello_errata> <katello_erratum href=\"/ovirt-engine/api/katelloerrata/123\" id=\"123\"> <name>RHBA-2013:XYZ</name> <description>The description of the erratum</description> <title>some bug fix update</title> <type>bugfix</type> <issued>2013-11-20T02:00:00.000+02:00</issued> <solution>Few guidelines regarding the solution</solution> <summary>Updated packages that fix one bug are now available for XYZ</summary> <packages> <package> <name>libipa_hbac-1.9.2-82.11.el6_4.i686</name> </package> </packages> </katello_erratum> </katello_errata>",
"GET /ovirt-engine/api/katelloerrata/123",
"<katello_erratum href=\"/ovirt-engine/api/katelloerrata/123\" id=\"123\"> <name>RHBA-2013:XYZ</name> <description>The description of the erratum</description> <title>some bug fix update</title> <type>bugfix</type> <issued>2013-11-20T02:00:00.000+02:00</issued> <solution>Few guidelines regarding the solution</solution> <summary>Updated packages that fix one bug are now available for XYZ</summary> <packages> <package> <name>libipa_hbac-1.9.2-82.11.el6_4.i686</name> </package> </packages> </katello_erratum>",
"GET ovirt-engine/api/hosts/123/nics/321/linklayerdiscoveryprotocolelements",
"<link_layer_discovery_protocol_elements> <link_layer_discovery_protocol_element> <name>Port Description</name> <properties> <property> <name>port description</name> <value>Summit300-48-Port 1001</value> </property> </properties> <type>4</type> </link_layer_discovery_protocol_element> <link_layer_discovery_protocol_elements>",
"DELETE /ovirt-engine/api/macpools/123",
"PUT /ovirt-engine/api/macpools/123",
"<mac_pool> <name>UpdatedMACPool</name> <description>An updated MAC address pool</description> <allow_duplicates>false</allow_duplicates> <ranges> <range> <from>00:1A:4A:16:01:51</from> <to>00:1A:4A:16:01:e6</to> </range> <range> <from>02:1A:4A:01:00:00</from> <to>02:1A:4A:FF:FF:FF</to> </range> </ranges> </mac_pool>",
"POST /ovirt-engine/api/macpools",
"<mac_pool> <name>MACPool</name> <description>A MAC address pool</description> <allow_duplicates>true</allow_duplicates> <default_pool>false</default_pool> <ranges> <range> <from>00:1A:4A:16:01:51</from> <to>00:1A:4A:16:01:e6</to> </range> </ranges> </mac_pool>",
"GET /ovirt-engine/api/networks/123",
"<network href=\"/ovirt-engine/api/networks/123\" id=\"123\"> <name>ovirtmgmt</name> <description>Default Management Network</description> <link href=\"/ovirt-engine/api/networks/123/permissions\" rel=\"permissions\"/> <link href=\"/ovirt-engine/api/networks/123/vnicprofiles\" rel=\"vnicprofiles\"/> <link href=\"/ovirt-engine/api/networks/123/networklabels\" rel=\"networklabels\"/> <mtu>0</mtu> <stp>false</stp> <usages> <usage>vm</usage> </usages> <data_center href=\"/ovirt-engine/api/datacenters/456\" id=\"456\"/> </network>",
"DELETE /ovirt-engine/api/networks/123",
"DELETE /ovirt-engine/api/datacenters/123/networks/456",
"PUT /ovirt-engine/api/networks/123",
"<network> <description>My updated description</description> </network>",
"PUT /ovirt-engine/api/datacenters/123/networks/456",
"<network> <mtu>1500</mtu> </network>",
"<network_filter id=\"00000019-0019-0019-0019-00000000026b\"> <name>example-network-filter-b</name> <version> <major>4</major> <minor>0</minor> <build>-1</build> <revision>-1</revision> </version> </network_filter>",
"GET http://localhost:8080/ovirt-engine/api/clusters/{cluster:id}/networkfilters",
"<network_filters> <network_filter id=\"00000019-0019-0019-0019-00000000026c\"> <name>example-network-filter-a</name> <version> <major>4</major> <minor>0</minor> <build>-1</build> <revision>-1</revision> </version> </network_filter> <network_filter id=\"00000019-0019-0019-0019-00000000026b\"> <name>example-network-filter-b</name> <version> <major>4</major> <minor>0</minor> <build>-1</build> <revision>-1</revision> </version> </network_filter> <network_filter id=\"00000019-0019-0019-0019-00000000026a\"> <name>example-network-filter-a</name> <version> <major>3</major> <minor>0</minor> <build>-1</build> <revision>-1</revision> </version> </network_filter> </network_filters>",
"DELETE /ovirt-engine/api/networks/123/labels/exemplary",
"POST /ovirt-engine/api/networks/123/labels",
"<label id=\"mylabel\"/>",
"POST /ovirt-engine/api/networks",
"<network> <name>mynetwork</name> <data_center id=\"123\"/> </network>",
"POST /ovirt-engine/api/datacenters/123/networks",
"<network> <name>ovirtmgmt</name> </network>",
"POST /ovirt-engine/api/networks",
"<network> <name>exnetwork</name> <external_provider id=\"456\"/> <data_center id=\"123\"/> </network>",
"GET /ovirt-engine/api/networks",
"<networks> <network href=\"/ovirt-engine/api/networks/123\" id=\"123\"> <name>ovirtmgmt</name> <description>Default Management Network</description> <link href=\"/ovirt-engine/api/networks/123/permissions\" rel=\"permissions\"/> <link href=\"/ovirt-engine/api/networks/123/vnicprofiles\" rel=\"vnicprofiles\"/> <link href=\"/ovirt-engine/api/networks/123/networklabels\" rel=\"networklabels\"/> <mtu>0</mtu> <stp>false</stp> <usages> <usage>vm</usage> </usages> <data_center href=\"/ovirt-engine/api/datacenters/456\" id=\"456\"/> </network> </networks>",
"DELETE /ovirt-engine/api/vms/789/nics/456/networkfilterparameters/123",
"PUT /ovirt-engine/api/vms/789/nics/456/networkfilterparameters/123",
"<network_filter_parameter> <name>updatedName</name> <value>updatedValue</value> </network_filter_parameter>",
"POST /ovirt-engine/api/vms/789/nics/456/networkfilterparameters",
"<network_filter_parameter> <name>IP</name> <value>10.0.1.2</value> </network_filter_parameter>",
"POST /ovirt-engine/api/openstackimageproviders/123/images/456/import",
"<action> <storage_domain> <name>images0</name> </storage_domain> <cluster> <name>images0</name> </cluster> </action>",
"POST /ovirt-engine/api/externalhostproviders/123/testconnectivity",
"GET /ovirt-engine/api/openstacknetworkproviders/1234",
"DELETE /ovirt-engine/api/openstacknetworkproviders/1234",
"POST /ovirt-engine/api/externalhostproviders/123/testconnectivity",
"PUT /ovirt-engine/api/openstacknetworkproviders/1234",
"<openstack_network_provider> <name>ovn-network-provider</name> <requires_authentication>false</requires_authentication> <url>http://some_server_url.domain.com:9696</url> <tenant_name>oVirt</tenant_name> <type>external</type> </openstack_network_provider>",
"POST /ovirt-engine/api/externalhostproviders/123/testconnectivity",
"POST /ovirt-engine/api/openstackvolumeproviders",
"<openstack_volume_provider> <name>mycinder</name> <url>https://mycinder.example.com:8776</url> <data_center> <name>mydc</name> </data_center> <requires_authentication>true</requires_authentication> <username>admin</username> <password>mypassword</password> <tenant_name>mytenant</tenant_name> </openstack_volume_provider>",
"GET /ovirt-engine/api/roles/123/permits/456",
"<permit href=\"/ovirt-engine/api/roles/123/permits/456\" id=\"456\"> <name>change_vm_cd</name> <administrative>false</administrative> <role href=\"/ovirt-engine/api/roles/123\" id=\"123\"/> </permit>",
"DELETE /ovirt-engine/api/roles/123/permits/456",
"POST /ovirt-engine/api/roles/123/permits",
"<permit> <name>create_vm</name> </permit>",
"GET /ovirt-engine/api/roles/123/permits",
"<permits> <permit href=\"/ovirt-engine/api/roles/123/permits/5\" id=\"5\"> <name>change_vm_cd</name> <administrative>false</administrative> <role href=\"/ovirt-engine/api/roles/123\" id=\"123\"/> </permit> <permit href=\"/ovirt-engine/api/roles/123/permits/7\" id=\"7\"> <name>connect_to_vm</name> <administrative>false</administrative> <role href=\"/ovirt-engine/api/roles/123\" id=\"123\"/> </permit> </permits>",
"GET /ovirt-engine/api/datacenters/123/qoss/123",
"<qos href=\"/ovirt-engine/api/datacenters/123/qoss/123\" id=\"123\"> <name>123</name> <description>123</description> <max_iops>1</max_iops> <max_throughput>1</max_throughput> <type>storage</type> <data_center href=\"/ovirt-engine/api/datacenters/123\" id=\"123\"/> </qos>",
"DELETE /ovirt-engine/api/datacenters/123/qoss/123",
"PUT /ovirt-engine/api/datacenters/123/qoss/123",
"curl -u admin@internal:123456 -X PUT -H \"content-type: application/xml\" -d \"<qos><name>321</name><description>321</description><max_iops>10</max_iops></qos>\" https://engine/ovirt-engine/api/datacenters/123/qoss/123",
"<qos href=\"/ovirt-engine/api/datacenters/123/qoss/123\" id=\"123\"> <name>321</name> <description>321</description> <max_iops>10</max_iops> <max_throughput>1</max_throughput> <type>storage</type> <data_center href=\"/ovirt-engine/api/datacenters/123\" id=\"123\"/> </qos>",
"POST /ovirt-engine/api/datacenters/123/qoss",
"<qos href=\"/ovirt-engine/api/datacenters/123/qoss/123\" id=\"123\"> <name>123</name> <description>123</description> <max_iops>10</max_iops> <type>storage</type> <data_center href=\"/ovirt-engine/api/datacenters/123\" id=\"123\"/> </qos>",
"GET /ovirt-engine/api/datacenter/123/qoss",
"<qoss> <qos href=\"/ovirt-engine/api/datacenters/123/qoss/1\" id=\"1\">...</qos> <qos href=\"/ovirt-engine/api/datacenters/123/qoss/2\" id=\"2\">...</qos> <qos href=\"/ovirt-engine/api/datacenters/123/qoss/3\" id=\"3\">...</qos> </qoss>",
"GET /ovirt-engine/api/datacenters/123/quotas/456",
"<quota id=\"456\"> <name>myquota</name> <description>My new quota for virtual machines</description> <cluster_hard_limit_pct>20</cluster_hard_limit_pct> <cluster_soft_limit_pct>80</cluster_soft_limit_pct> <storage_hard_limit_pct>20</storage_hard_limit_pct> <storage_soft_limit_pct>80</storage_soft_limit_pct> </quota>",
"DELETE /ovirt-engine/api/datacenters/123-456/quotas/654-321 -0472718ab224 HTTP/1.1 Accept: application/xml Content-type: application/xml",
"PUT /ovirt-engine/api/datacenters/123/quotas/456",
"<quota> <cluster_hard_limit_pct>30</cluster_hard_limit_pct> <cluster_soft_limit_pct>70</cluster_soft_limit_pct> <storage_hard_limit_pct>20</storage_hard_limit_pct> <storage_soft_limit_pct>80</storage_soft_limit_pct> </quota>",
"POST /ovirt-engine/api/datacenters/123/quotas/456/quotastoragelimits",
"<quota_storage_limit> <limit>100</limit> </quota_storage_limit>",
"POST /ovirt-engine/api/datacenters/123/quotas/456/quotastoragelimits",
"<quota_storage_limit> <limit>50</limit> <storage_domain id=\"000\"/> </quota_storage_limit>",
"POST /ovirt-engine/api/datacenters/123/quotas",
"<quota> <name>myquota</name> <description>My new quota for virtual machines</description> </quota>",
"GET /ovirt-engine/api/roles/123",
"<role id=\"123\"> <name>MyRole</name> <description>MyRole description</description> <link href=\"/ovirt-engine/api/roles/123/permits\" rel=\"permits\"/> <administrative>true</administrative> <mutable>false</mutable> </role>",
"DELETE /ovirt-engine/api/roles/{role_id}",
"PUT /ovirt-engine/api/roles/123",
"<role> <name>MyNewRoleName</name> <description>My new description of the role</description> <administrative>true</administrative> </group>",
"POST /ovirt-engine/api/roles",
"<role> <name>MyRole</name> <description>My custom role to create virtual machines</description> <administrative>false</administrative> <permits> <permit id=\"1\"/> <permit id=\"1300\"/> </permits> </group>",
"GET /ovirt-engine/api/roles",
"<roles> <role id=\"123\"> <name>SuperUser</name> <description>Roles management administrator</description> <link href=\"/ovirt-engine/api/roles/123/permits\" rel=\"permits\"/> <administrative>true</administrative> <mutable>false</mutable> </role> </roles>",
"GET /ovirt-engine/api/vms/123/snapshots/456?all_content=true",
"POST /ovirt-engine/api/vms/123/snapshots/456/restore",
"<action/>",
"POST /ovirt-engine/api/vms/123/snapshots/456/restore",
"<action> <disks> <disk id=\"111\"> <image_id>222</image_id> </disk> </disks> </action>",
"POST /ovirt-engine/api/vms/123/snapshots",
"<snapshot> <description>My snapshot</description> </snapshot>",
"<snapshot> <description>My snapshot</description> <disk_attachments> <disk_attachment> <disk id=\"123\"> <image_id>456</image_id> </disk> </disk_attachment> </disk_attachments> </snapshot>",
"<snapshot> <description>My snapshot</description> <persist_memorystate>false</persist_memorystate> </snapshot>",
"GET /ovirt-engine/api/vms/123/snapshots?all_content=true",
"GET /ovirt-engine/api/users/123/sshpublickeys",
"<ssh_public_keys> <ssh_public_key href=\"/ovirt-engine/api/users/123/sshpublickeys/456\" id=\"456\"> <content>ssh-rsa ...</content> <user href=\"/ovirt-engine/api/users/123\" id=\"123\"/> </ssh_public_key> </ssh_public_keys>",
"{ \"ssh_public_key\": [ { \"content\": \"ssh-rsa ...\", \"user\": { \"href\": \"/ovirt-engine/api/users/123\", \"id\": \"123\" }, \"href\": \"/ovirt-engine/api/users/123/sshpublickeys/456\", \"id\": \"456\" } ] }",
"GET /ovirt-engine/api/vms/123/statistics",
"<statistics> <statistic href=\"/ovirt-engine/api/vms/123/statistics/456\" id=\"456\"> <name>memory.installed</name> <description>Total memory configured</description> <kind>gauge</kind> <type>integer</type> <unit>bytes</unit> <values> <value> <datum>1073741824</datum> </value> </values> <vm href=\"/ovirt-engine/api/vms/123\" id=\"123\"/> </statistic> </statistics>",
"GET /ovirt-engine/api/vms/123/statistics/456",
"<statistic href=\"/ovirt-engine/api/vms/123/statistics/456\" id=\"456\"> <name>memory.installed</name> <description>Total memory configured</description> <kind>gauge</kind> <type>integer</type> <unit>bytes</unit> <values> <value> <datum>1073741824</datum> </value> </values> <vm href=\"/ovirt-engine/api/vms/123\" id=\"123\"/> </statistic>",
"POST /ovirt-engine/api/jobs/123/steps/456/end",
"<action> <force>true</force> <succeeded>true</succeeded> </action>",
"GET /ovirt-engine/api/jobs/123/steps/456",
"<step href=\"/ovirt-engine/api/jobs/123/steps/456\" id=\"456\"> <actions> <link href=\"/ovirt-engine/api/jobs/123/steps/456/end\" rel=\"end\"/> </actions> <description>Validating</description> <end_time>2016-12-12T23:07:26.627+02:00</end_time> <external>false</external> <number>0</number> <start_time>2016-12-12T23:07:26.605+02:00</start_time> <status>finished</status> <type>validating</type> <job href=\"/ovirt-engine/api/jobs/123\" id=\"123\"/> </step>",
"POST /ovirt-engine/api/jobs/123/steps",
"<step> <description>Validating</description> <start_time>2016-12-12T23:07:26.605+02:00</start_time> <status>started</status> <type>validating</type> </step>",
"<step href=\"/ovirt-engine/api/jobs/123/steps/456\" id=\"456\"> <actions> <link href=\"/ovirt-engine/api/jobs/123/steps/456/end\" rel=\"end\"/> </actions> <description>Validating</description> <link href=\"/ovirt-engine/api/jobs/123/steps/456/statistics\" rel=\"statistics\"/> <external>true</external> <number>2</number> <start_time>2016-12-13T01:06:15.380+02:00</start_time> <status>started</status> <type>validating</type> <job href=\"/ovirt-engine/api/jobs/123\" id=\"123\"/> </step>",
"GET /ovirt-engine/api/job/123/steps",
"<steps> <step href=\"/ovirt-engine/api/jobs/123/steps/456\" id=\"456\"> <actions> <link href=\"/ovirt-engine/api/jobs/123/steps/456/end\" rel=\"end\"/> </actions> <description>Validating</description> <link href=\"/ovirt-engine/api/jobs/123/steps/456/statistics\" rel=\"statistics\"/> <external>true</external> <number>2</number> <start_time>2016-12-13T01:06:15.380+02:00</start_time> <status>started</status> <type>validating</type> <job href=\"/ovirt-engine/api/jobs/123\" id=\"123\"/> </step> </steps>",
"<host_storage id=\"360014051136c20574f743bdbd28177fd\"> <logical_units> <logical_unit id=\"360014051136c20574f743bdbd28177fd\"> <lun_mapping>0</lun_mapping> <paths>1</paths> <product_id>lun0</product_id> <serial>SLIO-ORG_lun0_1136c205-74f7-43bd-bd28-177fd5ce6993</serial> <size>10737418240</size> <status>used</status> <vendor_id>LIO-ORG</vendor_id> <volume_group_id>O9Du7I-RahN-ECe1-dZ1w-nh0b-64io-MNzIBZ</volume_group_id> </logical_unit> </logical_units> <type>iscsi</type> <host id=\"8bb5ade5-e988-4000-8b93-dbfc6717fe50\"/> </host_storage>",
"<host_storage id=\"360014051136c20574f743bdbd28177fd\"> <logical_units> <logical_unit id=\"360014051136c20574f743bdbd28177fd\"> <lun_mapping>0</lun_mapping> <paths>1</paths> <product_id>lun0</product_id> <serial>SLIO-ORG_lun0_1136c205-74f7-43bd-bd28-177fd5ce6993</serial> <size>10737418240</size> <vendor_id>LIO-ORG</vendor_id> <volume_group_id>O9Du7I-RahN-ECe1-dZ1w-nh0b-64io-MNzIBZ</volume_group_id> </logical_unit> </logical_units> <type>iscsi</type> <host id=\"8bb5ade5-e988-4000-8b93-dbfc6717fe50\"/> </host_storage>",
"POST /ovirt-engine/api/storagedomains/123/reduceluns",
"<action> <logical_units> <logical_unit id=\"1IET_00010001\"/> <logical_unit id=\"1IET_00010002\"/> </logical_units> </action>",
"Note that this operation is only applicable to block storage domains (i.e., storage domains with the <<types/storage_type, storage type> of iSCSI or FCP).",
"POST /ovirt-engine/api/storagedomains/262b056b-aede-40f1-9666-b883eff59d40/refreshluns",
"<action> <logical_units> <logical_unit id=\"1IET_00010001\"/> <logical_unit id=\"1IET_00010002\"/> </logical_units> </action>",
"DELETE /ovirt-engine/api/storagedomains/123?destroy=true",
"DELETE /ovirt-engine/api/storagedomains/123?host=myhost",
"PUT /ovirt-engine/api/storagedomains/123",
"<storage_domain> <name>data2</name> <wipe_after_delete>true</wipe_after_delete> </storage_domain>",
"POST /ovirt-engine/api/storagedomains/123/disks?unregistered=true",
"<disk id=\"456\"/>",
"POST /ovirt-engine/api/storagedomains/123/disks",
"<disk> <name>mydisk</name> <format>cow</format> <provisioned_size>1073741824</provisioned_size> </disk>",
"GET /ovirt-engine/api/storagedomains/123/disks?unregistered=true",
"POST /ovirt-engine/api/storagedomains/123/templates/456/import",
"<action> <storage_domain> <name>myexport</name> </storage_domain> <cluster> <name>mycluster</name> </cluster> </action>",
"GET /ovirt-engine/api/storagedomains/123/templates?unregistered=true",
"POST /ovirt-engine/api/storagedomains/123/vms/456/import",
"<action> <storage_domain> <name>mydata</name> </storage_domain> <cluster> <name>mycluster</name> </cluster> </action>",
"<action> <storage_domain> <name>mydata</name> </storage_domain> <cluster> <name>mycluster</name> </cluster> <clone>true</clone> <vm> <name>myvm</name> </vm> </action>",
"<action> <cluster> <name>mycluster</name> </cluster> <vm> <name>myvm</name> </vm> <disks> <disk id=\"123\"/> <disk id=\"456\"/> </disks> </action>",
"DELETE /ovirt-engine/api/storagedomains/123/vms/456",
"GET /ovirt-engine/api/storagedomains/123/vms",
"<vms> <vm id=\"456\" href=\"/api/storagedomains/123/vms/456\"> <name>vm1</name> <storage_domain id=\"123\" href=\"/api/storagedomains/123\"/> <actions> <link rel=\"import\" href=\"/api/storagedomains/123/vms/456/import\"/> </actions> </vm> </vms>",
"GET /ovirt-engine/api/storagedomains/123/vms?unregistered=true",
"POST /ovirt-engine/api/storagedomains",
"<storage_domain> <name>mydata</name> <type>data</type> <storage> <type>nfs</type> <address>mynfs.example.com</address> <path>/exports/mydata</path> </storage> <host> <name>myhost</name> </host> </storage_domain>",
"<storage_domain> <name>myisos</name> <type>iso</type> <storage> <type>nfs</type> <address>mynfs.example.com</address> <path>/export/myisos</path> </storage> <host> <name>myhost</name> </host> </storage_domain>",
"<storage_domain> <name>myiscsi</name> <type>data</type> <storage> <type>iscsi</type> <logical_units> <logical_unit id=\"3600144f09dbd050000004eedbd340001\"/> <logical_unit id=\"3600144f09dbd050000004eedbd340002\"/> </logical_units> </storage> <host> <name>myhost</name> </host> </storage_domain>",
"DELETE /ovirt-engine/api/storageconnections/123?host=456",
"PUT /ovirt-engine/api/storageconnections/123",
"<storage_connection> <address>mynewnfs.example.com</address> </storage_connection>",
"PUT /ovirt-engine/api/storageconnections/123",
"<storage_connection> <port>3260</port> <target>iqn.2017-01.com.myhost:444</target> </storage_connection>",
"PUT /ovirt-engine/api/hosts/123/storageconnectionextensions/456",
"<storage_connection_extension> <target>iqn.2016-01.com.example:mytarget</target> <username>myuser</username> <password>mypassword</password> </storage_connection_extension>",
"POST /ovirt-engine/api/hosts/123/storageconnectionextensions",
"<storage_connection_extension> <target>iqn.2016-01.com.example:mytarget</target> <username>myuser</username> <password>mypassword</password> </storage_connection_extension>",
"POST /ovirt-engine/api/storageconnections",
"<storage_connection> <type>nfs</type> <address>mynfs.example.com</address> <path>/export/mydata</path> <host> <name>myhost</name> </host> </storage_connection>",
"GET /ovirt-engine/api",
"<api> <link rel=\"capabilities\" href=\"/api/capabilities\"/> <link rel=\"clusters\" href=\"/api/clusters\"/> <link rel=\"clusters/search\" href=\"/api/clusters?search={query}\"/> <link rel=\"datacenters\" href=\"/api/datacenters\"/> <link rel=\"datacenters/search\" href=\"/api/datacenters?search={query}\"/> <link rel=\"events\" href=\"/api/events\"/> <link rel=\"events/search\" href=\"/api/events?search={query}\"/> <link rel=\"hosts\" href=\"/api/hosts\"/> <link rel=\"hosts/search\" href=\"/api/hosts?search={query}\"/> <link rel=\"networks\" href=\"/api/networks\"/> <link rel=\"roles\" href=\"/api/roles\"/> <link rel=\"storagedomains\" href=\"/api/storagedomains\"/> <link rel=\"storagedomains/search\" href=\"/api/storagedomains?search={query}\"/> <link rel=\"tags\" href=\"/api/tags\"/> <link rel=\"templates\" href=\"/api/templates\"/> <link rel=\"templates/search\" href=\"/api/templates?search={query}\"/> <link rel=\"users\" href=\"/api/users\"/> <link rel=\"groups\" href=\"/api/groups\"/> <link rel=\"domains\" href=\"/api/domains\"/> <link rel=\"vmpools\" href=\"/api/vmpools\"/> <link rel=\"vmpools/search\" href=\"/api/vmpools?search={query}\"/> <link rel=\"vms\" href=\"/api/vms\"/> <link rel=\"vms/search\" href=\"/api/vms?search={query}\"/> <product_info> <name>oVirt Engine</name> <vendor>ovirt.org</vendor> <version> <build>4</build> <full_version>4.0.4</full_version> <major>4</major> <minor>0</minor> <revision>0</revision> </version> </product_info> <special_objects> <blank_template href=\"/ovirt-engine/api/templates/00000000-0000-0000-0000-000000000000\" id=\"00000000-0000-0000-0000-000000000000\"/> <root_tag href=\"/ovirt-engine/api/tags/00000000-0000-0000-0000-000000000000\" id=\"00000000-0000-0000-0000-000000000000\"/> </special_objects> <summary> <hosts> <active>0</active> <total>0</total> </hosts> <storage_domains> <active>0</active> <total>1</total> </storage_domains> <users> <active>1</active> <total>1</total> </users> <vms> <active>0</active> <total>0</total> </vms> </summary> <time>2016-09-14T12:00:48.132+02:00</time> </api>",
"GET /ovirt-engine/api/options/MigrationPoliciesSupported",
"<system_option href=\"/ovirt-engine/api/options/MigrationPoliciesSupported\" id=\"MigrationPoliciesSupported\"> <name>MigrationPoliciesSupported</name> <values> <system_option_value> <value>true</value> <version>4.0</version> </system_option_value> <system_option_value> <value>true</value> <version>4.1</version> </system_option_value> <system_option_value> <value>true</value> <version>4.2</version> </system_option_value> <system_option_value> <value>false</value> <version>3.6</version> </system_option_value> </values> </system_option>",
"GET /ovirt-engine/api/options/MigrationPoliciesSupported?version=4.2",
"<system_option href=\"/ovirt-engine/api/options/MigrationPoliciesSupported\" id=\"MigrationPoliciesSupported\"> <name>MigrationPoliciesSupported</name> <values> <system_option_value> <value>true</value> <version>4.2</version> </system_option_value> </values> </system_option>",
"POST /ovirt-engine/api/vms/123/permissions",
"<permission> <role> <name>UserVmManager</name> </role> <user id=\"456\"/> </permission>",
"POST /ovirt-engine/api/permissions",
"<permission> <role> <name>SuperUser</name> </role> <user id=\"456\"/> </permission>",
"POST /ovirt-engine/api/clusters/123/permissions",
"<permission> <role> <name>UserRole</name> </role> <group id=\"789\"/> </permission>",
"GET /ovirt-engine/api/clusters/123/permissions",
"<permissions> <permission id=\"456\"> <cluster id=\"123\"/> <role id=\"789\"/> <user id=\"451\"/> </permission> <permission id=\"654\"> <cluster id=\"123\"/> <role id=\"789\"/> <group id=\"127\"/> </permission> </permissions>",
"GET /ovirt-engine/api/tags/123",
"<tag href=\"/ovirt-engine/api/tags/123\" id=\"123\"> <name>root</name> <description>root</description> </tag>",
"DELETE /ovirt-engine/api/tags/123",
"PUT /ovirt-engine/api/tags/123",
"<tag> <parent id=\"456\"/> </tag>",
"<tag> <parent> <name>mytag</name> </parent> </tag>",
"POST /ovirt-engine/api/tags",
"<tag> <name>mytag</name> </tag>",
"<tag> <name>mytag</name> <parent> <name>myparenttag</name> </parent> </tag>",
"GET /ovirt-engine/api/tags",
"<tags> <tag href=\"/ovirt-engine/api/tags/222\" id=\"222\"> <name>root2</name> <description>root2</description> <parent href=\"/ovirt-engine/api/tags/111\" id=\"111\"/> </tag> <tag href=\"/ovirt-engine/api/tags/333\" id=\"333\"> <name>root3</name> <description>root3</description> <parent href=\"/ovirt-engine/api/tags/222\" id=\"222\"/> </tag> <tag href=\"/ovirt-engine/api/tags/111\" id=\"111\"> <name>root</name> <description>root</description> </tag> </tags>",
"root: (id: 111) - root2 (id: 222) - root3 (id: 333)",
"POST /ovirt-engine/api/templates/123/export",
"<action> <storage_domain id=\"456\"/> <exclusive>true<exclusive/> </action>",
"DELETE /ovirt-engine/api/templates/123",
"PUT /ovirt-engine/api/templates/123",
"<template> <memory>1073741824</memory> </template>",
"<template> <version> <version_name>mytemplate_2</version_name> </version> </template>",
"GET /ovirt-engine/api/templates/123/cdroms/",
"<cdrom href=\"...\" id=\"00000000-0000-0000-0000-000000000000\"> <template href=\"/ovirt-engine/api/templates/123\" id=\"123\"/> <file id=\"mycd.iso\"/> </cdrom>",
"<cdrom href=\"...\" id=\"00000000-0000-0000-0000-000000000000\"> <template href=\"/ovirt-engine/api/templates/123\" id=\"123\"/> </cdrom>",
"DELETE /ovirt-engine/api/templates/{template:id}/diskattachments/{attachment:id}?storage_domain=072fbaa1-08f3-4a40-9f34-a5ca22dd1d74",
"POST /ovirt-engine/api/templates",
"<template> <name>mytemplate</name> <vm id=\"123\"/> </template>",
"<template> <name>mytemplate</name> <vm id=\"123\"> <disk_attachments> <disk_attachment> <disk id=\"456\"> <name>mydisk</name> <format>cow</format> <sparse>true</sparse> </disk> </disk_attachment> </disk_attachments> </vm> </template>",
"<template> <name>mytemplate</name> <vm id=\"123\"/> <version> <base_template id=\"456\"/> <version_name>mytemplate_001</version_name> </version> </template>",
"<template> <name>mytemplate</name> <storage_domain id=\"123\"/> <vm id=\"456\"> <disk_attachments> <disk_attachment> <disk id=\"789\"> <format>cow</format> <sparse>true</sparse> </disk> </disk_attachment> </disk_attachments> </vm> </template>",
"<template> <name>mytemplate</name> <vm id=\"123\"> <disk_attachments> <disk_attachment> <disk id=\"456\"> <format>cow</format> <sparse>true</sparse> <storage_domains> <storage_domain id=\"789\"/> </storage_domains> </disk> </disk_attachment> </disk_attachments> </vm> </template>",
"POST /ovirt-engine/api/templates?clone_permissions=true",
"<template> <name>mytemplate<name> <vm> <name>myvm<name> </vm> </template>",
"GET /ovirt-engine/api/templates",
"GET /ovirt-engine/api/users/1234",
"<user href=\"/ovirt-engine/api/users/1234\" id=\"1234\"> <name>admin</name> <link href=\"/ovirt-engine/api/users/1234/sshpublickeys\" rel=\"sshpublickeys\"/> <link href=\"/ovirt-engine/api/users/1234/roles\" rel=\"roles\"/> <link href=\"/ovirt-engine/api/users/1234/permissions\" rel=\"permissions\"/> <link href=\"/ovirt-engine/api/users/1234/tags\" rel=\"tags\"/> <department></department> <domain_entry_id>23456</domain_entry_id> <email>[email protected]</email> <last_name>Lastname</last_name> <namespace>*</namespace> <principal>user1</principal> <user_name>user1@domain-authz</user_name> <domain href=\"/ovirt-engine/api/domains/45678\" id=\"45678\"> <name>domain-authz</name> </domain> </user>",
"DELETE /ovirt-engine/api/users/1234",
"POST /ovirt-engine/api/users",
"<user> <user_name>myuser@myextension-authz</user_name> <domain> <name>myextension-authz</name> </domain> </user>",
"<user> <principal>[email protected]</principal> <user_name>[email protected]@myextension-authz</user_name> <domain> <name>myextension-authz</name> </domain> </user>",
"GET /ovirt-engine/api/users",
"<users> <user href=\"/ovirt-engine/api/users/1234\" id=\"1234\"> <name>admin</name> <link href=\"/ovirt-engine/api/users/1234/sshpublickeys\" rel=\"sshpublickeys\"/> <link href=\"/ovirt-engine/api/users/1234/roles\" rel=\"roles\"/> <link href=\"/ovirt-engine/api/users/1234/permissions\" rel=\"permissions\"/> <link href=\"/ovirt-engine/api/users/1234/tags\" rel=\"tags\"/> <domain_entry_id>23456</domain_entry_id> <namespace>*</namespace> <principal>user1</principal> <user_name>user1@domain-authz</user_name> <domain href=\"/ovirt-engine/api/domains/45678\" id=\"45678\"> <name>domain-authz</name> </domain> </user> </users>",
"POST /ovirt-engine/api/vms/123/cancelmigration",
"<action/>",
"POST /ovirt-engine/api/vms/123/detach",
"<action/>",
"POST /ovirt-engine/api/vms/123/export",
"<action> <storage_domain> <name>myexport</name> </storage_domain> <exclusive>true</exclusive> <discard_snapshots>true</discard_snapshots> </action>",
"POST /ovirt-engine/api/vms/123/export",
"<action> <host> <name>myhost</name> </host> <directory>/home/ovirt</directory> <filename>myvm.ova</filename> </action>",
"POST /ovirt-engine/api/vms/123/freezefilesystems",
"<action/>",
"GET /ovirt-engine/api/vms/123?all_content=true",
"GET /vms/{vm:id};next_run",
"GET /vms/{vm:id};next_run=true",
"POST /ovirt-engine/api/vms/123/logon",
"<action/>",
"POST /ovirt-engine/api/vms/123/maintenance",
"<action> <maintenance_enabled>true<maintenance_enabled/> </action>",
"POST /ovirt-engine/api/vms/123/migrate",
"<action> <host id=\"2ab5e1da-b726-4274-bbf7-0a42b16a0fc3\"/> </action>",
"POST /ovirt-engine/api/vms/123/previewsnapshot",
"<action> <disks> <disk id=\"111\"> <image_id>222</image_id> </disk> </disks> <snapshot id=\"456\"/> </action>",
"POST /ovirt-engine/api/vms/123/reboot",
"<action/>",
"DELETE /ovirt-engine/api/vms/123",
"POST /ovirt-engine/api/vms/123/shutdown",
"<action/>",
"POST /ovirt-engine/api/vms/123/start",
"<action/>",
"<action> <vm> <os> <boot> <devices> <device>cdrom</device> </devices> </boot> </os> </vm> </action>",
"POST /ovirt-engine/api/vms/123/stop",
"<action/>",
"POST /ovirt-engine/api/vms/123/suspend",
"<action/>",
"POST /api/vms/123/thawfilesystems",
"<action/>",
"POST /ovirt-engine/api/vms/123/ticket",
"<action> <ticket> <value>abcd12345</value> <expiry>120</expiry> </ticket> </action>",
"POST /ovirt-engine/api/vms/123/graphicsconsoles/456/ticket",
"GET /ovirt-engine/api/vms/123/applications/789",
"<application href=\"/ovirt-engine/api/vms/123/applications/789\" id=\"789\"> <name>ovirt-guest-agent-common-1.0.12-3.el7</name> <vm href=\"/ovirt-engine/api/vms/123\" id=\"123\"/> </application>",
"GET /ovirt-engine/api/vms/123/applications/",
"<applications> <application href=\"/ovirt-engine/api/vms/123/applications/456\" id=\"456\"> <name>kernel-3.10.0-327.36.1.el7</name> <vm href=\"/ovirt-engine/api/vms/123\" id=\"123\"/> </application> <application href=\"/ovirt-engine/api/vms/123/applications/789\" id=\"789\"> <name>ovirt-guest-agent-common-1.0.12-3.el7</name> <vm href=\"/ovirt-engine/api/vms/123\" id=\"123\"/> </application> </applications>",
"<cdrom href=\"...\" id=\"00000000-0000-0000-0000-000000000000\"> <file id=\"mycd.iso\"/> <vm href=\"/ovirt-engine/api/vms/123\" id=\"123\"/> </cdrom>",
"<cdrom href=\"...\" id=\"00000000-0000-0000-0000-000000000000\"> <vm href=\"/ovirt-engine/api/vms/123\" id=\"123\"/> </cdrom>",
"PUT /ovirt-engine/api/vms/123/cdroms/00000000-0000-0000-0000-000000000000",
"<cdrom> <file id=\"mycd.iso\"/> </cdrom>",
"<cdrom> <file id=\"\"/> </cdrom>",
"PUT /ovirt-engine/api/vms/123/cdroms/00000000-0000-0000-0000-000000000000?current=true",
"<cdrom> <file id=\"\"/> </cdrom>",
"GET /ovit-engine/api/vms/123/graphicsconsoles/456?current=true",
"POST /ovirt-engine/api/vms/123/graphicsconsoles/456/remoteviewerconnectionfile",
"<action/>",
"<action> <remote_viewer_connection_file> [virt-viewer] type=spice host=192.168.1.101 port=-1 password=123456789 delete-this-file=1 fullscreen=0 toggle-fullscreen=shift+f11 release-cursor=shift+f12 secure-attention=ctrl+alt+end tls-port=5900 enable-smartcard=0 enable-usb-autoshare=0 usb-filter=null tls-ciphers=DEFAULT host-subject=O=local,CN=example.com ca= </remote_viewer_connection_file> </action>",
"Find the virtual machine: vm = vms_service.list(search='name=myvm')[0] Locate the service that manages the virtual machine, as that is where the locators are defined: vm_service = vms_service.vm_service(vm.id) Find the graphic console of the virtual machine: graphics_consoles_service = vm_service.graphics_consoles_service() graphics_console = graphics_consoles_service.list()[0] Generate the remote viewer connection file: console_service = graphics_consoles_service.console_service(graphics_console.id) remote_viewer_connection_file = console_service.remote_viewer_connection_file() Write the content to file \"/tmp/remote_viewer_connection_file.vv\" path = \"/tmp/remote_viewer_connection_file.vv\" with open(path, \"w\") as f: f.write(remote_viewer_connection_file)",
"#!/bin/sh -ex remote-viewer --ovirt-ca-file=/etc/pki/ovirt-engine/ca.pem /tmp/remote_viewer_connection_file.vv",
"POST /ovirt-engine/api/vms/123/graphicsconsoles/456/ticket",
"<action> <ticket> <value>abcd12345</value> <expiry>120</expiry> </ticket> </action>",
"GET /ovirt-engine/api/vms/123/graphicsconsoles?current=true",
"GET /ovirt-engine/api/vms/123/hostdevices/456",
"<host_device href=\"/ovirt-engine/api/hosts/543/devices/456\" id=\"456\"> <name>pci_0000_04_00_0</name> <capability>pci</capability> <iommu_group>30</iommu_group> <placeholder>true</placeholder> <product id=\"0x13ba\"> <name>GM107GL [Quadro K2200]</name> </product> <vendor id=\"0x10de\"> <name>NVIDIA Corporation</name> </vendor> <host href=\"/ovirt-engine/api/hosts/543\" id=\"543\"/> <parent_device href=\"/ovirt-engine/api/hosts/543/devices/456\" id=\"456\"> <name>pci_0000_00_03_0</name> </parent_device> <vm href=\"/ovirt-engine/api/vms/123\" id=\"123\"/> </host_device>",
"DELETE /ovirt-engine/api/vms/123/hostdevices/456",
"POST /ovirt-engine/api/vms/123/hostdevices",
"<host_device id=\"123\" />",
"DELETE /ovirt-engine/api/vms/123/nics/456",
"PUT /ovirt-engine/api/vms/123/nics/456",
"<nic> <name>mynic</name> <interface>e1000</interface> <vnic_profile id='789'/> </nic>",
"POST /ovirt-engine/api/vms/123/nics",
"<nic> <name>mynic</name> <interface>virtio</interface> <vnic_profile id=\"456\"/> </nic>",
"curl --request POST --header \"Version: 4\" --header \"Content-Type: application/xml\" --header \"Accept: application/xml\" --user \"admin@internal:mypassword\" --cacert /etc/pki/ovirt-engine/ca.pem --data ' <nic> <name>mynic</name> <interface>virtio</interface> <vnic_profile id=\"456\"/> </nic> ' https://myengine.example.com/ovirt-engine/api/vms/123/nics",
"DELETE /ovirt-engine/api/vms/123/numanodes/456",
"PUT /ovirt-engine/api/vms/123/numanodes/456",
"<vm_numa_node> <numa_node_pins> <numa_node_pin> <index>0</index> </numa_node_pin> </numa_node_pins> </vm_numa_node>",
"POST /ovirt-engine/api/vms/c7ecd2dc/numanodes Accept: application/xml Content-type: application/xml",
"<vm_numa_node> <cpu> <cores> <core> <index>0</index> </core> </cores> </cpu> <index>0</index> <memory>1024</memory> </vm_numa_node>",
"POST /ovirt-engine/api/vmpools/123/allocatevm",
"<action/>",
"GET /ovirt-engine/api/vmpools/123",
"<vm_pool id=\"123\"> <actions>...</actions> <name>MyVmPool</name> <description>MyVmPool description</description> <link href=\"/ovirt-engine/api/vmpools/123/permissions\" rel=\"permissions\"/> <max_user_vms>1</max_user_vms> <prestarted_vms>0</prestarted_vms> <size>100</size> <stateful>false</stateful> <type>automatic</type> <use_latest_template_version>false</use_latest_template_version> <cluster id=\"123\"/> <template id=\"123\"/> <vm id=\"123\">...</vm> </vm_pool>",
"DELETE /ovirt-engine/api/vmpools/123",
"PUT /ovirt-engine/api/vmpools/123",
"<vmpool> <name>VM_Pool_B</name> <description>Virtual Machine Pool B</description> <size>3</size> <prestarted_vms>1</size> <max_user_vms>2</size> </vmpool>",
"POST /ovirt-engine/api/vmpools",
"<vmpool> <name>mypool</name> <cluster id=\"123\"/> <template id=\"456\"/> </vmpool>",
"GET /ovirt-engine/api/vmpools",
"<vm_pools> <vm_pool id=\"123\"> </vm_pool> </vm_pools>",
"GET /ovirt-engine/api/vms/123/sessions",
"<sessions> <session href=\"/ovirt-engine/api/vms/123/sessions/456\" id=\"456\"> <console_user>true</console_user> <ip> <address>192.168.122.1</address> </ip> <user href=\"/ovirt-engine/api/users/789\" id=\"789\"/> <vm href=\"/ovirt-engine/api/vms/123\" id=\"123\"/> </session> </sessions>",
"<watchdogs> <watchdog href=\"/ovirt-engine/api/vms/123/watchdogs/00000000-0000-0000-0000-000000000000\" id=\"00000000-0000-0000-0000-000000000000\"> <vm href=\"/ovirt-engine/api/vms/123\" id=\"123\"/> <action>poweroff</action> <model>i6300esb</model> </watchdog> </watchdogs>",
"DELETE /ovirt-engine/api/vms/123/watchdogs/00000000-0000-0000-0000-000000000000",
"PUT /ovirt-engine/api/vms/123/watchdogs <watchdog> <action>reset</action> </watchdog>",
"<watchdog href=\"/ovirt-engine/api/vms/123/watchdogs/00000000-0000-0000-0000-000000000000\" id=\"00000000-0000-0000-0000-000000000000\"> <vm href=\"/ovirt-engine/api/vms/123\" id=\"123\"/> <action>reset</action> <model>i6300esb</model> </watchdog>",
"POST /ovirt-engine/api/vms/123/watchdogs <watchdog> <action>poweroff</action> <model>i6300esb</model> </watchdog>",
"<watchdog href=\"/ovirt-engine/api/vms/123/watchdogs/00000000-0000-0000-0000-000000000000\" id=\"00000000-0000-0000-0000-000000000000\"> <vm href=\"/ovirt-engine/api/vms/123\" id=\"123\"/> <action>poweroff</action> <model>i6300esb</model> </watchdog>",
"<watchdogs> <watchdog href=\"/ovirt-engine/api/vms/123/watchdogs/00000000-0000-0000-0000-000000000000\" id=\"00000000-0000-0000-0000-000000000000\"> <vm href=\"/ovirt-engine/api/vms/123\" id=\"123\"/> <action>poweroff</action> <model>i6300esb</model> </watchdog> </watchdogs>",
"#!/bin/sh -ex url=\"https://engine.example.com/ovirt-engine/api\" user=\"admin@internal\" password=\"...\" curl --verbose --cacert /etc/pki/ovirt-engine/ca.pem --user \"USD{user}:USD{password}\" --request POST --header \"Version: 4\" --header \"Content-Type: application/xml\" --header \"Accept: application/xml\" --data ' <vm> <name>myvm</name> <template> <name>Blank</name> </template> <cluster> <name>mycluster</name> </cluster> </vm> ' \"USD{url}/vms\"",
"#!/bin/sh -ex url=\"https://engine.example.com/ovirt-engine/api\" user=\"admin@internal\" password=\"...\" curl --verbose --cacert /etc/pki/ovirt-engine/ca.pem --user \"USD{user}:USD{password}\" --request POST --header \"Content-Type: application/xml\" --header \"Accept: application/xml\" --data ' <vm> <name>myvm</name> <snapshots> <snapshot id=\"266742a5-6a65-483c-816d-d2ce49746680\"/> </snapshots> <cluster> <name>mycluster</name> </cluster> </vm> ' \"USD{url}/vms\"",
"<vm> <disk_attachments> <disk_attachment> <disk id=\"8d4bd566-6c86-4592-a4a7-912dbf93c298\"> <storage_domains> <storage_domain id=\"9cb6cb0a-cf1d-41c2-92ca-5a6d665649c9\"/> </storage_domains> </disk> <disk_attachment> </disk_attachments> </vm>",
"<vm> <disk_attachments> <disk_attachment> <disk> <image_id>8d4bd566-6c86-4592-a4a7-912dbf93c298</image_id> <storage_domains> <storage_domain id=\"9cb6cb0a-cf1d-41c2-92ca-5a6d665649c9\"/> </storage_domains> </disk> <disk_attachment> </disk_attachments> </vm>",
"<vm> <name>myvm</name> <description>My Desktop Virtual Machine</description> <type>desktop</type> <memory>2147483648</memory> </vm>",
"<vm> <os> <boot dev=\"cdrom\"/> </os> </vm>",
"<vm> <os> <boot> <devices> <device>cdrom</device> </devices> </boot> </os> </vm>",
"POST /ovirt-engine/vms?clone=true",
"<vm> <name>myvm<name> <template> <name>mytemplate<name> </template> <cluster> <name>mycluster<name> </cluster> </vm>",
"POST /ovirt-engine/api/vms?clone_permissions=true",
"<vm> <name>myvm<name> <template> <name>mytemplate<name> </template> <cluster> <name>mycluster<name> </cluster> </vm>",
"GET /ovirt-engine/api/vms?all_content=true",
"POST /ovirt-engine/api/networks/456/vnicprofiles",
"<vnic_profile id=\"123\"> <name>new_vNIC_name</name> <pass_through> <mode>disabled</mode> </pass_through> <port_mirroring>false</port_mirroring> </vnic_profile>",
"<vnic_profile href=\"/ovirt-engine/api/vnicprofiles/123\" id=\"123\"> <name>new_vNIC_name</name> <link href=\"/ovirt-engine/api/vnicprofiles/123/permissions\" rel=\"permissions\"/> <pass_through> <mode>disabled</mode> </pass_through> <port_mirroring>false</port_mirroring> <network href=\"/ovirt-engine/api/networks/456\" id=\"456\"/> <network_filter href=\"/ovirt-engine/api/networkfilters/789\" id=\"789\"/> </vnic_profile>",
"<vnic_profile> <name>no_network_filter</name> <network_filter/> </vnic_profile>",
"<vnic_profile> <name>user_choice_network_filter</name> <network_filter id= \"0000001b-001b-001b-001b-0000000001d5\"/> </vnic_profile>"
]
| https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/rest_api_guide/services |
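The image transfer requests listed above (create the transfer, upload the bytes, finalize) can be chained into a single upload pass. The following is a minimal sketch only, not the documented procedure: the engine host, credentials, disk ID 123, transfer ID 123, and the daemon_fqdn URL are placeholder assumptions carried over from the examples above.
# 1. Create an upload transfer for disk 123; the response contains the transfer ID and transfer_url
curl --cacert /etc/pki/ovirt-engine/ca.pem --user admin@internal:mypassword --request POST --header "Content-Type: application/xml" --header "Accept: application/xml" --data '<image_transfer> <disk id="123"/> <direction>upload</direction> </image_transfer>' https://engine.example.com/ovirt-engine/api/imagetransfers
# 2. Send the image bytes to the transfer_url returned in step 1
curl --cacert /etc/pki/ovirt-engine/ca.pem --upload-file /path/to/image.qcow2 -X PUT https://daemon_fqdn:54322/images/41c732d4-2210-4e7b-9e5c-4e2805baadbb
# 3. Finalize the transfer once the upload completes
curl --cacert /etc/pki/ovirt-engine/ca.pem --user admin@internal:mypassword --request POST --header "Content-Type: application/xml" --header "Accept: application/xml" --data '<action/>' https://engine.example.com/ovirt-engine/api/imagetransfers/123/finalize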
4.10. Suspending Activity on a File System | 4.10. Suspending Activity on a File System You can suspend write activity to a file system by using the dmsetup suspend command. Suspending write activity allows hardware-based device snapshots to be used to capture the file system in a consistent state. The dmsetup resume command ends the suspension. Usage Start Suspension End Suspension MountPoint Specifies the file system. Examples This example suspends writes to file system /mygfs2. This example ends suspension of writes to file system /mygfs2. | [
"dmsetup suspend MountPoint",
"dmsetup resume MountPoint",
"dmsetup suspend /mygfs2",
"dmsetup resume /mygfs2"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/global_file_system_2/s1-manage-suspendfs |
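The dmsetup examples above are normally used around a snapshot operation: suspend the file system, capture the snapshot, then resume. A minimal sketch, assuming a GFS2 file system mounted at /mygfs2 whose backing logical volume is /dev/vg0/mygfs2 (both names are placeholders):
# Quiesce writes so the snapshot captures a consistent file system
dmsetup suspend /mygfs2
# Take the snapshot while writes are suspended (an LVM snapshot is shown as one option)
lvcreate --size 1G --snapshot --name mygfs2_snap /dev/vg0/mygfs2
# Allow write activity to continue
dmsetup resume /mygfs2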
15.2.2.3. Unresolved Dependency | 15.2.2.3. Unresolved Dependency RPM packages can depend on other packages, which means that they require other packages to be installed to run properly. If you try to install a package that has an unresolved dependency, output similar to the following is displayed: If you are installing a package from the Red Hat Enterprise Linux CD-ROM set, it usually suggests the package(s) needed to resolve the dependency. Find the suggested package(s) on the Red Hat Enterprise Linux CD-ROMs or from the Red Hat FTP site (or mirror), and add them to the command: If installation of both packages is successful, output similar to the following is displayed: If it does not suggest a package to resolve the dependency, you can try the --redhatprovides option to determine which package contains the required file. You need the rpmdb-redhat package installed to use this option. If the package that contains bar.so.2 is in the installed database from the rpmdb-redhat package, the name of the package is displayed: To force the installation anyway (which is not recommended because the package may not run correctly), use the --nodeps option. | [
"error: Failed dependencies: bar.so.2 is needed by foo-1.0-1 Suggested resolutions: bar-2.0.20-3.i386.rpm",
"-ivh foo-1.0-1.i386.rpm bar-2.0.20-3.i386.rpm",
"Preparing... ########################################### [100%] 1:foo ########################################### [ 50%] 2:bar ########################################### [100%]",
"-q --redhatprovides bar.so.2",
"bar-2.0.20-3.i386.rpm"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/Installing-Unresolved_Dependency |
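Taken together, a typical resolution session might look like the sketch below, reusing the example package names from this section; the exact file names depend on the packages involved.

```bash
# Initial attempt fails and names the missing dependency (bar.so.2).
rpm -ivh foo-1.0-1.i386.rpm

# Install the dependent package together with the suggested package.
rpm -ivh foo-1.0-1.i386.rpm bar-2.0.20-3.i386.rpm

# If no package is suggested, query the rpmdb-redhat database for the
# package that provides the missing file (requires the rpmdb-redhat package).
rpm -q --redhatprovides bar.so.2

# Last resort, not recommended: skip the dependency check entirely.
rpm -ivh --nodeps foo-1.0-1.i386.rpm
```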
Chapter 4. Management of Ceph File System volumes, sub-volume groups, and sub-volumes | Chapter 4. Management of Ceph File System volumes, sub-volume groups, and sub-volumes As a storage administrator, you can use Red Hat's Ceph Container Storage Interface (CSI) to manage Ceph File System (CephFS) exports. This also allows you to use other services, such as OpenStack's file system service (Manila) by having a common command-line interface to interact with. The volumes module for the Ceph Manager daemon ( ceph-mgr ) implements the ability to export Ceph File Systems (CephFS). The Ceph Manager volumes module implements the following file system export abstractions: CephFS volumes CephFS subvolume groups CephFS subvolumes 4.1. Ceph File System volumes As a storage administrator, you can create, list, and remove Ceph File System (CephFS) volumes. CephFS volumes are an abstraction for Ceph File Systems. This section describes how to: Create a Ceph file system volume. List Ceph file system volumes. View information about a Ceph file system volume. Remove a Ceph file system volume. 4.1.1. Creating a Ceph file system volume Ceph Orchestrator is a module for Ceph Manager that creates a Metadata Server (MDS) for the Ceph File System (CephFS). This section describes how to create a CephFS volume. Note This creates the Ceph File System, along with the data and metadata pools. Prerequisites A working Red Hat Ceph Storage cluster with Ceph File System deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. Procedure Create a CephFS volume on the monitor node: Syntax Example 4.1.2. Listing Ceph file system volumes This section describes the step to list the Ceph File system (CephFS) volumes. Prerequisites A working Red Hat Ceph Storage cluster with Ceph File System deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. A CephFS volume. Procedure List the CephFS volume: Example 4.1.3. Viewing information about a Ceph file system volume You can list basic details about a Ceph File System (CephFS) volume, such as attributes of data and metadata pools of the CephFS volume, pending subvolumes deletion count, and the like. Prerequisites A working Red Hat Ceph Storage cluster with Ceph File System deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. A CephFS volume created. Procedure View information about a CephFS volume: Syntax Example The output of the ceph fs volume info command includes: mon_addrs : List of monitor addresses. pending_subvolume_deletions : Number of subvolumes pending deletion. pools : Attributes of data and metadata pools. avail : The amount of free space available in bytes. name : Name of the pool. used : The amount of storage consumed in bytes. used_size : Current used size of the CephFS volume in bytes. 4.1.4. Removing a Ceph file system volume Ceph Orchestrator is a module for Ceph Manager that removes the Metadata Server (MDS) for the Ceph File System (CephFS). This section shows how to remove the Ceph File System (CephFS) volume. Prerequisites A working Red Hat Ceph Storage cluster with Ceph File System deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. A CephFS volume. Procedure If the mon_allow_pool_delete option is not set to true , then set it to true before removing the CephFS volume: Example Remove the CephFS volume: Syntax Example 4.2. 
Ceph File System subvolume groups As a storage administrator, you can create, list, fetch absolute path, and remove Ceph File System (CephFS) subvolume groups. CephFS subvolume groups are abstractions at a directory level which effects policies, for example, file layouts, across a set of subvolumes. Starting with Red Hat Ceph Storage 5.0, the subvolume group snapshot feature is not supported. You can only list and remove the existing snapshots of these subvolume groups. This section describes how to: Create a file system subvolume group. Set and manage quotas on a file system subvolume group. List file system subvolume groups. Fetch absolute path of a file system subvolume group. List snapshots of a file system subvolume group. Remove snapshot of a file system subvolume group. Remove a file system subvolume group. 4.2.1. Creating a file system subvolume group This section describes how to create a Ceph File System (CephFS) subvolume group. Note When creating a subvolume group, you can specify its data pool layout, uid, gid, and file mode in octal numerals. By default, the subvolume group is created with an octal file mode '755', uid '0', gid '0', and data pool layout of its parent directory. Note See Setting and managing quotas on a file system subvolume group to set quotas while creating a subvolume group. Prerequisites A working Red Hat Ceph Storage cluster with a Ceph File System deployed. At a minimum read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. Procedure Create a CephFS subvolume group: Syntax Example The command succeeds even if the subvolume group already exists. 4.2.2. Setting and managing quotas on a file system subvolume group This section describes how to set and manage quotas on a Ceph File System (CephFS) subvolume group. Prerequisites A working Red Hat Ceph Storage cluster with a Ceph File System deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. Procedure Set quotas while creating a subvolume group by providing size in bytes: Syntax Example Resize a subvolume group: Syntax Example Fetch the metadata of a subvolume group: Syntax Example 4.2.3. Listing file system subvolume groups This section describes the step to list the Ceph File System (CephFS) subvolume groups. Prerequisites A working Red Hat Ceph Storage cluster with Ceph File System deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. A CephFS subvolume group. Procedure List the CephFS subvolume groups: Syntax Example 4.2.4. Fetching absolute path of a file system subvolume group This section shows how to fetch the absolute path of a Ceph File System (CephFS) subvolume group. Prerequisites A working Red Hat Ceph Storage cluster with Ceph File System deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. A CephFS subvolume group. Procedure Fetch the absolute path of the CephFS subvolume group: Syntax Example 4.2.5. Listing snapshots of a file system subvolume group This section provides the steps to list the snapshots of a Ceph File System (CephFS) subvolume group. Prerequisites A working Red Hat Ceph Storage cluster with Ceph File System deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. A CephFS subvolume group. Snapshots of the subvolume group. Procedure List the snapshots of a CephFS subvolume group: Syntax Example 4.2.6. 
Removing snapshot of a file system subvolume group This section provides the step to remove snapshots of a Ceph File System (CephFS) subvolume group. Note Using the --force flag allows the command to succeed that would otherwise fail if the snapshot did not exist. Prerequisites A working Red Hat Ceph Storage cluster with Ceph File System deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. A Ceph File System volume. A snapshot of the subvolume group. Procedure Remove the snapshot of the CephFS subvolume group: Syntax Example 4.2.7. Removing a file system subvolume group This section shows how to remove the Ceph File System (CephFS) subvolume group. Note The removal of a subvolume group fails if it is not empty or non-existent. The --force flag allows the non-existent subvolume group to be removed. Prerequisites A working Red Hat Ceph Storage cluster with Ceph File System deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. A CephFS subvolume group. Procedure Remove the CephFS subvolume group: Syntax Example 4.3. Ceph File System subvolumes As a storage administrator, you can create, list, fetch absolute path, fetch metadata, and remove Ceph File System (CephFS) subvolumes. Additionally, you can also create, list, and remove snapshots of these subvolumes. CephFS subvolumes are an abstraction for independent Ceph File Systems directory trees. This section describes how to: Create a file system subvolume. List file system subvolume. Resizing a file system subvolume. Fetch absolute path of a file system subvolume. Fetch metadata of a file system subvolume. Create snapshot of a file system subvolume. Cloning subvolumes from snapshots. List snapshots of a file system subvolume. Fetching metadata of the snapshots of a file system subvolume. Remove a file system subvolume. Remove snapshot of a file system subvolume. 4.3.1. Creating a file system subvolume This section describes how to create a Ceph File System (CephFS) subvolume. Note When creating a subvolume, you can specify its subvolume group, data pool layout, uid, gid, file mode in octal numerals, and size in bytes. The subvolume can be created in a separate RADOS namespace by specifying the --namespace-isolated option. By default, a subvolume is created within the default subvolume group, and with an octal file mode 755 , uid of its subvolume group, gid of its subvolume group, data pool layout of its parent directory, and no size limit. Prerequisites A working Red Hat Ceph Storage cluster with a Ceph File System deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. Procedure Create a CephFS subvolume: Note The --group_name parameter is optional. If the subvolume needs to be created within a subvolume group, then --group_name needs to be passed in the command. Syntax Example The command succeeds even if the subvolume already exists. 4.3.2. Listing file system subvolume This section describes the step to list the Ceph File System (CephFS) subvolume. Prerequisites A working Red Hat Ceph Storage cluster with Ceph File System deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. A CephFS subvolume. Procedure List the CephFS subvolume: Note The --group_name parameter is optional. If the subvolume needs to be created within a subvolume group, then --group_name needs to be passed in the command. Syntax Example 4.3.3. 
Resizing a file system subvolume This section describes the step to resize the Ceph File System (CephFS) subvolume. Note The ceph fs subvolume resize command resizes the subvolume quota using the size specified by new_size . The --no_shrink flag prevents the subvolume from shrinking below the currently used size of the subvolume. The subvolume can be resized to an infinite size by passing inf or infinite as the new_size . Prerequisites A working Red Hat Ceph Storage cluster with Ceph File System deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. A CephFS subvolume. Procedure Resize a CephFS subvolume: Note The --group_name parameter is optional. If the subvolume needs to be created within a subvolume group, then --group_name needs to be passed in the command. Syntax Example 4.3.4. Fetching absolute path of a file system subvolume This section shows how to fetch the absolute path of a Ceph File System (CephFS) subvolume. Prerequisites A working Red Hat Ceph Storage cluster with Ceph File System deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. A CephFS subvolume. Procedure Fetch the absolute path of the CephFS subvolume: Note The --group_name parameter is optional. If the subvolume needs to be created within a subvolume group, then --group_name needs to be passed in the command. Syntax Example 4.3.5. Fetching metadata of a file system subvolume This section shows how to fetch metadata of a Ceph File System (CephFS) subvolume. Prerequisites A working Red Hat Ceph Storage cluster with Ceph File System deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. A CephFS subvolume. Procedure Fetch the metadata of a CephFS subvolume: Note The --group_name parameter is optional. If the subvolume needs to be fetched from within a subvolume group, then --group_name needs to be passed in the command. Syntax Example Example output The output format is JSON and contains the following fields: atime : access time of subvolume path in the format "YYYY-MM-DD HH:MM:SS". bytes_pcent : quota used in percentage if quota is set, else displays "undefined". bytes_quota : quota size in bytes if quota is set, else displays "infinite". bytes_used : current used size of the subvolume in bytes. created_at : time of creation of subvolume in the format "YYYY-MM-DD HH:MM:SS". ctime : change time of subvolume path in the format "YYYY-MM-DD HH:MM:SS". data_pool : data pool the subvolume belongs to. features : features supported by the subvolume, such as , "snapshot-clone", "snapshot-autoprotect", or "snapshot-retention". flavor : subvolume version, either 1 for version one or 2 for version two. gid : group ID of subvolume path. mode : mode of subvolume path. mon_addrs : list of monitor addresses. mtime : modification time of subvolume path in the format "YYYY-MM-DD HH:MM:SS". path : absolute path of a subvolume. pool_namespace : RADOS namespace of the subvolume. state : current state of the subvolume, such as, "complete" or "snapshot-retained". type : subvolume type indicating whether it is a clone or subvolume. uid : user ID of subvolume path. 4.3.6. Creating snapshot of a file system subvolume This section shows how to create snapshots of a Ceph File System (CephFS) subvolume. Prerequisites A working Red Hat Ceph Storage cluster with Ceph File System deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. 
A CephFS subvolume. In addition to read ( r ) and write ( w ) capabilities, clients also require s flag on a directory path within the file system. Procedure Verify that the s flag is set on the directory: Note The --group_name parameter is optional. If the subvolume needs to be created within a subvolume group, then --group_name needs to be passed in the command. Syntax Example 1 2 In the example, client.0 can create or delete snapshots in the bar directory of file system cephfs_a . Create a snapshot of the Ceph File System subvolume: Syntax Example 4.3.7. Cloning subvolumes from snapshots Subvolumes can be created by cloning subvolume snapshots. It is an asynchronous operation involving copying data from a snapshot to a subvolume. Note Cloning is inefficient for very large data sets. Prerequisites A working Red Hat Ceph Storage cluster with Ceph File System deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. To create or delete snapshots, in addition to read and write capability, clients require s flag on a directory path within the filesystem. Syntax In the following example, client.0 can create or delete snapshots in the bar directory of filesystem cephfs_a . Example Procedure Create a Ceph File System (CephFS) volume: Syntax Example This creates the CephFS file system, its data and metadata pools. Create a subvolume group. By default, the subvolume group is created with an octal file mode 755 , and data pool layout of its parent directory. Note The --group_name parameter is optional. If the subvolume needs to be created within a subvolume group, then --group_name needs to be passed in the command. Syntax Example Create a subvolume. By default, a subvolume is created within the default subvolume group, and with an octal file mode 755 , uid of its subvolume group, gid of its subvolume group, data pool layout of its parent directory, and no size limit. Syntax Example Create a snapshot of a subvolume: Syntax Example Initiate a clone operation: Note By default, cloned subvolumes are created in the default group. If the source subvolume and the target clone are in the default group, run the following command: Syntax Example If the source subvolume is in the non-default group, then specify the source subvolume group in the following command: Syntax Example If the target clone is to a non-default group, then specify the target group in the following command: Syntax Example Check the status of the clone operation: Syntax Example Additional Resources See the Managing Ceph users section in the Red Hat Ceph Storage Administration Guide . 4.3.8. Listing snapshots of a file system subvolume This section provides the step to list the snapshots of a Ceph File system (CephFS) subvolume. Prerequisites A working Red Hat Ceph Storage cluster with Ceph File System deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. A CephFS subvolume. Snapshots of the subvolume. Procedure List the snapshots of a CephFS subvolume: Note The --group_name parameter is optional. If the subvolume needs to be listed from within a subvolume group, then --group_name needs to be passed in the command. Syntax Example 4.3.9. Fetching metadata of the snapshots of a file system subvolume This section provides the step to fetch the metadata of the snapshots of a Ceph File System (CephFS) subvolume. Prerequisites A working Red Hat Ceph Storage cluster with CephFS deployed. At least read access on the Ceph Monitor. 
Read and write capability on the Ceph Manager nodes. A CephFS subvolume. Snapshots of the subvolume. Procedure Fetch the metadata of the snapshots of a CephFS subvolume: Note The --group_name parameter is optional. If the subvolume needs to be fetched from within a subvolume group, then --group_name needs to be passed in the command. Syntax Example Example output The output format is JSON and contains the following fields: created_at : time of creation of snapshot in the format "YYYY-MM-DD HH:MM:SS:ffffff". data_pool : data pool the snapshot belongs to. has_pending_clones : "yes" if snapshot clone is in progress otherwise "no". size : snapshot size in bytes. 4.3.10. Removing a file system subvolume This section describes the step to remove the Ceph File System (CephFS) subvolume. Note The ceph fs subvolume rm command removes the subvolume and its contents in two steps. First, it moves the subvolume to a trash folder, and then asynchronously purges its contents. A subvolume can be removed retaining existing snapshots of the subvolume using the --retain-snapshots option. If snapshots are retained, the subvolume is considered empty for all operations not involving the retained snapshots. Retained snapshots can be used as a clone source to recreate the subvolume, or cloned to a newer subvolume. Prerequisites A working Red Hat Ceph Storage cluster with Ceph File System deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. A CephFS subvolume. Procedure Remove a CephFS subvolume: Note The --group_name parameter is optional. If the subvolume needs to be removed from within a subvolume group, then --group_name needs to be passed in the command. Syntax Example To recreate a subvolume from a retained snapshot: Syntax NEW_SUBVOLUME can either be the same subvolume which was deleted earlier or clone it to a new subvolume. Example 4.3.11. Removing snapshot of a file system subvolume This section provides the step to remove snapshots of a Ceph File System (CephFS) subvolume group. Note Using the --force flag allows the command to succeed that would otherwise fail if the snapshot did not exist. Prerequisites A working Red Hat Ceph Storage cluster with Ceph File System deployed. At least read access on the Ceph Monitor. Read and write capability on the Ceph Manager nodes. A Ceph File System volume. A snapshot of the subvolume group. Procedure Remove the snapshot of the CephFS subvolume: Note The --group_name parameter is optional. If the subvolume needs to be removed from within a subvolume group, then --group_name needs to be passed in the command. Syntax Example Additional Resources See the Managing Ceph users section in the Red Hat Ceph Storage Administration Guide . 4.4. Metadata information on Ceph File System subvolumes As a storage administrator, you can set, get, list, and remove metadata information of Ceph File System (CephFS) subvolumes. The custom metadata is for users to store their metadata in subvolumes. Users can store the key-value pairs similar to xattr in a Ceph File System. This section describes how to: Setting custom metadata on the file system subvolume Getting custom metadata on the file system subvolume Listing custom metadata on the file system subvolume Removing custom metadata from the file system subvolume 4.4.1. Setting custom metadata on the file system subvolume You can set custom metadata on the file system subvolume as a key-value pair. Note If the key_name already exists then the old value is replaced by the new value. 
Note The KEY_NAME and VALUE should be a string of ASCII characters as specified in python's string.printable . The KEY_NAME is case-insensitive and is always stored in lower case. Important Custom metadata on a subvolume is not preserved when snapshotting the subvolume, and hence, is also not preserved when cloning the subvolume snapshot. Prerequisites A running Red Hat Ceph Storage cluster. A Ceph File System (CephFS), CephFS volume, subvolume group, and subvolume created. Procedure Set the metadata on the CephFS subvolume: Syntax Example Optional: Set the custom metadata with a space in the KEY_NAME : Example This creates another metadata with KEY_NAME as test meta for the VALUE cluster . Optional: You can also set the same metadata with a different value: Example 4.4.2. Getting custom metadata on the file system subvolume You can get the custom metadata, the key-value pairs, of a Ceph File System (CephFS) in a volume, and optionally, in a specific subvolume group. Prerequisites A running Red Hat Ceph Storage cluster. A CephFS volume, subvolume group, and subvolume created. A custom metadata created on the CephFS subvolume. Procedure Get the metadata on the CephFS subvolume: Syntax Example 4.4.3. Listing custom metadata on the file system subvolume You can list the custom metadata associated with the key of a Ceph File System (CephFS) in a volume, and optionally, in a specific subvolume group. Prerequisites A running Red Hat Ceph Storage cluster. A CephFS volume, subvolume group, and subvolume created. A custom metadata created on the CephFS subvolume. Procedure List the metadata on the CephFS subvolume: Syntax Example 4.4.4. Removing custom metadata from the file system subvolume You can remove the custom metadata, the key-value pairs, of a Ceph File System (CephFS) in a volume, and optionally, in a specific subvolume group. Prerequisites A running Red Hat Ceph Storage cluster. A CephFS volume, subvolume group, and subvolume created. A custom metadata created on the CephFS subvolume. Procedure Remove the custom metadata on the CephFS subvolume: Syntax Example List the metadata: Example | [
"ceph fs volume create VOLUME_NAME",
"ceph fs volume create cephfs",
"ceph fs volume ls",
"ceph fs volume info VOLUME_NAME",
"ceph fs volume info cephfs { \"mon_addrs\": [ \"192.168.1.7:40977\", ], \"pending_subvolume_deletions\": 0, \"pools\": { \"data\": [ { \"avail\": 106288709632, \"name\": \"cephfs.cephfs.data\", \"used\": 4096 } ], \"metadata\": [ { \"avail\": 106288709632, \"name\": \"cephfs.cephfs.meta\", \"used\": 155648 } ] }, \"used_size\": 0 }",
"ceph config set mon mon_allow_pool_delete true",
"ceph fs volume rm VOLUME_NAME [--yes-i-really-mean-it]",
"ceph fs volume rm cephfs --yes-i-really-mean-it",
"ceph fs subvolumegroup create VOLUME_NAME GROUP_NAME [--pool_layout DATA_POOL_NAME --uid UID --gid GID --mode OCTAL_MODE ]",
"ceph fs subvolumegroup create cephfs subgroup0",
"ceph fs subvolumegroup create VOLUME_NAME GROUP_NAME [--size SIZE_IN_BYTES ] [--pool_layout DATA_POOL_NAME ] [--uid UID ] [--gid GID ] [--mode OCTAL_MODE ]",
"ceph fs subvolumegroup create cephfs subvolgroup_2 10737418240",
"ceph fs subvolumegroup resize VOLUME_NAME GROUP_NAME new_size [--no_shrink]",
"ceph fs subvolumegroup resize cephfs subvolgroup_2 20737418240 [ { \"bytes_used\": 10768679044 }, { \"bytes_quota\": 20737418240 }, { \"bytes_pcent\": \"51.93\" } ]",
"ceph fs subvolumegroup info VOLUME_NAME GROUP_NAME",
"ceph fs subvolumegroup info cephfs subvolgroup_2 { \"atime\": \"2022-10-05 18:00:39\", \"bytes_pcent\": \"51.85\", \"bytes_quota\": 20768679043, \"bytes_used\": 10768679044, \"created_at\": \"2022-10-05 18:00:39\", \"ctime\": \"2022-10-05 18:21:26\", \"data_pool\": \"cephfs.cephfs.data\", \"gid\": 0, \"mode\": 16877, \"mon_addrs\": [ \"60.221.178.236:1221\", \"205.64.75.112:1221\", \"20.209.241.242:1221\" ], \"mtime\": \"2022-10-05 18:01:25\", \"uid\": 0 }",
"ceph fs subvolumegroup ls VOLUME_NAME",
"ceph fs subvolumegroup ls cephfs",
"ceph fs subvolumegroup getpath VOLUME_NAME GROUP_NAME",
"ceph fs subvolumegroup getpath cephfs subgroup0",
"ceph fs subvolumegroup snapshot ls VOLUME_NAME GROUP_NAME",
"ceph fs subvolumegroup snapshot ls cephfs subgroup0",
"ceph fs subvolumegroup snapshot rm VOLUME_NAME GROUP_NAME SNAP_NAME [--force]",
"ceph fs subvolumegroup snapshot rm cephfs subgroup0 snap0 --force",
"ceph fs subvolumegroup rm VOLUME_NAME GROUP_NAME [--force]",
"ceph fs subvolumegroup rm cephfs subgroup0 --force",
"ceph fs subvolume create VOLUME_NAME SUBVOLUME_NAME [--size SIZE_IN_BYTES --group_name SUBVOLUME_GROUP_NAME --pool_layout DATA_POOL_NAME --uid _UID --gid GID --mode OCTAL_MODE ] [--namespace-isolated]",
"ceph fs subvolume create cephfs sub0 --group_name subgroup0 --namespace-isolated",
"ceph fs subvolume ls VOLUME_NAME [--group_name SUBVOLUME_GROUP_NAME ]",
"ceph fs subvolume ls cephfs --group_name subgroup0",
"ceph fs subvolume resize VOLUME_NAME SUBVOLUME_NAME NEW_SIZE [--group_name SUBVOLUME_GROUP_NAME ] [--no_shrink]",
"ceph fs subvolume resize cephfs sub0 1024000000 --group_name subgroup0 --no_shrink",
"ceph fs subvolume getpath VOLUME_NAME SUBVOLUME_NAME [--group_name _SUBVOLUME_GROUP_NAME ]",
"ceph fs subvolume getpath cephfs sub0 --group_name subgroup0",
"ceph fs subvolume info VOLUME_NAME SUBVOLUME_NAME [--group_name SUBVOLUME_GROUP_NAME ]",
"ceph fs subvolume info cephfs sub0 --group_name subgroup0",
"ceph fs subvolume info cephfs sub0 { \"atime\": \"2023-07-14 08:52:46\", \"bytes_pcent\": \"0.00\", \"bytes_quota\": 1024000000, \"bytes_used\": 0, \"created_at\": \"2023-07-14 08:52:46\", \"ctime\": \"2023-07-14 08:53:54\", \"data_pool\": \"cephfs.cephfs.data\", \"features\": [ \"snapshot-clone\", \"snapshot-autoprotect\", \"snapshot-retention\" ], \"flavor\": \"2\", \"gid\": 0, \"mode\": 16877, \"mon_addrs\": [ \"10.0.208.172:6789\", \"10.0.211.197:6789\", \"10.0.209.212:6789\" ], \"mtime\": \"2023-07-14 08:52:46\", \"path\": \"/volumes/_nogroup/sub0/834c5cbc-f5db-4481-80a3-aca92ff0e7f3\", \"pool_namespace\": \"\", \"state\": \"complete\", \"type\": \"subvolume\", \"uid\": 0 }",
"ceph auth get CLIENT_NAME",
"ceph auth get client.0 [client.0] key = AQAz7EVWygILFRAAdIcuJ12opU/JKyfFmxhuaw== caps mds = \"allow rw, allow rws path=/bar\" 1 caps mon = \"allow r\" caps osd = \"allow rw tag cephfs data=cephfs_a\" 2",
"ceph fs subvolume snapshot create VOLUME_NAME SUBVOLUME_NAME SNAP_NAME [--group_name GROUP_NAME ]",
"ceph fs subvolume snapshot create cephfs sub0 snap0 --group_name subgroup0",
"CLIENT_NAME key = AQAz7EVWygILFRAAdIcuJ12opU/JKyfFmxhuaw== caps mds = allow rw, allow rws path= DIRECTORY_PATH caps mon = allow r caps osd = allow rw tag cephfs data= DIRECTORY_NAME",
"[client.0] key = AQAz7EVWygILFRAAdIcuJ12opU/JKyfFmxhuaw== caps mds = \"allow rw, allow rws path=/bar\" caps mon = \"allow r\" caps osd = \"allow rw tag cephfs data=cephfs_a\"",
"ceph fs volume create VOLUME_NAME",
"ceph fs volume create cephfs",
"ceph fs subvolumegroup create VOLUME_NAME GROUP_NAME [--pool_layout DATA_POOL_NAME --uid UID --gid GID --mode OCTAL_MODE ]",
"ceph fs subvolumegroup create cephfs subgroup0",
"ceph fs subvolume create VOLUME_NAME SUBVOLUME_NAME [--size SIZE_IN_BYTES --group_name SUBVOLUME_GROUP_NAME --pool_layout DATA_POOL_NAME --uid _UID --gid GID --mode OCTAL_MODE ]",
"ceph fs subvolume create cephfs sub0 --group_name subgroup0",
"ceph fs subvolume snapshot create VOLUME_NAME _SUBVOLUME_NAME SNAP_NAME [--group_name SUBVOLUME_GROUP_NAME ]",
"ceph fs subvolume snapshot create cephfs sub0 snap0 --group_name subgroup0",
"ceph fs subvolume snapshot clone VOLUME_NAME SUBVOLUME_NAME SNAP_NAME TARGET_CLONE_NAME",
"ceph fs subvolume snapshot clone cephfs sub0 snap0 clone0",
"ceph fs subvolume snapshot clone VOLUME_NAME SUBVOLUME_NAME SNAP_NAME TARGET_CLONE_NAME --group_name SUBVOLUME_GROUP_NAME",
"ceph fs subvolume snapshot clone cephfs sub0 snap0 clone0 --group_name subgroup0",
"ceph fs subvolume snapshot clone VOLUME_NAME SUBVOLUME_NAME SNAP_NAME TARGET_CLONE_NAME --target_group_name SUBVOLUME_GROUP_NAME",
"ceph fs subvolume snapshot clone cephfs sub0 snap0 clone0 --target_group_name subgroup1",
"ceph fs clone status VOLUME_NAME CLONE_NAME [--group_name TARGET_GROUP_NAME ]",
"ceph fs clone status cephfs clone0 --group_name subgroup1 { \"status\": { \"state\": \"complete\" } }",
"ceph fs subvolume snapshot ls VOLUME_NAME SUBVOLUME_NAME [--group_name SUBVOLUME_GROUP_NAME ]",
"ceph fs subvolume snapshot ls cephfs sub0 --group_name subgroup0",
"ceph fs subvolume snapshot info VOLUME_NAME SUBVOLUME_NAME SNAP_NAME [--group_name SUBVOLUME_GROUP_NAME ]",
"ceph fs subvolume snapshot info cephfs sub0 snap0 --group_name subgroup0",
"{ \"created_at\": \"2022-05-09 06:18:47.330682\", \"data_pool\": \"cephfs_data\", \"has_pending_clones\": \"no\", \"size\": 0 }",
"ceph fs subvolume rm VOLUME_NAME SUBVOLUME_NAME [--group_name SUBVOLUME_GROUP_NAME ] [--force] [--retain-snapshots]",
"ceph fs subvolume rm cephfs sub0 --group_name subgroup0 --retain-snapshots",
"ceph fs subvolume snapshot clone VOLUME_NAME DELETED_SUBVOLUME RETAINED_SNAPSHOT NEW_SUBVOLUME --group_name SUBVOLUME_GROUP_NAME --target_group_name SUBVOLUME_TARGET_GROUP_NAME",
"ceph fs subvolume snapshot clone cephfs sub0 snap0 sub1 --group_name subgroup0 --target_group_name subgroup0",
"ceph fs subvolume snapshot rm VOLUME_NAME SUBVOLUME_NAME SNAP_NAME [--group_name GROUP_NAME --force]",
"ceph fs subvolume snapshot rm cephfs sub0 snap0 --group_name subgroup0 --force",
"ceph fs subvolume metadata set VOLUME_NAME SUBVOLUME_NAME KEY_NAME VALUE [--group_name SUBVOLUME_GROUP_NAME ]",
"ceph fs subvolume metadata set cephfs sub0 test_meta cluster --group_name subgroup0",
"ceph fs subvolume metadata set cephfs sub0 \"test meta\" cluster --group_name subgroup0",
"ceph fs subvolume metadata set cephfs sub0 \"test_meta\" cluster2 --group_name subgroup0",
"ceph fs subvolume metadata get VOLUME_NAME SUBVOLUME_NAME KEY_NAME [--group_name SUBVOLUME_GROUP_NAME ]",
"ceph fs subvolume metadata get cephfs sub0 test_meta --group_name subgroup0 cluster",
"ceph fs subvolume metadata ls VOLUME_NAME SUBVOLUME_NAME [--group_name SUBVOLUME_GROUP_NAME ]",
"ceph fs subvolume metadata ls cephfs sub0 { \"test_meta\": \"cluster\" }",
"ceph fs subvolume metadata rm VOLUME_NAME SUBVOLUME_NAME KEY_NAME [--group_name SUBVOLUME_GROUP_NAME ]",
"ceph fs subvolume metadata rm cephfs sub0 test_meta --group_name subgroup0",
"ceph fs subvolume metadata ls cephfs sub0 {}"
]
| https://docs.redhat.com/en/documentation/red_hat_ceph_storage/6/html/file_system_guide/management-of-ceph-file-system-volumes-subvolume-groups-and-subvolumes |
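As an end-to-end illustration of the commands documented in this chapter, the sketch below creates a volume, a subvolume group, and a subvolume, then snapshots and clones it. The names, the 10 GB quota, and the metadata key/value are examples only, not values mandated by the product.

```bash
#!/bin/bash
set -e

# Volume, group, and a quota-limited subvolume (10 GB, example value).
ceph fs volume create cephfs
ceph fs subvolumegroup create cephfs subgroup0
ceph fs subvolume create cephfs sub0 --group_name subgroup0 --size 10737418240

# Path handed to the consumer (for example an OpenStack Manila share).
ceph fs subvolume getpath cephfs sub0 --group_name subgroup0

# Snapshot the subvolume, clone it into a new writable subvolume, and
# check the clone until it reports "complete".
ceph fs subvolume snapshot create cephfs sub0 snap0 --group_name subgroup0
ceph fs subvolume snapshot clone cephfs sub0 snap0 clone0 --group_name subgroup0
ceph fs clone status cephfs clone0

# Attach free-form custom metadata to the subvolume (example key/value).
ceph fs subvolume metadata set cephfs sub0 owner tenant_a --group_name subgroup0
```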
Chapter 94. KafkaConnectStatus schema reference | Chapter 94. KafkaConnectStatus schema reference Used in: KafkaConnect Property Description conditions List of status conditions. Condition array observedGeneration The generation of the CRD that was last reconciled by the operator. integer url The URL of the REST API endpoint for managing and monitoring Kafka Connect connectors. string connectorPlugins The list of connector plugins available in this Kafka Connect deployment. ConnectorPlugin array labelSelector Label selector for pods providing this resource. string replicas The current number of pods being used to provide this resource. integer | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-KafkaConnectStatus-reference |
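These status fields can be inspected from the command line once a KafkaConnect resource exists. The sketch below assumes a resource named my-connect-cluster in the current namespace; the name is an example, not part of the schema.

```bash
# Read individual status fields with JSONPath.
oc get kafkaconnect my-connect-cluster -o jsonpath='{.status.url}{"\n"}'
oc get kafkaconnect my-connect-cluster -o jsonpath='{.status.replicas}{"\n"}'

# Block until the Ready condition reported in .status.conditions becomes True.
oc wait kafkaconnect/my-connect-cluster --for=condition=Ready --timeout=300s
```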
Chapter 8. Desktop | Chapter 8. Desktop GNOME Shell rebased to version 3.28 In Red Hat Enterprise Linux 7.6, GNOME Shell has been rebased to upstream version 3.28. Notable enhancements include: New GNOME Boxes features New on-screen keyboard Extended devices support, most significantly integration for the Thunderbolt 3 interface Improvements for GNOME Software, dconf-editor and GNOME Terminal Note that Nautilus file manager has been kept in version 3.26 to preserve the behavior of the desktop icons. (BZ#1567133) The sane-backends package is now built with systemd support Scanner Access Now Easy (SANE) is a universal scanner interface whose backend's and library's features are provided by the sane-backends package. This update brings the following changes to SANE: The sane-backends package is built with systemd support. The saned daemon can be run without the need to create unit files manually, because these files are now shipped with sane-backends . (BZ# 1512252 ) FreeType rebased to version 2.8 The FreeType font engine has been rebased to version 2.8, which is required by GNOME 3.28. The 2.8 version has been modified to be API and Application Binary Interface (ABI) compatible with the version 2.4.11. (BZ# 1576504 ) Nvidia Volta-based graphics cards are now supported This update adds support for Nvidia Volta-based graphics cards. As a result, the modesetting user-space driver, which is able to handle the basic operations and single graphic output, is used. However, 3D graphic is handled by the llvmpipe driver because Nvidia did not share public signed firmware for 3D. To reach maximum performance of the card, use the Nvidia binary driver. (BZ#1457161) xorg-x11-server rebased to version 1.20.0-0.1 The xorg-x11-server packages have been rebased to upstream version 1.20.0-0.1, which provides a number of bug fixes and enhancements over the version: Added support for the following input devices: Wacom Cintiq Pro 24, Wacom Cintiq Pro 32 tablet, Wacom Pro Pen 3D. Added support for Intel Cannon Lake and Whiskey Lake platform GPUs. Added support for S3TC texture compression in OpenGL Added support for X11 backing store always mode. Added support for Nvidia Volta series of graphics. Added support for AMD Vega graphics and Raven APU. (BZ#1564632) | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.6_release_notes/new_features_desktop |
Chapter 2. Architectures | Chapter 2. Architectures Red Hat Enterprise Linux 7.6 is distributed with the kernel version 3.10.0-957, which provides support for the following architectures: [1] 64-bit AMD 64-bit Intel IBM POWER7+ (big endian) IBM POWER8 (big endian) [2] IBM POWER8 (little endian) [3] IBM POWER9 (little endian) [4] [5] IBM Z [4] [6] 64-bit ARM [4] [1] Note that the Red Hat Enterprise Linux 7.6 installation is supported only on 64-bit hardware. Red Hat Enterprise Linux 7.6 is able to run 32-bit operating systems, including versions of Red Hat Enterprise Linux, as virtual machines. [2] Red Hat Enterprise Linux 7.6 POWER8 (big endian) are currently supported as KVM guests on Red Hat Enterprise Linux 7.6 POWER8 systems that run the KVM hypervisor, and on PowerVM. [3] Red Hat Enterprise Linux 7.6 POWER8 (little endian) is currently supported as a KVM guest on Red Hat Enterprise Linux 7.6 POWER8 systems that run the KVM hypervisor, and on PowerVM. In addition, Red Hat Enterprise Linux 7.6 POWER8 (little endian) guests are supported on Red Hat Enterprise Linux 7.6 POWER9 systems that run the KVM hypervisor in POWER8-compatibility mode on version 4.14 kernel using the kernel-alt package. [4] This architecture is supported with the kernel version 4.14, provided by the kernel-alt packages. For details, see the Red Hat Enterprise Linux 7.5 . [5] Red Hat Enterprise Linux 7.6 POWER9 (little endian) is currently supported as a KVM guest on Red Hat Enterprise Linux 7.6 POWER9 systems that run the KVM hypervisor on version 4.14 kernel using the kernel-alt package, and on PowerVM. [6] Red Hat Enterprise Linux 7.6 for IBM Z (both the 3.10 kernel version and the 4.14 kernel version) is currently supported as a KVM guest on Red Hat Enterprise Linux 7.6 for IBM Z hosts that run the KVM hypervisor on version 4.14 kernel using the kernel-alt package. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.6_release_notes/chap-red_hat_enterprise_linux-7.6_release_notes-architectures |
7.30. corosync | 7.30. corosync 7.30.1. RHBA-2013:0497 - corosync bug fix update Updated corosync packages that fix several bugs and add multiple enhancements are now available for Red Hat Enterprise Linux 6. The corosync packages provide the Corosync Cluster Engine and C Application Programming Interfaces (APIs) for Red Hat Enterprise Linux cluster software. Bug Fixes BZ# 783068 Prior to this update, the corosync-notifyd service did not run after restarting the process. This update modifies the init script to wait for the actual exit of previously running instances of the process. Now, the corosync-notifyd service runs as expected after restarting. BZ# 786735 Prior to this update, an incorrect node ID was sent in recovery messages when corosync entered recovery. As a consequence, debugging problems in the source code was difficult. This update sets the correct node ID. BZ# 786737 Upon receiving the JoinMSG message in the OPERATIONAL state, a node enters the GATHER state. However, if JoinMSG was discarded, the nodes sending this JoinMSG could not receive a response until other nodes have had their tokens expired. This caused the nodes having entered the GATHER state spend more time to rejoin the ring. With this update, the underlying source code has been modified to address this issue. BZ# 787789 Prior to this update the netfilter firewall blocked input and output multicast packets, corosync coould become suspended, failed to create membership and cluster could not be used. After this update, corosync is no longer dependent on multicast loop kernel feature for local messages delivery, but uses the socpair unix dgram socket. BZ# 794744 Previously, on InfiniBand devices, corosync autogenerated the node ID when the configuration file or the cluster manager ( cman ) already set one. This update modifies the underlying code to recognize user-set mode IDs. Now, corosync autogenerates node IDs only when the user has not entered one. BZ# 821352 Prior to this update, corosync sockets were bound to a PEERs IP address instead of the local IP address when the IP address was configured as peer-to-peer (netmask /32). As a consequence, corosync was unable to create memberships. This update modifies the underlying code to use the correct information about the local IP address. BZ# 824902 Prior to this update, the corosync logic always used the first IP address that was found. As a consequence, users could not use more than one IP address on the same network. This update modifies the logic to use the first network address if no exact match was found. Now, users can bind to the IP address they select. BZ# 827100 Prior to this update, some sockets were not bound to a concrete IP address but listened on all interfaces in the UDPU mode. As a consequence, users could encounter problems when configuring the firewall. This update binds all sockets correctly. BZ#847232 Prior to this update, configuration file names that consisted of more than 255 characters could cause corosync to abort unexpectedly. This update returns the complete item value. In case of the old ABI, corosync prints an error. Now, corosync no longer aborts with longer names. BZ# 838524 When corosync was running with the votequorum library enabled, votequorum's register reloaded the configuration handler after each change in the configuration database (confdb). This caused corosync to run slower and to eventually encounter an Out Of Memory error. After this update, a register callback is only performed during startup. 
As a result, corosync no longer slows down or encounters an Out Of Memory error. BZ#848210 Prior to this update, the corosync-notifyd output was considerably slow and corosync memory grew when D-Bus output was enabled. Memory was not freed when corosync-notifyd was closed. This update modifies the corosync-notifyd event handler not to wait when there is nothing to receive and send from or to D-Bus. Now, corosync frees memory when the IPC client exits and corosync-notifyd produces output in speed of incoming events. BZ# 830799 Previously, the node cluster did not correspond with the CPG library membership. Consequently, the nodes were recognized as unknown , and corosync warning messages were not returned. A patch with an enhanced log from CPG has been provided to fix this bug. Now, the nodes work with CPG correctly, and appropriate warning messages are returned. BZ# 902397 Due to a regression, the corosync utility did not work with IPv6, which caused the network interface to be down. A patch has been provided to fix this bug. Corosync now works with IPv6 as expected, and the network interface is up. BZ# 838524 When corosync was running with the votequorum library enabled, votequorum's register reloaded the configuration handler after each change in the configuration database (confdb). This caused corosync to run slower and to eventually encounter an Out Of Memory error. After this update, a register callback is only performed during startup. As a result, corosync no longer slows down or encounters an Out Of Memory error. BZ# 865039 Previously, during heavy cluster operations, one of the nodes failed sending numerous of the following messages to the syslog file: A patch has been applied to address this issue. BZ# 850757 Prior to this update, corosync dropped ORF tokens together with memb_join packets when using CPU timing on certain networks. As a consequence, the RRP interface could be wrongly marked as faulty. This update drops only memb_join messages. BZ# 861032 Prior to this update, the corosync.conf parser failed if the ring number was larger than the allowed maximum of 1. As a consequence, corosync could abort with a segmentation fault. This update adds a check to the corosync.conf parser. Now, an error message is printed if the ring number is larger than 1. BZ# 863940 Prior to this update, corosync stopped on multiple nodes. As a consequence, corosync could, under certain circumstances, abort with a segmentation fault. This update ensures that the corosync service no longer calls callbacks on unloaded services. BZ# 869609 Prior to this update, corosync could abort with a segmentation fault when a large number of corosync nodes were started together. This update modifies the underlying code to ensure that the NULL pointer is not dereferenced. Now, corosync no longer encounters segmentation faults when starting multiple nodes at the same time. BZ# 876908 Prior to this update, the parsercorosync-objctl command with additional parameters could cause the error "Error reloading DB 11". This update removes the reloading function and handles changes of changed objects in the configuration data base ( confdb ). Now, the logging level can be changed as expected. BZ# 873059 Several typos in the corosync(8) manual page have been fixed. Also, manual pages for confdb_* functions have been added. Enhancements BZ#770455 With this update, the corosync log includes the hostname and the process ID of the processes that join the cluster to allow for better troubleshooting. 
BZ# 794522 This update adds the manual page confdb_keys.8 to provide descriptions for corosync runtime statistics that are returned by corosync-objctl . BZ# 838743 This update adds the new trace level to filter corosync flow messages to improve debugging. Users of corosync are advised to upgrade to these updated packages, which fix these bugs and add these enhancements. 7.30.2. RHBA-2013:0824 - corosync bug fix update An updated corosync package that fixes several bugs is now available for Red Hat Enterprise Linux 6. The Corosync packages provide the Corosync Cluster Engine and C Application Programming Interfaces (APIs) for Red Hat Enterprise Linux cluster software. Bug Fix BZ# 929101 When running applications which used the Corosync IPC library, some messages in the dispatch() function were lost or duplicated. This update properly checks the return values of the dispatch_put() function, returns the correct remaining bytes in the IPC ring buffer, and ensures that the IPC client is correctly informed about the real number of messages in the ring buffer. Now, messages in the dispatch() function are no longer lost or duplicated. Users of corosync are advised to upgrade to these updated packages, which fix this bug. | [
"dlm_controld[32123]: cpg_dispatch error 2"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/corosync |
4.67. fuse | 4.67. fuse 4.67.1. RHBA-2011:1756 - fuse bug fix update An updated fuse package that fixes one bug is now available in Red Hat Enterprise Linux 6. The fuse package contains the file system in userspace utilities and libraries required for using fuse file systems. Bug Fix BZ# 723757 Prior to this update, fusermount used an incorrect path to unmount. As a result, fusermount was unable to unmount mounted fuse file systems. This update, modifies fusermount to use the correct mount path. Now, mounted fuse file systems can be successfully unmounted with fusermount. All users who use fuse file systems in their environment are advised to upgrade to this updated fuse package, which fixes this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/fuse |
8.9. Scanning Containers and Container Images for Vulnerabilities | 8.9. Scanning Containers and Container Images for Vulnerabilities Use these procedures to find security vulnerabilities in a container or a container image. You can use either the oscap-docker command-line utility or the atomic scan command-line utility to find security vulnerabilities in a container or a container image. With oscap-docker , you can use the oscap program to scan container images and containers. With atomic scan , you can use OpenSCAP scanning capabilities to scan container images and containers on the system. You can scan for known CVE vulnerabilities and for configuration compliance. Additionally, you can remediate container images to the specified policy. 8.9.1. Scanning Container Images and Containers for Vulnerabilities Using oscap-docker You can scan containers and container images using the oscap-docker utility. Note The oscap-docker command requires root privileges and the ID of a container is the second argument. Prerequisites The openscap-containers package is installed. Procedure Find the ID of a container or a container image, for example: Scan the container or the container image for vulnerabilities and save results to the vulnerability.html file: Important To scan a container, replace the image-cve argument with container-cve . Verification Inspect the results in a browser of your choice, for example: Additional Resources For more information, see the oscap-docker(8) and oscap(8) man pages. 8.9.2. Scanning Container Images and Containers for Vulnerabilities Using atomic scan With the atomic scan utility, you can scan containers and container images for known security vulnerabilities as defined in the CVE OVAL definitions released by Red Hat . The atomic scan command has the following form: where ID is the ID of the container image or container you want to scan. Warning The atomic scan functionality is deprecated, and the OpenSCAP container image is no longer updated for new vulnerabilities. Therefore, prefer the oscap-docker utility for vulnerability scanning purposes. Use cases To scan all container images, use the --images directive. To scan all containers, use the --containers directive. To scan both types, use the --all directive. To list all available command-line options, use the atomic scan --help command. The default scan type of the atomic scan command is CVE scan . Use it for checking a target for known security vulnerabilities as defined in the CVE OVAL definitions released by Red Hat . Prerequisites You have downloaded and installed the OpenSCAP container image from Red Hat Container Catalog (RHCC) using the atomic install rhel7/openscap command. Procedure Verify you have the latest OpenSCAP container image to ensure the definitions are up to date: Scan a RHEL 7.2 container image with several known security vulnerabilities: Additional Resources Product Documentation for Red Hat Enterprise Linux Atomic Host contains a detailed description of the atomic command usage and containers. The Red Hat Customer Portal provides a guide to the Atomic command-line interface (CLI) . | [
"~]# docker images REPOSITORY TAG IMAGE ID CREATED SIZE registry.access.redhat.com/ubi7/ubi latest 096cae65a207 7 weeks ago 239 MB",
"~]# oscap-docker image-cve 096cae65a207 --report vulnerability.html",
"~]USD firefox vulnerability.html &",
"~]# atomic scan [OPTIONS] [ID]",
"~]# atomic help registry.access.redhat.com/rhel7/openscap | grep version",
"~]# atomic scan registry.access.redhat.com/rhel7:7.2 docker run -t --rm -v /etc/localtime:/etc/localtime -v /run/atomic/2017-11-01-14-49-36-614281:/scanin -v /var/lib/atomic/openscap/2017-11-01-14-49-36-614281:/scanout:rw,Z -v /etc/oscapd:/etc/oscapd:ro registry.access.redhat.com/rhel7/openscap oscapd-evaluate scan --no-standard-compliance --targets chroots-in-dir:///scanin --output /scanout registry.access.redhat.com/rhel7:7.2 (98a88a8b722a718) The following issues were found: RHSA-2017:2832: nss security update (Important) Severity: Important RHSA URL: https://access.redhat.com/errata/RHSA-2017:2832 RHSA ID: RHSA-2017:2832-01 Associated CVEs: CVE ID: CVE-2017-7805 CVE URL: https://access.redhat.com/security/cve/CVE-2017-7805"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/security_guide/scanning-container-and-container-images-for-vulnerabilities_scanning-the-system-for-configuration-compliance-and-vulnerabilities |
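To extend the single-image example above, the following sketch (run as root) scans every local image with oscap-docker and writes one HTML report per image; the report directory is an arbitrary choice.

```bash
#!/bin/bash
# Run as root: oscap-docker needs privileged access to the container storage.
REPORT_DIR=/root/cve-reports
mkdir -p "${REPORT_DIR}"

for id in $(docker images -q | sort -u); do
    oscap-docker image-cve "${id}" --report "${REPORT_DIR}/${id}.html"
done
```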
Chapter 25. Online Storage Management | Chapter 25. Online Storage Management It is often desirable to add, remove or re-size storage devices while the operating system is running, and without rebooting. This chapter outlines the procedures that may be used to reconfigure storage devices on Red Hat Enterprise Linux 7 host systems while the system is running. It covers iSCSI and Fibre Channel storage interconnects; other interconnect types may be added it the future. This chapter focuses on adding, removing, modifying, and monitoring storage devices. It does not discuss the Fibre Channel or iSCSI protocols in detail. For more information about these protocols, refer to other documentation. This chapter makes reference to various sysfs objects. Red Hat advises that the sysfs object names and directory structure are subject to change in major Red Hat Enterprise Linux releases. This is because the upstream Linux kernel does not provide a stable internal API. For guidelines on how to reference sysfs objects in a transportable way, refer to the document /usr/share/doc/kernel-doc- version /Documentation/sysfs-rules.txt in the kernel source tree for guidelines. Warning Online storage reconfiguration must be done carefully. System failures or interruptions during the process can lead to unexpected results. Red Hat advises that you reduce system load to the maximum extent possible during the change operations. This will reduce the chance of I/O errors, out-of-memory errors, or similar errors occurring in the midst of a configuration change. The following sections provide more specific guidelines regarding this. In addition, Red Hat recommends that you back up all data before reconfiguring online storage. 25.1. Target Setup Red Hat Enterprise Linux 7 uses the targetcli shell as a front end for viewing, editing, and saving the configuration of the Linux-IO Target without the need to manipulate the kernel target's configuration files directly. The targetcli tool is a command-line interface that allows an administrator to export local storage resources, which are backed by either files, volumes, local SCSI devices, or RAM disks, to remote systems. The targetcli tool has a tree-based layout, includes built-in tab completion, and provides full auto-complete support and inline documentation. The hierarchy of targetcli does not always match the kernel interface exactly because targetcli is simplified where possible. Important To ensure that the changes made in targetcli are persistent, start and enable the target service: 25.1.1. Installing and Running targetcli To install targetcli , use: Start the target service: Configure target to start at boot time: Open port 3260 in the firewall and reload the firewall configuration: Use the targetcli command, and then use the ls command for the layout of the tree interface: Note In Red Hat Enterprise Linux 7.0, using the targetcli command from Bash, for example, targetcli iscsi/ create , does not work and does not return an error. Starting with Red Hat Enterprise Linux 7.1, an error status code is provided to make using targetcli with shell scripts more useful. 25.1.2. Creating a Backstore Backstores enable support for different methods of storing an exported LUN's data on the local machine. Creating a storage object defines the resources the backstore uses. Note In Red Hat Enterprise Linux 6, the term 'backing-store' is used to refer to the mappings created. 
However, to avoid confusion between the various ways 'backstores' can be used, in Red Hat Enterprise Linux 7 the term 'storage objects' refers to the mappings created and 'backstores' is used to describe the different types of backing devices. The backstore devices that LIO supports are: FILEIO (Linux file-backed storage) FILEIO storage objects can support either write_back or write_thru operation. The write_back enables the local file system cache. This improves performance but increases the risk of data loss. It is recommended to use write_back=false to disable write_back in favor of write_thru . To create a fileio storage object, run the command /backstores/fileio create file_name file_location file_size write_back=false . For example: BLOCK (Linux BLOCK devices) The block driver allows the use of any block device that appears in the /sys/block to be used with LIO. This includes physical devices (for example, HDDs, SSDs, CDs, DVDs) and logical devices (for example, software or hardware RAID volumes, or LVM volumes). Note BLOCK backstores usually provide the best performance. To create a BLOCK backstore using any block device, use the following command: Note You can also create a BLOCK backstore on a logical volume. PSCSI (Linux pass-through SCSI devices) Any storage object that supports direct pass-through of SCSI commands without SCSI emulation, and with an underlying SCSI device that appears with lsscsi in /proc/scsi/scsi (such as a SAS hard drive) can be configured as a backstore. SCSI-3 and higher is supported with this subsystem. Warning PSCSI should only be used by advanced users. Advanced SCSI commands such as for Aysmmetric Logical Unit Assignment (ALUAs) or Persistent Reservations (for example, those used by VMware ESX, and vSphere) are usually not implemented in the device firmware and can cause malfunctions or crashes. When in doubt, use BLOCK for production setups instead. To create a PSCSI backstore for a physical SCSI device, a TYPE_ROM device using /dev/sr0 in this example, use: Memory Copy RAM disk (Linux RAMDISK_MCP) Memory Copy RAM disks ( ramdisk ) provide RAM disks with full SCSI emulation and separate memory mappings using memory copy for initiators. This provides capability for multi-sessions and is particularly useful for fast, volatile mass storage for production purposes. To create a 1GB RAM disk backstore, use the following command: 25.1.3. Creating an iSCSI Target To create an iSCSI target: Procedure 25.1. Creating an iSCSI target Run targetcli . Move into the iSCSI configuration path: Note The cd command is also accepted to change directories, as well as simply listing the path to move into. Create an iSCSI target using a default target name. Or create an iSCSI target using a specified name. Verify that the newly created target is visible when targets are listed with ls . Note As of Red Hat Enterprise Linux 7.1, whenever a target is created, a default portal is also created. 25.1.4. Configuring an iSCSI Portal To configure an iSCSI portal, an iSCSI target must first be created and associated with a TPG. For instructions on how to do this, refer to Section 25.1.3, "Creating an iSCSI Target" . Note As of Red Hat Enterprise Linux 7.1 when an iSCSI target is created, a default portal is created as well. This portal is set to listen on all IP addresses with the default port number (that is, 0.0.0.0:3260). 
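A non-interactive sketch of that portal replacement is shown below; the IQN and the IP address are placeholders for your own target and network. As noted earlier in this chapter, running targetcli directly from a shell returns usable error codes only on Red Hat Enterprise Linux 7.1 and later.

```bash
# Remove the default 0.0.0.0:3260 portal, then listen on one address only.
targetcli /iscsi/iqn.2006-04.com.example:444/tpg1/portals delete ip_address=0.0.0.0 ip_port=3260
targetcli /iscsi/iqn.2006-04.com.example:444/tpg1/portals/ create 192.168.122.137

# Persist the change across reboots.
targetcli saveconfig
```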
To remove this and add only specified portals, use /iscsi/ iqn-name /tpg1/portals delete ip_address=0.0.0.0 ip_port=3260 , and then create a new portal with the required information. Procedure 25.2. Creating an iSCSI Portal Move into the TPG. There are two ways to create a portal: create a default portal, or create a portal specifying what IP address to listen to. Creating a default portal uses the default iSCSI port 3260 and allows the target to listen on all IP addresses on that port. To create a portal specifying what IP address to listen to, use the following command. Verify that the newly created portal is visible with the ls command. 25.1.5. Configuring LUNs To configure LUNs, first create storage objects. See Section 25.1.2, "Creating a Backstore" for more information. Procedure 25.3. Configuring LUNs Create LUNs of already created storage objects. Show the changes. Note Be aware that the default LUN name starts at 0, as opposed to 1 as was the case when using tgtd in Red Hat Enterprise Linux 6. Configure ACLs. For more information, see Section 25.1.6, "Configuring ACLs" . Important By default, LUNs are created with read-write permissions. In the event that a new LUN is added after ACLs have been created, that LUN is automatically mapped to all available ACLs. This can cause a security risk. Use the following procedure to create a LUN as read-only. Procedure 25.4. Create a Read-only LUN To create a LUN with read-only permissions, first use the following command: This prevents the auto mapping of LUNs to existing ACLs, allowing the manual mapping of LUNs. Next, manually create the LUN with the command iscsi/ target_iqn_name /tpg1/acls/ initiator_iqn_name / create mapped_lun= next_sequential_LUN_number tpg_lun_or_backstore= backstore write_protect=1 . The mapped_lun1 line now has (ro) at the end (unlike mapped_lun0's (rw)), indicating that it is read-only. Configure ACLs. For more information, see Section 25.1.6, "Configuring ACLs" . 25.1.6. Configuring ACLs Create an ACL for each initiator that will be connecting. This enforces authentication when that initiator connects, and exposes only the permitted LUNs to each initiator. Usually each initiator has exclusive access to a LUN. Both targets and initiators have unique identifying names. The initiator's unique name must be known to configure ACLs. For open-iscsi initiators, this can be found in /etc/iscsi/initiatorname.iscsi . Procedure 25.5. Configuring ACLs Move into the acls directory. Create an ACL. Either use the initiator name found in /etc/iscsi/initiatorname.iscsi on the initiator, or, if using a name that is easier to remember, refer to Section 25.2, "Creating an iSCSI Initiator" to ensure the ACL matches the initiator. For example: Note The given example's behavior depends on the setting used. In this case, the global setting auto_add_mapped_luns is used. This automatically maps LUNs to any created ACL. You can set user-created ACLs within the TPG node on the target server: Show the changes. 25.1.7. Configuring Fibre Channel over Ethernet (FCoE) Target In addition to mounting LUNs over FCoE, as described in Section 25.5, "Configuring a Fibre Channel over Ethernet Interface" , exporting LUNs to other machines over FCoE is also supported with the aid of targetcli . Important Before proceeding, refer to Section 25.5, "Configuring a Fibre Channel over Ethernet Interface" and verify that basic FCoE setup is completed, and that fcoeadm -i displays configured FCoE interfaces. Procedure 25.6.
Configure FCoE target Setting up an FCoE target requires the installation of the targetcli package, along with its dependencies. Refer to Section 25.1, "Target Setup" for more information on targetcli basics and setup. Create an FCoE target instance on an FCoE interface. If FCoE interfaces are present on the system, tab-completing after create will list available interfaces. If not, ensure fcoeadm -i shows active interfaces. Map a backstore to the target instance. Example 25.1. Example of Mapping a Backstore to the Target Instance Allow access to the LUN from an FCoE initiator. The LUN should now be accessible to that initiator. To make the changes persistent across reboots, use the saveconfig command and type yes when prompted. If this is not done, the configuration will be lost after rebooting. Exit targetcli by typing exit or entering ctrl + D. 25.1.8. Removing Objects with targetcli To remove a backstore, use the command: To remove parts of an iSCSI target, such as an ACL, use the following command: To remove the entire target, including all ACLs, LUNs, and portals, use the following command: 25.1.9. targetcli References For more information on targetcli , refer to the following resources: man targetcli The targetcli man page. It includes an example walkthrough. Screencast by Andy Grover https://www.youtube.com/watch?v=BkBGTBadOO8 Note This was uploaded on February 28, 2012. As such, the service name has changed from targetcli to target . | [
"systemctl start target # systemctl enable target",
"yum install targetcli",
"systemctl start target",
"systemctl enable target",
"firewall-cmd --permanent --add-port=3260/tcp Success # firewall-cmd --reload Success",
"targetcli : /> ls o- /........................................[...] o- backstores.............................[...] | o- block.................[Storage Objects: 0] | o- fileio................[Storage Objects: 0] | o- pscsi.................[Storage Objects: 0] | o- ramdisk...............[Storage Ojbects: 0] o- iscsi...........................[Targets: 0] o- loopback........................[Targets: 0]",
"/> /backstores/fileio create file1 /tmp/disk1.img 200M write_back=false Created fileio file1 with size 209715200",
"fdisk /dev/ vdb Welcome to fdisk (util-linux 2.23.2). Changes will remain in memory only, until you decide to write them. Be careful before using the write command. Device does not contain a recognized partition table Building a new DOS disklabel with disk identifier 0x39dc48fb. Command (m for help): n Partition type: p primary (0 primary, 0 extended, 4 free) e extended Select (default p): *Enter* Using default response p Partition number (1-4, default 1): *Enter* First sector (2048-2097151, default 2048): *Enter* Using default value 2048 Last sector, +sectors or +size{K,M,G} (2048-2097151, default 2097151): +250M Partition 1 of type Linux and of size 250 MiB is set Command (m for help): w The partition table has been altered! Calling ioctl() to re-read partition table. Syncing disks.",
"/> /backstores/block create name=block_backend dev=/dev/ vdb Generating a wwn serial. Created block storage object block_backend using /dev/ vdb .",
"/> backstores/pscsi/ create name=pscsi_backend dev=/dev/sr0 Generating a wwn serial. Created pscsi storage object pscsi_backend using /dev/sr0",
"/> backstores/ramdisk/ create name=rd_backend size=1GB Generating a wwn serial. Created rd_mcp ramdisk rd_backend with size 1GB.",
"/> iscsi/",
"/iscsi> create Created target iqn.2003-01.org.linux-iscsi.hostname.x8664:sn.78b473f296ff Created TPG1",
"/iscsi > create iqn.2006-04.com.example:444 Created target iqn.2006-04.com.example:444 Created TPG1",
"/iscsi > ls o- iscsi.......................................[1 Target] o- iqn.2006-04.com.example:444................[1 TPG] o- tpg1...........................[enabled, auth] o- acls...............................[0 ACL] o- luns...............................[0 LUN] o- portals.........................[0 Portal]",
"/iscsi> iqn.2006-04.example:444/tpg1/",
"/iscsi/iqn.20...mple:444/tpg1> portals/ create Using default IP port 3260 Binding to INADDR_Any (0.0.0.0) Created network portal 0.0.0.0:3260",
"/iscsi/iqn.20...mple:444/tpg1> portals/ create 192.168.122.137 Using default IP port 3260 Created network portal 192.168.122.137:3260",
"/iscsi/iqn.20...mple:444/tpg1> ls o- tpg.................................. [enambled, auth] o- acls ......................................[0 ACL] o- luns ......................................[0 LUN] o- portals ................................[1 Portal] o- 192.168.122.137:3260......................[OK]",
"/iscsi/iqn.20...mple:444/tpg1> luns/ create /backstores/ramdisk/rd_backend Created LUN 0. /iscsi/iqn.20...mple:444/tpg1> luns/ create /backstores/block/block_backend Created LUN 1. /iscsi/iqn.20...mple:444/tpg1> luns/ create /backstores/fileio/file1 Created LUN 2.",
"/iscsi/iqn.20...mple:444/tpg1> ls o- tpg.................................. [enambled, auth] o- acls ......................................[0 ACL] o- luns .....................................[3 LUNs] | o- lun0.........................[ramdisk/ramdisk1] | o- lun1.................[block/block1 (/dev/vdb1)] | o- lun2...................[fileio/file1 (/foo.img)] o- portals ................................[1 Portal] o- 192.168.122.137:3260......................[OK]",
"/> set global auto_add_mapped_luns=false Parameter auto_add_mapped_luns is now 'false'.",
"/> iscsi/iqn.2015-06.com.redhat:target/tpg1/acls/iqn.2015-06.com.redhat:initiator/ create mapped_lun=1 tpg_lun_or_backstore=/backstores/block/block2 write_protect=1 Created LUN 1. Created Mapped LUN 1. /> ls o- / ...................................................... [...] o- backstores ........................................... [...] <snip> o- iscsi ......................................... [Targets: 1] | o- iqn.2015-06.com.redhat:target .................. [TPGs: 1] | o- tpg1 ............................ [no-gen-acls, no-auth] | o- acls ....................................... [ACLs: 2] | | o- iqn.2015-06.com.redhat:initiator .. [Mapped LUNs: 2] | | | o- mapped_lun0 .............. [lun0 block/disk1 (rw)] | | | o- mapped_lun1 .............. [lun1 block/disk2 (ro)] | o- luns ....................................... [LUNs: 2] | | o- lun0 ...................... [block/disk1 (/dev/vdb)] | | o- lun1 ...................... [block/disk2 (/dev/vdc)] <snip>",
"/iscsi/iqn.20...mple:444/tpg1> acls/",
"/iscsi/iqn.20...444/tpg1/acls> create iqn.2006-04.com.example.foo:888 Created Node ACL for iqn.2006-04.com.example.foo:888 Created mapped LUN 2. Created mapped LUN 1. Created mapped LUN 0.",
"/iscsi/iqn.20...scsi:444/tpg1> set attribute generate_node_acls=1",
"/iscsi/iqn.20...444/tpg1/acls> ls o- acls .................................................[1 ACL] o- iqn.2006-04.com.example.foo:888 ....[3 Mapped LUNs, auth] o- mapped_lun0 .............[lun0 ramdisk/ramdisk1 (rw)] o- mapped_lun1 .................[lun1 block/block1 (rw)] o- mapped_lun2 .................[lun2 fileio/file1 (rw)]",
"/> tcm_fc/ create 00:11:22:33:44:55:66:77",
"/> tcm_fc/ 00:11:22:33:44:55:66:77",
"/> luns/ create /backstores/fileio/ example2",
"/> acls/ create 00:99:88:77:66:55:44:33",
"/> /backstores/ backstore-type / backstore-name",
"/> /iscsi/ iqn-name /tpg/ acls / delete iqn-name",
"/> /iscsi delete iqn-name"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/storage_administration_guide/online-storage-management |
Chapter 11. Troubleshooting common installation problems | Chapter 11. Troubleshooting common installation problems If you are experiencing difficulties installing the Red Hat OpenShift AI Operator, read this section to understand what could be causing the problem and how to resolve it. If the problem is not included here or in the release notes, contact Red Hat Support . When opening a support case, it is helpful to include debugging information about your cluster. You can collect this information by using the must-gather tool as described in Must-Gather for Red Hat OpenShift AI and Gathering data about your cluster . You can also adjust the log level of OpenShift AI Operator components to increase or reduce log verbosity to suit your use case. For more information, see Configuring the OpenShift AI Operator logger . 11.1. The Red Hat OpenShift AI Operator cannot be retrieved from the image registry Problem When attempting to retrieve the Red Hat OpenShift AI Operator from the image registry, an Failure to pull from quay error message appears. The Red Hat OpenShift AI Operator might be unavailable for retrieval in the following circumstances: The image registry is unavailable. There is a problem with your network connection. Your cluster is not operational and is therefore unable to retrieve the image registry. Diagnosis Check the logs in the Events section in OpenShift for further information about the Failure to pull from quay error message. Resolution Contact Red Hat support. 11.2. OpenShift AI does not install on unsupported infrastructure Problem You are deploying on an environment that is not documented as supported by the Red Hat OpenShift AI Operator. Diagnosis In the OpenShift web console, switch to the Administrator perspective. Click Workloads Pods . Set the Project to All Projects or redhat-ods-operator . Click the rhods-operator-<random string> pod. The Pod details page appears. Click Logs . Select rhods-operator from the drop-down list. Check the log for the ERROR: Deploying on USDinfrastructure, which is not supported. Failing Installation error message. Resolution Before proceeding with a new installation, ensure that you have a fully supported environment on which to install OpenShift AI. For more information, see Red Hat OpenShift AI: Supported Configurations . 11.3. The creation of the OpenShift AI Custom Resource (CR) fails Problem During the installation process, the OpenShift AI Custom Resource (CR) does not get created. This issue occurs in unknown circumstances. Diagnosis In the OpenShift web console, switch to the Administrator perspective. Click Workloads Pods . Set the Project to All Projects or redhat-ods-operator . Click the rhods-operator-<random string> pod. The Pod details page appears. Click Logs . Select rhods-operator from the drop-down list. Check the log for the ERROR: Attempt to create the ODH CR failed. error message. Resolution Contact Red Hat support. 11.4. The creation of the OpenShift AI Notebooks Custom Resource (CR) fails Problem During the installation process, the OpenShift AI Notebooks Custom Resource (CR) does not get created. This issue occurs in unknown circumstances. Diagnosis In the OpenShift web console, switch to the Administrator perspective. Click Workloads Pods . Set the Project to All Projects or redhat-ods-operator . Click the rhods-operator-<random string> pod. The Pod details page appears. Click Logs . Select rhods-operator from the drop-down list. Check the log for the ERROR: Attempt to create the RHODS Notebooks CR failed. 
error message. Resolution Contact Red Hat support. 11.5. The OpenShift AI dashboard is not accessible Problem After installing OpenShift AI, the redhat-ods-applications , redhat-ods-monitoring , and redhat-ods-operator project namespaces are Active but you cannot access the dashboard due to an error in the pod. Diagnosis In the OpenShift web console, switch to the Administrator perspective. Click Workloads Pods . Set the Project to All Projects . Click Filter and select the checkbox for every status except Running and Completed . The page displays the pods that have an error. Resolution To see more information and troubleshooting steps for a pod, on the Pods page, click the link in the Status column for the pod. If the Status column does not display a link, click the pod name to open the pod details page and then click the Logs tab. 11.6. Reinstalling OpenShift AI fails with an error Problem After uninstalling the OpenShift AI Operator and reinstalling it by using the CLI, the reinstallation fails with an unable to find DSCInitialization error in the OpenShift AI Operator pod log. This issue can occur if the Auth custom resource from the installation was not deleted after uninstalling the OpenShift AI Operator and before reinstalling it. For more information, see Understanding the uninstallation process . Diagnosis In the OpenShift web console, switch to the Administrator perspective. Click Workloads Pods . Set the Project to All Projects or redhat-ods-operator . Click the rhods-operator-<random string> pod. The Pod details page appears. Click Logs . Select rhods-operator from the drop-down list. Check the log for an error message similar to the following: Resolution Uninstall the OpenShift AI Operator. Delete the Auth custom resource: In the OpenShift web console, switch to the Administrator perspective. Click API Explorer . From the All groups drop-down list, select or enter services.platform.opendatahub.io . Click the Auth kind. Click the Instances tab. Click the action menu (...) and select Delete Auth . The Delete Auth dialog appears. Click Delete . Install the OpenShift AI Operator again. 11.7. The dedicated-admins Role-based access control (RBAC) policy cannot be created Problem The Role-based access control (RBAC) policy for the dedicated-admins group in the target project cannot be created. This issue occurs in unknown circumstances. Diagnosis In the OpenShift web console, switch to the Administrator perspective. Click Workloads Pods . Set the Project to All Projects or redhat-ods-operator . Click the rhods-operator-<random string> pod. The Pod details page appears. Click Logs . Select rhods-operator from the drop-down list. Check the log for the ERROR: Attempt to create the RBAC policy for dedicated admins group in USDtarget_project failed. error message. Resolution Contact Red Hat support. 11.8. The PagerDuty secret does not get created Problem An issue with Managed Tenants SRE automation process causes the PagerDuty's secret to not get created. Diagnosis In the OpenShift web console, switch to the Administrator perspective. Click Workloads Pods . Set the Project to All Projects or redhat-ods-operator . Click the rhods-operator-<random string> pod. The Pod details page appears. Click Logs . Select rhods-operator from the drop-down list. Check the log for the ERROR: Pagerduty secret does not exist error message. Resolution Contact Red Hat support. 11.9. 
The SMTP secret does not exist Problem An issue with Managed Tenants SRE automation process causes the SMTP secret to not get created. Diagnosis In the OpenShift web console, switch to the Administrator perspective. Click Workloads Pods . Set the Project to All Projects or redhat-ods-operator . Click the rhods-operator-<random string> pod. The Pod details page appears. Click Logs . Select rhods-operator from the drop-down list. Check the log for the ERROR: SMTP secret does not exist error message. Resolution Contact Red Hat support. 11.10. The ODH parameter secret does not get created Problem An issue with the OpenShift AI Operator's flow could result in failure to create the ODH parameter. Diagnosis In the OpenShift web console, switch to the Administrator perspective. Click Workloads Pods . Set the Project to All Projects or redhat-ods-operator . Click the rhods-operator-<random string> pod. The Pod details page appears. Click Logs . Select rhods-operator from the drop-down list. Check the log for the ERROR: Addon managed odh parameter secret does not exist. error message. Resolution Contact Red Hat support. 11.11. Data science pipelines are not enabled after installing OpenShift AI 2.9 or later due to existing Argo Workflows resources Problem After installing OpenShift AI 2.9 or later with an Argo Workflows installation that is not installed by OpenShift AI on your cluster, data science pipelines are not enabled despite the datasciencepipelines component being enabled in the DataScienceCluster object. Diagnosis After you install OpenShift AI 2.9 or later, the Data Science Pipelines tab is not visible on the OpenShift AI dashboard navigation menu. Resolution Delete the separate installation of Argo workflows on your cluster. After you have removed any Argo Workflows resources that are not created by OpenShift AI from your cluster, data science pipelines are enabled automatically. | [
"{\"name\":\"auth\"},\"namespace\":\"\",\"name\":\"auth\",\"reconcileID\":\"7bff53ae-1252-46fe-831a-fdc824078a1b\",\"error\":\"unable to find DSCInitialization\",\"stacktrace\":\"sigs.k8s.io/controller-runtime/pkg/internal/controller."
]
| https://docs.redhat.com/en/documentation/red_hat_openshift_ai_self-managed/2.18/html/installing_and_uninstalling_openshift_ai_self-managed/troubleshooting-common-installation-problems_install |
Chapter 25. Setting up Stratis file systems | Chapter 25. Setting up Stratis file systems Stratis runs as a service to manage pools of physical storage devices, simplifying local storage management with ease of use while helping you set up and manage complex storage configurations. Important Stratis is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview . 25.1. What is Stratis Stratis is a local storage-management solution for Linux. It is focused on simplicity and ease of use, and gives you access to advanced storage features. Stratis makes the following activities easier: Initial configuration of storage Making changes later Using advanced storage features Stratis is a local storage management system that supports advanced storage features. The central concept of Stratis is a storage pool . This pool is created from one or more local disks or partitions, and file systems are created from the pool. The pool enables many useful features, such as: File system snapshots Thin provisioning Tiering Encryption Additional resources Stratis website 25.2. Components of a Stratis volume Learn about the components that comprise a Stratis volume. Externally, Stratis presents the following volume components on the command line and the API: blockdev Block devices, such as a disk or a disk partition. pool Composed of one or more block devices. A pool has a fixed total size, equal to the size of the block devices. The pool contains most Stratis layers, such as the non-volatile data cache using the dm-cache target. Stratis creates a /dev/stratis/ my-pool / directory for each pool. This directory contains links to devices that represent Stratis file systems in the pool. filesystem Each pool can contain one or more file systems, which store files. File systems are thinly provisioned and do not have a fixed total size. The actual size of a file system grows with the data stored on it. If the size of the data approaches the virtual size of the file system, Stratis grows the thin volume and the file system automatically. The file systems are formatted with XFS. Important Stratis tracks information about file systems created using Stratis that XFS is not aware of, and changes made using XFS do not automatically create updates in Stratis. Users must not reformat or reconfigure XFS file systems that are managed by Stratis. Stratis creates links to file systems at the /dev/stratis/ my-pool / my-fs path. Note Stratis uses many Device Mapper devices, which show up in dmsetup listings and the /proc/partitions file. Similarly, the lsblk command output reflects the internal workings and layers of Stratis. 25.3. Block devices usable with Stratis Storage devices that can be used with Stratis. Supported devices Stratis pools have been tested to work on these types of block devices: LUKS LVM logical volumes MD RAID DM Multipath iSCSI HDDs and SSDs NVMe devices Unsupported devices Because Stratis contains a thin-provisioning layer, Red Hat does not recommend placing a Stratis pool on block devices that are already thinly-provisioned. 25.4. 
Installing Stratis Install the required packages for Stratis. Procedure Install packages that provide the Stratis service and command-line utilities: Verify that the stratisd service is enabled: 25.5. Creating an unencrypted Stratis pool You can create an unencrypted Stratis pool from one or more block devices. Prerequisites Stratis is installed. For more information, see Installing Stratis . The stratisd service is running. The block devices on which you are creating a Stratis pool are not in use and are not mounted. Each block device on which you are creating a Stratis pool is at least 1 GB. On the IBM Z architecture, the /dev/dasd* block devices must be partitioned. Use the partition device for creating the Stratis pool. For information about partitioning DASD devices, see Configuring a Linux instance on IBM Z . Note You cannot encrypt an unencrypted Stratis pool. Procedure Erase any file system, partition table, or RAID signatures that exist on each block device that you want to use in the Stratis pool: where block-device is the path to the block device; for example, /dev/sdb . Create the new unencrypted Stratis pool on the selected block device: where block-device is the path to an empty or wiped block device. You can also specify multiple block devices on a single line by using the following command: Verification Verify that the new Stratis pool was created: 25.6. Creating an unencrypted Stratis pool by using the web console You can use the web console to create an unencrypted Stratis pool from one or more block devices. Prerequisites You have installed the RHEL 8 web console. You have enabled the cockpit service. Your user account is allowed to log in to the web console. For instructions, see Installing and enabling the web console . The stratisd service is running. The block devices on which you are creating a Stratis pool are not in use and are not mounted. Each block device on which you are creating a Stratis pool is at least 1 GB. Note You cannot encrypt an unencrypted Stratis pool after it is created. Procedure Log in to the RHEL 8 web console. For details, see Logging in to the web console . Click Storage . In the Storage table, click the menu button and select Create Stratis pool . In the Name field, enter a name for the Stratis pool. Select the Block devices from which you want to create the Stratis pool. Optional: If you want to specify the maximum size for each file system that is created in pool, select Manage filesystem sizes . Click Create . Verification Go to the Storage section and verify that you can see the new Stratis pool in the Devices table. 25.7. Creating an encrypted Stratis pool To secure your data, you can create an encrypted Stratis pool from one or more block devices. When you create an encrypted Stratis pool, the kernel keyring is used as the primary encryption mechanism. After subsequent system reboots this kernel keyring is used to unlock the encrypted Stratis pool. When creating an encrypted Stratis pool from one or more block devices, note the following: Each block device is encrypted using the cryptsetup library and implements the LUKS2 format. Each Stratis pool can either have a unique key or share the same key with other pools. These keys are stored in the kernel keyring. The block devices that comprise a Stratis pool must be either all encrypted or all unencrypted. It is not possible to have both encrypted and unencrypted block devices in the same Stratis pool. 
Block devices added to the data tier of an encrypted Stratis pool are automatically encrypted. Prerequisites Stratis v2.1.0 or later is installed. For more information, see Installing Stratis . The stratisd service is running. The block devices on which you are creating a Stratis pool are not in use and are not mounted. The block devices on which you are creating a Stratis pool are at least 1GB in size each. On the IBM Z architecture, the /dev/dasd* block devices must be partitioned. Use the partition in the Stratis pool. For information about partitioning DASD devices, see Configuring a Linux instance on IBM Z . Procedure Erase any file system, partition table, or RAID signatures that exist on each block device that you want to use in the Stratis pool: where block-device is the path to the block device; for example, /dev/sdb . If you have not created a key set already, run the following command and follow the prompts to create a key set to use for the encryption. where key-description is a reference to the key that gets created in the kernel keyring. Create the encrypted Stratis pool and specify the key description to use for the encryption. You can also specify the key path using the --keyfile-path option instead of using the key-description option. where key-description References the key that exists in the kernel keyring, which you created in the step. my-pool Specifies the name of the new Stratis pool. block-device Specifies the path to an empty or wiped block device. You can also specify multiple block devices on a single line by using the following command: Verification Verify that the new Stratis pool was created: 25.8. Creating an encrypted Stratis pool by using the web console To secure your data, you can use the web console to create an encrypted Stratis pool from one or more block devices. When creating an encrypted Stratis pool from one or more block devices, note the following: Each block device is encrypted using the cryptsetup library and implements the LUKS2 format. Each Stratis pool can either have a unique key or share the same key with other pools. These keys are stored in the kernel keyring. The block devices that comprise a Stratis pool must be either all encrypted or all unencrypted. It is not possible to have both encrypted and unencrypted block devices in the same Stratis pool. Block devices added to the data tier of an encrypted Stratis pool are automatically encrypted. Prerequisites You have installed the RHEL 8 web console. You have enabled the cockpit service. Your user account is allowed to log in to the web console. For instructions, see Installing and enabling the web console . Stratis v2.1.0 or later is installed. The stratisd service is running. The block devices on which you are creating a Stratis pool are not in use and are not mounted. Each block device on which you are creating a Stratis pool is at least 1 GB. Procedure Log in to the RHEL 8 web console. For details, see Logging in to the web console . Click Storage . In the Storage table, click the menu button and select Create Stratis pool . In the Name field, enter a name for the Stratis pool. Select the Block devices from which you want to create the Stratis pool. Select the type of encryption, you can use a passphrase, a Tang keyserver, or both: Passphrase: Enter a passphrase. Confirm the passphrase. Tang keyserver: Enter the keyserver address. For more information, see Deploying a Tang server with SELinux in enforcing mode . 
Optional: If you want to specify the maximum size for each file system that is created in pool, select Manage filesystem sizes . Click Create . Verification Go to the Storage section and verify that you can see the new Stratis pool in the Devices table. 25.9. Renaming a Stratis pool by using the web console You can use the web console to rename an existing Stratis pool. Prerequisites You have installed the RHEL 8 web console. You have enabled the cockpit service. Your user account is allowed to log in to the web console. For instructions, see Installing and enabling the web console . Stratis is installed. The web console detects and installs Stratis by default. However, for manually installing Stratis, see Installing Stratis . The stratisd service is running. A Stratis pool is created. Procedure Log in to the RHEL 8 web console. For details, see Logging in to the web console . Click Storage . In the Storage table, click the Stratis pool you want to rename. On the Stratis pool page, click edit to the Name field. In the Rename Stratis pool dialog box, enter a new name. Click Rename . 25.10. Setting overprovisioning mode in Stratis file system A storage stack can reach a state of overprovision. If the file system size becomes bigger than the pool backing it, the pool becomes full. To prevent this, disable overprovisioning, which ensures that the size of all file systems on the pool does not exceed the available physical storage provided by the pool. If you use Stratis for critical applications or the root file system, this mode prevents certain failure cases. If you enable overprovisioning, an API signal notifies you when your storage has been fully allocated. The notification serves as a warning to the user to inform them that when all the remaining pool space fills up, Stratis has no space left to extend to. Prerequisites Stratis is installed. For more information, see Installing Stratis . Procedure To set up the pool correctly, you have two possibilities: Create a pool from one or more block devices: Set overprovisioning mode in the existing pool: If set to "yes", you enable overprovisioning to the pool. This means that the sum of the logical sizes of the Stratis file systems, supported by the pool, can exceed the amount of available data space. Verification Run the following to view the full list of Stratis pools: Check if there is an indication of the pool overprovisioning mode flag in the stratis pool list output. The " ~ " is a math symbol for "NOT", so ~Op means no-overprovisioning. Optional: Run the following to check overprovisioning on a specific pool: 25.11. Binding a Stratis pool to NBDE Binding an encrypted Stratis pool to Network Bound Disk Encryption (NBDE) requires a Tang server. When a system containing the Stratis pool reboots, it connects with the Tang server to automatically unlock the encrypted pool without you having to provide the kernel keyring description. Note Binding a Stratis pool to a supplementary Clevis encryption mechanism does not remove the primary kernel keyring encryption. Prerequisites Stratis v2.3.0 or later is installed. For more information, see Installing Stratis . The stratisd service is running. You have created an encrypted Stratis pool, and you have the key description of the key that was used for the encryption. For more information, see Creating an encrypted Stratis pool . You can connect to the Tang server. For more information, see Deploying a Tang server with SELinux in enforcing mode . 
Procedure Bind an encrypted Stratis pool to NBDE: where my-pool Specifies the name of the encrypted Stratis pool. tang-server Specifies the IP address or URL of the Tang server. Additional resources Configuring automated unlocking of encrypted volumes using policy-based decryption 25.12. Binding a Stratis pool to TPM When you bind an encrypted Stratis pool to the Trusted Platform Module (TPM) 2.0, the pool is automatically unlocked when the system containing the pool reboots, without you having to provide the kernel keyring description. Prerequisites Stratis v2.3.0 or later is installed. For more information, see Installing Stratis . The stratisd service is running. You have created an encrypted Stratis pool. For more information, see Creating an encrypted Stratis pool . Procedure Bind an encrypted Stratis pool to TPM: where my-pool Specifies the name of the encrypted Stratis pool. key-description References the key that exists in the kernel keyring, which was generated when you created the encrypted Stratis pool. 25.13. Unlocking an encrypted Stratis pool with kernel keyring After a system reboot, your encrypted Stratis pool or the block devices that comprise it might not be visible. You can unlock the pool using the kernel keyring that was used to encrypt the pool. Prerequisites Stratis v2.1.0 is installed. For more information, see Installing Stratis . The stratisd service is running. You have created an encrypted Stratis pool. For more information, see Creating an encrypted Stratis pool . Procedure Re-create the key set using the same key description that was used previously: where key-description references the key that exists in the kernel keyring, which was generated when you created the encrypted Stratis pool. Verify that the Stratis pool is visible: 25.14. Unbinding a Stratis pool from supplementary encryption When you unbind an encrypted Stratis pool from a supported supplementary encryption mechanism, the primary kernel keyring encryption remains in place. This is not true for pools that are created with Clevis encryption from the start. Prerequisites Stratis v2.3.0 or later is installed on your system. For more information, see Installing Stratis . You have created an encrypted Stratis pool. For more information, see Creating an encrypted Stratis pool . The encrypted Stratis pool is bound to a supported supplementary encryption mechanism. Procedure Unbind an encrypted Stratis pool from a supplementary encryption mechanism: where my-pool specifies the name of the Stratis pool you want to unbind. Additional resources Binding an encrypted Stratis pool to NBDE Binding an encrypted Stratis pool to TPM 25.15. Starting and stopping Stratis pool You can start and stop Stratis pools. This gives you the option to disassemble or bring down all the objects that were used to construct the pool, such as file systems, cache devices, thin pool, and encrypted devices. Note that if the pool actively uses any device or file system, it might issue a warning and not be able to stop. The stopped state is recorded in the pool's metadata. These pools do not start on the following boot until the pool receives a start command. Prerequisites Stratis is installed. For more information, see Installing Stratis . The stratisd service is running. You have created either an unencrypted or an encrypted Stratis pool. See Creating an unencrypted Stratis pool or Creating an encrypted Stratis pool . Procedure Use the following command to start the Stratis pool.
The --unlock-method option specifies the method of unlocking the pool if it is encrypted: Alternatively, use the following command to stop the Stratis pool. This tears down the storage stack but leaves all metadata intact: Verification Use the following command to list all pools on the system: Use the following command to list all not previously started pools. If the UUID is specified, the command prints detailed information about the pool corresponding to the UUID: 25.16. Creating a Stratis file system Create a Stratis file system on an existing Stratis pool. Prerequisites Stratis is installed. For more information, see Installing Stratis . The stratisd service is running. You have created a Stratis pool. See Creating an unencrypted Stratis pool or Creating an encrypted Stratis pool . Procedure Create a Stratis file system on a pool: where number-and-unit Specifies the size of a file system. The specification format must follow the standard size specification format for input, that is B, KiB, MiB, GiB, TiB or PiB. my-pool Specifies the name of the Stratis pool. my-fs Specifies an arbitrary name for the file system. For example: Example 25.1. Creating a Stratis file system Verification List file systems within the pool to check if the Stratis file system is created: Additional resources Mounting a Stratis file system 25.17. Creating a file system on a Stratis pool by using the web console You can use the web console to create a file system on an existing Stratis pool. Prerequisites You have installed the RHEL 8 web console. You have enabled the cockpit service. Your user account is allowed to log in to the web console. For instructions, see Installing and enabling the web console . The stratisd service is running. A Stratis pool is created. Procedure Log in to the RHEL 8 web console. For details, see Logging in to the web console . Click Storage . Click the Stratis pool on which you want to create a file system. On the Stratis pool page, scroll to the Stratis filesystems section and click Create new filesystem . Enter a name for the file system. Enter a mount point for the file system. Select the mount option. In the At boot drop-down menu, select when you want to mount your file system. Create the file system: If you want to create and mount the file system, click Create and mount . If you want to only create the file system, click Create only . Verification The new file system is visible on the Stratis pool page under the Stratis filesystems tab. 25.18. Mounting a Stratis file system Mount an existing Stratis file system to access the content. Prerequisites Stratis is installed. For more information, see Installing Stratis . The stratisd service is running. You have created a Stratis file system. For more information, see Creating a Stratis file system . Procedure To mount the file system, use the entries that Stratis maintains in the /dev/stratis/ directory: The file system is now mounted on the mount-point directory and ready to use. 25.19. Setting up non-root Stratis file systems in /etc/fstab using a systemd service You can manage setting up non-root file systems in /etc/fstab using a systemd service. Prerequisites Stratis is installed. See Installing Stratis . The stratisd service is running. You have created a Stratis file system. See Creating a Stratis file system . Procedure As root, edit the /etc/fstab file and add a line to set up non-root file systems: Additional resources Persistently mounting file systems | [
"yum install stratisd stratis-cli",
"systemctl enable --now stratisd",
"wipefs --all block-device",
"stratis pool create my-pool block-device",
"stratis pool create my-pool block-device-1 block-device-2",
"stratis pool list",
"wipefs --all block-device",
"stratis key set --capture-key key-description",
"stratis pool create --key-desc key-description my-pool block-device",
"stratis pool create --key-desc key-description my-pool block-device-1 block-device-2",
"stratis pool list",
"stratis pool create pool-name /dev/sdb",
"stratis pool overprovision pool-name <yes|no>",
"stratis pool list Name Total Physical Properties UUID Alerts pool-name 1.42 TiB / 23.96 MiB / 1.42 TiB ~Ca,~Cr,~Op cb7cb4d8-9322-4ac4-a6fd-eb7ae9e1e540",
"stratis pool overprovision pool-name yes stratis pool list Name Total Physical Properties UUID Alerts pool-name 1.42 TiB / 23.96 MiB / 1.42 TiB ~Ca,~Cr,~Op cb7cb4d8-9322-4ac4-a6fd-eb7ae9e1e540",
"stratis pool bind nbde --trust-url my-pool tang-server",
"stratis pool bind tpm my-pool key-description",
"stratis key set --capture-key key-description",
"stratis pool list",
"stratis pool unbind clevis my-pool",
"stratis pool start pool-uuid --unlock-method <keyring|clevis>",
"stratis pool stop pool-name",
"stratis pool list",
"stratis pool list --stopped --uuid UUID",
"stratis filesystem create --size number-and-unit my-pool my-fs",
"stratis filesystem create --size 10GiB pool1 filesystem1",
"stratis fs list my-pool",
"mount /dev/stratis/ my-pool / my-fs mount-point",
"/dev/stratis/ my-pool/my-fs mount-point xfs defaults,x-systemd.requires=stratis-fstab-setup@ pool-uuid .service,x-systemd.after=stratis-fstab-setup@ pool-uuid .service dump-value fsck_value"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/managing_file_systems/setting-up-stratis-file-systems_managing-file-systems |
Chapter 5. collectd plugins | Chapter 5. collectd plugins You can configure multiple collectd plugins depending on your Red Hat OpenStack Platform (RHOSP) 16.2 environment. The following list of plugins shows the available heat template ExtraConfig parameters that you can set to override the default values. Each section provides the general configuration name for the ExtraConfig option. For example, if there is a collectd plugin called example_plugin , the format of the plugin title is collectd::plugin::example_plugin . Reference the tables of available parameters for specific plugins, such as in the following example: ExtraConfig: collectd::plugin::example_plugin::<parameter>: <value> Reference the metrics tables of specific plugins for Prometheus or Grafana queries. collectd::plugin::aggregation You can aggregate several values into one with the aggregation plugin. Use the aggregation functions such as sum , average , min , and max to calculate metrics, for example average and total CPU statistics. Table 5.1. aggregation parameters Parameter Type host String plugin String plugininstance Integer agg_type String typeinstance String sethost String setplugin String setplugininstance Integer settypeinstance String groupby Array of Strings calculatesum Boolean calculatenum Boolean calculateaverage Boolean calculateminimum Boolean calculatemaximum Boolean calculatestddev Boolean Example configuration: Deploy three aggregate configurations to create the following files: aggregator-calcCpuLoadAvg.conf : average CPU load for all CPU cores grouped by host and state aggregator-calcCpuLoadMinMax.conf : minimum and maximum CPU load groups by host and state aggregator-calcMemoryTotalMaxAvg.conf : maximum, average, and total for memory grouped by type The aggregation configurations use the default cpu and memory plugin configurations. parameter_defaults: CollectdExtraPlugins: - aggregation ExtraConfig: collectd::plugin::aggregation::aggregators: calcCpuLoadAvg: plugin: "cpu" agg_type: "cpu" groupby: - "Host" - "TypeInstance" calculateaverage: True calcCpuLoadMinMax: plugin: "cpu" agg_type: "cpu" groupby: - "Host" - "TypeInstance" calculatemaximum: True calculateminimum: True calcMemoryTotalMaxAvg: plugin: "memory" agg_type: "memory" groupby: - "TypeInstance" calculatemaximum: True calculateaverage: True calculatesum: True collectd::plugin::amqp1 Use the amqp1 plugin to write values to an amqp1 message bus, for example, AMQ Interconnect. Table 5.2. amqp1 parameters Parameter Type manage_package Boolean transport String host String port Integer user String password String address String instances Hash retry_delay Integer send_queue_limit Integer interval Integer Use the send_queue_limit parameter to limit the length of the outgoing metrics queue. Note If there is no AMQP1 connection, the plugin continues to queue messages to send, which can result in unbounded memory consumption. The default value is 0, which disables the outgoing metrics queue. Increase the value of the send_queue_limit parameter if metrics are missing. Example configuration: parameter_defaults: CollectdExtraPlugins: - amqp1 ExtraConfig: collectd::plugin::amqp1::send_queue_limit: 5000 collectd::plugin::apache Use the apache plugin to collect Apache data from the mod_status plugin that is provided by the Apache web server. Each instance provided has a per- interval value specified in seconds. If you provide the timeout interval parameter for an instance, the value is in milliseconds. Table 5.3. 
apache parameters Parameter Type instances Hash interval Integer manage-package Boolean package_install_options List Table 5.4. apache instances parameters Parameter Type url HTTP URL user String password String verifypeer Boolean verifyhost Boolean cacert AbsolutePath sslciphers String timeout Integer Example configuration: In this example, the instance name is localhost , which connects to the Apache web server at http://10.0.0.111/mod_status?auto . You must append ?auto to the end of the URL to prevent the status page returning as a type that is incompatible with the plugin. parameter_defaults: CollectdExtraPlugins: - apache ExtraConfig: collectd::plugin::apache::instances: localhost: url: "http://10.0.0.111/mod_status?auto" Additional resources For more information about configuring the apache plugin, see apache . collectd::plugin::battery Use the battery plugin to report the remaining capacity, power, or voltage of laptop batteries. Table 5.5. battery parameters Parameter Type values_percentage Boolean report_degraded Boolean query_state_fs Boolean interval Integer Additional resources For more information about configuring the battery plugin, see battery . collectd::plugin::bind Use the bind plugin to retrieve encoded statistics about queries and responses from a DNS server, and submit those values to collectd. Table 5.6. bind parameters Parameter Type url HTTP URL memorystats Boolean opcodes Boolean parsetime Boolean qtypes Boolean resolverstats Boolean serverstats Boolean zonemaintstats Boolean views Array interval Integer Table 5.7. bind views parameters Parameter Type name String qtypes Boolean resolverstats Boolean cacherrsets Boolean zones List of strings Example configuration: parameter_defaults: CollectdExtraPlugins: - bind ExtraConfig: collectd::plugins::bind: url: http://localhost:8053/ memorystats: true opcodes: true parsetime: false qtypes: true resolverstats: true serverstats: true zonemaintstats: true views: - name: internal qtypes: true resolverstats: true cacherrsets: true - name: external qtypes: true resolverstats: true cacherrsets: true zones: - "example.com/IN" collectd::plugin::ceph Use the ceph plugin to gather data from ceph daemons. Table 5.8. ceph parameters Parameter Type daemons Array longrunavglatency Boolean convertspecialmetrictypes Boolean package_name String Example configuration: parameter_defaults: ExtraConfig: collectd::plugin::ceph::daemons: - ceph-osd.0 - ceph-osd.1 - ceph-osd.2 - ceph-osd.3 - ceph-osd.4 Note If an Object Storage Daemon (OSD) is not on every node, you must list the OSDs. When you deploy collectd, the ceph plugin is added to the Ceph nodes. Do not add the ceph plugin on Ceph nodes to CollectdExtraPlugins because this results in a deployment failure. Additional resources For more information about configuring the ceph plugin, see ceph . collectd::plugins::cgroups Use the cgroups plugin to collect information for processes in a cgroup. Table 5.9. cgroups parameters Parameter Type ignore_selected Boolean interval Integer cgroups List Additional resources For more information about configuring the cgroups plugin, see cgroups . collectd::plugin::connectivity Use the connectivity plugin to monitor the state of network interfaces. Note If no interfaces are listed, all interfaces are monitored by default. Table 5.10. 
connectivity parameters Parameter Type interfaces Array Example configuration: parameter_defaults: ExtraConfig: collectd::plugin::connectivity::interfaces: - eth0 - eth1 Additional resources For more information about configuring the connectivity plugin, see connectivity . collectd::plugin::conntrack Use the conntrack plugin to track the number of entries in the Linux connection-tracking table. There are no parameters for this plugin. collectd::plugin::contextswitch Use the ContextSwitch plugin to collect the number of context switches that the system handles. The only parameter available is interval , which is a polling interval defined in seconds. Additional resources For more information about configuring the contextswitch plugin, see contextswitch . collectd::plugin::cpu Use the cpu plugin to monitor the time that the CPU spends in various states, for example, idle, executing user code, executing system code, waiting for IO-operations, and other states. The cpu plugin collects jiffies , not percentage values. The value of a jiffy depends on the clock frequency of your hardware platform, and therefore is not an absolute time interval unit. To report a percentage value, set the Boolean parameters reportbycpu and reportbystate to true , and then set the Boolean parameter valuespercentage to true. This plugin is enabled by default. Table 5.11. cpu metrics Name Description Query idle Amount of idle time collectd_cpu_total{...,type_instance='idle'} interrupt CPU blocked by interrupts collectd_cpu_total{...,type_instance='interrupt'} nice Amount of time running low priority processes collectd_cpu_total{...,type_instance='nice'} softirq Amount of cycles spent in servicing interrupt requests collectd_cpu_total{...,type_instance='waitirq'} steal The percentage of time a virtual CPU waits for a real CPU while the hypervisor is servicing another virtual processor collectd_cpu_total{...,type_instance='steal'} system Amount of time spent on system level (kernel) collectd_cpu_total{...,type_instance='system'} user Jiffies that user processes use collectd_cpu_total{...,type_instance='user'} wait CPU waiting on outstanding I/O request collectd_cpu_total{...,type_instance='wait'} Table 5.12. cpu parameters Parameter Type Defaults reportbystate Boolean true valuespercentage Boolean true reportbycpu Boolean true reportnumcpu Boolean false reportgueststate Boolean false subtractgueststate Boolean true interval Integer 120 Example configuration: parameter_defaults: CollectdExtraPlugins: - cpu ExtraConfig: collectd::plugin::cpu::reportbystate: true Additional resources For more information about configuring the cpu plugin, see cpu . collectd::plugin::cpufreq Use the cpufreq plugin to collect the current CPU frequency. There are no parameters for this plugin. collectd::plugin::csv Use the csv plugin to write values to a local file in CSV format. Table 5.13. csv parameters Parameter Type datadir String storerates Boolean interval Integer collectd::plugin::df Use the df plugin to collect disk space usage information for file systems. This plugin is enabled by default. Table 5.14. df metrics Name Description Query free Amount of free disk space collectd_df_df_complex{...,type_instance="free"} reserved Amount of reserved disk space collectd_df_df_complex{...,type_instance="reserved"} used Amount of used disk space collectd_df_df_complex{...,type_instance="used"} Table 5.15. 
df parameters Parameter Type Defaults devices Array [] fstypes Array ['xfs'] ignoreselected Boolean true mountpoints Array [] reportbydevice Boolean true reportinodes Boolean true reportreserved Boolean true valuesabsolute Boolean true valuespercentage Boolean false Example configuration: parameter_defaults: ExtraConfig: collectd::plugin::df::fstypes: ['tmpfs','xfs'] Additional resources For more information about configuring the df plugin, see df . collectd::plugin::disk Use the disk plugin to collect performance statistics of hard disks and, if supported, partitions. Note The disk plugin monitors all disks by default. You can use the ignoreselected parameter to ignore a list of disks. The example configuration ignores the sda , sdb , and sdc disks, and monitors all disks not included in the list. This plugin is enabled by default. Table 5.16. disk parameters Parameter Type Defaults disks Array [] ignoreselected Boolean false udevnameattr String <undefined> Table 5.17. disk metrics Name Description merged The number of queued operations that can be merged together, for example, one physical disk access served two or more logical operations. time The average time an I/O-operation takes to complete. The values might not be accurate. io_time Time spent doing I/Os (ms). You can use this metric as a device load percentage. A value of 1 second matches 100% of load. weighted_io_time Measure of both I/O completion time and the backlog that might be accumulating. pending_operations Shows queue size of pending I/O operations. Example configuration: parameter_defaults: ExtraConfig: collectd::plugin::disk::disks: ['sda', 'sdb', 'sdc'] collectd::plugin::disk::ignoreselected: true Additional resources For more information about configuring the disk plugin, see disk . collectd::plugin::hugepages Use the hugepages plugin to collect hugepages information. Table 5.18. hugepages parameters Parameter Type Defaults report_per_node_hp Boolean true report_root_hp Boolean true values_pages Boolean true values_bytes Boolean false values_percentage Boolean false Example configuration: parameter_defaults: ExtraConfig: collectd::plugin::hugepages::values_percentage: true Additional resources For more information about configuring the hugepages plugin, see hugepages . collectd::plugin::interface Use the interface plugin to measure interface traffic in octets, packets per second, and error rate per second. Table 5.19. interface parameters Parameter Type Default interfaces Array [] ignoreselected Boolean false reportinactive Boolean true Example configuration: parameter_defaults: ExtraConfig: collectd::plugin::interface::interfaces: - lo collectd::plugin::interface::ignoreselected: true Additional resources For more information about configuring the interfaces plugin, see interfaces . collectd::plugin::load Use the load plugin to collect the system load and an overview of the system use. Table 5.20. plugin parameters Parameter Type Default report_relative Boolean true Example configuration: parameter_defaults: ExtraConfig: collectd::plugin::load::report_relative: false Additional resources For more information about configuring the load plugin, see load . collectd::plugin::mcelog Use the mcelog plugin to send notifications and statistics that are relevant to Machine Check Exceptions when they occur. Configure mcelog to run in daemon mode and enable logging capabilities. Table 5.21. 
mcelog parameters Parameter Type Mcelogfile String Memory Hash { mcelogclientsocket[string], persistentnotification[boolean] } Example configuration: parameter_defaults: CollectdExtraPlugins: mcelog CollectdEnableMcelog: true Additional resources For more information about configuring the mcelog plugin, see mcelog . collectd::plugin::memcached Use the memcached plugin to retrieve information about memcached cache usage, memory, and other related information. Table 5.22. memcached parameters Parameter Type instances Hash interval Integer Example configuration: parameter_defaults: CollectdExtraPlugins: - memcached ExtraConfig: collectd::plugin::memcached::instances: local: host: "%{hiera('fqdn_canonical')}" port: 11211 Additional resources For more information about configuring the memcached plugin, see memcached . collectd::plugin::memory Use the memory plugin to retrieve information about the memory of the system. Table 5.23. memory parameters Parameter Type Defaults valuesabsolute Boolean true valuespercentage Boolean Example configuration: parameter_defaults: ExtraConfig: collectd::plugin::memory::valuesabsolute: true collectd::plugin::memory::valuespercentage: false Additional resources For more information about configuring the memory plugin, see memory . collectd::plugin::ntpd Use the ntpd plugin to query a local NTP server that is configured to allow access to statistics, and retrieve information about the configured parameters and the time sync status. Table 5.24. ntpd parameters Parameter Type host Hostname port Port number (Integer) reverselookups Boolean includeunitid Boolean interval Integer Example configuration: parameter_defaults: CollectdExtraPlugins: - ntpd ExtraConfig: collectd::plugin::ntpd::host: localhost collectd::plugin::ntpd::port: 123 collectd::plugin::ntpd::reverselookups: false collectd::plugin::ntpd::includeunitid: false Additional resources For more information about configuring the ntpd plugin, see ntpd . collectd::plugin::ovs_stats Use the ovs_stats plugin to collect statistics of OVS-connected interfaces. The ovs_stats plugin uses the OVSDB management protocol (RFC7047) monitor mechanism to get statistics from OVSDB. Table 5.25. ovs_stats parameters Parameter Type address String bridges List port Integer socket String Example configuration: The following example shows how to enable the ovs_stats plugin. If you deploy your overcloud with OVS, you do not need to enable the ovs_stats plugin. parameter_defaults: CollectdExtraPlugins: - ovs_stats ExtraConfig: collectd::plugin::ovs_stats::socket: '/run/openvswitch/db.sock' Additional resources For more information about configuring the ovs_stats plugin, see ovs_stats . collectd::plugin::processes The processes plugin provides information about system processes. If you do not specify custom process matching, the plugin collects only the number of processes by state and the process fork rate. To collect more details about specific processes, you can use the process parameter to specify a process name or the process_match option to specify process names that match a regular expression. The statistics for a process_match output are grouped by process name. Table 5.26. plugin parameters Parameter Type Defaults processes Array <undefined> process_matches Array <undefined> collect_context_switch Boolean <undefined> collect_file_descriptor Boolean <undefined> collect_memory_maps Boolean <undefined> Additional resources For more information about configuring the processes plugin, see processes . 
collectd::plugin::smart Use the smart plugin to collect SMART (self-monitoring, analysis and reporting technology) information from physical disks on the node. You must also set the parameter CollectdContainerAdditionalCapAdd to CAP_SYS_RAWIO to allow the smart plugin to read SMART telemetry. If you do not set the CollectdContainerAdditionalCapAdd parameter, the following message is written to the collectd error logs: smart plugin: Running collectd as root, but the CAP_SYS_RAWIO capability is missing. The plugin's read function will probably fail. Is your init system dropping capabilities? . Table 5.27. smart parameters Parameter Type disks Array ignoreselected Boolean interval Integer Example configuration: parameter_defaults: CollectdExtraPlugins: - smart CollectdContainerAdditionalCapAdd: "CAP_SYS_RAWIO" Additional information For more information about configuring the smart plugin, see smart . collectd::plugin::swap Use the swap plugin to collect information about the available and used swap space. Table 5.28. swap parameters Parameter Type reportbydevice Boolean reportbytes Boolean valuesabsolute Boolean valuespercentage Boolean reportio Boolean Example configuration: parameter_defaults: CollectdExtraPlugins: - swap ExtraConfig: collectd::plugin::swap::reportbydevice: false collectd::plugin::swap::reportbytes: true collectd::plugin::swap::valuesabsolute: true collectd::plugin::swap::valuespercentage: false collectd::plugin::swap::reportio: true collectd::plugin::tcpconns Use the tcpconns plugin to collect information about the number of TCP connections inbound or outbound from the configured port. The local port configuration represents ingress connections. The remote port configuration represents egress connections. Table 5.29. tcpconns parameters Parameter Type localports Port (Array) remoteports Port (Array) listening Boolean allportssummary Boolean Example configuration: parameter_defaults: CollectdExtraPlugins: - tcpconns ExtraConfig: collectd::plugin::tcpconns::listening: false collectd::plugin::tcpconns::localports: - 22 collectd::plugin::tcpconns::remoteports: - 22 collectd::plugin::thermal Use the thermal plugin to retrieve ACPI thermal zone information. Table 5.30. thermal parameters Parameter Type devices Array ignoreselected Boolean interval Integer Example configuration: parameter_defaults: CollectdExtraPlugins: - thermal collectd::plugin::uptime Use the uptime plugin to collect information about system uptime. Table 5.31. uptime parameters Parameter Type interval Integer collectd::plugin::virt Use the virt plugin to collect CPU, disk, network load, and other metrics through the libvirt API for virtual machines on the host. This plugin is enabled by default on compute hosts. Table 5.32. virt parameters Parameter Type connection String refresh_interval Hash domain String block_device String interface_device String ignore_selected Boolean plugin_instance_format String hostname_format String interface_format String extra_stats String Example configuration: ExtraConfig: collectd::plugin::virt::hostname_format: "name uuid hostname" collectd::plugin::virt::plugin_instance_format: metadata Additional resources For more information about configuring the virt plugin, see virt . collectd::plugin::vmem Use the vmem plugin to collect information about virtual memory from the kernel subsystem. Table 5.33. 
vmem parameters Parameter Type verbose Boolean interval Integer Example configuration: parameter_defaults: CollectdExtraPlugins: - vmem ExtraConfig: collectd::plugin::vmem::verbose: true collectd::plugin::write_http Use the write_http output plugin to submit values to an HTTP server by using POST requests and encoding metrics with JSON, or by using the PUTVAL command. Table 5.34. write_http parameters Parameter Type ensure Enum[ present , absent ] nodes Hash[String, Hash[String, Scalar]] urls Hash[String, Hash[String, Scalar]] manage_package Boolean Example configuration: parameter_defaults: CollectdExtraPlugins: - write_http ExtraConfig: collectd::plugin::write_http::nodes: collectd: url: "http://collectd.tld.org/collectd" metrics: true header: "X-Custom-Header: custom_value" Additional resources For more information about configuring the write_http plugin, see write_http . collectd::plugin::write_kafka Use the write_kafka plugin to send values to a Kafka topic. Configure the write_kafka plugin with one or more topic blocks. For each topic block, you must specify a unique name and one Kafka producer. You can use the following per-topic parameters inside the topic block: Table 5.35. write_kafka parameters Parameter Type kafka_hosts Array[String] topics Hash properties Hash meta Hash Example configuration: parameter_defaults: CollectdExtraPlugins: - write_kafka ExtraConfig: collectd::plugin::write_kafka::kafka_hosts: - remote.tld:9092 collectd::plugin::write_kafka::topics: mytopic: format: JSON Additional resources: For more information about how to configure the write_kafka plugin, see write_kafka . | [
"ExtraConfig: collectd::plugin::example_plugin::<parameter>: <value>",
"parameter_defaults: CollectdExtraPlugins: - aggregation ExtraConfig: collectd::plugin::aggregation::aggregators: calcCpuLoadAvg: plugin: \"cpu\" agg_type: \"cpu\" groupby: - \"Host\" - \"TypeInstance\" calculateaverage: True calcCpuLoadMinMax: plugin: \"cpu\" agg_type: \"cpu\" groupby: - \"Host\" - \"TypeInstance\" calculatemaximum: True calculateminimum: True calcMemoryTotalMaxAvg: plugin: \"memory\" agg_type: \"memory\" groupby: - \"TypeInstance\" calculatemaximum: True calculateaverage: True calculatesum: True",
"parameter_defaults: CollectdExtraPlugins: - amqp1 ExtraConfig: collectd::plugin::amqp1::send_queue_limit: 5000",
"parameter_defaults: CollectdExtraPlugins: - apache ExtraConfig: collectd::plugin::apache::instances: localhost: url: \"http://10.0.0.111/mod_status?auto\"",
"parameter_defaults: CollectdExtraPlugins: - bind ExtraConfig: collectd::plugins::bind: url: http://localhost:8053/ memorystats: true opcodes: true parsetime: false qtypes: true resolverstats: true serverstats: true zonemaintstats: true views: - name: internal qtypes: true resolverstats: true cacherrsets: true - name: external qtypes: true resolverstats: true cacherrsets: true zones: - \"example.com/IN\"",
"parameter_defaults: ExtraConfig: collectd::plugin::ceph::daemons: - ceph-osd.0 - ceph-osd.1 - ceph-osd.2 - ceph-osd.3 - ceph-osd.4",
"parameter_defaults: ExtraConfig: collectd::plugin::connectivity::interfaces: - eth0 - eth1",
"parameter_defaults: CollectdExtraPlugins: - cpu ExtraConfig: collectd::plugin::cpu::reportbystate: true",
"parameter_defaults: ExtraConfig: collectd::plugin::df::fstypes: ['tmpfs','xfs']",
"parameter_defaults: ExtraConfig: collectd::plugin::disk::disks: ['sda', 'sdb', 'sdc'] collectd::plugin::disk::ignoreselected: true",
"This plugin is enabled by default.",
"parameter_defaults: ExtraConfig: collectd::plugin::hugepages::values_percentage: true",
"This plugin is enabled by default.",
"parameter_defaults: ExtraConfig: collectd::plugin::interface::interfaces: - lo collectd::plugin::interface::ignoreselected: true",
"This plugin is enabled by default.",
"parameter_defaults: ExtraConfig: collectd::plugin::load::report_relative: false",
"parameter_defaults: CollectdExtraPlugins: mcelog CollectdEnableMcelog: true",
"parameter_defaults: CollectdExtraPlugins: - memcached ExtraConfig: collectd::plugin::memcached::instances: local: host: \"%{hiera('fqdn_canonical')}\" port: 11211",
"This plugin is enabled by default.",
"parameter_defaults: ExtraConfig: collectd::plugin::memory::valuesabsolute: true collectd::plugin::memory::valuespercentage: false",
"parameter_defaults: CollectdExtraPlugins: - ntpd ExtraConfig: collectd::plugin::ntpd::host: localhost collectd::plugin::ntpd::port: 123 collectd::plugin::ntpd::reverselookups: false collectd::plugin::ntpd::includeunitid: false",
"parameter_defaults: CollectdExtraPlugins: - ovs_stats ExtraConfig: collectd::plugin::ovs_stats::socket: '/run/openvswitch/db.sock'",
"parameter_defaults: CollectdExtraPlugins: - smart CollectdContainerAdditionalCapAdd: \"CAP_SYS_RAWIO\"",
"parameter_defaults: CollectdExtraPlugins: - swap ExtraConfig: collectd::plugin::swap::reportbydevice: false collectd::plugin::swap::reportbytes: true collectd::plugin::swap::valuesabsolute: true collectd::plugin::swap::valuespercentage: false collectd::plugin::swap::reportio: true",
"parameter_defaults: CollectdExtraPlugins: - tcpconns ExtraConfig: collectd::plugin::tcpconns::listening: false collectd::plugin::tcpconns::localports: - 22 collectd::plugin::tcpconns::remoteports: - 22",
"parameter_defaults: CollectdExtraPlugins: - thermal",
"This plugin is enabled by default.",
"ExtraConfig: collectd::plugin::virt::hostname_format: \"name uuid hostname\" collectd::plugin::virt::plugin_instance_format: metadata",
"parameter_defaults: CollectdExtraPlugins: - vmem ExtraConfig: collectd::plugin::vmem::verbose: true",
"parameter_defaults: CollectdExtraPlugins: - write_http ExtraConfig: collectd::plugin::write_http::nodes: collectd: url: \"http://collectd.tld.org/collectd\" metrics: true header: \"X-Custom-Header: custom_value\"",
"parameter_defaults: CollectdExtraPlugins: - write_kafka ExtraConfig: collectd::plugin::write_kafka::kafka_hosts: - remote.tld:9092 collectd::plugin::write_kafka::topics: mytopic: format: JSON"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/operational_measurements/collectd-plugins_assembly |
Chapter 3. Distributed tracing and Service Mesh | Chapter 3. Distributed tracing and Service Mesh 3.1. Configuring Red Hat OpenShift distributed tracing platform with Service Mesh Integrating Red Hat OpenShift distributed tracing platform with Red Hat OpenShift Service Mesh is made up of two parts: Red Hat OpenShift distributed tracing platform (Tempo) and Red Hat OpenShift distributed tracing data collection. Red Hat OpenShift distributed tracing platform (Tempo) Provides distributed tracing to monitor and troubleshoot transactions in complex distributed systems. Tempo is based on the open source Grafana Tempo project. For more information about distributed tracing platform (Tempo), its features, installation, and configuration, see: Red Hat OpenShift distributed tracing platform (Tempo) . Red Hat OpenShift distributed tracing data collection Is based on the open source OpenTelemetry project , which aims to provide unified, standardized, and vendor-neutral telemetry data collection for cloud-native software. The Red Hat OpenShift distributed tracing data collection product provides support for deploying and managing the OpenTelemetry Collector and simplifying the workload instrumentation. The OpenTelemetry Collector can receive, process, and forward telemetry data in multiple formats, making it the ideal component for telemetry processing and interoperability between telemetry systems. The Collector provides a unified solution for collecting and processing metrics, traces, and logs. For more information about distributed tracing data collection, its features, installation, and configuration, see: Red Hat OpenShift distributed tracing data collection . 3.1.1. Configuring Red Hat OpenShift distributed tracing data collection with Service Mesh You can integrate Red Hat OpenShift Service Mesh with Red Hat OpenShift distributed tracing data collection to instrument, generate, collect, and export OpenTelemetry traces, metrics, and logs to analyze and understand your software's performance and behavior. Prerequisites Tempo Operator is installed. See: Installing the Tempo Operator . Red Hat OpenShift distributed tracing data collection Operator is installed. See: Installing the Red Hat build of OpenTelemetry . A TempoStack is installed and configured in a tempo namespace. See: Installing a TempoStack instance . An Istio instance is created. An Istio CNI instance is created. Procedure Navigate to the Red Hat OpenShift distributed tracing data collection Operator and install the OpenTelemetryCollector resource in the istio-system namespace: Example OpenTelemetry Collector in istio-system namespace kind: OpenTelemetryCollector apiVersion: opentelemetry.io/v1beta1 metadata: name: otel namespace: istio-system spec: observability: metrics: {} deploymentUpdateStrategy: {} config: exporters: otlp: endpoint: 'tempo-sample-distributor.tempo.svc.cluster.local:4317' tls: insecure: true receivers: otlp: protocols: grpc: endpoint: '0.0.0.0:4317' http: {} service: pipelines: traces: exporters: - otlp receivers: - otlp Configure Red Hat OpenShift Service Mesh to enable tracing, and define the distributed tracing data collection tracing providers in your meshConfig : Example enabling tracing and defining tracing providers apiVersion: sailoperator.io/v1alpha1 kind: Istio metadata: # ... name: default spec: namespace: istio-system # ...
values: meshConfig: enableTracing: true extensionProviders: - name: otel opentelemetry: port: 4317 service: otel-collector.istio-system.svc.cluster.local 1 1 The service field is the OpenTelemetry collector service in the istio-system namespace. Create an Istio Telemetry resource to enable tracers defined in spec.values.meshConfig.extensionProviders : Example Istio Telemetry resource apiVersion: telemetry.istio.io/v1 kind: Telemetry metadata: name: otel-demo namespace: istio-system spec: tracing: - providers: - name: otel randomSamplingPercentage: 100 Note Once you verify that you can see traces, lower the randomSamplingPercentage value or set it to default to reduce the number of requests. Create the bookinfo namespace by running the following command: USD oc create ns bookinfo Depending on the update strategy you are using, enable sidecar injection in the bookinfo namespace by running the appropriate commands: If you are using the InPlace update strategy, run the following command: USD oc label namespace bookinfo istio-injection=enabled If you are using the RevisionBased update strategy, run the following commands: Display the revision name by running the following command: USD oc get istiorevisions.sailoperator.io Example output NAME TYPE READY STATUS IN USE VERSION AGE default-v1-23-0 Local True Healthy True v1.23.0 3m33s Label the namespace with the revision name to enable sidecar injection by running the following command: USD oc label namespace bookinfo istio.io/rev=default-v1-23-0 Deploy the bookinfo application in the bookinfo namespace by running the following command: USD oc apply -f https://raw.githubusercontent.com/istio/istio/release-1.23/samples/bookinfo/platform/kube/bookinfo.yaml -n bookinfo Generate traffic to the productpage pod to generate traces: USD oc exec -it -n bookinfo deployments/productpage-v1 -c istio-proxy -- curl localhost:9080/productpage Validate the integration by running the following command to see traces in the UI: USD oc get routes -n tempo tempo-sample-query-frontend Note The OpenShift route for Jaeger UI must be created in the Tempo namespace. You can either manually create it for the tempo-sample-query-frontend service, or update the Tempo custom resource with .spec.template.queryFrontend.jaegerQuery.ingress.type: route . | [
"kind: OpenTelemetryCollector apiVersion: opentelemetry.io/v1beta1 metadata: name: otel namespace: istio-system spec: observability: metrics: {} deploymentUpdateStrategy: {} config: exporters: otlp: endpoint: 'tempo-sample-distributor.tempo.svc.cluster.local:4317' tls: insecure: true receivers: otlp: protocols: grpc: endpoint: '0.0.0.0:4317' http: {} service: pipelines: traces: exporters: - otlp receivers: - otlp",
"apiVersion: sailoperator.io/v1alpha1 kind: Istio metadata: name: default spec: namespace: istio-system values: meshConfig: enableTracing: true extensionProviders: - name: otel opentelemetry: port: 4317 service: otel-collector.istio-system.svc.cluster.local 1",
"apiVersion: telemetry.istio.io/v1 kind: Telemetry metadata: name: otel-demo namespace: istio-system spec: tracing: - providers: - name: otel randomSamplingPercentage: 100",
"oc create ns bookinfo",
"oc label namespace curl istio-injection=enabled",
"oc get istiorevisions.sailoperator.io",
"NAME TYPE READY STATUS IN USE VERSION AGE default-v1-23-0 Local True Healthy True v1.23.0 3m33s",
"oc label namespace curl istio.io/rev=default-v1-23-0",
"oc apply -f https://raw.githubusercontent.com/istio/istio/release-1.23/samples/bookinfo/platform/kube/bookinfo.yaml -n bookinfo",
"oc exec -it -n bookinfo deployments/productpage-v1 -c istio-proxy -- curl localhost:9080/productpage",
"oc get routes -n tempo tempo-sample-query-frontend"
]
| https://docs.redhat.com/en/documentation/red_hat_openshift_service_mesh/3.0.0tp1/html/observability/distributed-tracing-and-service-mesh |
Chapter 1. Support policy for Red Hat build of OpenJDK | Chapter 1. Support policy for Red Hat build of OpenJDK Red Hat will support select major versions of Red Hat build of OpenJDK in its products. For consistency, these are the same versions that Oracle designates as long-term support (LTS) for the Oracle JDK. A major version of Red Hat build of OpenJDK will be supported for a minimum of six years from the time that version is first introduced. For more information, see the OpenJDK Life Cycle and Support Policy . Note RHEL 6 reached the end of life in November 2020. Because of this, RHEL 6 is no longer a supported configuration for Red Hat build of OpenJDK. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_red_hat_build_of_openjdk_11.0.12/rn-openjdk-support-policy |
Chapter 48. Security | Chapter 48. Security The tang-nagios and clevis-udisk2 subpackages available as a Technology Preview The tang and clevis packages, which are part of the Red Hat Enterprise Linux Network Bound Disk Encryption (NBDE) project, also contain the tang-nagios and clevis-udisk2 subpackages. These subpackages are provided only as a Technology Preview. (BZ#1467338) USBGuard is now available for IBM Power as a Technology Preview The usbguard packages, which provide system protection against intrusive USB devices, are now available. With this update, the USBGuard software framework for the IBM Power architectures is provided as a Technology Preview. Full support is targeted for a later release of Red Hat Enterprise Linux. Note that USB is not supported on IBM z Systems, and the USBGuard framework cannot be provided on those systems. (BZ#1467369) | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.4_release_notes/technology_previews_security |
Chapter 8. Direct Migration Requirements | Chapter 8. Direct Migration Requirements Direct Migration is available with Migration Toolkit for Containers (MTC) 1.4.0 or later. There are two parts of the Direct Migration: Direct Volume Migration Direct Image Migration Direct Migration enables the migration of persistent volumes and internal images directly from the source cluster to the destination cluster without an intermediary replication repository (object storage). 8.1. Prerequisites Expose the internal registries for both clusters (source and destination) involved in the migration for external traffic. Ensure the remote source and destination clusters can communicate using OpenShift Container Platform routes on port 443. Configure the exposed registry route in the source and destination MTC clusters; do this by specifying the spec.exposedRegistryPath field or from the MTC UI. Note If the destination cluster is the same as the host cluster (where a migration controller exists), there is no need to configure the exposed registry route for that particular MTC cluster. The spec.exposedRegistryPath is required only for Direct Image Migration and not Direct Volume Migration. Ensure the two spec flags in MigPlan custom resource (CR) indirectImageMigration and indirectVolumeMigration are set to false for Direct Migration to be performed. The default value for these flags is false . The Direct Migration feature of MTC uses the Rsync utility. 8.2. Rsync configuration for direct volume migration Direct Volume Migration (DVM) in MTC uses Rsync to synchronize files between the source and the target persistent volumes (PVs), using a direct connection between the two PVs. Rsync is a command-line tool that allows you to transfer files and directories to local and remote destinations. The rsync command used by DVM is optimized for clusters functioning as expected. The MigrationController CR exposes the following variables to configure rsync_options in Direct Volume Migration: Variable Type Default value Description rsync_opt_bwlimit int Not set When set to a positive integer, --bwlimit=<int> option is added to Rsync command. rsync_opt_archive bool true Sets the --archive option in the Rsync command. rsync_opt_partial bool true Sets the --partial option in the Rsync command. rsync_opt_delete bool true Sets the --delete option in the Rsync command. rsync_opt_hardlinks bool true Sets the --hard-links option in the Rsync command. rsync_opt_info string COPY2 DEL2 REMOVE2 SKIP2 FLIST2 PROGRESS2 STATS2 Enables detailed logging in Rsync Pod. rsync_opt_extras string Empty Reserved for any other arbitrary options. The options set through the variables above are global for all migrations. The configuration will take effect for all future migrations as soon as the Operator successfully reconciles the MigrationController CR. Any ongoing migration can use the updated settings depending on which step it currently is in. Therefore, it is recommended that the settings be applied before running a migration. Users can always update the settings as needed. Use the rsync_opt_extras variable with caution. Any options passed using this variable are appended to the rsync command in addition to the default options. Ensure that you add white space when specifying more than one option. Any error in specifying options can lead to a failed migration. However, you can update the MigrationController CR as many times as you require for future migrations.
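A minimal sketch of how you might set these Rsync options in the MigrationController CR follows; the values shown (a --bwlimit of 10240 KB/s and the --delete option disabled) are illustrative assumptions, not recommended settings: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] rsync_opt_bwlimit: 10240 rsync_opt_delete: false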
Customizing the rsync_opt_info flag can adversely affect the progress reporting capabilities in MTC. However, removing progress reporting can have a performance advantage. This option should only be used when the performance of the Rsync operation is observed to be unacceptable. Note The default configuration used by DVM is tested in various environments. It is acceptable for most production use cases provided the clusters are healthy and performing well. These configuration variables should be used in case the default settings do not work and the Rsync operation fails. 8.2.1. Resource limit configurations for Rsync pods The MigrationController CR exposes the following variables to configure resource usage requirements and limits on Rsync: Variable Type Default Description source_rsync_pod_cpu_limits string 1 Source rsync pod's CPU limit source_rsync_pod_memory_limits string 1Gi Source rsync pod's memory limit source_rsync_pod_cpu_requests string 400m Source rsync pod's cpu requests source_rsync_pod_memory_requests string 1Gi Source rsync pod's memory requests target_rsync_pod_cpu_limits string 1 Target rsync pod's cpu limit target_rsync_pod_cpu_requests string 400m Target rsync pod's cpu requests target_rsync_pod_memory_limits string 1Gi Target rsync pod's memory limit target_rsync_pod_memory_requests string 1Gi Target rsync pod's memory requests 8.2.1.1. Supplemental group configuration for Rsync pods If Persistent Volume Claims (PVCs) use shared storage, access to the storage can be configured by adding supplemental groups to Rsync pod definitions in order for the pods to allow access: Variable Type Default Description src_supplemental_groups string Not Set Comma separated list of supplemental groups for source Rsync pods target_supplemental_groups string Not Set Comma separated list of supplemental groups for target Rsync Pods For example, the MigrationController CR can be updated to set the values: spec: src_supplemental_groups: "1000,2000" target_supplemental_groups: "2000,3000" 8.2.1.2. Rsync retry configuration With Migration Toolkit for Containers (MTC) 1.4.3 and later, the ability to retry a failed Rsync operation was introduced. By default, the migration controller retries Rsync until all of the data is successfully transferred from the source to the target volume or a specified number of retries is met. The default retry limit is set to 20 . For larger volumes, a limit of 20 retries may not be sufficient. You can increase the retry limit by using the following variable in the MigrationController CR: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] rsync_backoff_limit: 40 In this example, the retry limit is increased to 40 . 8.2.1.3. Running Rsync as either root or non-root OpenShift Container Platform environments have the PodSecurityAdmission controller enabled by default. This controller requires cluster administrators to enforce Pod Security Standards by means of namespace labels. All workloads in the cluster are expected to run one of the following Pod Security Standard levels: Privileged , Baseline or Restricted . Every cluster has its own default policy set. To guarantee successful data transfer in all environments, Migration Toolkit for Containers (MTC) 1.7.5 introduced changes in Rsync pods, including running Rsync pods as a non-root user by default. This ensures that data transfer is possible even for workloads that do not necessarily require higher privileges.
This change was made because it is best to run workloads with the lowest level of privileges possible. 8.2.1.3.1. Manually overriding default non-root operation for data transfer Although running Rsync pods as non-root user works in most cases, data transfer might fail when you run workloads as root user on the source side. MTC provides two ways to manually override default non-root operation for data transfer: Configure all migrations to run an Rsync pod as root on the destination cluster for all migrations. Run an Rsync pod as root on the destination cluster per migration. In both cases, you must set the following labels on the source side of any namespaces that are running workloads with higher privileges before migration: enforce , audit , and warn. To learn more about Pod Security Admission and setting values for labels, see Controlling pod security admission synchronization . 8.2.1.3.2. Configuring the MigrationController CR as root or non-root for all migrations By default, Rsync runs as non-root. On the destination cluster, you can configure the MigrationController CR to run Rsync as root. Procedure Configure the MigrationController CR as follows: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] migration_rsync_privileged: true This configuration will apply to all future migrations. 8.2.1.3.3. Configuring the MigMigration CR as root or non-root per migration On the destination cluster, you can configure the MigMigration CR to run Rsync as root or non-root, with the following non-root options: As a specific user ID (UID) As a specific group ID (GID) Procedure To run Rsync as root, configure the MigMigration CR according to this example: apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: name: migration-controller namespace: openshift-migration spec: [...] runAsRoot: true To run Rsync as a specific User ID (UID) or as a specific Group ID (GID), configure the MigMigration CR according to this example: apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: name: migration-controller namespace: openshift-migration spec: [...] runAsUser: 10010001 runAsGroup: 3 8.2.2. MigCluster Configuration For every MigCluster resource created in Migration Toolkit for Containers (MTC), a ConfigMap named migration-cluster-config is created in the Migration Operator's namespace on the cluster which MigCluster resource represents. The migration-cluster-config allows you to configure MigCluster specific values. The Migration Operator manages the migration-cluster-config . 
You can configure every value in the ConfigMap using the variables exposed in the MigrationController CR: Variable Type Required Description migration_stage_image_fqin string No Image to use for Stage Pods (applicable only to IndirectVolumeMigration) migration_registry_image_fqin string No Image to use for Migration Registry rsync_endpoint_type string No Type of endpoint for data transfer ( Route , ClusterIP , NodePort ) rsync_transfer_image_fqin string No Image to use for Rsync Pods (applicable only to DirectVolumeMigration) migration_rsync_privileged bool No Whether to run Rsync Pods as privileged or not migration_rsync_super_privileged bool No Whether to run Rsync Pods as super privileged containers ( spc_t SELinux context) or not cluster_subdomain string No Cluster's subdomain migration_registry_readiness_timeout int No Readiness timeout (in seconds) for Migration Registry Deployment migration_registry_liveness_timeout int No Liveness timeout (in seconds) for Migration Registry Deployment exposed_registry_validation_path string No Subpath to validate exposed registry in a MigCluster (for example /v2) 8.3. Direct migration known issues 8.3.1. Applying the Skip SELinux relabel workaround with spc_t automatically on workloads running on OpenShift Container Platform When attempting to migrate a namespace with Migration Toolkit for Containers (MTC) and a substantial volume associated with it, the rsync-server may become frozen without any further information to troubleshoot the issue. 8.3.1.1. Diagnosing the need for the Skip SELinux relabel workaround Search for an error of Unable to attach or mount volumes for pod... timed out waiting for the condition in the kubelet logs from the node where the rsync-server for the Direct Volume Migration (DVM) runs. Example kubelet log kubenswrapper[3879]: W0326 16:30:36.749224 3879 volume_linux.go:49] Setting volume ownership for /var/lib/kubelet/pods/8905d88e-6531-4d65-9c2a-eff11dc7eb29/volumes/kubernetes.io~csi/pvc-287d1988-3fd9-4517-a0c7-22539acd31e6/mount and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699 kubenswrapper[3879]: E0326 16:32:02.706363 3879 kubelet.go:1841] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[8db9d5b032dab17d4ea9495af12e085a], unattached volumes=[crane2-rsync-server-secret 8db9d5b032dab17d4ea9495af12e085a kube-api-access-dlbd2 crane2-stunnel-server-config crane2-stunnel-server-secret crane2-rsync-server-config]: timed out waiting for the condition" pod="caboodle-preprod/rsync-server" kubenswrapper[3879]: E0326 16:32:02.706496 3879 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[8db9d5b032dab17d4ea9495af12e085a], unattached volumes=[crane2-rsync-server-secret 8db9d5b032dab17d4ea9495af12e085a kube-api-access-dlbd2 crane2-stunnel-server-config crane2-stunnel-server-secret crane2-rsync-server-config]: timed out waiting for the condition" pod="caboodle-preprod/rsync-server" podUID=8905d88e-6531-4d65-9c2a-eff11dc7eb29 8.3.1.2. Resolving using the Skip SELinux relabel workaround To resolve this issue, set the migration_rsync_super_privileged parameter to true in both the source and destination MigClusters using the MigrationController custom resource (CR). 
Example MigrationController CR apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: migration_rsync_super_privileged: true 1 azure_resource_group: "" cluster_name: host mig_namespace_limit: "10" mig_pod_limit: "100" mig_pv_limit: "100" migration_controller: true migration_log_reader: true migration_ui: true migration_velero: true olm_managed: true restic_timeout: 1h version: 1.8.3 1 The value of the migration_rsync_super_privileged parameter indicates whether or not to run Rsync Pods as super privileged containers ( spc_t selinux context ). Valid settings are true or false . | [
"spec: src_supplemental_groups: \"1000,2000\" target_supplemental_groups: \"2000,3000\"",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] rsync_backoff_limit: 40",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] migration_rsync_privileged: true",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: name: migration-controller namespace: openshift-migration spec: [...] runAsRoot: true",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: name: migration-controller namespace: openshift-migration spec: [...] runAsUser: 10010001 runAsGroup: 3",
"kubenswrapper[3879]: W0326 16:30:36.749224 3879 volume_linux.go:49] Setting volume ownership for /var/lib/kubelet/pods/8905d88e-6531-4d65-9c2a-eff11dc7eb29/volumes/kubernetes.io~csi/pvc-287d1988-3fd9-4517-a0c7-22539acd31e6/mount and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699 kubenswrapper[3879]: E0326 16:32:02.706363 3879 kubelet.go:1841] \"Unable to attach or mount volumes for pod; skipping pod\" err=\"unmounted volumes=[8db9d5b032dab17d4ea9495af12e085a], unattached volumes=[crane2-rsync-server-secret 8db9d5b032dab17d4ea9495af12e085a kube-api-access-dlbd2 crane2-stunnel-server-config crane2-stunnel-server-secret crane2-rsync-server-config]: timed out waiting for the condition\" pod=\"caboodle-preprod/rsync-server\" kubenswrapper[3879]: E0326 16:32:02.706496 3879 pod_workers.go:965] \"Error syncing pod, skipping\" err=\"unmounted volumes=[8db9d5b032dab17d4ea9495af12e085a], unattached volumes=[crane2-rsync-server-secret 8db9d5b032dab17d4ea9495af12e085a kube-api-access-dlbd2 crane2-stunnel-server-config crane2-stunnel-server-secret crane2-rsync-server-config]: timed out waiting for the condition\" pod=\"caboodle-preprod/rsync-server\" podUID=8905d88e-6531-4d65-9c2a-eff11dc7eb29",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: migration_rsync_super_privileged: true 1 azure_resource_group: \"\" cluster_name: host mig_namespace_limit: \"10\" mig_pod_limit: \"100\" mig_pv_limit: \"100\" migration_controller: true migration_log_reader: true migration_ui: true migration_velero: true olm_managed: true restic_timeout: 1h version: 1.8.3"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/migration_toolkit_for_containers/mtc-direct-migration-requirements |
Chapter 1. Getting started using the RHEL web console | Chapter 1. Getting started using the RHEL web console The following sections aim to help you install the web console in Red Hat Enterprise Linux 7 and open the web console in your browser. You will also learn how to add remote hosts and monitor them in the web console. 1.1. Prerequisites Installed Red Hat Enterprise Linux 7.5 or newer. Enabled networking. Registered system with appropriate subscription attached. To obtain a subscription, see Managing subscriptions in the web console . 1.2. What is the RHEL web console The RHEL web console is a Red Hat Enterprise Linux 7 web-based interface designed for managing and monitoring your local system, as well as Linux servers located in your network environment. The RHEL web console enables you to perform a wide range of administration tasks, including: Managing services Managing user accounts Managing and monitoring system services Configuring network interfaces and firewall Reviewing system logs Managing virtual machines Creating diagnostic reports Setting kernel dump configuration Configuring SELinux Updating software Managing system subscriptions The RHEL web console uses the same system APIs as you would in a terminal, and actions performed in a terminal are immediately reflected in the RHEL web console. You can monitor the logs of systems in the network environment, as well as their performance, displayed as graphs. In addition, you can change the settings directly in the web console or through the terminal. 1.3. Installing the web console Red Hat Enterprise Linux 7 includes the RHEL web console installed by default in many installation variants. If this is not the case on your system, install the cockpit package and set up the cockpit.socket service to enable the RHEL web console. Procedure Install the cockpit package: Optionally, enable and start the cockpit.socket service, which runs a web server. This step is necessary if you need to connect to the system through the web console. To verify the installation and configuration, you can open the web console . If you are using a custom firewall profile, you need to add the cockpit service to firewalld to open port 9090 in the firewall: Additional resources For installing the RHEL web console on a different Linux distribution, see Running Cockpit 1.4. Logging in to the web console The following describes the first login to the RHEL web console using a system user name and password. Prerequisites Use one of the following browsers for opening the web console: Mozilla Firefox 52 and later Google Chrome 57 and later Microsoft Edge 16 and later System user account credentials The RHEL web console uses a specific PAM stack located at /etc/pam.d/cockpit . Authentication with PAM allows you to log in with the user name and password of any local account on the system. Procedure Open the web console in your web browser: Locally: https://localhost:9090 Remotely with the server's hostname: https://example.com:9090 Remotely with the server's IP address: https://192.0.2.2:9090 If you use a self-signed certificate, the browser issues a warning. Check the certificate and accept the security exception to proceed with the login. The console loads a certificate from the /etc/cockpit/ws-certs.d directory and uses the last file with a .cert extension in alphabetical order. To avoid having to grant security exceptions, install a certificate signed by a certificate authority (CA). In the login screen, enter your system user name and password.
Optionally, click the Reuse my password for privileged tasks option. If the user account you are using to log in has sudo privileges, this makes it possible to perform privileged tasks in the web console, such as installing software or configuring SELinux. Click Log In . After successful authentication, the RHEL web console interface opens. Additional resources To learn about SSL certificates, see Overview of Certificates and Security of the RHEL System Administrator's Guide. | [
"sudo yum install cockpit",
"sudo systemctl enable --now cockpit.socket",
"sudo firewall-cmd --add-service=cockpit --permanent firewall-cmd --reload"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/managing_systems_using_the_rhel_7_web_console/getting-started-with-the-rhel-web-console_system-management-using-the-rhel-7-web-console |
Chapter 26. Installing applications using Flatpak | Chapter 26. Installing applications using Flatpak You can install certain applications using the Flatpak package manager. The following sections describe how to search for, install, launch, and update Flatpak applications on the command line and in the graphical interface. Important Red Hat provides Flatpak applications only as a Technology Preview feature. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview . The Flatpak package manager itself is fully supported. 26.1. The Flatpak technology Flatpak provides a sandbox environment for application building, deployment, distribution, and installation. Applications that you launch using Flatpak have minimum access to the host system, which protects the system installation against third-party applications. Flatpak provides application stability regardless of the versions of libraries installed on the host system. Flatpak applications are distributed from repositories called remotes. Red Hat provides a remote with RHEL applications. Additionally, third-party remotes are available as well. Red Hat does not support applications from third-party remotes. 26.2. Setting up Flatpak This procedure installs the Flatpak package manager. Procedure Install the flatpak package: 26.3. Enabling the Red Hat Flatpak remote This procedure configures the Red Hat Container Catalog as a Flatpak remote on your system. Prerequisites You have an account on the Red Hat Customer Portal. Note For large-scale deployments where the users do not have Customer Portal accounts, Red Hat recommends using registry service accounts. For details, see Registry Service Accounts . Procedure Enable the rhel Flatpak remote: Log into the Red Hat Container Catalog: Provide the credentials to your Red Hat Customer Portal account or your registry service account tokens. By default, Podman saves the credentials only until you log out. Optional: Save your credentials permanently. Use one of the following options: Save the credentials for the current user: Save the credentials system-wide: For best practices, Red Hat recommends that you log into the Red Hat Container Catalog using registry account tokens when installing credentials system-wide. Verification List the enabled Flatpak remotes: 26.4. Searching for Flatpak applications This procedure searches for an application in the enabled Flatpak remotes on the command line. The search uses the application name and description. Prerequisites Flatpak is installed. The Red Hat Flatpak repository is enabled. Procedure Search for an application by name: For example, to search for the LibreOffice application, use: The search results include the ID of the application: 26.5. Installing Flatpak applications This procedure installs a selected application from the enabled Flatpak remotes on the command line. Prerequisites Flatpak is installed. The Red Hat Flatpak remote is enabled. Procedure Install an application from the rhel remote: Replace application-id with the ID of the application. For example: 26.6. 
Launching Flatpak applications This procedure launches an installed Flatpak application from the command line. Prerequisites Flatpak is installed. The selected Flatpak application is installed. Procedure Launch the application: Replace application-id with the ID of the application. For example: 26.7. Updating Flatpak applications This procedure updates one or more installed Flatpak applications to the most recent version in the corresponding Flatpak remote. Prerequisites Flatpak is installed. A Flatpak remote is enabled. Procedure Update one or more Flatpak applications: To update a specific Flatpak application, specify the application ID: To update all Flatpak applications, specify no application ID: 26.8. Installing Flatpak applications in the graphical interface This procedure searches for Flatpak applications using the Software application. Prerequisites Flatpak is installed. The Red Hat Flatpak remote is enabled. Procedure Open the Software application. Make sure that the Explore tab is active. Click the search button in the upper-left corner of the window. In the input box, type the name of the application that you want to install, such as LibreOffice . Select the correct application in the search results. If the application is listed several times, select the version where the Source field in the Details section reports flatpaks.redhat.io . Click the Install button. If Software asks you to log in, enter your Customer Portal credentials or your registry service account tokens. Wait for the installation process to complete. Optional: Click the Launch button to launch the application. 26.9. Updating Flatpak applications in the graphical interface This procedure updates one or more installed Flatpak applications using the Software application. Prerequisites Flatpak is installed. A Flatpak remote is enabled. Procedure Open the Software application. Select the Updates tab. In the Application Updates section, you can find all available updates to Flatpak applications. Update one or more applications: To apply all available updates, click the Update All button. To update only a specific application, click the Update button next to the application item. Optional: Enable automatic application updates. Click the menu button in the upper-right corner of the window. Select Update Preferences . Enable Automatic Updates . Flatpak applications now update automatically. | [
"yum install flatpak",
"flatpak remote-add --if-not-exists rhel https://flatpaks.redhat.io/rhel.flatpakrepo",
"podman login registry.redhat.io Username: your-user-name Password: your-password",
"cp USDXDG_RUNTIME_DIR/containers/auth.json USDHOME/.config/flatpak/oci-auth.json",
"cp USDXDG_RUNTIME_DIR/containers/auth.json /etc/flatpak/oci-auth.json",
"flatpak remotes Name Options rhel system,oci,no-gpg-verify",
"flatpak search application-name",
"flatpak search LibreOffice",
"Application ID Version Branch Remotes Description org.libreoffice.LibreOffice stable rhel The LibreOffice productivity suite",
"flatpak install rhel application-id",
"flatpak install rhel org.libreoffice.LibreOffice",
"flatpak run application-id",
"flatpak run org.libreoffice.LibreOffice",
"flatpak update application-id",
"flatpak update"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/using_the_desktop_environment_in_rhel_8/assembly_installing-applications-using-flatpak_using-the-desktop-environment-in-rhel-8 |
Chapter 9. Creating and managing topics | Chapter 9. Creating and managing topics Messages in Kafka are always sent to or received from a topic. This chapter describes how to create and manage Kafka topics. 9.1. Partitions and replicas A topic is always split into one or more partitions. Partitions act as shards. That means that every message sent by a producer is always written only into a single partition. Each partition can have one or more replicas, which will be stored on different brokers in the cluster. When creating a topic you can configure the number of replicas using the replication factor . Replication factor defines the number of copies which will be held within the cluster. One of the replicas for a given partition will be elected as a leader. The leader replica will be used by the producers to send new messages and by the consumers to consume messages. The other replicas will be follower replicas. The followers replicate the leader. If the leader fails, one of the in-sync followers will automatically become the new leader. Each server acts as a leader for some of its partitions and a follower for others so the load is well balanced within the cluster. Note The replication factor determines the number of replicas including the leader and the followers. For example, if you set the replication factor to 3 , then there will be one leader and two follower replicas. 9.2. Message retention The message retention policy defines how long the messages will be stored on the Kafka brokers. It can be defined based on time, partition size or both. For example, you can define that the messages should be kept: For 7 days Until the partition has 1GB of messages. Once the limit is reached, the oldest messages will be removed. For 7 days or until the 1GB limit has been reached. Whatever limit comes first will be used. Warning Kafka brokers store messages in log segments. The messages which are past their retention policy will be deleted only when a new log segment is created. New log segments are created when the log segment exceeds the configured log segment size. Additionally, users can request new segments to be created periodically. Kafka brokers support a compacting policy. For a topic with the compacted policy, the broker will always keep only the last message for each key. The older messages with the same key will be removed from the partition. Because compacting is a periodically executed action, it does not happen immediately when the new message with the same key is sent to the partition. Instead it might take some time until the older messages are removed. For more information about the message retention configuration options, see Section 9.5, "Topic configuration" . 9.3. Topic auto-creation By default, Kafka automatically creates a topic if a producer or consumer attempts to send or receive messages from a non-existent topic. This behavior is governed by the auto.create.topics.enable configuration property, which is set to true by default. For production environments, it is recommended to disable automatic topic creation. To do so, set auto.create.topics.enable to false in the Kafka configuration properties file: Disabling automatic topic creation 9.4. Topic deletion Kafka provides the option to prevent topic deletion, controlled by the delete.topic.enable property . By default, this property is set to true , allowing topics to be deleted. However, setting it to false in the Kafka configuration properties file will disable topic deletion. 
In this case, attempts to delete a topic will return a success status, but the topic itself will not be deleted. Disabling topic deletion 9.5. Topic configuration Auto-created topics will use the default topic configuration which can be specified in the broker properties file. However, when creating topics manually, their configuration can be specified at creation time. It is also possible to change a topic's configuration after it has been created. The main topic configuration options for manually created topics are: cleanup.policy Configures the retention policy to delete or compact . The delete policy will delete old records. The compact policy will enable log compaction. The default value is delete . For more information about log compaction, see Kafka website . compression.type Specifies the compression which is used for stored messages. Valid values are gzip , snappy , lz4 , uncompressed (no compression) and producer (retain the compression codec used by the producer). The default value is producer . max.message.bytes The maximum size of a batch of messages allowed by the Kafka broker, in bytes. The default value is 1000012 . min.insync.replicas The minimum number of replicas which must be in sync for a write to be considered successful. The default value is 1 . retention.ms Maximum number of milliseconds for which log segments will be retained. Log segments older than this value will be deleted. The default value is 604800000 (7 days). retention.bytes The maximum number of bytes a partition will retain. Once the partition size grows over this limit, the oldest log segments will be deleted. Value of -1 indicates no limit. The default value is -1 . segment.bytes The maximum file size of a single commit log segment file in bytes. When the segment reaches its size, a new segment will be started. The default value is 1073741824 bytes (1 gibibyte). The defaults for auto-created topics can be specified in the Kafka broker configuration using similar options: log.cleanup.policy See cleanup.policy above. compression.type See compression.type above. message.max.bytes See max.message.bytes above. min.insync.replicas See min.insync.replicas above. log.retention.ms See retention.ms above. log.retention.bytes See retention.bytes above. log.segment.bytes See segment.bytes above. default.replication.factor Default replication factor for automatically created topics. Default value is 1 . num.partitions Default number of partitions for automatically created topics. Default value is 1 . 9.6. Internal topics Internal topics are created and used internally by the Kafka brokers and clients. Kafka has several internal topics, two of which are used to store consumer offsets ( __consumer_offsets ) and transaction state ( __transaction_state ). __consumer_offsets and __transaction_state topics can be configured using dedicated Kafka broker configuration options starting with prefix offsets.topic. and transaction.state.log. . The most important configuration options are: offsets.topic.replication.factor Number of replicas for __consumer_offsets topic. The default value is 3 . offsets.topic.num.partitions Number of partitions for __consumer_offsets topic. The default value is 50 . transaction.state.log.replication.factor Number of replicas for __transaction_state topic. The default value is 3 . transaction.state.log.num.partitions Number of partitions for __transaction_state topic. The default value is 50 . 
transaction.state.log.min.isr Minimum number of replicas that must acknowledge a write to __transaction_state topic to be considered successful. If this minimum cannot be met, then the producer will fail with an exception. The default value is 2 . 9.7. Creating a topic Use the kafka-topics.sh tool to manage topics. kafka-topics.sh is part of the Streams for Apache Kafka distribution and is found in the bin directory. Prerequisites Streams for Apache Kafka is installed on each host , and the configuration files are available. Creating a topic Create a topic using the kafka-topics.sh utility and specify the following: Host and port of the Kafka broker in the --bootstrap-server option. The new topic to be created in the --create option. Topic name in the --topic option. The number of partitions in the --partitions option. Topic replication factor in the --replication-factor option. You can also override some of the default topic configuration options using the option --config . This option can be used multiple times to override different options. ./bin/kafka-topics.sh --bootstrap-server <broker_address> --create --topic <topic_name> --partitions <number_of_partitions> --replication-factor <replication_factor> --config <option_1>=<value_1> --config <option_2>=<value_2> Example of the command to create a topic named mytopic Verify that the topic exists using kafka-topics.sh . ./bin/kafka-topics.sh --bootstrap-server <broker_address> --describe --topic <topic_name> Example of the command to describe a topic named mytopic ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --describe --topic mytopic 9.8. Listing and describing topics The kafka-topics.sh tool can be used to list and describe topics. kafka-topics.sh is part of the Streams for Apache Kafka distribution and can be found in the bin directory. Prerequisites Streams for Apache Kafka is installed on each host , and the configuration files are available. Describing a topic Describe a topic using the kafka-topics.sh utility and specify the following: Host and port of the Kafka broker in the --bootstrap-server option. Use the --describe option to specify that you want to describe a topic. Topic name must be specified in the --topic option. When the --topic option is omitted, it describes all available topics. ./bin/kafka-topics.sh --bootstrap-server <broker_host>:<port> --describe --topic <topic_name> Example of the command to describe a topic named mytopic ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --describe --topic mytopic The command lists all partitions and replicas which belong to this topic. It also lists all topic configuration options. 9.9. Modifying a topic configuration The kafka-configs.sh tool can be used to modify topic configurations. kafka-configs.sh is part of the Streams for Apache Kafka distribution and can be found in the bin directory. Prerequisites Streams for Apache Kafka is installed on each host , and the configuration files are available. Modify topic configuration Use the kafka-configs.sh tool to get the current configuration. Specify the host and port of the Kafka broker in the --bootstrap-server option. Set the --entity-type as topic and --entity-name to the name of your topic. Use --describe option to get the current configuration. 
./bin/kafka-configs.sh --bootstrap-server <broker_host>:<port> --entity-type topics --entity-name <topic_name> --describe Example of the command to get configuration of a topic named mytopic ./bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type topics --entity-name mytopic --describe Use the kafka-configs.sh tool to change the configuration. Specify the host and port of the Kafka broker in the --bootstrap-server option. Set the --entity-type as topic and --entity-name to the name of your topic. Use --alter option to modify the current configuration. Specify the options you want to add or change in the option --add-config . ./bin/kafka-configs.sh --bootstrap-server <broker_host>:<port> --entity-type topics --entity-name <topic_name> --alter --add-config <option>=<value> Example of the command to change configuration of a topic named mytopic ./bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type topics --entity-name mytopic --alter --add-config min.insync.replicas=1 Use the kafka-configs.sh tool to delete an existing configuration option. Specify the host and port of the Kafka broker in the --bootstrap-server option. Set the --entity-type as topic and --entity-name to the name of your topic. Use --delete-config option to remove existing configuration option. Specify the options you want to remove in the option --remove-config . ./bin/kafka-configs.sh --bootstrap-server <broker_host>:<port> --entity-type topics --entity-name <topic_name> --alter --delete-config <option> Example of the command to change configuration of a topic named mytopic ./bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type topics --entity-name mytopic --alter --delete-config min.insync.replicas 9.10. Deleting a topic The kafka-topics.sh tool can be used to manage topics. kafka-topics.sh is part of the Streams for Apache Kafka distribution and can be found in the bin directory. Prerequisites Streams for Apache Kafka is installed on each host , and the configuration files are available. Deleting a topic Delete a topic using the kafka-topics.sh utility. Host and port of the Kafka broker in the --bootstrap-server option. Use the --delete option to specify that an existing topic should be deleted. Topic name must be specified in the --topic option. ./bin/kafka-topics.sh --bootstrap-server <broker_host>:<port> --delete --topic <topic_name> Example of the command to create a topic named mytopic ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --delete --topic mytopic Verify that the topic was deleted using kafka-topics.sh . ./bin/kafka-topics.sh --bootstrap-server <broker_host>:<port> --list Example of the command to list all topics ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --list | [
"auto.create.topics.enable=false",
"delete.topic.enable=false",
"./bin/kafka-topics.sh --bootstrap-server <broker_address> --create --topic <topic_name> --partitions <number_of_partitions> --replication-factor <replication_factor> --config <option_1>=<value_1> --config <option_2>=<value_2>",
"[source,shell,subs=+quotes] ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --create --topic mytopic --partitions 50 --replication-factor 3 --config cleanup.policy=compact --config min.insync.replicas=2",
"./bin/kafka-topics.sh --bootstrap-server <broker_address> --describe --topic <topic_name>",
"./bin/kafka-topics.sh --bootstrap-server localhost:9092 --describe --topic mytopic",
"./bin/kafka-topics.sh --bootstrap-server <broker_host>:<port> --describe --topic <topic_name>",
"./bin/kafka-topics.sh --bootstrap-server localhost:9092 --describe --topic mytopic",
"./bin/kafka-configs.sh --bootstrap-server <broker_host>:<port> --entity-type topics --entity-name <topic_name> --describe",
"./bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type topics --entity-name mytopic --describe",
"./bin/kafka-configs.sh --bootstrap-server <broker_host>:<port> --entity-type topics --entity-name <topic_name> --alter --add-config <option>=<value>",
"./bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type topics --entity-name mytopic --alter --add-config min.insync.replicas=1",
"./bin/kafka-configs.sh --bootstrap-server <broker_host>:<port> --entity-type topics --entity-name <topic_name> --alter --delete-config <option>",
"./bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type topics --entity-name mytopic --alter --delete-config min.insync.replicas",
"./bin/kafka-topics.sh --bootstrap-server <broker_host>:<port> --delete --topic <topic_name>",
"./bin/kafka-topics.sh --bootstrap-server localhost:9092 --delete --topic mytopic",
"./bin/kafka-topics.sh --bootstrap-server <broker_host>:<port> --list",
"./bin/kafka-topics.sh --bootstrap-server localhost:9092 --list"
]
| https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/using_streams_for_apache_kafka_on_rhel_in_kraft_mode/topics-str |
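The individual commands in the section above can be strung together into one short session. The following sketch is illustrative only: it reuses the localhost:9092 bootstrap address and the mytopic topic name from the examples, and the retention.ms value is an arbitrary placeholder.
./bin/kafka-topics.sh --bootstrap-server localhost:9092 --create --topic mytopic --partitions 3 --replication-factor 3    # create the topic
./bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type topics --entity-name mytopic --alter --add-config retention.ms=86400000    # override one topic option
./bin/kafka-topics.sh --bootstrap-server localhost:9092 --describe --topic mytopic    # confirm partitions, replicas, and configuration
./bin/kafka-topics.sh --bootstrap-server localhost:9092 --delete --topic mytopic    # remove the topic when it is no longer needed
./bin/kafka-topics.sh --bootstrap-server localhost:9092 --list    # verify that the topic is gone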
Provisioning APIs | Provisioning APIs OpenShift Container Platform 4.18 Reference guide for provisioning APIs Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/provisioning_apis/index |
Part IV. Managing the Subsystem Instances | Part IV. Managing the Subsystem Instances | null | https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide/managing_the_subsystem_instances |
15.26. Troubleshooting Replication-Related Problems | 15.26. Troubleshooting Replication-Related Problems This section lists some error messages, explains possible causes, and offers remedies. It is possible to get more debugging information for replication by setting the error log level to 8192 , which is replication debugging. See Section 21.3.7, "Configuring the Log Levels" . To change the error log level to 8192 : Because log level is additive, running the above command will result in excessive messages in the error log. So, use it judiciously. 15.26.1. Possible Replication-related Error Messages The following sections describe many common replication problems. agmt=%s (%s:%d) Replica has a different generation ID than the local data Reason: The consumer specified at the beginning of this message has not been (successfully) initialized yet, or it was initialized from a different root supplier. Impact: The local supplier will not replicate any data to the consumer. Remedy: Ignore this message if it occurs before the consumer is initialized. Otherwise, reinitialize the consumer if the message is persistent. In a multi-supplier environment, all the servers should be initialized only once from a root supplier, directly or indirectly. For example, M1 initializes M2 and M4, M2 then initializes M3, and so on. The important thing to note is that M2 must not start initializing M3 until M2's own initialization is done (check the total update status from the M1's web console or M1 or M2's error log). Also, M2 should not initialize M1 back. Warning: data for replica's was reloaded, and it no longer matches the data in the changelog. Recreating the changelog file. This could affect replication with replica's consumers, in which case the consumers should be reinitialized. Reason: This message may appear only when a supplier is restarted. It indicates that the supplier was unable to write the changelog or did not flush out its RUV at its last shutdown. The former is usually because of a disk-space problem, and the latter because a server crashed or was ungracefully shut down. Impact: The server will not be able to send the changes to a consumer if the consumer's maxcsn no longer exists in the server's changelog. Remedy: Check the disk space and the possible core file (under the server's logs directory). If this is a single-supplier replication, reinitialize the consumers. Otherwise, if the server later complains that it cannot locate some CSN for a consumer, see if the consumer can get the CSN from other suppliers. If not, reinitialize the consumer. agmt=%s(%s:%d): Can't locate CSN %s in the changelog (DB rc=%d). The consumer may need to be reinitialized. Reason: Most likely the changelog was recreated because of the disk is full or the server ungracefully shutdown. Impact: The local server will not be able to send any more change to that consumer until the consumer is reinitialized or gets the CSN from other suppliers. Remedy: If this is a single-supplier replication, reinitialize the consumers. Otherwise, see if the consumer can get the CSN from other suppliers. If not, reinitialize the consumer. Too much time skew Reason: The system clocks on the host machines are extremely out of sync. Impact: The system clock is used to generate a part of the CSN. In order to reflect the change sequence among multiple suppliers, suppliers would forward-adjust their local clocks based on the remote clocks of the other suppliers. 
Because the adjustment is limited to a certain amount, any difference that exceeds the permitted limit will cause the replication session to be aborted. Remedy: Synchronize the system clocks on the Directory Server host machines. If applicable, run the network time protocol ( ntp ) daemon on those hosts. agmt=%s(%s:%d): Warning: Unable to send endReplication extended operation (%s) Reason: The consumer is not responding. Impact: If the consumer recovers without being restarted, there is a chance that the replica on the consumer will be locked forever if it did not receive the release lock message from the supplier. Remedy: Watch if the consumer can receive any new change from any of its suppliers, or start the replication monitor, and see if all the suppliers of this consumer warn that the replica is busy. If the replica appears to be locked forever and no supplier can get in, restart the consumer. Changelog is getting too big. Reason: Either changelog purge is turned off, which is the default setting, or changelog purge is turned on, but some consumers are way behind the supplier. Remedy: By default, changelog purge is turned off. To turn it on from the command line, run ldapmodify as follows: 1d means 1 day. Other valid time units are s for seconds, m for minutes, h for hours, and w for weeks. A value of 0 turns off the purge. With changelog purge turned on, a purge thread that wakes up every five minutes will remove a change if its age is greater than the value of nsslapd-changelogmaxage and if it has been replayed to all the direct consumers of this supplier (supplier or hub). If it appears that the changelog is not purged when the purge threshold is reached, check the maximum time lag from the replication monitor among all the consumers. Irrespective of what the purge threshold is, no change will be purged before it is replayed by all the consumers. The Replication Monitor is not responding. Reason: The LDAPS port is specified in some replication agreement, but the certificate database is not specified or not accessible by the Replication Monitor. If there is no LDAPS port problem, one of the servers in the replication topology might hang. Remedy: Map the TLS port to a non-TLS port in the configuration file of the Replication Monitor. For example, if 636 is the TLS port and 389 is the non-TLS port, add the following line in the [connection] section: In the Replication Monitor, some consumers show just the header of the table. Reason: No change has originated from the corresponding suppliers. In this case, the MaxCSN : in the header part should be "None" . Remedy: There is nothing wrong if there is no change originated from a supplier. | [
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com config replace nsslapd-errorlog-level=8192",
"ldapmodify -D \"cn=Directory Manager\" -W -p 389 -h server.example.com -x dn: cn=changelog5,cn=config changetype: modify add: nsslapd-changelogmaxage nsslapd-changelogmaxage: 1d",
"*:636=389:*:password"
]
| https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/managing_replication-troubleshooting_replication_related_problems |
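Before turning on changelog purging with the ldapmodify command shown above, it can be useful to check whether nsslapd-changelogmaxage is already set. The following ldapsearch is a sketch only; it reuses the same host and bind DN as the other examples in this section.
ldapsearch -D "cn=Directory Manager" -W -p 389 -h server.example.com -x -b "cn=changelog5,cn=config" nsslapd-changelogmaxage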
13.4. Specifying the Location of a Driver Update Image File or a Driver Update Disk | 13.4. Specifying the Location of a Driver Update Image File or a Driver Update Disk If the installer detects more than one possible device that could hold a driver update, it prompts you to select the correct device. If you are not sure which option represents the device on which the driver update is stored, try the various options in order until you find the correct one. Figure 13.7. Selecting a driver disk source If the device that you choose contains no suitable update media, the installer will prompt you to make another choice. If you made a driver update disk on CD, DVD, or USB flash drive, the installer now loads the driver update. However, if the device that you selected is a type of device that could contain more than one partition (whether the device currently has more than one partition or not), the installer might prompt you to select the partition that holds the driver update. Figure 13.8. Selecting a driver disk partition The installer prompts you to specify which file contains the driver update: Figure 13.9. Selecting an ISO image Expect to see these screens if you stored the driver update on an internal hard drive or on a USB storage device. You should not see them if the driver update is on a CD or DVD. Regardless of whether you are providing a driver update in the form of an image file or with a driver update disk, the installer now copies the appropriate update files into a temporary storage area (located in system RAM and not on disk). The installer might ask whether you would like to use additional driver updates. If you select Yes , you can load additional updates in turn. When you have no further driver updates to load, select No . If you stored the driver update on removable media, you can now safely eject or disconnect the disk or device. The installer no longer requires the driver update, and you can re-use the media for other purposes. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/sect-Driver_updates-Specifying_the_location_of_a_driver_update_image_file_or_driver_update_disk-ppc |
24.3.2. Problems When You Try to Log In | 24.3.2. Problems When You Try to Log In If you did not create a user account in the firstboot screens, switch to a console by pressing Ctrl + Alt + F2 , log in as root and use the password you assigned to root. If you cannot remember your root password, boot your system into single user mode by appending the boot option single to the zipl boot menu or by any other means to append kernel command line options at IPL. Once you have booted into single user mode and have access to the # prompt, you must type passwd root , which allows you to enter a new password for root. At this point you can type shutdown -r now to reboot the system with the new root password. If you cannot remember your user account password, you must become root. To become root, type su - and enter your root password when prompted. Then, type passwd <username> . This allows you to enter a new password for the specified user account. If the graphical login screen does not appear, check your hardware for compatibility issues. The Hardware Compatibility List can be found at: | [
"https://hardware.redhat.com/"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/ch24s03s02 |
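The recovery steps above amount to a short console session. This is a sketch only; <username> is a placeholder for the account whose password you want to reset.
# After booting into single user mode (append the 'single' boot option at IPL):
passwd root          # set a new root password
shutdown -r now      # reboot with the new password
# From a console, to reset a regular user's password:
su -                 # become root, entering the root password when prompted
passwd <username>    # set a new password for that user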
Chapter 12. Managing AMQ Streams | Chapter 12. Managing AMQ Streams This chapter covers tasks to maintain a deployment of AMQ Streams. 12.1. Working with custom resources You can use oc commands to retrieve information and perform other operations on AMQ Streams custom resources. Using oc with the status subresource of a custom resource allows you to get the information about the resource. 12.1.1. Performing oc operations on custom resources Use oc commands, such as get , describe , edit , or delete , to perform operations on resource types. For example, oc get kafkatopics retrieves a list of all Kafka topics and oc get kafkas retrieves all deployed Kafka clusters. When referencing resource types, you can use both singular and plural names: oc get kafkas gets the same results as oc get kafka . You can also use the short name of the resource. Learning short names can save you time when managing AMQ Streams. The short name for Kafka is k , so you can also run oc get k to list all Kafka clusters. oc get k NAME DESIRED KAFKA REPLICAS DESIRED ZK REPLICAS my-cluster 3 3 Table 12.1. Long and short names for each AMQ Streams resource AMQ Streams resource Long name Short name Kafka kafka k Kafka Topic kafkatopic kt Kafka User kafkauser ku Kafka Connect kafkaconnect kc Kafka Connector kafkaconnector kctr Kafka Mirror Maker kafkamirrormaker kmm Kafka Mirror Maker 2 kafkamirrormaker2 kmm2 Kafka Bridge kafkabridge kb Kafka Rebalance kafkarebalance kr 12.1.1.1. Resource categories Categories of custom resources can also be used in oc commands. All AMQ Streams custom resources belong to the category strimzi , so you can use strimzi to get all the AMQ Streams resources with one command. For example, running oc get strimzi lists all AMQ Streams custom resources in a given namespace. oc get strimzi NAME DESIRED KAFKA REPLICAS DESIRED ZK REPLICAS kafka.kafka.strimzi.io/my-cluster 3 3 NAME PARTITIONS REPLICATION FACTOR kafkatopic.kafka.strimzi.io/kafka-apps 3 3 NAME AUTHENTICATION AUTHORIZATION kafkauser.kafka.strimzi.io/my-user tls simple The oc get strimzi -o name command returns all resource types and resource names. The -o name option fetches the output in the type/name format oc get strimzi -o name kafka.kafka.strimzi.io/my-cluster kafkatopic.kafka.strimzi.io/kafka-apps kafkauser.kafka.strimzi.io/my-user You can combine this strimzi command with other commands. For example, you can pass it into a oc delete command to delete all resources in a single command. oc delete USD(oc get strimzi -o name) kafka.kafka.strimzi.io "my-cluster" deleted kafkatopic.kafka.strimzi.io "kafka-apps" deleted kafkauser.kafka.strimzi.io "my-user" deleted Deleting all resources in a single operation might be useful, for example, when you are testing new AMQ Streams features. 12.1.1.2. Querying the status of sub-resources There are other values you can pass to the -o option. For example, by using -o yaml you get the output in YAML format. Using -o json will return it as JSON. You can see all the options in oc get --help . One of the most useful options is the JSONPath support , which allows you to pass JSONPath expressions to query the Kubernetes API. A JSONPath expression can extract or navigate specific parts of any resource. For example, you can use the JSONPath expression {.status.listeners[?(@.name=="tls")].bootstrapServers} to get the bootstrap address from the status of the Kafka custom resource and use it in your Kafka clients. 
Here, the command finds the bootstrapServers value of the listener named tls : oc get kafka my-cluster -o=jsonpath='{.status.listeners[?(@.name=="tls")].bootstrapServers}{"\n"}' my-cluster-kafka-bootstrap.myproject.svc:9093 By changing the name condition you can also get the address of the other Kafka listeners. You can use jsonpath to extract any other property or group of properties from any custom resource. 12.1.2. AMQ Streams custom resource status information Several resources have a status property, as described in the following table. Table 12.2. Custom resource status properties AMQ Streams resource Schema reference Publishes status information on... Kafka Section 13.2.55, " KafkaStatus schema reference" The Kafka cluster. KafkaConnect Section 13.2.85, " KafkaConnectStatus schema reference" The Kafka Connect cluster, if deployed. KafkaConnector Section 13.2.123, " KafkaConnectorStatus schema reference" KafkaConnector resources, if deployed. KafkaMirrorMaker Section 13.2.111, " KafkaMirrorMakerStatus schema reference" The Kafka MirrorMaker tool, if deployed. KafkaTopic Section 13.2.89, " KafkaTopicStatus schema reference" Kafka topics in your Kafka cluster. KafkaUser Section 13.2.105, " KafkaUserStatus schema reference" Kafka users in your Kafka cluster. KafkaBridge Section 13.2.120, " KafkaBridgeStatus schema reference" The AMQ Streams Kafka Bridge, if deployed. The status property of a resource provides information on the resource's: Current state , in the status.conditions property Last observed generation , in the status.observedGeneration property The status property also provides resource-specific information. For example: KafkaStatus provides information on listener addresses, and the id of the Kafka cluster. KafkaConnectStatus provides the REST API endpoint for Kafka Connect connectors. KafkaUserStatus provides the user name of the Kafka user and the Secret in which their credentials are stored. KafkaBridgeStatus provides the HTTP address at which external client applications can access the Bridge service. A resource's current state is useful for tracking progress related to the resource achieving its desired state , as defined by the spec property. The status conditions provide the time and reason the state of the resource changed and details of events preventing or delaying the operator from realizing the resource's desired state. The last observed generation is the generation of the resource that was last reconciled by the Cluster Operator. If the value of observedGeneration is different from the value of metadata.generation , the operator has not yet processed the latest update to the resource. If these values are the same, the status information reflects the most recent changes to the resource. AMQ Streams creates and maintains the status of custom resources, periodically evaluating the current state of the custom resource and updating its status accordingly. When performing an update on a custom resource using oc edit , for example, its status is not editable. Moreover, changing the status would not affect the configuration of the Kafka cluster. Here we see the status property specified for a Kafka custom resource. Kafka custom resource with status apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: spec: # ... 
status: conditions: 1 - lastTransitionTime: 2021-07-23T23:46:57+0000 status: "True" type: Ready 2 observedGeneration: 4 3 listeners: 4 - addresses: - host: my-cluster-kafka-bootstrap.myproject.svc port: 9092 type: plain - addresses: - host: my-cluster-kafka-bootstrap.myproject.svc port: 9093 certificates: - | -----BEGIN CERTIFICATE----- ... -----END CERTIFICATE----- type: tls - addresses: - host: 172.29.49.180 port: 9094 certificates: - | -----BEGIN CERTIFICATE----- ... -----END CERTIFICATE----- type: external clusterId: CLUSTER-ID 5 # ... 1 Status conditions describe criteria related to the status that cannot be deduced from the existing resource information, or are specific to the instance of a resource. 2 The Ready condition indicates whether the Cluster Operator currently considers the Kafka cluster able to handle traffic. 3 The observedGeneration indicates the generation of the Kafka custom resource that was last reconciled by the Cluster Operator. 4 The listeners describe the current Kafka bootstrap addresses by type. 5 The Kafka cluster id. Important The address in the custom resource status for external listeners with type nodeport is currently not supported. Note The Kafka bootstrap addresses listed in the status do not signify that those endpoints or the Kafka cluster is in a ready state. Accessing status information You can access status information for a resource from the command line. For more information, see Section 12.1.3, "Finding the status of a custom resource" . 12.1.3. Finding the status of a custom resource This procedure describes how to find the status of a custom resource. Prerequisites An OpenShift cluster. The Cluster Operator is running. Procedure Specify the custom resource and use the -o jsonpath option to apply a standard JSONPath expression to select the status property: oc get kafka <kafka_resource_name> -o jsonpath='{.status}' This expression returns all the status information for the specified custom resource. You can use dot notation, such as status.listeners or status.observedGeneration , to fine-tune the status information you wish to see. Additional resources Section 12.1.2, "AMQ Streams custom resource status information" For more information about using JSONPath, see JSONPath support . 12.2. Pausing reconciliation of custom resources Sometimes it is useful to pause the reconciliation of custom resources managed by AMQ Streams Operators, so that you can perform fixes or make updates. If reconciliations are paused, any changes made to custom resources are ignored by the Operators until the pause ends. If you want to pause reconciliation of a custom resource, set the strimzi.io/pause-reconciliation annotation to true in its configuration. This instructs the appropriate Operator to pause reconciliation of the custom resource. For example, you can apply the annotation to the KafkaConnect resource so that reconciliation by the Cluster Operator is paused. You can also create a custom resource with the pause annotation enabled. The custom resource is created, but it is ignored. Prerequisites The AMQ Streams Operator that manages the custom resource is running. 
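If you prefer to create a resource that starts out paused, as described above, you can set the annotation directly in the manifest. The following is a minimal sketch; it reuses the my-connect name from the procedure below and abbreviates the spec.
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect
  annotations:
    strimzi.io/pause-reconciliation: "true"
spec:
  # ... the usual KafkaConnect configuration ...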
Procedure Annotate the custom resource in OpenShift, setting pause-reconciliation to true : oc annotate KIND-OF-CUSTOM-RESOURCE NAME-OF-CUSTOM-RESOURCE strimzi.io/pause-reconciliation="true" For example, for the KafkaConnect custom resource: oc annotate KafkaConnect my-connect strimzi.io/pause-reconciliation="true" Check that the status conditions of the custom resource show a change to ReconciliationPaused : oc describe KIND-OF-CUSTOM-RESOURCE NAME-OF-CUSTOM-RESOURCE The type condition changes to ReconciliationPaused at the lastTransitionTime . Example custom resource with a paused reconciliation condition type apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: annotations: strimzi.io/pause-reconciliation: "true" strimzi.io/use-connector-resources: "true" creationTimestamp: 2021-03-12T10:47:11Z #... spec: # ... status: conditions: - lastTransitionTime: 2021-03-12T10:47:41.689249Z status: "True" type: ReconciliationPaused Resuming from pause To resume reconciliation, you can set the annotation to false , or remove the annotation. Additional resources Customizing OpenShift resources Finding the status of a custom resource 12.3. Evicting pods with AMQ Streams Drain Cleaner Kafka and ZooKeeper pods might be evicted during OpenShift upgrades, maintenance or pod rescheduling. If your Kafka broker and ZooKeeper pods were deployed by AMQ Streams, you can use the AMQ Streams Drain Cleaner tool to handle the pod evictions. You need to set the podDisruptionBudget for your Kafka deployment to 0 (zero) for the AMQ Streams Drain Cleaner to work. By deploying the AMQ Streams Drain Cleaner, you can use the Cluster Operator to move Kafka pods instead of OpenShift. The Cluster Operator ensures that topics are never under-replicated. Kafka can remain operational during the eviction process. The Cluster Operator waits for topics to synchronize, as the OpenShift worker nodes drain consecutively. An admission webhook notifies the AMQ Streams Drain Cleaner of pod eviction requests to the Kubernetes API. The AMQ Streams Drain Cleaner then adds a rolling update annotation to the pods to be drained. This informs the Cluster Operator to perform a rolling update of an evicted pod. Note If you are not using the AMQ Streams Drain Cleaner, you can add pod annotations to perform rolling updates manually . Webhook configuration The AMQ Streams Drain Cleaner deployment files include a ValidatingWebhookConfiguration resource file. The resource provides the configuration for registering the webhook with the Kubernetes API. The configuration defines the rules for the Kubernetes API to follow in the event of a pod eviction request. The rules specify that only CREATE operations related to pods/eviction sub-resources are intercepted. If these rules are met, the API forwards the notification. The clientConfig points to the AMQ Streams Drain Cleaner service and /drainer endpoint that exposes the webhook. The webhook uses a secure TLS connection, which requires authentication. The caBundle property specifies the certificate chain to validate HTTPS communication. Certificates are encoded in Base64. Webhook configuration for pod eviction notifications apiVersion: admissionregistration.k8s.io/v1 kind: ValidatingWebhookConfiguration # ... 
webhooks: - name: strimzi-drain-cleaner.strimzi.io rules: - apiGroups: [""] apiVersions: ["v1"] operations: ["CREATE"] resources: ["pods/eviction"] scope: "Namespaced" clientConfig: service: namespace: "strimzi-drain-cleaner" name: "strimzi-drain-cleaner" path: /drainer port: 443 caBundle: Cg== # ... 12.3.1. Prerequisites To deploy and use the AMQ Streams Drain Cleaner, you need to download the deployment files. The AMQ Streams Drain Cleaner deployment files are provided with the downloadable installation and example files from the AMQ Streams software downloads page . 12.3.2. Deploying the AMQ Streams Drain Cleaner Deploy the AMQ Streams Drain Cleaner to the OpenShift cluster where the Cluster Operator and Kafka cluster are running. Prerequisites You have downloaded the AMQ Streams Drain Cleaner deployment files . You have a highly available Kafka cluster deployment running with OpenShift worker nodes that you would like to update. Topics are replicated for high availability. Topic configuration specifies a replication factor of at least 3 and a minimum number of in-sync replicas to 1 less than the replication factor. Kafka topic replicated for high availability apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: name: my-topic labels: strimzi.io/cluster: my-cluster spec: partitions: 1 replicas: 3 config: # ... min.insync.replicas: 2 # ... Excluding ZooKeeper If you don't want to include ZooKeeper, you can remove the --zookeeper command option from the AMQ Streams Drain Cleaner Deployment configuration file. apiVersion: apps/v1 kind: Deployment spec: # ... template: spec: serviceAccountName: strimzi-drain-cleaner containers: - name: strimzi-drain-cleaner # ... command: - "/application" - "-Dquarkus.http.host=0.0.0.0" - "--kafka" - "--zookeeper" 1 # ... 1 Remove this option to exclude ZooKeeper from AMQ Streams Drain Cleaner operations. Procedure Configure a pod disruption budget of 0 (zero) for your Kafka deployment using template settings in the Kafka resource. Specifying a pod disruption budget apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster namespace: myproject spec: kafka: template: podDisruptionBudget: maxUnavailable: 0 # ... zookeeper: template: podDisruptionBudget: maxUnavailable: 0 # ... Reducing the maximum pod disruption budget to zero prevents OpenShift from automatically evicting the pods in case of voluntary disruptions, so pods must be evicted by the AMQ Streams Drain Cleaner. Add the same configuration for ZooKeeper if you want to use AMQ Streams Drain Cleaner to drain ZooKeeper nodes. Update the Kafka resource: oc apply -f <kafka-configuration-file> Deploy the AMQ Streams Drain Cleaner: oc apply -f ./install/drain-cleaner/openshift 12.3.3. Using the AMQ Streams Drain Cleaner Use the AMQ Streams Drain Cleaner in combination with the Cluster Operator to move Kafka broker or ZooKeeper pods from nodes that are being drained. When you run the AMQ Streams Drain Cleaner, it annotates pods with a rolling update pod annotation. The Cluster Operator performs rolling updates based on the annotation. Prerequisites You have deployed the AMQ Streams Drain Cleaner . Procedure Drain a specified OpenShift node hosting the Kafka broker or ZooKeeper pods. oc get nodes oc drain <name-of-node> --delete-emptydir-data --ignore-daemonsets --timeout=6000s --force Check the eviction events in the AMQ Streams Drain Cleaner log to verify that the pods have been annotated for restart. AMQ Streams Drain Cleaner log show annotations of pods INFO ... 
Received eviction webhook for Pod my-cluster-zookeeper-2 in namespace my-project INFO ... Pod my-cluster-zookeeper-2 in namespace my-project will be annotated for restart INFO ... Pod my-cluster-zookeeper-2 in namespace my-project found and annotated for restart INFO ... Received eviction webhook for Pod my-cluster-kafka-0 in namespace my-project INFO ... Pod my-cluster-kafka-0 in namespace my-project will be annotated for restart INFO ... Pod my-cluster-kafka-0 in namespace my-project found and annotated for restart Check the reconciliation events in the Cluster Operator log to verify the rolling updates. Cluster Operator log shows rolling updates INFO PodOperator:68 - Reconciliation #13(timer) Kafka(my-project/my-cluster): Rolling Pod my-cluster-zookeeper-2 INFO PodOperator:68 - Reconciliation #13(timer) Kafka(my-project/my-cluster): Rolling Pod my-cluster-kafka-0 INFO AbstractOperator:500 - Reconciliation #13(timer) Kafka(my-project/my-cluster): reconciled 12.4. Manually starting rolling updates of Kafka and ZooKeeper clusters AMQ Streams supports the use of annotations on resources to manually trigger a rolling update of Kafka and ZooKeeper clusters through the Cluster Operator. Rolling updates restart the pods of the resource with new ones. Manually performing a rolling update on a specific pod or set of pods is usually only required in exceptional circumstances. However, rather than deleting the pods directly, if you perform the rolling update through the Cluster Operator you ensure the following: The manual deletion of the pod does not conflict with simultaneous Cluster Operator operations, such as deleting other pods in parallel. The Cluster Operator logic handles the Kafka configuration specifications, such as the number of in-sync replicas. 12.4.1. Prerequisites To perform a manual rolling update, you need a running Cluster Operator and Kafka cluster. See the Deploying and Upgrading AMQ Streams on OpenShift guide for instructions on running a: Cluster Operator Kafka cluster 12.4.2. Performing a rolling update using a pod management annotation This procedure describes how to trigger a rolling update of a Kafka cluster or ZooKeeper cluster. To trigger the update, you add an annotation to the resource you are using to manage the pods running on the cluster. You annotate the StatefulSet or StrimziPodSet resource (if you enabled the UseStrimziPodSets feature gate ). Procedure Find the name of the resource that controls the Kafka or ZooKeeper pods you want to manually update. For example, if your Kafka cluster is named my-cluster , the corresponding names are my-cluster-kafka and my-cluster-zookeeper . Use oc annotate to annotate the appropriate resource in OpenShift. Annotating a StatefulSet oc annotate statefulset <cluster_name> -kafka strimzi.io/manual-rolling-update=true oc annotate statefulset <cluster_name> -zookeeper strimzi.io/manual-rolling-update=true Annotating a StrimziPodSet oc annotate strimzipodset <cluster_name> -kafka strimzi.io/manual-rolling-update=true oc annotate strimzipodset <cluster_name> -zookeeper strimzi.io/manual-rolling-update=true Wait for the reconciliation to occur (every two minutes by default). A rolling update of all pods within the annotated resource is triggered, as long as the annotation was detected by the reconciliation process. When the rolling update of all the pods is complete, the annotation is removed from the resource. 12.4.3. 
Performing a rolling update using a Pod annotation This procedure describes how to manually trigger a rolling update of an existing Kafka cluster or ZooKeeper cluster using an OpenShift Pod annotation. When multiple pods are annotated, consecutive rolling updates are performed within the same reconciliation run. Prerequisites You can perform a rolling update on a Kafka cluster regardless of the topic replication factor used. But for Kafka to stay operational during the update, you'll need the following: A highly available Kafka cluster deployment running with nodes that you wish to update. Topics replicated for high availability. Topic configuration specifies a replication factor of at least 3 and a minimum number of in-sync replicas to 1 less than the replication factor. Kafka topic replicated for high availability apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: name: my-topic labels: strimzi.io/cluster: my-cluster spec: partitions: 1 replicas: 3 config: # ... min.insync.replicas: 2 # ... Procedure Find the name of the Kafka or ZooKeeper Pod you want to manually update. For example, if your Kafka cluster is named my-cluster , the corresponding Pod names are my-cluster-kafka-index and my-cluster-zookeeper-index . The index starts at zero and ends at the total number of replicas minus one. Annotate the Pod resource in OpenShift. Use oc annotate : oc annotate pod cluster-name -kafka- index strimzi.io/manual-rolling-update=true oc annotate pod cluster-name -zookeeper- index strimzi.io/manual-rolling-update=true Wait for the reconciliation to occur (every two minutes by default). A rolling update of the annotated Pod is triggered, as long as the annotation was detected by the reconciliation process. When the rolling update of a pod is complete, the annotation is removed from the Pod . 12.5. Discovering services using labels and annotations Service discovery makes it easier for client applications running in the same OpenShift cluster as AMQ Streams to interact with a Kafka cluster. A service discovery label and annotation is generated for services used to access the Kafka cluster: Internal Kafka bootstrap service HTTP Bridge service The label helps to make the service discoverable, and the annotation provides connection details that a client application can use to make the connection. The service discovery label, strimzi.io/discovery , is set as true for the Service resources. The service discovery annotation has the same key, providing connection details in JSON format for each service. Example internal Kafka bootstrap service apiVersion: v1 kind: Service metadata: annotations: strimzi.io/discovery: |- [ { "port" : 9092, "tls" : false, "protocol" : "kafka", "auth" : "scram-sha-512" }, { "port" : 9093, "tls" : true, "protocol" : "kafka", "auth" : "tls" } ] labels: strimzi.io/cluster: my-cluster strimzi.io/discovery: "true" strimzi.io/kind: Kafka strimzi.io/name: my-cluster-kafka-bootstrap name: my-cluster-kafka-bootstrap spec: #... Example HTTP Bridge service apiVersion: v1 kind: Service metadata: annotations: strimzi.io/discovery: |- [ { "port" : 8080, "tls" : false, "auth" : "none", "protocol" : "http" } ] labels: strimzi.io/cluster: my-bridge strimzi.io/discovery: "true" strimzi.io/kind: KafkaBridge strimzi.io/name: my-bridge-bridge-service 12.5.1. Returning connection details on services You can find the services by specifying the discovery label when fetching services from the command line or a corresponding API call. 
oc get service -l strimzi.io/discovery=true The connection details are returned when retrieving the service discovery label. 12.6. Recovering a cluster from persistent volumes You can recover a Kafka cluster from persistent volumes (PVs) if they are still present. You might want to do this, for example, after: A namespace was deleted unintentionally A whole OpenShift cluster is lost, but the PVs remain in the infrastructure 12.6.1. Recovery from namespace deletion Recovery from namespace deletion is possible because of the relationship between persistent volumes and namespaces. A PersistentVolume (PV) is a storage resource that lives outside of a namespace. A PV is mounted into a Kafka pod using a PersistentVolumeClaim (PVC), which lives inside a namespace. The reclaim policy for a PV tells a cluster how to act when a namespace is deleted. If the reclaim policy is set as: Delete (default), PVs are deleted when PVCs are deleted within a namespace Retain , PVs are not deleted when a namespace is deleted To ensure that you can recover from a PV if a namespace is deleted unintentionally, the policy must be reset from Delete to Retain in the PV specification using the persistentVolumeReclaimPolicy property: apiVersion: v1 kind: PersistentVolume # ... spec: # ... persistentVolumeReclaimPolicy: Retain Alternatively, PVs can inherit the reclaim policy of an associated storage class. Storage classes are used for dynamic volume allocation. By configuring the reclaimPolicy property for the storage class, PVs that use the storage class are created with the appropriate reclaim policy. The storage class is configured for the PV using the storageClassName property. apiVersion: v1 kind: StorageClass metadata: name: gp2-retain parameters: # ... # ... reclaimPolicy: Retain apiVersion: v1 kind: PersistentVolume # ... spec: # ... storageClassName: gp2-retain Note If you are using Retain as the reclaim policy, but you want to delete an entire cluster, you need to delete the PVs manually. Otherwise they will not be deleted, and may cause unnecessary expenditure on resources. 12.6.2. Recovery from loss of an OpenShift cluster When a cluster is lost, you can use the data from disks/volumes to recover the cluster if they were preserved within the infrastructure. The recovery procedure is the same as with namespace deletion, assuming PVs can be recovered and they were created manually. 12.6.3. Recovering a deleted cluster from persistent volumes This procedure describes how to recover a deleted cluster from persistent volumes (PVs). In this situation, the Topic Operator identifies that topics exist in Kafka, but the KafkaTopic resources do not exist. When you get to the step to recreate your cluster, you have two options: Use Option 1 when you can recover all KafkaTopic resources. The KafkaTopic resources must therefore be recovered before the cluster is started so that the corresponding topics are not deleted by the Topic Operator. Use Option 2 when you are unable to recover all KafkaTopic resources. In this case, you deploy your cluster without the Topic Operator, delete the Topic Operator topic store metadata, and then redeploy the Kafka cluster with the Topic Operator so it can recreate the KafkaTopic resources from the corresponding topics. Note If the Topic Operator is not deployed, you only need to recover the PersistentVolumeClaim (PVC) resources. Before you begin In this procedure, it is essential that PVs are mounted into the correct PVC to avoid data corruption. 
A volumeName is specified for the PVC and this must match the name of the PV. For more information, see: Persistent Volume Claim naming JBOD and Persistent Volume Claims Note The procedure does not include recovery of KafkaUser resources, which must be recreated manually. If passwords and certificates need to be retained, secrets must be recreated before creating the KafkaUser resources. Procedure Check information on the PVs in the cluster: oc get pv Information is presented for PVs with data. Example output showing columns important to this procedure: NAME RECLAIMPOLICY CLAIM pvc-5e9c5c7f-3317-11ea-a650-06e1eadd9a4c ... Retain ... myproject/data-my-cluster-zookeeper-1 pvc-5e9cc72d-3317-11ea-97b0-0aef8816c7ea ... Retain ... myproject/data-my-cluster-zookeeper-0 pvc-5ead43d1-3317-11ea-97b0-0aef8816c7ea ... Retain ... myproject/data-my-cluster-zookeeper-2 pvc-7e1f67f9-3317-11ea-a650-06e1eadd9a4c ... Retain ... myproject/data-0-my-cluster-kafka-0 pvc-7e21042e-3317-11ea-9786-02deaf9aa87e ... Retain ... myproject/data-0-my-cluster-kafka-1 pvc-7e226978-3317-11ea-97b0-0aef8816c7ea ... Retain ... myproject/data-0-my-cluster-kafka-2 NAME shows the name of each PV. RECLAIM POLICY shows that PVs are retained . CLAIM shows the link to the original PVCs. Recreate the original namespace: oc create namespace myproject Recreate the original PVC resource specifications, linking the PVCs to the appropriate PV: For example: apiVersion: v1 kind: PersistentVolumeClaim metadata: name: data-0-my-cluster-kafka-0 spec: accessModes: - ReadWriteOnce resources: requests: storage: 100Gi storageClassName: gp2-retain volumeMode: Filesystem volumeName: pvc-7e1f67f9-3317-11ea-a650-06e1eadd9a4c Edit the PV specifications to delete the claimRef properties that bound the original PVC. For example: apiVersion: v1 kind: PersistentVolume metadata: annotations: kubernetes.io/createdby: aws-ebs-dynamic-provisioner pv.kubernetes.io/bound-by-controller: "yes" pv.kubernetes.io/provisioned-by: kubernetes.io/aws-ebs creationTimestamp: "<date>" finalizers: - kubernetes.io/pv-protection labels: failure-domain.beta.kubernetes.io/region: eu-west-1 failure-domain.beta.kubernetes.io/zone: eu-west-1c name: pvc-7e226978-3317-11ea-97b0-0aef8816c7ea resourceVersion: "39431" selfLink: /api/v1/persistentvolumes/pvc-7e226978-3317-11ea-97b0-0aef8816c7ea uid: 7efe6b0d-3317-11ea-a650-06e1eadd9a4c spec: accessModes: - ReadWriteOnce awsElasticBlockStore: fsType: xfs volumeID: aws://eu-west-1c/vol-09db3141656d1c258 capacity: storage: 100Gi claimRef: apiVersion: v1 kind: PersistentVolumeClaim name: data-0-my-cluster-kafka-2 namespace: myproject resourceVersion: "39113" uid: 54be1c60-3319-11ea-97b0-0aef8816c7ea nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: failure-domain.beta.kubernetes.io/zone operator: In values: - eu-west-1c - key: failure-domain.beta.kubernetes.io/region operator: In values: - eu-west-1 persistentVolumeReclaimPolicy: Retain storageClassName: gp2-retain volumeMode: Filesystem In the example, the following properties are deleted: claimRef: apiVersion: v1 kind: PersistentVolumeClaim name: data-0-my-cluster-kafka-2 namespace: myproject resourceVersion: "39113" uid: 54be1c60-3319-11ea-97b0-0aef8816c7ea Deploy the Cluster Operator. oc create -f install/cluster-operator -n my-project Recreate your cluster. Follow the steps depending on whether or not you have all the KafkaTopic resources needed to recreate your cluster. 
Option 1 : If you have all the KafkaTopic resources that existed before you lost your cluster, including internal topics such as committed offsets from __consumer_offsets : Recreate all KafkaTopic resources. It is essential that you recreate the resources before deploying the cluster, or the Topic Operator will delete the topics. Deploy the Kafka cluster. For example: oc apply -f kafka.yaml Option 2 : If you do not have all the KafkaTopic resources that existed before you lost your cluster: Deploy the Kafka cluster, as with the first option, but without the Topic Operator by removing the topicOperator property from the Kafka resource before deploying. If you include the Topic Operator in the deployment, the Topic Operator will delete all the topics. Delete the internal topic store topics from the Kafka cluster: oc run kafka-admin -ti --image=registry.redhat.io/amq7/amq-streams-kafka-31-rhel8:2.1.0 --rm=true --restart=Never -- ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --topic __strimzi-topic-operator-kstreams-topic-store-changelog --delete && ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --topic __strimzi_store_topic --delete The command must correspond to the type of listener and authentication used to access the Kafka cluster. Enable the Topic Operator by redeploying the Kafka cluster with the topicOperator property to recreate the KafkaTopic resources. For example: apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: #... entityOperator: topicOperator: {} 1 #... 1 Here we show the default configuration, which has no additional properties. You specify the required configuration using the properties described in Section 13.2.45, " EntityTopicOperatorSpec schema reference" . Verify the recovery by listing the KafkaTopic resources: oc get KafkaTopic 12.7. Setting limits on brokers using the Kafka Static Quota plugin Use the Kafka Static Quota plugin to set throughput and storage limits on brokers in your Kafka cluster. You enable the plugin and set limits by configuring the Kafka resource. You can set a byte-rate threshold and storage quotas to put limits on the clients interacting with your brokers. You can set byte-rate thresholds for producer and consumer bandwidth. The total limit is distributed across all clients accessing the broker. For example, you can set a byte-rate threshold of 40 MBps for producers. If two producers are running, they are each limited to a throughput of 20 MBps. Storage quotas throttle Kafka disk storage limits between a soft limit and hard limit. The limits apply to all available disk space. Producers are slowed gradually between the soft and hard limit. The limits prevent disks filling up too quickly and exceeding their capacity. Full disks can lead to issues that are hard to rectify. The hard limit is the maximum storage limit. Note For JBOD storage, the limit applies across all disks. If a broker is using two 1 TB disks and the quota is 1.1 TB, one disk might fill and the other disk will be almost empty. Prerequisites The Cluster Operator that manages the Kafka cluster is running. Procedure Add the plugin properties to the config of the Kafka resource. The plugin properties are shown in this example configuration. Example Kafka Static Quota plugin configuration apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # ... 
config: client.quota.callback.class: io.strimzi.kafka.quotas.StaticQuotaCallback 1 client.quota.callback.static.produce: 1000000 2 client.quota.callback.static.fetch: 1000000 3 client.quota.callback.static.storage.soft: 400000000000 4 client.quota.callback.static.storage.hard: 500000000000 5 client.quota.callback.static.storage.check-interval: 5 6 1 Loads the Kafka Static Quota plugin. 2 Sets the producer byte-rate threshold. 1 MBps in this example. 3 Sets the consumer byte-rate threshold. 1 MBps in this example. 4 Sets the lower soft limit for storage. 400 GB in this example. 5 Sets the higher hard limit for storage. 500 GB in this example. 6 Sets the interval in seconds between checks on storage. 5 seconds in this example. You can set this to 0 to disable the check. Update the resource. oc apply -f <kafka_configuration_file> Additional resources Kafka broker configuration tuning Setting user quotas 12.8. Tuning Kafka configuration Use configuration properties to optimize the performance of Kafka brokers, producers and consumers. A minimum set of configuration properties is required, but you can add or adjust properties to change how producers and consumers interact with Kafka brokers. For example, you can tune latency and throughput of messages so that clients can respond to data in real time. You might start by analyzing metrics to gauge where to make your initial configurations, then make incremental changes and further comparisons of metrics until you have the configuration you need. For more information about Apache Kafka configuration properties, see the Apache Kafka documentation . 12.8.1. Kafka broker configuration tuning Use configuration properties to optimize the performance of Kafka brokers. You can use standard Kafka broker configuration options, except for properties managed directly by AMQ Streams. 12.8.1.1. Basic broker configuration Certain broker configuration options are managed directly by AMQ Streams, driven by the Kafka custom resource specification: broker.id is the ID of the Kafka broker log.dirs are the directories for log data zookeeper.connect is the configuration to connect Kafka with ZooKeeper listener exposes the Kafka cluster to clients authorization mechanisms allow or decline actions executed by users authentication mechanisms prove the identity of users requiring access to Kafka Broker IDs start from 0 (zero) and correspond to the number of broker replicas. Log directories are mounted to /var/lib/kafka/data/kafka-log IDX based on the spec.kafka.storage configuration in the Kafka custom resource. IDX is the Kafka broker pod index. As such, you cannot configure these options through the config property of the Kafka custom resource. For a list of exclusions, see the KafkaClusterSpec schema reference . However, a typical broker configuration will include settings for properties related to topics, threads and logs. Basic broker configuration properties # ... num.partitions=1 default.replication.factor=3 offsets.topic.replication.factor=3 transaction.state.log.replication.factor=3 transaction.state.log.min.isr=2 log.retention.hours=168 log.segment.bytes=1073741824 log.retention.check.interval.ms=300000 num.network.threads=3 num.io.threads=8 num.recovery.threads.per.data.dir=1 socket.send.buffer.bytes=102400 socket.receive.buffer.bytes=102400 socket.request.max.bytes=104857600 group.initial.rebalance.delay.ms=0 zookeeper.connection.timeout.ms=6000 # ... 12.8.1.2. 
Replicating topics for high availability Basic topic properties set the default number of partitions and replication factor for topics, which will apply to topics that are created without these properties being explicitly set, including when topics are created automatically. # ... num.partitions=1 auto.create.topics.enable=false default.replication.factor=3 min.insync.replicas=2 replica.fetch.max.bytes=1048576 # ... The auto.create.topics.enable property is enabled by default so that topics that do not already exist are created automatically when needed by producers and consumers. If you are using automatic topic creation, you can set the default number of partitions for topics using num.partitions . Generally, however, this property is disabled so that more control is provided over topics through explicit topic creation. For example, you can use the AMQ Streams KafkaTopic resource or applications to create topics. For high availability environments, it is advisable to increase the replication factor to at least 3 for topics and set the minimum number of in-sync replicas required to 1 less than the replication factor. For topics created using the KafkaTopic resource, the replication factor is set using spec.replicas . For data durability , you should also set min.insync.replicas in your topic configuration and message delivery acknowledgments using acks=all in your producer configuration. Use replica.fetch.max.bytes to set the maximum size, in bytes, of messages fetched by each follower that replicates the leader partition. Change this value according to the average message size and throughput. When considering the total memory allocation required for read/write buffering, the memory available must also be able to accommodate the maximum replicated message size when multiplied by all followers. The delete.topic.enable property is enabled by default to allow topics to be deleted. In a production environment, you should disable this property to avoid accidental topic deletion, resulting in data loss. You can, however, temporarily enable it and delete topics and then disable it again. If delete.topic.enable is enabled, you can delete topics using the KafkaTopic resource. # ... auto.create.topics.enable=false delete.topic.enable=true # ... 12.8.1.3. Internal topic settings for transactions and commits If you are using transactions to enable atomic writes to partitions from producers, the state of the transactions is stored in the internal __transaction_state topic. By default, the brokers are configured with a replication factor of 3 and a minimum of 2 in-sync replicas for this topic, which means that a minimum of three brokers are required in your Kafka cluster. # ... transaction.state.log.replication.factor=3 transaction.state.log.min.isr=2 # ... Similarly, the internal __consumer_offsets topic, which stores consumer state, has default settings for the number of partitions and replication factor. # ... offsets.topic.num.partitions=50 offsets.topic.replication.factor=3 # ... Do not reduce these settings in production. You can increase the settings in a production environment. As an exception, you might want to reduce the settings in a single-broker test environment. 12.8.1.4. Improving request handling throughput by increasing I/O threads Network threads handle requests to the Kafka cluster, such as produce and fetch requests from client applications. Produce requests are placed in a request queue. Responses are placed in a response queue. 
The number of network threads should reflect the replication factor and the levels of activity from client producers and consumers interacting with the Kafka cluster. If you are going to have a lot of requests, you can increase the number of threads, using the amount of time threads are idle to determine when to add more threads. To reduce congestion and regulate the request traffic, you can limit the number of requests allowed in the request queue before the network thread is blocked. I/O threads pick up requests from the request queue to process them. Adding more threads can improve throughput, but the number of CPU cores and disk bandwidth imposes a practical upper limit. At a minimum, the number of I/O threads should equal the number of storage volumes. # ... num.network.threads=3 1 queued.max.requests=500 2 num.io.threads=8 3 num.recovery.threads.per.data.dir=1 4 # ... 1 The number of network threads for the Kafka cluster. 2 The number of requests allowed in the request queue. 3 The number of I/O threads for a Kafka broker. 4 The number of threads used for log loading at startup and flushing at shutdown. Configuration updates to the thread pools for all brokers might occur dynamically at the cluster level. These updates are restricted to between half the current size and twice the current size. Note Kafka broker metrics can help with working out the number of threads required. For example, metrics for the average time network threads are idle ( kafka.network:type=SocketServer,name=NetworkProcessorAvgIdlePercent ) indicate the percentage of resources used. If there is 0% idle time, all resources are in use, which means that adding more threads might be beneficial. If threads are slow or limited due to the number of disks, you can try increasing the size of the buffers for network requests to improve throughput: # ... replica.socket.receive.buffer.bytes=65536 # ... And also increase the maximum number of bytes Kafka can receive: # ... socket.request.max.bytes=104857600 # ... 12.8.1.5. Increasing bandwidth for high latency connections Kafka batches data to achieve reasonable throughput over high-latency connections from Kafka to clients, such as connections between datacenters. However, if high latency is a problem, you can increase the size of the buffers for sending and receiving messages. # ... socket.send.buffer.bytes=1048576 socket.receive.buffer.bytes=1048576 # ... You can estimate the optimal size of your buffers using a bandwidth-delay product calculation, which multiplies the maximum bandwidth of the link (in bytes/s) with the round-trip delay (in seconds) to give an estimate of how large a buffer is required to sustain maximum throughput. 12.8.1.6. Managing logs with data retention policies Kafka uses logs to store message data. Logs are a series of segments associated with various indexes. New messages are written to an active segment, and never subsequently modified. Segments are read when serving fetch requests from consumers. Periodically, the active segment is rolled to become read-only and a new active segment is created to replace it. There is only a single segment active at a time. Older segments are retained until they are eligible for deletion. Configuration at the broker level sets the maximum size in bytes of a log segment and the amount of time in milliseconds before an active segment is rolled: # ... log.segment.bytes=1073741824 log.roll.ms=604800000 # ... You can override these settings at the topic level using segment.bytes and segment.ms . 
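For illustration, these two properties can be overridden for a single topic through its KafkaTopic resource. The following is a sketch only; the topic name and the chosen values are examples, not recommendations.
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: my-topic
  labels:
    strimzi.io/cluster: my-cluster
spec:
  partitions: 1
  replicas: 3
  config:
    segment.bytes: 536870912   # roll segments at 512 MiB instead of the broker-level log.segment.bytes
    segment.ms: 259200000      # or after three days, whichever limit is reached first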
Whether you need to lower or raise these values depends on the policy for segment deletion. A larger size means the active segment contains more messages and is rolled less often. Segments also become eligible for deletion less often. You can set time-based or size-based log retention and cleanup policies so that logs are kept manageable. Depending on your requirements, you can use log retention configuration to delete old segments. If log retention policies are used, non-active log segments are removed when retention limits are reached. Deleting old segments bounds the storage space required for the log so you do not exceed disk capacity. For time-based log retention, you set a retention period based on hours, minutes and milliseconds. The retention period is based on the time messages were appended to the segment. The milliseconds configuration has priority over minutes, which has priority over hours. The minutes and milliseconds configuration is null by default, but the three options provide a substantial level of control over the data you wish to retain. Preference should be given to the milliseconds configuration, as it is the only one of the three properties that is dynamically updateable. # ... log.retention.ms=1680000 # ... If log.retention.ms is set to -1, no time limit is applied to log retention, so all logs are retained. Disk usage should always be monitored, but the -1 setting is not generally recommended as it can lead to issues with full disks, which can be hard to rectify. For size-based log retention, you set a maximum log size (of all segments in the log) in bytes: # ... log.retention.bytes=1073741824 # ... In other words, a log will typically have approximately log.retention.bytes/log.segment.bytes segments once it reaches a steady state. When the maximum log size is reached, older segments are removed. A potential issue with using a maximum log size is that it does not take into account the time messages were appended to a segment. You can use time-based and size-based log retention for your cleanup policy to get the balance you need. Whichever threshold is reached first triggers the cleanup. If you wish to add a time delay before a segment file is deleted from the system, you can add the delay using log.segment.delete.delay.ms for all topics at the broker level or file.delete.delay.ms for specific topics in the topic configuration. # ... log.segment.delete.delay.ms=60000 # ... 12.8.1.7. Removing log data with cleanup policies The method of removing older log data is determined by the log cleaner configuration. The log cleaner is enabled for the broker by default: # ... log.cleaner.enable=true # ... You can set the cleanup policy at the topic or broker level. Broker-level configuration is the default for topics that do not have policy set. You can set policy to delete logs, compact logs, or do both: # ... log.cleanup.policy=compact,delete # ... The delete policy corresponds to managing logs with data retention policies. It is suitable when data does not need to be retained forever. The compact policy guarantees to keep the most recent message for each message key. Log compaction is suitable where message values are changeable, and you want to retain the latest update. If cleanup policy is set to delete logs, older segments are deleted based on log retention limits. Otherwise, if the log cleaner is not enabled, and there are no log retention limits, the log will continue to grow. 
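As an illustration of per-topic cleanup configuration, the following sketch creates a topic with a combined compact and delete cleanup policy through the Kafka Admin API. The topic name, partition and replication counts, and retention value are hypothetical; on OpenShift you would typically declare the same settings in a KafkaTopic resource instead.

import java.util.List;
import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.NewTopic;

public class CompactedTopicExample {

    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // hypothetical broker address

        try (Admin admin = Admin.create(props)) {
            // Compact the log by key, then delete segments that fall outside the retention limit
            NewTopic topic = new NewTopic("my-compacted-topic", 3, (short) 3)
                .configs(Map.of(
                    "cleanup.policy", "compact,delete",
                    "retention.ms", "604800000",      // 7 days
                    "min.insync.replicas", "2"));

            admin.createTopics(List.of(topic)).all().get();
        }
    }
}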
If cleanup policy is set for log compaction, the head of the log operates as a standard Kafka log, with writes for new messages appended in order. In the tail of a compacted log, where the log cleaner operates, records will be deleted if another record with the same key occurs later in the log. Messages with null values are also deleted. If you're not using keys, you can't use compaction because keys are needed to identify related messages. While Kafka guarantees that the latest messages for each key will be retained, it does not guarantee that the whole compacted log will not contain duplicates. Figure 12.1. Log showing key value writes with offset positions before compaction Using keys to identify messages, Kafka compaction keeps the latest message (with the highest offset) for a specific message key, eventually discarding earlier messages that have the same key. In other words, the message in its latest state is always available and any out-of-date records of that particular message are eventually removed when the log cleaner runs. You can restore a message back to a previous state. Records retain their original offsets even when surrounding records get deleted. Consequently, the tail can have non-contiguous offsets. When consuming an offset that's no longer available in the tail, the record with the next higher offset is found. Figure 12.2. Log after compaction If you choose only a compact policy, your log can still become arbitrarily large. In this case, you can set the policy to compact and delete logs. If you choose to compact and delete, the log data is first compacted, removing records with a key in the head of the log. Afterwards, data that falls before the log retention threshold is deleted. Figure 12.3. Log retention point and compaction point You set the frequency at which the log is checked for cleanup in milliseconds: # ... log.retention.check.interval.ms=300000 # ... Adjust the log retention check interval in relation to the log retention settings. Smaller retention sizes might require more frequent checks. The frequency of cleanup should be often enough to manage the disk space, but not so often that it affects performance on a topic. You can also set a time in milliseconds to put the cleaner on standby if there are no logs to clean: # ... log.cleaner.backoff.ms=15000 # ... If you choose to delete older log data, you can set a period in milliseconds to retain the deleted data before it is purged: # ... log.cleaner.delete.retention.ms=86400000 # ... The deleted data retention period gives time to notice the data is gone before it is irretrievably deleted. To delete all messages related to a specific key, a producer can send a tombstone message. A tombstone has a null value and acts as a marker to tell a consumer the value is deleted. After compaction, only the tombstone is retained, which must be for a long enough period for the consumer to know that the message is deleted. When older messages are deleted, having no value, the tombstone key is also deleted from the partition. 12.8.1.8. Managing disk utilization There are many other configuration settings related to log cleanup, but of particular importance is memory allocation. The deduplication property specifies the total memory for cleanup across all log cleaner threads. You can set an upper limit on the percentage of memory used through the buffer load factor. # ... log.cleaner.dedupe.buffer.size=134217728 log.cleaner.io.buffer.load.factor=0.9 # ... Each log entry uses exactly 24 bytes, so you can work out how many log entries the buffer can handle in a single run and adjust the setting accordingly.
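As a rough, back-of-the-envelope sketch of that sizing arithmetic, the following calculation uses the example values shown above; the figures are illustrative only and should be adjusted to your own configuration.

public class DedupeBufferCapacity {

    public static void main(String[] args) {
        long dedupeBufferSize = 134_217_728L; // log.cleaner.dedupe.buffer.size from the example above
        double bufferLoadFactor = 0.9;        // log.cleaner.io.buffer.load.factor
        long bytesPerLogEntry = 24;           // each entry in the deduplication buffer uses 24 bytes

        long usableBytes = (long) (dedupeBufferSize * bufferLoadFactor);
        long entriesPerRun = usableBytes / bytesPerLogEntry;

        // Prints approximately 5,033,164 entries for the values above
        System.out.printf("Log entries handled per cleaning run: ~%,d%n", entriesPerRun);
    }
}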
If possible, consider increasing the number of log cleaner threads if you are looking to reduce the log cleaning time: # ... log.cleaner.threads=8 # ... If you are experiencing issues with 100% disk bandwidth usage, you can throttle the log cleaner I/O so that the sum of the read/write operations is less than a specified double value based on the capabilities of the disks performing the operations: # ... log.cleaner.io.max.bytes.per.second=1.7976931348623157E308 # ... 12.8.1.9. Handling large message sizes The default batch size for messages is 1MB, which is optimal for maximum throughput in most use cases. Kafka can accommodate larger batches at a reduced throughput, assuming adequate disk capacity. Large message sizes are handled in four ways: Producer-side message compression writes compressed messages to the log. Reference-based messaging sends only a reference to data stored in some other system in the message's value. Inline messaging splits messages into chunks that use the same key, which are then combined on output using a stream-processor like Kafka Streams. Broker and producer/consumer client application configuration built to handle larger message sizes. The reference-based messaging and message compression options are recommended and cover most situations. With any of these options, care must be taken to avoid introducing performance issues. Producer-side compression For producer configuration, you specify a compression.type , such as Gzip, which is then applied to batches of data generated by the producer. Using the broker configuration compression.type=producer , the broker retains whatever compression the producer used. Whenever producer and topic compression do not match, the broker has to compress batches again prior to appending them to the log, which impacts broker performance. Compression also adds additional processing overhead on the producer and decompression overhead on the consumer, but includes more data in a batch, so is often beneficial to throughput when message data compresses well. Combine producer-side compression with fine-tuning of the batch size to facilitate optimum throughput. Using metrics helps to gauge the average batch size needed. Reference-based messaging Reference-based messaging is useful for data replication when you do not know how big a message will be. The external data store must be fast, durable, and highly available for this configuration to work. Data is written to the data store and a reference to the data is returned. The producer sends a message containing the reference to Kafka. The consumer gets the reference from the message and uses it to fetch the data from the data store. Figure 12.4. Reference-based messaging flow As the message passing requires more trips, end-to-end latency will increase. Another significant drawback of this approach is that there is no automatic cleanup of the data in the external system when the Kafka message gets cleaned up. A hybrid approach would be to only send large messages to the data store and process standard-sized messages directly. Inline messaging Inline messaging is complex, but it does not have the overhead of depending on external systems like reference-based messaging. The producing client application has to serialize and then chunk the data if the message is too big. The producer then uses the Kafka ByteArraySerializer or similar to serialize each chunk again before sending it.
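A highly simplified sketch of that producer-side chunking follows. The topic name, key, chunk size, and payload are hypothetical, and a real implementation would also attach chunk metadata (for example, a chunk index and total count in record headers) so that the consumer can reassemble and de-duplicate messages reliably.

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ChunkingProducerSketch {

    static final int CHUNK_SIZE = 512 * 1024; // hypothetical 512 KiB chunk size

    // Split a serialized payload into chunks that share the same key so that all
    // chunks land in the same partition and can be reassembled in order.
    static List<ProducerRecord<String, byte[]>> chunk(String topic, String key, byte[] payload) {
        List<ProducerRecord<String, byte[]>> records = new ArrayList<>();
        for (int start = 0; start < payload.length; start += CHUNK_SIZE) {
            int end = Math.min(start + CHUNK_SIZE, payload.length);
            records.add(new ProducerRecord<>(topic, key, Arrays.copyOfRange(payload, start, end)));
        }
        return records;
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // hypothetical broker address
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");

        try (KafkaProducer<String, byte[]> producer = new KafkaProducer<>(props)) {
            byte[] largePayload = new byte[3 * 1024 * 1024]; // stand-in for serialized application data
            for (ProducerRecord<String, byte[]> record : chunk("my-topic", "message-42", largePayload)) {
                producer.send(record);
            }
        }
    }
}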
The consumer tracks messages and buffers chunks until it has a complete message. The consuming client application receives the chunks, which are assembled before deserialization. Complete messages are delivered to the rest of the consuming application in order according to the offset of the first or last chunk for each set of chunked messages. Successful delivery of the complete message is checked against offset metadata to avoid duplicates during a rebalance. Figure 12.5. Inline messaging flow Inline messaging has a performance overhead on the consumer side because of the buffering required, particularly when handling a series of large messages in parallel. The chunks of large messages can become interleaved, so that it is not always possible to commit when all the chunks of a message have been consumed if the chunks of another large message in the buffer are incomplete. For this reason, the buffering is usually supported by persisting message chunks or by implementing commit logic. Configuration to handle larger messages If larger messages cannot be avoided, and to avoid blocks at any point of the message flow, you can increase message limits. To do this, configure max.message.bytes at the topic level to set the maximum record batch size for individual topics. If you set message.max.bytes at the broker level, larger messages are allowed for all topics. The broker will reject any message that is greater than the limit set with message.max.bytes . The buffer size for the producers ( max.request.size ) and consumers ( max.partition.fetch.bytes ) must be able to accommodate the larger messages. 12.8.1.10. Controlling the log flush of message data Log flush properties control the periodic writes of cached message data to disk. The scheduler specifies the frequency of checks on the log cache in milliseconds: # ... log.flush.scheduler.interval.ms=2000 # ... You can control the frequency of the flush based on the maximum amount of time that a message is kept in-memory and the maximum number of messages in the log before writing to disk: # ... log.flush.interval.ms=50000 log.flush.interval.messages=100000 # ... The wait between flushes includes the time to make the check and the specified interval before the flush is carried out. Increasing the frequency of flushes can affect throughput. Generally, the recommendation is to not set explicit flush thresholds and let the operating system perform background flush using its default settings. Partition replication provides greater data durability than writes to any single disk as a failed broker can recover from its in-sync replicas. If you are using application flush management, setting lower flush thresholds might be appropriate if you are using faster disks. 12.8.1.11. Partition rebalancing for availability Partitions can be replicated across brokers for fault tolerance. For a given partition, one broker is elected leader and handles all produce requests (writes to the log). Partition followers on other brokers replicate the partition data of the partition leader for data reliability in the event of the leader failing. Followers do not normally serve clients, though rack configuration allows a consumer to consume messages from the closest replica when a Kafka cluster spans multiple datacenters. Followers operate only to replicate messages from the partition leader and allow recovery should the leader fail. Recovery requires an in-sync follower.
Followers stay in sync by sending fetch requests to the leader, which returns messages to the follower in order. The follower is considered to be in sync if it has caught up with the most recently committed message on the leader. The leader checks this by looking at the last offset requested by the follower. An out-of-sync follower is usually not eligible as a leader should the current leader fail, unless unclean leader election is allowed . You can adjust the lag time before a follower is considered out of sync: # ... replica.lag.time.max.ms=30000 # ... Lag time puts an upper limit on the time to replicate a message to all in-sync replicas and how long a producer has to wait for an acknowledgment. If a follower fails to make a fetch request and catch up with the latest message within the specified lag time, it is removed from in-sync replicas. You can reduce the lag time to detect failed replicas sooner, but by doing so you might increase the number of followers that fall out of sync needlessly. The right lag time value depends on both network latency and broker disk bandwidth. When a leader partition is no longer available, one of the in-sync replicas is chosen as the new leader. The first broker in a partition's list of replicas is known as the preferred leader. By default, Kafka is enabled for automatic partition leader rebalancing based on a periodic check of leader distribution. That is, Kafka checks to see if the preferred leader is the current leader. A rebalance ensures that leaders are evenly distributed across brokers and brokers are not overloaded. You can use Cruise Control for AMQ Streams to figure out replica assignments to brokers that balance load evenly across the cluster. Its calculation takes into account the differing load experienced by leaders and followers. A failed leader affects the balance of a Kafka cluster because the remaining brokers get the extra work of leading additional partitions. For the assignment found by Cruise Control to actually be balanced, it is necessary that partitions are led by the preferred leader. Kafka can automatically ensure that the preferred leader is being used (where possible), changing the current leader if necessary. This ensures that the cluster remains in the balanced state found by Cruise Control. You can control the frequency, in seconds, of the rebalance check and the maximum percentage of imbalance allowed for a broker before a rebalance is triggered. #... auto.leader.rebalance.enable=true leader.imbalance.check.interval.seconds=300 leader.imbalance.per.broker.percentage=10 #... The percentage leader imbalance for a broker is the ratio between the current number of partitions for which the broker is the current leader and the number of partitions for which it is the preferred leader. You can set the percentage to zero to ensure that preferred leaders are always elected, assuming they are in sync. If the checks for rebalances need more control, you can disable automated rebalances. You can then choose when to trigger a rebalance using the kafka-leader-election.sh command line tool. Note The Grafana dashboards provided with AMQ Streams show metrics for under-replicated partitions and partitions that do not have an active leader. 12.8.1.12. Unclean leader election Leader election to an in-sync replica is considered clean because it guarantees no loss of data. And this is what happens by default. But what if there is no in-sync replica to take on leadership?
Perhaps the ISR (in-sync replica) only contained the leader when the leader's disk died. If a minimum number of in-sync replicas is not set, and there are no followers in sync with the partition leader when its hard drive fails irrevocably, data is already lost. Not only that, but a new leader cannot be elected because there are no in-sync followers. You can configure how Kafka handles leader failure: # ... unclean.leader.election.enable=false # ... Unclean leader election is disabled by default, which means that out-of-sync replicas cannot become leaders. With clean leader election, if no other broker was in the ISR when the old leader was lost, Kafka waits until that leader is back online before messages can be written or read. Unclean leader election means out-of-sync replicas can become leaders, but you risk losing messages. The choice you make depends on whether your requirements favor availability or durability. You can override the default configuration for specific topics at the topic level. If you cannot afford the risk of data loss, then leave the default configuration. 12.8.1.13. Avoiding unnecessary consumer group rebalances For consumers joining a new consumer group, you can add a delay so that unnecessary rebalances to the broker are avoided: # ... group.initial.rebalance.delay.ms=3000 # ... The delay is the amount of time that the coordinator waits for members to join. The longer the delay, the more likely it is that all the members will join in time and avoid a rebalance. But the delay also prevents the group from consuming until the period has ended. Additional resources Setting limits on brokers using the Kafka Static Quota plugin 12.8.2. Kafka producer configuration tuning Use a basic producer configuration with optional properties that are tailored to specific use cases. Adjusting your configuration to maximize throughput might increase latency or vice versa. You will need to experiment and tune your producer configuration to get the balance you need. 12.8.2.1. Basic producer configuration Connection and serializer properties are required for every producer. Generally, it is good practice to add a client id for tracking, and use compression on the producer to reduce batch sizes in requests. In a basic producer configuration: The order of messages in a partition is not guaranteed. The acknowledgment of messages reaching the broker does not guarantee durability. Basic producer configuration properties # ... bootstrap.servers=localhost:9092 1 key.serializer=org.apache.kafka.common.serialization.StringSerializer 2 value.serializer=org.apache.kafka.common.serialization.StringSerializer 3 client.id=my-client 4 compression.type=gzip 5 # ... 1 (Required) Tells the producer to connect to a Kafka cluster using a host:port bootstrap server address for a Kafka broker. The producer uses the address to discover and connect to all brokers in the cluster. Use a comma-separated list to specify two or three addresses in case a server is down, but it's not necessary to provide a list of all the brokers in the cluster. 2 (Required) Serializer to transform the key of each message to bytes prior to them being sent to a broker. 3 (Required) Serializer to transform the value of each message to bytes prior to them being sent to a broker. 4 (Optional) The logical name for the client, which is used in logs and metrics to identify the source of a request. 
5 (Optional) The codec for compressing messages, which are sent and might be stored in compressed format and then decompressed when reaching a consumer. Compression is useful for improving throughput and reducing the load on storage, but might not be suitable for low latency applications where the cost of compression or decompression could be prohibitive. 12.8.2.2. Data durability You can apply greater data durability, to minimize the likelihood that messages are lost, using message delivery acknowledgments. # ... acks=all 1 # ... 1 Specifying acks=all forces a partition leader to replicate messages to a certain number of followers before acknowledging that the message request was successfully received. Because of the additional checks, acks=all increases the latency between the producer sending a message and receiving acknowledgment. The number of brokers which need to have appended the messages to their logs before the acknowledgment is sent to the producer is determined by the topic's min.insync.replicas configuration. A typical starting point is to have a topic replication factor of 3, with two in-sync replicas on other brokers. In this configuration, the producer can continue unaffected if a single broker is unavailable. If a second broker becomes unavailable, the producer won't receive acknowledgments and won't be able to produce more messages. Topic configuration to support acks=all # ... min.insync.replicas=2 1 # ... 1 Use 2 in-sync replicas. The default is 1 . Note If the system fails, there is a risk of unsent data in the buffer being lost. 12.8.2.3. Ordered delivery Idempotent producers avoid duplicates as messages are delivered exactly once. IDs and sequence numbers are assigned to messages to ensure the order of delivery, even in the event of failure. If you are using acks=all for data consistency, enabling idempotency makes sense for ordered delivery. Ordered delivery with idempotency # ... enable.idempotence=true 1 max.in.flight.requests.per.connection=5 2 acks=all 3 retries=2147483647 4 # ... 1 Set to true to enable the idempotent producer. 2 With idempotent delivery the number of in-flight requests may be greater than 1 while still providing the message ordering guarantee. The default is 5 in-flight requests. 3 Set acks to all . 4 Set the number of attempts to resend a failed message request. If you are not using acks=all and idempotency because of the performance cost, set the number of in-flight (unacknowledged) requests to 1 to preserve ordering. Otherwise, a situation is possible where Message-A fails only to succeed after Message-B was already written to the broker. Ordered delivery without idempotency # ... enable.idempotence=false 1 max.in.flight.requests.per.connection=1 2 retries=2147483647 # ... 1 Set to false to disable the idempotent producer. 2 Set the number of in-flight requests to exactly 1 . 12.8.2.4. Reliability guarantees Idempotence is useful for exactly once writes to a single partition. Transactions, when used with idempotence, allow exactly once writes across multiple partitions. Transactions guarantee that messages using the same transactional ID are produced once, and either all are successfully written to the respective logs or none of them are. # ... enable.idempotence=true max.in.flight.requests.per.connection=5 acks=all retries=2147483647 transactional.id= UNIQUE-ID 1 transaction.timeout.ms=900000 2 # ... 1 Specify a unique transactional ID. 2 Set the maximum allowed time for transactions in milliseconds before a timeout error is returned. 
The default is 900000 or 15 minutes. The choice of transactional.id is important in order that the transactional guarantee is maintained. Each transactional id should be used for a unique set of topic partitions. For example, this can be achieved using an external mapping of topic partition names to transactional ids, or by computing the transactional id from the topic partition names using a function that avoids collisions. 12.8.2.5. Optimizing throughput and latency Usually, the requirement of a system is to satisfy a particular throughput target for a proportion of messages within a given latency. For example, targeting 500,000 messages per second with 95% of messages being acknowledged within 2 seconds. It's likely that the messaging semantics (message ordering and durability) of your producer are defined by the requirements for your application. For instance, it's possible that you don't have the option of using acks=0 or acks=1 without breaking some important property or guarantee provided by your application. Broker restarts have a significant impact on high percentile statistics. For example, over a long period the 99th percentile latency is dominated by behavior around broker restarts. This is worth considering when designing benchmarks or comparing performance numbers from benchmarking with performance numbers seen in production. Depending on your objective, Kafka offers a number of configuration parameters and techniques for tuning producer performance for throughput and latency. Message batching ( linger.ms and batch.size ) Message batching delays sending messages in the hope that more messages destined for the same broker will be sent, allowing them to be batched into a single produce request. Batching is a compromise between higher latency in return for higher throughput. Time-based batching is configured using linger.ms , and size-based batching is configured using batch.size . Compression ( compression.type ) Message compression adds latency in the producer (CPU time spent compressing the messages), but makes requests (and potentially disk writes) smaller, which can increase throughput. Whether compression is worthwhile, and the best compression to use, will depend on the messages being sent. Compression happens on the thread which calls KafkaProducer.send() , so if the latency of this method matters for your application you should consider using more threads. Pipelining ( max.in.flight.requests.per.connection ) Pipelining means sending more requests before the response to a request has been received. In general more pipelining means better throughput, up to a threshold at which other effects, such as worse batching, start to counteract the effect on throughput. Lowering latency When your application calls KafkaProducer.send() the messages are: Processed by any interceptors Serialized Assigned to a partition Compressed Added to a batch of messages in a per-partition queue At which point the send() method returns. 
So the time send() is blocked is determined by: The time spent in the interceptors, serializers and partitioner The compression algorithm used The time spent waiting for a buffer to use for compression Batches will remain in the queue until one of the following occurs: The batch is full (according to batch.size ) The delay introduced by linger.ms has passed The sender is about to send message batches for other partitions to the same broker, and it is possible to add this batch too The producer is being flushed or closed Look at the configuration for batching and buffering to mitigate the impact of send() blocking on latency. # ... linger.ms=100 1 batch.size=16384 2 buffer.memory=33554432 3 # ... 1 The linger property adds a delay in milliseconds so that larger batches of messages are accumulated and sent in a request. The default is 0. 2 If a maximum batch.size in bytes is used, a request is sent when the maximum is reached, or messages have been queued for longer than linger.ms (whichever comes sooner). Adding the delay allows batches to accumulate messages up to the batch size. 3 The buffer size must be at least as big as the batch size, and be able to accommodate buffering, compression and in-flight requests. Increasing throughput Improve throughput of your message requests by adjusting the maximum time to wait before a message is delivered and completes a send request. You can also direct messages to a specified partition by writing a custom partitioner to replace the default, as shown in the sketch that follows. # ... delivery.timeout.ms=120000 1 partitioner.class=my-custom-partitioner 2 # ... 1 The maximum time in milliseconds to wait for a complete send request. You can set the value to MAX_LONG to delegate to Kafka an indefinite number of retries. The default is 120000 or 2 minutes. 2 Specify the class name of the custom partitioner.
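A minimal sketch of such a custom partitioner follows. The routing rule (pinning a hypothetical "audit-" key prefix to partition 0) is illustrative only; the interface shown is the standard org.apache.kafka.clients.producer.Partitioner contract. You would then set partitioner.class to the fully qualified name of the class in your producer configuration.

import java.util.Map;

import org.apache.kafka.clients.producer.Partitioner;
import org.apache.kafka.common.Cluster;
import org.apache.kafka.common.utils.Utils;

// Example partitioner that pins keys with a hypothetical "audit-" prefix to partition 0
// and spreads all other keys across the remaining partitions by hashing the key bytes.
public class MyCustomPartitioner implements Partitioner {

    @Override
    public void configure(Map<String, ?> configs) {
        // No configuration needed for this sketch
    }

    @Override
    public int partition(String topic, Object key, byte[] keyBytes, Object value, byte[] valueBytes, Cluster cluster) {
        int numPartitions = cluster.partitionsForTopic(topic).size();
        if (key instanceof String && ((String) key).startsWith("audit-")) {
            return 0;
        }
        if (keyBytes == null || numPartitions == 1) {
            return 0;
        }
        // Hash all other keys over the remaining partitions
        return 1 + Utils.toPositive(Utils.murmur2(keyBytes)) % (numPartitions - 1);
    }

    @Override
    public void close() {
        // Nothing to clean up
    }
}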
12.8.3. Kafka consumer configuration tuning Use a basic consumer configuration with optional properties that are tailored to specific use cases. When tuning your consumers, your primary concern will be ensuring that they cope efficiently with the amount of data ingested. As with the producer tuning, be prepared to make incremental changes until the consumers operate as expected. 12.8.3.1. Basic consumer configuration Connection and deserializer properties are required for every consumer. Generally, it is good practice to add a client id for tracking. In a consumer configuration, irrespective of any subsequent configuration: The consumer fetches from a given offset and consumes the messages in order, unless the offset is changed to skip or re-read messages. The broker does not know if the consumer processed the responses, even when committing offsets to Kafka, because the offsets might be sent to a different broker in the cluster. Basic consumer configuration properties # ... bootstrap.servers=localhost:9092 1 key.deserializer=org.apache.kafka.common.serialization.StringDeserializer 2 value.deserializer=org.apache.kafka.common.serialization.StringDeserializer 3 client.id=my-client 4 group.id=my-group-id 5 # ... 1 (Required) Tells the consumer to connect to a Kafka cluster using a host:port bootstrap server address for a Kafka broker. The consumer uses the address to discover and connect to all brokers in the cluster. Use a comma-separated list to specify two or three addresses in case a server is down, but it is not necessary to provide a list of all the brokers in the cluster. If you are using a loadbalancer service to expose the Kafka cluster, you only need the address for the service because the availability is handled by the loadbalancer. 2 (Required) Deserializer to transform the bytes fetched from the Kafka broker into message keys. 3 (Required) Deserializer to transform the bytes fetched from the Kafka broker into message values. 4 (Optional) The logical name for the client, which is used in logs and metrics to identify the source of a request. The id can also be used to throttle consumers based on processing time quotas. 5 (Conditional) A group id is required for a consumer to be able to join a consumer group. 12.8.3.2. Scaling data consumption using consumer groups Consumer groups share a typically large data stream generated by one or multiple producers from a given topic. Consumers are grouped using a group.id property, allowing messages to be spread across the members. One of the consumers in the group is elected leader and decides how the partitions are assigned to the consumers in the group. Each partition can only be assigned to a single consumer. If you do not already have as many consumers as partitions, you can scale data consumption by adding more consumer instances with the same group.id . Adding more consumers to a group than there are partitions will not help throughput, but it does mean that there are consumers on standby should one stop functioning. If you can meet throughput goals with fewer consumers, you save on resources. Consumers within the same consumer group send offset commits and heartbeats to the same broker. So the greater the number of consumers in the group, the higher the request load on the broker. # ... group.id=my-group-id 1 # ... 1 Add a consumer to a consumer group using a group id. 12.8.3.3. Message ordering guarantees Kafka brokers receive fetch requests from consumers that ask the broker to send messages from a list of topics, partitions and offset positions. A consumer observes messages in a single partition in the same order that they were committed to the broker, which means that Kafka only provides ordering guarantees for messages in a single partition. Conversely, if a consumer is consuming messages from multiple partitions, the order of messages in different partitions as observed by the consumer does not necessarily reflect the order in which they were sent. If you want a strict ordering of messages from one topic, use one partition per consumer. 12.8.3.4. Optimizing throughput and latency Control the number of messages returned when your client application calls KafkaConsumer.poll() . Use the fetch.max.wait.ms and fetch.min.bytes properties to increase the minimum amount of data fetched by the consumer from the Kafka broker. Time-based batching is configured using fetch.max.wait.ms , and size-based batching is configured using fetch.min.bytes . If CPU utilization in the consumer or broker is high, it might be because there are too many requests from the consumer. You can adjust the fetch.max.wait.ms and fetch.min.bytes properties higher so that there are fewer requests and messages are delivered in bigger batches. By adjusting higher, throughput is improved with some cost to latency. You can also adjust higher if the amount of data being produced is low. For example, if you set fetch.max.wait.ms to 500ms and fetch.min.bytes to 16384 bytes, when Kafka receives a fetch request from the consumer it will respond as soon as either threshold is reached.
Conversely, you can adjust the fetch.max.wait.ms and fetch.min.bytes properties lower to improve end-to-end latency. # ... fetch.max.wait.ms=500 1 fetch.min.bytes=16384 2 # ... 1 The maximum time in milliseconds the broker will wait before completing fetch requests. The default is 500 milliseconds. 2 If a minimum batch size in bytes is used, a request is sent when the minimum is reached, or messages have been queued for longer than fetch.max.wait.ms (whichever comes sooner). Adding the delay allows batches to accumulate messages up to the batch size. Lowering latency by increasing the fetch request size Use the fetch.max.bytes and max.partition.fetch.bytes properties to increase the maximum amount of data fetched by the consumer from the Kafka broker. The fetch.max.bytes property sets a maximum limit in bytes on the amount of data fetched from the broker at one time. The max.partition.fetch.bytes property sets a maximum limit in bytes on how much data is returned for each partition, which must always be larger than the maximum message size set in the broker configuration ( message.max.bytes ) or topic configuration ( max.message.bytes ). The maximum amount of memory a client can consume is calculated approximately as: NUMBER-OF-BROKERS * fetch.max.bytes and NUMBER-OF-PARTITIONS * max.partition.fetch.bytes If memory usage can accommodate it, you can increase the values of these two properties. By allowing more data in each request, latency is improved as there are fewer fetch requests. # ... fetch.max.bytes=52428800 1 max.partition.fetch.bytes=1048576 2 # ... 1 The maximum amount of data in bytes returned for a fetch request. 2 The maximum amount of data in bytes returned for each partition. 12.8.3.5. Avoiding data loss or duplication when committing offsets The Kafka auto-commit mechanism allows a consumer to commit the offsets of messages automatically. If enabled, the consumer will commit offsets received from polling the broker at 5000ms intervals. The auto-commit mechanism is convenient, but it introduces a risk of data loss and duplication. If a consumer has fetched and transformed a number of messages, but the system crashes with processed messages in the consumer buffer when performing an auto-commit, that data is lost. If the system crashes after processing the messages, but before performing the auto-commit, the data is duplicated on another consumer instance after rebalancing. Auto-committing can avoid data loss only when all messages are processed before the next poll to the broker, or the consumer closes. To minimize the likelihood of data loss or duplication, you can set enable.auto.commit to false and develop your client application to have more control over committing offsets. Or you can use auto.commit.interval.ms to decrease the intervals between commits. # ... enable.auto.commit=false 1 # ... 1 Auto commit is set to false to provide more control over committing offsets. By setting enable.auto.commit to false , you can commit offsets after all processing has been performed and the message has been consumed. For example, you can set up your application to call the Kafka commitSync and commitAsync commit APIs. The commitSync API commits the offsets in a message batch returned from polling. You call the API when you are finished processing all the messages in the batch. If you use the commitSync API, the application will not poll for new messages until the last offset in the batch is committed. If this negatively affects throughput, you can commit less frequently, or you can use the commitAsync API. The commitAsync API does not wait for the broker to respond to a commit request, but risks creating more duplicates when rebalancing. A common approach is to combine both commit APIs in an application, with the commitSync API used just before shutting the consumer down or rebalancing to make sure the final commit is successful.
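A minimal sketch of that combined approach follows. The topic name and processing logic are placeholders; the key point is committing asynchronously inside the poll loop and synchronously once before shutdown.

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import java.util.concurrent.atomic.AtomicBoolean;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ManualCommitConsumer {

    // In a real application, another thread or a shutdown hook would set this to false
    private static final AtomicBoolean running = new AtomicBoolean(true);

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // hypothetical broker address
        props.put("group.id", "my-group-id");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("enable.auto.commit", "false"); // commit manually after processing

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("my-topic"));
            try {
                while (running.get()) {
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
                    for (ConsumerRecord<String, String> record : records) {
                        process(record);
                    }
                    consumer.commitAsync(); // non-blocking commit inside the poll loop
                }
            } finally {
                consumer.commitSync(); // blocking commit to make sure the final offsets are committed
            }
        }
    }

    private static void process(ConsumerRecord<String, String> record) {
        System.out.printf("partition=%d offset=%d key=%s%n", record.partition(), record.offset(), record.key());
    }
}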
12.8.3.5.1. Controlling transactional messages Consider using transactional ids and enabling idempotence ( enable.idempotence=true ) on the producer side to guarantee exactly-once delivery. On the consumer side, you can then use the isolation.level property to control how transactional messages are read by the consumer. The isolation.level property has two valid values: read_committed read_uncommitted (default) Use read_committed to ensure that only transactional messages that have been committed are read by the consumer. However, this will cause an increase in end-to-end latency, because the consumer will not be able to return a message until the brokers have written the transaction markers that record the result of the transaction ( committed or aborted ). # ... enable.auto.commit=false isolation.level=read_committed 1 # ... 1 Set to read_committed so that only committed messages are read by the consumer. 12.8.3.6. Recovering from failure to avoid data loss Use the session.timeout.ms and heartbeat.interval.ms properties to configure the time taken to check and recover from consumer failure within a consumer group. The session.timeout.ms property specifies the maximum amount of time in milliseconds a consumer within a consumer group can be out of contact with a broker before being considered inactive and a rebalancing is triggered between the active consumers in the group. When the group rebalances, the partitions are reassigned to the members of the group. The heartbeat.interval.ms property specifies the interval in milliseconds between heartbeat checks to the consumer group coordinator to indicate that the consumer is active and connected. The heartbeat interval must be lower, usually by a third, than the session timeout interval. If you set the session.timeout.ms property lower, failing consumers are detected earlier, and rebalancing can take place more quickly. However, take care not to set the timeout so low that the broker fails to receive a heartbeat in time and triggers an unnecessary rebalance. Decreasing the heartbeat interval reduces the chance of accidental rebalancing, but more frequent heartbeats increase the overhead on broker resources. 12.8.3.7. Managing offset policy Use the auto.offset.reset property to control how a consumer behaves when no offsets have been committed, or a committed offset is no longer valid or deleted. Suppose you deploy a consumer application for the first time, and it reads messages from an existing topic. Because this is the first time the group.id is used, the __consumer_offsets topic does not contain any offset information for this application. The new application can start processing all existing messages from the start of the log or only new messages. The default reset value is latest , which starts at the end of the partition, and consequently means some messages are missed. To avoid data loss, but increase the amount of processing, set auto.offset.reset to earliest to start at the beginning of the partition. Also consider using the earliest option to avoid messages being lost when the offsets retention period ( offsets.retention.minutes ) configured for a broker has ended.
If a consumer group or standalone consumer is inactive and commits no offsets during the retention period, previously committed offsets are deleted from __consumer_offsets . # ... heartbeat.interval.ms=3000 1 session.timeout.ms=10000 2 auto.offset.reset=earliest 3 # ... 1 Adjust the heartbeat interval lower according to anticipated rebalances. 2 If no heartbeats are received by the Kafka broker before the timeout duration expires, the consumer is removed from the consumer group and a rebalance is initiated. If the broker configuration has a group.min.session.timeout.ms and group.max.session.timeout.ms , the session timeout value must be within that range. 3 Set to earliest to return to the start of a partition and avoid data loss if offsets were not committed. If the amount of data returned in a single fetch request is large, a timeout might occur before the consumer has processed it. In this case, you can lower max.partition.fetch.bytes or increase session.timeout.ms . 12.8.3.8. Minimizing the impact of rebalances The rebalancing of partitions between active consumers in a group takes the time needed for: Consumers to commit their offsets The new consumer group to be formed The group leader to assign partitions to group members The consumers in the group to receive their assignments and start fetching Clearly, the process increases the downtime of a service, particularly when it happens repeatedly during a rolling restart of a consumer group cluster. In this situation, you can use the concept of static membership to reduce the number of rebalances. Rebalancing assigns topic partitions evenly among consumer group members. Static membership uses persistence so that a consumer instance is recognized during a restart after a session timeout. The consumer group coordinator can identify a new consumer instance using a unique id that is specified using the group.instance.id property. During a restart, the consumer is assigned a new member id, but as a static member it continues with the same instance id, and the same assignment of topic partitions is made. If the consumer application does not make a call to poll at least every max.poll.interval.ms milliseconds, the consumer is considered to be failed, causing a rebalance. If the application cannot process all the records returned from poll in time, you can avoid a rebalance by using the max.poll.interval.ms property to specify the interval in milliseconds between polls for new messages from a consumer. Or you can use the max.poll.records property to set a maximum limit on the number of records returned from the consumer buffer, allowing your application to process fewer records within the max.poll.interval.ms limit. # ... group.instance.id= UNIQUE-ID 1 max.poll.interval.ms=300000 2 max.poll.records=500 3 # ... 1 The unique instance id ensures that a new consumer instance receives the same assignment of topic partitions. 2 Set the interval to check the consumer is continuing to process messages. 3 Set the maximum number of records returned from the consumer buffer in a single poll. 12.9. Frequently asked questions 12.9.1. Questions related to the Cluster Operator 12.9.1.1. Why do I need cluster administrator privileges to install AMQ Streams?
To install AMQ Streams, you need to be able to create the following cluster-scoped resources: Custom Resource Definitions (CRDs) to instruct OpenShift about resources that are specific to AMQ Streams, such as Kafka and KafkaConnect ClusterRoles and ClusterRoleBindings Cluster-scoped resources, which are not scoped to a particular OpenShift namespace, typically require cluster administrator privileges to install. As a cluster administrator, you can inspect all the resources being installed (in the /install/ directory) to ensure that the ClusterRoles do not grant unnecessary privileges. After installation, the Cluster Operator runs as a regular Deployment , so any standard (non-admin) OpenShift user with privileges to access the Deployment can configure it. The cluster administrator can grant standard users the privileges necessary to manage Kafka custom resources. See also: Why does the Cluster Operator need to create ClusterRoleBindings ? Can standard OpenShift users create Kafka custom resources? 12.9.1.2. Why does the Cluster Operator need to create ClusterRoleBindings ? OpenShift has built-in privilege escalation prevention , which means that the Cluster Operator cannot grant privileges it does not have itself, specifically, it cannot grant such privileges in a namespace it cannot access. Therefore, the Cluster Operator must have the privileges necessary for all the components it orchestrates. The Cluster Operator needs to be able to grant access so that: The Topic Operator can manage KafkaTopics , by creating Roles and RoleBindings in the namespace that the operator runs in The User Operator can manage KafkaUsers , by creating Roles and RoleBindings in the namespace that the operator runs in The failure domain of a Node is discovered by AMQ Streams, by creating a ClusterRoleBinding When using rack-aware partition assignment, the broker pod needs to be able to get information about the Node it is running on, for example, the Availability Zone in Amazon AWS. A Node is a cluster-scoped resource, so access to it can only be granted through a ClusterRoleBinding , not a namespace-scoped RoleBinding . 12.9.1.3. Can standard OpenShift users create Kafka custom resources? By default, standard OpenShift users will not have the privileges necessary to manage the custom resources handled by the Cluster Operator. The cluster administrator can grant a user the necessary privileges using OpenShift RBAC resources. For more information, see Designating AMQ Streams administrators in the Deploying and Upgrading AMQ Streams on OpenShift guide. 12.9.1.4. What do the failed to acquire lock warnings in the log mean? For each cluster, the Cluster Operator executes only one operation at a time. The Cluster Operator uses locks to make sure that there are never two parallel operations running for the same cluster. Other operations must wait until the current operation completes before the lock is released. INFO Examples of cluster operations include cluster creation , rolling update , scale down , and scale up . If the waiting time for the lock takes too long, the operation times out and the following warning message is printed to the log: 2018-03-04 17:09:24 WARNING AbstractClusterOperations:290 - Failed to acquire lock for kafka cluster lock::kafka::myproject::my-cluster Depending on the exact configuration of STRIMZI_FULL_RECONCILIATION_INTERVAL_MS and STRIMZI_OPERATION_TIMEOUT_MS , this warning message might appear occasionally without indicating any underlying issues. 
Operations that time out are picked up in the periodic reconciliation, so that the operation can acquire the lock and execute again. Should this message appear periodically, even in situations when there should be no other operations running for a given cluster, it might indicate that the lock was not properly released due to an error. If this is the case, try restarting the Cluster Operator. 12.9.1.5. Why is hostname verification failing when connecting to NodePorts using TLS? Currently, off-cluster access using NodePorts with TLS encryption enabled does not support TLS hostname verification. As a result, the clients that verify the hostname will fail to connect. For example, the Java client will fail with the following exception: Caused by: java.security.cert.CertificateException: No subject alternative names matching IP address 168.72.15.231 found at sun.security.util.HostnameChecker.matchIP(HostnameChecker.java:168) at sun.security.util.HostnameChecker.match(HostnameChecker.java:94) at sun.security.ssl.X509TrustManagerImpl.checkIdentity(X509TrustManagerImpl.java:455) at sun.security.ssl.X509TrustManagerImpl.checkIdentity(X509TrustManagerImpl.java:436) at sun.security.ssl.X509TrustManagerImpl.checkTrusted(X509TrustManagerImpl.java:252) at sun.security.ssl.X509TrustManagerImpl.checkServerTrusted(X509TrustManagerImpl.java:136) at sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1501) ... 17 more To connect, you must disable hostname verification. In the Java client, you can do this by setting the configuration option ssl.endpoint.identification.algorithm to an empty string. When configuring the client using a properties file, you can do it this way: ssl.endpoint.identification.algorithm= When configuring the client directly in Java, set the configuration option to an empty string: props.put("ssl.endpoint.identification.algorithm", ""); | [
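A fuller, hedged sketch of a client configured this way follows. The bootstrap address, truststore path, and password are hypothetical placeholders for your own environment; only the empty ssl.endpoint.identification.algorithm setting is the point being illustrated.

import java.util.Properties;

import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.config.SslConfigs;

public class NodePortTlsClient {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG, "192.0.2.10:31234"); // hypothetical node address and NodePort
        props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SSL");
        props.put(SslConfigs.SSL_TRUSTSTORE_LOCATION_CONFIG, "/tmp/truststore.p12"); // hypothetical truststore
        props.put(SslConfigs.SSL_TRUSTSTORE_PASSWORD_CONFIG, "changeit");
        props.put(SslConfigs.SSL_TRUSTSTORE_TYPE_CONFIG, "PKCS12");
        // Disable TLS hostname verification, as described above
        props.put(SslConfigs.SSL_ENDPOINT_IDENTIFICATION_ALGORITHM_CONFIG, "");

        props.put("group.id", "my-group-id");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // Use the consumer as normal
        }
    }
}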
"get k NAME DESIRED KAFKA REPLICAS DESIRED ZK REPLICAS my-cluster 3 3",
"get strimzi NAME DESIRED KAFKA REPLICAS DESIRED ZK REPLICAS kafka.kafka.strimzi.io/my-cluster 3 3 NAME PARTITIONS REPLICATION FACTOR kafkatopic.kafka.strimzi.io/kafka-apps 3 3 NAME AUTHENTICATION AUTHORIZATION kafkauser.kafka.strimzi.io/my-user tls simple",
"get strimzi -o name kafka.kafka.strimzi.io/my-cluster kafkatopic.kafka.strimzi.io/kafka-apps kafkauser.kafka.strimzi.io/my-user",
"delete USD(oc get strimzi -o name) kafka.kafka.strimzi.io \"my-cluster\" deleted kafkatopic.kafka.strimzi.io \"kafka-apps\" deleted kafkauser.kafka.strimzi.io \"my-user\" deleted",
"get kafka my-cluster -o=jsonpath='{.status.listeners[?(@.name==\"tls\")].bootstrapServers}{\"\\n\"}' my-cluster-kafka-bootstrap.myproject.svc:9093",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: spec: # status: conditions: 1 - lastTransitionTime: 2021-07-23T23:46:57+0000 status: \"True\" type: Ready 2 observedGeneration: 4 3 listeners: 4 - addresses: - host: my-cluster-kafka-bootstrap.myproject.svc port: 9092 type: plain - addresses: - host: my-cluster-kafka-bootstrap.myproject.svc port: 9093 certificates: - | -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- type: tls - addresses: - host: 172.29.49.180 port: 9094 certificates: - | -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- type: external clusterId: CLUSTER-ID 5",
"get kafka <kafka_resource_name> -o jsonpath='{.status}'",
"annotate KIND-OF-CUSTOM-RESOURCE NAME-OF-CUSTOM-RESOURCE strimzi.io/pause-reconciliation=\"true\"",
"annotate KafkaConnect my-connect strimzi.io/pause-reconciliation=\"true\"",
"describe KIND-OF-CUSTOM-RESOURCE NAME-OF-CUSTOM-RESOURCE",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: annotations: strimzi.io/pause-reconciliation: \"true\" strimzi.io/use-connector-resources: \"true\" creationTimestamp: 2021-03-12T10:47:11Z # spec: # status: conditions: - lastTransitionTime: 2021-03-12T10:47:41.689249Z status: \"True\" type: ReconciliationPaused",
"apiVersion: admissionregistration.k8s.io/v1 kind: ValidatingWebhookConfiguration webhooks: - name: strimzi-drain-cleaner.strimzi.io rules: - apiGroups: [\"\"] apiVersions: [\"v1\"] operations: [\"CREATE\"] resources: [\"pods/eviction\"] scope: \"Namespaced\" clientConfig: service: namespace: \"strimzi-drain-cleaner\" name: \"strimzi-drain-cleaner\" path: /drainer port: 443 caBundle: Cg== #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: name: my-topic labels: strimzi.io/cluster: my-cluster spec: partitions: 1 replicas: 3 config: # min.insync.replicas: 2 #",
"apiVersion: apps/v1 kind: Deployment spec: # template: spec: serviceAccountName: strimzi-drain-cleaner containers: - name: strimzi-drain-cleaner # command: - \"/application\" - \"-Dquarkus.http.host=0.0.0.0\" - \"--kafka\" - \"--zookeeper\" 1 #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster namespace: myproject spec: kafka: template: podDisruptionBudget: maxUnavailable: 0 # zookeeper: template: podDisruptionBudget: maxUnavailable: 0 #",
"apply -f <kafka-configuration-file>",
"apply -f ./install/drain-cleaner/openshift",
"get nodes drain <name-of-node> --delete-emptydir-data --ignore-daemonsets --timeout=6000s --force",
"INFO ... Received eviction webhook for Pod my-cluster-zookeeper-2 in namespace my-project INFO ... Pod my-cluster-zookeeper-2 in namespace my-project will be annotated for restart INFO ... Pod my-cluster-zookeeper-2 in namespace my-project found and annotated for restart INFO ... Received eviction webhook for Pod my-cluster-kafka-0 in namespace my-project INFO ... Pod my-cluster-kafka-0 in namespace my-project will be annotated for restart INFO ... Pod my-cluster-kafka-0 in namespace my-project found and annotated for restart",
"INFO PodOperator:68 - Reconciliation #13(timer) Kafka(my-project/my-cluster): Rolling Pod my-cluster-zookeeper-2 INFO PodOperator:68 - Reconciliation #13(timer) Kafka(my-project/my-cluster): Rolling Pod my-cluster-kafka-0 INFO AbstractOperator:500 - Reconciliation #13(timer) Kafka(my-project/my-cluster): reconciled",
"annotate statefulset <cluster_name> -kafka strimzi.io/manual-rolling-update=true annotate statefulset <cluster_name> -zookeeper strimzi.io/manual-rolling-update=true",
"annotate strimzipodset <cluster_name> -kafka strimzi.io/manual-rolling-update=true annotate strimzipodset <cluster_name> -zookeeper strimzi.io/manual-rolling-update=true",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: name: my-topic labels: strimzi.io/cluster: my-cluster spec: partitions: 1 replicas: 3 config: # min.insync.replicas: 2 #",
"annotate pod cluster-name -kafka- index strimzi.io/manual-rolling-update=true annotate pod cluster-name -zookeeper- index strimzi.io/manual-rolling-update=true",
"apiVersion: v1 kind: Service metadata: annotations: strimzi.io/discovery: |- [ { \"port\" : 9092, \"tls\" : false, \"protocol\" : \"kafka\", \"auth\" : \"scram-sha-512\" }, { \"port\" : 9093, \"tls\" : true, \"protocol\" : \"kafka\", \"auth\" : \"tls\" } ] labels: strimzi.io/cluster: my-cluster strimzi.io/discovery: \"true\" strimzi.io/kind: Kafka strimzi.io/name: my-cluster-kafka-bootstrap name: my-cluster-kafka-bootstrap spec: #",
"apiVersion: v1 kind: Service metadata: annotations: strimzi.io/discovery: |- [ { \"port\" : 8080, \"tls\" : false, \"auth\" : \"none\", \"protocol\" : \"http\" } ] labels: strimzi.io/cluster: my-bridge strimzi.io/discovery: \"true\" strimzi.io/kind: KafkaBridge strimzi.io/name: my-bridge-bridge-service",
"get service -l strimzi.io/discovery=true",
"apiVersion: v1 kind: PersistentVolume spec: # persistentVolumeReclaimPolicy: Retain",
"apiVersion: v1 kind: StorageClass metadata: name: gp2-retain parameters: # reclaimPolicy: Retain",
"apiVersion: v1 kind: PersistentVolume spec: # storageClassName: gp2-retain",
"get pv",
"NAME RECLAIMPOLICY CLAIM pvc-5e9c5c7f-3317-11ea-a650-06e1eadd9a4c ... Retain ... myproject/data-my-cluster-zookeeper-1 pvc-5e9cc72d-3317-11ea-97b0-0aef8816c7ea ... Retain ... myproject/data-my-cluster-zookeeper-0 pvc-5ead43d1-3317-11ea-97b0-0aef8816c7ea ... Retain ... myproject/data-my-cluster-zookeeper-2 pvc-7e1f67f9-3317-11ea-a650-06e1eadd9a4c ... Retain ... myproject/data-0-my-cluster-kafka-0 pvc-7e21042e-3317-11ea-9786-02deaf9aa87e ... Retain ... myproject/data-0-my-cluster-kafka-1 pvc-7e226978-3317-11ea-97b0-0aef8816c7ea ... Retain ... myproject/data-0-my-cluster-kafka-2",
"create namespace myproject",
"apiVersion: v1 kind: PersistentVolumeClaim metadata: name: data-0-my-cluster-kafka-0 spec: accessModes: - ReadWriteOnce resources: requests: storage: 100Gi storageClassName: gp2-retain volumeMode: Filesystem volumeName: pvc-7e1f67f9-3317-11ea-a650-06e1eadd9a4c",
"apiVersion: v1 kind: PersistentVolume metadata: annotations: kubernetes.io/createdby: aws-ebs-dynamic-provisioner pv.kubernetes.io/bound-by-controller: \"yes\" pv.kubernetes.io/provisioned-by: kubernetes.io/aws-ebs creationTimestamp: \"<date>\" finalizers: - kubernetes.io/pv-protection labels: failure-domain.beta.kubernetes.io/region: eu-west-1 failure-domain.beta.kubernetes.io/zone: eu-west-1c name: pvc-7e226978-3317-11ea-97b0-0aef8816c7ea resourceVersion: \"39431\" selfLink: /api/v1/persistentvolumes/pvc-7e226978-3317-11ea-97b0-0aef8816c7ea uid: 7efe6b0d-3317-11ea-a650-06e1eadd9a4c spec: accessModes: - ReadWriteOnce awsElasticBlockStore: fsType: xfs volumeID: aws://eu-west-1c/vol-09db3141656d1c258 capacity: storage: 100Gi claimRef: apiVersion: v1 kind: PersistentVolumeClaim name: data-0-my-cluster-kafka-2 namespace: myproject resourceVersion: \"39113\" uid: 54be1c60-3319-11ea-97b0-0aef8816c7ea nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: failure-domain.beta.kubernetes.io/zone operator: In values: - eu-west-1c - key: failure-domain.beta.kubernetes.io/region operator: In values: - eu-west-1 persistentVolumeReclaimPolicy: Retain storageClassName: gp2-retain volumeMode: Filesystem",
"claimRef: apiVersion: v1 kind: PersistentVolumeClaim name: data-0-my-cluster-kafka-2 namespace: myproject resourceVersion: \"39113\" uid: 54be1c60-3319-11ea-97b0-0aef8816c7ea",
"create -f install/cluster-operator -n my-project",
"apply -f kafka.yaml",
"run kafka-admin -ti --image=registry.redhat.io/amq7/amq-streams-kafka-31-rhel8:2.1.0 --rm=true --restart=Never -- ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --topic __strimzi-topic-operator-kstreams-topic-store-changelog --delete && ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --topic __strimzi_store_topic --delete",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: # entityOperator: topicOperator: {} 1 #",
"get KafkaTopic",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # config: client.quota.callback.class: io.strimzi.kafka.quotas.StaticQuotaCallback 1 client.quota.callback.static.produce: 1000000 2 client.quota.callback.static.fetch: 1000000 3 client.quota.callback.static.storage.soft: 400000000000 4 client.quota.callback.static.storage.hard: 500000000000 5 client.quota.callback.static.storage.check-interval: 5 6",
"apply -f <kafka_configuration_file>",
"num.partitions=1 default.replication.factor=3 offsets.topic.replication.factor=3 transaction.state.log.replication.factor=3 transaction.state.log.min.isr=2 log.retention.hours=168 log.segment.bytes=1073741824 log.retention.check.interval.ms=300000 num.network.threads=3 num.io.threads=8 num.recovery.threads.per.data.dir=1 socket.send.buffer.bytes=102400 socket.receive.buffer.bytes=102400 socket.request.max.bytes=104857600 group.initial.rebalance.delay.ms=0 zookeeper.connection.timeout.ms=6000",
"num.partitions=1 auto.create.topics.enable=false default.replication.factor=3 min.insync.replicas=2 replica.fetch.max.bytes=1048576",
"auto.create.topics.enable=false delete.topic.enable=true",
"transaction.state.log.replication.factor=3 transaction.state.log.min.isr=2",
"offsets.topic.num.partitions=50 offsets.topic.replication.factor=3",
"num.network.threads=3 1 queued.max.requests=500 2 num.io.threads=8 3 num.recovery.threads.per.data.dir=1 4",
"replica.socket.receive.buffer.bytes=65536",
"socket.request.max.bytes=104857600",
"socket.send.buffer.bytes=1048576 socket.receive.buffer.bytes=1048576",
"log.segment.bytes=1073741824 log.roll.ms=604800000",
"log.retention.ms=1680000",
"log.retention.bytes=1073741824",
"log.segment.delete.delay.ms=60000",
"log.cleaner.enable=true",
"log.cleanup.policy=compact,delete",
"log.retention.check.interval.ms=300000",
"log.cleaner.backoff.ms=15000",
"log.cleaner.delete.retention.ms=86400000",
"log.cleaner.dedupe.buffer.size=134217728 log.cleaner.io.buffer.load.factor=0.9",
"log.cleaner.threads=8",
"log.cleaner.io.max.bytes.per.second=1.7976931348623157E308",
"log.flush.scheduler.interval.ms=2000",
"log.flush.interval.ms=50000 log.flush.interval.messages=100000",
"replica.lag.time.max.ms=30000",
"# auto.leader.rebalance.enable=true leader.imbalance.check.interval.seconds=300 leader.imbalance.per.broker.percentage=10 #",
"unclean.leader.election.enable=false",
"group.initial.rebalance.delay.ms=3000",
"bootstrap.servers=localhost:9092 1 key.serializer=org.apache.kafka.common.serialization.StringSerializer 2 value.serializer=org.apache.kafka.common.serialization.StringSerializer 3 client.id=my-client 4 compression.type=gzip 5",
"acks=all 1",
"min.insync.replicas=2 1",
"enable.idempotence=true 1 max.in.flight.requests.per.connection=5 2 acks=all 3 retries=2147483647 4",
"enable.idempotence=false 1 max.in.flight.requests.per.connection=1 2 retries=2147483647",
"enable.idempotence=true max.in.flight.requests.per.connection=5 acks=all retries=2147483647 transactional.id= UNIQUE-ID 1 transaction.timeout.ms=900000 2",
"linger.ms=100 1 batch.size=16384 2 buffer.memory=33554432 3",
"delivery.timeout.ms=120000 1 partitioner.class=my-custom-partitioner 2",
"bootstrap.servers=localhost:9092 1 key.deserializer=org.apache.kafka.common.serialization.StringDeserializer 2 value.deserializer=org.apache.kafka.common.serialization.StringDeserializer 3 client.id=my-client 4 group.id=my-group-id 5",
"group.id=my-group-id 1",
"fetch.max.wait.ms=500 1 fetch.min.bytes=16384 2",
"NUMBER-OF-BROKERS * fetch.max.bytes and NUMBER-OF-PARTITIONS * max.partition.fetch.bytes",
"fetch.max.bytes=52428800 1 max.partition.fetch.bytes=1048576 2",
"enable.auto.commit=false 1",
"enable.auto.commit=false isolation.level=read_committed 1",
"heartbeat.interval.ms=3000 1 session.timeout.ms=10000 2 auto.offset.reset=earliest 3",
"group.instance.id= UNIQUE-ID 1 max.poll.interval.ms=300000 2 max.poll.records=500 3",
"2018-03-04 17:09:24 WARNING AbstractClusterOperations:290 - Failed to acquire lock for kafka cluster lock::kafka::myproject::my-cluster",
"Caused by: java.security.cert.CertificateException: No subject alternative names matching IP address 168.72.15.231 found at sun.security.util.HostnameChecker.matchIP(HostnameChecker.java:168) at sun.security.util.HostnameChecker.match(HostnameChecker.java:94) at sun.security.ssl.X509TrustManagerImpl.checkIdentity(X509TrustManagerImpl.java:455) at sun.security.ssl.X509TrustManagerImpl.checkIdentity(X509TrustManagerImpl.java:436) at sun.security.ssl.X509TrustManagerImpl.checkTrusted(X509TrustManagerImpl.java:252) at sun.security.ssl.X509TrustManagerImpl.checkServerTrusted(X509TrustManagerImpl.java:136) at sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1501) ... 17 more",
"ssl.endpoint.identification.algorithm=",
"props.put(\"ssl.endpoint.identification.algorithm\", \"\");"
]
Chapter 55. overcloud This chapter describes the commands under the overcloud command.

55.1. overcloud admin authorize Deploy the ssh keys needed by Mistral. Usage: Table 55.1. Command arguments Value Summary -h, --help Show this help message and exit --stack STACK Name or id of heat stack (default=env: OVERCLOUD_STACK_NAME) --overcloud-ssh-user OVERCLOUD_SSH_USER User for ssh access to overcloud nodes --overcloud-ssh-key OVERCLOUD_SSH_KEY Key path for ssh access to overcloud nodes. When undefined the key will be autodetected. --overcloud-ssh-network OVERCLOUD_SSH_NETWORK Network name to use for ssh access to overcloud nodes. --overcloud-ssh-enable-timeout OVERCLOUD_SSH_ENABLE_TIMEOUT Timeout for the ssh enable process to finish. --overcloud-ssh-port-timeout OVERCLOUD_SSH_PORT_TIMEOUT Timeout to wait for the ssh port to become active.

55.2. overcloud backup Backup the Overcloud Usage: Table 55.2. Command arguments Value Summary --init [INIT] Initialize environment for backup, using rear or nfs as args which will check for package install and configured ReaR or NFS server. Defaults to: rear. i.e. --init rear. WARNING: This flag will be deprecated and replaced by --setup-rear and --setup-nfs. --setup-nfs Setup the nfs server on the backup node which will install required packages and configuration on the host BackupNode in the ansible inventory. --setup-rear Setup rear on the overcloud controller hosts which will install and configure ReaR. --cron Sets up a new cron job that by default will execute a weekly backup on Sundays at midnight, but that can be customized by using the tripleo_backup_and_restore_cron extra-var. --inventory INVENTORY Tripleo inventory file generated with the tripleo-ansible-inventory command. Defaults to: /home/stack/tripleo-inventory.yaml. --storage-ip STORAGE_IP Storage ip is an optional parameter which allows an ip of a storage server to be specified, overriding the default undercloud. WARNING: This flag will be deprecated in favor of --extra-vars which will allow passing this and other variables. --extra-vars EXTRA_VARS Set additional variables as a dict or as an absolute path of a JSON or YAML file type. i.e. --extra-vars {"key": "val", "key2": "val2"} i.e. --extra-vars /path/to/my_vars.yaml i.e. --extra-vars /path/to/my_vars.json. For more information about the variables that can be passed, visit: https://opendev.org/openstack/tripleo-ansible/src/branch/master/tripleo_ansible/roles/backup_and_restore/defaults/main.yml.

55.3. overcloud cell export Export cell information used as import of another cell Usage: Table 55.3. Command arguments Value Summary -h, --help Show this help message and exit --control-plane-stack <control plane stack> Name of the environment main heat stack to export information from. (default=Env: OVERCLOUD_STACK_NAME) --cell-stack <cell stack>, -e <cell stack> Name of the controller cell heat stack to export information from. Used in case of: control plane stack cell controller stack multiple compute stacks --output-file <output file>, -o <output file> Name of the output file for the cell data export. It will default to "<name>.yaml" --config-download-dir CONFIG_DOWNLOAD_DIR Directory to search for config-download export data. Defaults to $HOME/config-download --force-overwrite, -f Overwrite output file if it exists.

55.4. overcloud config download Download Overcloud Config Usage: Table 55.4.
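For example, a minimal invocation that downloads the rendered configuration for the default overcloud plan into a local directory might look like the following (the output directory shown here is only an illustration):
$ openstack overcloud config download --name overcloud --config-dir ~/overcloud-config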
Command arguments Value Summary -h, --help Show this help message and exit --name NAME The name of the plan, which is used for the object storage container, workflow environment and orchestration stack names. --config-dir CONFIG_DIR The directory where the configuration files will be pushed --config-type CONFIG_TYPE Type of object config to be extract from the deployment, defaults to all keys available --no-preserve-config If specified, will delete and recreate the --config- dir if it already exists. Default is to use the existing dir location and overwrite files. Files in --config-dir not from the stack will be preserved by default. 55.5. overcloud container image build Build overcloud container images with kolla-build. Usage: Table 55.5. Command arguments Value Summary -h, --help Show this help message and exit --config-file <yaml config file> Yaml config file specifying the images to build. may be specified multiple times. Order is preserved, and later files will override some options in files. Other options will append. If not specified, the default set of containers will be built. --kolla-config-file <config file> Path to a kolla config file to use. multiple config files can be specified, with values in later files taking precedence. By default, tripleo kolla conf file /usr/share/tripleo-common/container- images/tripleo_kolla_config_overrides.conf is added. --list-images Show the images which would be built instead of building them. --list-dependencies Show the image build dependencies instead of building them. --exclude <container-name> Name of a container to match against the list of containers to be built to skip. Can be specified multiple times. --use-buildah Use buildah instead of docker to build the images with Kolla. --work-dir <container builds directory> Tripleo container builds directory, storing configs and logs for each image and its dependencies. 55.6. overcloud container image prepare Generate files defining the images, tags and registry. Usage: Table 55.6. Command arguments Value Summary -h, --help Show this help message and exit --template-file <yaml template file> Yaml template file which the images config file will be built from. Default: /usr/share/tripleo-common/container- images/tripleo_containers.yaml.j2 --push-destination <location> Location of image registry to push images to. if specified, a push_destination will be set for every image entry. --tag <tag> Override the default tag substitution. if --tag-from- label is specified, start discovery with this tag. Default: 16.2 --tag-from-label <image label> Use the value of the specified label(s) to discover the tag. Labels can be combined in a template format, for example: {version}-{release} --namespace <namespace> Override the default namespace substitution. Default: registry.redhat.io/rhosp-rhel8 --prefix <prefix> Override the default name prefix substitution. Default: openstack- --suffix <suffix> Override the default name suffix substitution. Default: --set <variable=value> Set the value of a variable in the template, even if it has no dedicated argument such as "--suffix". --exclude <regex> Pattern to match against resulting imagename entries to exclude from the final output. Can be specified multiple times. --include <regex> Pattern to match against resulting imagename entries to include in final output. Can be specified multiple times, entries not matching any --include will be excluded. --exclude is ignored if --include is used. 
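As an illustration of the overcloud container image prepare options described above, the following combines the documented default namespace, prefix, and tag with a hypothetical local registry as the push destination:
$ openstack overcloud container image prepare \
  --namespace registry.redhat.io/rhosp-rhel8 \
  --prefix openstack- \
  --tag 16.2 \
  --push-destination 192.168.24.1:8787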
--output-images-file <file path> File to write resulting image entries to, as well as stdout. Any existing file will be overwritten. --environment-file <file path>, -e <file path> Environment files specifying which services are containerized. Entries will be filtered to only contain images used by containerized services. (Can be specified more than once.) --environment-directory <HEAT ENVIRONMENT DIRECTORY> Environment file directories that are automatically added to the update command. Entries will be filtered to only contain images used by containerized services. Can be specified more than once. Files in directories are loaded in ascending sort order. --output-env-file <file path> File to write heat environment file which specifies all image parameters. Any existing file will be overwritten. --roles-file ROLES_FILE, -r ROLES_FILE Roles file, overrides the default roles_data.yaml in the t-h-t templates directory used for deployment. May be an absolute path or the path relative to the templates dir. --modify-role MODIFY_ROLE Name of ansible role to run between every image upload pull and push. --modify-vars MODIFY_VARS Ansible variable file containing variables to use when invoking the role --modify-role. 55.7. overcloud container image tag discover Discover the versioned tag for an image. Usage: Table 55.7. Command arguments Value Summary -h, --help Show this help message and exit --image <container image> Fully qualified name of the image to discover the tag for (Including registry and stable tag). --tag-from-label <image label> Use the value of the specified label(s) to discover the tag. Labels can be combined in a template format, for example: {version}-{release} 55.8. overcloud container image upload Push overcloud container images to registries. Usage: Table 55.8. Command arguments Value Summary -h, --help Show this help message and exit --config-file <yaml config file> Yaml config file specifying the image build. may be specified multiple times. Order is preserved, and later files will override some options in files. Other options will append. --cleanup <full, partial, none> Cleanup behavior for local images left after upload. The default full will attempt to delete all local images. partial will leave images required for deployment on this host. none will do no cleanup. 55.9. overcloud credentials Create the overcloudrc files Usage: Table 55.9. Positional arguments Value Summary plan The name of the plan you want to create rc files for. Table 55.10. Command arguments Value Summary -h, --help Show this help message and exit --directory [DIRECTORY] The directory to create the rc files. defaults to the current directory. 55.10. overcloud delete Delete overcloud stack and plan Usage: Table 55.11. Positional arguments Value Summary stack Name or id of heat stack to delete(default=env: OVERCLOUD_STACK_NAME) Table 55.12. Command arguments Value Summary -h, --help Show this help message and exit -y, --yes Skip yes/no prompt (assume yes). -s, --skip-ipa-cleanup Skip removing overcloud hosts, services, and dns records from FreeIPA. This is particularly relevant for deployments using certificates from FreeIPA for TLS. By default, overcloud hosts, services, and DNS records will be removed from FreeIPA before deleting the overcloud. Using this option might require you to manually cleanup FreeIPA later. 55.11. overcloud deploy Deploy Overcloud Usage: Table 55.13. 
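For example, a typical deployment run that uses the default template location and layers extra environment files on top might be invoked as follows (the environment file paths are placeholders):
$ openstack overcloud deploy --templates \
  --stack overcloud --timeout 240 \
  -e ~/templates/node-info.yaml \
  -e ~/templates/custom-config.yaml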
Command arguments Value Summary --templates [TEMPLATES] The directory containing the heat templates to deploy --stack STACK Stack name to create or update --timeout <TIMEOUT>, -t <TIMEOUT> Deployment timeout in minutes. --control-scale CONTROL_SCALE New number of control nodes. (deprecated. use an environment file and set the parameter ControllerCount. This option will be removed in the "U" release.) --compute-scale COMPUTE_SCALE New number of compute nodes. (deprecated. use an environment file and set the parameter ComputeCount. This option will be removed in the "U" release.) --ceph-storage-scale CEPH_STORAGE_SCALE New number of ceph storage nodes. (deprecated. use an environment file and set the parameter CephStorageCount. This option will be removed in the "U" release.) --block-storage-scale BLOCK_STORAGE_SCALE New number of cinder storage nodes. (deprecated. use an environment file and set the parameter BlockStorageCount. This option will be removed in the "U" release.) --swift-storage-scale SWIFT_STORAGE_SCALE New number of swift storage nodes. (deprecated. use an environment file and set the parameter ObjectStorageCount. This option will be removed in the "U" release.) --control-flavor CONTROL_FLAVOR Nova flavor to use for control nodes. (deprecated. use an environment file and set the parameter OvercloudControlFlavor. This option will be removed in the "U" release.) --compute-flavor COMPUTE_FLAVOR Nova flavor to use for compute nodes. (deprecated. use an environment file and set the parameter OvercloudComputeFlavor. This option will be removed in the "U" release.) --ceph-storage-flavor CEPH_STORAGE_FLAVOR Nova flavor to use for ceph storage nodes. (DEPRECATED. Use an environment file and set the parameter OvercloudCephStorageFlavor. This option will be removed in the "U" release.) --block-storage-flavor BLOCK_STORAGE_FLAVOR Nova flavor to use for cinder storage nodes (DEPRECATED. Use an environment file and set the parameter OvercloudBlockStorageFlavor. This option will be removed in the "U" release.) --swift-storage-flavor SWIFT_STORAGE_FLAVOR Nova flavor to use for swift storage nodes (DEPRECATED. Use an environment file and set the parameter OvercloudSwiftStorageFlavor. This option will be removed in the "U" release.) --libvirt-type {kvm,qemu} Libvirt domain type. --ntp-server NTP_SERVER The ntp for overcloud nodes. --no-proxy NO_PROXY A comma separated list of hosts that should not be proxied. --overcloud-ssh-user OVERCLOUD_SSH_USER User for ssh access to overcloud nodes --overcloud-ssh-key OVERCLOUD_SSH_KEY Key path for ssh access to overcloud nodes. Whenundefined the key will be autodetected. --overcloud-ssh-network OVERCLOUD_SSH_NETWORK Network name to use for ssh access to overcloud nodes. --overcloud-ssh-enable-timeout OVERCLOUD_SSH_ENABLE_TIMEOUT Timeout for the ssh enable process to finish. --overcloud-ssh-port-timeout OVERCLOUD_SSH_PORT_TIMEOUT Timeout for to wait for the ssh port to become active. --environment-file <HEAT ENVIRONMENT FILE>, -e <HEAT ENVIRONMENT FILE> Environment files to be passed to the heat stack- create or heat stack-update command. (Can be specified more than once.) --environment-directory <HEAT ENVIRONMENT DIRECTORY> Environment file directories that are automatically added to the heat stack-create or heat stack-update commands. Can be specified more than once. Files in directories are loaded in ascending sort order. --roles-file ROLES_FILE, -r ROLES_FILE Roles file, overrides the default roles_data.yaml in the --templates directory. 
May be an absolute path or the path relative to --templates --networks-file NETWORKS_FILE, -n NETWORKS_FILE Networks file, overrides the default network_data.yaml in the --templates directory --plan-environment-file PLAN_ENVIRONMENT_FILE, -p PLAN_ENVIRONMENT_FILE Plan environment file, overrides the default plan- environment.yaml in the --templates directory --no-cleanup Don't cleanup temporary files, just log their location --update-plan-only Only update the plan. do not perform the actual deployment. NOTE: Will move to a discrete command in a future release. --validation-errors-nonfatal Allow the deployment to continue in spite of validation errors. Note that attempting deployment while errors exist is likely to fail. --validation-warnings-fatal Exit if there are warnings from the configuration pre- checks. --disable-validations Deprecated. disable the pre-deployment validations entirely. These validations are the built-in pre- deployment validations. To enable external validations from tripleo-validations, use the --run-validations flag. These validations are now run via the external validations in tripleo-validations. --inflight-validations Activate in-flight validations during the deploy. in- flight validations provide a robust way to ensure deployed services are running right after their activation. Defaults to False. --dry-run Only run validations, but do not apply any changes. --run-validations Run external validations from the tripleo-validations project. --skip-postconfig Skip the overcloud post-deployment configuration. --force-postconfig Force the overcloud post-deployment configuration. --skip-deploy-identifier Skip generation of a unique identifier for the DeployIdentifier parameter. The software configuration deployment steps will only be triggered if there is an actual change to the configuration. This option should be used with Caution, and only if there is confidence that the software configuration does not need to be run, such as when scaling out certain roles. --answers-file ANSWERS_FILE Path to a yaml file with arguments and parameters. --disable-password-generation Disable password generation. --deployed-server Use pre-provisioned overcloud nodes. removes baremetal,compute and image services requirements from theundercloud node. Must only be used with the-- disable-validations. --config-download Run deployment via config-download mechanism. this is now the default, and this CLI options may be removed in the future. --no-config-download, --stack-only Disable the config-download workflow and only create the stack and associated OpenStack resources. No software configuration will be applied. --config-download-only Disable the stack create/update, and only run the config-download workflow to apply the software configuration. --output-dir OUTPUT_DIR Directory to use for saved output when using --config- download. The directory must be writeable by the mistral user. When not specified, the default server side value will be used (/var/lib/mistral/<execution id>. --override-ansible-cfg OVERRIDE_ANSIBLE_CFG Path to ansible configuration file. the configuration in the file will override any configuration used by config-download by default. --config-download-timeout CONFIG_DOWNLOAD_TIMEOUT Timeout (in minutes) to use for config-download steps. If unset, will default to however much time is leftover from the --timeout parameter after the stack operation. --deployment-python-interpreter DEPLOYMENT_PYTHON_INTERPRETER The path to python interpreter to use for the deployment actions. 
This may need to be used if deploying on a python2 host from a python3 system or vice versa. -b <baremetal_deployment.yaml>, --baremetal-deployment <baremetal_deployment.yaml> Configuration file describing the baremetal deployment --limit LIMIT A string that identifies a single node or comma- separatedlist of nodes the config-download Ansible playbook execution will be limited to. For example: --limit "compute-0,compute-1,compute-5". --tags TAGS A list of tags to use when running the config-download ansible-playbook command. --skip-tags SKIP_TAGS A list of tags to skip when running the config- download ansible-playbook command. --ansible-forks ANSIBLE_FORKS The number of ansible forks to use for the config- download ansible-playbook command. 55.12. overcloud execute Execute a Heat software config on the servers. Usage: Table 55.14. Positional arguments Value Summary file_in None Table 55.15. Command arguments Value Summary -h, --help Show this help message and exit -s SERVER_NAME, --server_name SERVER_NAME Nova server_name or partial name to match. -g GROUP, --group GROUP Heat software config "group" type. defaults to "script". 55.13. overcloud export ceph Export Ceph information used as import of another stack Export Ceph information from one or more stacks to be used as input of another stack. Creates a valid YAML file with the CephExternalMultiConfig parameter populated. Usage: Table 55.16. Command arguments Value Summary -h, --help Show this help message and exit --stack <stack> Name of the overcloud stack(s) to export ceph information from. If a comma delimited list of stacks is passed, Ceph information for all stacks will be exported into a single file. (default=Env: OVERCLOUD_STACK_NAME) --cephx-key-client-name <cephx>, -k <cephx> Name of the cephx client key to export. (default=openstack) --output-file <output file>, -o <output file> Name of the output file for the ceph data export. Defaults to "ceph-export-<STACK>.yaml" if one stack is provided. Defaults to "ceph-export-<N>-stacks.yaml" if N stacks are provided. --force-overwrite, -f Overwrite output file if it exists. --config-download-dir CONFIG_DOWNLOAD_DIR Directory to search for config-download export data. Defaults to /var/lib/mistral 55.14. overcloud export Export stack information used as import of another stack Usage: Table 55.17. Command arguments Value Summary -h, --help Show this help message and exit --stack <stack> Name of the environment main heat stack to export information from. (default=Env: OVERCLOUD_STACK_NAME) --output-file <output file>, -o <output file> Name of the output file for the stack data export. it will default to "<name>.yaml" --force-overwrite, -f Overwrite output file if it exists. --config-download-dir CONFIG_DOWNLOAD_DIR Directory to search for config-download export data. Defaults to /var/lib/mistral/<stack> --no-password-excludes Dont exclude certain passwords from the password export. Defaults to False in that some passwords will be excluded that are not typically necessary. 55.15. overcloud external-update run Run external minor update Ansible playbook This will run the external minor update Ansible playbook, executing tasks from the undercloud. The update playbooks are made available after completion of the overcloud update prepare command. Usage: Table 55.18. Command arguments Value Summary -h, --help Show this help message and exit --static-inventory STATIC_INVENTORY Path to an existing ansible inventory to use. 
if not specified, one will be generated in ~/tripleo-ansible- inventory.yaml --ssh-user SSH_USER Deprecated: only tripleo-admin should be used as ssh user. --tags TAGS A string specifying the tag or comma separated list of tags to be passed as --tags to ansible-playbook. --skip-tags SKIP_TAGS A string specifying the tag or comma separated list of tags to be passed as --skip-tags to ansible-playbook. --stack STACK Name or id of heat stack (default=env: OVERCLOUD_STACK_NAME) -e EXTRA_VARS, --extra-vars EXTRA_VARS Set additional variables as key=value or yaml/json --no-workflow Run ansible-playbook directly via system command instead of running Ansiblevia the TripleO mistral workflows. -y, --yes Use -y or --yes to skip the confirmation required before any upgrade operation. Use this with caution! --limit LIMIT A string that identifies a single node or comma- separatedlist of nodes the config-download Ansible playbook execution will be limited to. For example: --limit "compute-0,compute-1,compute-5". --ansible-forks ANSIBLE_FORKS The number of ansible forks to use for the config- download ansible-playbook command. 55.16. overcloud external-upgrade run Run external major upgrade Ansible playbook This will run the external major upgrade Ansible playbook, executing tasks from the undercloud. The upgrade playbooks are made available after completion of the overcloud upgrade prepare command. Usage: Table 55.19. Command arguments Value Summary -h, --help Show this help message and exit --static-inventory STATIC_INVENTORY Path to an existing ansible inventory to use. if not specified, one will be generated in ~/tripleo-ansible- inventory.yaml --ssh-user SSH_USER Deprecated: only tripleo-admin should be used as ssh user. --tags TAGS A string specifying the tag or comma separated list of tags to be passed as --tags to ansible-playbook. --skip-tags SKIP_TAGS A string specifying the tag or comma separated list of tags to be passed as --skip-tags to ansible-playbook. --stack STACK Name or id of heat stack (default=env: OVERCLOUD_STACK_NAME) -e EXTRA_VARS, --extra-vars EXTRA_VARS Set additional variables as key=value or yaml/json --no-workflow Run ansible-playbook directly via system command instead of running Ansiblevia the TripleO mistral workflows. -y, --yes Use -y or --yes to skip the confirmation required before any upgrade operation. Use this with caution! --limit LIMIT A string that identifies a single node or comma- separatedlist of nodes the config-download Ansible playbook execution will be limited to. For example: --limit "compute-0,compute-1,compute-5". --ansible-forks ANSIBLE_FORKS The number of ansible forks to use for the config- download ansible-playbook command. 55.17. overcloud failures Get deployment failures Usage: Table 55.20. Command arguments Value Summary -h, --help Show this help message and exit --plan PLAN, --stack PLAN Name of the stack/plan. (default: overcloud) 55.18. overcloud generate fencing Generate fencing parameters Usage: Table 55.21. Positional arguments Value Summary instackenv None Table 55.22. Command arguments Value Summary -h, --help Show this help message and exit -a FENCE_ACTION, --action FENCE_ACTION Deprecated: this option is ignored. --delay DELAY Wait delay seconds before fencing is started --ipmi-lanplus Deprecated: this is the default. --ipmi-no-lanplus Do not use lanplus. defaults to: false --ipmi-cipher IPMI_CIPHER Ciphersuite to use (same as ipmitool -c parameter. --ipmi-level IPMI_LEVEL Privilegel level on ipmi device. 
valid levels: callback, user, operator, administrator. --output OUTPUT Write parameters to a file 55.19. overcloud image build Build images for the overcloud Usage: Table 55.23. Command arguments Value Summary -h, --help Show this help message and exit --config-file <yaml config file> Yaml config file specifying the image build. may be specified multiple times. Order is preserved, and later files will override some options in files. Other options will append. --image-name <image name> Name of image to build. may be specified multiple times. If unspecified, will build all images in given YAML files. --no-skip Skip build if cached image exists. --output-directory OUTPUT_DIRECTORY Output directory for images. defaults to USDTRIPLEO_ROOT,or current directory if unset. --temp-dir TEMP_DIR Temporary directory to use when building the images. Defaults to USDTMPDIR or current directory if unset. 55.20. overcloud image upload Make existing image files available for overcloud deployment. Usage: Table 55.24. Command arguments Value Summary -h, --help Show this help message and exit --image-path IMAGE_PATH Path to directory containing image files --os-image-name OS_IMAGE_NAME Openstack disk image filename --ironic-python-agent-name IPA_NAME Openstack ironic-python-agent (agent) image filename --http-boot HTTP_BOOT Root directory for the ironic-python-agent image. if uploading images for multiple architectures/platforms, vary this argument such that a distinct folder is created for each architecture/platform. --update-existing Update images if already exist --whole-disk When set, the overcloud-full image to be uploaded will be considered as a whole disk one --architecture ARCHITECTURE Architecture type for these images, x86_64 , i386 and ppc64le are common options. This option should match at least one arch value in instackenv.json --platform PLATFORM Platform type for these images. platform is a sub- category of architecture. For example you may have generic images for x86_64 but offer images specific to SandyBridge (SNB). --image-type {os,ironic-python-agent} If specified, allows to restrict the image type to upload (os for the overcloud image or ironic-python- agent for the ironic-python-agent one) --progress Show progress bar for upload files action --local Copy files locally, even if there is an image service endpoint --local-path LOCAL_PATH Root directory for image file copy destination when there is no image endpoint, or when --local is specified 55.21. overcloud netenv validate Validate the network environment file. Usage: Table 55.25. Command arguments Value Summary -h, --help Show this help message and exit -f NETENV, --file NETENV Path to the network environment file 55.22. overcloud node bios configure Apply BIOS configuration on given nodes Usage: Table 55.26. Positional arguments Value Summary <node_uuid> Baremetal node uuids for the node(s) to configure bios Table 55.27. Command arguments Value Summary -h, --help Show this help message and exit --all-manageable Configure bios for all nodes currently in manageable state --configuration <configuration> Bios configuration (yaml/json string or file name). 55.23. overcloud node bios reset Reset BIOS configuration to factory default Usage: Table 55.28. Positional arguments Value Summary <node_uuid> Baremetal node uuids for the node(s) to reset bios Table 55.29. Command arguments Value Summary -h, --help Show this help message and exit --all-manageable Reset bios on all nodes currently in manageable state 55.24. 
overcloud node clean Run node(s) through cleaning. Usage: Table 55.30. Positional arguments Value Summary <node_uuid> Baremetal node uuids for the node(s) to be cleaned Table 55.31. Command arguments Value Summary -h, --help Show this help message and exit --all-manageable Clean all nodes currently in manageable state --provide Provide (make available) the nodes once cleaned 55.25. overcloud node configure Configure Node boot options. Usage: Table 55.32. Positional arguments Value Summary <node_uuid> Baremetal node uuids for the node(s) to be configured Table 55.33. Command arguments Value Summary -h, --help Show this help message and exit --all-manageable Configure all nodes currently in manageable state --deploy-kernel DEPLOY_KERNEL Image with deploy kernel. --deploy-ramdisk DEPLOY_RAMDISK Image with deploy ramdisk. --instance-boot-option {local,netboot} Whether to set instances for booting from local hard drive (local) or network (netboot). --root-device ROOT_DEVICE Define the root device for nodes. can be either a list of device names (without /dev) to choose from or one of two strategies: largest or smallest. For it to work this command should be run after the introspection. --root-device-minimum-size ROOT_DEVICE_MINIMUM_SIZE Minimum size (in gib) of the detected root device. Used with --root-device. --overwrite-root-device-hints Whether to overwrite existing root device hints when --root-device is used. 55.26. overcloud node delete Delete overcloud nodes. Usage: Table 55.34. Positional arguments Value Summary <node> Node id(s) to delete (otherwise specified in the --baremetal-deployment file) Table 55.35. Command arguments Value Summary -h, --help Show this help message and exit -b <BAREMETAL DEPLOYMENT FILE>, --baremetal-deployment <BAREMETAL DEPLOYMENT FILE> Configuration file describing the baremetal deployment --stack STACK Name or id of heat stack to scale (default=env: OVERCLOUD_STACK_NAME) --templates [TEMPLATES] The directory containing the heat templates to deploy. This argument is deprecated. The command now utilizes a deployment plan, which should be updated prior to running this command, should that be required. Otherwise this argument will be silently ignored. -e <HEAT ENVIRONMENT FILE>, --environment-file <HEAT ENVIRONMENT FILE> Environment files to be passed to the heat stack- create or heat stack-update command. (Can be specified more than once.) This argument is deprecated. The command now utilizes a deployment plan, which should be updated prior to running this command, should that be required. Otherwise this argument will be silently ignored. --timeout <TIMEOUT> Timeout in minutes to wait for the nodes to be deleted. Keep in mind that due to keystone session duration that timeout has an upper bound of 4 hours -y, --yes Skip yes/no prompt (assume yes). 55.27. overcloud node discover Discover overcloud nodes by polling their BMCs. Usage: Table 55.36. Command arguments Value Summary -h, --help Show this help message and exit --ip <ips> Ip address(es) to probe --range <range> Ip range to probe --credentials <key:value> Key/value pairs of possible credentials --port <ports> Bmc port(s) to probe --introspect Introspect the imported nodes --run-validations Run the pre-deployment validations. these external validations are from the TripleO Validations project. --provide Provide (make available) the nodes --no-deploy-image Skip setting the deploy kernel and ramdisk. 
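Drawing on the overcloud node discover options listed above, an illustrative run against a single BMC address might look like this (the address and credentials are assumptions for the example):
$ openstack overcloud node discover --ip 192.168.24.21 --credentials admin:password --introspect --provide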
--instance-boot-option {local,netboot} Whether to set instances for booting from local hard drive (local) or network (netboot). --concurrency CONCURRENCY Maximum number of nodes to introspect at once. 55.28. overcloud node import Import baremetal nodes from a JSON, YAML or CSV file. The node status will be set to manageable by default. Usage: Table 55.37. Positional arguments Value Summary env_file None Table 55.38. Command arguments Value Summary -h, --help Show this help message and exit --introspect Introspect the imported nodes --run-validations Run the pre-deployment validations. these external validations are from the TripleO Validations project. --validate-only Validate the env_file and then exit without actually importing the nodes. --provide Provide (make available) the nodes --no-deploy-image Skip setting the deploy kernel and ramdisk. --instance-boot-option {local,netboot} Whether to set instances for booting from local hard drive (local) or network (netboot). --http-boot HTTP_BOOT Root directory for the ironic-python-agent image --concurrency CONCURRENCY Maximum number of nodes to introspect at once. 55.29. overcloud node introspect Introspect specified nodes or all nodes in manageable state. Usage: Table 55.39. Positional arguments Value Summary <node_uuid> Baremetal node uuids for the node(s) to be introspected Table 55.40. Command arguments Value Summary -h, --help Show this help message and exit --all-manageable Introspect all nodes currently in manageable state --provide Provide (make available) the nodes once introspected --run-validations Run the pre-deployment validations. these external validations are from the TripleO Validations project. --concurrency CONCURRENCY Maximum number of nodes to introspect at once. 55.30. overcloud node provide Mark nodes as available based on UUIDs or current manageable state. Usage: Table 55.41. Positional arguments Value Summary <node_uuid> Baremetal node uuids for the node(s) to be provided Table 55.42. Command arguments Value Summary -h, --help Show this help message and exit --all-manageable Provide all nodes currently in manageable state 55.31. overcloud node provision Provision new nodes using Ironic. Usage: Table 55.43. Positional arguments Value Summary <baremetal_deployment.yaml> Configuration file describing the baremetal deployment Table 55.44. Command arguments Value Summary -h, --help Show this help message and exit -o OUTPUT, --output OUTPUT The output environment file path --stack STACK Name or id of heat stack (default=env: OVERCLOUD_STACK_NAME) --overcloud-ssh-user OVERCLOUD_SSH_USER User for ssh access to newly deployed nodes --overcloud-ssh-key OVERCLOUD_SSH_KEY Key path for ssh access toovercloud nodes. when undefined the keywill be autodetected. --concurrency CONCURRENCY Maximum number of nodes to provision at once. (default=20) --timeout TIMEOUT Number of seconds to wait for the node provision to complete. (default=3600) 55.32. overcloud node unprovision Unprovisions nodes using Ironic. Usage: Table 55.45. Positional arguments Value Summary <baremetal_deployment.yaml> Configuration file describing the baremetal deployment Table 55.46. Command arguments Value Summary -h, --help Show this help message and exit --stack STACK Name or id of heat stack (default=env: OVERCLOUD_STACK_NAME) --all Unprovision every instance in the deployment -y, --yes Skip yes/no prompt (assume yes) 55.33. overcloud parameters set Set a parameters for a plan Usage: Table 55.47. 
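For instance, to apply a hypothetical parameters file to the default plan, the overcloud parameters set command might be run as:
$ openstack overcloud parameters set overcloud my-parameters.yaml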
Positional arguments Value Summary name The name of the plan, which is used for the swift container, Mistral environment and Heat stack names. file_in None Table 55.48. Command arguments Value Summary -h, --help Show this help message and exit 55.34. overcloud plan create Create a deployment plan Usage: Table 55.49. Positional arguments Value Summary name The name of the plan, which is used for the object storage container, workflow environment and orchestration stack names. Table 55.50. Command arguments Value Summary -h, --help Show this help message and exit --templates TEMPLATES The directory containing the heat templates to deploy. If this or --source_url isn't provided, the templates packaged on the Undercloud will be used. --plan-environment-file PLAN_ENVIRONMENT_FILE, -p PLAN_ENVIRONMENT_FILE Plan environment file, overrides the default plan- environment.yaml in the --templates directory --disable-password-generation Disable password generation. --source-url SOURCE_URL The url of a git repository containing the heat templates to deploy. If this or --templates isn't provided, the templates packaged on the Undercloud will be used. 55.35. overcloud plan delete Delete an overcloud deployment plan. The plan will not be deleted if a stack exists with the same name. Usage: Table 55.51. Positional arguments Value Summary <name> Name of the plan(s) to delete Table 55.52. Command arguments Value Summary -h, --help Show this help message and exit 55.36. overcloud plan deploy Deploy a deployment plan Usage: Table 55.53. Positional arguments Value Summary name The name of the plan to deploy. Table 55.54. Command arguments Value Summary -h, --help Show this help message and exit --timeout <TIMEOUT>, -t <TIMEOUT> Deployment timeout in minutes. --run-validations Run the pre-deployment validations. these external validations are from the TripleO Validations project. 55.37. overcloud plan export Export a deployment plan Usage: Table 55.55. Positional arguments Value Summary <name> Name of the plan to export. Table 55.56. Command arguments Value Summary -h, --help Show this help message and exit --output-file <output file>, -o <output file> Name of the output file for export. it will default to "<name>.tar.gz". --force-overwrite, -f Overwrite output file if it exists. 55.38. overcloud plan list List overcloud deployment plans. Usage: Table 55.57. Command arguments Value Summary -h, --help Show this help message and exit Table 55.58. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 55.59. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 55.60. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 55.61. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 55.39. 
overcloud profiles list List overcloud node profiles Usage: Table 55.62. Command arguments Value Summary -h, --help Show this help message and exit --all List all nodes, even those not available to nova. --control-scale CONTROL_SCALE New number of control nodes. --compute-scale COMPUTE_SCALE New number of compute nodes. --ceph-storage-scale CEPH_STORAGE_SCALE New number of ceph storage nodes. --block-storage-scale BLOCK_STORAGE_SCALE New number of cinder storage nodes. --swift-storage-scale SWIFT_STORAGE_SCALE New number of swift storage nodes. --control-flavor CONTROL_FLAVOR Nova flavor to use for control nodes. --compute-flavor COMPUTE_FLAVOR Nova flavor to use for compute nodes. --ceph-storage-flavor CEPH_STORAGE_FLAVOR Nova flavor to use for ceph storage nodes. --block-storage-flavor BLOCK_STORAGE_FLAVOR Nova flavor to use for cinder storage nodes --swift-storage-flavor SWIFT_STORAGE_FLAVOR Nova flavor to use for swift storage nodes Table 55.63. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 55.64. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 55.65. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 55.66. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 55.40. overcloud profiles match Assign and validate profiles on nodes Usage: Table 55.67. Command arguments Value Summary -h, --help Show this help message and exit --dry-run Only run validations, but do not apply any changes. --control-scale CONTROL_SCALE New number of control nodes. --compute-scale COMPUTE_SCALE New number of compute nodes. --ceph-storage-scale CEPH_STORAGE_SCALE New number of ceph storage nodes. --block-storage-scale BLOCK_STORAGE_SCALE New number of cinder storage nodes. --swift-storage-scale SWIFT_STORAGE_SCALE New number of swift storage nodes. --control-flavor CONTROL_FLAVOR Nova flavor to use for control nodes. --compute-flavor COMPUTE_FLAVOR Nova flavor to use for compute nodes. --ceph-storage-flavor CEPH_STORAGE_FLAVOR Nova flavor to use for ceph storage nodes. --block-storage-flavor BLOCK_STORAGE_FLAVOR Nova flavor to use for cinder storage nodes --swift-storage-flavor SWIFT_STORAGE_FLAVOR Nova flavor to use for swift storage nodes 55.41. overcloud raid create Create RAID on given nodes Usage: Table 55.68. Positional arguments Value Summary configuration Raid configuration (yaml/json string or file name). Table 55.69. Command arguments Value Summary -h, --help Show this help message and exit --node NODE Nodes to create raid on (expected to be in manageable state). Can be specified multiple times. 55.42. overcloud role list List availables roles (DEPRECATED). Please use "openstack overcloud roles list" instead. Usage: Table 55.70. 
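As noted above, this form is deprecated in favor of openstack overcloud roles list; an illustrative invocation with an explicit roles path would be:
$ openstack overcloud role list --roles-path /usr/share/openstack-tripleo-heat-templates/roles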
Command arguments Value Summary -h, --help Show this help message and exit --roles-path <roles directory> Filesystem path containing the role yaml files. by default this is /usr/share/openstack-tripleo-heat- templates/roles 55.43. overcloud role show Show information about a given role (DEPRECATED). Please use "openstack overcloud roles show" intead. Usage: Table 55.71. Positional arguments Value Summary <role> Role to display more information about. Table 55.72. Command arguments Value Summary -h, --help Show this help message and exit --roles-path <roles directory> Filesystem path containing the role yaml files. by default this is /usr/share/openstack-tripleo-heat- templates/roles 55.44. overcloud roles generate Generate roles_data.yaml file Usage: Table 55.73. Positional arguments Value Summary <role> List of roles to use to generate the roles_data.yaml file for the deployment. NOTE: Ordering is important if no role has the "primary" and "controller" tags. If no role is tagged then the first role listed will be considered the primary role. This usually is the controller role. Table 55.74. Command arguments Value Summary -h, --help Show this help message and exit --roles-path <roles directory> Filesystem path containing the role yaml files. by default this is /usr/share/openstack-tripleo-heat- templates/roles -o <output file>, --output-file <output file> File to capture all output to. for example, roles_data.yaml --skip-validate Skip role metadata type validation whengenerating the roles_data.yaml 55.45. overcloud roles list List the current and available roles in a given plan Usage: Table 55.75. Command arguments Value Summary -h, --help Show this help message and exit --name NAME The name of the plan, which is used for the object storage container, workflow environment and orchestration stack names. --detail Include details about each role --current Only show the information for the roles currently enabled for the plan. Table 55.76. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 55.77. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 55.78. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 55.79. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 55.46. overcloud roles show Show details for a specific role, given a plan Usage: Table 55.80. Positional arguments Value Summary <role> Name of the role to look up. Table 55.81. Command arguments Value Summary -h, --help Show this help message and exit --name NAME The name of the plan, which is used for the object storage container, workflow environment and orchestration stack names. Table 55.82. 
Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 55.83. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 55.84. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 55.85. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 55.47. overcloud status Get deployment status Usage: Table 55.86. Command arguments Value Summary -h, --help Show this help message and exit --plan PLAN, --stack PLAN Name of the stack/plan. (default: overcloud) 55.48. overcloud support report collect Run sosreport on selected servers. Usage: Table 55.87. Positional arguments Value Summary server_name Server name, group name, or partial name to match. for example "Controller" will match all controllers for an environment. Table 55.88. Command arguments Value Summary -h, --help Show this help message and exit -c, --container Deprecated: swift container to store logs to -o DESTINATION, --output DESTINATION Output directory for the report --stack STACK Stack name to use for log collection. --skip-container-delete Deprecated: do not delete the container after the files have been downloaded. Ignored if --collect-only or --download-only is provided. -t TIMEOUT, --timeout TIMEOUT Maximum time to wait for the log collection and container deletion workflows to finish. -n CONCURRENCY, --concurrency CONCURRENCY Number of parallel log collection and object deletion tasks to run. --collect-only Deprecated: skip log downloads, only collect logs and put in the container --download-only Deprecated: skip generation, only download from the provided container -v VERBOSITY, --verbose VERBOSITY None 55.49. overcloud update converge Converge the update on Overcloud nodes. This restores the plan and stack so that normal deployment workflow is back in place. Usage: Table 55.89. Command arguments Value Summary --templates [TEMPLATES] The directory containing the heat templates to deploy --stack STACK Stack name to create or update --timeout <TIMEOUT>, -t <TIMEOUT> Deployment timeout in minutes. --control-scale CONTROL_SCALE New number of control nodes. (deprecated. use an environment file and set the parameter ControllerCount. This option will be removed in the "U" release.) --compute-scale COMPUTE_SCALE New number of compute nodes. (deprecated. use an environment file and set the parameter ComputeCount. This option will be removed in the "U" release.) --ceph-storage-scale CEPH_STORAGE_SCALE New number of ceph storage nodes. (deprecated. use an environment file and set the parameter CephStorageCount. This option will be removed in the "U" release.) --block-storage-scale BLOCK_STORAGE_SCALE New number of cinder storage nodes. (deprecated. use an environment file and set the parameter BlockStorageCount. This option will be removed in the "U" release.) --swift-storage-scale SWIFT_STORAGE_SCALE New number of swift storage nodes. (deprecated. use an environment file and set the parameter ObjectStorageCount. 
This option will be removed in the "U" release.) --control-flavor CONTROL_FLAVOR Nova flavor to use for control nodes. (deprecated. use an environment file and set the parameter OvercloudControlFlavor. This option will be removed in the "U" release.) --compute-flavor COMPUTE_FLAVOR Nova flavor to use for compute nodes. (deprecated. use an environment file and set the parameter OvercloudComputeFlavor. This option will be removed in the "U" release.) --ceph-storage-flavor CEPH_STORAGE_FLAVOR Nova flavor to use for ceph storage nodes. (DEPRECATED. Use an environment file and set the parameter OvercloudCephStorageFlavor. This option will be removed in the "U" release.) --block-storage-flavor BLOCK_STORAGE_FLAVOR Nova flavor to use for cinder storage nodes (DEPRECATED. Use an environment file and set the parameter OvercloudBlockStorageFlavor. This option will be removed in the "U" release.) --swift-storage-flavor SWIFT_STORAGE_FLAVOR Nova flavor to use for swift storage nodes (DEPRECATED. Use an environment file and set the parameter OvercloudSwiftStorageFlavor. This option will be removed in the "U" release.) --libvirt-type {kvm,qemu} Libvirt domain type. --ntp-server NTP_SERVER The ntp for overcloud nodes. --no-proxy NO_PROXY A comma separated list of hosts that should not be proxied. --overcloud-ssh-user OVERCLOUD_SSH_USER User for ssh access to overcloud nodes --overcloud-ssh-key OVERCLOUD_SSH_KEY Key path for ssh access to overcloud nodes. Whenundefined the key will be autodetected. --overcloud-ssh-network OVERCLOUD_SSH_NETWORK Network name to use for ssh access to overcloud nodes. --overcloud-ssh-enable-timeout OVERCLOUD_SSH_ENABLE_TIMEOUT Timeout for the ssh enable process to finish. --overcloud-ssh-port-timeout OVERCLOUD_SSH_PORT_TIMEOUT Timeout for to wait for the ssh port to become active. --environment-file <HEAT ENVIRONMENT FILE>, -e <HEAT ENVIRONMENT FILE> Environment files to be passed to the heat stack- create or heat stack-update command. (Can be specified more than once.) --environment-directory <HEAT ENVIRONMENT DIRECTORY> Environment file directories that are automatically added to the heat stack-create or heat stack-update commands. Can be specified more than once. Files in directories are loaded in ascending sort order. --roles-file ROLES_FILE, -r ROLES_FILE Roles file, overrides the default roles_data.yaml in the --templates directory. May be an absolute path or the path relative to --templates --networks-file NETWORKS_FILE, -n NETWORKS_FILE Networks file, overrides the default network_data.yaml in the --templates directory --plan-environment-file PLAN_ENVIRONMENT_FILE, -p PLAN_ENVIRONMENT_FILE Plan environment file, overrides the default plan- environment.yaml in the --templates directory --no-cleanup Don't cleanup temporary files, just log their location --update-plan-only Only update the plan. do not perform the actual deployment. NOTE: Will move to a discrete command in a future release. --validation-errors-nonfatal Allow the deployment to continue in spite of validation errors. Note that attempting deployment while errors exist is likely to fail. --validation-warnings-fatal Exit if there are warnings from the configuration pre- checks. --disable-validations Deprecated. disable the pre-deployment validations entirely. These validations are the built-in pre- deployment validations. To enable external validations from tripleo-validations, use the --run-validations flag. These validations are now run via the external validations in tripleo-validations. 
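Drawing on the overcloud update converge options listed above, a minimal illustrative invocation (the environment file name is a placeholder) might be:
$ openstack overcloud update converge --templates --stack overcloud -e ~/templates/my-environment.yaml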
--inflight-validations Activate in-flight validations during the deploy. in- flight validations provide a robust way to ensure deployed services are running right after their activation. Defaults to False. --dry-run Only run validations, but do not apply any changes. --run-validations Run external validations from the tripleo-validations project. --skip-postconfig Skip the overcloud post-deployment configuration. --force-postconfig Force the overcloud post-deployment configuration. --skip-deploy-identifier Skip generation of a unique identifier for the DeployIdentifier parameter. The software configuration deployment steps will only be triggered if there is an actual change to the configuration. This option should be used with Caution, and only if there is confidence that the software configuration does not need to be run, such as when scaling out certain roles. --answers-file ANSWERS_FILE Path to a yaml file with arguments and parameters. --disable-password-generation Disable password generation. --deployed-server Use pre-provisioned overcloud nodes. removes baremetal,compute and image services requirements from theundercloud node. Must only be used with the-- disable-validations. --config-download Run deployment via config-download mechanism. this is now the default, and this CLI options may be removed in the future. --no-config-download, --stack-only Disable the config-download workflow and only create the stack and associated OpenStack resources. No software configuration will be applied. --config-download-only Disable the stack create/update, and only run the config-download workflow to apply the software configuration. --output-dir OUTPUT_DIR Directory to use for saved output when using --config- download. The directory must be writeable by the mistral user. When not specified, the default server side value will be used (/var/lib/mistral/<execution id>. --override-ansible-cfg OVERRIDE_ANSIBLE_CFG Path to ansible configuration file. the configuration in the file will override any configuration used by config-download by default. --config-download-timeout CONFIG_DOWNLOAD_TIMEOUT Timeout (in minutes) to use for config-download steps. If unset, will default to however much time is leftover from the --timeout parameter after the stack operation. --deployment-python-interpreter DEPLOYMENT_PYTHON_INTERPRETER The path to python interpreter to use for the deployment actions. This may need to be used if deploying on a python2 host from a python3 system or vice versa. -b <baremetal_deployment.yaml>, --baremetal-deployment <baremetal_deployment.yaml> Configuration file describing the baremetal deployment --limit LIMIT A string that identifies a single node or comma- separatedlist of nodes the config-download Ansible playbook execution will be limited to. For example: --limit "compute-0,compute-1,compute-5". --tags TAGS A list of tags to use when running the config-download ansible-playbook command. --skip-tags SKIP_TAGS A list of tags to skip when running the config- download ansible-playbook command. --ansible-forks ANSIBLE_FORKS The number of ansible forks to use for the config- download ansible-playbook command. -y, --yes Use -y or --yes to skip the confirmation required before any update operation. Use this with caution! 55.50. overcloud update prepare Run heat stack update for overcloud nodes to refresh heat stack outputs. The heat stack outputs are what we use later on to generate ansible playbooks which deliver the minor update workflow. 
This is used as the first step for a minor update of your overcloud. Usage: Table 55.90. Command arguments Value Summary --templates [TEMPLATES] The directory containing the heat templates to deploy --stack STACK Stack name to create or update --timeout <TIMEOUT>, -t <TIMEOUT> Deployment timeout in minutes. --control-scale CONTROL_SCALE New number of control nodes. (deprecated. use an environment file and set the parameter ControllerCount. This option will be removed in the "U" release.) --compute-scale COMPUTE_SCALE New number of compute nodes. (deprecated. use an environment file and set the parameter ComputeCount. This option will be removed in the "U" release.) --ceph-storage-scale CEPH_STORAGE_SCALE New number of ceph storage nodes. (deprecated. use an environment file and set the parameter CephStorageCount. This option will be removed in the "U" release.) --block-storage-scale BLOCK_STORAGE_SCALE New number of cinder storage nodes. (deprecated. use an environment file and set the parameter BlockStorageCount. This option will be removed in the "U" release.) --swift-storage-scale SWIFT_STORAGE_SCALE New number of swift storage nodes. (deprecated. use an environment file and set the parameter ObjectStorageCount. This option will be removed in the "U" release.) --control-flavor CONTROL_FLAVOR Nova flavor to use for control nodes. (deprecated. use an environment file and set the parameter OvercloudControlFlavor. This option will be removed in the "U" release.) --compute-flavor COMPUTE_FLAVOR Nova flavor to use for compute nodes. (deprecated. use an environment file and set the parameter OvercloudComputeFlavor. This option will be removed in the "U" release.) --ceph-storage-flavor CEPH_STORAGE_FLAVOR Nova flavor to use for ceph storage nodes. (DEPRECATED. Use an environment file and set the parameter OvercloudCephStorageFlavor. This option will be removed in the "U" release.) --block-storage-flavor BLOCK_STORAGE_FLAVOR Nova flavor to use for cinder storage nodes (DEPRECATED. Use an environment file and set the parameter OvercloudBlockStorageFlavor. This option will be removed in the "U" release.) --swift-storage-flavor SWIFT_STORAGE_FLAVOR Nova flavor to use for swift storage nodes (DEPRECATED. Use an environment file and set the parameter OvercloudSwiftStorageFlavor. This option will be removed in the "U" release.) --libvirt-type {kvm,qemu} Libvirt domain type. --ntp-server NTP_SERVER The ntp for overcloud nodes. --no-proxy NO_PROXY A comma separated list of hosts that should not be proxied. --overcloud-ssh-user OVERCLOUD_SSH_USER User for ssh access to overcloud nodes --overcloud-ssh-key OVERCLOUD_SSH_KEY Key path for ssh access to overcloud nodes. Whenundefined the key will be autodetected. --overcloud-ssh-network OVERCLOUD_SSH_NETWORK Network name to use for ssh access to overcloud nodes. --overcloud-ssh-enable-timeout OVERCLOUD_SSH_ENABLE_TIMEOUT Timeout for the ssh enable process to finish. --overcloud-ssh-port-timeout OVERCLOUD_SSH_PORT_TIMEOUT Timeout for to wait for the ssh port to become active. --environment-file <HEAT ENVIRONMENT FILE>, -e <HEAT ENVIRONMENT FILE> Environment files to be passed to the heat stack- create or heat stack-update command. (Can be specified more than once.) --environment-directory <HEAT ENVIRONMENT DIRECTORY> Environment file directories that are automatically added to the heat stack-create or heat stack-update commands. Can be specified more than once. Files in directories are loaded in ascending sort order. 
--roles-file ROLES_FILE, -r ROLES_FILE Roles file, overrides the default roles_data.yaml in the --templates directory. May be an absolute path or the path relative to --templates --networks-file NETWORKS_FILE, -n NETWORKS_FILE Networks file, overrides the default network_data.yaml in the --templates directory --plan-environment-file PLAN_ENVIRONMENT_FILE, -p PLAN_ENVIRONMENT_FILE Plan environment file, overrides the default plan- environment.yaml in the --templates directory --no-cleanup Don't cleanup temporary files, just log their location --update-plan-only Only update the plan. do not perform the actual deployment. NOTE: Will move to a discrete command in a future release. --validation-errors-nonfatal Allow the deployment to continue in spite of validation errors. Note that attempting deployment while errors exist is likely to fail. --validation-warnings-fatal Exit if there are warnings from the configuration pre- checks. --disable-validations Deprecated. disable the pre-deployment validations entirely. These validations are the built-in pre- deployment validations. To enable external validations from tripleo-validations, use the --run-validations flag. These validations are now run via the external validations in tripleo-validations. --inflight-validations Activate in-flight validations during the deploy. in- flight validations provide a robust way to ensure deployed services are running right after their activation. Defaults to False. --dry-run Only run validations, but do not apply any changes. --run-validations Run external validations from the tripleo-validations project. --skip-postconfig Skip the overcloud post-deployment configuration. --force-postconfig Force the overcloud post-deployment configuration. --skip-deploy-identifier Skip generation of a unique identifier for the DeployIdentifier parameter. The software configuration deployment steps will only be triggered if there is an actual change to the configuration. This option should be used with Caution, and only if there is confidence that the software configuration does not need to be run, such as when scaling out certain roles. --answers-file ANSWERS_FILE Path to a yaml file with arguments and parameters. --disable-password-generation Disable password generation. --deployed-server Use pre-provisioned overcloud nodes. removes baremetal,compute and image services requirements from theundercloud node. Must only be used with the-- disable-validations. --config-download Run deployment via config-download mechanism. this is now the default, and this CLI options may be removed in the future. --no-config-download, --stack-only Disable the config-download workflow and only create the stack and associated OpenStack resources. No software configuration will be applied. --config-download-only Disable the stack create/update, and only run the config-download workflow to apply the software configuration. --output-dir OUTPUT_DIR Directory to use for saved output when using --config- download. The directory must be writeable by the mistral user. When not specified, the default server side value will be used (/var/lib/mistral/<execution id>. --override-ansible-cfg OVERRIDE_ANSIBLE_CFG Path to ansible configuration file. the configuration in the file will override any configuration used by config-download by default. --config-download-timeout CONFIG_DOWNLOAD_TIMEOUT Timeout (in minutes) to use for config-download steps. If unset, will default to however much time is leftover from the --timeout parameter after the stack operation. 
--deployment-python-interpreter DEPLOYMENT_PYTHON_INTERPRETER The path to python interpreter to use for the deployment actions. This may need to be used if deploying on a python2 host from a python3 system or vice versa. -b <baremetal_deployment.yaml>, --baremetal-deployment <baremetal_deployment.yaml> Configuration file describing the baremetal deployment --limit LIMIT A string that identifies a single node or comma- separatedlist of nodes the config-download Ansible playbook execution will be limited to. For example: --limit "compute-0,compute-1,compute-5". --tags TAGS A list of tags to use when running the config-download ansible-playbook command. --skip-tags SKIP_TAGS A list of tags to skip when running the config- download ansible-playbook command. --ansible-forks ANSIBLE_FORKS The number of ansible forks to use for the config- download ansible-playbook command. -y, --yes Use -y or --yes to skip the confirmation required before any update operation. Use this with caution! 55.51. overcloud update run Run minor update ansible playbooks on Overcloud nodes Usage: Table 55.91. Command arguments Value Summary -h, --help Show this help message and exit --limit LIMIT A string that identifies a single node or comma- separatedlist of nodes the config-download Ansible playbook execution will be limited to. For example: --limit "compute-0,compute-1,compute-5". --playbook PLAYBOOK Ansible playbook to use for the minor update. defaults to the special value all which causes all the update playbooks to be executed. That is the update_steps_playbook.yaml and then thedeploy_steps_playbook.yaml. Set this to each of those playbooks in consecutive invocations of this command if you prefer to run them manually. Note: make sure to run both those playbooks so that all services are updated and running with the target version configuration. --ssh-user SSH_USER Deprecated: only tripleo-admin should be used as ssh user. --static-inventory STATIC_INVENTORY Path to an existing ansible inventory to use. if not specified, one will be generated in ~/tripleo-ansible- inventory.yaml --stack STACK Name or id of heat stack (default=env: OVERCLOUD_STACK_NAME) --no-workflow Run ansible-playbook directly via system command instead of running Ansiblevia the TripleO mistral workflows. --tags TAGS A list of tags to use when running the config-download ansible-playbook command. --skip-tags SKIP_TAGS A list of tags to skip when running the config- download ansible-playbook command. -y, --yes Use -y or --yes to skip the confirmation required before any update operation. Use this with caution! --ansible-forks ANSIBLE_FORKS The number of ansible forks to use for the config- download ansible-playbook command. 55.52. overcloud upgrade converge Major upgrade converge - reset Heat resources in the stored plan This is the last step for completion of a overcloud major upgrade. The main task is updating the plan and stack to unblock future stack updates. For the major upgrade workflow we have set specific values for some stack Heat resources. This unsets those back to their default values. Usage: Table 55.92. Command arguments Value Summary --templates [TEMPLATES] The directory containing the heat templates to deploy --stack STACK Stack name to create or update --timeout <TIMEOUT>, -t <TIMEOUT> Deployment timeout in minutes. --control-scale CONTROL_SCALE New number of control nodes. (deprecated. use an environment file and set the parameter ControllerCount. This option will be removed in the "U" release.) 
--compute-scale COMPUTE_SCALE New number of compute nodes. (deprecated. use an environment file and set the parameter ComputeCount. This option will be removed in the "U" release.) --ceph-storage-scale CEPH_STORAGE_SCALE New number of ceph storage nodes. (deprecated. use an environment file and set the parameter CephStorageCount. This option will be removed in the "U" release.) --block-storage-scale BLOCK_STORAGE_SCALE New number of cinder storage nodes. (deprecated. use an environment file and set the parameter BlockStorageCount. This option will be removed in the "U" release.) --swift-storage-scale SWIFT_STORAGE_SCALE New number of swift storage nodes. (deprecated. use an environment file and set the parameter ObjectStorageCount. This option will be removed in the "U" release.) --control-flavor CONTROL_FLAVOR Nova flavor to use for control nodes. (deprecated. use an environment file and set the parameter OvercloudControlFlavor. This option will be removed in the "U" release.) --compute-flavor COMPUTE_FLAVOR Nova flavor to use for compute nodes. (deprecated. use an environment file and set the parameter OvercloudComputeFlavor. This option will be removed in the "U" release.) --ceph-storage-flavor CEPH_STORAGE_FLAVOR Nova flavor to use for ceph storage nodes. (DEPRECATED. Use an environment file and set the parameter OvercloudCephStorageFlavor. This option will be removed in the "U" release.) --block-storage-flavor BLOCK_STORAGE_FLAVOR Nova flavor to use for cinder storage nodes (DEPRECATED. Use an environment file and set the parameter OvercloudBlockStorageFlavor. This option will be removed in the "U" release.) --swift-storage-flavor SWIFT_STORAGE_FLAVOR Nova flavor to use for swift storage nodes (DEPRECATED. Use an environment file and set the parameter OvercloudSwiftStorageFlavor. This option will be removed in the "U" release.) --libvirt-type {kvm,qemu} Libvirt domain type. --ntp-server NTP_SERVER The ntp for overcloud nodes. --no-proxy NO_PROXY A comma separated list of hosts that should not be proxied. --overcloud-ssh-user OVERCLOUD_SSH_USER User for ssh access to overcloud nodes --overcloud-ssh-key OVERCLOUD_SSH_KEY Key path for ssh access to overcloud nodes. Whenundefined the key will be autodetected. --overcloud-ssh-network OVERCLOUD_SSH_NETWORK Network name to use for ssh access to overcloud nodes. --overcloud-ssh-enable-timeout OVERCLOUD_SSH_ENABLE_TIMEOUT Timeout for the ssh enable process to finish. --overcloud-ssh-port-timeout OVERCLOUD_SSH_PORT_TIMEOUT Timeout for to wait for the ssh port to become active. --environment-file <HEAT ENVIRONMENT FILE>, -e <HEAT ENVIRONMENT FILE> Environment files to be passed to the heat stack- create or heat stack-update command. (Can be specified more than once.) --environment-directory <HEAT ENVIRONMENT DIRECTORY> Environment file directories that are automatically added to the heat stack-create or heat stack-update commands. Can be specified more than once. Files in directories are loaded in ascending sort order. --roles-file ROLES_FILE, -r ROLES_FILE Roles file, overrides the default roles_data.yaml in the --templates directory. 
May be an absolute path or the path relative to --templates --networks-file NETWORKS_FILE, -n NETWORKS_FILE Networks file, overrides the default network_data.yaml in the --templates directory --plan-environment-file PLAN_ENVIRONMENT_FILE, -p PLAN_ENVIRONMENT_FILE Plan environment file, overrides the default plan- environment.yaml in the --templates directory --no-cleanup Don't cleanup temporary files, just log their location --update-plan-only Only update the plan. do not perform the actual deployment. NOTE: Will move to a discrete command in a future release. --validation-errors-nonfatal Allow the deployment to continue in spite of validation errors. Note that attempting deployment while errors exist is likely to fail. --validation-warnings-fatal Exit if there are warnings from the configuration pre- checks. --disable-validations Deprecated. disable the pre-deployment validations entirely. These validations are the built-in pre- deployment validations. To enable external validations from tripleo-validations, use the --run-validations flag. These validations are now run via the external validations in tripleo-validations. --inflight-validations Activate in-flight validations during the deploy. in- flight validations provide a robust way to ensure deployed services are running right after their activation. Defaults to False. --dry-run Only run validations, but do not apply any changes. --run-validations Run external validations from the tripleo-validations project. --skip-postconfig Skip the overcloud post-deployment configuration. --force-postconfig Force the overcloud post-deployment configuration. --skip-deploy-identifier Skip generation of a unique identifier for the DeployIdentifier parameter. The software configuration deployment steps will only be triggered if there is an actual change to the configuration. This option should be used with Caution, and only if there is confidence that the software configuration does not need to be run, such as when scaling out certain roles. --answers-file ANSWERS_FILE Path to a yaml file with arguments and parameters. --disable-password-generation Disable password generation. --deployed-server Use pre-provisioned overcloud nodes. removes baremetal,compute and image services requirements from theundercloud node. Must only be used with the-- disable-validations. --config-download Run deployment via config-download mechanism. this is now the default, and this CLI options may be removed in the future. --no-config-download, --stack-only Disable the config-download workflow and only create the stack and associated OpenStack resources. No software configuration will be applied. --config-download-only Disable the stack create/update, and only run the config-download workflow to apply the software configuration. --output-dir OUTPUT_DIR Directory to use for saved output when using --config- download. The directory must be writeable by the mistral user. When not specified, the default server side value will be used (/var/lib/mistral/<execution id>. --override-ansible-cfg OVERRIDE_ANSIBLE_CFG Path to ansible configuration file. the configuration in the file will override any configuration used by config-download by default. --config-download-timeout CONFIG_DOWNLOAD_TIMEOUT Timeout (in minutes) to use for config-download steps. If unset, will default to however much time is leftover from the --timeout parameter after the stack operation. --deployment-python-interpreter DEPLOYMENT_PYTHON_INTERPRETER The path to python interpreter to use for the deployment actions. 
This may need to be used if deploying on a python2 host from a python3 system or vice versa. -b <baremetal_deployment.yaml>, --baremetal-deployment <baremetal_deployment.yaml> Configuration file describing the baremetal deployment --limit LIMIT A string that identifies a single node or comma- separatedlist of nodes the config-download Ansible playbook execution will be limited to. For example: --limit "compute-0,compute-1,compute-5". --tags TAGS A list of tags to use when running the config-download ansible-playbook command. --skip-tags SKIP_TAGS A list of tags to skip when running the config- download ansible-playbook command. --ansible-forks ANSIBLE_FORKS The number of ansible forks to use for the config- download ansible-playbook command. -y, --yes Use -y or --yes to skip the confirmation required before any upgrade operation. Use this with caution! 55.53. overcloud upgrade prepare Run heat stack update for overcloud nodes to refresh heat stack outputs. The heat stack outputs are what we use later on to generate ansible playbooks which deliver the major upgrade workflow. This is used as the first step for a major upgrade of your overcloud. Usage: Table 55.93. Command arguments Value Summary --templates [TEMPLATES] The directory containing the heat templates to deploy --stack STACK Stack name to create or update --timeout <TIMEOUT>, -t <TIMEOUT> Deployment timeout in minutes. --control-scale CONTROL_SCALE New number of control nodes. (deprecated. use an environment file and set the parameter ControllerCount. This option will be removed in the "U" release.) --compute-scale COMPUTE_SCALE New number of compute nodes. (deprecated. use an environment file and set the parameter ComputeCount. This option will be removed in the "U" release.) --ceph-storage-scale CEPH_STORAGE_SCALE New number of ceph storage nodes. (deprecated. use an environment file and set the parameter CephStorageCount. This option will be removed in the "U" release.) --block-storage-scale BLOCK_STORAGE_SCALE New number of cinder storage nodes. (deprecated. use an environment file and set the parameter BlockStorageCount. This option will be removed in the "U" release.) --swift-storage-scale SWIFT_STORAGE_SCALE New number of swift storage nodes. (deprecated. use an environment file and set the parameter ObjectStorageCount. This option will be removed in the "U" release.) --control-flavor CONTROL_FLAVOR Nova flavor to use for control nodes. (deprecated. use an environment file and set the parameter OvercloudControlFlavor. This option will be removed in the "U" release.) --compute-flavor COMPUTE_FLAVOR Nova flavor to use for compute nodes. (deprecated. use an environment file and set the parameter OvercloudComputeFlavor. This option will be removed in the "U" release.) --ceph-storage-flavor CEPH_STORAGE_FLAVOR Nova flavor to use for ceph storage nodes. (DEPRECATED. Use an environment file and set the parameter OvercloudCephStorageFlavor. This option will be removed in the "U" release.) --block-storage-flavor BLOCK_STORAGE_FLAVOR Nova flavor to use for cinder storage nodes (DEPRECATED. Use an environment file and set the parameter OvercloudBlockStorageFlavor. This option will be removed in the "U" release.) --swift-storage-flavor SWIFT_STORAGE_FLAVOR Nova flavor to use for swift storage nodes (DEPRECATED. Use an environment file and set the parameter OvercloudSwiftStorageFlavor. This option will be removed in the "U" release.) --libvirt-type {kvm,qemu} Libvirt domain type. --ntp-server NTP_SERVER The ntp for overcloud nodes. 
--no-proxy NO_PROXY A comma separated list of hosts that should not be proxied. --overcloud-ssh-user OVERCLOUD_SSH_USER User for ssh access to overcloud nodes --overcloud-ssh-key OVERCLOUD_SSH_KEY Key path for ssh access to overcloud nodes. Whenundefined the key will be autodetected. --overcloud-ssh-network OVERCLOUD_SSH_NETWORK Network name to use for ssh access to overcloud nodes. --overcloud-ssh-enable-timeout OVERCLOUD_SSH_ENABLE_TIMEOUT Timeout for the ssh enable process to finish. --overcloud-ssh-port-timeout OVERCLOUD_SSH_PORT_TIMEOUT Timeout for to wait for the ssh port to become active. --environment-file <HEAT ENVIRONMENT FILE>, -e <HEAT ENVIRONMENT FILE> Environment files to be passed to the heat stack- create or heat stack-update command. (Can be specified more than once.) --environment-directory <HEAT ENVIRONMENT DIRECTORY> Environment file directories that are automatically added to the heat stack-create or heat stack-update commands. Can be specified more than once. Files in directories are loaded in ascending sort order. --roles-file ROLES_FILE, -r ROLES_FILE Roles file, overrides the default roles_data.yaml in the --templates directory. May be an absolute path or the path relative to --templates --networks-file NETWORKS_FILE, -n NETWORKS_FILE Networks file, overrides the default network_data.yaml in the --templates directory --plan-environment-file PLAN_ENVIRONMENT_FILE, -p PLAN_ENVIRONMENT_FILE Plan environment file, overrides the default plan- environment.yaml in the --templates directory --no-cleanup Don't cleanup temporary files, just log their location --update-plan-only Only update the plan. do not perform the actual deployment. NOTE: Will move to a discrete command in a future release. --validation-errors-nonfatal Allow the deployment to continue in spite of validation errors. Note that attempting deployment while errors exist is likely to fail. --validation-warnings-fatal Exit if there are warnings from the configuration pre- checks. --disable-validations Deprecated. disable the pre-deployment validations entirely. These validations are the built-in pre- deployment validations. To enable external validations from tripleo-validations, use the --run-validations flag. These validations are now run via the external validations in tripleo-validations. --inflight-validations Activate in-flight validations during the deploy. in- flight validations provide a robust way to ensure deployed services are running right after their activation. Defaults to False. --dry-run Only run validations, but do not apply any changes. --run-validations Run external validations from the tripleo-validations project. --skip-postconfig Skip the overcloud post-deployment configuration. --force-postconfig Force the overcloud post-deployment configuration. --skip-deploy-identifier Skip generation of a unique identifier for the DeployIdentifier parameter. The software configuration deployment steps will only be triggered if there is an actual change to the configuration. This option should be used with Caution, and only if there is confidence that the software configuration does not need to be run, such as when scaling out certain roles. --answers-file ANSWERS_FILE Path to a yaml file with arguments and parameters. --disable-password-generation Disable password generation. --deployed-server Use pre-provisioned overcloud nodes. removes baremetal,compute and image services requirements from theundercloud node. Must only be used with the-- disable-validations. 
--config-download Run deployment via config-download mechanism. this is now the default, and this CLI options may be removed in the future. --no-config-download, --stack-only Disable the config-download workflow and only create the stack and associated OpenStack resources. No software configuration will be applied. --config-download-only Disable the stack create/update, and only run the config-download workflow to apply the software configuration. --output-dir OUTPUT_DIR Directory to use for saved output when using --config- download. The directory must be writeable by the mistral user. When not specified, the default server side value will be used (/var/lib/mistral/<execution id>. --override-ansible-cfg OVERRIDE_ANSIBLE_CFG Path to ansible configuration file. the configuration in the file will override any configuration used by config-download by default. --config-download-timeout CONFIG_DOWNLOAD_TIMEOUT Timeout (in minutes) to use for config-download steps. If unset, will default to however much time is leftover from the --timeout parameter after the stack operation. --deployment-python-interpreter DEPLOYMENT_PYTHON_INTERPRETER The path to python interpreter to use for the deployment actions. This may need to be used if deploying on a python2 host from a python3 system or vice versa. -b <baremetal_deployment.yaml>, --baremetal-deployment <baremetal_deployment.yaml> Configuration file describing the baremetal deployment --limit LIMIT A string that identifies a single node or comma- separatedlist of nodes the config-download Ansible playbook execution will be limited to. For example: --limit "compute-0,compute-1,compute-5". --tags TAGS A list of tags to use when running the config-download ansible-playbook command. --skip-tags SKIP_TAGS A list of tags to skip when running the config- download ansible-playbook command. --ansible-forks ANSIBLE_FORKS The number of ansible forks to use for the config- download ansible-playbook command. -y, --yes Use -y or --yes to skip the confirmation required before any upgrade operation. Use this with caution! 55.54. overcloud upgrade run Run major upgrade ansible playbooks on Overcloud nodes This will run the major upgrade ansible playbooks on the overcloud. By default all playbooks are executed, that is the upgrade_steps_playbook.yaml then the deploy_steps_playbook.yaml and then the post_upgrade_steps_playbook.yaml. The upgrade playbooks are made available after completion of the overcloud upgrade prepare command. This overcloud upgrade run command is the second step in the major upgrade workflow. Usage: Table 55.94. Command arguments Value Summary -h, --help Show this help message and exit --limit LIMIT A string that identifies a single node or comma- separatedlist of nodes the config-download Ansible playbook execution will be limited to. For example: --limit "compute-0,compute-1,compute-5". --playbook PLAYBOOK Ansible playbook to use for the major upgrade. Defaults to the special value all which causes all the upgrade playbooks to run. That is the upgrade_steps_playbook.yaml then deploy_steps_playbook.yaml and then post_upgrade_steps_playbook.yaml. Set this to each of those playbooks in consecutive invocations of this command if you prefer to run them manually. Note: you will have to run all of those playbooks so that all services are upgraded and running with the target version configuration. --static-inventory STATIC_INVENTORY Path to an existing ansible inventory to use. 
if not specified, one will be generated in ~/tripleo-ansible-inventory.yaml --ssh-user SSH_USER Deprecated: only tripleo-admin should be used as ssh user. --tags TAGS A string specifying the tag or comma separated list of tags to be passed as --tags to ansible-playbook. --skip-tags SKIP_TAGS A string specifying the tag or comma separated list of tags to be passed as --skip-tags to ansible-playbook. The currently supported values are validation and pre-upgrade. In particular, validation is useful if you must re-run following a failed upgrade and some services cannot be started. --stack STACK Name or id of heat stack (default=env: OVERCLOUD_STACK_NAME) --no-workflow Run ansible-playbook directly via system command instead of running Ansible via the TripleO mistral workflows. -y, --yes Use -y or --yes to skip the confirmation required before any upgrade operation. Use this with caution! --ansible-forks ANSIBLE_FORKS The number of ansible forks to use for the config-download ansible-playbook command.
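As an illustration of how the update commands in sections 55.49 to 55.51 fit together, a minor update is normally driven as prepare first, then run per role, then converge. The following is a minimal sketch only: the stack name, environment file, and role names are placeholders rather than values taken from this reference, and the -e files must match the ones used for the original deployment.
USD openstack overcloud update prepare --templates --stack overcloud -e ~/my-environment.yaml
USD openstack overcloud update run --limit Controller -y
USD openstack overcloud update run --limit Compute -y
USD openstack overcloud update converge --templates --stack overcloud -e ~/my-environment.yaml
The major upgrade commands in sections 55.52 to 55.54 follow the same prepare, run, converge pattern.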
"openstack overcloud admin authorize [-h] [--stack STACK] [--overcloud-ssh-user OVERCLOUD_SSH_USER] [--overcloud-ssh-key OVERCLOUD_SSH_KEY] [--overcloud-ssh-network OVERCLOUD_SSH_NETWORK] [--overcloud-ssh-enable-timeout OVERCLOUD_SSH_ENABLE_TIMEOUT] [--overcloud-ssh-port-timeout OVERCLOUD_SSH_PORT_TIMEOUT]",
"openstack overcloud backup [--init [INIT]] [--setup-nfs] [--setup-rear] [--cron] [--inventory INVENTORY] [--storage-ip STORAGE_IP] [--extra-vars EXTRA_VARS]",
"openstack overcloud cell export [-h] [--control-plane-stack <control plane stack>] [--cell-stack <cell stack>] [--output-file <output file>] [--config-download-dir CONFIG_DOWNLOAD_DIR] [--force-overwrite]",
"openstack overcloud config download [-h] [--name NAME] [--config-dir CONFIG_DIR] [--config-type CONFIG_TYPE] [--no-preserve-config]",
"openstack overcloud container image build [-h] [--config-file <yaml config file>] --kolla-config-file <config file> [--list-images] [--list-dependencies] [--exclude <container-name>] [--use-buildah] [--work-dir <container builds directory>]",
"openstack overcloud container image prepare [-h] [--template-file <yaml template file>] [--push-destination <location>] [--tag <tag>] [--tag-from-label <image label>] [--namespace <namespace>] [--prefix <prefix>] [--suffix <suffix>] [--set <variable=value>] [--exclude <regex>] [--include <regex>] [--output-images-file <file path>] [--environment-file <file path>] [--environment-directory <HEAT ENVIRONMENT DIRECTORY>] [--output-env-file <file path>] [--roles-file ROLES_FILE] [--modify-role MODIFY_ROLE] [--modify-vars MODIFY_VARS]",
"openstack overcloud container image tag discover [-h] --image <container image> [--tag-from-label <image label>]",
"openstack overcloud container image upload [-h] --config-file <yaml config file> [--cleanup <full, partial, none>]",
"openstack overcloud credentials [-h] [--directory [DIRECTORY]] plan",
"openstack overcloud delete [-h] [-y] [-s] [stack]",
"openstack overcloud deploy [--templates [TEMPLATES]] [--stack STACK] [--timeout <TIMEOUT>] [--control-scale CONTROL_SCALE] [--compute-scale COMPUTE_SCALE] [--ceph-storage-scale CEPH_STORAGE_SCALE] [--block-storage-scale BLOCK_STORAGE_SCALE] [--swift-storage-scale SWIFT_STORAGE_SCALE] [--control-flavor CONTROL_FLAVOR] [--compute-flavor COMPUTE_FLAVOR] [--ceph-storage-flavor CEPH_STORAGE_FLAVOR] [--block-storage-flavor BLOCK_STORAGE_FLAVOR] [--swift-storage-flavor SWIFT_STORAGE_FLAVOR] [--libvirt-type {kvm,qemu}] [--ntp-server NTP_SERVER] [--no-proxy NO_PROXY] [--overcloud-ssh-user OVERCLOUD_SSH_USER] [--overcloud-ssh-key OVERCLOUD_SSH_KEY] [--overcloud-ssh-network OVERCLOUD_SSH_NETWORK] [--overcloud-ssh-enable-timeout OVERCLOUD_SSH_ENABLE_TIMEOUT] [--overcloud-ssh-port-timeout OVERCLOUD_SSH_PORT_TIMEOUT] [--environment-file <HEAT ENVIRONMENT FILE>] [--environment-directory <HEAT ENVIRONMENT DIRECTORY>] [--roles-file ROLES_FILE] [--networks-file NETWORKS_FILE] [--plan-environment-file PLAN_ENVIRONMENT_FILE] [--no-cleanup] [--update-plan-only] [--validation-errors-nonfatal] [--validation-warnings-fatal] [--disable-validations] [--inflight-validations] [--dry-run] [--run-validations] [--skip-postconfig] [--force-postconfig] [--skip-deploy-identifier] [--answers-file ANSWERS_FILE] [--disable-password-generation] [--deployed-server] [--config-download] [--no-config-download] [--config-download-only] [--output-dir OUTPUT_DIR] [--override-ansible-cfg OVERRIDE_ANSIBLE_CFG] [--config-download-timeout CONFIG_DOWNLOAD_TIMEOUT] [--deployment-python-interpreter DEPLOYMENT_PYTHON_INTERPRETER] [-b <baremetal_deployment.yaml>] [--limit LIMIT] [--tags TAGS] [--skip-tags SKIP_TAGS] [--ansible-forks ANSIBLE_FORKS]",
"openstack overcloud execute [-h] [-s SERVER_NAME] [-g GROUP] file_in",
"openstack overcloud export ceph [-h] [--stack <stack>] [--cephx-key-client-name <cephx>] [--output-file <output file>] [--force-overwrite] [--config-download-dir CONFIG_DOWNLOAD_DIR]",
"openstack overcloud export [-h] [--stack <stack>] [--output-file <output file>] [--force-overwrite] [--config-download-dir CONFIG_DOWNLOAD_DIR] [--no-password-excludes]",
"openstack overcloud external-update run [-h] [--static-inventory STATIC_INVENTORY] [--ssh-user SSH_USER] [--tags TAGS] [--skip-tags SKIP_TAGS] [--stack STACK] [-e EXTRA_VARS] [--no-workflow] [-y] [--limit LIMIT] [--ansible-forks ANSIBLE_FORKS]",
"openstack overcloud external-upgrade run [-h] [--static-inventory STATIC_INVENTORY] [--ssh-user SSH_USER] [--tags TAGS] [--skip-tags SKIP_TAGS] [--stack STACK] [-e EXTRA_VARS] [--no-workflow] [-y] [--limit LIMIT] [--ansible-forks ANSIBLE_FORKS]",
"openstack overcloud failures [-h] [--plan PLAN]",
"openstack overcloud generate fencing [-h] [-a FENCE_ACTION] [--delay DELAY] [--ipmi-lanplus] [--ipmi-no-lanplus] [--ipmi-cipher IPMI_CIPHER] [--ipmi-level IPMI_LEVEL] [--output OUTPUT] instackenv",
"openstack overcloud image build [-h] [--config-file <yaml config file>] [--image-name <image name>] [--no-skip] [--output-directory OUTPUT_DIRECTORY] [--temp-dir TEMP_DIR]",
"openstack overcloud image upload [-h] [--image-path IMAGE_PATH] [--os-image-name OS_IMAGE_NAME] [--ironic-python-agent-name IPA_NAME] [--http-boot HTTP_BOOT] [--update-existing] [--whole-disk] [--architecture ARCHITECTURE] [--platform PLATFORM] [--image-type {os,ironic-python-agent}] [--progress] [--local] [--local-path LOCAL_PATH]",
"openstack overcloud netenv validate [-h] [-f NETENV]",
"openstack overcloud node bios configure [-h] [--all-manageable] [--configuration <configuration>] [<node_uuid> [<node_uuid> ...]]",
"openstack overcloud node bios reset [-h] [--all-manageable] [<node_uuid> [<node_uuid> ...]]",
"openstack overcloud node clean [-h] [--all-manageable] [--provide] [<node_uuid> [<node_uuid> ...]]",
"openstack overcloud node configure [-h] [--all-manageable] [--deploy-kernel DEPLOY_KERNEL] [--deploy-ramdisk DEPLOY_RAMDISK] [--instance-boot-option {local,netboot}] [--root-device ROOT_DEVICE] [--root-device-minimum-size ROOT_DEVICE_MINIMUM_SIZE] [--overwrite-root-device-hints] [<node_uuid> [<node_uuid> ...]]",
"openstack overcloud node delete [-h] [-b <BAREMETAL DEPLOYMENT FILE>] [--stack STACK] [--templates [TEMPLATES]] [-e <HEAT ENVIRONMENT FILE>] [--timeout <TIMEOUT>] [-y] [<node> [<node> ...]]",
"openstack overcloud node discover [-h] (--ip <ips> | --range <range>) --credentials <key:value> [--port <ports>] [--introspect] [--run-validations] [--provide] [--no-deploy-image] [--instance-boot-option {local,netboot}] [--concurrency CONCURRENCY]",
"openstack overcloud node import [-h] [--introspect] [--run-validations] [--validate-only] [--provide] [--no-deploy-image] [--instance-boot-option {local,netboot}] [--http-boot HTTP_BOOT] [--concurrency CONCURRENCY] env_file",
"openstack overcloud node introspect [-h] [--all-manageable] [--provide] [--run-validations] [--concurrency CONCURRENCY] [<node_uuid> [<node_uuid> ...]]",
"openstack overcloud node provide [-h] [--all-manageable] [<node_uuid> [<node_uuid> ...]]",
"openstack overcloud node provision [-h] [-o OUTPUT] [--stack STACK] [--overcloud-ssh-user OVERCLOUD_SSH_USER] [--overcloud-ssh-key OVERCLOUD_SSH_KEY] [--concurrency CONCURRENCY] [--timeout TIMEOUT] <baremetal_deployment.yaml>",
"openstack overcloud node unprovision [-h] [--stack STACK] [--all] [-y] <baremetal_deployment.yaml>",
"openstack overcloud parameters set [-h] name file_in",
"openstack overcloud plan create [-h] [--templates TEMPLATES] [--plan-environment-file PLAN_ENVIRONMENT_FILE] [--disable-password-generation] [--source-url SOURCE_URL] name",
"openstack overcloud plan delete [-h] <name> [<name> ...]",
"openstack overcloud plan deploy [-h] [--timeout <TIMEOUT>] [--run-validations] name",
"openstack overcloud plan export [-h] [--output-file <output file>] [--force-overwrite] <name>",
"openstack overcloud plan list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN]",
"openstack overcloud profiles list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--all] [--control-scale CONTROL_SCALE] [--compute-scale COMPUTE_SCALE] [--ceph-storage-scale CEPH_STORAGE_SCALE] [--block-storage-scale BLOCK_STORAGE_SCALE] [--swift-storage-scale SWIFT_STORAGE_SCALE] [--control-flavor CONTROL_FLAVOR] [--compute-flavor COMPUTE_FLAVOR] [--ceph-storage-flavor CEPH_STORAGE_FLAVOR] [--block-storage-flavor BLOCK_STORAGE_FLAVOR] [--swift-storage-flavor SWIFT_STORAGE_FLAVOR]",
"openstack overcloud profiles match [-h] [--dry-run] [--control-scale CONTROL_SCALE] [--compute-scale COMPUTE_SCALE] [--ceph-storage-scale CEPH_STORAGE_SCALE] [--block-storage-scale BLOCK_STORAGE_SCALE] [--swift-storage-scale SWIFT_STORAGE_SCALE] [--control-flavor CONTROL_FLAVOR] [--compute-flavor COMPUTE_FLAVOR] [--ceph-storage-flavor CEPH_STORAGE_FLAVOR] [--block-storage-flavor BLOCK_STORAGE_FLAVOR] [--swift-storage-flavor SWIFT_STORAGE_FLAVOR]",
"openstack overcloud raid create [-h] --node NODE configuration",
"openstack overcloud role list [-h] [--roles-path <roles directory>]",
"openstack overcloud role show [-h] [--roles-path <roles directory>] <role>",
"openstack overcloud roles generate [-h] [--roles-path <roles directory>] [-o <output file>] [--skip-validate] <role> [<role> ...]",
"openstack overcloud roles list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--name NAME] [--detail] [--current]",
"openstack overcloud roles show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--name NAME] <role>",
"openstack overcloud status [-h] [--plan PLAN]",
"openstack overcloud support report collect [-h] [-c] [-o DESTINATION] [--stack STACK] [--skip-container-delete] [-t TIMEOUT] [-n CONCURRENCY] [--collect-only | --download-only] [-v VERBOSITY] server_name",
"openstack overcloud update converge [--templates [TEMPLATES]] [--stack STACK] [--timeout <TIMEOUT>] [--control-scale CONTROL_SCALE] [--compute-scale COMPUTE_SCALE] [--ceph-storage-scale CEPH_STORAGE_SCALE] [--block-storage-scale BLOCK_STORAGE_SCALE] [--swift-storage-scale SWIFT_STORAGE_SCALE] [--control-flavor CONTROL_FLAVOR] [--compute-flavor COMPUTE_FLAVOR] [--ceph-storage-flavor CEPH_STORAGE_FLAVOR] [--block-storage-flavor BLOCK_STORAGE_FLAVOR] [--swift-storage-flavor SWIFT_STORAGE_FLAVOR] [--libvirt-type {kvm,qemu}] [--ntp-server NTP_SERVER] [--no-proxy NO_PROXY] [--overcloud-ssh-user OVERCLOUD_SSH_USER] [--overcloud-ssh-key OVERCLOUD_SSH_KEY] [--overcloud-ssh-network OVERCLOUD_SSH_NETWORK] [--overcloud-ssh-enable-timeout OVERCLOUD_SSH_ENABLE_TIMEOUT] [--overcloud-ssh-port-timeout OVERCLOUD_SSH_PORT_TIMEOUT] [--environment-file <HEAT ENVIRONMENT FILE>] [--environment-directory <HEAT ENVIRONMENT DIRECTORY>] [--roles-file ROLES_FILE] [--networks-file NETWORKS_FILE] [--plan-environment-file PLAN_ENVIRONMENT_FILE] [--no-cleanup] [--update-plan-only] [--validation-errors-nonfatal] [--validation-warnings-fatal] [--disable-validations] [--inflight-validations] [--dry-run] [--run-validations] [--skip-postconfig] [--force-postconfig] [--skip-deploy-identifier] [--answers-file ANSWERS_FILE] [--disable-password-generation] [--deployed-server] [--config-download] [--no-config-download] [--config-download-only] [--output-dir OUTPUT_DIR] [--override-ansible-cfg OVERRIDE_ANSIBLE_CFG] [--config-download-timeout CONFIG_DOWNLOAD_TIMEOUT] [--deployment-python-interpreter DEPLOYMENT_PYTHON_INTERPRETER] [-b <baremetal_deployment.yaml>] [--limit LIMIT] [--tags TAGS] [--skip-tags SKIP_TAGS] [--ansible-forks ANSIBLE_FORKS] [-y]",
"openstack overcloud update prepare [--templates [TEMPLATES]] [--stack STACK] [--timeout <TIMEOUT>] [--control-scale CONTROL_SCALE] [--compute-scale COMPUTE_SCALE] [--ceph-storage-scale CEPH_STORAGE_SCALE] [--block-storage-scale BLOCK_STORAGE_SCALE] [--swift-storage-scale SWIFT_STORAGE_SCALE] [--control-flavor CONTROL_FLAVOR] [--compute-flavor COMPUTE_FLAVOR] [--ceph-storage-flavor CEPH_STORAGE_FLAVOR] [--block-storage-flavor BLOCK_STORAGE_FLAVOR] [--swift-storage-flavor SWIFT_STORAGE_FLAVOR] [--libvirt-type {kvm,qemu}] [--ntp-server NTP_SERVER] [--no-proxy NO_PROXY] [--overcloud-ssh-user OVERCLOUD_SSH_USER] [--overcloud-ssh-key OVERCLOUD_SSH_KEY] [--overcloud-ssh-network OVERCLOUD_SSH_NETWORK] [--overcloud-ssh-enable-timeout OVERCLOUD_SSH_ENABLE_TIMEOUT] [--overcloud-ssh-port-timeout OVERCLOUD_SSH_PORT_TIMEOUT] [--environment-file <HEAT ENVIRONMENT FILE>] [--environment-directory <HEAT ENVIRONMENT DIRECTORY>] [--roles-file ROLES_FILE] [--networks-file NETWORKS_FILE] [--plan-environment-file PLAN_ENVIRONMENT_FILE] [--no-cleanup] [--update-plan-only] [--validation-errors-nonfatal] [--validation-warnings-fatal] [--disable-validations] [--inflight-validations] [--dry-run] [--run-validations] [--skip-postconfig] [--force-postconfig] [--skip-deploy-identifier] [--answers-file ANSWERS_FILE] [--disable-password-generation] [--deployed-server] [--config-download] [--no-config-download] [--config-download-only] [--output-dir OUTPUT_DIR] [--override-ansible-cfg OVERRIDE_ANSIBLE_CFG] [--config-download-timeout CONFIG_DOWNLOAD_TIMEOUT] [--deployment-python-interpreter DEPLOYMENT_PYTHON_INTERPRETER] [-b <baremetal_deployment.yaml>] [--limit LIMIT] [--tags TAGS] [--skip-tags SKIP_TAGS] [--ansible-forks ANSIBLE_FORKS] [-y]",
"openstack overcloud update run [-h] --limit LIMIT [--playbook PLAYBOOK] [--ssh-user SSH_USER] [--static-inventory STATIC_INVENTORY] [--stack STACK] [--no-workflow] [--tags TAGS] [--skip-tags SKIP_TAGS] [-y] [--ansible-forks ANSIBLE_FORKS]",
"openstack overcloud upgrade converge [--templates [TEMPLATES]] [--stack STACK] [--timeout <TIMEOUT>] [--control-scale CONTROL_SCALE] [--compute-scale COMPUTE_SCALE] [--ceph-storage-scale CEPH_STORAGE_SCALE] [--block-storage-scale BLOCK_STORAGE_SCALE] [--swift-storage-scale SWIFT_STORAGE_SCALE] [--control-flavor CONTROL_FLAVOR] [--compute-flavor COMPUTE_FLAVOR] [--ceph-storage-flavor CEPH_STORAGE_FLAVOR] [--block-storage-flavor BLOCK_STORAGE_FLAVOR] [--swift-storage-flavor SWIFT_STORAGE_FLAVOR] [--libvirt-type {kvm,qemu}] [--ntp-server NTP_SERVER] [--no-proxy NO_PROXY] [--overcloud-ssh-user OVERCLOUD_SSH_USER] [--overcloud-ssh-key OVERCLOUD_SSH_KEY] [--overcloud-ssh-network OVERCLOUD_SSH_NETWORK] [--overcloud-ssh-enable-timeout OVERCLOUD_SSH_ENABLE_TIMEOUT] [--overcloud-ssh-port-timeout OVERCLOUD_SSH_PORT_TIMEOUT] [--environment-file <HEAT ENVIRONMENT FILE>] [--environment-directory <HEAT ENVIRONMENT DIRECTORY>] [--roles-file ROLES_FILE] [--networks-file NETWORKS_FILE] [--plan-environment-file PLAN_ENVIRONMENT_FILE] [--no-cleanup] [--update-plan-only] [--validation-errors-nonfatal] [--validation-warnings-fatal] [--disable-validations] [--inflight-validations] [--dry-run] [--run-validations] [--skip-postconfig] [--force-postconfig] [--skip-deploy-identifier] [--answers-file ANSWERS_FILE] [--disable-password-generation] [--deployed-server] [--config-download] [--no-config-download] [--config-download-only] [--output-dir OUTPUT_DIR] [--override-ansible-cfg OVERRIDE_ANSIBLE_CFG] [--config-download-timeout CONFIG_DOWNLOAD_TIMEOUT] [--deployment-python-interpreter DEPLOYMENT_PYTHON_INTERPRETER] [-b <baremetal_deployment.yaml>] [--limit LIMIT] [--tags TAGS] [--skip-tags SKIP_TAGS] [--ansible-forks ANSIBLE_FORKS] [-y]",
"openstack overcloud upgrade prepare [--templates [TEMPLATES]] [--stack STACK] [--timeout <TIMEOUT>] [--control-scale CONTROL_SCALE] [--compute-scale COMPUTE_SCALE] [--ceph-storage-scale CEPH_STORAGE_SCALE] [--block-storage-scale BLOCK_STORAGE_SCALE] [--swift-storage-scale SWIFT_STORAGE_SCALE] [--control-flavor CONTROL_FLAVOR] [--compute-flavor COMPUTE_FLAVOR] [--ceph-storage-flavor CEPH_STORAGE_FLAVOR] [--block-storage-flavor BLOCK_STORAGE_FLAVOR] [--swift-storage-flavor SWIFT_STORAGE_FLAVOR] [--libvirt-type {kvm,qemu}] [--ntp-server NTP_SERVER] [--no-proxy NO_PROXY] [--overcloud-ssh-user OVERCLOUD_SSH_USER] [--overcloud-ssh-key OVERCLOUD_SSH_KEY] [--overcloud-ssh-network OVERCLOUD_SSH_NETWORK] [--overcloud-ssh-enable-timeout OVERCLOUD_SSH_ENABLE_TIMEOUT] [--overcloud-ssh-port-timeout OVERCLOUD_SSH_PORT_TIMEOUT] [--environment-file <HEAT ENVIRONMENT FILE>] [--environment-directory <HEAT ENVIRONMENT DIRECTORY>] [--roles-file ROLES_FILE] [--networks-file NETWORKS_FILE] [--plan-environment-file PLAN_ENVIRONMENT_FILE] [--no-cleanup] [--update-plan-only] [--validation-errors-nonfatal] [--validation-warnings-fatal] [--disable-validations] [--inflight-validations] [--dry-run] [--run-validations] [--skip-postconfig] [--force-postconfig] [--skip-deploy-identifier] [--answers-file ANSWERS_FILE] [--disable-password-generation] [--deployed-server] [--config-download] [--no-config-download] [--config-download-only] [--output-dir OUTPUT_DIR] [--override-ansible-cfg OVERRIDE_ANSIBLE_CFG] [--config-download-timeout CONFIG_DOWNLOAD_TIMEOUT] [--deployment-python-interpreter DEPLOYMENT_PYTHON_INTERPRETER] [-b <baremetal_deployment.yaml>] [--limit LIMIT] [--tags TAGS] [--skip-tags SKIP_TAGS] [--ansible-forks ANSIBLE_FORKS] [-y]",
"openstack overcloud upgrade run [-h] --limit LIMIT [--playbook PLAYBOOK] [--static-inventory STATIC_INVENTORY] [--ssh-user SSH_USER] [--tags TAGS] [--skip-tags SKIP_TAGS] [--stack STACK] [--no-workflow] [-y] [--ansible-forks ANSIBLE_FORKS]"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/command_line_interface_reference/overcloud |
Chapter 4. Verifying OpenShift Data Foundation deployment | Chapter 4. Verifying OpenShift Data Foundation deployment Use this section to verify that OpenShift Data Foundation is deployed correctly. 4.1. Verifying the state of the pods Procedure Click Workloads Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Set the filter for Running and Completed pods to verify that the following pods are in Running and Completed state. For more information on the expected number of pods for each component and how it varies depending on the number of nodes, see the following table: Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any storage node) ocs-metrics-exporter-* (1 pod on any storage node) odf-operator-controller-manager-* (1 pod on any storage node) odf-console-* (1 pod on any storage node) csi-addons-controller-manager-* (1 pod on any storage node) Rook-ceph Operator rook-ceph-operator-* (1 pod on any storage node) Multicloud Object Gateway noobaa-operator-* (1 pod on any storage node) noobaa-core-* (1 pod on any storage node) noobaa-db-pg-* (1 pod on any storage node) noobaa-endpoint-* (1 pod on any storage node) MON rook-ceph-mon-* (3 pods distributed across storage nodes) MGR rook-ceph-mgr-* (1 pod on any storage node) MDS rook-ceph-mds-ocs-storagecluster-cephfilesystem-* (2 pods distributed across storage nodes) RGW rook-ceph-rgw-ocs-storagecluster-cephobjectstore-* (1 pod on any storage node) CSI cephfs csi-cephfsplugin-* (1 pod on each storage node) csi-cephfsplugin-provisioner-* (2 pods distributed across storage nodes) rbd csi-rbdplugin-* (1 pod on each storage node) csi-rbdplugin-provisioner-* (2 pods distributed across storage nodes) rook-ceph-crashcollector rook-ceph-crashcollector-* (1 pod on each storage node) OSD rook-ceph-osd-* (1 pod for each device) rook-ceph-osd-prepare-ocs-deviceset-* (1 pod for each device) 4.2. Verifying the OpenShift Data Foundation cluster is healthy Procedure In the OpenShift Web Console, click Storage Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Block and File tab, verify that the Storage Cluster has a green tick. In the Details card, verify that the cluster information is displayed. For more information on the health of the OpenShift Data Foundation cluster using the Block and File dashboard, see Monitoring OpenShift Data Foundation . 4.3. Verifying the Multicloud Object Gateway is healthy Procedure In the OpenShift Web Console, click Storage Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. In the Details card, verify that the MCG information is displayed. For more information on the health of the OpenShift Data Foundation cluster using the object service dashboard, see Monitoring OpenShift Data Foundation . Important The Multicloud Object Gateway only has a single copy of the database (NooBaa DB). This means that if the NooBaa DB PVC becomes corrupted and cannot be recovered, the result can be total loss of the application data residing on the Multicloud Object Gateway.
Because of this, Red Hat recommends taking a backup of the NooBaa DB PVC regularly. If the NooBaa DB fails and cannot be recovered, then you can revert to the latest backed-up version. For instructions on backing up your NooBaa DB, follow the steps in this knowledgebase article . 4.4. Verifying that the specific storage classes exist Procedure Click Storage Storage Classes from the left pane of the OpenShift Web Console. Verify that the following storage classes are created when the OpenShift Data Foundation cluster is created: ocs-storagecluster-ceph-rbd ocs-storagecluster-cephfs openshift-storage.noobaa.io ocs-storagecluster-ceph-rgw
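If you prefer to spot-check the same items from the command line instead of the web console, the following oc commands give a rough equivalent. This is a minimal sketch and assumes the default openshift-storage namespace and the default storage class names listed above:
USD oc get pods -n openshift-storage
USD oc get storageclass
The pod list should show the components from the table in section 4.1 in Running or Completed state, and the storage class output should include the four classes listed in section 4.4.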
Chapter 1. OpenShift image registry overview | Chapter 1. OpenShift image registry overview OpenShift Container Platform can build images from your source code, deploy them, and manage their lifecycle. It provides an internal, integrated container image registry that can be deployed in your OpenShift Container Platform environment to locally manage images. This overview contains reference information and links for registries commonly used with OpenShift Container Platform, with a focus on the OpenShift image registry. 1.1. Glossary of common terms for OpenShift image registry This glossary defines the common terms that are used in the registry content. container Lightweight and executable images that consist of software and all of its dependencies. Because containers virtualize the operating system, you can run containers in a data center, a public or private cloud, or on your local host. Image Registry Operator The Image Registry Operator runs in the openshift-image-registry namespace, and manages the registry instance in that location. image repository An image repository is a collection of related container images and tags identifying images. mirror registry The mirror registry is a registry that holds the mirror of OpenShift Container Platform images. namespace A namespace isolates groups of resources within a single cluster. pod The pod is the smallest logical unit in Kubernetes. A pod consists of one or more containers that run on a worker node. private registry A registry is a server that implements the container image registry API. A private registry is a registry that requires authentication to allow users to access its contents. public registry A registry is a server that implements the container image registry API. A public registry is a registry that serves its content publicly. Quay.io A public Red Hat Quay Container Registry instance provided and maintained by Red Hat, which serves most of the container images and Operators to OpenShift Container Platform clusters. OpenShift image registry OpenShift image registry is the registry provided by OpenShift Container Platform to manage images. registry authentication To push and pull images to and from private image repositories, the registry needs to authenticate its users with credentials. route Exposes a service to allow for network access to pods from users and applications outside the OpenShift Container Platform instance. scale down To decrease the number of replicas. scale up To increase the number of replicas. service A service exposes a running application on a set of pods. 1.2. Integrated OpenShift image registry OpenShift Container Platform provides a built-in container image registry that runs as a standard workload on the cluster. The registry is configured and managed by an infrastructure Operator. It provides an out-of-the-box solution for users to manage the images that run their workloads, and runs on top of the existing cluster infrastructure. This registry can be scaled up or down like any other cluster workload and does not require specific infrastructure provisioning. In addition, it is integrated into the cluster user authentication and authorization system, which means that access to create and retrieve images is controlled by defining user permissions on the image resources. The registry is typically used as a publication target for images built on the cluster, as well as being a source of images for workloads running on the cluster.
When a new image is pushed to the registry, the cluster is notified of the new image and other components can react to and consume the updated image. Image data is stored in two locations. The actual image data is stored in a configurable storage location, such as cloud storage or a filesystem volume. The image metadata, which is exposed by the standard cluster APIs and is used to perform access control, is stored as standard API resources, specifically images and imagestreams. Additional resources Image Registry Operator in OpenShift Container Platform 1.3. Third-party registries OpenShift Container Platform can create containers using images from third-party registries, but it is unlikely that these registries offer the same image notification support as the integrated OpenShift image registry. In this situation, OpenShift Container Platform will fetch tags from the remote registry upon imagestream creation. To refresh the fetched tags, run oc import-image <stream> . When new images are detected, the previously described build and deployment reactions occur. 1.3.1. Authentication OpenShift Container Platform can communicate with registries to access private image repositories using credentials supplied by the user. This allows OpenShift Container Platform to push and pull images to and from private repositories. 1.3.1.1. Registry authentication with Podman Some container image registries require access authorization. Podman is an open source tool for managing containers and container images and interacting with image registries. You can use Podman to authenticate your credentials, pull the registry image, and store local images in a local file system. The following is a generic example of authenticating the registry with Podman. Procedure Use the Red Hat Ecosystem Catalog to search for specific container images from the Red Hat Repository and select the required image. Click Get this image to find the command for your container image. Log in by running the following command and entering your username and password to authenticate: USD podman login registry.redhat.io Username:<your_registry_account_username> Password:<your_registry_account_password> Download the image and save it locally by running the following command: USD podman pull registry.redhat.io/<repository_name> 1.4. Red Hat Quay registries If you need an enterprise-quality container image registry, Red Hat Quay is available both as a hosted service and as software you can install in your own data center or cloud environment. Advanced features in Red Hat Quay include geo-replication, image scanning, and the ability to roll back images. Visit the Quay.io site to set up your own hosted Quay registry account. After that, follow the Quay Tutorial to log in to the Quay registry and start managing your images. You can access your Red Hat Quay registry from OpenShift Container Platform like any remote container image registry. Additional resources Red Hat Quay product documentation 1.5. Authentication enabled Red Hat registry All container images available through the Container images section of the Red Hat Ecosystem Catalog are hosted on an image registry, registry.redhat.io . The registry, registry.redhat.io , requires authentication for access to images and hosted content on OpenShift Container Platform. Following the move to the new registry, the existing registry will be available for a period of time. Note OpenShift Container Platform pulls images from registry.redhat.io , so you must configure your cluster to use it. 
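For example, the credentials that the cluster uses for registry.redhat.io are part of the global pull secret stored in the openshift-config namespace. A quick way to confirm that the secret is present (a minimal check, assuming cluster-admin or equivalent read access): USD oc get secret pull-secret -n openshift-config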
The new registry uses standard OAuth mechanisms for authentication, with the following methods: Authentication token. Tokens, which are generated by administrators, are service accounts that give systems the ability to authenticate against the container image registry. Service accounts are not affected by changes in user accounts, so the token authentication method is reliable and resilient. This is the only supported authentication option for production clusters. Web username and password. This is the standard set of credentials you use to log in to resources such as access.redhat.com . While it is possible to use this authentication method with OpenShift Container Platform, it is not supported for production deployments. Restrict this authentication method to stand-alone projects outside OpenShift Container Platform. You can use podman login with your credentials, either username and password or authentication token, to access content on the new registry. All imagestreams point to the new registry, which uses the installation pull secret to authenticate. You must place your credentials in either of the following places: openshift namespace . Your credentials must exist in the openshift namespace so that the imagestreams in the openshift namespace can import. Your host . Your credentials must exist on your host because Kubernetes uses the credentials from your host when it goes to pull images. Additional resources Registry service accounts | [
"podman login registry.redhat.io Username:<your_registry_account_username> Password:<your_registry_account_password>",
"podman pull registry.redhat.io/<repository_name>"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/registry/registry-overview-1 |
Chapter 36. StatefulSetTemplate schema reference | Chapter 36. StatefulSetTemplate schema reference Used in: KafkaClusterTemplate , ZookeeperClusterTemplate Property Description metadata Metadata applied to the resource. MetadataTemplate podManagementPolicy PodManagementPolicy which will be used for this StatefulSet. Valid values are Parallel and OrderedReady . Defaults to Parallel . string (one of [OrderedReady, Parallel]) | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-StatefulSetTemplate-reference |
21.11. KVM Networking Performance | 21.11. KVM Networking Performance By default, KVM virtual machines are assigned a virtual Realtek 8139 (rtl8139) NIC (network interface controller) if they are Windows guests or if the guest type is not specified. Red Hat Enterprise Linux guests are assigned a virtio NIC by default. The rtl8139 virtualized NIC works fine in most environments, but this device can suffer from performance degradation problems on some networks, such as a 10 Gigabit Ethernet network. To improve performance, you can switch to the paravirtualized network driver. Note Note that the virtualized Intel PRO/1000 ( e1000 ) driver is also supported as an emulated driver choice. To use the e1000 driver, replace virtio in the procedure below with e1000 . For the best performance, it is recommended to use the virtio driver. Procedure 21.4. Switching to the virtio driver Shut down the guest operating system. Edit the guest's configuration file with the virsh command (where GUEST is the guest's name): The virsh edit command uses the USDEDITOR shell variable to determine which editor to use. Find the network interface section of the configuration. This section resembles the snippet below: Change the type attribute of the model element from 'rtl8139' to 'virtio' . This will change the driver from the rtl8139 driver to the virtio driver. Save the changes and exit the text editor. Restart the guest operating system. Creating New Guests Using Other Network Drivers Alternatively, new guests can be created with a different network driver. This may be required if you are having difficulty installing guests over a network connection. This method requires you to have at least one guest already created (possibly installed from CD or DVD) to use as a template. Create an XML template from an existing guest (in this example, named Guest1 ): Copy and edit the XML file and update the unique fields: virtual machine name, UUID, disk image, MAC address, and any other unique parameters. Note that you can delete the UUID and MAC address lines and virsh will generate a UUID and MAC address. Add the model line in the network interface section: Create the new virtual machine: | [
"virsh edit GUEST",
"<interface type='network'> [output truncated] <model type='rtl8139' /> </interface>",
"<interface type='network'> [output truncated] <model type= 'virtio' /> </interface>",
"virsh dumpxml Guest1 > /tmp/ guest-template .xml",
"cp /tmp/ guest-template .xml /tmp/ new-guest .xml vi /tmp/ new-guest .xml",
"<interface type='network'> [output truncated] <model type='virtio' /> </interface>",
"virsh define /tmp/new-guest.xml virsh start new-guest"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sect-virtualization-troubleshooting-kvm_networking_performance |
6.6. Insufficient Free Extents for a Logical Volume | 6.6. Insufficient Free Extents for a Logical Volume You may get the error message "Insufficient free extents" when creating a logical volume, even though you think you have enough extents based on the output of the vgdisplay or vgs commands. This is because these commands round figures to 2 decimal places to provide human-readable output. To specify an exact size, use the free physical extent count instead of a multiple of bytes to determine the size of the logical volume. The vgdisplay command, by default, includes this line of output that indicates the free physical extents. Alternatively, you can use the vg_free_count and vg_extent_count arguments of the vgs command to display the free extents and the total number of extents. With 8780 free physical extents, you can enter the following command, using the lower-case l argument to use extents instead of bytes: This uses all the free extents in the volume group. Alternatively, you can extend the logical volume to use a percentage of the remaining free space in the volume group by using the -l argument of the lvcreate command. For more information, see Section 4.4.1, "Creating Linear Logical Volumes" . | [
"vgdisplay --- Volume group --- Free PE / Size 8780 / 34.30 GB",
"vgs -o +vg_free_count,vg_extent_count VG #PV #LV #SN Attr VSize VFree Free #Ext testvg 2 0 0 wz--n- 34.30G 34.30G 8780 8780",
"lvcreate -l 8780 -n testlv testvg",
"vgs -o +vg_free_count,vg_extent_count VG #PV #LV #SN Attr VSize VFree Free #Ext testvg 2 1 0 wz--n- 34.30G 0 0 8780"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/logical_volume_manager_administration/nofreeext |
Chapter 14. Using Red Hat build of OptaPlanner in an IDE: an employee rostering example | Chapter 14. Using Red Hat build of OptaPlanner in an IDE: an employee rostering example As a business rules developer, you can use an IDE to build, run, and modify the optaweb-employee-rostering starter application that uses the Red Hat build of OptaPlanner functionality. Prerequisites You use an integrated development environment, such as Red Hat CodeReady Studio or IntelliJ IDEA. You have an understanding of the Java language. You have an understanding of React and TypeScript. This requirement is necessary to develop the OptaWeb UI. 14.1. Overview of the employee rostering starter application The employee rostering starter application assigns employees to shifts on various positions in an organization. For example, you can use the application to distribute shifts in a hospital between nurses, guard duty shifts across a number of locations, or shifts on an assembly line between workers. Optimal employee rostering must take a number of variables into account. For example, different skills can be required for shifts in different positions. Also, some employees might be unavailable for some time slots or might prefer a particular time slot. Moreover, an employee can have a contract that limits the number of hours that the employee can work in a single time period. The Red Hat build of OptaPlanner rules for this starter application use both hard and soft constraints. During an optimization, the planning engine may not violate hard constraints, for example, if an employee is unavailable (out sick), or that an employee cannot work two spots in a single shift. The planning engine tries to adhere to soft constraints, such as an employee's preference to not work a specific shift, but can violate them if the optimal solution requires it. 14.2. Building and running the employee rostering starter application You can build the employee rostering starter application from the source code and run it as a JAR file. Alternatively, you can use your IDE, for example, Eclipse (including Red Hat CodeReady Studio), to build and run the application. 14.2.1. Preparing deployment files You must download and prepare the deployment files before building and deploying the application. Procedure Navigate to the Software Downloads page in the Red Hat Customer Portal (login required), and select the product and version from the drop-down options: Product: Process Automation Manager Version: 7.13.5 Download Red Hat Process Automation Manager 7.13.5 Kogito and OptaPlanner 8 Decision Services Quickstarts ( rhpam-7.13.5-kogito-and-optaplanner-quickstarts.zip ). Extract the rhpam-7.13.5-kogito-and-optaplanner-quickstarts.zip file. Download Red Hat Decision Manager 7.13 Maven Repository Kogito and OptaPlanner 8 Maven Repository ( rhpam-7.13.5-kogito-maven-repository.zip ). Extract the rhpam-7.13.5-kogito-maven-repository.zip file. Copy the contents of the rhpam-7.13.5-kogito-maven-repository/maven-repository subdirectory into the ~/.m2/repository directory. Navigate to the optaweb-8.13.0.Final-redhat-00013/optaweb-employee-rostering directory. This folder is the base folder in subsequent parts of this document. Note File and folder names might have higher version numbers than specifically noted in this document. 14.2.2. 
Running the Employee Rostering starter application JAR file You can run the Employee Rostering starter application from a JAR file included in the Red Hat Process Automation Manager 7.13.5 Kogito and OptaPlanner 8 Decision Services Quickstarts download. Prerequisites You have downloaded and extracted the rhpam-7.13.5-kogito-and-optaplanner-quickstarts.zip file as described in Section 14.2.1, "Preparing deployment files" . A Java Development Kit is installed. Maven is installed. The host has access to the Internet. The build process uses the Internet for downloading Maven packages from external repositories. Procedure In a command terminal, change to the rhpam-7.13.5-kogito-and-optaplanner-quickstarts/optaweb-8.13.0.Final-redhat-00013/optaweb-employee-rostering directory. Enter the following command: mvn clean install -DskipTests Wait for the build process to complete. Navigate to the rhpam-7.13.5-kogito-and-optaplanner-quickstarts/optaweb-8.13.0.Final-redhat-00013/optaweb-employee-rostering/optaweb-employee-rostering-standalone/target directory. Enter the following command to run the Employee Rostering JAR file: java -jar quarkus-app/quarkus-run.jar Note The value of the quarkus.datasource.db-kind parameter is set to H2 by default at build time. To use a different database, you must rebuild the standalone module and specify the database type on the command line. For example, to use a PostgreSQL database, enter the following command: mvn clean install -DskipTests -Dquarkus.profile=postgres To access the application, enter http://localhost:8080/ in a web browser. 14.2.3. Building and running the Employee Rostering starter application using Maven You can use the command line to build and run the employee rostering starter application. If you use this procedure, the data is stored in memory and is lost when the server is stopped. To build and run the application with a database server for persistent storage, see Section 14.2.4, "Building and running the employee rostering starter application with persistent data storage from the command line" . Prerequisites You have prepared the deployment files as described in Section 14.2.1, "Preparing deployment files" . A Java Development Kit is installed. Maven is installed. The host has access to the Internet. The build process uses the Internet for downloading Maven packages from external repositories. Procedure Navigate to the optaweb-employee-rostering-backend directory. Enter the following command: mvn quarkus:dev Navigate to the optaweb-employee-rostering-frontend directory. Enter the following command: npm start Note If you use npm to start the server, npm monitors code changes. To access the application, enter http://localhost:3000/ in a web browser. 14.2.4. Building and running the employee rostering starter application with persistent data storage from the command line If you use the command line to build the employee rostering starter application and run it, you can provide a database server for persistent data storage. Prerequisites You have prepared the deployment files as described in Section 14.2.1, "Preparing deployment files" . A Java Development Kit is installed. Maven is installed. The host has access to the Internet. The build process uses the Internet for downloading Maven packages from external repositories. You have a deployed MySQL or PostgreSQL database server. Procedure In a command terminal, navigate to the optaweb-employee-rostering-standalone/target directory.
Enter the following command to run the Employee Rostering JAR file: java \ -Dquarkus.datasource.username=<DATABASE_USER> \ -Dquarkus.datasource.password=<DATABASE_PASSWORD> \ -Dquarkus.datasource.jdbc.url=<DATABASE_URL> \ -jar quarkus-app/quarkus-run.jar In this example, replace the following placeholders: <DATABASE_URL> : URL to connect to the database <DATABASE_USER> : The user to connect to the database <DATABASE_PASSWORD> : The password for <DATABASE_USER> Note The value of the quarkus.datasource.db-kind parameter is set to H2 by default at build time. To use a different database, you must rebuild the standalone module and specify the database type on the command line. For example, to use a PostgreSQL database, enter the following command: mvn clean install -DskipTests -Dquarkus.profile=postgres 14.2.5. Building and running the employee rostering starter application using IntelliJ IDEA You can use IntelliJ IDEA to build and run the employee rostering starter application. Prerequisites You have downloaded the Employee Rostering source code, available from the Employee Rostering GitHub page. IntelliJ IDEA, Maven, and Node.js are installed. The host has access to the Internet. The build process uses the Internet for downloading Maven packages from external repositories. Procedure Start IntelliJ IDEA. From the IntelliJ IDEA main menu, select File → Open . Select the root directory of the application source and click OK . From the main menu, select Run → Edit Configurations . In the window that appears, expand Templates and select Maven . The Maven sidebar appears. In the Maven sidebar, select optaweb-employee-rostering-backend from the Working Directory menu. In Command Line , enter mvn quarkus:dev . To start the back end, click OK . In a command terminal, navigate to the optaweb-employee-rostering-frontend directory. Enter the following command to start the front end: npm start To access the application, enter http://localhost:3000/ in a web browser. 14.3. Overview of the source code of the employee rostering starter application The employee rostering starter application consists of the following principal components: A backend that implements the rostering logic using Red Hat build of OptaPlanner and provides a REST API A frontend module that implements a user interface using React and interacts with the backend module through the REST API You can build and use these components independently. In particular, you can implement a different user interface and use the REST API to call the server. In addition to the two main components, the employee rostering template contains a generator of random source data (useful for demonstration and testing purposes) and a benchmarking application. Modules and key classes The Java source code of the employee rostering template contains several Maven modules. Each of these modules includes a separate Maven project file ( pom.xml ), but they are intended for building in a common project. The modules contain a number of files, including Java classes. This document lists all the modules, as well as the classes and other files that contain the key information for the employee rostering calculations. optaweb-employee-rostering-benchmark module: Contains an additional application that generates random data and benchmarks the solution. optaweb-employee-rostering-distribution module: Contains README files. optaweb-employee-rostering-docs module: Contains documentation files.
optaweb-employee-rostering-frontend module: Contains the client application with the user interface, developed in React. optaweb-employee-rostering-backend module: Contains the server application that uses OptaPlanner to perform the rostering calculation. src/main/java/org.optaweb.employeerostering.service.roster/rosterGenerator.java : Generates random input data for demonstration and testing purposes. If you change the required input data, change the generator accordingly. src/main/java/org.optaweb.employeerostering.domain.employee/EmployeeAvailability.java : Defines availability information for an employee. For every time slot, an employee can be unavailable, available, or the time slot can be designated a preferred time slot for the employee. src/main/java/org.optaweb.employeerostering.domain.employee/Employee.java : Defines an employee. An employee has a name, a list of skills, and works under a contract. Skills are represented by skill objects. src/main/java/org.optaweb.employeerostering.domain.roster/Roster.java : Defines the calculated rostering information. src/main/java/org.optaweb.employeerostering.domain.shift/Shift.java : Defines a shift to which an employee can be assigned. A shift is defined by a time slot and a spot. For example, in a diner there could be a shift in the Kitchen spot for the February 20 8AM-4PM time slot. Multiple shifts can be defined for a specific spot and time slot. In this case, multiple employees are required for this spot and time slot. src/main/java/org.optaweb.employeerostering.domain.skill/Skill.java : Defines a skill that an employee can have. src/main/java/org.optaweb.employeerostering.domain.spot/Spot.java : Defines a spot where employees can be placed. For example, a Kitchen can be a spot. src/main/java/org.optaweb.employeerostering.domain.contract/Contract.java : Defines a contract that sets limits on work time for an employee in various time periods. src/main/java/org.optaweb.employeerostering.domain.tenant/Tenant.java : Defines a tenant. Each tenant represents an independent set of data. Changes in the data for one tenant do not affect any other tenants. *View.java : Classes related to domain objects that define value sets that are calculated from other information; the client application can read these values through the REST API, but not write them. *Service.java : Interfaces located in the service package that define the REST API. Both the server and the client application separately define implementations of these interfaces. optaweb-employee-rostering-standalone module: Contains the assembly configurations for the standalone application. 14.4. Modifying the employee rostering starter application To modify the employee rostering starter application to suit your needs, you must change the rules that govern the optimization process. You must also ensure that the data structures include the required data and provide the required calculations for the rules. If the required data is not present in the user interface, you must also modify the user interface. The following procedure outlines the general approach to modifying the employee rostering starter application. Prerequisites You have a build environment that successfully builds the application. You can read and modify Java code. Procedure Plan the required changes. Answer the following questions: What are the additional scenarios that must be avoided? These scenarios are hard constraints . What are the additional scenarios that the optimizer must try to avoid when possible? 
These scenarios are soft constraints . What data is required to calculate if each scenario is happening in a potential solution? Which of the data can be derived from the information that the user enters in the existing version? Which of the data can be hardcoded? Which of the data must be entered by the user and is not entered in the current version? If any required data can be calculated from the current data or can be hardcoded, add the calculations or hardcoding to existing view or utility classes. If the data must be calculated on the server side, add REST API endpoints to read it. If any required data must be entered by the user, add the data to the classes representing the data entities (for example, the Employee class), add REST API endpoints to read and write the data, and modify the user interface to enter the data. When all of the data is available, modify the rules. For most modifications, you must add a new rule. The rules are located in the src/main/java/org/optaweb/employeerostering/service/solver/EmployeeRosteringConstraintProvider.java file of the optaweb-employee-rostering-backend module. After modifying the application, build and run it. | [
"mvn clean install -DskipTests",
"java -jar quarkus-app/quarkus-run.jar",
"mvn quarkus:dev",
"npm start",
"java -Dquarkus.datasource.username=<DATABASE_USER> -Dquarkus.datasource.password=<DATABASE_PASSWORD> -Dquarkus.datasource.jdbc.url=<DATABASE_URL> -jar quarkus-app/quarkus-run.jar",
"npm start"
]
| https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/developing_solvers_with_red_hat_build_of_optaplanner_in_red_hat_decision_manager/assembly-optimizer-modifying-er-template-ide |
6.10. Enabling and Disabling Cluster Resources | 6.10. Enabling and Disabling Cluster Resources The following command enables the resource specified by resource_id . The following command disables the resource specified by resource_id . | [
"pcs resource enable resource_id",
"pcs resource disable resource_id"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_reference/s1-starting_stopping_resources-HAAR |
Chapter 7. Federal Information Processing Standard (FIPS) | Chapter 7. Federal Information Processing Standard (FIPS) Red Hat Ceph Storage uses FIPS-validated cryptography modules when run on Red Hat Enterprise Linux 7.6, Red Hat Enterprise Linux 8.1, or Red Hat Enterprise Linux 8.2. Enable FIPS mode on Red Hat Enterprise Linux either during system installation or after it. For bare-metal deployments, follow the instructions in the Red Hat Enterprise Linux 8 Security Hardening Guide . For container deployments, follow the instructions in the Red Hat Enterprise Linux 8 Security Hardening Guide . Additional Resources Refer to the US Government Standards for the latest information on FIPS validations. | null | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/4/html/data_security_and_hardening_guide/federal-information-processing-standard-fips-sec
Chapter 7. Deployment options on Kubernetes | Chapter 7. Deployment options on Kubernetes When you create a site on Kubernetes, there are many options you can use. For example, you can set the number of pods and the resources allocated to each pod. This guide focuses on the following goals: Section 7.1, "Scaling for increased traffic" Section 7.2, "Creating a high availability site" Section 7.3, "Service synchronization" 7.1. Scaling for increased traffic For optimal network latency and throughput, you can adjust the CPU allocation for the router using the router-cpu option. Router CPU is the primary factor governing Skupper network performance. Note Increasing the number of routers does not improve network performance. An incoming router-to-router link is associated with just one active router. Additional routers do not receive traffic while that router is responding. Determine the router CPU allocation you require. By default, the router CPU allocation is BestEffort as described in Pod Quality of Service Classes . Consider the following CPU allocation options: Router CPU Description 1 Helps avoid issues with BestEffort on low resource clusters 2 Suitable for production environments 5 Maximum performance If you are using the Skupper CLI, set the CPU allocation for the router using the --router-cpu option. For example: USD skupper init --router-cpu 2 If you are using YAML, set the CPU allocation for the router by setting a value for the router-cpu attribute. For example: apiVersion: v1 kind: ConfigMap metadata: name: "skupper-site" data: name: "my-site" router-cpu: 2 7.2. Creating a high availability site By default, Kubernetes restarts any router that becomes unresponsive. (If you encounter router restarts, consider Section 7.1, "Scaling for increased traffic" in order to improve responsiveness.) If the cluster where you are running Skupper is very busy, it may take time for Kubernetes to schedule a new router pod. You can "preschedule" a backup router by deploying two routers in a site. If you are using the Skupper CLI, set the number of routers to 2 using the --routers option: USD skupper init --routers 2 If you are using YAML, set the number of routers to 2 by setting the routers attribute: apiVersion: v1 kind: ConfigMap metadata: name: "skupper-site" data: name: "my-site" routers: 2 Setting the number of routers to more than two does not provide increased availability and can adversely affect performance. Note: Clients must reconnect when a router restarts or traffic is redirected to a backup router. 7.3. Service synchronization By default, creating a site enables that site to synchronize all services from other default sites. This means that all services exposed on the service network are available in the current site. For example, if you expose the backend service in the east site, that service is automatically created in the west site. However, if you want more granular control over which services are available, you can disable service-sync . This might be required if: You expose many services and not all are required on all sites. You are concerned that a specific service is not available on a specific site.
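As an illustration of the default synchronization behavior, exposing a workload on one site is enough for the corresponding service to appear on every other default site. A minimal sketch, assuming a Kubernetes deployment named backend that listens on port 8080 in the current namespace: USD skupper expose deployment/backend --port 8080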
To disable service synchronization: USD skupper init --service-sync false or use the following YAML: apiVersion: v1 kind: ConfigMap metadata: name: skupper-site data: name: my-site service-sync: false To check whether synchronization is enabled, check the value for service-sync in the output from the following command: USD kubectl get cm skupper-site -o json If you disable service-sync and you want to consume an exposed service on a specific site, you can create that service using the following command: skupper service create <name> <port> where <name> is the service name on the site where the service is exposed and <port> is the port used to expose that service. Notes: When considering whether services are synchronized between two sites, service-sync must be enabled on both sites. If you use the command skupper service delete on a site, that command only works if the service was created on that site. Podman sites do not support service-sync . | [
"skupper init --router-cpu 2",
"apiVersion: v1 kind: ConfigMap metadata: name: \"skupper-site\" data: name: \"my-site\" router-cpu: 2",
"skupper init --routers 2",
"apiVersion: v1 kind: ConfigMap metadata: name: \"skupper-site\" data: name: \"my-site\" routers: 2",
"skupper init --service-sync false",
"apiVersion: v1 kind: ConfigMap metadata: name: skupper-site data: name: my-site service-sync: false",
"kubectl get cm skupper-site -o json",
"skupper service create <name> <port>"
]
| https://docs.redhat.com/en/documentation/red_hat_service_interconnect/1.8/html/installation/deployment_options_on_kubernetes |
Chapter 4. Kafka 4 impact and adoption schedule | Chapter 4. Kafka 4 impact and adoption schedule Streams for Apache Kafka 3.0 is scheduled for release in 2025. The introduction of Apache Kafka 4 in that release brings significant changes to how Kafka clusters are deployed, configured, and operated. For more information on how these changes affect the Streams for Apache Kafka 3.0 release, refer to the article Streams for Apache Kafka 3.0: Kafka 4 Impact and Adoption . | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/release_notes_for_streams_for_apache_kafka_2.9_on_openshift/kafka-4-str
Chapter 9. DeploymentConfig [apps.openshift.io/v1] | Chapter 9. DeploymentConfig [apps.openshift.io/v1] Description Deployment Configs define the template for a pod and manages deploying new images or configuration changes. A single deployment configuration is usually analogous to a single micro-service. Can support many different deployment patterns, including full restart, customizable rolling updates, and fully custom behaviors, as well as pre- and post- deployment hooks. Each individual deployment is represented as a replication controller. A deployment is "triggered" when its configuration is changed or a tag in an Image Stream is changed. Triggers can be disabled to allow manual control over a deployment. The "strategy" determines how the deployment is carried out and may be changed at any time. The latestVersion field is updated when a new deployment is triggered by any means. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Deprecated: Use deployments or other means for declarative updates for pods instead. Type object Required spec 9.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object DeploymentConfigSpec represents the desired state of the deployment. status object DeploymentConfigStatus represents the current deployment state. 9.1.1. .spec Description DeploymentConfigSpec represents the desired state of the deployment. Type object Property Type Description minReadySeconds integer MinReadySeconds is the minimum number of seconds for which a newly created pod should be ready without any of its container crashing, for it to be considered available. Defaults to 0 (pod will be considered available as soon as it is ready) paused boolean Paused indicates that the deployment config is paused resulting in no new deployments on template changes or changes in the template caused by other triggers. replicas integer Replicas is the number of desired replicas. revisionHistoryLimit integer RevisionHistoryLimit is the number of old ReplicationControllers to retain to allow for rollbacks. This field is a pointer to allow for differentiation between an explicit zero and not specified. Defaults to 10. (This only applies to DeploymentConfigs created via the new group API resource, not the legacy resource.) selector object (string) Selector is a label query over pods that should match the Replicas count. strategy object DeploymentStrategy describes how to perform a deployment. template PodTemplateSpec Template is the object that describes the pod that will be created if insufficient replicas are detected. test boolean Test ensures that this deployment config will have zero replicas except while a deployment is running. 
This allows the deployment config to be used as a continuous deployment test - triggering on images, running the deployment, and then succeeding or failing. Post strategy hooks and After actions can be used to integrate successful deployment with an action. triggers array Triggers determine how updates to a DeploymentConfig result in new deployments. If no triggers are defined, a new deployment can only occur as a result of an explicit client update to the DeploymentConfig with a new LatestVersion. If null, defaults to having a config change trigger. triggers[] object DeploymentTriggerPolicy describes a policy for a single trigger that results in a new deployment. 9.1.2. .spec.strategy Description DeploymentStrategy describes how to perform a deployment. Type object Property Type Description activeDeadlineSeconds integer ActiveDeadlineSeconds is the duration in seconds that the deployer pods for this deployment config may be active on a node before the system actively tries to terminate them. annotations object (string) Annotations is a set of key, value pairs added to custom deployer and lifecycle pre/post hook pods. customParams object CustomDeploymentStrategyParams are the input to the Custom deployment strategy. labels object (string) Labels is a set of key, value pairs added to custom deployer and lifecycle pre/post hook pods. recreateParams object RecreateDeploymentStrategyParams are the input to the Recreate deployment strategy. resources ResourceRequirements Resources contains resource requirements to execute the deployment and any hooks. rollingParams object RollingDeploymentStrategyParams are the input to the Rolling deployment strategy. type string Type is the name of a deployment strategy. 9.1.3. .spec.strategy.customParams Description CustomDeploymentStrategyParams are the input to the Custom deployment strategy. Type object Property Type Description command array (string) Command is optional and overrides CMD in the container Image. environment array (EnvVar) Environment holds the environment which will be given to the container for Image. image string Image specifies a container image which can carry out a deployment. 9.1.4. .spec.strategy.recreateParams Description RecreateDeploymentStrategyParams are the input to the Recreate deployment strategy. Type object Property Type Description mid object LifecycleHook defines a specific deployment lifecycle action. Only one type of action may be specified at any time. post object LifecycleHook defines a specific deployment lifecycle action. Only one type of action may be specified at any time. pre object LifecycleHook defines a specific deployment lifecycle action. Only one type of action may be specified at any time. timeoutSeconds integer TimeoutSeconds is the time to wait for updates before giving up. If the value is nil, a default will be used. 9.1.5. .spec.strategy.recreateParams.mid Description LifecycleHook defines a specific deployment lifecycle action. Only one type of action may be specified at any time. Type object Required failurePolicy Property Type Description execNewPod object ExecNewPodHook is a hook implementation which runs a command in a new pod based on the specified container which is assumed to be part of the deployment template. failurePolicy string FailurePolicy specifies what action to take if the hook fails. tagImages array TagImages instructs the deployer to tag the current image referenced under a container onto an image stream tag. 
tagImages[] object TagImageHook is a request to tag the image in a particular container onto an ImageStreamTag. 9.1.6. .spec.strategy.recreateParams.mid.execNewPod Description ExecNewPodHook is a hook implementation which runs a command in a new pod based on the specified container which is assumed to be part of the deployment template. Type object Required command containerName Property Type Description command array (string) Command is the action command and its arguments. containerName string ContainerName is the name of a container in the deployment pod template whose container image will be used for the hook pod's container. env array (EnvVar) Env is a set of environment variables to supply to the hook pod's container. volumes array (string) Volumes is a list of named volumes from the pod template which should be copied to the hook pod. Volumes names not found in pod spec are ignored. An empty list means no volumes will be copied. 9.1.7. .spec.strategy.recreateParams.mid.tagImages Description TagImages instructs the deployer to tag the current image referenced under a container onto an image stream tag. Type array 9.1.8. .spec.strategy.recreateParams.mid.tagImages[] Description TagImageHook is a request to tag the image in a particular container onto an ImageStreamTag. Type object Required containerName to Property Type Description containerName string ContainerName is the name of a container in the deployment config whose image value will be used as the source of the tag. If there is only a single container this value will be defaulted to the name of that container. to ObjectReference To is the target ImageStreamTag to set the container's image onto. 9.1.9. .spec.strategy.recreateParams.post Description LifecycleHook defines a specific deployment lifecycle action. Only one type of action may be specified at any time. Type object Required failurePolicy Property Type Description execNewPod object ExecNewPodHook is a hook implementation which runs a command in a new pod based on the specified container which is assumed to be part of the deployment template. failurePolicy string FailurePolicy specifies what action to take if the hook fails. tagImages array TagImages instructs the deployer to tag the current image referenced under a container onto an image stream tag. tagImages[] object TagImageHook is a request to tag the image in a particular container onto an ImageStreamTag. 9.1.10. .spec.strategy.recreateParams.post.execNewPod Description ExecNewPodHook is a hook implementation which runs a command in a new pod based on the specified container which is assumed to be part of the deployment template. Type object Required command containerName Property Type Description command array (string) Command is the action command and its arguments. containerName string ContainerName is the name of a container in the deployment pod template whose container image will be used for the hook pod's container. env array (EnvVar) Env is a set of environment variables to supply to the hook pod's container. volumes array (string) Volumes is a list of named volumes from the pod template which should be copied to the hook pod. Volumes names not found in pod spec are ignored. An empty list means no volumes will be copied. 9.1.11. .spec.strategy.recreateParams.post.tagImages Description TagImages instructs the deployer to tag the current image referenced under a container onto an image stream tag. Type array 9.1.12. 
.spec.strategy.recreateParams.post.tagImages[] Description TagImageHook is a request to tag the image in a particular container onto an ImageStreamTag. Type object Required containerName to Property Type Description containerName string ContainerName is the name of a container in the deployment config whose image value will be used as the source of the tag. If there is only a single container this value will be defaulted to the name of that container. to ObjectReference To is the target ImageStreamTag to set the container's image onto. 9.1.13. .spec.strategy.recreateParams.pre Description LifecycleHook defines a specific deployment lifecycle action. Only one type of action may be specified at any time. Type object Required failurePolicy Property Type Description execNewPod object ExecNewPodHook is a hook implementation which runs a command in a new pod based on the specified container which is assumed to be part of the deployment template. failurePolicy string FailurePolicy specifies what action to take if the hook fails. tagImages array TagImages instructs the deployer to tag the current image referenced under a container onto an image stream tag. tagImages[] object TagImageHook is a request to tag the image in a particular container onto an ImageStreamTag. 9.1.14. .spec.strategy.recreateParams.pre.execNewPod Description ExecNewPodHook is a hook implementation which runs a command in a new pod based on the specified container which is assumed to be part of the deployment template. Type object Required command containerName Property Type Description command array (string) Command is the action command and its arguments. containerName string ContainerName is the name of a container in the deployment pod template whose container image will be used for the hook pod's container. env array (EnvVar) Env is a set of environment variables to supply to the hook pod's container. volumes array (string) Volumes is a list of named volumes from the pod template which should be copied to the hook pod. Volumes names not found in pod spec are ignored. An empty list means no volumes will be copied. 9.1.15. .spec.strategy.recreateParams.pre.tagImages Description TagImages instructs the deployer to tag the current image referenced under a container onto an image stream tag. Type array 9.1.16. .spec.strategy.recreateParams.pre.tagImages[] Description TagImageHook is a request to tag the image in a particular container onto an ImageStreamTag. Type object Required containerName to Property Type Description containerName string ContainerName is the name of a container in the deployment config whose image value will be used as the source of the tag. If there is only a single container this value will be defaulted to the name of that container. to ObjectReference To is the target ImageStreamTag to set the container's image onto. 9.1.17. .spec.strategy.rollingParams Description RollingDeploymentStrategyParams are the input to the Rolling deployment strategy. Type object Property Type Description intervalSeconds integer IntervalSeconds is the time to wait between polling deployment status after update. If the value is nil, a default will be used. maxSurge IntOrString MaxSurge is the maximum number of pods that can be scheduled above the original number of pods. Value can be an absolute number (ex: 5) or a percentage of total pods at the start of the update (ex: 10%). Absolute number is calculated from percentage by rounding up. This cannot be 0 if MaxUnavailable is 0. By default, 25% is used. 
Example: when this is set to 30%, the new RC can be scaled up by 30% immediately when the rolling update starts. Once old pods have been killed, new RC can be scaled up further, ensuring that total number of pods running at any time during the update is atmost 130% of original pods. maxUnavailable IntOrString MaxUnavailable is the maximum number of pods that can be unavailable during the update. Value can be an absolute number (ex: 5) or a percentage of total pods at the start of update (ex: 10%). Absolute number is calculated from percentage by rounding down. This cannot be 0 if MaxSurge is 0. By default, 25% is used. Example: when this is set to 30%, the old RC can be scaled down by 30% immediately when the rolling update starts. Once new pods are ready, old RC can be scaled down further, followed by scaling up the new RC, ensuring that at least 70% of original number of pods are available at all times during the update. post object LifecycleHook defines a specific deployment lifecycle action. Only one type of action may be specified at any time. pre object LifecycleHook defines a specific deployment lifecycle action. Only one type of action may be specified at any time. timeoutSeconds integer TimeoutSeconds is the time to wait for updates before giving up. If the value is nil, a default will be used. updatePeriodSeconds integer UpdatePeriodSeconds is the time to wait between individual pod updates. If the value is nil, a default will be used. 9.1.18. .spec.strategy.rollingParams.post Description LifecycleHook defines a specific deployment lifecycle action. Only one type of action may be specified at any time. Type object Required failurePolicy Property Type Description execNewPod object ExecNewPodHook is a hook implementation which runs a command in a new pod based on the specified container which is assumed to be part of the deployment template. failurePolicy string FailurePolicy specifies what action to take if the hook fails. tagImages array TagImages instructs the deployer to tag the current image referenced under a container onto an image stream tag. tagImages[] object TagImageHook is a request to tag the image in a particular container onto an ImageStreamTag. 9.1.19. .spec.strategy.rollingParams.post.execNewPod Description ExecNewPodHook is a hook implementation which runs a command in a new pod based on the specified container which is assumed to be part of the deployment template. Type object Required command containerName Property Type Description command array (string) Command is the action command and its arguments. containerName string ContainerName is the name of a container in the deployment pod template whose container image will be used for the hook pod's container. env array (EnvVar) Env is a set of environment variables to supply to the hook pod's container. volumes array (string) Volumes is a list of named volumes from the pod template which should be copied to the hook pod. Volumes names not found in pod spec are ignored. An empty list means no volumes will be copied. 9.1.20. .spec.strategy.rollingParams.post.tagImages Description TagImages instructs the deployer to tag the current image referenced under a container onto an image stream tag. Type array 9.1.21. .spec.strategy.rollingParams.post.tagImages[] Description TagImageHook is a request to tag the image in a particular container onto an ImageStreamTag. 
Type object Required containerName to Property Type Description containerName string ContainerName is the name of a container in the deployment config whose image value will be used as the source of the tag. If there is only a single container this value will be defaulted to the name of that container. to ObjectReference To is the target ImageStreamTag to set the container's image onto. 9.1.22. .spec.strategy.rollingParams.pre Description LifecycleHook defines a specific deployment lifecycle action. Only one type of action may be specified at any time. Type object Required failurePolicy Property Type Description execNewPod object ExecNewPodHook is a hook implementation which runs a command in a new pod based on the specified container which is assumed to be part of the deployment template. failurePolicy string FailurePolicy specifies what action to take if the hook fails. tagImages array TagImages instructs the deployer to tag the current image referenced under a container onto an image stream tag. tagImages[] object TagImageHook is a request to tag the image in a particular container onto an ImageStreamTag. 9.1.23. .spec.strategy.rollingParams.pre.execNewPod Description ExecNewPodHook is a hook implementation which runs a command in a new pod based on the specified container which is assumed to be part of the deployment template. Type object Required command containerName Property Type Description command array (string) Command is the action command and its arguments. containerName string ContainerName is the name of a container in the deployment pod template whose container image will be used for the hook pod's container. env array (EnvVar) Env is a set of environment variables to supply to the hook pod's container. volumes array (string) Volumes is a list of named volumes from the pod template which should be copied to the hook pod. Volumes names not found in pod spec are ignored. An empty list means no volumes will be copied. 9.1.24. .spec.strategy.rollingParams.pre.tagImages Description TagImages instructs the deployer to tag the current image referenced under a container onto an image stream tag. Type array 9.1.25. .spec.strategy.rollingParams.pre.tagImages[] Description TagImageHook is a request to tag the image in a particular container onto an ImageStreamTag. Type object Required containerName to Property Type Description containerName string ContainerName is the name of a container in the deployment config whose image value will be used as the source of the tag. If there is only a single container this value will be defaulted to the name of that container. to ObjectReference To is the target ImageStreamTag to set the container's image onto. 9.1.26. .spec.triggers Description Triggers determine how updates to a DeploymentConfig result in new deployments. If no triggers are defined, a new deployment can only occur as a result of an explicit client update to the DeploymentConfig with a new LatestVersion. If null, defaults to having a config change trigger. Type array 9.1.27. .spec.triggers[] Description DeploymentTriggerPolicy describes a policy for a single trigger that results in a new deployment. Type object Property Type Description imageChangeParams object DeploymentTriggerImageChangeParams represents the parameters to the ImageChange trigger. type string Type of the trigger 9.1.28. .spec.triggers[].imageChangeParams Description DeploymentTriggerImageChangeParams represents the parameters to the ImageChange trigger. 
Type object Required from Property Type Description automatic boolean Automatic means that the detection of a new tag value should result in an image update inside the pod template. containerNames array (string) ContainerNames is used to restrict tag updates to the specified set of container names in a pod. If multiple triggers point to the same containers, the resulting behavior is undefined. Future API versions will make this a validation error. If ContainerNames does not point to a valid container, the trigger will be ignored. Future API versions will make this a validation error. from ObjectReference From is a reference to an image stream tag to watch for changes. From.Name is the only required subfield - if From.Namespace is blank, the namespace of the current deployment trigger will be used. lastTriggeredImage string LastTriggeredImage is the last image to be triggered. 9.1.29. .status Description DeploymentConfigStatus represents the current deployment state. Type object Required latestVersion observedGeneration replicas updatedReplicas availableReplicas unavailableReplicas Property Type Description availableReplicas integer AvailableReplicas is the total number of available pods targeted by this deployment config. conditions array Conditions represents the latest available observations of a deployment config's current state. conditions[] object DeploymentCondition describes the state of a deployment config at a certain point. details object DeploymentDetails captures information about the causes of a deployment. latestVersion integer LatestVersion is used to determine whether the current deployment associated with a deployment config is out of sync. observedGeneration integer ObservedGeneration is the most recent generation observed by the deployment config controller. readyReplicas integer Total number of ready pods targeted by this deployment. replicas integer Replicas is the total number of pods targeted by this deployment config. unavailableReplicas integer UnavailableReplicas is the total number of unavailable pods targeted by this deployment config. updatedReplicas integer UpdatedReplicas is the total number of non-terminated pods targeted by this deployment config that have the desired template spec. 9.1.30. .status.conditions Description Conditions represents the latest available observations of a deployment config's current state. Type array 9.1.31. .status.conditions[] Description DeploymentCondition describes the state of a deployment config at a certain point. Type object Required type status Property Type Description lastTransitionTime Time The last time the condition transitioned from one status to another. lastUpdateTime Time The last time this condition was updated. message string A human readable message indicating details about the transition. reason string The reason for the condition's last transition. status string Status of the condition, one of True, False, Unknown. type string Type of deployment condition. 9.1.32. .status.details Description DeploymentDetails captures information about the causes of a deployment. Type object Required causes Property Type Description causes array Causes are extended data associated with all the causes for creating a new deployment causes[] object DeploymentCause captures information about a particular cause of a deployment. message string Message is the user specified change message, if this deployment was triggered manually by the user 9.1.33. 
.status.details.causes Description Causes are extended data associated with all the causes for creating a new deployment Type array 9.1.34. .status.details.causes[] Description DeploymentCause captures information about a particular cause of a deployment. Type object Required type Property Type Description imageTrigger object DeploymentCauseImageTrigger represents details about the cause of a deployment originating from an image change trigger type string Type of the trigger that resulted in the creation of a new deployment 9.1.35. .status.details.causes[].imageTrigger Description DeploymentCauseImageTrigger represents details about the cause of a deployment originating from an image change trigger Type object Required from Property Type Description from ObjectReference From is a reference to the changed object which triggered a deployment. The field may have the kinds DockerImage, ImageStreamTag, or ImageStreamImage. 9.2. API endpoints The following API endpoints are available: /apis/apps.openshift.io/v1/deploymentconfigs GET : list or watch objects of kind DeploymentConfig /apis/apps.openshift.io/v1/watch/deploymentconfigs GET : watch individual changes to a list of DeploymentConfig. deprecated: use the 'watch' parameter with a list operation instead. /apis/apps.openshift.io/v1/namespaces/{namespace}/deploymentconfigs DELETE : delete collection of DeploymentConfig GET : list or watch objects of kind DeploymentConfig POST : create a DeploymentConfig /apis/apps.openshift.io/v1/watch/namespaces/{namespace}/deploymentconfigs GET : watch individual changes to a list of DeploymentConfig. deprecated: use the 'watch' parameter with a list operation instead. /apis/apps.openshift.io/v1/namespaces/{namespace}/deploymentconfigs/{name} DELETE : delete a DeploymentConfig GET : read the specified DeploymentConfig PATCH : partially update the specified DeploymentConfig PUT : replace the specified DeploymentConfig /apis/apps.openshift.io/v1/watch/namespaces/{namespace}/deploymentconfigs/{name} GET : watch changes to an object of kind DeploymentConfig. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. /apis/apps.openshift.io/v1/namespaces/{namespace}/deploymentconfigs/{name}/status GET : read status of the specified DeploymentConfig PATCH : partially update status of the specified DeploymentConfig PUT : replace status of the specified DeploymentConfig 9.2.1. /apis/apps.openshift.io/v1/deploymentconfigs HTTP method GET Description list or watch objects of kind DeploymentConfig Table 9.1. HTTP responses HTTP code Response body 200 - OK DeploymentConfigList schema 401 - Unauthorized Empty 9.2.2. /apis/apps.openshift.io/v1/watch/deploymentconfigs HTTP method GET Description watch individual changes to a list of DeploymentConfig. deprecated: use the 'watch' parameter with a list operation instead. Table 9.2. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty 9.2.3. /apis/apps.openshift.io/v1/namespaces/{namespace}/deploymentconfigs HTTP method DELETE Description delete collection of DeploymentConfig Table 9.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 9.4.
HTTP responses HTTP code Response body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind DeploymentConfig Table 9.5. HTTP responses HTTP code Response body 200 - OK DeploymentConfigList schema 401 - Unauthorized Empty HTTP method POST Description create a DeploymentConfig Table 9.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.7. Body parameters Parameter Type Description body DeploymentConfig schema Table 9.8. HTTP responses HTTP code Response body 200 - OK DeploymentConfig schema 201 - Created DeploymentConfig schema 202 - Accepted DeploymentConfig schema 401 - Unauthorized Empty 9.2.4. /apis/apps.openshift.io/v1/watch/namespaces/{namespace}/deploymentconfigs HTTP method GET Description watch individual changes to a list of DeploymentConfig. deprecated: use the 'watch' parameter with a list operation instead. Table 9.9. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty 9.2.5. /apis/apps.openshift.io/v1/namespaces/{namespace}/deploymentconfigs/{name} Table 9.10. Global path parameters Parameter Type Description name string name of the DeploymentConfig HTTP method DELETE Description delete a DeploymentConfig Table 9.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 9.12. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified DeploymentConfig Table 9.13. HTTP responses HTTP code Response body 200 - OK DeploymentConfig schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified DeploymentConfig Table 9.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request.
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.15. HTTP responses HTTP code Response body 200 - OK DeploymentConfig schema 201 - Created DeploymentConfig schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified DeploymentConfig Table 9.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.17. Body parameters Parameter Type Description body DeploymentConfig schema Table 9.18. HTTP responses HTTP code Response body 200 - OK DeploymentConfig schema 201 - Created DeploymentConfig schema 401 - Unauthorized Empty 9.2.6. /apis/apps.openshift.io/v1/watch/namespaces/{namespace}/deploymentconfigs/{name} Table 9.19. Global path parameters Parameter Type Description name string name of the DeploymentConfig HTTP method GET Description watch changes to an object of kind DeploymentConfig. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 9.20. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty 9.2.7. /apis/apps.openshift.io/v1/namespaces/{namespace}/deploymentconfigs/{name}/status Table 9.21. Global path parameters Parameter Type Description name string name of the DeploymentConfig HTTP method GET Description read status of the specified DeploymentConfig Table 9.22.
HTTP responses HTTP code Response body 200 - OK DeploymentConfig schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified DeploymentConfig Table 9.23. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.24. HTTP responses HTTP code Response body 200 - OK DeploymentConfig schema 201 - Created DeploymentConfig schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified DeploymentConfig Table 9.25. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.26. Body parameters Parameter Type Description body DeploymentConfig schema Table 9.27. HTTP responses HTTP code Response body 200 - OK DeploymentConfig schema 201 - Created DeploymentConfig schema 401 - Unauthorized Empty
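These endpoints can be exercised with any HTTP client that presents a valid bearer token. The following is a minimal sketch rather than part of the reference: the cluster API server URL, the myproject namespace, and the DeploymentConfig name example are placeholders, and the token is taken from an existing oc login session with oc whoami -t.

$ TOKEN=$(oc whoami -t)
$ API=https://api.cluster.example.com:6443    # placeholder API server URL

# list or watch objects of kind DeploymentConfig in the myproject namespace
$ curl -k -H "Authorization: Bearer $TOKEN" \
    "$API/apis/apps.openshift.io/v1/namespaces/myproject/deploymentconfigs"

# partially update the spec of the DeploymentConfig named example
$ curl -k -X PATCH \
    -H "Authorization: Bearer $TOKEN" \
    -H "Content-Type: application/merge-patch+json" \
    -d '{"spec":{"replicas":2}}' \
    "$API/apis/apps.openshift.io/v1/namespaces/myproject/deploymentconfigs/example?fieldValidation=Warn"

A 200 - OK response carries the DeploymentConfig or DeploymentConfigList schema described above; 401 - Unauthorized indicates the token was missing or rejected.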
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message.
Chapter 22. OpenLMI | Chapter 22. OpenLMI The Open Linux Management Infrastructure , commonly abbreviated as OpenLMI , is a common infrastructure for the management of Linux systems. It builds on top of existing tools and serves as an abstraction layer in order to hide much of the complexity of the underlying system from system administrators. OpenLMI is distributed with a set of services that can be accessed locally or remotely and provides multiple language bindings, standard APIs, and standard scripting interfaces that can be used to manage and monitor hardware, operating systems, and system services. 22.1. About OpenLMI OpenLMI is designed to provide a common management interface to production servers running the Red Hat Enterprise Linux system on both physical and virtual machines. It consists of the following three components: System management agents - these agents are installed on a managed system and implement an object model that is presented to a standard object broker. The initial agents implemented in OpenLMI include storage configuration and network configuration, but later work will address additional elements of system management. The system management agents are commonly referred to as Common Information Model providers or CIM providers . A standard object broker - the object broker manages system management agents and provides an interface to them. The standard object broker is also known as a CIM Object Monitor or CIMOM . Client applications and scripts - the client applications and scripts call the system management agents through the standard object broker. The OpenLMI project complements existing management initiatives by providing a low-level interface that can be used by scripts or system management consoles. Interfaces distributed with OpenLMI include C, C++, Python, Java, and an interactive command line client, and all of them offer the same full access to the capabilities implemented in each agent. This ensures that you always have access to exactly the same capabilities no matter which programming interface you decide to use. 22.1.1. Main Features The following are key benefits of installing and using OpenLMI on your system: OpenLMI provides a standard interface for configuration, management, and monitoring of your local and remote systems. It allows you to configure, manage, and monitor production servers running on both physical and virtual machines. It is distributed with a collection of CIM providers that allow you to configure, manage, and monitor storage devices and complex networks. It allows you to call system management functions from C, C++, Python, and Java programs, and includes LMIShell, which provides a command line interface. It is free software based on open industry standards. 22.1.2. Management Capabilities Key capabilities of OpenLMI include the management of storage devices, networks, system services, user accounts, hardware and software configuration, power management, and interaction with Active Directory. For a complete list of CIM providers that are distributed with Red Hat Enterprise Linux 7, see Table 22.1, "Available CIM Providers" . Table 22.1. Available CIM Providers Package Name Description openlmi-account A CIM provider for managing user accounts. openlmi-logicalfile A CIM provider for reading files and directories. openlmi-networking A CIM provider for network management. openlmi-powermanagement A CIM provider for power management. openlmi-service A CIM provider for managing system services. 
openlmi-storage A CIM provider for storage management. openlmi-fan A CIM provider for controlling computer fans. openlmi-hardware A CIM provider for retrieving hardware information. openlmi-realmd A CIM provider for configuring realmd. openlmi-software [a] A CIM provider for software management. [a] In Red Hat Enterprise Linux 7, the OpenLMI Software provider is included as a Technology Preview . This provider is fully functional, but has a known performance scaling issue where listing large numbers of software packages may consume excessive amount of memory and time. To work around this issue, adjust package searches to return as few packages as possible. 22.2. Installing OpenLMI OpenLMI is distributed as a collection of RPM packages that include the CIMOM, individual CIM providers, and client applications. This allows you distinguish between a managed and client system and install only those components you need. 22.2.1. Installing OpenLMI on a Managed System A managed system is the system you intend to monitor and manage by using the OpenLMI client tools. To install OpenLMI on a managed system, complete the following steps: Install the tog-pegasus package by typing the following at a shell prompt as root : This command installs the OpenPegasus CIMOM and all its dependencies to the system and creates a user account for the pegasus user. Install required CIM providers by running the following command as root : This command installs the CIM providers for storage, network, service, account, and power management. For a complete list of CIM providers distributed with Red Hat Enterprise Linux 7, see Table 22.1, "Available CIM Providers" . Edit the /etc/Pegasus/access.conf configuration file to customize the list of users that are allowed to connect to the OpenPegasus CIMOM. By default, only the pegasus user is allowed to access the CIMOM both remotely and locally. To activate this user account, run the following command as root to set the user's password: Start the OpenPegasus CIMOM by activating the tog-pegasus.service unit. To activate the tog-pegasus.service unit in the current session, type the following at a shell prompt as root : To configure the tog-pegasus.service unit to start automatically at boot time, type as root : If you intend to interact with the managed system from a remote machine, enable TCP communication on port 5989 ( wbem-https ). To open this port in the current session, run the following command as root : To open port 5989 for TCP communication permanently, type as root : You can now connect to the managed system and interact with it by using the OpenLMI client tools as described in Section 22.4, "Using LMIShell" . If you intend to perform OpenLMI operations directly on the managed system, also complete the steps described in Section 22.2.2, "Installing OpenLMI on a Client System" . 22.2.2. Installing OpenLMI on a Client System A client system is the system from which you intend to interact with the managed system. In a typical scenario, the client system and the managed system are installed on two separate machines, but you can also install the client tools on the managed system and interact with it directly. To install OpenLMI on a client system, complete the following steps: Install the openlmi-tools package by typing the following at a shell prompt as root : This command installs LMIShell, an interactive client and interpreter for accessing CIM objects provided by OpenPegasus, and all its dependencies to the system. 
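The package, service, and firewall commands referenced in the installation steps above follow the standard RHEL 7 tooling. The following is a rough, hedged sketch only (it assumes yum, systemd, and firewalld, and that each command is run as root on the machine named in the corresponding step); adjust package and port names to match your environment:

# Managed system
~]# yum install tog-pegasus
~]# yum install openlmi-{storage,networking,service,account,powermanagement}
~]# passwd pegasus                              # activate the pegasus account listed in /etc/Pegasus/access.conf
~]# systemctl start tog-pegasus.service
~]# systemctl enable tog-pegasus.service
~]# firewall-cmd --add-port 5989/tcp            # open wbem-https for the current session
~]# firewall-cmd --permanent --add-port 5989/tcp

# Client system
~]# yum install openlmi-tools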
Configure SSL certificates for OpenPegasus as described in Section 22.3, "Configuring SSL Certificates for OpenPegasus" . You can now use the LMIShell client to interact with the managed system as described in Section 22.4, "Using LMIShell" . 22.3. Configuring SSL Certificates for OpenPegasus OpenLMI uses the Web-Based Enterprise Management (WBEM) protocol that functions over an HTTP transport layer. Standard HTTP Basic authentication is performed in this protocol, which means that the user name and password are transmitted alongside the requests. Configuring the OpenPegasus CIMOM to use HTTPS for communication is necessary to ensure secure authentication. A Secure Sockets Layer (SSL) or Transport Layer Security (TLS) certificate is required on the managed system to establish an encrypted channel. There are two ways of managing SSL/TLS certificates on a system: Self-signed certificates require less infrastructure to use, but are more difficult to deploy to clients and manage securely. Authority-signed certificates are easier to deploy to clients once they are set up, but may require a greater initial investment. When using an authority-signed certificate, it is necessary to configure a trusted certificate authority on the client systems. The authority can then be used for signing all of the managed systems' CIMOM certificates. Certificates can also be part of a certificate chain, so the certificate used for signing the managed systems' certificates may in turn be signed by another, higher authority (such as Verisign, CAcert, RSA and many others). The default certificate and trust store locations on the file system are listed in Table 22.2, "Certificate and Trust Store Locations" . Table 22.2. Certificate and Trust Store Locations Configuration Option Location Description sslCertificateFilePath /etc/Pegasus/server.pem Public certificate of the CIMOM. sslKeyFilePath /etc/Pegasus/file.pem Private key known only to the CIMOM. sslTrustStore /etc/Pegasus/client.pem The file or directory providing the list of trusted certificate authorities. Important If you modify any of the files mentioned in Table 22.2, "Certificate and Trust Store Locations" , restart the tog-pegasus service to make sure it recognizes the new certificates. To restart the service, type the following at a shell prompt as root : For more information on how to manage system services in Red Hat Enterprise Linux 7, see Chapter 10, Managing Services with systemd . 22.3.1. Managing Self-signed Certificates A self-signed certificate uses its own private key to sign itself and it is not connected to any chain of trust. On a managed system, if certificates have not been provided by the administrator prior to the first time that the tog-pegasus service is started, a set of self-signed certificates will be automatically generated using the system's primary host name as the certificate subject. Important The automatically generated self-signed certificates are valid by default for 10 years, but they have no automatic-renewal capability. Any modification to these certificates will require manually creating new certificates following guidelines provided by the OpenSSL or Mozilla NSS documentation on the subject. To configure client systems to trust the self-signed certificate, complete the following steps: Copy the /etc/Pegasus/server.pem certificate from the managed system to the /etc/pki/ca-trust/source/anchors/ directory on the client system. 
To do so, type the following at a shell prompt as root : Replace hostname with the host name of the managed system. Note that this command only works if the sshd service is running on the managed system and is configured to allow the root user to log in to the system over the SSH protocol. For more information on how to install and configure the sshd service and use the scp command to transfer files over the SSH protocol, see Chapter 12, OpenSSH . Verify the integrity of the certificate on the client system by comparing its check sum with the check sum of the original file. To calculate the check sum of the /etc/Pegasus/server.pem file on the managed system, run the following command as root on that system: To calculate the check sum of the /etc/pki/ca-trust/source/anchors/pegasus- hostname .pem file on the client system, run the following command on this system: Replace hostname with the host name of the managed system. Update the trust store on the client system by running the following command as root : 22.3.2. Managing Authority-signed Certificates with Identity Management (Recommended) The Identity Management feature of Red Hat Enterprise Linux provides a domain controller which simplifies the management of SSL certificates within systems joined to the domain. Among others, the Identity Management server provides an embedded Certificate Authority. See the Red Hat Enterprise Linux 7 Linux Domain Identity, Authentication, and Policy Guide or the FreeIPA documentation for information on how to join the client and managed systems to the domain. It is necessary to register the managed system to Identity Management; for client systems the registration is optional. The following steps are required on the managed system: Install the ipa-client package and register the system to Identity Management as described in the Red Hat Enterprise Linux 7 Linux Domain Identity, Authentication, and Policy Guide . Copy the Identity Management signing certificate to the trusted store by typing the following command as root : Update the trust store by running the following command as root : Register Pegasus as a service in the Identity Management domain by running the following command as a privileged domain user: Replace hostname with the host name of the managed system. This command can be run from any system in the Identity Management domain that has the ipa-admintools package installed. It creates a service entry in Identity Management that can be used to generate signed SSL certificates. Back up the PEM files located in the /etc/Pegasus/ directory (recommended). Retrieve the signed certificate by running the following command as root : Replace hostname with the host name of the managed system. The certificate and key files are now kept in proper locations. The certmonger daemon installed on the managed system by the ipa-client-install script ensures that the certificate is kept up-to-date and renewed as necessary. For more information, see the Red Hat Enterprise Linux 7 Linux Domain Identity, Authentication, and Policy Guide . To register the client system and update the trust store, follow the steps below. Install the ipa-client package and register the system to Identity Management as described in the Red Hat Enterprise Linux 7 Linux Domain Identity, Authentication, and Policy Guide . 
Copy the Identity Management signing certificate to the trusted store by typing the following command as root : Update the trust store by running the following command as root : If the client system is not meant to be registered in Identity Management, complete the following steps to update the trust store. Copy the /etc/ipa/ca.crt file securely from any other system joined to the same Identity Management domain to the trusted store /etc/pki/ca-trust/source/anchors/ directory as root . Update the trust store by running the following command as root : 22.3.3. Managing Authority-signed Certificates Manually Managing authority-signed certificates with other mechanisms than Identity Management requires more manual configuration. It is necessary to ensure that all of the clients trust the certificate of the authority that will be signing the managed system certificates: If a certificate authority is trusted by default, it is not necessary to perform any particular steps to accomplish this. If the certificate authority is not trusted by default, the certificate has to be imported on the client and managed systems. Copy the certificate to the trusted store by typing the following command as root : Update the trust store by running the following command as root : On the managed system, complete the following steps: Create a new SSL configuration file /etc/Pegasus/ssl.cnf to store information about the certificate. The contents of this file must be similar to the following example: Replace hostname with the fully qualified domain name of the managed system. Generate a private key on the managed system by using the following command as root : Generate a certificate signing request (CSR) by running this command as root : Send the /etc/Pegasus/server.csr file to the certificate authority for signing. The detailed procedure of submitting the file depends on the particular certificate authority. When the signed certificate is received from the certificate authority, save it as /etc/Pegasus/server.pem . Copy the certificate of the trusted authority to the Pegasus trust store to make sure that Pegasus is capable of trusting its own certificate by running as root : After accomplishing all the described steps, the clients that trust the signing authority are able to successfully communicate with the managed server's CIMOM. Important Unlike the Identity Management solution, if the certificate expires and needs to be renewed, all of the described manual steps have to be carried out again. It is recommended to renew the certificates before they expire. 22.4. Using LMIShell LMIShell is an interactive client and non-interactive interpreter that can be used to access CIM objects provided by the OpenPegasus CIMOM. It is based on the Python interpreter, but also implements additional functions and classes for interacting with CIM objects. 22.4.1. Starting, Using, and Exiting LMIShell Similarly to the Python interpreter, you can use LMIShell either as an interactive client, or as a non-interactive interpreter for LMIShell scripts. Starting LMIShell in Interactive Mode To start the LMIShell interpreter in interactive mode, run the lmishell command with no additional arguments: By default, when LMIShell attempts to establish a connection with a CIMOM, it validates the server-side certificate against the Certification Authorities trust store. 
To disable this validation, run the lmishell command with the --noverify or -n command line option: Using Tab Completion When running in interactive mode, the LMIShell interpreter allows you press the Tab key to complete basic programming structures and CIM objects, including namespaces, classes, methods, and object properties. Browsing History By default, LMIShell stores all commands you type at the interactive prompt in the ~/.lmishell_history file. This allows you to browse the command history and re-use already entered lines in interactive mode without the need to type them at the prompt again. To move backward in the command history, press the Up Arrow key or the Ctrl + p key combination. To move forward in the command history, press the Down Arrow key or the Ctrl + n key combination. LMIShell also supports an incremental reverse search. To look for a particular line in the command history, press Ctrl + r and start typing any part of the command. For example: To clear the command history, use the clear_history() function as follows: You can configure the number of lines that are stored in the command history by changing the value of the history_length option in the ~/.lmishellrc configuration file. In addition, you can change the location of the history file by changing the value of the history_file option in this configuration file. For example, to set the location of the history file to ~/.lmishell_history and configure LMIShell to store the maximum of 1000 lines in it, add the following lines to the ~/.lmishellrc file: Handling Exceptions By default, the LMIShell interpreter handles all exceptions and uses return values. To disable this behavior in order to handle all exceptions in the code, use the use_exceptions() function as follows: To re-enable the automatic exception handling, use: You can permanently disable the exception handling by changing the value of the use_exceptions option in the ~/.lmishellrc configuration file to True : Configuring a Temporary Cache With the default configuration, LMIShell connection objects use a temporary cache for storing CIM class names and CIM classes in order to reduce network communication. To clear this temporary cache, use the clear_cache() method as follows: Replace object_name with the name of a connection object. To disable the temporary cache for a particular connection object, use the use_cache() method as follows: To enable it again, use: You can permanently disable the temporary cache for connection objects by changing the value of the use_cache option in the ~/.lmishellrc configuration file to False : Exiting LMIShell To terminate the LMIShell interpreter and return to the shell prompt, press the Ctrl + d key combination or issue the quit() function as follows: Running an LMIShell Script To run an LMIShell script, run the lmishell command as follows: Replace file_name with the name of the script. To inspect an LMIShell script after its execution, also specify the --interact or -i command line option: The preferred file extension of LMIShell scripts is .lmi . 22.4.2. Connecting to a CIMOM LMIShell allows you to connect to a CIMOM that is running either locally on the same system, or on a remote machine accessible over the network. 
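To recap Section 22.4.1 before moving on to connections, the interpreter invocations and the ~/.lmishellrc options discussed above look roughly as follows. This is a hedged sketch: the script name and the option values shown are examples, not requirements.

$ lmishell                  # start the interpreter in interactive mode
$ lmishell --noverify       # skip server-side certificate validation
$ lmishell script.lmi -i    # run a script, then stay at the interactive prompt

# ~/.lmishellrc
history_file = "~/.lmishell_history"
history_length = 1000
use_exceptions = True       # permanently disable automatic exception handling, as described above
use_cache = False           # permanently disable the temporary cache for connection objects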
Connecting to a Remote CIMOM To access CIM objects provided by a remote CIMOM, create a connection object by using the connect() function as follows: Replace host_name with the host name of the managed system, user_name with the name of a user that is allowed to connect to the OpenPegasus CIMOM running on that system, and password with the user's password. If the password is omitted, LMIShell prompts the user to enter it. The function returns an LMIConnection object. Example 22.1. Connecting to a Remote CIMOM To connect to the OpenPegasus CIMOM running on server.example.com as user pegasus , type the following at the interactive prompt: Connecting to a Local CIMOM LMIShell allows you to connect to a local CIMOM by using a Unix socket. For this type of connection, you must run the LMIShell interpreter as the root user and the /var/run/tog-pegasus/cimxml.socket socket must exist. To access CIM objects provided by a local CIMOM, create a connection object by using the connect() function as follows: Replace host_name with localhost , 127.0.0.1 , or ::1 . The function returns an LMIConnection object or None . Example 22.2. Connecting to a Local CIMOM To connect to the OpenPegasus CIMOM running on localhost as the root user, type the following at the interactive prompt: Verifying a Connection to a CIMOM The connect() function returns either an LMIConnection object, or None if the connection could not be established. In addition, when the connect() function fails to establish a connection, it prints an error message to standard error output. To verify that a connection to a CIMOM has been established successfully, use the isinstance() function as follows: Replace object_name with the name of the connection object. This function returns True if object_name is an LMIConnection object, or False otherwise. Example 22.3. Verifying a Connection to a CIMOM To verify that the c variable created in Example 22.1, "Connecting to a Remote CIMOM" contains an LMIConnection object, type the following at the interactive prompt: Alternatively, you can verify that c is not None : 22.4.3. Working with Namespaces LMIShell namespaces provide a natural means of organizing available classes and serve as a hierarchic access point to other namespaces and classes. The root namespace is the first entry point of a connection object. Listing Available Namespaces To list all available namespaces, use the print_namespaces() method as follows: Replace object_name with the name of the object to inspect. This method prints available namespaces to standard output. To get a list of available namespaces, access the object attribute namespaces : This returns a list of strings. Example 22.4. Listing Available Namespaces To inspect the root namespace object of the c connection object created in Example 22.1, "Connecting to a Remote CIMOM" and list all available namespaces, type the following at the interactive prompt: To assign a list of these namespaces to a variable named root_namespaces , type: Accessing Namespace Objects To access a particular namespace object, use the following syntax: Replace object_name with the name of the object to inspect and namespace_name with the name of the namespace to access. This returns an LMINamespace object. Example 22.5. Accessing Namespace Objects To access the cimv2 namespace of the c connection object created in Example 22.1, "Connecting to a Remote CIMOM" and assign it to a variable named ns , type the following at the interactive prompt: 22.4.4. 
Working with Classes LMIShell classes represent classes provided by a CIMOM. You can access and list their properties, methods, instances, instance names, and ValueMap properties, print their documentation strings, and create new instances and instance names. Listing Available Classes To list all available classes in a particular namespace, use the print_classes() method as follows: Replace namespace_object with the namespace object to inspect. This method prints available classes to standard output. To get a list of available classes, use the classes() method: This method returns a list of strings. Example 22.6. Listing Available Classes To inspect the ns namespace object created in Example 22.5, "Accessing Namespace Objects" and list all available classes, type the following at the interactive prompt: To assign a list of these classes to a variable named cimv2_classes , type: Accessing Class Objects To access a particular class object that is provided by the CIMOM, use the following syntax: Replace namespace_object with the name of the namespace object to inspect and class_name with the name of the class to access. Example 22.7. Accessing Class Objects To access the LMI_IPNetworkConnection class of the ns namespace object created in Example 22.5, "Accessing Namespace Objects" and assign it to a variable named cls , type the following at the interactive prompt: Examining Class Objects All class objects store information about their name and the namespace they belong to, as well as detailed class documentation. To get the name of a particular class object, use the following syntax: Replace class_object with the name of the class object to inspect. This returns a string representation of the object name. To get information about the namespace a class object belongs to, use: This returns a string representation of the namespace. To display detailed class documentation, use the doc() method as follows: Example 22.8. Examining Class Objects To inspect the cls class object created in Example 22.7, "Accessing Class Objects" and display its name and corresponding namespace, type the following at the interactive prompt: To access class documentation, type: Listing Available Methods To list all available methods of a particular class object, use the print_methods() method as follows: Replace class_object with the name of the class object to inspect. This method prints available methods to standard output. To get a list of available methods, use the methods() method: This method returns a list of strings. Example 22.9. Listing Available Methods To inspect the cls class object created in Example 22.7, "Accessing Class Objects" and list all available methods, type the following at the interactive prompt: To assign a list of these methods to a variable named service_methods , type: Listing Available Properties To list all available properties of a particular class object, use the print_properties() method as follows: Replace class_object with the name of the class object to inspect. This method prints available properties to standard output. To get a list of available properties, use the properties() method: This method returns a list of strings. Example 22.10. 
Listing Available Properties To inspect the cls class object created in Example 22.7, "Accessing Class Objects" and list all available properties, type the following at the interactive prompt: To assign a list of these classes to a variable named service_properties , type: Listing and Viewing ValueMap Properties CIM classes may contain ValueMap properties in their Managed Object Format ( MOF ) definition. ValueMap properties contain constant values, which may be useful when calling methods or checking returned values. To list all available ValueMap properties of a particular class object, use the print_valuemap_properties() method as follows: Replace class_object with the name of the class object to inspect. This method prints available ValueMap properties to standard output: To get a list of available ValueMap properties, use the valuemap_properties() method: This method returns a list of strings. Example 22.11. Listing ValueMap Properties To inspect the cls class object created in Example 22.7, "Accessing Class Objects" and list all available ValueMap properties, type the following at the interactive prompt: To assign a list of these ValueMap properties to a variable named service_valuemap_properties , type: To access a particular ValueMap property, use the following syntax: Replace valuemap_property with the name of the ValueMap property to access. To list all available constant values, use the print_values() method as follows: This method prints available named constant values to standard output. You can also get a list of available constant values by using the values() method: This method returns a list of strings. Example 22.12. Accessing ValueMap Properties Example 22.11, "Listing ValueMap Properties" mentions a ValueMap property named RequestedState . To inspect this property and list available constant values, type the following at the interactive prompt: To assign a list of these constant values to a variable named requested_state_values , type: To access a particular constant value, use the following syntax: Replace constant_value_name with the name of the constant value. Alternatively, you can use the value() method as follows: To determine the name of a particular constant value, use the value_name() method: This method returns a string. Example 22.13. Accessing Constant Values Example 22.12, "Accessing ValueMap Properties" shows that the RequestedState property provides a constant value named Reset . To access this named constant value, type the following at the interactive prompt: To determine the name of this constant value, type: Fetching a CIMClass Object Many class methods do not require access to a CIMClass object, which is why LMIShell only fetches this object from the CIMOM when a called method actually needs it. To fetch the CIMClass object manually, use the fetch() method as follows: Replace class_object with the name of the class object. Note that methods that require access to a CIMClass object fetch it automatically. 22.4.5. Working with Instances LMIShell instances represent instances provided by a CIMOM. You can get and set their properties, list and call their methods, print their documentation strings, get a list of associated or association objects, push modified objects to the CIMOM, and delete individual instances from the CIMOM. Accessing Instances To get a list of all available instances of a particular class object, use the instances() method as follows: Replace class_object with the name of the class object to inspect. 
This method returns a list of LMIInstance objects. To access the first instance of a class object, use the first_instance() method: This method returns an LMIInstance object. In addition to listing all instances or returning the first one, both instances() and first_instance() support an optional argument to allow you to filter the results: Replace criteria with a dictionary consisting of key-value pairs, where keys represent instance properties and values represent required values of these properties. Example 22.14. Accessing Instances To find the first instance of the cls class object created in Example 22.7, "Accessing Class Objects" that has the ElementName property equal to eth0 and assign it to a variable named device , type the following at the interactive prompt: Examining Instances All instance objects store information about their class name and the namespace they belong to, as well as detailed documentation about their properties and values. In addition, instance objects allow you to retrieve a unique identification object. To get the class name of a particular instance object, use the following syntax: Replace instance_object with the name of the instance object to inspect. This returns a string representation of the class name. To get information about the namespace an instance object belongs to, use: This returns a string representation of the namespace. To retrieve a unique identification object for an instance object, use: This returns an LMIInstanceName object. Finally, to display detailed documentation, use the doc() method as follows: Example 22.15. Examining Instances To inspect the device instance object created in Example 22.14, "Accessing Instances" and display its class name and the corresponding namespace, type the following at the interactive prompt: To access instance object documentation, type: Creating New Instances Certain CIM providers allow you to create new instances of specific classes objects. To create a new instance of a class object, use the create_instance() method as follows: Replace class_object with the name of the class object and properties with a dictionary that consists of key-value pairs, where keys represent instance properties and values represent property values. This method returns an LMIInstance object. Example 22.16. Creating New Instances The LMI_Group class represents system groups and the LMI_Account class represents user accounts on the managed system. To use the ns namespace object created in Example 22.5, "Accessing Namespace Objects" , create instances of these two classes for the system group named pegasus and the user named lmishell-user , and assign them to variables named group and user , type the following at the interactive prompt: To get an instance of the LMI_Identity class for the lmishell-user user, type: The LMI_MemberOfGroup class represents system group membership. To use the LMI_MemberOfGroup class to add the lmishell-user to the pegasus group, create a new instance of this class as follows: Deleting Individual Instances To delete a particular instance from the CIMOM, use the delete() method as follows: Replace instance_object with the name of the instance object to delete. This method returns a boolean. Note that after deleting an instance, its properties and methods become inaccessible. Example 22.17. Deleting Individual Instances The LMI_Account class represents user accounts on the managed system. 
To use the ns namespace object created in Example 22.5, "Accessing Namespace Objects" , create an instance of the LMI_Account class for the user named lmishell-user , and assign it to a variable named user , type the following at the interactive prompt: To delete this instance and remove the lmishell-user from the system, type: Listing and Accessing Available Properties To list all available properties of a particular instance object, use the print_properties() method as follows: Replace instance_object with the name of the instance object to inspect. This method prints available properties to standard output. To get a list of available properties, use the properties() method: This method returns a list of strings. Example 22.18. Listing Available Properties To inspect the device instance object created in Example 22.14, "Accessing Instances" and list all available properties, type the following at the interactive prompt: To assign a list of these properties to a variable named device_properties , type: To get the current value of a particular property, use the following syntax: Replace property_name with the name of the property to access. To modify the value of a particular property, assign a value to it as follows: Replace value with the new value of the property. Note that in order to propagate the change to the CIMOM, you must also execute the push() method: This method returns a three-item tuple consisting of a return value, return value parameters, and an error string. Example 22.19. Accessing Individual Properties To inspect the device instance object created in Example 22.14, "Accessing Instances" and display the value of the property named SystemName , type the following at the interactive prompt: Listing and Using Available Methods To list all available methods of a particular instance object, use the print_methods() method as follows: Replace instance_object with the name of the instance object to inspect. This method prints available methods to standard output. To get a list of available methods, use the method() method: This method returns a list of strings. Example 22.20. Listing Available Methods To inspect the device instance object created in Example 22.14, "Accessing Instances" and list all available methods, type the following at the interactive prompt: To assign a list of these methods to a variable named network_device_methods , type: To call a particular method, use the following syntax: Replace instance_object with the name of the instance object to use, method_name with the name of the method to call, parameter with the name of the parameter to set, and value with the value of this parameter. Methods return a three-item tuple consisting of a return value, return value parameters, and an error string. Important LMIInstance objects do not automatically refresh their contents (properties, methods, qualifiers, and so on). To do so, use the refresh() method as described below. Example 22.21. Using Methods The PG_ComputerSystem class represents the system. To create an instance of this class by using the ns namespace object created in Example 22.5, "Accessing Namespace Objects" and assign it to a variable named sys , type the following at the interactive prompt: The LMI_AccountManagementService class implements methods that allow you to manage users and groups in the system. 
To create an instance of this class and assign it to a variable named acc , type: To create a new user named lmishell-user in the system, use the CreateAccount() method as follows: LMIShell support synchronous method calls: when you use a synchronous method, LMIShell waits for the corresponding Job object to change its state to "finished" and then returns the return parameters of this job. LMIShell is able to perform a synchronous method call if the given method returns an object of one of the following classes: LMI_StorageJob LMI_SoftwareInstallationJob LMI_NetworkJob LMIShell first tries to use indications as the waiting method. If it fails, it uses a polling method instead. To perform a synchronous method call, use the following syntax: Replace instance_object with the name of the instance object to use, method_name with the name of the method to call, parameter with the name of the parameter to set, and value with the value of this parameter. All synchronous methods have the Sync prefix in their name and return a three-item tuple consisting of the job's return value, job's return value parameters, and job's error string. You can also force LMIShell to use only polling method. To do so, specify the PreferPolling parameter as follows: Listing and Viewing ValueMap Parameters CIM methods may contain ValueMap parameters in their Managed Object Format ( MOF ) definition. ValueMap parameters contain constant values. To list all available ValueMap parameters of a particular method, use the print_valuemap_parameters() method as follows: Replace instance_object with the name of the instance object and method_name with the name of the method to inspect. This method prints available ValueMap parameters to standard output. To get a list of available ValueMap parameters, use the valuemap_parameters() method: This method returns a list of strings. Example 22.22. Listing ValueMap Parameters To inspect the acc instance object created in Example 22.21, "Using Methods" and list all available ValueMap parameters of the CreateAccount() method, type the following at the interactive prompt: To assign a list of these ValueMap parameters to a variable named create_account_parameters , type: To access a particular ValueMap parameter, use the following syntax: Replace valuemap_parameter with the name of the ValueMap parameter to access. To list all available constant values, use the print_values() method as follows: This method prints available named constant values to standard output. You can also get a list of available constant values by using the values() method: This method returns a list of strings. Example 22.23. Accessing ValueMap Parameters Example 22.22, "Listing ValueMap Parameters" mentions a ValueMap parameter named CreateAccount . To inspect this parameter and list available constant values, type the following at the interactive prompt: To assign a list of these constant values to a variable named create_account_values , type: To access a particular constant value, use the following syntax: Replace constant_value_name with the name of the constant value. Alternatively, you can use the value() method as follows: To determine the name of a particular constant value, use the value_name() method: This method returns a string. Example 22.24. Accessing Constant Values Example 22.23, "Accessing ValueMap Parameters" shows that the CreateAccount ValueMap parameter provides a constant value named Failed . 
To access this named constant value, type the following at the interactive prompt: To determine the name of this constant value, type: Refreshing Instance Objects Local objects used by LMIShell, which represent CIM objects at CIMOM side, can get outdated, if such objects change while working with LMIShell's ones. To update the properties and methods of a particular instance object, use the refresh() method as follows: Replace instance_object with the name of the object to refresh. This method returns a three-item tuple consisting of a return value, return value parameter, and an error string. Example 22.25. Refreshing Instance Objects To update the properties and methods of the device instance object created in Example 22.14, "Accessing Instances" , type the following at the interactive prompt: Displaying MOF Representation To display the Managed Object Format ( MOF ) representation of an instance object, use the tomof() method as follows: Replace instance_object with the name of the instance object to inspect. This method prints the MOF representation of the object to standard output. Example 22.26. Displaying MOF Representation To display the MOF representation of the device instance object created in Example 22.14, "Accessing Instances" , type the following at the interactive prompt: 22.4.6. Working with Instance Names LMIShell instance names are objects that hold a set of primary keys and their values. This type of an object exactly identifies an instance. Accessing Instance Names CIMInstance objects are identified by CIMInstanceName objects. To get a list of all available instance name objects, use the instance_names() method as follows: Replace class_object with the name of the class object to inspect. This method returns a list of LMIInstanceName objects. To access the first instance name object of a class object, use the first_instance_name() method: This method returns an LMIInstanceName object. In addition to listing all instance name objects or returning the first one, both instance_names() and first_instance_name() support an optional argument to allow you to filter the results: Replace criteria with a dictionary consisting of key-value pairs, where keys represent key properties and values represent required values of these key properties. Example 22.27. Accessing Instance Names To find the first instance name of the cls class object created in Example 22.7, "Accessing Class Objects" that has the Name key property equal to eth0 and assign it to a variable named device_name , type the following at the interactive prompt: Examining Instance Names All instance name objects store information about their class name and the namespace they belong to. To get the class name of a particular instance name object, use the following syntax: Replace instance_name_object with the name of the instance name object to inspect. This returns a string representation of the class name. To get information about the namespace an instance name object belongs to, use: This returns a string representation of the namespace. Example 22.28. Examining Instance Names To inspect the device_name instance name object created in Example 22.27, "Accessing Instance Names" and display its class name and the corresponding namespace, type the following at the interactive prompt: Creating New Instance Names LMIShell allows you to create a new wrapped CIMInstanceName object if you know all primary keys of a remote object. This instance name object can then be used to retrieve the whole instance object. 
To create a new instance name of a class object, use the new_instance_name() method as follows: Replace class_object with the name of the class object and key_properties with a dictionary that consists of key-value pairs, where keys represent key properties and values represent key property values. This method returns an LMIInstanceName object. Example 22.29. Creating New Instance Names The LMI_Account class represents user accounts on the managed system. To use the ns namespace object created in Example 22.5, "Accessing Namespace Objects" and create a new instance name of the LMI_Account class representing the lmishell-user user on the managed system, type the following at the interactive prompt: Listing and Accessing Key Properties To list all available key properties of a particular instance name object, use the print_key_properties() method as follows: Replace instance_name_object with the name of the instance name object to inspect. This method prints available key properties to standard output. To get a list of available key properties, use the key_properties() method: This method returns a list of strings. Example 22.30. Listing Available Key Properties To inspect the device_name instance name object created in Example 22.27, "Accessing Instance Names" and list all available key properties, type the following at the interactive prompt: To assign a list of these key properties to a variable named device_name_properties , type: To get the current value of a particular key property, use the following syntax: Replace key_property_name with the name of the key property to access. Example 22.31. Accessing Individual Key Properties To inspect the device_name instance name object created in Example 22.27, "Accessing Instance Names" and display the value of the key property named SystemName , type the following at the interactive prompt: Converting Instance Names to Instances Each instance name can be converted to an instance. To do so, use the to_instance() method as follows: Replace instance_name_object with the name of the instance name object to convert. This method returns an LMIInstance object. Example 22.32. Converting Instance Names to Instances To convert the device_name instance name object created in Example 22.27, "Accessing Instance Names" to an instance object and assign it to a variable named device , type the following at the interactive prompt: 22.4.7. Working with Associated Objects The Common Information Model defines an association relationship between managed objects. Accessing Associated Instances To get a list of all objects associated with a particular instance object, use the associators() method as follows: To access the first object associated with a particular instance object, use the first_associator() method: Replace instance_object with the name of the instance object to inspect. You can filter the results by specifying the following parameters: AssocClass - Each returned object must be associated with the source object through an instance of this class or one of its subclasses. The default value is None . ResultClass - Each returned object must be either an instance of this class or one of its subclasses, or it must be this class or one of its subclasses. The default value is None . Role - Each returned object must be associated with the source object through an association in which the source object plays the specified role. The name of the property in the association class that refers to the source object must match the value of this parameter. 
The default value is None . ResultRole - Each returned object must be associated with the source object through an association in which the returned object plays the specified role. The name of the property in the association class that refers to the returned object must match the value of this parameter. The default value is None . The remaining parameters refer to: IncludeQualifiers - A boolean indicating whether all qualifiers of each object (including qualifiers on the object and on any returned properties) should be included as QUALIFIER elements in the response. The default value is False . IncludeClassOrigin - A boolean indicating whether the CLASSORIGIN attribute should be present on all appropriate elements in each returned object. The default value is False . PropertyList - The members of this list define one or more property names. Returned objects will not include elements for any properties missing from this list. If PropertyList is an empty list, no properties are included in returned objects. If it is None , no additional filtering is defined. The default value is None . Example 22.33. Accessing Associated Instances The LMI_StorageExtent class represents block devices available in the system. To use the ns namespace object created in Example 22.5, "Accessing Namespace Objects" , create an instance of the LMI_StorageExtent class for the block device named /dev/vda , and assign it to a variable named vda , type the following at the interactive prompt: To get a list of all disk partitions on this block device and assign it to a variable named vda_partitions , use the associators() method as follows: Accessing Associated Instance Names To get a list of all associated instance names of a particular instance object, use the associator_names() method as follows: To access the first associated instance name of a particular instance object, use the first_associator_name() method: Replace instance_object with the name of the instance object to inspect. You can filter the results by specifying the following parameters: AssocClass - Each returned name identifies an object that must be associated with the source object through an instance of this class or one of its subclasses. The default value is None . ResultClass - Each returned name identifies an object that must be either an instance of this class or one of its subclasses, or it must be this class or one of its subclasses. The default value is None . Role - Each returned name identifies an object that must be associated with the source object through an association in which the source object plays the specified role. The name of the property in the association class that refers to the source object must match the value of this parameter. The default value is None . ResultRole - Each returned name identifies an object that must be associated with the source object through an association in which the returned named object plays the specified role. The name of the property in the association class that refers to the returned object must match the value of this parameter. The default value is None . Example 22.34. Accessing Associated Instance Names To use the vda instance object created in Example 22.33, "Accessing Associated Instances" , get a list of its associated instance names, and assign it to a variable named vda_partitions , type: 22.4.8. Working with Association Objects The Common Information Model defines an association relationship between managed objects. 
Association objects define the relationship between two other objects. Accessing Association Instances To get a list of association objects that refer to a particular target object, use the references() method as follows: To access the first association object that refers to a particular target object, use the first_reference() method: Replace instance_object with the name of the instance object to inspect. You can filter the results by specifying the following parameters: ResultClass - Each returned object must be either an instance of this class or one of its subclasses, or it must be this class or one of its subclasses. The default value is None . Role - Each returned object must refer to the target object through a property with a name that matches the value of this parameter. The default value is None . The remaining parameters refer to: IncludeQualifiers - A boolean indicating whether each object (including qualifiers on the object and on any returned properties) should be included as a QUALIFIER element in the response. The default value is False . IncludeClassOrigin - A boolean indicating whether the CLASSORIGIN attribute should be present on all appropriate elements in each returned object. The default value is False . PropertyList - The members of this list define one or more property names. Returned objects will not include elements for any properties missing from this list. If PropertyList is an empty list, no properties are included in returned objects. If it is None , no additional filtering is defined. The default value is None . Example 22.35. Accessing Association Instances The LMI_LANEndpoint class represents a communication endpoint associated with a certain network interface device. To use the ns namespace object created in Example 22.5, "Accessing Namespace Objects" , create an instance of the LMI_LANEndpoint class for the network interface device named eth0, and assign it to a variable named lan_endpoint , type the following at the interactive prompt: To access the first association object that refers to an LMI_BindsToLANEndpoint object and assign it to a variable named bind , type: You can now use the Dependent property to access the dependent LMI_IPProtocolEndpoint class that represents the IP address of the corresponding network interface device: Accessing Association Instance Names To get a list of association instance names of a particular instance object, use the reference_names() method as follows: To access the first association instance name of a particular instance object, use the first_reference_name() method: Replace instance_object with the name of the instance object to inspect. You can filter the results by specifying the following parameters: ResultClass - Each returned object name identifies either an instance of this class or one of its subclasses, or this class or one of its subclasses. The default value is None . Role - Each returned object identifies an object that refers to the target instance through a property with a name that matches the value of this parameter. The default value is None . Example 22.36. 
Accessing Association Instance Names To use the lan_endpoint instance object created in Example 22.35, "Accessing Association Instances" , access the first association instance name that refers to an LMI_BindsToLANEndpoint object, and assign it to a variable named bind , type: You can now use the Dependent property to access the dependent LMI_IPProtocolEndpoint class that represents the IP address of the corresponding network interface device: 22.4.9. Working with Indications Indication is a reaction to a specific event that occurs in response to a particular change in data. LMIShell can subscribe to an indication in order to receive such event responses. Subscribing to Indications To subscribe to an indication, use the subscribe_indication() method as follows: Alternatively, you can use a shorter version of the method call as follows: Replace connection_object with a connection object and host_name with the host name of the system you want to deliver the indications to. By default, all subscriptions created by the LMIShell interpreter are automatically deleted when the interpreter terminates. To change this behavior, pass the Permanent=True keyword parameter to the subscribe_indication() method call. This will prevent LMIShell from deleting the subscription. Example 22.37. Subscribing to Indications To use the c connection object created in Example 22.1, "Connecting to a Remote CIMOM" and subscribe to an indication named cpu , type the following at the interactive prompt: Listing Subscribed Indications To list all the subscribed indications, use the print_subscribed_indications() method as follows: Replace connection_object with the name of the connection object to inspect. This method prints subscribed indications to standard output. To get a list of subscribed indications, use the subscribed_indications() method: This method returns a list of strings. Example 22.38. Listing Subscribed Indications To inspect the c connection object created in Example 22.1, "Connecting to a Remote CIMOM" and list all subscribed indications, type the following at the interactive prompt: To assign a list of these indications to a variable named indications , type: Unsubscribing from Indications By default, all subscriptions created by the LMIShell interpreter are automatically deleted when the interpreter terminates. To delete an individual subscription sooner, use the unsubscribe_indication() method as follows: Replace connection_object with the name of the connection object and indication_name with the name of the indication to delete. To delete all subscriptions, use the unsubscribe_all_indications() method: Example 22.39. Unsubscribing from Indications To use the c connection object created in Example 22.1, "Connecting to a Remote CIMOM" and unsubscribe from the indication created in Example 22.37, "Subscribing to Indications" , type the following at the interactive prompt: Implementing an Indication Handler The subscribe_indication() method allows you to specify the host name of the system you want to deliver the indications to. The following example shows how to implement an indication handler: The first argument of the handler is an LmiIndication object, which contains a list of methods and objects exported by the indication. Other parameters are user specific: those arguments need to be specified when adding a handler to the listener. In the example above, the add_handler() method call uses a special string with eight "X" characters. 
These characters are replaced with a random string that is generated by listeners in order to avoid a possible handler name collision. To use the random string, start the indication listener first and then subscribe to an indication so that the Destination property of the handler object contains the following value: schema :// host_name / random_string . Example 22.40. Implementing an Indication Handler The following script illustrates how to write a handler that monitors a managed system located at 192.168.122.1 and calls the indication_callback() function whenever a new user account is created: 22.4.10. Example Usage This section provides a number of examples for various CIM providers distributed with the OpenLMI packages. All examples in this section use the following two variable definitions: Replace host_name with the host name of the managed system, user_name with the name of user that is allowed to connect to OpenPegasus CIMOM running on that system, and password with the user's password. Using the OpenLMI Service Provider The openlmi-service package installs a CIM provider for managing system services. The examples below illustrate how to use this CIM provider to list available system services and how to start, stop, enable, and disable them. Example 22.41. Listing Available Services To list all available services on the managed machine along with information regarding whether the service has been started ( TRUE ) or stopped ( FALSE ) and the status string, use the following code snippet: To list only the services that are enabled by default, use this code snippet: Note that the value of the EnabledDefault property is equal to 2 for enabled services and 3 for disabled services. To display information about the cups service, use the following: Example 22.42. Starting and Stopping Services To start and stop the cups service and to see its current status, use the following code snippet: Example 22.43. Enabling and Disabling Services To enable and disable the cups service and to display its EnabledDefault property, use the following code snippet: Using the OpenLMI Networking Provider The openlmi-networking package installs a CIM provider for networking. The examples below illustrate how to use this CIM provider to list IP addresses associated with a certain port number, create a new connection, configure a static IP address, and activate a connection. Example 22.44. Listing IP Addresses Associated with a Given Port Number To list all IP addresses associated with the eth0 network interface, use the following code snippet: This code snippet uses the LMI_IPProtocolEndpoint class associated with a given LMI_IPNetworkConnection class. To display the default gateway, use this code snippet: The default gateway is represented by an LMI_NetworkRemoteServiceAccessPoint instance with the AccessContext property equal to DefaultGateway . To get a list of DNS servers, the object model needs to be traversed as follows: Get the LMI_IPProtocolEndpoint instances associated with a given LMI_IPNetworkConnection using LMI_NetworkSAPSAPDependency . Use the same association for the LMI_DNSProtocolEndpoint instances. The LMI_NetworkRemoteServiceAccessPoint instances with the AccessContext property equal to the DNS Server associated through LMI_NetworkRemoteAccessAvailableToElement have the DNS server address in the AccessInfo property. There can be more possible paths to get to the RemoteServiceAccessPath and entries can be duplicated. 
The following code snippet uses the set() function to remove duplicate entries from the list of DNS servers: Example 22.45. Creating a New Connection and Configuring a Static IP Address To create a new setting with a static IPv4 and stateless IPv6 configuration for network interface eth0, use the following code snippet: This code snippet creates a new setting by calling the LMI_CreateIPSetting() method on the instance of LMI_IPNetworkConnectionCapabilities , which is associated with LMI_IPNetworkConnection through LMI_IPNetworkConnectionElementCapabilities . It also uses the push() method to modify the setting. Example 22.46. Activating a Connection To apply a setting to the network interface, call the ApplySettingToIPNetworkConnection() method of the LMI_IPConfigurationService class. This method is asynchronous and returns a job. The following code snippets illustrates how to call this method synchronously: The Mode parameter affects how the setting is applied. The most commonly used values of this parameter are as follows: 1 - apply the setting now and make it auto-activated. 2 - make the setting auto-activated and do not apply it now. 4 - disconnect and disable auto-activation. 5 - do not change the setting state, only disable auto-activation. 32768 - apply the setting. 32769 - disconnect. Using the OpenLMI Storage Provider The openlmi-storage package installs a CIM provider for storage management. The examples below illustrate how to use this CIM provider to create a volume group, create a logical volume, build a file system, mount a file system, and list block devices known to the system. In addition to the c and ns variables, these examples use the following variable definitions: Example 22.47. Creating a Volume Group To create a new volume group located in /dev/myGroup/ that has three members and the default extent size of 4 MB, use the following code snippet: Example 22.48. Creating a Logical Volume To create two logical volumes with the size of 100 MB, use this code snippet: Example 22.49. Creating a File System To create an ext3 file system on logical volume lv from Example 22.48, "Creating a Logical Volume" , use the following code snippet: Example 22.50. Mounting a File System To mount the file system created in Example 22.49, "Creating a File System" , use the following code snippet: Example 22.51. Listing Block Devices To list all block devices known to the system, use the following code snippet: Using the OpenLMI Hardware Provider The openlmi-hardware package installs a CIM provider for monitoring hardware. The examples below illustrate how to use this CIM provider to retrieve information about CPU, memory modules, PCI devices, and the manufacturer and model of the machine. Example 22.52. Viewing CPU Information To display basic CPU information such as the CPU name, the number of processor cores, and the number of hardware threads, use the following code snippet: Example 22.53. Viewing Memory Information To display basic information about memory modules such as their individual sizes, use the following code snippet: Example 22.54. Viewing Chassis Information To display basic information about the machine such as its manufacturer or its model, use the following code snippet: Example 22.55. Listing PCI Devices To list all PCI devices known to the system, use the following code snippet: 22.5. Using OpenLMI Scripts The LMIShell interpreter is built on top of Python modules that can be used to develop custom management tools. 
The OpenLMI Scripts project provides a number of Python libraries for interfacing with OpenLMI providers. In addition, it is distributed with lmi , an extensible utility that can be used to interact with these libraries from the command line. To install OpenLMI Scripts on your system, type the following at a shell prompt: This command installs the Python modules and the lmi utility in the ~/.local/ directory. To extend the functionality of the lmi utility, install additional OpenLMI modules by using the following command: For a complete list of available modules, see the Python website . For more information about OpenLMI Scripts, see the official OpenLMI Scripts documentation . 22.6. Additional Resources For more information about OpenLMI and system management in general, see the resources listed below. Installed Documentation lmishell (1) - The manual page for the lmishell client and interpreter provides detailed information about its execution and usage. Online Documentation Red Hat Enterprise Linux 7 Networking Guide - The Networking Guide for Red Hat Enterprise Linux 7 documents relevant information regarding the configuration and administration of network interfaces and network services on the system. Red Hat Enterprise Linux 7 Storage Administration Guide - The Storage Administration Guide for Red Hat Enterprise Linux 7 provides instructions on how to manage storage devices and file systems on the system. Red Hat Enterprise Linux 7 Power Management Guide - The Power Management Guide for Red Hat Enterprise Linux 7 explains how to manage power consumption of the system effectively. It discusses different techniques that lower power consumption for both servers and laptops, and explains how each technique affects the overall performance of the system. Red Hat Enterprise Linux 7 Linux Domain Identity, Authentication, and Policy Guide - The Linux Domain Identity, Authentication, and Policy Guide for Red Hat Enterprise Linux 7 covers all aspects of installing, configuring, and managing IPA domains, including both servers and clients. The guide is intended for IT and systems administrators. FreeIPA Documentation - The FreeIPA Documentation serves as the primary user documentation for using the FreeIPA Identity Management project. OpenSSL Home Page - The OpenSSL home page provides an overview of the OpenSSL project. Mozilla NSS Documentation - The Mozilla NSS Documentation serves as the primary user documentation for using the Mozilla NSS project. See Also Chapter 4, Managing Users and Groups documents how to manage system users and groups in the graphical user interface and on the command line. Chapter 9, Yum describes how to use the Yum package manager to search, install, update, and uninstall packages on the command line. Chapter 10, Managing Services with systemd provides an introduction to systemd and documents how to use the systemctl command to manage system services, configure systemd targets, and execute power management commands. Chapter 12, OpenSSH describes how to configure an SSH server and how to use the ssh , scp , and sftp client utilities to access it. | [
"install tog-pegasus",
"install openlmi-{storage,networking,service,account,powermanagement}",
"passwd pegasus",
"systemctl start tog-pegasus.service",
"systemctl enable tog-pegasus.service",
"firewall-cmd --add-port 5989/tcp",
"firewall-cmd --permanent --add-port 5989/tcp",
"install openlmi-tools",
"systemctl restart tog-pegasus.service",
"scp root@ hostname :/etc/Pegasus/server.pem /etc/pki/ca-trust/source/anchors/pegasus- hostname .pem",
"sha1sum /etc/Pegasus/server.pem",
"sha1sum /etc/pki/ca-trust/source/anchors/pegasus- hostname .pem",
"update-ca-trust extract",
"cp /etc/ipa/ca.crt /etc/pki/ca-trust/source/anchors/ipa.crt",
"update-ca-trust extract",
"ipa service-add CIMOM/ hostname",
"ipa-getcert request -f /etc/Pegasus/server.pem -k /etc/Pegasus/file.pem -N CN= hostname -K CIMOM/ hostname",
"cp /etc/ipa/ca.crt /etc/pki/ca-trust/source/anchors/ipa.crt",
"update-ca-trust extract",
"update-ca-trust extract",
"cp /path/to/ca.crt /etc/pki/ca-trust/source/anchors/ca.crt",
"update-ca-trust extract",
"[ req ] distinguished_name = req_distinguished_name prompt = no [ req_distinguished_name ] C = US ST = Massachusetts L = Westford O = Fedora OU = Fedora OpenLMI CN = hostname",
"openssl genrsa -out /etc/Pegasus/file.pem 1024",
"openssl req -config /etc/Pegasus/ssl.cnf -new -key /etc/Pegasus/file.pem -out /etc/Pegasus/server.csr",
"cp /path/to/ca.crt /etc/Pegasus/client.pem",
"lmishell",
"lmishell --noverify",
"> (reverse-i-search)` connect ': c = connect(\"server.example.com\", \"pegasus\")",
"clear_history ()",
"history_file = \"~/.lmishell_history\" history_length = 1000",
"use_exceptions ()",
"use_exception ( False )",
"use_exceptions = True",
"object_name . clear_cache ()",
"object_name . use_cache ( False )",
"object_name . use_cache ( True )",
"use_cache = False",
"> quit() ~]USD",
"lmishell file_name",
"lmishell --interact file_name",
"connect ( host_name , user_name , password )",
"> c = connect(\"server.example.com\", \"pegasus\") password: >",
"connect ( host_name )",
"> c = connect(\"localhost\") >",
"isinstance ( object_name , LMIConnection )",
"> isinstance(c, LMIConnection) True >",
"> c is None False >",
"object_name . print_namespaces ()",
"object_name . namespaces",
"> c.root.print_namespaces() cimv2 interop PG_InterOp PG_Internal >",
"> root_namespaces = c.root.namespaces >",
"object_name . namespace_name",
"> ns = c.root.cimv2 >",
"namespace_object . print_classes()",
"namespace_object . classes ()",
"> ns.print_classes() CIM_CollectionInSystem CIM_ConcreteIdentity CIM_ControlledBy CIM_DeviceSAPImplementation CIM_MemberOfStatusCollection >",
"> cimv2_classes = ns.classes() >",
"namespace_object . class_name",
"> cls = ns.LMI_IPNetworkConnection >",
"class_object . classname",
"class_object . namespace",
"class_object . doc ()",
"> cls.classname 'LMI_IPNetworkConnection' > cls.namespace 'root/cimv2' >",
"> cls.doc() Class: LMI_IPNetworkConnection SuperClass: CIM_IPNetworkConnection [qualifier] string UMLPackagePath: 'CIM::Network::IP' [qualifier] string Version: '0.1.0'",
"class_object . print_methods ()",
"class_object . methods()",
"> cls.print_methods() RequestStateChange >",
"> service_methods = cls.methods() >",
"class_object . print_properties ()",
"class_object . properties ()",
"> cls.print_properties() RequestedState HealthState StatusDescriptions TransitioningToState Generation >",
"> service_properties = cls.properties() >",
"class_object . print_valuemap_properties ()",
"class_object . valuemap_properties ()",
"> cls.print_valuemap_properties() RequestedState HealthState TransitioningToState DetailedStatus OperationalStatus >",
"> service_valuemap_properties = cls.valuemap_properties() >",
"class_object . valuemap_property Values",
"class_object . valuemap_property Values . print_values ()",
"class_object . valuemap_property Values . values ()",
"> cls.RequestedStateValues.print_values() Reset NoChange NotApplicable Quiesce Unknown >",
"> requested_state_values = cls.RequestedStateValues.values() >",
"class_object . valuemap_property Values . constant_value_name",
"class_object . valuemap_property Values . value (\" constant_value_name \")",
"class_object . valuemap_property Values . value_name (\" constant_value \")",
"> cls.RequestedStateValues.Reset 11 > cls.RequestedStateValues.value(\"Reset\") 11 >",
"> cls.RequestedStateValues.value_name(11) u'Reset' >",
"class_object . fetch ()",
"class_object . instances ()",
"class_object . first_instance ()",
"class_object . instances ( criteria )",
"class_object . first_instance ( criteria )",
"> device = cls.first_instance({\"ElementName\": \"eth0\"}) >",
"instance_object . classname",
"instance_object . namespace",
"instance_object . path",
"instance_object . doc ()",
"> device.classname u'LMI_IPNetworkConnection' > device.namespace 'root/cimv2' >",
"> device.doc() Instance of LMI_IPNetworkConnection [property] uint16 RequestedState = '12' [property] uint16 HealthState [property array] string [] StatusDescriptions",
"class_object . create_instance ( properties )",
"> group = ns.LMI_Group.first_instance({\"Name\" : \"pegasus\"}) > user = ns.LMI_Account.first_instance({\"Name\" : \"lmishell-user\"}) >",
"> identity = user.first_associator(ResultClass=\"LMI_Identity\") >",
"> ns.LMI_MemberOfGroup.create_instance({ ... \"Member\" : identity.path, ... \"Collection\" : group.path}) LMIInstance(classname=\"LMI_MemberOfGroup\", ...) >",
"instance_object . delete ()",
"> user = ns.LMI_Account.first_instance({\"Name\" : \"lmishell-user\"}) >",
"> user.delete() True >",
"instance_object . print_properties ()",
"instance_object . properties ()",
"> device.print_properties() RequestedState HealthState StatusDescriptions TransitioningToState Generation >",
"> device_properties = device.properties() >",
"instance_object . property_name",
"instance_object . property_name = value",
"instance_object . push ()",
"> device.SystemName u'server.example.com' >",
"instance_object . print_methods ()",
"instance_object . methods ()",
"> device.print_methods() RequestStateChange >",
"> network_device_methods = device.methods() >",
"instance_object . method_name ( parameter = value , ...)",
"> sys = ns.PG_ComputerSystem.first_instance() >",
"> acc = ns.LMI_AccountManagementService.first_instance() >",
"> acc.CreateAccount(Name=\"lmishell-user\", System=sys) LMIReturnValue(rval=0, rparams=NocaseDict({u'Account': LMIInstanceName(classname=\"LMI_Account\"...), u'Identities': [LMIInstanceName(classname=\"LMI_Identity\"...), LMIInstanceName(classname=\"LMI_Identity\"...)]}), errorstr='')",
"instance_object . Sync method_name ( parameter = value , ...)",
"instance_object . Sync method_name ( PreferPolling = True parameter = value , ...)",
"instance_object . method_name . print_valuemap_parameters ()",
"instance_object . method_name . valuemap_parameters ()",
"> acc.CreateAccount.print_valuemap_parameters() CreateAccount >",
"> create_account_parameters = acc.CreateAccount.valuemap_parameters() >",
"instance_object . method_name . valuemap_parameter Values",
"instance_object . method_name . valuemap_parameter Values . print_values ()",
"instance_object . method_name . valuemap_parameter Values . values ()",
"> acc.CreateAccount.CreateAccountValues.print_values() Operationunsupported Failed Unabletosetpasswordusercreated Unabletocreatehomedirectoryusercreatedandpasswordset Operationcompletedsuccessfully >",
"> create_account_values = acc.CreateAccount.CreateAccountValues.values() >",
"instance_object . method_name . valuemap_parameter Values . constant_value_name",
"instance_object . method_name . valuemap_parameter Values . value (\" constant_value_name \")",
"instance_object . method_name . valuemap_parameter Values . value_name (\" constant_value \")",
"> acc.CreateAccount.CreateAccountValues.Failed 2 > acc.CreateAccount.CreateAccountValues.value(\"Failed\") 2 >",
"> acc.CreateAccount.CreateAccountValues.value_name(2) u'Failed' >",
"instance_object . refresh ()",
"> device.refresh() LMIReturnValue(rval=True, rparams=NocaseDict({}), errorstr='') >",
"instance_object . tomof ()",
"> device.tomof() instance of LMI_IPNetworkConnection { RequestedState = 12; HealthState = NULL; StatusDescriptions = NULL; TransitioningToState = 12;",
"class_object . instance_names ()",
"class_object . first_instance_name ()",
"class_object . instance_names ( criteria )",
"class_object . first_instance_name ( criteria )",
"> device_name = cls.first_instance_name({\"Name\": \"eth0\"}) >",
"instance_name_object . classname",
"instance_name_object . namespace",
"> device_name.classname u'LMI_IPNetworkConnection' > device_name.namespace 'root/cimv2' >",
"class_object . new_instance_name ( key_properties )",
"> instance_name = ns.LMI_Account.new_instance_name({ ... \"CreationClassName\" : \"LMI_Account\", ... \"Name\" : \"lmishell-user\", ... \"SystemCreationClassName\" : \"PG_ComputerSystem\", ... \"SystemName\" : \"server\"}) >",
"instance_name_object . print_key_properties ()",
"instance_name_object . key_properties ()",
"> device_name.print_key_properties() CreationClassName SystemName Name SystemCreationClassName >",
"> device_name_properties = device_name.key_properties() >",
"instance_name_object . key_property_name",
"> device_name.SystemName u'server.example.com' >",
"instance_name_object . to_instance ()",
"> device = device_name.to_instance() >",
"instance_object . associators ( AssocClass= class_name , ResultClass= class_name , ResultRole= role , IncludeQualifiers= include_qualifiers , IncludeClassOrigin= include_class_origin , PropertyList= property_list )",
"instance_object . first_associator ( AssocClass= class_name , ResultClass= class_name , ResultRole= role , IncludeQualifiers= include_qualifiers , IncludeClassOrigin= include_class_origin , PropertyList= property_list )",
"> vda = ns.LMI_StorageExtent.first_instance({ ... \"DeviceID\" : \"/dev/vda\"}) >",
"> vda_partitions = vda.associators(ResultClass=\"LMI_DiskPartition\") >",
"instance_object . associator_names ( AssocClass= class_name , ResultClass= class_name , Role= role , ResultRole= role )",
"instance_object . first_associator_name ( AssocClass= class_object , ResultClass= class_object , Role= role , ResultRole= role )",
"> vda_partitions = vda.associator_names(ResultClass=\"LMI_DiskPartition\") >",
"instance_object . references ( ResultClass= class_name , Role= role , IncludeQualifiers= include_qualifiers , IncludeClassOrigin= include_class_origin , PropertyList= property_list )",
"instance_object . first_reference ( ... ResultClass= class_name , ... Role= role , ... IncludeQualifiers= include_qualifiers , ... IncludeClassOrigin= include_class_origin , ... PropertyList= property_list ) >",
"> lan_endpoint = ns.LMI_LANEndpoint.first_instance({ ... \"Name\" : \"eth0\"}) >",
"> bind = lan_endpoint.first_reference( ... ResultClass=\"LMI_BindsToLANEndpoint\") >",
"> ip = bind.Dependent.to_instance() > print ip.IPv4Address 192.168.122.1 >",
"instance_object . reference_names ( ResultClass= class_name , Role= role )",
"instance_object . first_reference_name ( ResultClass= class_name , Role= role )",
"> bind = lan_endpoint.first_reference_name( ... ResultClass=\"LMI_BindsToLANEndpoint\")",
"> ip = bind.Dependent.to_instance() > print ip.IPv4Address 192.168.122.1 >",
"connection_object . subscribe_indication ( QueryLanguage= \"WQL\" , Query= 'SELECT * FROM CIM_InstModification' , Name= \"cpu\" , CreationNamespace= \"root/interop\" , SubscriptionCreationClassName= \"CIM_IndicationSubscription\" , FilterCreationClassName= \"CIM_IndicationFilter\" , FilterSystemCreationClassName= \"CIM_ComputerSystem\" , FilterSourceNamespace= \"root/cimv2\" , HandlerCreationClassName= \"CIM_IndicationHandlerCIMXML\" , HandlerSystemCreationClassName= \"CIM_ComputerSystem\" , Destination= \"http://host_name:5988\" )",
"connection_object . subscribe_indication ( Query= 'SELECT * FROM CIM_InstModification' , Name= \"cpu\" , Destination= \"http://host_name:5988\" )",
"> c.subscribe_indication( ... QueryLanguage=\"WQL\", ... Query='SELECT * FROM CIM_InstModification', ... Name=\"cpu\", ... CreationNamespace=\"root/interop\", ... SubscriptionCreationClassName=\"CIM_IndicationSubscription\", ... FilterCreationClassName=\"CIM_IndicationFilter\", ... FilterSystemCreationClassName=\"CIM_ComputerSystem\", ... FilterSourceNamespace=\"root/cimv2\", ... HandlerCreationClassName=\"CIM_IndicationHandlerCIMXML\", ... HandlerSystemCreationClassName=\"CIM_ComputerSystem\", ... Destination=\"http://server.example.com:5988\") LMIReturnValue(rval=True, rparams=NocaseDict({}), errorstr='') >",
"connection_object . print_subscribed_indications ()",
"connection_object . subscribed_indications ()",
"> c.print_subscribed_indications() >",
"> indications = c.subscribed_indications() >",
"connection_object . unsubscribe_indication ( indication_name )",
"connection_object . unsubscribe_all_indications ()",
"> c.unsubscribe_indication('cpu') LMIReturnValue(rval=True, rparams=NocaseDict({}), errorstr='') >",
 def handler(ind,">
"> def handler(ind, arg1, arg2, **kwargs): ... exported_objects = ind.exported_objects() ... do_something_with(exported_objects) > listener = LMIIndicationListener(\"0.0.0.0\", listening_port) > listener.add_handler(\"indication-name-XXXXXXXX\", handler, arg1, arg2, **kwargs) > listener.start() >",
"#!/usr/bin/lmishell import sys from time import sleep from lmi.shell.LMIUtil import LMIPassByRef from lmi.shell.LMIIndicationListener import LMIIndicationListener These are passed by reference to indication_callback var1 = LMIPassByRef(\"some_value\") var2 = LMIPassByRef(\"some_other_value\") def indication_callback(ind, var1, var2): # Do something with ind, var1 and var2 print ind.exported_objects() print var1.value print var2.value c = connect(\"hostname\", \"username\", \"password\") listener = LMIIndicationListener(\"0.0.0.0\", 65500) unique_name = listener.add_handler( \"demo-XXXXXXXX\", # Creates a unique name for me indication_callback, # Callback to be called var1, # Variable passed by ref var2 # Variable passed by ref ) listener.start() print c.subscribe_indication( Name=unique_name, Query=\"SELECT * FROM LMI_AccountInstanceCreationIndication WHERE SOURCEINSTANCE ISA LMI_Account\", Destination=\"192.168.122.1:65500\" ) try: while True: sleep(60) except KeyboardInterrupt: sys.exit(0)",
"c = connect(\"host_name\", \"user_name\", \"password\") ns = c.root.cimv2",
"for service in ns.LMI_Service.instances(): print \"%s:\\t%s\" % (service.Name, service.Status)",
"cls = ns.LMI_Service for service in cls.instances(): if service.EnabledDefault == cls.EnabledDefaultValues.Enabled: print service.Name",
"cups = ns.LMI_Service.first_instance({\"Name\": \"cups.service\"}) cups.doc()",
"cups = ns.LMI_Service.first_instance({\"Name\": \"cups.service\"}) cups.StartService() print cups.Status cups.StopService() print cups.Status",
"cups = ns.LMI_Service.first_instance({\"Name\": \"cups.service\"}) cups.TurnServiceOff() print cups.EnabledDefault cups.TurnServiceOn() print cups.EnabledDefault",
"device = ns.LMI_IPNetworkConnection.first_instance({'ElementName': 'eth0'}) for endpoint in device.associators(AssocClass=\"LMI_NetworkSAPSAPDependency\", ResultClass=\"LMI_IPProtocolEndpoint\"): if endpoint.ProtocolIFType == ns.LMI_IPProtocolEndpoint.ProtocolIFTypeValues.IPv4: print \"IPv4: %s/%s\" % (endpoint.IPv4Address, endpoint.SubnetMask) elif endpoint.ProtocolIFType == ns.LMI_IPProtocolEndpoint.ProtocolIFTypeValues.IPv6: print \"IPv6: %s/%d\" % (endpoint.IPv6Address, endpoint.IPv6SubnetPrefixLength)",
"for rsap in device.associators(AssocClass=\"LMI_NetworkRemoteAccessAvailableToElement\", ResultClass=\"LMI_NetworkRemoteServiceAccessPoint\"): if rsap.AccessContext == ns.LMI_NetworkRemoteServiceAccessPoint.AccessContextValues.DefaultGateway: print \"Default Gateway: %s\" % rsap.AccessInfo",
"dnsservers = set() for ipendpoint in device.associators(AssocClass=\"LMI_NetworkSAPSAPDependency\", ResultClass=\"LMI_IPProtocolEndpoint\"): for dnsedpoint in ipendpoint.associators(AssocClass=\"LMI_NetworkSAPSAPDependency\", ResultClass=\"LMI_DNSProtocolEndpoint\"): for rsap in dnsedpoint.associators(AssocClass=\"LMI_NetworkRemoteAccessAvailableToElement\", ResultClass=\"LMI_NetworkRemoteServiceAccessPoint\"): if rsap.AccessContext == ns.LMI_NetworkRemoteServiceAccessPoint.AccessContextValues.DNSServer: dnsservers.add(rsap.AccessInfo) print \"DNS:\", \", \".join(dnsservers)",
"capability = ns.LMI_IPNetworkConnectionCapabilities.first_instance({ 'ElementName': 'eth0' }) result = capability.LMI_CreateIPSetting(Caption='eth0 Static', IPv4Type=capability.LMI_CreateIPSetting.IPv4TypeValues.Static, IPv6Type=capability.LMI_CreateIPSetting.IPv6TypeValues.Stateless) setting = result.rparams[\"SettingData\"].to_instance() for settingData in setting.associators(AssocClass=\"LMI_OrderedIPAssignmentComponent\"): if setting.ProtocolIFType == ns.LMI_IPAssignmentSettingData.ProtocolIFTypeValues.IPv4: # Set static IPv4 address settingData.IPAddresses = [\"192.168.1.100\"] settingData.SubnetMasks = [\"255.255.0.0\"] settingData.GatewayAddresses = [\"192.168.1.1\"] settingData.push()",
"setting = ns.LMI_IPAssignmentSettingData.first_instance({ \"Caption\": \"eth0 Static\" }) port = ns.LMI_IPNetworkConnection.first_instance({ 'ElementName': 'ens8' }) service = ns.LMI_IPConfigurationService.first_instance() service.SyncApplySettingToIPNetworkConnection(SettingData=setting, IPNetworkConnection=port, Mode=32768)",
"MEGABYTE = 1024*1024 storage_service = ns.LMI_StorageConfigurationService.first_instance() filesystem_service = ns.LMI_FileSystemConfigurationService.first_instance()",
"Find the devices to add to the volume group (filtering the CIM_StorageExtent.instances() call would be faster, but this is easier to read): sda1 = ns.CIM_StorageExtent.first_instance({\"Name\": \"/dev/sda1\"}) sdb1 = ns.CIM_StorageExtent.first_instance({\"Name\": \"/dev/sdb1\"}) sdc1 = ns.CIM_StorageExtent.first_instance({\"Name\": \"/dev/sdc1\"}) Create a new volume group: (ret, outparams, err) = storage_service.SyncCreateOrModifyVG( ElementName=\"myGroup\", InExtents=[sda1, sdb1, sdc1]) vg = outparams['Pool'].to_instance() print \"VG\", vg.PoolID, \"with extent size\", vg.ExtentSize, \"and\", vg.RemainingExtents, \"free extents created.\"",
"Find the volume group: vg = ns.LMI_VGStoragePool.first_instance({\"Name\": \"/dev/mapper/myGroup\"}) Create the first logical volume: (ret, outparams, err) = storage_service.SyncCreateOrModifyLV( ElementName=\"Vol1\", InPool=vg, Size=100 * MEGABYTE) lv = outparams['TheElement'].to_instance() print \"LV\", lv.DeviceID, \"with\", lv.BlockSize * lv.NumberOfBlocks, \"bytes created.\" Create the second logical volume: (ret, outparams, err) = storage_service.SyncCreateOrModifyLV( ElementName=\"Vol2\", InPool=vg, Size=100 * MEGABYTE) lv = outparams['TheElement'].to_instance() print \"LV\", lv.DeviceID, \"with\", lv.BlockSize * lv.NumberOfBlocks, \"bytes created.\"",
"(ret, outparams, err) = filesystem_service.SyncLMI_CreateFileSystem( FileSystemType=filesystem_service.LMI_CreateFileSystem.FileSystemTypeValues.EXT3, InExtents=[lv])",
"Find the file system on the logical volume: fs = lv.first_associator(ResultClass=\"LMI_LocalFileSystem\") mount_service = ns.LMI_MountConfigurationService.first_instance() (rc, out, err) = mount_service.SyncCreateMount( FileSystemType='ext3', Mode=32768, # just mount FileSystem=fs, MountPoint='/mnt/test', FileSystemSpec=lv.Name)",
"devices = ns.CIM_StorageExtent.instances() for device in devices: if lmi_isinstance(device, ns.CIM_Memory): # Memory and CPU caches are StorageExtents too, do not print them continue print device.classname, print device.DeviceID, print device.Name, print device.BlockSize*device.NumberOfBlocks",
"cpu = ns.LMI_Processor.first_instance() cpu_cap = cpu.associators(ResultClass=\"LMI_ProcessorCapabilities\")[0] print cpu.Name print cpu_cap.NumberOfProcessorCores print cpu_cap.NumberOfHardwareThreads",
"mem = ns.LMI_Memory.first_instance() for i in mem.associators(ResultClass=\"LMI_PhysicalMemory\"): print i.Name",
"chassis = ns.LMI_Chassis.first_instance() print chassis.Manufacturer print chassis.Model",
"for pci in ns.LMI_PCIDevice.instances(): print pci.Name",
"easy_install --user openlmi-scripts",
"easy_install --user package_name"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/system_administrators_guide/chap-OpenLMI |
Chapter 3. Creating applications | Chapter 3. Creating applications 3.1. Using templates The following sections provide an overview of templates, as well as how to use and create them. 3.1.1. Understanding templates A template describes a set of objects that can be parameterized and processed to produce a list of objects for creation by OpenShift Container Platform. A template can be processed to create anything you have permission to create within a project, for example services, build configurations, and deployment configurations. A template can also define a set of labels to apply to every object defined in the template. You can create a list of objects from a template using the CLI or, if a template has been uploaded to your project or the global template library, using the web console. 3.1.2. Uploading a template If you have a JSON or YAML file that defines a template, you can upload the template to projects using the CLI. This saves the template to the project for repeated use by any user with appropriate access to that project. Instructions about writing your own templates are provided later in this topic. Procedure Upload a template using one of the following methods: Upload a template to your current project's template library, pass the JSON or YAML file with the following command: USD oc create -f <filename> Upload a template to a different project using the -n option with the name of the project: USD oc create -f <filename> -n <project> The template is now available for selection using the web console or the CLI. 3.1.3. Creating an application by using the web console You can use the web console to create an application from a template. Procedure Select Developer from the context selector at the top of the web console navigation menu. While in the desired project, click +Add Click All services in the Developer Catalog tile. Click Builder Images under Type to see the available builder images. Note Only image stream tags that have the builder tag listed in their annotations appear in this list, as demonstrated here: kind: "ImageStream" apiVersion: "image.openshift.io/v1" metadata: name: "ruby" creationTimestamp: null spec: # ... tags: - name: "2.6" annotations: description: "Build and run Ruby 2.6 applications" iconClass: "icon-ruby" tags: "builder,ruby" 1 supports: "ruby:2.6,ruby" version: "2.6" # ... 1 Including builder here ensures this image stream tag appears in the web console as a builder. Modify the settings in the new application screen to configure the objects to support your application. 3.1.4. Creating objects from templates by using the CLI You can use the CLI to process templates and use the configuration that is generated to create objects. 3.1.4.1. Adding labels Labels are used to manage and organize generated objects, such as pods. The labels specified in the template are applied to every object that is generated from the template. Procedure Add labels in the template from the command line: USD oc process -f <filename> -l name=otherLabel 3.1.4.2. Listing parameters The list of parameters that you can override are listed in the parameters section of the template. 
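For reference, each entry in the parameters section of a template defines one overridable value. A minimal sketch of such a section, using an illustrative parameter, might look like this:
parameters:
- name: SOURCE_REPOSITORY_URL
  displayName: Source Repository URL
  description: The URL of the repository with your application source code
  value: https://github.com/sclorg/rails-ex.git
  required: true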
Procedure You can list parameters with the CLI by using the following command and specifying the file to be used: USD oc process --parameters -f <filename> Alternatively, if the template is already uploaded: USD oc process --parameters -n <project> <template_name> For example, the following shows the output when listing the parameters for one of the quick start templates in the default openshift project: USD oc process --parameters -n openshift rails-postgresql-example Example output NAME DESCRIPTION GENERATOR VALUE SOURCE_REPOSITORY_URL The URL of the repository with your application source code https://github.com/sclorg/rails-ex.git SOURCE_REPOSITORY_REF Set this to a branch name, tag or other ref of your repository if you are not using the default branch CONTEXT_DIR Set this to the relative path to your project if it is not in the root of your repository APPLICATION_DOMAIN The exposed hostname that will route to the Rails service rails-postgresql-example.openshiftapps.com GITHUB_WEBHOOK_SECRET A secret string used to configure the GitHub webhook expression [a-zA-Z0-9]{40} SECRET_KEY_BASE Your secret key for verifying the integrity of signed cookies expression [a-z0-9]{127} APPLICATION_USER The application user that is used within the sample application to authorize access on pages openshift APPLICATION_PASSWORD The application password that is used within the sample application to authorize access on pages secret DATABASE_SERVICE_NAME Database service name postgresql POSTGRESQL_USER database username expression user[A-Z0-9]{3} POSTGRESQL_PASSWORD database password expression [a-zA-Z0-9]{8} POSTGRESQL_DATABASE database name root POSTGRESQL_MAX_CONNECTIONS database max connections 10 POSTGRESQL_SHARED_BUFFERS database shared buffers 12MB The output identifies several parameters that are generated with a regular expression-like generator when the template is processed. 3.1.4.3. Generating a list of objects Using the CLI, you can process a file defining a template to return the list of objects to standard output. Procedure Process a file defining a template to return the list of objects to standard output: USD oc process -f <filename> Alternatively, if the template has already been uploaded to the current project: USD oc process <template_name> Create objects from a template by processing the template and piping the output to oc create : USD oc process -f <filename> | oc create -f - Alternatively, if the template has already been uploaded to the current project: USD oc process <template> | oc create -f - You can override any parameter values defined in the file by adding the -p option for each <name>=<value> pair you want to override. A parameter reference appears in any text field inside the template items. 
For example, in the following the POSTGRESQL_USER and POSTGRESQL_DATABASE parameters of a template are overridden to output a configuration with customized environment variables: Creating a List of objects from a template USD oc process -f my-rails-postgresql \ -p POSTGRESQL_USER=bob \ -p POSTGRESQL_DATABASE=mydatabase The JSON file can either be redirected to a file or applied directly without uploading the template by piping the processed output to the oc create command: USD oc process -f my-rails-postgresql \ -p POSTGRESQL_USER=bob \ -p POSTGRESQL_DATABASE=mydatabase \ | oc create -f - If you have large number of parameters, you can store them in a file and then pass this file to oc process : USD cat postgres.env POSTGRESQL_USER=bob POSTGRESQL_DATABASE=mydatabase USD oc process -f my-rails-postgresql --param-file=postgres.env You can also read the environment from standard input by using "-" as the argument to --param-file : USD sed s/bob/alice/ postgres.env | oc process -f my-rails-postgresql --param-file=- 3.1.5. Modifying uploaded templates You can edit a template that has already been uploaded to your project. Procedure Modify a template that has already been uploaded: USD oc edit template <template> 3.1.6. Using instant app and quick start templates OpenShift Container Platform provides a number of default instant app and quick start templates to make it easy to quickly get started creating a new application for different languages. Templates are provided for Rails (Ruby), Django (Python), Node.js, CakePHP (PHP), and Dancer (Perl). Your cluster administrator must create these templates in the default, global openshift project so you have access to them. By default, the templates build using a public source repository on GitHub that contains the necessary application code. Procedure You can list the available default instant app and quick start templates with: USD oc get templates -n openshift To modify the source and build your own version of the application: Fork the repository referenced by the template's default SOURCE_REPOSITORY_URL parameter. Override the value of the SOURCE_REPOSITORY_URL parameter when creating from the template, specifying your fork instead of the default value. By doing this, the build configuration created by the template now points to your fork of the application code, and you can modify the code and rebuild the application at will. Note Some of the instant app and quick start templates define a database deployment configuration. The configuration they define uses ephemeral storage for the database content. These templates should be used for demonstration purposes only as all database data is lost if the database pod restarts for any reason. 3.1.6.1. Quick start templates A quick start template is a basic example of an application running on OpenShift Container Platform. Quick starts come in a variety of languages and frameworks, and are defined in a template, which is constructed from a set of services, build configurations, and deployment configurations. This template references the necessary images and source repositories to build and deploy the application. To explore a quick start, create an application from a template. Your administrator must have already installed these templates in your OpenShift Container Platform cluster, in which case you can simply select it from the web console. Quick starts refer to a source repository that contains the application source code. 
To customize the quick start, fork the repository and, when creating an application from the template, substitute the default source repository name with your forked repository. This results in builds that are performed using your source code instead of the provided example source. You can then update the code in your source repository and launch a new build to see the changes reflected in the deployed application. 3.1.6.1.1. Web framework quick start templates These quick start templates provide a basic application of the indicated framework and language: CakePHP: a PHP web framework that includes a MySQL database Dancer: a Perl web framework that includes a MySQL database Django: a Python web framework that includes a PostgreSQL database NodeJS: a NodeJS web application that includes a MongoDB database Rails: a Ruby web framework that includes a PostgreSQL database 3.1.7. Writing templates You can define new templates to make it easy to recreate all the objects of your application. The template defines the objects it creates along with some metadata to guide the creation of those objects. The following is an example of a simple template object definition (YAML): apiVersion: template.openshift.io/v1 kind: Template metadata: name: redis-template annotations: description: "Description" iconClass: "icon-redis" tags: "database,nosql" objects: - apiVersion: v1 kind: Pod metadata: name: redis-master spec: containers: - env: - name: REDIS_PASSWORD value: USD{REDIS_PASSWORD} image: dockerfile/redis name: master ports: - containerPort: 6379 protocol: TCP parameters: - description: Password used for Redis authentication from: '[A-Z0-9]{8}' generate: expression name: REDIS_PASSWORD labels: redis: master 3.1.7.1. Writing the template description The template description informs you what the template does and helps you find it when searching in the web console. Additional metadata beyond the template name is optional, but useful to have. In addition to general descriptive information, the metadata also includes a set of tags. Useful tags include the name of the language the template is related to for example, Java, PHP, Ruby, and so on. The following is an example of template description metadata: kind: Template apiVersion: template.openshift.io/v1 metadata: name: cakephp-mysql-example 1 annotations: openshift.io/display-name: "CakePHP MySQL Example (Ephemeral)" 2 description: >- An example CakePHP application with a MySQL database. For more information about using this template, including OpenShift considerations, see https://github.com/sclorg/cakephp-ex/blob/master/README.md. WARNING: Any data stored will be lost upon pod destruction. Only use this template for testing." 3 openshift.io/long-description: >- This template defines resources needed to develop a CakePHP application, including a build configuration, application DeploymentConfig, and database DeploymentConfig. The database is stored in non-persistent storage, so this configuration should be used for experimental purposes only. 4 tags: "quickstart,php,cakephp" 5 iconClass: icon-php 6 openshift.io/provider-display-name: "Red Hat, Inc." 7 openshift.io/documentation-url: "https://github.com/sclorg/cakephp-ex" 8 openshift.io/support-url: "https://access.redhat.com" 9 message: "Your admin credentials are USD{ADMIN_USERNAME}:USD{ADMIN_PASSWORD}" 10 1 The unique name of the template. 2 A brief, user-friendly name, which can be employed by user interfaces. 3 A description of the template. 
Include enough detail that users understand what is being deployed and any caveats they must know before deploying. It should also provide links to additional information, such as a README file. Newlines can be included to create paragraphs. 4 Additional template description. This may be displayed by the service catalog, for example. 5 Tags to be associated with the template for searching and grouping. Add tags that include it into one of the provided catalog categories. Refer to the id and categoryAliases in CATALOG_CATEGORIES in the console constants file. The categories can also be customized for the whole cluster. 6 An icon to be displayed with your template in the web console. Example 3.1. Available icons icon-3scale icon-aerogear icon-amq icon-angularjs icon-ansible icon-apache icon-beaker icon-camel icon-capedwarf icon-cassandra icon-catalog-icon icon-clojure icon-codeigniter icon-cordova icon-datagrid icon-datavirt icon-debian icon-decisionserver icon-django icon-dotnet icon-drupal icon-eap icon-elastic icon-erlang icon-fedora icon-freebsd icon-git icon-github icon-gitlab icon-glassfish icon-go-gopher icon-golang icon-grails icon-hadoop icon-haproxy icon-helm icon-infinispan icon-jboss icon-jenkins icon-jetty icon-joomla icon-jruby icon-js icon-knative icon-kubevirt icon-laravel icon-load-balancer icon-mariadb icon-mediawiki icon-memcached icon-mongodb icon-mssql icon-mysql-database icon-nginx icon-nodejs icon-openjdk icon-openliberty icon-openshift icon-openstack icon-other-linux icon-other-unknown icon-perl icon-phalcon icon-php icon-play icon-postgresql icon-processserver icon-python icon-quarkus icon-rabbitmq icon-rails icon-redhat icon-redis icon-rh-integration icon-rh-spring-boot icon-rh-tomcat icon-ruby icon-scala icon-serverlessfx icon-shadowman icon-spring-boot icon-spring icon-sso icon-stackoverflow icon-suse icon-symfony icon-tomcat icon-ubuntu icon-vertx icon-wildfly icon-windows icon-wordpress icon-xamarin icon-zend 7 The name of the person or organization providing the template. 8 A URL referencing further documentation for the template. 9 A URL where support can be obtained for the template. 10 An instructional message that is displayed when this template is instantiated. This field should inform the user how to use the newly created resources. Parameter substitution is performed on the message before being displayed so that generated credentials and other parameters can be included in the output. Include links to any next-steps documentation that users should follow. 3.1.7.2. Writing template labels Templates can include a set of labels. These labels are added to each object created when the template is instantiated. Defining a label in this way makes it easy for users to find and manage all the objects created from a particular template. The following is an example of template object labels: kind: "Template" apiVersion: "v1" ... labels: template: "cakephp-mysql-example" 1 app: "USD{NAME}" 2 1 A label that is applied to all objects created from this template. 2 A parameterized label that is also applied to all objects created from this template. Parameter expansion is carried out on both label keys and values. 3.1.7.3. Writing template parameters Parameters allow a value to be supplied by you or generated when the template is instantiated. Then, that value is substituted wherever the parameter is referenced. References can be defined in any field in the objects list field.
This is useful for generating random passwords or allowing you to supply a hostname or other user-specific value that is required to customize the template. Parameters can be referenced in two ways: As a string value by placing values in the form USD{PARAMETER_NAME} in any string field in the template. As a JSON or YAML value by placing values in the form USD{{PARAMETER_NAME}} in place of any field in the template. When using the USD{PARAMETER_NAME} syntax, multiple parameter references can be combined in a single field and the reference can be embedded within fixed data, such as "http://USD{PARAMETER_1}USD{PARAMETER_2}" . Both parameter values are substituted and the resulting value is a quoted string. When using the USD{{PARAMETER_NAME}} syntax only a single parameter reference is allowed and leading and trailing characters are not permitted. The resulting value is unquoted unless, after substitution is performed, the result is not a valid JSON object. If the result is not a valid JSON value, the resulting value is quoted and treated as a standard string. A single parameter can be referenced multiple times within a template and it can be referenced using both substitution syntaxes within a single template. A default value can be provided, which is used if you do not supply a different value: The following is an example of setting an explicit value as the default value: parameters: - name: USERNAME description: "The user name for Joe" value: joe Parameter values can also be generated based on rules specified in the parameter definition, for example generating a parameter value: parameters: - name: PASSWORD description: "The random user password" generate: expression from: "[a-zA-Z0-9]{12}" In the example, processing generates a random password 12 characters long consisting of all upper and lowercase alphabet letters and numbers. The syntax available is not a full regular expression syntax. However, you can use \w , \d , \a , and \A modifiers: [\w]{10} produces 10 alphabet characters, numbers, and underscores. This follows the PCRE standard and is equal to [a-zA-Z0-9_]{10} . [\d]{10} produces 10 numbers. This is equal to [0-9]{10} . [\a]{10} produces 10 alphabetical characters. This is equal to [a-zA-Z]{10} . [\A]{10} produces 10 punctuation or symbol characters. This is equal to [~!@#USD%\^&*()\-_+={}\[\]\\|<,>.?/"';:`]{10} . Note Depending on if the template is written in YAML or JSON, and the type of string that the modifier is embedded within, you might need to escape the backslash with a second backslash. 
The following examples are equivalent: Example YAML template with a modifier parameters: - name: singlequoted_example generate: expression from: '[\A]{10}' - name: doublequoted_example generate: expression from: "[\\A]{10}" Example JSON template with a modifier { "parameters": [ { "name": "json_example", "generate": "expression", "from": "[\\A]{10}" } ] } Here is an example of a full template with parameter definitions and references: kind: Template apiVersion: template.openshift.io/v1 metadata: name: my-template objects: - kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: cakephp-mysql-example annotations: description: Defines how to build the application spec: source: type: Git git: uri: "USD{SOURCE_REPOSITORY_URL}" 1 ref: "USD{SOURCE_REPOSITORY_REF}" contextDir: "USD{CONTEXT_DIR}" - kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: frontend spec: replicas: "USD{{REPLICA_COUNT}}" 2 parameters: - name: SOURCE_REPOSITORY_URL 3 displayName: Source Repository URL 4 description: The URL of the repository with your application source code 5 value: https://github.com/sclorg/cakephp-ex.git 6 required: true 7 - name: GITHUB_WEBHOOK_SECRET description: A secret string used to configure the GitHub webhook generate: expression 8 from: "[a-zA-Z0-9]{40}" 9 - name: REPLICA_COUNT description: Number of replicas to run value: "2" required: true message: "... The GitHub webhook secret is USD{GITHUB_WEBHOOK_SECRET} ..." 10 1 This value is replaced with the value of the SOURCE_REPOSITORY_URL parameter when the template is instantiated. 2 This value is replaced with the unquoted value of the REPLICA_COUNT parameter when the template is instantiated. 3 The name of the parameter. This value is used to reference the parameter within the template. 4 The user-friendly name for the parameter. This is displayed to users. 5 A description of the parameter. Provide more detailed information for the purpose of the parameter, including any constraints on the expected value. Descriptions should use complete sentences to follow the console's text standards. Do not make this a duplicate of the display name. 6 A default value for the parameter which is used if you do not override the value when instantiating the template. Avoid using default values for things like passwords, instead use generated parameters in combination with secrets. 7 Indicates this parameter is required, meaning you cannot override it with an empty value. If the parameter does not provide a default or generated value, you must supply a value. 8 A parameter which has its value generated. 9 The input to the generator. In this case, the generator produces a 40 character alphanumeric value including upper and lowercase characters. 10 Parameters can be included in the template message. This informs you about generated values. 3.1.7.4. Writing the template object list The main portion of the template is the list of objects which is created when the template is instantiated. This can be any valid API object, such as a build configuration, deployment configuration, or service. The object is created exactly as defined here, with any parameter values substituted in prior to creation. The definition of these objects can reference parameters defined earlier. 
The following is an example of an object list: kind: "Template" apiVersion: "v1" metadata: name: my-template objects: - kind: "Service" 1 apiVersion: "v1" metadata: name: "cakephp-mysql-example" annotations: description: "Exposes and load balances the application pods" spec: ports: - name: "web" port: 8080 targetPort: 8080 selector: name: "cakephp-mysql-example" 1 The definition of a service, which is created by this template. Note If an object definition metadata includes a fixed namespace field value, the field is stripped out of the definition during template instantiation. If the namespace field contains a parameter reference, normal parameter substitution is performed and the object is created in whatever namespace the parameter substitution resolved the value to, assuming the user has permission to create objects in that namespace. 3.1.7.5. Marking a template as bindable The Template Service Broker advertises one service in its catalog for each template object of which it is aware. By default, each of these services is advertised as being bindable, meaning an end user is permitted to bind against the provisioned service. Procedure Template authors can prevent end users from binding against services provisioned from a given template. Prevent end user from binding against services provisioned from a given template by adding the annotation template.openshift.io/bindable: "false" to the template. 3.1.7.6. Exposing template object fields Template authors can indicate that fields of particular objects in a template should be exposed. The Template Service Broker recognizes exposed fields on ConfigMap , Secret , Service , and Route objects, and returns the values of the exposed fields when a user binds a service backed by the broker. To expose one or more fields of an object, add annotations prefixed by template.openshift.io/expose- or template.openshift.io/base64-expose- to the object in the template. Each annotation key, with its prefix removed, is passed through to become a key in a bind response. Each annotation value is a Kubernetes JSONPath expression, which is resolved at bind time to indicate the object field whose value should be returned in the bind response. Note Bind response key-value pairs can be used in other parts of the system as environment variables. Therefore, it is recommended that every annotation key with its prefix removed should be a valid environment variable name - beginning with a character A-Z , a-z , or _ , and being followed by zero or more characters A-Z , a-z , 0-9 , or _ . Note Unless escaped with a backslash, Kubernetes' JSONPath implementation interprets characters such as . , @ , and others as metacharacters, regardless of their position in the expression. Therefore, for example, to refer to a ConfigMap datum named my.key , the required JSONPath expression would be {.data['my\.key']} . Depending on how the JSONPath expression is then written in YAML, an additional backslash might be required, for example "{.data['my\\.key']}" . 
The following is an example of different objects' fields being exposed: kind: Template apiVersion: template.openshift.io/v1 metadata: name: my-template objects: - kind: ConfigMap apiVersion: v1 metadata: name: my-template-config annotations: template.openshift.io/expose-username: "{.data['my\\.username']}" data: my.username: foo - kind: Secret apiVersion: v1 metadata: name: my-template-config-secret annotations: template.openshift.io/base64-expose-password: "{.data['password']}" stringData: password: <password> - kind: Service apiVersion: v1 metadata: name: my-template-service annotations: template.openshift.io/expose-service_ip_port: "{.spec.clusterIP}:{.spec.ports[?(.name==\"web\")].port}" spec: ports: - name: "web" port: 8080 - kind: Route apiVersion: route.openshift.io/v1 metadata: name: my-template-route annotations: template.openshift.io/expose-uri: "http://{.spec.host}{.spec.path}" spec: path: mypath An example response to a bind operation given the above partial template follows: { "credentials": { "username": "foo", "password": "YmFy", "service_ip_port": "172.30.12.34:8080", "uri": "http://route-test.router.default.svc.cluster.local/mypath" } } Procedure Use the template.openshift.io/expose- annotation to return the field value as a string. This is convenient, although it does not handle arbitrary binary data. If you want to return binary data, use the template.openshift.io/base64-expose- annotation instead to base64 encode the data before it is returned. 3.1.7.7. Waiting for template readiness Template authors can indicate that certain objects within a template should be waited for before a template instantiation by the service catalog, Template Service Broker, or TemplateInstance API is considered complete. To use this feature, mark one or more objects of kind Build , BuildConfig , Deployment , DeploymentConfig , Job , or StatefulSet in a template with the following annotation: "template.alpha.openshift.io/wait-for-ready": "true" Template instantiation is not complete until all objects marked with the annotation report ready. Similarly, if any of the annotated objects report failed, or if the template fails to become ready within a fixed timeout of one hour, the template instantiation fails. For the purposes of instantiation, readiness and failure of each object kind are defined as follows: Kind Readiness Failure Build Object reports phase complete. Object reports phase canceled, error, or failed. BuildConfig Latest associated build object reports phase complete. Latest associated build object reports phase canceled, error, or failed. Deployment Object reports new replica set and deployment available. This honors readiness probes defined on the object. Object reports progressing condition as false. DeploymentConfig Object reports new replication controller and deployment available. This honors readiness probes defined on the object. Object reports progressing condition as false. Job Object reports completion. Object reports that one or more failures have occurred. StatefulSet Object reports all replicas ready. This honors readiness probes defined on the object. Not applicable. The following is an example template extract, which uses the wait-for-ready annotation. Further examples can be found in the OpenShift Container Platform quick start templates. kind: Template apiVersion: template.openshift.io/v1 metadata: name: my-template objects: - kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: ... 
annotations: # wait-for-ready used on BuildConfig ensures that template instantiation # will fail immediately if build fails template.alpha.openshift.io/wait-for-ready: "true" spec: ... - kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: ... annotations: template.alpha.openshift.io/wait-for-ready: "true" spec: ... - kind: Service apiVersion: v1 metadata: name: ... spec: ... Additional recommendations Set memory, CPU, and storage default sizes to make sure your application is given enough resources to run smoothly. Avoid referencing the latest tag from images if that tag is used across major versions. This can cause running applications to break when new images are pushed to that tag. A good template builds and deploys cleanly without requiring modifications after the template is deployed. 3.1.7.8. Creating a template from existing objects Rather than writing an entire template from scratch, you can export existing objects from your project in YAML form, and then modify the YAML from there by adding parameters and other customizations as template form. Procedure Export objects in a project in YAML form: USD oc get -o yaml all > <yaml_filename> You can also substitute a particular resource type or multiple resources instead of all . Run oc get -h for more examples. The object types included in oc get -o yaml all are: BuildConfig Build DeploymentConfig ImageStream Pod ReplicationController Route Service Note Using the all alias is not recommended because the contents might vary across different clusters and versions. Instead, specify all required resources. 3.2. Creating applications by using the Developer perspective The Developer perspective in the web console provides you the following options from the +Add view to create applications and associated services and deploy them on OpenShift Container Platform: Getting started resources : Use these resources to help you get started with Developer Console. You can choose to hide the header using the Options menu . Creating applications using samples : Use existing code samples to get started with creating applications on the OpenShift Container Platform. Build with guided documentation : Follow the guided documentation to build applications and familiarize yourself with key concepts and terminologies. Explore new developer features : Explore the new features and resources within the Developer perspective. Developer catalog : Explore the Developer Catalog to select the required applications, services, or source to image builders, and then add it to your project. All Services : Browse the catalog to discover services across OpenShift Container Platform. Database : Select the required database service and add it to your application. Operator Backed : Select and deploy the required Operator-managed service. Helm chart : Select the required Helm chart to simplify deployment of applications and services. Devfile : Select a devfile from the Devfile registry to declaratively define a development environment. Event Source : Select an event source to register interest in a class of events from a particular system. Note The Managed services option is also available if the RHOAS Operator is installed. Git repository : Import an existing codebase, Devfile, or Dockerfile from your Git repository using the From Git , From Devfile , or From Dockerfile options respectively, to build and deploy an application on OpenShift Container Platform. 
Container images : Use existing images from an image stream or registry to deploy it on to the OpenShift Container Platform. Pipelines : Use Tekton pipeline to create CI/CD pipelines for your software delivery process on the OpenShift Container Platform. Serverless : Explore the Serverless options to create, build, and deploy stateless and serverless applications on the OpenShift Container Platform. Channel : Create a Knative channel to create an event forwarding and persistence layer with in-memory and reliable implementations. Samples : Explore the available sample applications to create, build, and deploy an application quickly. Quick Starts : Explore the quick start options to create, import, and run applications with step-by-step instructions and tasks. From Local Machine : Explore the From Local Machine tile to import or upload files on your local machine for building and deploying applications easily. Import YAML : Upload a YAML file to create and define resources for building and deploying applications. Upload JAR file : Upload a JAR file to build and deploy Java applications. Share my Project : Use this option to add or remove users to a project and provide accessibility options to them. Helm Chart repositories : Use this option to add Helm Chart repositories in a namespace. Re-ordering of resources : Use these resources to re-order pinned resources added to your navigation pane. The drag-and-drop icon is displayed on the left side of the pinned resource when you hover over it in the navigation pane. The dragged resource can be dropped only in the section where it resides. Note that certain options, such as Pipelines , Event Source , and Import Virtual Machines , are displayed only when the OpenShift Pipelines Operator , OpenShift Serverless Operator , and OpenShift Virtualization Operator are installed, respectively. 3.2.1. Prerequisites To create applications using the Developer perspective ensure that: You have logged in to the web console . You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. To create serverless applications, in addition to the preceding prerequisites, ensure that: You have installed the OpenShift Serverless Operator . You have created a KnativeServing resource in the knative-serving namespace . 3.2.2. Creating sample applications You can use the sample applications in the +Add flow of the Developer perspective to create, build, and deploy applications quickly. Prerequisites You have logged in to the OpenShift Container Platform web console and are in the Developer perspective. Procedure In the +Add view, click the Samples tile to see the Samples page. On the Samples page, select one of the available sample applications to see the Create Sample Application form. In the Create Sample Application Form : In the Name field, the deployment name is displayed by default. You can modify this name as required. In the Builder Image Version , a builder image is selected by default. You can modify this image version by using the Builder Image Version drop-down list. A sample Git repository URL is added by default. Click Create to create the sample application. The build status of the sample application is displayed on the Topology view. After the sample application is created, you can see the deployment added to the application. 3.2.3. 
Creating applications by using Quick Starts The Quick Starts page shows you how to create, import, and run applications on OpenShift Container Platform, with step-by-step instructions and tasks. Prerequisites You have logged in to the OpenShift Container Platform web console and are in the Developer perspective. Procedure In the +Add view, click the Getting Started resources Build with guided documentation View all quick starts link to view the Quick Starts page. In the Quick Starts page, click the tile for the quick start that you want to use. Click Start to begin the quick start. Perform the steps that are displayed. 3.2.4. Importing a codebase from Git to create an application You can use the Developer perspective to create, build, and deploy an application on OpenShift Container Platform using an existing codebase in GitHub. The following procedure walks you through the From Git option in the Developer perspective to create an application. Procedure In the +Add view, click From Git in the Git Repository tile to see the Import from git form. In the Git section, enter the Git repository URL for the codebase you want to use to create an application. For example, enter the URL of this sample Node.js application https://github.com/sclorg/nodejs-ex . The URL is then validated. Optional: You can click Show Advanced Git Options to add details such as: Git Reference to point to code in a specific branch, tag, or commit to be used to build the application. Context Dir to specify the subdirectory for the application source code you want to use to build the application. Source Secret to create a Secret Name with credentials for pulling your source code from a private repository. Optional: You can import a Devfile , a Dockerfile , Builder Image , or a Serverless Function through your Git repository to further customize your deployment. If your Git repository contains a Devfile , a Dockerfile , a Builder Image , or a func.yaml , it is automatically detected and populated on the respective path fields. If a Devfile , a Dockerfile , or a Builder Image are detected in the same repository, the Devfile is selected by default. If func.yaml is detected in the Git repository, the Import Strategy changes to Serverless Function . Alternatively, you can create a serverless function by clicking Create Serverless function in the +Add view using the Git repository URL. To edit the file import type and select a different strategy, click Edit import strategy option. If multiple Devfiles , a Dockerfiles , or a Builder Images are detected, to import a specific instance, specify the respective paths relative to the context directory. After the Git URL is validated, the recommended builder image is selected and marked with a star. If the builder image is not auto-detected, select a builder image. For the https://github.com/sclorg/nodejs-ex Git URL, by default the Node.js builder image is selected. Optional: Use the Builder Image Version drop-down to specify a version. Optional: Use the Edit import strategy to select a different strategy. Optional: For the Node.js builder image, use the Run command field to override the command to run the application. In the General section: In the Application field, enter a unique name for the application grouping, for example, myapp . Ensure that the application name is unique in a namespace. The Name field to identify the resources created for this application is automatically populated based on the Git repository URL if there are no existing applications. 
If there are existing applications, you can choose to deploy the component within an existing application, create a new application, or keep the component unassigned. Note The resource name must be unique in a namespace. Modify the resource name if you get an error. In the Resources section, select: Deployment , to create an application in plain Kubernetes style. Deployment Config , to create an OpenShift Container Platform style application. Serverless Deployment , to create a Knative service. Note To set the default resource preference for importing an application, go to User Preferences Applications Resource type field. The Serverless Deployment option is displayed in the Import from Git form only if the OpenShift Serverless Operator is installed in your cluster. The Resources section is not available while creating a serverless function. For further details, refer to the OpenShift Serverless documentation. In the Pipelines section, select Add Pipeline , and then click Show Pipeline Visualization to see the pipeline for the application. A default pipeline is selected, but you can choose the pipeline you want from the list of available pipelines for the application. Note The Add pipeline checkbox is checked and Configure PAC is selected by default if the following criteria are fulfilled: Pipeline operator is installed pipelines-as-code is enabled .tekton directory is detected in the Git repository Add a webhook to your repository. If Configure PAC is checked and the GitHub App is set up, you can see the Use GitHub App and Setup a webhook options. If GitHub App is not set up, you can only see the Setup a webhook option: Go to Settings Webhooks and click Add webhook . Set the Payload URL to the Pipelines as Code controller public URL. Select the content type as application/json . Add a webhook secret and note it in an alternate location. With openssl installed on your local machine, generate a random secret. Click Let me select individual events and select these events: Commit comments , Issue comments , Pull request , and Pushes . Click Add webhook . Optional: In the Advanced Options section, the Target port and the Create a route to the application options are selected by default so that you can access your application using a publicly available URL. If your application does not expose its data on the default public port, 80, clear the check box, and set the target port number you want to expose. Optional: You can use the following advanced options to further customize your application: Routing By clicking the Routing link, you can perform the following actions: Customize the hostname for the route. Specify the path the router watches. Select the target port for the traffic from the drop-down list. Secure your route by selecting the Secure Route check box. Select the required TLS termination type and set a policy for insecure traffic from the respective drop-down lists. Note For serverless applications, the Knative service manages all the routing options above. However, you can customize the target port for traffic, if required. If the target port is not specified, the default port of 8080 is used. Domain mapping If you are creating a Serverless Deployment , you can add a custom domain mapping to the Knative service during creation. In the Advanced options section, click Show advanced Routing options . If the domain mapping CR that you want to map to the service already exists, you can select it from the Domain mapping drop-down menu.
If you want to create a new domain mapping CR, type the domain name into the box, and select the Create option. For example, if you type in example.com , the Create option is Create "example.com" . Health Checks Click the Health Checks link to add Readiness, Liveness, and Startup probes to your application. All the probes have prepopulated default data; you can add the probes with the default data or customize it as required. To customize the health probes: Click Add Readiness Probe , if required, modify the parameters to check if the container is ready to handle requests, and select the check mark to add the probe. Click Add Liveness Probe , if required, modify the parameters to check if a container is still running, and select the check mark to add the probe. Click Add Startup Probe , if required, modify the parameters to check if the application within the container has started, and select the check mark to add the probe. For each of the probes, you can specify the request type - HTTP GET , Container Command , or TCP Socket , from the drop-down list. The form changes as per the selected request type. You can then modify the default values for the other parameters, such as the success and failure thresholds for the probe, number of seconds before performing the first probe after the container starts, frequency of the probe, and the timeout value. Build Configuration and Deployment Click the Build Configuration and Deployment links to see the respective configuration options. Some options are selected by default; you can customize them further by adding the necessary triggers and environment variables. For serverless applications, the Deployment option is not displayed as the Knative configuration resource maintains the desired state for your deployment instead of a DeploymentConfig resource. Scaling Click the Scaling link to define the number of pods or instances of the application you want to deploy initially. If you are creating a serverless deployment, you can also configure the following settings: Min Pods determines the lower limit for the number of pods that must be running at any given time for a Knative service. This is also known as the minScale setting. Max Pods determines the upper limit for the number of pods that can be running at any given time for a Knative service. This is also known as the maxScale setting. Concurrency target determines the number of concurrent requests desired for each instance of the application at a given time. Concurrency limit determines the limit for the number of concurrent requests allowed for each instance of the application at a given time. Concurrency utilization determines the percentage of the concurrent requests limit that must be met before Knative scales up additional pods to handle additional traffic. Autoscale window defines the time window over which metrics are averaged to provide input for scaling decisions when the autoscaler is not in panic mode. A service is scaled-to-zero if no requests are received during this window. The default duration for the autoscale window is 60s . This is also known as the stable window. Resource Limit Click the Resource Limit link to set the amount of CPU and Memory resources a container is guaranteed or allowed to use when running. Labels Click the Labels link to add custom labels to your application. Click Create to create the application and a success notification is displayed. You can see the build status of the application in the Topology view. 3.2.5. 
Creating applications by deploying container image You can use an external image registry or an image stream tag from an internal registry to deploy an application on your cluster. Prerequisites You have logged in to the OpenShift Container Platform web console and are in the Developer perspective. Procedure In the +Add view, click Container images to view the Deploy Images page. In the Image section: Select Image name from external registry to deploy an image from a public or a private registry, or select Image stream tag from internal registry to deploy an image from an internal registry. Select an icon for your image in the Runtime icon tab. In the General section: In the Application name field, enter a unique name for the application grouping. In the Name field, enter a unique name to identify the resources created for this component. In the Resource type section, select the resource type to generate: Select Deployment to enable declarative updates for Pod and ReplicaSet objects. Select DeploymentConfig to define the template for a Pod object, and manage deploying new images and configuration sources. Select Serverless Deployment to enable scaling to zero when idle. Click Create . You can view the build status of the application in the Topology view. 3.2.6. Deploying a Java application by uploading a JAR file You can use the web console Developer perspective to upload a JAR file by using the following options: Navigate to the +Add view of the Developer perspective, and click Upload JAR file in the From Local Machine tile. Browse and select your JAR file, or drag a JAR file to deploy your application. Navigate to the Topology view and use the Upload JAR file option, or drag a JAR file to deploy your application. Use the in-context menu in the Topology view, and then use the Upload JAR file option to upload your JAR file to deploy your application. Prerequisites The Cluster Samples Operator must be installed by a cluster administrator. You have access to the OpenShift Container Platform web console and are in the Developer perspective. Procedure In the Topology view, right-click anywhere to view the Add to Project menu. Hover over the Add to Project menu to see the menu options, and then select the Upload JAR file option to see the Upload JAR file form. Alternatively, you can drag the JAR file into the Topology view. In the JAR file field, browse for the required JAR file on your local machine and upload it. Alternatively, you can drag the JAR file on to the field. A toast alert is displayed at the top right if an incompatible file type is dragged into the Topology view. A field error is displayed if an incompatible file type is dropped on the field in the upload form. The runtime icon and builder image are selected by default. If a builder image is not auto-detected, select a builder image. If required, you can change the version using the Builder Image Version drop-down list. Optional: In the Application Name field, enter a unique name for your application to use for resource labelling. In the Name field, enter a unique component name for the associated resources. Optional: Use the Resource type drop-down list to change the resource type. In the Advanced options menu, click Create a Route to the Application to configure a public URL for your deployed application. Click Create to deploy the application. A toast notification is shown to notify you that the JAR file is being uploaded. The toast notification also includes a link to view the build logs. 
Note If you attempt to close the browser tab while the build is running, a web alert is displayed. After the JAR file is uploaded and the application is deployed, you can view the application in the Topology view. 3.2.7. Using the Devfile registry to access devfiles You can use the devfiles in the +Add flow of the Developer perspective to create an application. The +Add flow provides a complete integration with the devfile community registry . A devfile is a portable YAML file that describes your development environment without needing to configure it from scratch. Using the Devfile registry , you can use a preconfigured devfile to create an application. Procedure Navigate to Developer Perspective +Add Developer Catalog All Services . A list of all the available services in the Developer Catalog is displayed. Under Type , click Devfiles to browse for devfiles that support a particular language or framework. Alternatively, you can use the keyword filter to search for a particular devfile using their name, tag, or description. Click the devfile you want to use to create an application. The devfile tile displays the details of the devfile, including the name, description, provider, and the documentation of the devfile. Click Create to create an application and view the application in the Topology view. 3.2.8. Using the Developer Catalog to add services or components to your application You use the Developer Catalog to deploy applications and services based on Operator backed services such as Databases, Builder Images, and Helm Charts. The Developer Catalog contains a collection of application components, services, event sources, or source-to-image builders that you can add to your project. Cluster administrators can customize the content made available in the catalog. Procedure In the Developer perspective, navigate to the +Add view and from the Developer Catalog tile, click All Services to view all the available services in the Developer Catalog . Under All Services , select the kind of service or the component you need to add to your project. For this example, select Databases to list all the database services and then click MariaDB to see the details for the service. Click Instantiate Template to see an automatically populated template with details for the MariaDB service, and then click Create to create and view the MariaDB service in the Topology view. Figure 3.1. MariaDB in Topology 3.2.9. Additional resources For more information about Knative routing settings for OpenShift Serverless, see Routing . For more information about domain mapping settings for OpenShift Serverless, see Configuring a custom domain for a Knative service . For more information about Knative autoscaling settings for OpenShift Serverless, see Autoscaling . For more information about adding a new user to a project, see Working with projects . For more information about creating a Helm Chart repository, see Creating Helm Chart repositories . 3.3. Creating applications from installed Operators Operators are a method of packaging, deploying, and managing a Kubernetes application. You can create applications on OpenShift Container Platform using Operators that have been installed by a cluster administrator. This guide walks developers through an example of creating applications from an installed Operator using the OpenShift Container Platform web console. Additional resources See the Operators guide for more on how Operators work and how the Operator Lifecycle Manager is integrated in OpenShift Container Platform. 
3.3.1. Creating an etcd cluster using an Operator This procedure walks through creating a new etcd cluster using the etcd Operator, managed by Operator Lifecycle Manager (OLM). Prerequisites Access to an OpenShift Container Platform 4.17 cluster. The etcd Operator already installed cluster-wide by an administrator. Procedure Create a new project in the OpenShift Container Platform web console for this procedure. This example uses a project called my-etcd . Navigate to the Operators Installed Operators page. The Operators that have been installed to the cluster by the cluster administrator and are available for use are shown here as a list of cluster service versions (CSVs). CSVs are used to launch and manage the software provided by the Operator. Tip You can get this list from the CLI using: USD oc get csv On the Installed Operators page, click the etcd Operator to view more details and available actions. As shown under Provided APIs , this Operator makes available three new resource types, including one for an etcd Cluster (the EtcdCluster resource). These objects work similar to the built-in native Kubernetes ones, such as Deployment or ReplicaSet , but contain logic specific to managing etcd. Create a new etcd cluster: In the etcd Cluster API box, click Create instance . The page allows you to make any modifications to the minimal starting template of an EtcdCluster object, such as the size of the cluster. For now, click Create to finalize. This triggers the Operator to start up the pods, services, and other components of the new etcd cluster. Click the example etcd cluster, then click the Resources tab to see that your project now contains a number of resources created and configured automatically by the Operator. Verify that a Kubernetes service has been created that allows you to access the database from other pods in your project. All users with the edit role in a given project can create, manage, and delete application instances (an etcd cluster, in this example) managed by Operators that have already been created in the project, in a self-service manner, just like a cloud service. If you want to enable additional users with this ability, project administrators can add the role using the following command: USD oc policy add-role-to-user edit <user> -n <target_project> You now have an etcd cluster that will react to failures and rebalance data as pods become unhealthy or are migrated between nodes in the cluster. Most importantly, cluster administrators or developers with proper access can now easily use the database with their applications. 3.4. Creating applications by using the CLI You can create an OpenShift Container Platform application from components that include source or binary code, images, and templates by using the OpenShift Container Platform CLI. The set of objects created by new-app depends on the artifacts passed as input: source repositories, images, or templates. 3.4.1. Creating an application from source code With the new-app command you can create applications from source code in a local or remote Git repository. The new-app command creates a build configuration, which itself creates a new application image from your source code. The new-app command typically also creates a Deployment object to deploy the new image, and a service to provide load-balanced access to the deployment running your image. 
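As an illustration only, the following sequence creates an application from a sample repository and then inspects what was generated. The repository URL and the myapp name are example values, and the exact set of objects can vary with your cluster version (for example, the workload might be a Deployment or a DeploymentConfig):
USD oc new-app https://github.com/sclorg/cakephp-ex --name=myapp
USD oc status
USD oc get bc,is,svc,deploy -l app=myapp
The oc status command summarizes what was created, and the oc get command lists the generated build configuration, image streams, service, and workload, assuming the default app=<name> label that new-app typically applies to the objects it creates.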
OpenShift Container Platform automatically detects whether the pipeline, source, or docker build strategy should be used, and in the case of source build, detects an appropriate language builder image. 3.4.1.1. Local To create an application from a Git repository in a local directory: USD oc new-app /<path to source code> Note If you use a local Git repository, the repository must have a remote named origin that points to a URL that is accessible by the OpenShift Container Platform cluster. If there is no recognized remote, running the new-app command will create a binary build. 3.4.1.2. Remote To create an application from a remote Git repository: USD oc new-app https://github.com/sclorg/cakephp-ex To create an application from a private remote Git repository: USD oc new-app https://github.com/youruser/yourprivaterepo --source-secret=yoursecret Note If you use a private remote Git repository, you can use the --source-secret flag to specify an existing source clone secret that will get injected into your build config to access the repository. You can use a subdirectory of your source code repository by specifying a --context-dir flag. To create an application from a remote Git repository and a context subdirectory: USD oc new-app https://github.com/sclorg/s2i-ruby-container.git \ --context-dir=2.0/test/puma-test-app Also, when specifying a remote URL, you can specify a Git branch to use by appending #<branch_name> to the end of the URL: USD oc new-app https://github.com/openshift/ruby-hello-world.git#beta4 3.4.1.3. Build strategy detection OpenShift Container Platform automatically determines which build strategy to use by detecting certain files: If a Jenkins file exists in the root or specified context directory of the source repository when creating a new application, OpenShift Container Platform generates a pipeline build strategy. Note The pipeline build strategy is deprecated; consider using Red Hat OpenShift Pipelines instead. If a Dockerfile exists in the root or specified context directory of the source repository when creating a new application, OpenShift Container Platform generates a docker build strategy. If neither a Jenkins file nor a Dockerfile is detected, OpenShift Container Platform generates a source build strategy. Override the automatically detected build strategy by setting the --strategy flag to docker , pipeline , or source . USD oc new-app /home/user/code/myapp --strategy=docker Note The oc command requires that files containing build sources are available in a remote Git repository. For all source builds, you must use git remote -v . 3.4.1.4. Language detection If you use the source build strategy, new-app attempts to determine the language builder to use by the presence of certain files in the root or specified context directory of the repository: Table 3.1. Languages detected by new-app Language Files dotnet project.json , *.csproj jee pom.xml nodejs app.json , package.json perl cpanfile , index.pl php composer.json , index.php python requirements.txt , setup.py ruby Gemfile , Rakefile , config.ru scala build.sbt golang Godeps , main.go After a language is detected, new-app searches the OpenShift Container Platform server for image stream tags that have a supports annotation matching the detected language, or an image stream that matches the name of the detected language. If a match is not found, new-app searches the Docker Hub registry for an image that matches the detected language based on name. 
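If you want to see which builder images are available for this matching before you run new-app , you can inspect the image streams in the openshift namespace, where the default builder images are usually installed. This is an optional, illustrative check, and the ruby image stream is used here only as an example:
USD oc get imagestreams -n openshift
USD oc get imagestream ruby -n openshift -o yaml
The tag annotations in the YAML output include the supports values that new-app matches against the detected language.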
You can override the image the builder uses for a particular source repository by specifying the image, either an image stream or container specification, and the repository with a ~ as a separator. Note that if this is done, build strategy detection and language detection are not carried out. For example, to use the myproject/my-ruby imagestream with the source in a remote repository: USD oc new-app myproject/my-ruby~https://github.com/openshift/ruby-hello-world.git To use the openshift/ruby-20-centos7:latest container image stream with the source in a local repository: USD oc new-app openshift/ruby-20-centos7:latest~/home/user/code/my-ruby-app Note Language detection requires the Git client to be locally installed so that your repository can be cloned and inspected. If Git is not available, you can avoid the language detection step by specifying the builder image to use with your repository with the <image>~<repository> syntax. The -i <image> <repository> invocation requires that new-app attempt to clone repository to determine what type of artifact it is, so this will fail if Git is not available. The -i <image> --code <repository> invocation requires new-app clone repository to determine whether image should be used as a builder for the source code, or deployed separately, as in the case of a database image. 3.4.2. Creating an application from an image You can deploy an application from an existing image. Images can come from image streams in the OpenShift Container Platform server, images in a specific registry, or images in the local Docker server. The new-app command attempts to determine the type of image specified in the arguments passed to it. However, you can explicitly tell new-app whether the image is a container image using the --docker-image argument or an image stream using the -i|--image-stream argument. Note If you specify an image from your local Docker repository, you must ensure that the same image is available to the OpenShift Container Platform cluster nodes. 3.4.2.1. Docker Hub MySQL image Create an application from the Docker Hub MySQL image, for example: USD oc new-app mysql 3.4.2.2. Image in a private registry Create an application using an image in a private registry, specify the full container image specification: USD oc new-app myregistry:5000/example/myimage 3.4.2.3. Existing image stream and optional image stream tag Create an application from an existing image stream and optional image stream tag: USD oc new-app my-stream:v1 3.4.3. Creating an application from a template You can create an application from a previously stored template or from a template file, by specifying the name of the template as an argument. For example, you can store a sample application template and use it to create an application. Upload an application template to your current project's template library. The following example uploads an application template from a file called examples/sample-app/application-template-stibuild.json : USD oc create -f examples/sample-app/application-template-stibuild.json Then create a new application by referencing the application template. In this example, the template name is ruby-helloworld-sample : USD oc new-app ruby-helloworld-sample To create a new application by referencing a template file in your local file system, without first storing it in OpenShift Container Platform, use the -f|--file argument. For example: USD oc new-app -f examples/sample-app/application-template-stibuild.json 3.4.3.1. 
Template parameters When creating an application based on a template, use the -p|--param argument to set parameter values that are defined by the template: USD oc new-app ruby-helloworld-sample \ -p ADMIN_USERNAME=admin -p ADMIN_PASSWORD=mypassword You can store your parameters in a file, then use that file with --param-file when instantiating a template. If you want to read the parameters from standard input, use --param-file=- . The following is an example file called helloworld.params : ADMIN_USERNAME=admin ADMIN_PASSWORD=mypassword Reference the parameters in the file when instantiating a template: USD oc new-app ruby-helloworld-sample --param-file=helloworld.params 3.4.4. Modifying application creation The new-app command generates OpenShift Container Platform objects that build, deploy, and run the application that is created. Normally, these objects are created in the current project and assigned names that are derived from the input source repositories or the input images. However, with new-app you can modify this behavior. Table 3.2. new-app output objects Object Description BuildConfig A BuildConfig object is created for each source repository that is specified in the command line. The BuildConfig object specifies the strategy to use, the source location, and the build output location. ImageStreams For the BuildConfig object, two image streams are usually created. One represents the input image. With source builds, this is the builder image. With Docker builds, this is the FROM image. The second one represents the output image. If a container image was specified as input to new-app , then an image stream is created for that image as well. DeploymentConfig A DeploymentConfig object is created either to deploy the output of a build, or a specified image. The new-app command creates emptyDir volumes for all Docker volumes that are specified in containers included in the resulting DeploymentConfig object . Service The new-app command attempts to detect exposed ports in input images. It uses the lowest numeric exposed port to generate a service that exposes that port. To expose a different port, after new-app has completed, simply use the oc expose command to generate additional services. Other Other objects can be generated when instantiating templates, according to the template. 3.4.4.1. Specifying environment variables When generating applications from a template, source, or an image, you can use the -e|--env argument to pass environment variables to the application container at run time: USD oc new-app openshift/postgresql-92-centos7 \ -e POSTGRESQL_USER=user \ -e POSTGRESQL_DATABASE=db \ -e POSTGRESQL_PASSWORD=password The variables can also be read from file using the --env-file argument. The following is an example file called postgresql.env : POSTGRESQL_USER=user POSTGRESQL_DATABASE=db POSTGRESQL_PASSWORD=password Read the variables from the file: USD oc new-app openshift/postgresql-92-centos7 --env-file=postgresql.env Additionally, environment variables can be given on standard input by using --env-file=- : USD cat postgresql.env | oc new-app openshift/postgresql-92-centos7 --env-file=- Note Any BuildConfig objects created as part of new-app processing are not updated with environment variables passed with the -e|--env or --env-file argument. 3.4.4.2. 
Specifying build environment variables When generating applications from a template, source, or an image, you can use the --build-env argument to pass environment variables to the build container at run time: USD oc new-app openshift/ruby-23-centos7 \ --build-env HTTP_PROXY=http://myproxy.net:1337/ \ --build-env GEM_HOME=~/.gem The variables can also be read from a file using the --build-env-file argument. The following is an example file called ruby.env : HTTP_PROXY=http://myproxy.net:1337/ GEM_HOME=~/.gem Read the variables from the file: USD oc new-app openshift/ruby-23-centos7 --build-env-file=ruby.env Additionally, environment variables can be given on standard input by using --build-env-file=- : USD cat ruby.env | oc new-app openshift/ruby-23-centos7 --build-env-file=- 3.4.4.3. Specifying labels When generating applications from source, images, or templates, you can use the -l|--label argument to add labels to the created objects. Labels make it easy to collectively select, configure, and delete objects associated with the application. USD oc new-app https://github.com/openshift/ruby-hello-world -l name=hello-world 3.4.4.4. Viewing the output without creation To see a dry-run of running the new-app command, you can use the -o|--output argument with a yaml or json value. You can then use the output to preview the objects that are created or redirect it to a file that you can edit. After you are satisfied, you can use oc create to create the OpenShift Container Platform objects. To output new-app artifacts to a file, run the following: USD oc new-app https://github.com/openshift/ruby-hello-world \ -o yaml > myapp.yaml Edit the file: USD vi myapp.yaml Create a new application by referencing the file: USD oc create -f myapp.yaml 3.4.4.5. Creating objects with different names Objects created by new-app are normally named after the source repository, or the image used to generate them. You can set the name of the objects produced by adding a --name flag to the command: USD oc new-app https://github.com/openshift/ruby-hello-world --name=myapp 3.4.4.6. Creating objects in a different project Normally, new-app creates objects in the current project. However, you can create objects in a different project by using the -n|--namespace argument: USD oc new-app https://github.com/openshift/ruby-hello-world -n myproject 3.4.4.7. Creating multiple objects The new-app command allows creating multiple applications specifying multiple parameters to new-app . Labels specified in the command line apply to all objects created by the single command. Environment variables apply to all components created from source or images. To create an application from a source repository and a Docker Hub image: USD oc new-app https://github.com/openshift/ruby-hello-world mysql Note If a source code repository and a builder image are specified as separate arguments, new-app uses the builder image as the builder for the source code repository. If this is not the intent, specify the required builder image for the source using the ~ separator. 3.4.4.8. Grouping images and source in a single pod The new-app command allows deploying multiple images together in a single pod. To specify which images to group together, use the + separator. The --group command line argument can also be used to specify the images that should be grouped together. 
To group the image built from a source repository with other images, specify its builder image in the group: USD oc new-app ruby+mysql To deploy an image built from source and an external image together: USD oc new-app \ ruby~https://github.com/openshift/ruby-hello-world \ mysql \ --group=ruby+mysql 3.4.4.9. Searching for images, templates, and other inputs To search for images, templates, and other inputs for the oc new-app command, add the --search and --list flags. For example, to find all of the images or templates that include PHP: USD oc new-app --search php 3.4.4.10. Setting the import mode To set the import mode when using oc new-app , add the --import-mode flag. This flag can be appended with Legacy or PreserveOriginal , which provides users the option to create image streams using a single sub-manifest, or all manifests, respectively. USD oc new-app --image=registry.redhat.io/ubi8/httpd-24:latest --import-mode=Legacy --name=test USD oc new-app --image=registry.redhat.io/ubi8/httpd-24:latest --import-mode=PreserveOriginal --name=test 3.5. Creating applications using Ruby on Rails Ruby on Rails is a web framework written in Ruby. This guide covers using Rails 4 on OpenShift Container Platform. Warning Go through the whole tutorial to have an overview of all the steps necessary to run your application on the OpenShift Container Platform. If you experience a problem try reading through the entire tutorial and then going back to your issue. It can also be useful to review your steps to ensure that all the steps were run correctly. 3.5.1. Prerequisites Basic Ruby and Rails knowledge. Locally installed version of Ruby 2.0.0+, Rubygems, Bundler. Basic Git knowledge. Running instance of OpenShift Container Platform 4. Make sure that an instance of OpenShift Container Platform is running and is available. Also make sure that your oc CLI client is installed and the command is accessible from your command shell, so you can use it to log in using your email address and password. 3.5.2. Setting up the database Rails applications are almost always used with a database. For local development use the PostgreSQL database. Procedure Install the database: USD sudo yum install -y postgresql postgresql-server postgresql-devel Initialize the database: USD sudo postgresql-setup initdb This command creates the /var/lib/pgsql/data directory, in which the data is stored. Start the database: USD sudo systemctl start postgresql.service When the database is running, create your rails user: USD sudo -u postgres createuser -s rails Note that the user created has no password. 3.5.3. Writing your application If you are starting your Rails application from scratch, you must install the Rails gem first. Then you can proceed with writing your application. Procedure Install the Rails gem: USD gem install rails Example output Successfully installed rails-4.3.0 1 gem installed After you install the Rails gem, create a new application with PostgreSQL as your database: USD rails new rails-app --database=postgresql Change into your new application directory: USD cd rails-app If you already have an application, make sure the pg (postgresql) gem is present in your Gemfile . If not, edit your Gemfile by adding the gem: gem 'pg' Generate a new Gemfile.lock with all your dependencies: USD bundle install In addition to using the postgresql database with the pg gem, you also must ensure that the config/database.yml is using the postgresql adapter. 
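If you want to confirm that bundler resolved the pg gem before you move on to the config/database.yml changes, you can list the installed bundle. This verification step is an optional addition and is not part of the original procedure:
USD bundle list | grep pg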
Make sure you update the default section in the config/database.yml file so that it looks like this: default: &default adapter: postgresql encoding: unicode pool: 5 host: localhost username: rails password: <password> Create your application's development and test databases: USD rake db:create This creates the development and test databases in your PostgreSQL server. 3.5.3.1. Creating a welcome page Since Rails 4 no longer serves a static public/index.html page in production, you must create a new root page. To create a custom welcome page, you must complete the following steps: Create a controller with an index action. Create a view page for the welcome controller index action. Create a route that serves the application's root page with the created controller and view. Rails offers a generator that completes all necessary steps for you. Procedure Run the Rails generator: USD rails generate controller welcome index All the necessary files are created. Edit line 2 in the config/routes.rb file as follows: Run the rails server to verify the page is available: USD rails server You should see your page by visiting http://localhost:3000 in your browser. If you do not see the page, check the logs that are output to your server to debug. 3.5.3.2. Configuring application for OpenShift Container Platform To have your application communicate with the PostgreSQL database service running in OpenShift Container Platform, you must edit the default section in your config/database.yml to use environment variables, which you must define later, upon the database service creation. Procedure Edit the default section in your config/database.yml with pre-defined variables as follows: Sample config/database YAML file <% user = ENV.key?("POSTGRESQL_ADMIN_PASSWORD") ? "root" : ENV["POSTGRESQL_USER"] %> <% password = ENV.key?("POSTGRESQL_ADMIN_PASSWORD") ? ENV["POSTGRESQL_ADMIN_PASSWORD"] : ENV["POSTGRESQL_PASSWORD"] %> <% db_service = ENV.fetch("DATABASE_SERVICE_NAME","").upcase %> default: &default adapter: postgresql encoding: unicode # For details on connection pooling, see rails configuration guide # http://guides.rubyonrails.org/configuring.html#database-pooling pool: <%= ENV["POSTGRESQL_MAX_CONNECTIONS"] || 5 %> username: <%= user %> password: <%= password %> host: <%= ENV["#{db_service}_SERVICE_HOST"] %> port: <%= ENV["#{db_service}_SERVICE_PORT"] %> database: <%= ENV["POSTGRESQL_DATABASE"] %> 3.5.3.3. Storing your application in Git Building an application in OpenShift Container Platform usually requires that the source code be stored in a git repository, so you must install git if you do not already have it. Prerequisites Install git. Procedure Make sure you are in your Rails application directory by running the ls -1 command. The output of the command should look like: USD ls -1 Example output app bin config config.ru db Gemfile Gemfile.lock lib log public Rakefile README.rdoc test tmp vendor Run the following commands in your Rails app directory to initialize and commit your code to git: USD git init USD git add . USD git commit -m "initial commit" After your application is committed, you must push it to a remote repository, for example a new repository that you create under your GitHub account. Set the remote that points to your git repository: USD git remote add origin git@github.com:<namespace/repository-name>.git Push your application to your remote git repository: USD git push 3.5.4. Deploying your application to OpenShift Container Platform You can deploy your application to OpenShift Container Platform.
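Before you deploy, you can optionally confirm that your most recent commit reached the remote repository that OpenShift Container Platform will pull from. This check is an illustration only; the origin URL shown is the placeholder used earlier:
USD git remote -v
origin git@github.com:<namespace/repository-name>.git (fetch)
origin git@github.com:<namespace/repository-name>.git (push)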
After creating the rails-app project, you are automatically switched to the new project namespace. Deploying your application in OpenShift Container Platform involves three steps: Creating a database service from OpenShift Container Platform's PostgreSQL image. Creating a frontend service from OpenShift Container Platform's Ruby 2.0 builder image and your Ruby on Rails source code, which are wired with the database service. Creating a route for your application. Procedure To deploy your Ruby on Rails application, create a new project for the application: USD oc new-project rails-app --description="My Rails application" --display-name="Rails Application" 3.5.4.1. Creating the database service Your Rails application expects a running database service. For this service, use the PostgreSQL database image. To create the database service, use the oc new-app command. To this command, you must pass the necessary environment variables, which are used inside the database container. These environment variables are required to set the username, password, and name of the database. You can change the values of these environment variables to anything you would like. The variables are as follows: POSTGRESQL_DATABASE POSTGRESQL_USER POSTGRESQL_PASSWORD Setting these variables ensures: A database exists with the specified name. A user exists with the specified name. The user can access the specified database with the specified password. Procedure Create the database service: USD oc new-app postgresql -e POSTGRESQL_DATABASE=db_name -e POSTGRESQL_USER=username -e POSTGRESQL_PASSWORD=password To also set the password for the database administrator, append the following to the command: -e POSTGRESQL_ADMIN_PASSWORD=admin_pw Watch the progress: USD oc get pods --watch 3.5.4.2. Creating the frontend service To bring your application to OpenShift Container Platform, you must specify a repository in which your application lives. Procedure Create the frontend service and specify the database-related environment variables that were set up when creating the database service: USD oc new-app path/to/source/code --name=rails-app -e POSTGRESQL_USER=username -e POSTGRESQL_PASSWORD=password -e POSTGRESQL_DATABASE=db_name -e DATABASE_SERVICE_NAME=postgresql With this command, OpenShift Container Platform fetches the source code, sets up the builder, builds your application image, and deploys the newly created image together with the specified environment variables. The application is named rails-app. Verify that the environment variables have been added by viewing the JSON document of the rails-app deployment config: USD oc get dc rails-app -o json You should see the following section: Example output env": [ { "name": "POSTGRESQL_USER", "value": "username" }, { "name": "POSTGRESQL_PASSWORD", "value": "password" }, { "name": "POSTGRESQL_DATABASE", "value": "db_name" }, { "name": "DATABASE_SERVICE_NAME", "value": "postgresql" } ], Check the build process: USD oc logs -f build/rails-app-1 After the build is complete, look at the running pods in OpenShift Container Platform: USD oc get pods You should see a line starting with rails-app-<number>-<hash>, and that is your application running in OpenShift Container Platform. Before your application is functional, you must initialize the database by running the database migration script.
There are two ways you can do this: Manually from the running frontend container: Exec into the frontend container with the rsh command: USD oc rsh <frontend_pod_id> Run the migration from inside the container: USD RAILS_ENV=production bundle exec rake db:migrate If you are running your Rails application in a development or test environment, you do not have to specify the RAILS_ENV environment variable. By adding pre-deployment lifecycle hooks in your template. 3.5.4.3. Creating a route for your application You can expose a service to create a route for your application. Procedure To expose a service by giving it an externally-reachable hostname like www.example.com, use an OpenShift Container Platform route. In this case, you need to expose the frontend service by typing: USD oc expose service rails-app --hostname=www.example.com Warning Ensure the hostname you specify resolves to the IP address of the router. | [
"oc create -f <filename>",
"oc create -f <filename> -n <project>",
"kind: \"ImageStream\" apiVersion: \"image.openshift.io/v1\" metadata: name: \"ruby\" creationTimestamp: null spec: tags: - name: \"2.6\" annotations: description: \"Build and run Ruby 2.6 applications\" iconClass: \"icon-ruby\" tags: \"builder,ruby\" 1 supports: \"ruby:2.6,ruby\" version: \"2.6\"",
"oc process -f <filename> -l name=otherLabel",
"oc process --parameters -f <filename>",
"oc process --parameters -n <project> <template_name>",
"oc process --parameters -n openshift rails-postgresql-example",
"NAME DESCRIPTION GENERATOR VALUE SOURCE_REPOSITORY_URL The URL of the repository with your application source code https://github.com/sclorg/rails-ex.git SOURCE_REPOSITORY_REF Set this to a branch name, tag or other ref of your repository if you are not using the default branch CONTEXT_DIR Set this to the relative path to your project if it is not in the root of your repository APPLICATION_DOMAIN The exposed hostname that will route to the Rails service rails-postgresql-example.openshiftapps.com GITHUB_WEBHOOK_SECRET A secret string used to configure the GitHub webhook expression [a-zA-Z0-9]{40} SECRET_KEY_BASE Your secret key for verifying the integrity of signed cookies expression [a-z0-9]{127} APPLICATION_USER The application user that is used within the sample application to authorize access on pages openshift APPLICATION_PASSWORD The application password that is used within the sample application to authorize access on pages secret DATABASE_SERVICE_NAME Database service name postgresql POSTGRESQL_USER database username expression user[A-Z0-9]{3} POSTGRESQL_PASSWORD database password expression [a-zA-Z0-9]{8} POSTGRESQL_DATABASE database name root POSTGRESQL_MAX_CONNECTIONS database max connections 10 POSTGRESQL_SHARED_BUFFERS database shared buffers 12MB",
"oc process -f <filename>",
"oc process <template_name>",
"oc process -f <filename> | oc create -f -",
"oc process <template> | oc create -f -",
"oc process -f my-rails-postgresql -p POSTGRESQL_USER=bob -p POSTGRESQL_DATABASE=mydatabase",
"oc process -f my-rails-postgresql -p POSTGRESQL_USER=bob -p POSTGRESQL_DATABASE=mydatabase | oc create -f -",
"cat postgres.env POSTGRESQL_USER=bob POSTGRESQL_DATABASE=mydatabase",
"oc process -f my-rails-postgresql --param-file=postgres.env",
"sed s/bob/alice/ postgres.env | oc process -f my-rails-postgresql --param-file=-",
"oc edit template <template>",
"oc get templates -n openshift",
"apiVersion: template.openshift.io/v1 kind: Template metadata: name: redis-template annotations: description: \"Description\" iconClass: \"icon-redis\" tags: \"database,nosql\" objects: - apiVersion: v1 kind: Pod metadata: name: redis-master spec: containers: - env: - name: REDIS_PASSWORD value: USD{REDIS_PASSWORD} image: dockerfile/redis name: master ports: - containerPort: 6379 protocol: TCP parameters: - description: Password used for Redis authentication from: '[A-Z0-9]{8}' generate: expression name: REDIS_PASSWORD labels: redis: master",
"kind: Template apiVersion: template.openshift.io/v1 metadata: name: cakephp-mysql-example 1 annotations: openshift.io/display-name: \"CakePHP MySQL Example (Ephemeral)\" 2 description: >- An example CakePHP application with a MySQL database. For more information about using this template, including OpenShift considerations, see https://github.com/sclorg/cakephp-ex/blob/master/README.md. WARNING: Any data stored will be lost upon pod destruction. Only use this template for testing.\" 3 openshift.io/long-description: >- This template defines resources needed to develop a CakePHP application, including a build configuration, application DeploymentConfig, and database DeploymentConfig. The database is stored in non-persistent storage, so this configuration should be used for experimental purposes only. 4 tags: \"quickstart,php,cakephp\" 5 iconClass: icon-php 6 openshift.io/provider-display-name: \"Red Hat, Inc.\" 7 openshift.io/documentation-url: \"https://github.com/sclorg/cakephp-ex\" 8 openshift.io/support-url: \"https://access.redhat.com\" 9 message: \"Your admin credentials are USD{ADMIN_USERNAME}:USD{ADMIN_PASSWORD}\" 10",
"kind: \"Template\" apiVersion: \"v1\" labels: template: \"cakephp-mysql-example\" 1 app: \"USD{NAME}\" 2",
"parameters: - name: USERNAME description: \"The user name for Joe\" value: joe",
"parameters: - name: PASSWORD description: \"The random user password\" generate: expression from: \"[a-zA-Z0-9]{12}\"",
"parameters: - name: singlequoted_example generate: expression from: '[\\A]{10}' - name: doublequoted_example generate: expression from: \"[\\\\A]{10}\"",
"{ \"parameters\": [ { \"name\": \"json_example\", \"generate\": \"expression\", \"from\": \"[\\\\A]{10}\" } ] }",
"kind: Template apiVersion: template.openshift.io/v1 metadata: name: my-template objects: - kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: cakephp-mysql-example annotations: description: Defines how to build the application spec: source: type: Git git: uri: \"USD{SOURCE_REPOSITORY_URL}\" 1 ref: \"USD{SOURCE_REPOSITORY_REF}\" contextDir: \"USD{CONTEXT_DIR}\" - kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: frontend spec: replicas: \"USD{{REPLICA_COUNT}}\" 2 parameters: - name: SOURCE_REPOSITORY_URL 3 displayName: Source Repository URL 4 description: The URL of the repository with your application source code 5 value: https://github.com/sclorg/cakephp-ex.git 6 required: true 7 - name: GITHUB_WEBHOOK_SECRET description: A secret string used to configure the GitHub webhook generate: expression 8 from: \"[a-zA-Z0-9]{40}\" 9 - name: REPLICA_COUNT description: Number of replicas to run value: \"2\" required: true message: \"... The GitHub webhook secret is USD{GITHUB_WEBHOOK_SECRET} ...\" 10",
"kind: \"Template\" apiVersion: \"v1\" metadata: name: my-template objects: - kind: \"Service\" 1 apiVersion: \"v1\" metadata: name: \"cakephp-mysql-example\" annotations: description: \"Exposes and load balances the application pods\" spec: ports: - name: \"web\" port: 8080 targetPort: 8080 selector: name: \"cakephp-mysql-example\"",
"kind: Template apiVersion: template.openshift.io/v1 metadata: name: my-template objects: - kind: ConfigMap apiVersion: v1 metadata: name: my-template-config annotations: template.openshift.io/expose-username: \"{.data['my\\\\.username']}\" data: my.username: foo - kind: Secret apiVersion: v1 metadata: name: my-template-config-secret annotations: template.openshift.io/base64-expose-password: \"{.data['password']}\" stringData: password: <password> - kind: Service apiVersion: v1 metadata: name: my-template-service annotations: template.openshift.io/expose-service_ip_port: \"{.spec.clusterIP}:{.spec.ports[?(.name==\\\"web\\\")].port}\" spec: ports: - name: \"web\" port: 8080 - kind: Route apiVersion: route.openshift.io/v1 metadata: name: my-template-route annotations: template.openshift.io/expose-uri: \"http://{.spec.host}{.spec.path}\" spec: path: mypath",
"{ \"credentials\": { \"username\": \"foo\", \"password\": \"YmFy\", \"service_ip_port\": \"172.30.12.34:8080\", \"uri\": \"http://route-test.router.default.svc.cluster.local/mypath\" } }",
"\"template.alpha.openshift.io/wait-for-ready\": \"true\"",
"kind: Template apiVersion: template.openshift.io/v1 metadata: name: my-template objects: - kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: annotations: # wait-for-ready used on BuildConfig ensures that template instantiation # will fail immediately if build fails template.alpha.openshift.io/wait-for-ready: \"true\" spec: - kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: annotations: template.alpha.openshift.io/wait-for-ready: \"true\" spec: - kind: Service apiVersion: v1 metadata: name: spec:",
"oc get -o yaml all > <yaml_filename>",
"oc get csv",
"oc policy add-role-to-user edit <user> -n <target_project>",
"oc new-app /<path to source code>",
"oc new-app https://github.com/sclorg/cakephp-ex",
"oc new-app https://github.com/youruser/yourprivaterepo --source-secret=yoursecret",
"oc new-app https://github.com/sclorg/s2i-ruby-container.git --context-dir=2.0/test/puma-test-app",
"oc new-app https://github.com/openshift/ruby-hello-world.git#beta4",
"oc new-app /home/user/code/myapp --strategy=docker",
"oc new-app myproject/my-ruby~https://github.com/openshift/ruby-hello-world.git",
"oc new-app openshift/ruby-20-centos7:latest~/home/user/code/my-ruby-app",
"oc new-app mysql",
"oc new-app myregistry:5000/example/myimage",
"oc new-app my-stream:v1",
"oc create -f examples/sample-app/application-template-stibuild.json",
"oc new-app ruby-helloworld-sample",
"oc new-app -f examples/sample-app/application-template-stibuild.json",
"oc new-app ruby-helloworld-sample -p ADMIN_USERNAME=admin -p ADMIN_PASSWORD=mypassword",
"ADMIN_USERNAME=admin ADMIN_PASSWORD=mypassword",
"oc new-app ruby-helloworld-sample --param-file=helloworld.params",
"oc new-app openshift/postgresql-92-centos7 -e POSTGRESQL_USER=user -e POSTGRESQL_DATABASE=db -e POSTGRESQL_PASSWORD=password",
"POSTGRESQL_USER=user POSTGRESQL_DATABASE=db POSTGRESQL_PASSWORD=password",
"oc new-app openshift/postgresql-92-centos7 --env-file=postgresql.env",
"cat postgresql.env | oc new-app openshift/postgresql-92-centos7 --env-file=-",
"oc new-app openshift/ruby-23-centos7 --build-env HTTP_PROXY=http://myproxy.net:1337/ --build-env GEM_HOME=~/.gem",
"HTTP_PROXY=http://myproxy.net:1337/ GEM_HOME=~/.gem",
"oc new-app openshift/ruby-23-centos7 --build-env-file=ruby.env",
"cat ruby.env | oc new-app openshift/ruby-23-centos7 --build-env-file=-",
"oc new-app https://github.com/openshift/ruby-hello-world -l name=hello-world",
"oc new-app https://github.com/openshift/ruby-hello-world -o yaml > myapp.yaml",
"vi myapp.yaml",
"oc create -f myapp.yaml",
"oc new-app https://github.com/openshift/ruby-hello-world --name=myapp",
"oc new-app https://github.com/openshift/ruby-hello-world -n myproject",
"oc new-app https://github.com/openshift/ruby-hello-world mysql",
"oc new-app ruby+mysql",
"oc new-app ruby~https://github.com/openshift/ruby-hello-world mysql --group=ruby+mysql",
"oc new-app --search php",
"oc new-app --image=registry.redhat.io/ubi8/httpd-24:latest --import-mode=Legacy --name=test",
"oc new-app --image=registry.redhat.io/ubi8/httpd-24:latest --import-mode=PreserveOriginal --name=test",
"sudo yum install -y postgresql postgresql-server postgresql-devel",
"sudo postgresql-setup initdb",
"sudo systemctl start postgresql.service",
"sudo -u postgres createuser -s rails",
"gem install rails",
"Successfully installed rails-4.3.0 1 gem installed",
"rails new rails-app --database=postgresql",
"cd rails-app",
"gem 'pg'",
"bundle install",
"default: &default adapter: postgresql encoding: unicode pool: 5 host: localhost username: rails password: <password>",
"rake db:create",
"rails generate controller welcome index",
"root 'welcome#index'",
"rails server",
"<% user = ENV.key?(\"POSTGRESQL_ADMIN_PASSWORD\") ? \"root\" : ENV[\"POSTGRESQL_USER\"] %> <% password = ENV.key?(\"POSTGRESQL_ADMIN_PASSWORD\") ? ENV[\"POSTGRESQL_ADMIN_PASSWORD\"] : ENV[\"POSTGRESQL_PASSWORD\"] %> <% db_service = ENV.fetch(\"DATABASE_SERVICE_NAME\",\"\").upcase %> default: &default adapter: postgresql encoding: unicode # For details on connection pooling, see rails configuration guide # http://guides.rubyonrails.org/configuring.html#database-pooling pool: <%= ENV[\"POSTGRESQL_MAX_CONNECTIONS\"] || 5 %> username: <%= user %> password: <%= password %> host: <%= ENV[\"#{db_service}_SERVICE_HOST\"] %> port: <%= ENV[\"#{db_service}_SERVICE_PORT\"] %> database: <%= ENV[\"POSTGRESQL_DATABASE\"] %>",
"ls -1",
"app bin config config.ru db Gemfile Gemfile.lock lib log public Rakefile README.rdoc test tmp vendor",
"git init",
"git add .",
"git commit -m \"initial commit\"",
"git remote add origin [email protected]:<namespace/repository-name>.git",
"git push",
"oc new-project rails-app --description=\"My Rails application\" --display-name=\"Rails Application\"",
"oc new-app postgresql -e POSTGRESQL_DATABASE=db_name -e POSTGRESQL_USER=username -e POSTGRESQL_PASSWORD=password",
"-e POSTGRESQL_ADMIN_PASSWORD=admin_pw",
"oc get pods --watch",
"oc new-app path/to/source/code --name=rails-app -e POSTGRESQL_USER=username -e POSTGRESQL_PASSWORD=password -e POSTGRESQL_DATABASE=db_name -e DATABASE_SERVICE_NAME=postgresql",
"oc get dc rails-app -o json",
"env\": [ { \"name\": \"POSTGRESQL_USER\", \"value\": \"username\" }, { \"name\": \"POSTGRESQL_PASSWORD\", \"value\": \"password\" }, { \"name\": \"POSTGRESQL_DATABASE\", \"value\": \"db_name\" }, { \"name\": \"DATABASE_SERVICE_NAME\", \"value\": \"postgresql\" } ],",
"oc logs -f build/rails-app-1",
"oc get pods",
"oc rsh <frontend_pod_id>",
"RAILS_ENV=production bundle exec rake db:migrate",
"oc expose service rails-app --hostname=www.example.com"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/building_applications/creating-applications |
Chapter 15. Deleting applications | Chapter 15. Deleting applications You can delete applications created in your project. 15.1. Deleting applications using the Developer perspective You can delete an application and all of its associated components using the Topology view in the Developer perspective: Click the application you want to delete to see the side panel with the resource details of the application. Click the Actions drop-down menu displayed on the upper right of the panel, and select Delete Application to see a confirmation dialog box. Enter the name of the application and click Delete to delete it. You can also right-click the application you want to delete and click Delete Application to delete it. | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/building_applications/odc-deleting-applications |
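If you prefer the command line, a roughly equivalent cleanup is to delete the resources that carry the application's grouping label. The label key used below (app.kubernetes.io/part-of) is an assumption about how your components were labeled, so check the labels on your resources first:
USD oc get all -l app.kubernetes.io/part-of=<application_name>
USD oc delete all -l app.kubernetes.io/part-of=<application_name>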
Chapter 17. Managing hosts using Ansible playbooks | Chapter 17. Managing hosts using Ansible playbooks Ansible is an automation tool used to configure systems, deploy software, and perform rolling updates. Ansible includes support for Identity Management (IdM), and you can use Ansible modules to automate host management. The following concepts and operations are performed when managing hosts and host entries using Ansible playbooks: Ensuring the presence of IdM host entries that are only defined by their FQDNs Ensuring the presence of IdM host entries with IP addresses Ensuring the presence of multiple IdM host entries with random passwords Ensuring the presence of an IdM host entry with multiple IP addresses Ensuring the absence of IdM host entries 17.1. Ensuring the presence of an IdM host entry with FQDN using Ansible playbooks Follow this procedure to ensure the presence of host entries in Identity Management (IdM) using Ansible playbooks. The host entries are only defined by their fully-qualified domain names (FQDNs). Specifying the FQDN name of the host is enough if at least one of the following conditions applies: The IdM server is not configured to manage DNS. The host does not have a static IP address or the IP address is not known at the time the host is configured. Adding a host defined only by an FQDN essentially creates a placeholder entry in the IdM DNS service. For example, laptops may be preconfigured as IdM clients, but they do not have IP addresses at the time they are configured. When the DNS service dynamically updates its records, the host's current IP address is detected and its DNS record is updated. Note Without Ansible, host entries are created in IdM using the ipa host-add command. The result of adding a host to IdM is the state of the host being present in IdM. Because of the Ansible reliance on idempotence, to add a host to IdM using Ansible, you must create a playbook in which you define the state of the host as present: state: present . Prerequisites You know the IdM administrator password. You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure Create an inventory file, for example inventory.file , and define ipaserver in it: Create an Ansible playbook file with the FQDN of the host whose presence in IdM you want to ensure. To simplify this step, you can copy and modify the example in the /usr/share/doc/ansible-freeipa/playbooks/host/add-host.yml file: Run the playbook: Note The procedure results in a host entry in the IdM LDAP server being created but not in enrolling the host into the IdM Kerberos realm. For that, you must deploy the host as an IdM client. For details, see Installing an Identity Management client using an Ansible playbook . Verification Log in to your IdM server as admin: Enter the ipa host-show command and specify the name of the host: The output confirms that host01.idm.example.com exists in IdM. 17.2. 
Ensuring the presence of an IdM host entry with DNS information using Ansible playbooks Follow this procedure to ensure the presence of host entries in Identity Management (IdM) using Ansible playbooks. The host entries are defined by their fully-qualified domain names (FQDNs) and their IP addresses. Note Without Ansible, host entries are created in IdM using the ipa host-add command. The result of adding a host to IdM is the state of the host being present in IdM. Because of the Ansible reliance on idempotence, to add a host to IdM using Ansible, you must create a playbook in which you define the state of the host as present: state: present . Prerequisites You know the IdM administrator password. You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure Create an inventory file, for example inventory.file , and define ipaserver in it: Create an Ansible playbook file with the fully-qualified domain name (FQDN) of the host whose presence in IdM you want to ensure. In addition, if the IdM server is configured to manage DNS and you know the IP address of the host, specify a value for the ip_address parameter. The IP address is necessary for the host to exist in the DNS resource records. To simplify this step, you can copy and modify the example in the /usr/share/doc/ansible-freeipa/playbooks/host/host-present.yml file. You can also include other, additional information: Run the playbook: Note The procedure results in a host entry in the IdM LDAP server being created but not in enrolling the host into the IdM Kerberos realm. For that, you must deploy the host as an IdM client. For details, see Installing an Identity Management client using an Ansible playbook . Verification Log in to your IdM server as admin: Enter the ipa host-show command and specify the name of the host: The output confirms host01.idm.example.com exists in IdM. 17.3. Ensuring the presence of multiple IdM host entries with random passwords using Ansible playbooks The ipahost module allows the system administrator to ensure the presence or absence of multiple host entries in IdM using just one Ansible task. Follow this procedure to ensure the presence of multiple host entries that are only defined by their fully-qualified domain names (FQDNs). Running the Ansible playbook generates random passwords for the hosts. Note Without Ansible, host entries are created in IdM using the ipa host-add command. The result of adding a host to IdM is the state of the host being present in IdM. Because of the Ansible reliance on idempotence, to add a host to IdM using Ansible, you must create a playbook in which you define the state of the host as present: state: present . Prerequisites You know the IdM administrator password. You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. 
The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure Create an inventory file, for example inventory.file , and define ipaserver in it: Create an Ansible playbook file with the fully-qualified domain name (FQDN) of the hosts whose presence in IdM you want to ensure. To make the Ansible playbook generate a random password for each host even when the host already exists in IdM and update_password is limited to on_create , add the random: true and force: true options. To simplify this step, you can copy and modify the example from the /usr/share/doc/ansible-freeipa/README-host.md Markdown file: Run the playbook: Note To deploy the hosts as IdM clients using random, one-time passwords (OTPs), see Authorization options for IdM client enrollment using an Ansible playbook or Installing a client by using a one-time password: Interactive installation . Verification Log in to your IdM server as admin: Enter the ipa host-show command and specify the name of one of the hosts: The output confirms host01.idm.example.com exists in IdM with a random password. 17.4. Ensuring the presence of an IdM host entry with multiple IP addresses using Ansible playbooks Follow this procedure to ensure the presence of a host entry in Identity Management (IdM) using Ansible playbooks. The host entry is defined by its fully-qualified domain name (FQDN) and its multiple IP addresses. Note In contrast to the ipa host utility, the Ansible ipahost module can ensure the presence or absence of several IPv4 and IPv6 addresses for a host. The ipa host-mod command cannot handle IP addresses. Prerequisites You know the IdM administrator password. You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure Create an inventory file, for example inventory.file , and define ipaserver in it: Create an Ansible playbook file. Specify, as the name of the ipahost variable, the fully-qualified domain name (FQDN) of the host whose presence in IdM you want to ensure. Specify each of the multiple IPv4 and IPv6 ip_address values on a separate line by using the ip_address syntax. To simplify this step, you can copy and modify the example in the /usr/share/doc/ansible-freeipa/playbooks/host/host-member-ipaddresses-present.yml file. You can also include additional information: Run the playbook: Note The procedure creates a host entry in the IdM LDAP server but does not enroll the host into the IdM Kerberos realm. For that, you must deploy the host as an IdM client. For details, see Installing an Identity Management client using an Ansible playbook . 
Verification Log in to your IdM server as admin: Enter the ipa host-show command and specify the name of the host: The output confirms that host01.idm.example.com exists in IdM. To verify that the multiple IP addresses of the host exist in the IdM DNS records, enter the ipa dnsrecord-show command and specify the following information: The name of the IdM domain The name of the host The output confirms that all the IPv4 and IPv6 addresses specified in the playbook are correctly associated with the host01.idm.example.com host entry. 17.5. Ensuring the absence of an IdM host entry using Ansible playbooks Follow this procedure to ensure the absence of host entries in Identity Management (IdM) using Ansible playbooks. Prerequisites IdM administrator credentials Procedure Create an inventory file, for example inventory.file , and define ipaserver in it: Create an Ansible playbook file with the fully-qualified domain name (FQDN) of the host whose absence from IdM you want to ensure. If your IdM domain has integrated DNS, use the updatedns: true option to remove the associated records of any kind for the host from the DNS. To simplify this step, you can copy and modify the example in the /usr/share/doc/ansible-freeipa/playbooks/host/delete-host.yml file: Run the playbook: Note The procedure results in: The host not being present in the IdM Kerberos realm. The host entry not being present in the IdM LDAP server. To remove the specific IdM configuration of system services, such as System Security Services Daemon (SSSD), from the client host itself, you must run the ipa-client-install --uninstall command on the client. For details, see Uninstalling an IdM client . Verification Log into ipaserver as admin: Display information about host01.idm.example.com : The output confirms that the host does not exist in IdM. 17.6. Additional resources See the /usr/share/doc/ansible-freeipa/README-host.md Markdown file. See the additional playbooks in the /usr/share/doc/ansible-freeipa/playbooks/host directory. | [
"[ipaserver] server.idm.example.com",
"--- - name: Host present hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Host host01.idm.example.com present ipahost: ipaadmin_password: \"{{ ipaadmin_password }}\" name: host01.idm.example.com state: present force: true",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-host-is-present.yml",
"ssh [email protected] Password:",
"ipa host-show host01.idm.example.com Host name: host01.idm.example.com Principal name: host/[email protected] Principal alias: host/[email protected] Password: False Keytab: False Managed by: host01.idm.example.com",
"[ipaserver] server.idm.example.com",
"--- - name: Host present hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure host01.idm.example.com is present ipahost: ipaadmin_password: \"{{ ipaadmin_password }}\" name: host01.idm.example.com description: Example host ip_address: 192.168.0.123 locality: Lab ns_host_location: Lab ns_os_version: CentOS 7 ns_hardware_platform: Lenovo T61 mac_address: - \"08:00:27:E3:B1:2D\" - \"52:54:00:BD:97:1E\" state: present",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-host-is-present.yml",
"ssh [email protected] Password:",
"ipa host-show host01.idm.example.com Host name: host01.idm.example.com Description: Example host Locality: Lab Location: Lab Platform: Lenovo T61 Operating system: CentOS 7 Principal name: host/[email protected] Principal alias: host/[email protected] MAC address: 08:00:27:E3:B1:2D, 52:54:00:BD:97:1E Password: False Keytab: False Managed by: host01.idm.example.com",
"[ipaserver] server.idm.example.com",
"--- - name: Ensure hosts with random password hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Hosts host01.idm.example.com and host02.idm.example.com present with random passwords ipahost: ipaadmin_password: \"{{ ipaadmin_password }}\" hosts: - name: host01.idm.example.com random: true force: true - name: host02.idm.example.com random: true force: true register: ipahost",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-hosts-are-present.yml [...] TASK [Hosts host01.idm.example.com and host02.idm.example.com present with random passwords] changed: [r8server.idm.example.com] => {\"changed\": true, \"host\": {\"host01.idm.example.com\": {\"randompassword\": \"0HoIRvjUdH0Ycbf6uYdWTxH\"}, \"host02.idm.example.com\": {\"randompassword\": \"5VdLgrf3wvojmACdHC3uA3s\"}}}",
"ssh [email protected] Password:",
"ipa host-show host01.idm.example.com Host name: host01.idm.example.com Password: True Keytab: False Managed by: host01.idm.example.com",
"[ipaserver] server.idm.example.com",
"--- - name: Host member IP addresses present hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure host101.example.com IP addresses present ipahost: ipaadmin_password: \"{{ ipaadmin_password }}\" name: host01.idm.example.com ip_address: - 192.168.0.123 - fe80::20c:29ff:fe02:a1b3 - 192.168.0.124 - fe80::20c:29ff:fe02:a1b4 force: true",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-host-with-multiple-IP-addreses-is-present.yml",
"ssh [email protected] Password:",
"ipa host-show host01.idm.example.com Principal name: host/[email protected] Principal alias: host/[email protected] Password: False Keytab: False Managed by: host01.idm.example.com",
"ipa dnsrecord-show idm.example.com host01 [...] Record name: host01 A record: 192.168.0.123, 192.168.0.124 AAAA record: fe80::20c:29ff:fe02:a1b3, fe80::20c:29ff:fe02:a1b4",
"[ipaserver] server.idm.example.com",
"--- - name: Host absent hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Host host01.idm.example.com absent ipahost: ipaadmin_password: \"{{ ipaadmin_password }}\" name: host01.idm.example.com updatedns: true state: absent",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-host-absent.yml",
"ssh [email protected] Password: [admin@server /]USD",
"ipa host-show host01.idm.example.com ipa: ERROR: host01.idm.example.com: host not found"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/using_ansible_to_install_and_manage_identity_management/managing-hosts-using-ansible-playbooks_using-ansible-to-install-and-manage-idm |
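A small addition that can help when working with the randomly generated passwords from the multiple-hosts procedure: because that playbook registers its result in the ipahost variable, a debug task can print the generated values. This task is a sketch and is not part of the original playbooks:
- name: Show the generated one-time passwords
  ansible.builtin.debug:
    var: ipahost.host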
Appendix B. Common libvirt Errors and Troubleshooting | Appendix B. Common libvirt Errors and Troubleshooting This appendix documents common libvirt -related problems and errors along with instructions for dealing with them. Locate the error on the table below and follow the corresponding link under Solution for detailed troubleshooting information. Table B.1. Common libvirt errors Error Description of problem Solution libvirtd Failed to Start The libvirt daemon failed to start. However, there is no information about this error in /var/log/messages . Section B.1, " libvirtd failed to start" Cannot read CA certificate This is one of several errors that occur when the URI fails to connect to the hypervisor. Section B.2, "The URI Failed to Connect to the Hypervisor" Failed to connect socket ... : Permission denied This is one of several errors that occur when the URI fails to connect to the hypervisor. Section B.2, "The URI Failed to Connect to the Hypervisor" Other connectivity errors These are other errors that occur when the URI fails to connect to the hypervisor. Section B.2, "The URI Failed to Connect to the Hypervisor" Internal error guest CPU is not compatible with host CPU The guest virtual machine cannot be started because the host and guest processors are different. Section B.3, "The guest virtual machine cannot be started: internal error guest CPU is not compatible with host CPU " Failed to create domain from vm.xml error: monitor socket did not show up.: Connection refused The guest virtual machine (or domain) starting fails and returns this error or similar. Section B.4, "Guest starting fails with error: monitor socket did not show up " Internal error cannot find character device (null) This error can occur when attempting to connect a guest's console. It reports that there is no serial console configured for the guest virtual machine. Section B.5, " Internal error cannot find character device (null) " No boot device After building a guest virtual machine from an existing disk image, the guest booting stalls. However, the guest can start successfully using the QEMU command directly. Section B.6, "Guest virtual machine booting stalls with error: No boot device " The virtual network "default" has not been started If the default network (or other locally-created network) is unable to start, any virtual machine configured to use that network for its connectivity will also fail to start. Section B.7, "Virtual network default has not been started" PXE boot (or DHCP) on guest failed A guest virtual machine starts successfully, but is unable to acquire an IP address from DHCP, boot using the PXE protocol, or both. This is often a result of a long forward delay time set for the bridge, or when the iptables package and kernel do not support checksum mangling rules. Section B.8, "PXE Boot (or DHCP) on Guest Failed" Guest can reach outside network, but cannot reach host when using macvtap interface A guest can communicate with other guests, but cannot connect to the host machine after being configured to use a macvtap (or type='direct' ) network interface. This is actually not an error - it is the defined behavior of macvtap. Section B.9, "Guest Can Reach Outside Network, but Cannot Reach Host when Using macvtap Interface" Could not add rule to fixup DHCP response checksums on network 'default' This warning message is almost always harmless, but is often mistakenly seen as evidence of a problem. 
Section B.10, "Could not add rule to fixup DHCP response checksums on network 'default' " Unable to add bridge br0 port vnet0: No such device This error message or the similar Failed to add tap interface to bridge 'br0' : No such device reveal that the bridge device specified in the guest's (or domain's) <interface> definition does not exist. Section B.11, "Unable to add bridge br0 port vnet0: No such device" Warning: could not open /dev/net/tun: no virtual network emulation qemu-kvm: -netdev tap,script=/etc/my-qemu-ifup,id=hostnet0: Device 'tap' could not be initialized The guest virtual machine does not start after configuring a type='ethernet' (or 'generic ethernet') interface in the host system. This error or similar appears either in libvirtd.log , /var/log/libvirt/qemu/ name_of_guest .log , or in both. Section B.12, "Guest is Unable to Start with Error: warning: could not open /dev/net/tun " Unable to resolve address name_of_host service '49155': Name or service not known QEMU guest migration fails and this error message appears with an unfamiliar host name. Section B.13, "Migration Fails with Error: unable to resolve address " Unable to allow access for disk path /var/lib/libvirt/images/qemu.img: No such file or directory A guest virtual machine cannot be migrated because libvirt cannot access the disk image(s). Section B.14, "Migration Fails with Unable to allow access for disk path: No such file or directory " No guest virtual machines are present when libvirtd is started The libvirt daemon is successfully started, but no guest virtual machines appear to be present when running virsh list --all . Section B.15, "No Guest Virtual Machines are Present when libvirtd is Started" Unable to connect to server at 'host:16509': Connection refused ... error: failed to connect to the hypervisor While libvirtd should listen on TCP ports for connections, the connection to the hypervisor fails. Section B.16, "Unable to connect to server at 'host:16509': Connection refused ... error: failed to connect to the hypervisor" Common XML errors libvirt uses XML documents to store structured data. Several common errors occur with XML documents when they are passed to libvirt through the API. This entry provides instructions for editing guest XML definitions, and details common errors in XML syntax and configuration. Section B.17, "Common XML Errors" B.1. libvirtd failed to start Symptom The libvirt daemon does not start automatically. Starting the libvirt daemon manually fails as well: Moreover, there is not 'more info' about this error in /var/log/messages . Investigation Change libvirt's logging in /etc/libvirt/libvirtd.conf by uncommenting the line below. To uncomment the line, open the /etc/libvirt/libvirtd.conf file in a text editor, remove the hash (or # ) symbol from the beginning of the following line, and save the change: Note This line is commented out by default to prevent libvirt from producing excessive log messages. After diagnosing the problem, it is recommended to comment this line again in the /etc/libvirt/libvirtd.conf file. Restart libvirt to determine if this has solved the problem. 
If libvirtd still does not start successfully, an error similar to the following will be shown in the /var/log/messages file: Feb 6 17:22:09 bart libvirtd: 17576: info : libvirt version: 0.9.9 Feb 6 17:22:09 bart libvirtd: 17576: error : virNetTLSContextCheckCertFile:92: Cannot read CA certificate '/etc/pki/CA/cacert.pem': No such file or directory Feb 6 17:22:09 bart /etc/init.d/libvirtd[17573]: start-stop-daemon: failed to start `/usr/sbin/libvirtd' Feb 6 17:22:09 bart /etc/init.d/libvirtd[17565]: ERROR: libvirtd failed to start The libvirtd man page shows that the missing cacert.pem file is used as TLS authority when libvirt is run in Listen for TCP/IP connections mode. This means the --listen parameter is being passed. Solution Configure the libvirt daemon's settings with one of the following methods: Install a CA certificate. Note For more information on CA certificates and configuring system authentication, refer to the Configuring Authentication chapter in the Red Hat Enterprise Linux 6 Deployment Guide . Do not use TLS; use bare TCP instead. In /etc/libvirt/libvirtd.conf set listen_tls = 0 and listen_tcp = 1 . The default values are listen_tls = 1 and listen_tcp = 0 . Do not pass the --listen parameter. In /etc/sysconfig/libvirtd.conf change the LIBVIRTD_ARGS variable. | [
"/etc/init.d/libvirtd start * Caching service dependencies ... [ ok ] * Starting libvirtd /usr/sbin/libvirtd: error: Unable to initialize network sockets. Check /var/log/messages or run without --daemon for more info. * start-stop-daemon: failed to start `/usr/sbin/libvirtd' [ !! ] * ERROR: libvirtd failed to start",
"log_outputs=\"3:syslog:libvirtd\"",
"Feb 6 17:22:09 bart libvirtd: 17576: info : libvirt version: 0.9.9 Feb 6 17:22:09 bart libvirtd: 17576: error : virNetTLSContextCheckCertFile:92: Cannot read CA certificate '/etc/pki/CA/cacert.pem': No such file or directory Feb 6 17:22:09 bart /etc/init.d/libvirtd[17573]: start-stop-daemon: failed to start `/usr/sbin/libvirtd' Feb 6 17:22:09 bart /etc/init.d/libvirtd[17565]: ERROR: libvirtd failed to start"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_host_configuration_and_guest_installation_guide/appx-common-libvirt-errors-troubleshooting |
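To make the bare TCP option from the solution above concrete, the relevant /etc/libvirt/libvirtd.conf settings would look like the following. This is a sketch of the change described in the text, not a complete configuration:
# /etc/libvirt/libvirtd.conf
listen_tls = 0
listen_tcp = 1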
Chapter 1. Introduction | Chapter 1. Introduction Red Hat OpenStack Platform director creates a cloud environment called the overcloud . The director provides the ability to configure extra features for an overcloud, including integration with Red Hat Ceph Storage (both Ceph Storage clusters created with the director or existing Ceph Storage clusters). This guide contains instructions for deploying a containerized Red Hat Ceph Storage cluster with your overcloud. Director uses Ansible playbooks provided through the ceph-ansible package to deploy a containerized Ceph cluster. The director also manages the configuration and scaling operations of the cluster. For more information about containerized services in OpenStack, see Configuring a basic overcloud with the CLI tools in the Director Installation and Usage guide. 1.1. Introduction to Ceph Storage Red Hat Ceph Storage is a distributed data object store designed to provide excellent performance, reliability, and scalability. Distributed object stores are the future of storage, because they accommodate unstructured data, and because clients can use modern object interfaces and legacy interfaces simultaneously. At the core of every Ceph deployment is the Ceph Storage cluster, which consists of several types of daemons, but primarily, these two: Ceph OSD (Object Storage Daemon) Ceph OSDs store data on behalf of Ceph clients. Additionally, Ceph OSDs utilize the CPU and memory of Ceph nodes to perform data replication, rebalancing, recovery, monitoring and reporting functions. Ceph Monitor A Ceph monitor maintains a master copy of the Ceph storage cluster map with the current state of the storage cluster. For more information about Red Hat Ceph Storage, see the Red Hat Ceph Storage Architecture Guide . 1.2. Requirements This guide contains information supplementary to the Director Installation and Usage guide. Before you deploy a containerized Ceph Storage cluster with your overcloud, your environment must contain the following configuration: An undercloud host with the Red Hat OpenStack Platform director installed. See Installing director . Any additional hardware recommended for Red Hat Ceph Storage. For more information about recommended hardware, see the Red Hat Ceph Storage Hardware Guide . Important The Ceph Monitor service installs on the overcloud Controller nodes, so you must provide adequate resources to avoid performance issues. Ensure that the Controller nodes in your environment use at least 16 GB of RAM for memory and solid-state drive (SSD) storage for the Ceph monitor data. For a medium to large Ceph installation, provide at least 500 GB of Ceph monitor data. This space is necessary to avoid levelDB growth if the cluster becomes unstable. If you use the Red Hat OpenStack Platform director to create Ceph Storage nodes, note the following requirements. 1.2.1. Ceph Storage node requirements Ceph Storage nodes are responsible for providing object storage in a Red Hat OpenStack Platform environment. Placement Groups (PGs) Ceph uses placement groups to facilitate dynamic and efficient object tracking at scale. In the case of OSD failure or cluster rebalancing, Ceph can move or replicate a placement group and its contents, which means a Ceph cluster can re-balance and recover efficiently. The default placement group count that director creates is not always optimal so it is important to calculate the correct placement group count according to your requirements. 
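As a rough starting point before you use the calculator referenced below, the total placement group count for a pool is commonly estimated as (number of OSDs x 100) / replica count, rounded up to the nearest power of two; for example, 12 OSDs with 3 replicas gives 400, which rounds up to 512. Treat this as a rule of thumb only and confirm the value with the calculator.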
You can use the placement group calculator to calculate the correct count: Placement Groups (PGs) per Pool Calculator Processor 64-bit x86 processor with support for the Intel 64 or AMD64 CPU extensions. Memory Red Hat typically recommends a baseline of 16 GB of RAM per OSD host, with an additional 2 GB of RAM per OSD daemon. Disk layout Sizing is dependent on your storage requirements. Red Hat recommends that your Ceph Storage node configuration includes three or more disks in a layout similar to the following example: /dev/sda - The root disk. The director copies the main overcloud image to the disk. Ensure that the disk has a minimum of 40 GB of available disk space. /dev/sdb - The journal disk. This disk divides into partitions for Ceph OSD journals. For example, /dev/sdb1 , /dev/sdb2 , and /dev/sdb3 . The journal disk is usually a solid state drive (SSD) to aid with system performance. /dev/sdc and onward - The OSD disks. Use as many disks as necessary for your storage requirements. Note Red Hat OpenStack Platform director uses ceph-ansible , which does not support installing the OSD on the root disk of Ceph Storage nodes. This means that you need at least two disks for a supported Ceph Storage node. Network Interface Cards A minimum of one 1 Gbps Network Interface Cards, although Red Hat recommends that you use at least two NICs in a production environment. Use additional network interface cards for bonded interfaces or to delegate tagged VLAN traffic. Red Hat recommends that you use a 10 Gbps interface for storage nodes, especially if you want to create an OpenStack Platform environment that serves a high volume of traffic. Power management Each Controller node requires a supported power management interface, such as Intelligent Platform Management Interface (IPMI) functionality on the motherboard of the server. 1.3. Additional resources The /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml environment file instructs the director to use playbooks derived from the ceph-ansible project. These playbooks are installed in /usr/share/ceph-ansible/ of the undercloud. In particular, the following file contains all the default settings that the playbooks apply: /usr/share/ceph-ansible/group_vars/all.yml.sample Warning While ceph-ansible uses playbooks to deploy containerized Ceph Storage, do not edit these files to customize your deployment. Instead, use heat environment files to override the defaults set by these playbooks. If you edit the ceph-ansible playbooks directly, your deployment will fail. For more information about the playbook collection, see the documentation for this project ( http://docs.ceph.com/ceph-ansible/master/ ) to learn more about the playbook collection. Alternatively, for information about the default settings applied by director for containerized Ceph Storage, see the heat templates in /usr/share/openstack-tripleo-heat-templates/deployment/ceph-ansible . Note Reading these templates requires a deeper understanding of how environment files and heat templates work in director. See Understanding Heat Templates and Environment Files for reference. Lastly, for more information about containerized services in OpenStack, see Configuring a basic overcloud with the CLI tools in the Director Installation and Usage guide. | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/deploying_an_overcloud_with_containerized_red_hat_ceph/intro |
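When you do need to change a ceph-ansible default, the override goes into a heat environment file that you pass to the deployment command rather than into the playbooks themselves. The parameter names and values below are examples chosen for illustration, not recommended settings:
parameter_defaults:
  CephPoolDefaultPgNum: 128
  CephAnsibleExtraConfig:
    journal_size: 5120
You would then include this file with -e when running the openstack overcloud deploy command.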
C.5.3. Remove a Passphrase or Key from a Device | C.5.3. Remove a Passphrase or Key from a Device You will be prompted for the passphrase you wish to remove and then for any one of the remaining passphrases for authentication. | [
"cryptsetup luksRemoveKey <device>"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/apcs05s03 |
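A typical session looks like the following; the device name is a placeholder for your encrypted partition, and the final command is only a quick way to confirm that the key slot was freed.

cryptsetup luksRemoveKey /dev/sda3
# First prompt: enter the passphrase you want to remove.
# Second prompt: enter any one of the remaining passphrases to authenticate.
cryptsetup luksDump /dev/sda3 | grep "Key Slot"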
8.184. rpm | 8.184. rpm 8.184.1. RHBA-2013:1665 - rpm bug fix update Updated rpm packages that fix several bugs are now available for Red Hat Enterprise Linux 6. The RPM Package Manager (RPM) is a command-line driven package management system capable of installing, uninstalling, verifying, querying, and updating software packages. Bug Fixes BZ# 868332 Previously, the brp-python-bytecompile script skipped those paths that included "/usr/lib.*/python.+/" string. Consequently, when creating an RPM that contained Python modules in paths (like "/opt/myapp/usr/lib64/python2.6/site-packages/mypackage/"), bytecode was not created. The reproducer specification has been changed, and bytecode is now created for all paths. BZ# 904818 When wildcard characters were used with the "%caps" tag in the spec file, the rpmbuild utility terminated unexpectedly. The provided patch corrects the problem by making a copy of the caps data for each file it applies to, and thus rpmbuild no longer crashes in the described scenario. BZ# 919435 Previously, when installing a package with a high (80k) number of files, the RPM Package Manager terminated unexpectedly with a segmentation fault. As a workaround, the segmentation fault has been replaced with an error message when a package with a high number of files fails to install. BZ# 920190 Previously the rpm program attempted to unconditionally process any "%include" directive it found in a spec file, either leading to unwanted content in the package or error messages. The updated rpm package properly honours various spec conditionals for "%include". BZ# 963724 With this update, Red Hat Enterprise Linux 5 backwards-compatibility option, "%_strict_script_errors macro", has been added. The default behavior of Red Hat Enterprise Linux 6 does not change with this update and users that do not demand this option specifically are not advised to use it. Users of rpm are advised to upgrade to these updated packages, which fix these bugs. All running applications linked against the RPM library must be restarted for this update to take effect. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/rpm |
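For illustration only, the following is a hypothetical %files entry of the kind affected by BZ#904818, combining a wildcard with the %caps tag; the path and capability values are invented for this sketch and are not taken from the advisory. With the updated rpm packages, rpmbuild processes such entries without crashing.

%files
%caps(cap_net_admin,cap_net_raw=ep) /usr/sbin/example-tool*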
Chapter 1. Basic authentication | Chapter 1. Basic authentication HTTP Basic authentication is one of the least resource-demanding techniques that enforce access controls to web resources. You can secure your Quarkus application endpoints by using HTTP Basic authentication. Quarkus includes a built-in authentication mechanism for Basic authentication. Basic authentication uses fields in the HTTP header and does not rely on HTTP cookies, session identifiers, or login pages. 1.1. Authorization header An HTTP user agent, like a web browser, uses an Authorization header to provide a username and password in each HTTP request. The header is specified as Authorization: Basic <credentials> , where credentials are the Base64 encoding of the user ID and password, joined by a colon. Example: If the user name is Alice and the password is secret , the HTTP authorization header would be Authorization: Basic QWxpY2U6c2VjcmV0 , where QWxpY2U6c2VjcmV0 is a Base64 encoded representation of the Alice:secret string. The Basic authentication mechanism does not provide confidentiality protection for the transmitted credentials. The credentials are merely encoded with Base64 when in transit, and not encrypted or hashed in any way. Therefore, to provide confidentiality, use Basic authentication with HTTPS. Basic authentication is a well-specified, simple challenge and response scheme that all web browsers and most web servers understand. 1.2. Limitations with using Basic authentication The following table outlines some limitations of using HTTP Basic authentication to secure your Quarkus applications: Table 1.1. Limitations of HTTP Basic authentication Limitation Description Credentials are sent as plain text Use HTTPS with Basic authentication to avoid exposing the credentials. The risk of exposing credentials as plain text increases if a load balancer terminates HTTPS because the request is forwarded to Quarkus over HTTP. Furthermore, in multi-hop deployments, the credentials can be exposed if HTTPS is used between the client and the first Quarkus endpoint only, and the credentials are propagated to the next Quarkus endpoint over HTTP. Credentials are sent with each request In Basic authentication, a username and password must be sent with each request, increasing the risk of exposing credentials. Application complexity increases The Quarkus application must validate that usernames, passwords, and roles are managed securely. This process, however, can introduce significant complexity to the application. Depending on the use case, other authentication mechanisms that delegate username, password, and role management to specialized services might be more secure. 1.3. Implementing Basic authentication in Quarkus For more information about how you can secure your Quarkus applications by using Basic authentication, see the following resources: Enable Basic authentication Getting started with Security by using Basic authentication and Jakarta Persistence 1.4. Role-based access control Red Hat build of Quarkus also includes built-in security to allow for role-based access control (RBAC) based on the common security annotations @RolesAllowed , @DenyAll , @PermitAll on REST endpoints and CDI beans. For more information, see the Quarkus Authorization of web endpoints guide. 1.5.
References Quarkus Security overview Quarkus Security architecture Other supported authentication mechanisms Identity providers Authorization of web endpoints | null | https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.15/html/basic_authentication/security-basic-authentication |
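As a quick sketch of the mechanism described above, you can build and send the Authorization header from a shell; the endpoint URL is a placeholder for whatever resource your Quarkus application exposes.

echo -n 'Alice:secret' | base64
# QWxpY2U6c2VjcmV0

curl -H 'Authorization: Basic QWxpY2U6c2VjcmV0' https://example.com/api/hello
# Equivalent, letting curl encode the credentials for you:
curl -u Alice:secret https://example.com/api/hello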
probe::signal.syskill.return | probe::signal.syskill.return Name probe::signal.syskill.return - Sending kill signal completed Synopsis signal.syskill.return Values None | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-signal-syskill-return |
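A minimal SystemTap one-liner that exercises this probe point, run as root on a host with the matching kernel debuginfo installed, might look like the following; the output format is only an example.

stap -e 'probe signal.syskill.return { printf("kill() returned in %s (pid %d)\n", execname(), pid()) }'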
Chapter 3. Configuring the Red Hat Ceph Storage cluster for HCI | Chapter 3. Configuring the Red Hat Ceph Storage cluster for HCI This chapter describes how to configure and deploy the Red Hat Ceph Storage cluster for HCI environments. 3.1. Deployment prerequisites Confirm that the following has been performed before attempting to configure and deploy the Red Hat Ceph Storage cluster: Provision of bare metal instances and their networks using the Bare Metal Provisioning service (ironic). For more information about the provisioning of bare metal instances, see Configuring the Bare Metal Provisioning service . 3.2. The openstack overcloud ceph deploy command If you deploy the Ceph cluster using director, you must use the openstack overcloud ceph deploy command. For a complete listing of command options and parameters, see openstack overcloud ceph deploy in the Command line interface reference . The command openstack overcloud ceph deploy --help provides the current options and parameters available in your environment. 3.3. Ceph configuration overrides for HCI A standard format initialization file is an option for Ceph cluster configuration. This initialization file is then used to configure the Ceph cluster with either the cephadm bootstrap --config <file_name> or openstack overcloud ceph deploy --config <file_name> commands. Colocating Ceph OSD and Compute services on hyperconverged nodes risks resource contention between Red Hat Ceph Storage and Compute services. This occurs because the services are not aware of the colocation. Resource contention can result in service degradation, which offsets the benefits of hyperconvergence. Resource allocation can be tuned using an initialization file to manage resource contention. The following example creates an initialization file called initial-ceph.conf and then uses the openstack overcloud ceph deploy command to configure the HCI deployment. The osd_memory_target_autotune option is set to true so that the OSD daemons adjust their memory consumption based on the osd_memory_target config option. The autotune_memory_target_ratio defaults to 0.7 . This indicates that 70% of the total RAM in the system is the starting point from which any memory consumed by non-autotuned Ceph daemons is subtracted. Then the remaining memory is divided among the OSDs, assuming all OSDs have osd_memory_target_autotune set to true . For HCI deployments, set the mgr/cephadm/autotune_memory_target_ratio to 0.2 to ensure more memory is available for the Compute service. The 0.2 value is a cautious starting point. After deployment, use the ceph command to change this value if necessary. A two NUMA node system can host a latency-sensitive Nova workload on one NUMA node and a Ceph OSD workload on the other NUMA node. To configure Ceph OSDs to use a specific NUMA node not used by the Compute workload, use either of the following Ceph OSD configurations: osd_numa_node sets affinity to a NUMA node osd_numa_auto_affinity automatically sets affinity to the NUMA node where storage and network match If there are network interfaces on both NUMA nodes and the disk controllers are on NUMA node 0, use a network interface on NUMA node 0 for the storage network and host the Ceph OSD workload on NUMA node 0. Host the Nova workload on NUMA node 1 and have it use the network interfaces on NUMA node 1. Set osd_numa_auto_affinity to true to achieve this configuration. Alternatively, the osd_numa_node could be set directly to 0 and a value would not be set for osd_numa_auto_affinity so that it defaults to false.
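As a sketch of the post-deployment tuning mentioned above, both the autotune ratio and the NUMA affinity options can be adjusted on the running cluster with the ceph command; the values shown here are illustrative only, not recommendations.

cephadm shell -- ceph config set mgr mgr/cephadm/autotune_memory_target_ratio 0.2
cephadm shell -- ceph config set osd osd_numa_auto_affinity true
cephadm shell -- ceph config get osd osd_numa_auto_affinity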
When a hyperconverged cluster backfills as a result of an OSD going offline, the backfill process can be slowed down. In exchange for a slower recovery, the backfill activity has less of an impact on the collocated Compute workload. Red Hat Ceph Storage has the following defaults to control the rate of backfill activity: osd_recovery_op_priority = 3 osd_max_backfills = 1 osd_recovery_max_active_hdd = 3 osd_recovery_max_active_ssd = 10 Note It is not necessary to pass these defaults in an initialization file as they are the default values. If values other than the defaults are desired for the initial configuration, add them to the initialization file with the required values before deployment. After deployment, use the ceph config set osd command to change the values. 3.4. Configuring time synchronization The Time Synchronization Service (chrony) is enabled for time synchronization by default. You can perform the following tasks to configure the service. Configuring time synchronization with a delimited list Configuring time synchronization with an environment file Disabling time synchronization Note Time synchronization is configured using either a delimited list or an environment file. Use the procedure that is best suited to your administrative practices. 3.4.1. Configuring time synchronization with a delimited list You can configure the Time Synchronization Service (chrony) to use a delimited list to configure NTP servers. Procedure Log in to the undercloud node as the stack user. Configure NTP servers with a delimited list: Replace <ntp_server_list> with a comma-delimited list of servers. 3.4.2. Configuring time synchronization with an environment file You can configure the Time Synchronization Service (chrony) to use an environment file that defines NTP servers. Procedure Log in to the undercloud node as the stack user. Create an environment file, such as /home/stack/templates/ntp-parameters.yaml , to contain the NTP server configuration. Add the NtpServer parameter. The NtpServer parameter contains a comma-delimited list of NTP servers. Configure NTP servers with an environment file: Replace <ntp_file_name> with the name of the environment file you created. 3.4.3. Disabling time synchronization The Time Synchronization Service (chrony) is enabled by default. You can disable the service if you do not want to use it. Procedure Log in to the undercloud node as the stack user. Disable the Time Synchronization Service (chrony): 3.5. Configuring a top level domain suffix You can configure a top level domain (TLD) suffix. This suffix is added to the short hostname to create a fully qualified domain name for overcloud nodes. Note A fully qualified domain name is required for TLS-e configuration. Procedure Log in to the undercloud node as the stack user. Configure the top level domain suffix: Replace <domain_name> with the required domain name. 3.6. Configuring the Red Hat Ceph Storage cluster name You can deploy the Red Hat Ceph Storage cluster with a name that you configure. The default name is ceph . Procedure Log in to the undercloud node as the stack user. Configure the name of the Ceph Storage cluster by using the following command: openstack overcloud ceph deploy --cluster <cluster_name> Example: USD openstack overcloud ceph deploy --cluster central Note Keyring files are not created at this time. Keyring files are created during the overcloud deployment. Keyring files inherit the cluster name configured during this procedure.
For more information about overcloud deployment see Section 5.8, "Initiating overcloud deployment for HCI" In the example above, the Ceph cluster is named central . The configuration and keyring files for the central Ceph cluster would be created in /etc/ceph during the deployment process. Troubleshooting The following error may be displayed if you configure a custom name for the Ceph Storage cluster: monclient: get_monmap_and_config cannot identify monitors to contact because If this error is displayed, use the following command after Ceph deployment: cephadm shell --config <configuration_file> --keyring <keyring_file> For example, if this error was displayed when you configured the cluster name to central , you would use the following command: The following command could also be used as an alternative: 3.7. Configuring network options with the network data file The network data file describes the networks used by the Red Hat Ceph Storage cluster. Procedure Log in to the undercloud node as the stack user. Create a YAML format file that defines the custom network attributes called network_data.yaml . Important Using network isolation, the standard network deployment consists of two storage networks which map to the two Ceph networks: The storage network, storage , maps to the Ceph network, public_network . This network handles storage traffic such as the RBD traffic from the Compute nodes to the Ceph cluster. The storage network, storage_mgmt , maps to the Ceph network, cluster_network . This network handles storage management traffic such as data replication between Ceph OSDs. Use the openstack overcloud ceph deploy command with the --crush-hierarchy option to deploy the configuration. Important The openstack overcloud ceph deploy command uses the network data file specified by the --network-data option to determine the networks to be used as the public_network and cluster_network . The command assumes these networks are named storage and storage_mgmt in network data file unless a different name is specified by the --public-network-name and --cluster-network-name options. You must use the --network-data option when deploying with network isolation. The default undercloud (192.168.24.0/24) will be used for both the public_network and cluster_network if you do not use this option. 3.8. Configuring network options with a configuration file Network options can be specified with a configuration file as an alternative to the network data file. Important Using this method to configure network options overwrites automatically generated values in network_data.yaml . Ensure you set all four values when using this network configuration method. Procedure Log in to the undercloud node as the stack user. Create a standard format initialization file to configure the Ceph cluster. If you have already created a file to include other configuration options, you can add the network configuration to it. Add the following parameters to the [global] section of the file: public_network cluster_network ms_bind_ipv4 Important Ensure the public_network and cluster_network map to the same networks as storage and storage_mgmt . The following is an example of a configuration file entry for a network configuration with multiple subnets and custom networking names: Use the command openstack overcloud ceph deploy with the --config option to deploy the configuration file. 3.9. 
Configuring a CRUSH hierarchy for an OSD You can configure a custom Controlled Replication Under Scalable Hashing (CRUSH) hierarchy during OSD deployment to add the OSD location attribute to the Ceph Storage cluster hosts specification. The location attribute configures where the OSD is placed within the CRUSH hierarchy. Note The location attribute sets only the initial CRUSH location. Subsequent changes of the attribute are ignored. Procedure Log in to the undercloud node as the stack user. Source the stackrc undercloud credentials file: USD source ~/stackrc Create a configuration file to define the custom CRUSH hierarchy, for example, crush_hierarchy.yaml . Add the following configuration to the file: Replace <osd_host> with the hostnames of the nodes where the OSDs are deployed, for example, ceph-0 . Replace <rack_num> with the number of the rack where the OSDs are deployed, for example, r0 . Deploy the Ceph cluster with your custom OSD layout: The Ceph cluster is created with the custom OSD layout. The example file above would result in the following OSD layout. Note Device classes are automatically detected by Ceph but CRUSH rules are associated with pools. Pools are still defined and created using the CephCrushRules parameter during the overcloud deployment. Additional resources See Red Hat Ceph Storage workload considerations in the Red Hat Ceph Storage Installation Guide for additional information. 3.10. Configuring Ceph service placement options You can define what nodes run what Ceph services using a custom roles file. A custom roles file is only necessary when default role assignments are not used because of the environment. For example, when deploying hyperconverged nodes, the predeployed compute nodes should be labeled as osd with a service type of osd to have a placement list containing a list of compute instances. Service definitions in the roles_data.yaml file determine which bare metal instance runs which service. By default, the Controller role has the CephMon and CephMgr service while the CephStorage role has the CephOSD service. Unlike most composable services, Ceph services do not require heat output to determine how services are configured. The roles_data.yaml file always determines Ceph service placement even though the deployed Ceph process occurs before Heat runs. Procedure Log in to the undercloud node as the stack user. Create a YAML format file that defines the custom roles. Deploy the configuration file: 3.11. Configuring SSH user options for Ceph nodes The openstack overcloud ceph deploy command creates the user and keys and distributes them to the hosts so it is not necessary to perform the procedures in this section. However, it is a supported option. Cephadm connects to all managed remote Ceph nodes using SSH. The Red Hat Ceph Storage cluster deployment process creates an account and SSH key pair on all overcloud Ceph nodes. The key pair is then given to Cephadm so it can communicate with the nodes. 3.11.1. Creating the SSH user before Red Hat Ceph Storage cluster creation You can create the SSH user before Ceph cluster creation with the openstack overcloud ceph user enable command. Procedure Log in to the undercloud node as the stack user. Create the SSH user: USD openstack overcloud ceph user enable <specification_file> Replace <specification_file> with the path and name of a Ceph specification file that describes the cluster where the user is created and the public SSH keys are installed. 
The specification file provides the information to determine which nodes to modify and if the private keys are required. For more information on creating a specification file, see Generating the service specification . Note The default user name is ceph-admin . To specify a different user name, use the --cephadm-ssh-user option to specify a different one. openstack overcloud ceph user enable --cephadm-ssh-user <custom_user_name> It is recommended to use the default name and not use the --cephadm-ssh-user parameter. If the user is created in advance, use the parameter --skip-user-create when executing openstack overcloud ceph deploy . 3.11.2. Disabling the SSH user Disabling the SSH user disables cephadm . Disabling cephadm removes the ability of the service to administer the Ceph cluster and prevents associated commands from working. It also prevents Ceph node overcloud scaling operations. It also removes all public and private SSH keys. Procedure Log in to the undercloud node as the stack user. Use the command openstack overcloud ceph user disable --fsid <FSID> <specification_file> to disable the SSH user. Replace <FSID> with the File System ID of the cluster. The FSID is a unique identifier for the cluster. The FSID is located in the deployed_ceph.yaml environment file. Replace <specification_file> with the path and name of a Ceph specification file that describes the cluster where the user was created. Important The openstack overcloud ceph user disable command is not recommended unless it is necessary to disable cephadm . Important To enable the SSH user and Ceph orchestrator service after being disabled, use the openstack overcloud ceph user enable --fsid <FSID> <specification_file> command. Note This command requires the path to a Ceph specification file to determine: Which hosts require the SSH user. Which hosts have the _admin label and require the private SSH key. Which hosts require the public SSH key. For more information about specification files and how to generate them, see Generating the service specification. 3.12. Accessing Ceph Storage containers Preparing container images in the Installing and managing Red Hat OpenStack Platform with director guide contains procedures and information on how to prepare the registry and your undercloud and overcloud configuration to use container images. Use the information in this section to adapt these procedures to access Ceph Storage containers. There are two options for accessing Ceph Storage containers from the overcloud. Downloading containers directly from a remote registry Cacheing containers on the undercloud 3.12.1. Cacheing containers on the undercloud The procedure Modifying images during preparation describes using the following command: If you do not use the --container-image-prepare option to provide authentication credentials to the openstack overcloud ceph deploy command and directly download the Ceph containers from a remote registry, as described in Downloading containers directly from a remote registry , you must run the sudo openstack tripleo container image prepare command before deploying Ceph. 3.12.2. Downloading containers directly from a remote registry You can configure Ceph to download containers directly from a remote registry. The cephadm command uses the credentials that are configured in the containers-prepare-parameter.yaml file to authenticate to the remote registry and download the Red Hat Ceph Storage container. 
Procedure Create a containers-prepare-parameter.yaml file using the procedure Preparing container images in the Installing and managing Red Hat OpenStack Platform with director guide. Add the remote registry credentials to the containers-prepare-parameter.yaml file using the ContainerImageRegistryCredentials parameter as described in Obtaining container images from private registries . When you deploy Ceph, pass the containers-prepare-parameter.yaml file using the openstack overcloud ceph deploy command. Note If you do not cache the containers on the undercloud, as described in Cacheing containers on the undercloud , then you should pass the same containers-prepare-parameter.yaml file to the openstack overcloud ceph deploy command when you deploy Ceph. This will cache containers on the undercloud. | [
"cat <<EOF > initial-ceph.conf [osd] osd_memory_target_autotune = true osd_numa_auto_affinity = true [mgr] mgr/cephadm/autotune_memory_target_ratio = 0.2 EOF openstack overcloud ceph deploy --config initial-ceph.conf",
"openstack overcloud ceph deploy --ntp-server \"<ntp_server_list>\"",
"openstack overcloud ceph deploy --ntp-server \"0.pool.ntp.org,1.pool.ntp.org\"",
"parameter_defaults: NtpServer: 0.pool.ntp.org,1.pool.ntp.org",
"openstack overcloud ceph deploy --ntp-heat-env-file \"<ntp_file_name>\"",
"openstack overcloud ceph deploy --ntp-heat-env-file \"/home/stack/templates/ntp-parameters.yaml\"",
"openstack overcloud ceph deploy --skip-ntp",
"openstack overcloud ceph deploy --tld \"<domain_name>\"",
"openstack overcloud ceph deploy --tld \"example.local\"",
"ls -l /etc/ceph/ total 16 -rw-------. 1 root root 63 Mar 26 21:49 central.client.admin.keyring -rw-------. 1 167 167 201 Mar 26 22:17 central.client.openstack.keyring -rw-------. 1 167 167 134 Mar 26 22:17 central.client.radosgw.keyring -rw-r--r--. 1 root root 177 Mar 26 21:49 central.conf",
"cephadm shell --config /etc/ceph/central.conf --keyring /etc/ceph/central.client.admin.keyring",
"cephadm shell --mount /etc/ceph:/etc/ceph export CEPH_ARGS='--cluster central'",
"openstack overcloud ceph deploy deployed_metal.yaml -o deployed_ceph.yaml --network-data network_data.yaml",
"[global] public_network = 172.16.14.0/24,172.16.15.0/24 cluster_network = 172.16.12.0/24,172.16.13.0/24 ms_bind_ipv4 = True ms_bind_ipv6 = False",
"openstack overcloud ceph deploy --config initial-ceph.conf --network-data network_data.yaml",
"<osd_host>: root: default rack: <rack_num> <osd_host>: root: default rack: <rack_num> <osd_host>: root: default rack: <rack_num>",
"openstack overcloud ceph deploy deployed_metal.yaml -o deployed_ceph.yaml --osd-spec osd_spec.yaml --crush-hierarchy crush_hierarchy.yaml",
"ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF -1 0.02939 root default -3 0.00980 rack r0 -2 0.00980 host ceph-node-00 0 hdd 0.00980 osd.0 up 1.00000 1.00000 -5 0.00980 rack r1 -4 0.00980 host ceph-node-01 1 hdd 0.00980 osd.1 up 1.00000 1.00000 -7 0.00980 rack r2 -6 0.00980 host ceph-node-02 2 hdd 0.00980 osd.2 up 1.00000 1.00000",
"openstack overcloud ceph deploy deployed_metal.yaml -o deployed_ceph.yaml --roles-data custom_roles.yaml",
"sudo openstack tripleo container image prepare -e ~/containers-prepare-parameter.yaml \\",
"openstack overcloud ceph deploy --container-image-prepare containers-prepare-parameter.yaml"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/deploying_a_hyperconverged_infrastructure/assembly_deployed_hci_ceph_storage_cluster_hci |
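As a sketch of the direct-download flow described in section 3.12.2, the credentials entry added to containers-prepare-parameter.yaml typically takes the following shape, and the same file is then passed to the deploy command; the registry name, service account, and token below are placeholders.

parameter_defaults:
  ContainerImageRegistryCredentials:
    registry.redhat.io:
      my-service-account: '<registry_token>'

openstack overcloud ceph deploy deployed_metal.yaml -o deployed_ceph.yaml \
  --container-image-prepare containers-prepare-parameter.yaml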
Chapter 2. Setting notifications and email preferences | Chapter 2. Setting notifications and email preferences When you configure notifications and user preferences in the Red Hat Hybrid Cloud Console, Red Hat Insights notifies you of policy changes to your Red Hat Enterprise Linux systems. 2.1. Enabling notifications and integrations for the policies service You can enable the notifications service on the Red Hat Hybrid Cloud Console to send notifications whenever the policy service detects an issue and generates an alert. Using the notifications service frees you from having to continually check the Red Hat Insights Dashboard for alerts. For example, you can configure the notifications service to automatically send an email message whenever the policies service detects that a server's security software is out of date, or to send an email digest of all the alerts that the policies service generates each day. In addition to sending email messages, you can configure the notifications service to send policies event data in other ways: Using an authenticated client to query Red Hat Insights APIs for event data Using webhooks to send events to third-party applications that accept inbound requests Integrating notifications with applications such as Splunk to route policies events to the application dashboard Enabling the notifications service requires three main steps: First, an Organization Administrator creates a User access group with the Notifications administrator role, and then adds account members to the group. Next, a Notifications administrator sets up behavior groups for events in the notifications service. Behavior groups specify the delivery method for each notification. For example, a behavior group can specify whether email notifications are sent to all users, or just to Organization administrators. Finally, users who receive email notifications from events must set their user preferences so that they receive individual emails for each event. Additional resources For more information about configuring Hybrid Cloud Console notifications to learn of identified events that have occurred and could impact your organization, see Configuring notifications on the Red Hat Hybrid Cloud Console . For more information about configuring Hybrid Cloud Console notifications to integrate with third-party applications, see Integrating the Red Hat Hybrid Cloud Console with third-party applications . 2.2. Setting user preferences To receive email notifications, you can set or update your email preferences using the following procedure. Procedure Navigate to Operations > Policies . Click Open user preferences . The My Notifications page appears. Select Red Hat Enterprise Linux > Policies from the left menu. Check the appropriate boxes to define your policies notification preferences. Depending on your email notification preferences, you can subscribe to Instant notification emails for each system with triggered policies or a Daily digest summarizing triggered application events in a 24-hour time frame. To unsubscribe from all notifications, select Unsubscribe from all . Note Subscribing to instant notifications can result in receiving many emails on large inventories. To reduce the volume of emails, consider selecting the Daily digest option. Click Submit . | null | https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/monitoring_and_reacting_to_configuration_changes_using_policies/policies-notifications_intro-policies
Chapter 3. Reference design specifications | Chapter 3. Reference design specifications 3.1. Telco core and RAN DU reference design specifications The telco core reference design specification (RDS) describes OpenShift Container Platform 4.17 clusters running on commodity hardware that can support large scale telco applications including control plane and some centralized data plane functions. The telco RAN RDS describes the configuration for clusters running on commodity hardware to host 5G workloads in the Radio Access Network (RAN). 3.1.1. Reference design specifications for telco 5G deployments Red Hat and certified partners offer deep technical expertise and support for networking and operational capabilities required to run telco applications on OpenShift Container Platform 4.17 clusters. Red Hat's telco partners require a well-integrated, well-tested, and stable environment that can be replicated at scale for enterprise 5G solutions. The telco core and RAN DU reference design specifications (RDS) outline the recommended solution architecture based on a specific version of OpenShift Container Platform. Each RDS describes a tested and validated platform configuration for telco core and RAN DU use models. The RDS ensures an optimal experience when running your applications by defining the set of critical KPIs for telco 5G core and RAN DU. Following the RDS minimizes high severity escalations and improves application stability. 5G use cases are evolving and your workloads are continually changing. Red Hat is committed to iterating over the telco core and RAN DU RDS to support evolving requirements based on customer and partner feedback. 3.1.2. Reference design scope The telco core and telco RAN reference design specifications (RDS) capture the recommended, tested, and supported configurations to get reliable and repeatable performance for clusters running the telco core and telco RAN profiles. Each RDS includes the released features and supported configurations that are engineered and validated for clusters to run the individual profiles. The configurations provide a baseline OpenShift Container Platform installation that meets feature and KPI targets. Each RDS also describes expected variations for each individual configuration. Validation of each RDS includes many long duration and at-scale tests. Note The validated reference configurations are updated for each major Y-stream release of OpenShift Container Platform. Z-stream patch releases are periodically re-tested against the reference configurations. 3.1.3. Deviations from the reference design Deviating from the validated telco core and telco RAN DU reference design specifications (RDS) can have significant impact beyond the specific component or feature that you change. Deviations require analysis and engineering in the context of the complete solution. Important All deviations from the RDS should be analyzed and documented with clear action tracking information. Due diligence is expected from partners to understand how to bring deviations into line with the reference design. This might require partners to provide additional resources to engage with Red Hat to work towards enabling their use case to achieve a best in class outcome with the platform. This is critical for the supportability of the solution and ensuring alignment across Red Hat and with partners. Deviation from the RDS can have some or all of the following consequences: It can take longer to resolve issues. 
There is a risk of missing project service-level agreements (SLAs), project deadlines, end provider performance requirements, and so on. Unapproved deviations may require escalation at executive levels. Note Red Hat prioritizes the servicing of requests for deviations based on partner engagement priorities. 3.2. Telco RAN DU reference design specification 3.2.1. Telco RAN DU 4.17 reference design overview The Telco RAN distributed unit (DU) 4.17 reference design configures an OpenShift Container Platform 4.17 cluster running on commodity hardware to host telco RAN DU workloads. It captures the recommended, tested, and supported configurations to get reliable and repeatable performance for a cluster running the telco RAN DU profile. 3.2.1.1. Deployment architecture overview You deploy the telco RAN DU 4.17 reference configuration to managed clusters from a centrally managed RHACM hub cluster. The reference design specification (RDS) includes configuration of the managed clusters and the hub cluster components. Figure 3.1. Telco RAN DU deployment architecture overview 3.2.2. Telco RAN DU use model overview Use the following information to plan telco RAN DU workloads, cluster resources, and hardware specifications for the hub cluster and managed single-node OpenShift clusters. 3.2.2.1. Telco RAN DU application workloads DU worker nodes must have 3rd Generation Xeon (Ice Lake) 2.20 GHz or better CPUs with firmware tuned for maximum performance. 5G RAN DU user applications and workloads should conform to the following best practices and application limits: Develop cloud-native network functions (CNFs) that conform to the latest version of the Red Hat Best Practices for Kubernetes . Use SR-IOV for high performance networking. Use exec probes sparingly and only when no other suitable options are available Do not use exec probes if a CNF uses CPU pinning. Use other probe implementations, for example, httpGet or tcpSocket . When you need to use exec probes, limit the exec probe frequency and quantity. The maximum number of exec probes must be kept below 10, and frequency must not be set to less than 10 seconds. Avoid using exec probes unless there is absolutely no viable alternative. Note Startup probes require minimal resources during steady-state operation. The limitation on exec probes applies primarily to liveness and readiness probes. A test workload that conforms to the dimensions of the reference DU application workload described in this specification can be found at openshift-kni/du-test-workloads . 3.2.2.2. Telco RAN DU representative reference application workload characteristics The representative reference application workload has the following characteristics: Has a maximum of 15 pods and 30 containers for the vRAN application including its management and control functions Uses a maximum of 2 ConfigMap and 4 Secret CRs per pod Uses a maximum of 10 exec probes with a frequency of not less than 10 seconds Incremental application load on the kube-apiserver is less than 10% of the cluster platform usage Note You can extract CPU load can from the platform metrics. For example: query=avg_over_time(pod:container_cpu_usage:sum{namespace="openshift-kube-apiserver"}[30m]) Application logs are not collected by the platform log collector Aggregate traffic on the primary CNI is less than 1 MBps 3.2.2.3. Telco RAN DU worker node cluster resource utilization The maximum number of running pods in the system, inclusive of application workloads and OpenShift Container Platform pods, is 120. 
Resource utilization OpenShift Container Platform resource utilization varies depending on many factors including application workload characteristics such as: Pod count Type and frequency of probes Messaging rates on primary CNI or secondary CNI with kernel networking API access rate Logging rates Storage IOPS Cluster resource requirements are applicable under the following conditions: The cluster is running the described representative application workload. The cluster is managed with the constraints described in "Telco RAN DU worker node cluster resource utilization". Components noted as optional in the RAN DU use model configuration are not applied. Important You will need to do additional analysis to determine the impact on resource utilization and ability to meet KPI targets for configurations outside the scope of the Telco RAN DU reference design. You might have to allocate additional resources in the cluster depending on your requirements. Additional resources Telco RAN DU 4.17 validated software components 3.2.2.4. Hub cluster management characteristics Red Hat Advanced Cluster Management (RHACM) is the recommended cluster management solution. Configure it to the following limits on the hub cluster: Configure a maximum of 5 RHACM policies with a compliant evaluation interval of at least 10 minutes. Use a maximum of 10 managed cluster templates in policies. Where possible, use hub-side templating. Disable all RHACM add-ons except for the policy-controller and observability-controller add-ons. Set Observability to the default configuration. Important Configuring optional components or enabling additional features will result in additional resource usage and can reduce overall system performance. For more information, see Reference design deployment components . Table 3.1. OpenShift platform resource utilization under reference application load Metric Limit Notes CPU usage Less than 4000 mc - 2 cores (4 hyperthreads) Platform CPU is pinned to reserved cores, including both hyperthreads in each reserved core. The system is engineered to use 3 CPUs (3000mc) at steady-state to allow for periodic system tasks and spikes. Memory used Less than 16G 3.2.2.5. Telco RAN DU RDS components The following sections describe the various OpenShift Container Platform components and configurations that you use to configure and deploy clusters to run telco RAN DU workloads. Figure 3.2. Telco RAN DU reference design components Note Ensure that components that are not included in the telco RAN DU profile do not affect the CPU resources allocated to workload applications. Important Out of tree drivers are not supported. Additional resources For details of the telco RAN RDS KPI test results, see Telco RAN DU 4.17 reference design specification KPI test results . This information is only available to customers and partners. 3.2.3. Telco RAN DU 4.17 reference design components The following sections describe the various OpenShift Container Platform components and configurations that you use to configure and deploy clusters to run RAN DU workloads. 3.2.3.1. Host firmware tuning New in this release You can now configure host firmware settings for managed clusters that you deploy with GitOps ZTP. Description Tune host firmware settings for optimal performance during initial cluster deployment. The managed cluster host firmware settings are available on the hub cluster as BareMetalHost custom resources (CRs) that are created when you deploy the managed cluster with the SiteConfig CR and GitOps ZTP. 
Limits and requirements Hyperthreading must be enabled Engineering considerations Tune all settings for maximum performance. All settings are expected to be for maximum performance unless tuned for power savings. You can tune host firmware for power savings at the expense of performance as required. Enable secure boot. With secure boot enabled, only signed kernel modules are loaded by the kernel. Out-of-tree drivers are not supported. Additional resources Managing host firmware settings with GitOps ZTP Configuring host firmware for low latency and high performance Creating a performance profile 3.2.3.2. Node Tuning Operator New in this release No reference design updates in this release Description You tune the cluster performance by creating a performance profile. Important The RAN DU use case requires the cluster to be tuned for low-latency performance. Limits and requirements The Node Tuning Operator uses the PerformanceProfile CR to configure the cluster. You need to configure the following settings in the RAN DU profile PerformanceProfile CR: Select reserved and isolated cores and ensure that you allocate at least 4 hyperthreads (equivalent to 2 cores) on Intel 3rd Generation Xeon (Ice Lake) 2.20 GHz CPUs or better with firmware tuned for maximum performance. Set the reserved cpuset to include both hyperthread siblings for each included core. Unreserved cores are available as allocatable CPU for scheduling workloads. Ensure that hyperthread siblings are not split across reserved and isolated cores. Configure reserved and isolated CPUs to include all threads in all cores based on what you have set as reserved and isolated CPUs. Set core 0 of each NUMA node to be included in the reserved CPU set. Set the huge page size to 1G. Note You should not add additional workloads to the management partition. Only those pods which are part of the OpenShift management platform should be annotated into the management partition. Engineering considerations You should use the RT kernel to meet performance requirements. However, you can use the non-RT kernel with a corresponding impact to cluster performance if required. The number of huge pages that you configure depends on the application workload requirements. Variation in this parameter is expected and allowed. Variation is expected in the configuration of reserved and isolated CPU sets based on selected hardware and additional components in use on the system. Variation must still meet the specified limits. Hardware without IRQ affinity support impacts isolated CPUs. To ensure that pods with guaranteed whole CPU QoS have full use of the allocated CPU, all hardware in the server must support IRQ affinity. For more information, see "Finding the effective IRQ affinity setting for a node". When you enable workload partitioning during cluster deployment with the cpuPartitioningMode: AllNodes setting, the reserved CPU set in the PerformanceProfile CR must include enough CPUs for the operating system, interrupts, and OpenShift platform pods. Important cgroups v1 is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes. 
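A minimal sketch of a PerformanceProfile CR that follows the constraints above; the CPU ranges, hugepage count, and node selector are illustrative assumptions for a hypothetical 32-thread single-node host (hyperthread siblings paired as 0/16, 1/17, and so on), not reference values.

apiVersion: performance.openshift.io/v2
kind: PerformanceProfile
metadata:
  name: openshift-node-performance-profile
spec:
  cpu:
    reserved: "0-1,16-17"    # both hyperthread siblings of cores 0 and 1, including core 0 of the NUMA node
    isolated: "2-15,18-31"
  hugepages:
    defaultHugepagesSize: 1G
    pages:
      - size: 1G
        count: 16
  realTimeKernel:
    enabled: true
  nodeSelector:
    node-role.kubernetes.io/master: ""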
Additional resources Finding the effective IRQ affinity setting for a node 3.2.3.3. PTP Operator New in this release A new version two of the Precision Time Protocol (PTP) fast event REST API is available. Consumer applications can now subscribe directly to the events REST API in the PTP events producer sidecar. The PTP fast event REST API v2 is compliant with the O-RAN O-Cloud Notification API Specification for Event Consumers 3.0 . You can change the API version by setting the ptpEventConfig.apiVersion field in the PtpOperatorConfig resource. Description See "Recommended single-node OpenShift cluster configuration for vDU application workloads" for details of support and configuration of PTP in cluster nodes. The DU node can run in the following modes: As an ordinary clock (OC) synced to a grandmaster clock or boundary clock (T-BC). As a grandmaster clock (T-GM) synced from GPS with support for single or dual card E810 NICs. As dual boundary clocks (one per NIC) with support for E810 NICs. As a T-BC with a highly available (HA) system clock when there are multiple time sources on different NICs. Optional: as a boundary clock for radio units (RUs). Limits and requirements Limited to two boundary clocks for dual NIC and HA. Limited to two card E810 configurations for T-GM. Engineering considerations Configurations are provided for ordinary clock, boundary clock, boundary clock with highly available system clock, and grandmaster clock. PTP fast event notifications uses ConfigMap CRs to store PTP event subscriptions. The PTP events REST API v2 does not have a global subscription for all lower hierarchy resources contained in the resource path. You subscribe consumer applications to the various available event types separately. Additional resources Recommended PTP single-node OpenShift cluster configuration for vDU application workloads 3.2.3.4. SR-IOV Operator New in this release No reference design updates in this release Description The SR-IOV Operator provisions and configures the SR-IOV CNI and device plugins. Both netdevice (kernel VFs) and vfio (DPDK) devices are supported and applicable to the RAN use models. Limits and requirements Use OpenShift Container Platform supported devices SR-IOV and IOMMU enablement in BIOS: The SR-IOV Network Operator will automatically enable IOMMU on the kernel command line. SR-IOV VFs do not receive link state updates from the PF. If link down detection is needed you must configure this at the protocol level. NICs which do not support firmware updates using Secure Boot or kernel lockdown must be pre-configured with sufficient virtual functions (VFs) to support the number of VFs required by the application workload. Note You might need to disable the SR-IOV Operator plugin for unsupported NICs using the undocumented disablePlugins option. Engineering considerations SR-IOV interfaces with the vfio driver type are typically used to enable additional secondary networks for applications that require high throughput or low latency. Customer variation on the configuration and number of SriovNetwork and SriovNetworkNodePolicy custom resources (CRs) is expected. IOMMU kernel command line settings are applied with a MachineConfig CR at install time. This ensures that the SriovOperator CR does not cause a reboot of the node when adding them. SR-IOV support for draining nodes in parallel is not applicable in a single-node OpenShift cluster. If you exclude the SriovOperatorConfig CR from your deployment, the CR will not be created automatically. 
In scenarios where you pin or restrict workloads to specific nodes, the SR-IOV parallel node drain feature will not result in the rescheduling of pods. In these scenarios, the SR-IOV Operator disables the parallel node drain functionality. Additional resources Preparing the GitOps ZTP site configuration repository for version independence Configuring QinQ support for SR-IOV enabled workloads 3.2.3.5. Logging New in this release Cluster Logging Operator 6.0 is new in this release. Update your existing implementation to adapt to the new version of the API. Description Use logging to collect logs from the far edge node for remote analysis. The recommended log collector is Vector. Engineering considerations Handling logs beyond the infrastructure and audit logs, for example, from the application workload requires additional CPU and network bandwidth based on additional logging rate. As of OpenShift Container Platform 4.14, Vector is the reference log collector. Note Use of fluentd in the RAN use model is deprecated. Additional resources About logging 3.2.3.6. SRIOV-FEC Operator New in this release No reference design updates in this release Description SRIOV-FEC Operator is an optional 3rd party Certified Operator supporting FEC accelerator hardware. Limits and requirements Starting with FEC Operator v2.7.0: SecureBoot is supported The vfio driver for the PF requires the usage of vfio-token that is injected into Pods. Applications in the pod can pass the VF token to DPDK by using the EAL parameter --vfio-vf-token . Engineering considerations The SRIOV-FEC Operator uses CPU cores from the isolated CPU set. You can validate FEC readiness as part of the pre-checks for application deployment, for example, by extending the validation policy. Additional resources SRIOV-FEC Operator for Intel(R) vRAN Dedicated Accelerator manager container 3.2.3.7. Lifecycle Agent New in this release No reference design updates in this release Description The Lifecycle Agent provides local lifecycle management services for single-node OpenShift clusters. Limits and requirements The Lifecycle Agent is not applicable in multi-node clusters or single-node OpenShift clusters with an additional worker. Requires a persistent volume that you create when installing the cluster. See "Configuring a shared container directory between ostree stateroots when using GitOps ZTP" for partition requirements. Additional resources Understanding the image-based upgrade for single-node OpenShift clusters Configuring a shared container directory between ostree stateroots when using GitOps ZTP 3.2.3.8. Local Storage Operator New in this release No reference design updates in this release Description You can create persistent volumes that can be used as PVC resources by applications with the Local Storage Operator. The number and type of PV resources that you create depends on your requirements. Engineering considerations Create backing storage for PV CRs before creating the PV . This can be a partition, a local volume, LVM volume, or full disk. Refer to the device listing in LocalVolume CRs by the hardware path used to access each device to ensure correct allocation of disks and partitions. Logical names (for example, /dev/sda ) are not guaranteed to be consistent across node reboots. For more information, see the RHEL 9 documentation on device identifiers . 3.2.3.9. LVM Storage New in this release No reference design updates in this release Note Logical Volume Manager (LVM) Storage is an optional component. 
When you use LVM Storage as the storage solution, it replaces the Local Storage Operator. CPU resources are assigned to the management partition as platform overhead. The reference configuration must include one of these storage solutions, but not both. Description LVM Storage provides dynamic provisioning of block and file storage. LVM Storage creates logical volumes from local devices that can be used as PVC resources by applications. Volume expansion and snapshots are also possible. Limits and requirements In single-node OpenShift clusters, persistent storage must be provided by either LVM Storage or local storage, not both. Volume snapshots are excluded from the reference configuration. Engineering considerations LVM Storage can be used as the local storage implementation for the RAN DU use case. When LVM Storage is used as the storage solution, it replaces the Local Storage Operator, and the CPU required is assigned to the management partition as platform overhead. The reference configuration must include one of these storage solutions but not both. Ensure that sufficient disks or partitions are available for storage requirements. 3.2.3.10. Workload partitioning New in this release No reference design updates in this release Description Workload partitioning pins OpenShift platform and Day 2 Operator pods that are part of the DU profile to the reserved CPU set and removes the reserved CPU from node accounting. This leaves all unreserved CPU cores available for user workloads. Limits and requirements Namespace and Pod CRs must be annotated to allow the pod to be applied to the management partition Pods with CPU limits cannot be allocated to the partition. This is because mutation can change the pod QoS. For more information about the minimum number of CPUs that can be allocated to the management partition, see Node Tuning Operator . Engineering considerations Workload Partitioning pins all management pods to reserved cores. A sufficient number of cores must be allocated to the reserved set to account for operating system, management pods, and expected spikes in CPU use that occur when the workload starts, the node reboots, or other system events happen. Additional resources Workload partitioning 3.2.3.11. Cluster tuning New in this release No reference design updates in this release Description See "Cluster capabilities" for a full list of optional components that you can enable or disable before installation. Limits and requirements Cluster capabilities are not available for installer-provisioned installation methods. You must apply all platform tuning configurations. The following table lists the required platform tuning configurations: Table 3.2. Cluster capabilities configurations Feature Description Remove optional cluster capabilities Reduce the OpenShift Container Platform footprint by disabling optional cluster Operators on single-node OpenShift clusters only. Remove all optional Operators except the Marketplace and Node Tuning Operators. Configure cluster monitoring Configure the monitoring stack for reduced footprint by doing the following: Disable the local alertmanager and telemeter components. If you use RHACM observability, the CR must be augmented with appropriate additionalAlertManagerConfigs CRs to forward alerts to the hub cluster. Reduce the Prometheus retention period to 24h. Note The RHACM hub cluster aggregates managed cluster metrics. Disable networking diagnostics Disable networking diagnostics for single-node OpenShift because they are not required. 
Configure a single OperatorHub catalog source Configure the cluster to use a single catalog source that contains only the Operators required for a RAN DU deployment. Each catalog source increases the CPU use on the cluster. Using a single CatalogSource fits within the platform CPU budget. Disable the Console Operator If the cluster was deployed with the console disabled, the Console CR ( ConsoleOperatorDisable.yaml ) is not needed. If the cluster was deployed with the console enabled, you must apply the Console CR. Engineering considerations In OpenShift Container Platform 4.16 and later, clusters do not automatically revert to cgroups v1 when a PerformanceProfile CR is applied. If workloads running on the cluster require cgroups v1, you need to configure the cluster to use cgroups v1. Note If you need to configure cgroups v1, make the configuration as part of the initial cluster deployment. Additional resources Cluster capabilities 3.2.3.12. Machine configuration New in this release No reference design updates in this release Limits and requirements The CRI-O wipe disable MachineConfig assumes that images on disk are static other than during scheduled maintenance in defined maintenance windows. To ensure the images are static, do not set the pod imagePullPolicy field to Always . Table 3.3. Machine configuration options Feature Description Container runtime Sets the container runtime to crun for all node roles. kubelet config and container mount hiding Reduces the frequency of kubelet housekeeping and eviction monitoring to reduce CPU usage. Create a container mount namespace, visible to kubelet and CRI-O, to reduce system mount scanning resource usage. SCTP Optional configuration (enabled by default) Enables SCTP. SCTP is required by RAN applications but disabled by default in RHCOS. kdump Optional configuration (enabled by default) Enables kdump to capture debug information when a kernel panic occurs. Note The reference CRs which enable kdump have an increased memory reservation based on the set of drivers and kernel modules included in the reference configuration. CRI-O wipe disable Disables automatic wiping of the CRI-O image cache after unclean shutdown. SR-IOV-related kernel arguments Includes additional SR-IOV related arguments in the kernel command line. RCU Normal systemd service Sets rcu_normal after the system is fully started. One-shot time sync Runs a one-time NTP system time synchronization job for control plane or worker nodes. Additional resources Recommended single-node OpenShift cluster configuration for vDU application workloads . 3.2.3.13. Telco RAN DU deployment components The following sections describe the various OpenShift Container Platform components and configurations that you use to configure the hub cluster with Red Hat Advanced Cluster Management (RHACM). 3.2.3.13.1. Red Hat Advanced Cluster Management New in this release No reference design updates in this release Description Red Hat Advanced Cluster Management (RHACM) provides Multi Cluster Engine (MCE) installation and ongoing lifecycle management functionality for deployed clusters. You manage cluster configuration and upgrades declaratively by applying Policy custom resources (CRs) to clusters during maintenance windows. You apply policies with the RHACM policy controller as managed by Topology Aware Lifecycle Manager (TALM). The policy controller handles configuration, upgrades, and cluster statuses. 
When installing managed clusters, RHACM applies labels and initial ignition configuration to individual nodes in support of custom disk partitioning, allocation of roles, and allocation to machine config pools. You define these configurations with SiteConfig or ClusterInstance CRs.
Limits and requirements
300 SiteConfig CRs per ArgoCD application. You can use multiple applications to achieve the maximum number of clusters supported by a single hub cluster.
A single hub cluster supports up to 3500 deployed single-node OpenShift clusters with 5 Policy CRs bound to each cluster.
Engineering considerations
Use RHACM policy hub-side templating to better scale cluster configuration. You can significantly reduce the number of policies by using a single group policy or a small number of general group policies where the group and per-cluster values are substituted into templates.
Cluster-specific configuration: managed clusters typically have some number of configuration values that are specific to the individual cluster. These configurations should be managed using RHACM policy hub-side templating, with values pulled from ConfigMap CRs based on the cluster name.
To save CPU resources on managed clusters, policies that apply static configurations should be unbound from managed clusters after GitOps ZTP installation of the cluster.
Additional resources
Using GitOps ZTP to provision clusters at the network far edge
Red Hat Advanced Cluster Management for Kubernetes
3.2.3.13.2. Topology Aware Lifecycle Manager
New in this release
No reference design updates in this release
Description
Topology Aware Lifecycle Manager (TALM) is an Operator that runs only on the hub cluster and manages how changes, including cluster and Operator upgrades and configuration, are rolled out to the network.
Limits and requirements
TALM supports concurrent cluster deployment in batches of 400.
Precaching and backup features are for single-node OpenShift clusters only.
Engineering considerations
Only policies that have the ran.openshift.io/ztp-deploy-wave annotation are automatically applied by TALM during initial cluster installation.
You can create further ClusterGroupUpgrade CRs to control the policies that TALM remediates.
Additional resources
Updating managed clusters with the Topology Aware Lifecycle Manager
3.2.3.13.3. GitOps and GitOps ZTP plugins
New in this release
No reference design updates in this release
Description
GitOps and GitOps ZTP plugins provide a GitOps-based infrastructure for managing cluster deployment and configuration. Cluster definitions and configurations are maintained as a declarative state in Git. You can apply ClusterInstance CRs to the hub cluster, where the SiteConfig Operator renders them as installation CRs. Alternatively, you can use the GitOps ZTP plugin to generate installation CRs directly from SiteConfig CRs. The GitOps ZTP plugin supports automatic wrapping of configuration CRs in policies based on PolicyGenTemplate CRs.
Note: You can deploy and manage multiple versions of OpenShift Container Platform on managed clusters using the baseline reference configuration CRs. You can use custom CRs alongside the baseline CRs. To maintain multiple per-version policies simultaneously, use Git to manage the versions of the source CRs and policy CRs (PolicyGenTemplate or PolicyGenerator). Keep reference CRs and custom CRs under different directories, as shown in the sketch that follows. Doing this allows you to patch and update the reference CRs by simple replacement of all directory contents without touching the custom CRs.
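The following is a minimal sketch of one possible Git layout and kustomization.yaml for this separation. The directory names and PolicyGenTemplate file names are illustrative assumptions that mirror the example names used elsewhere in this reference; they are not mandated by the reference design.
# One possible repository layout (names are assumptions):
#   policygentemplates/
#     kustomization.yaml
#     common-ranGen.yaml          # common and group PolicyGenTemplate CRs
#     group-du-sno-ranGen.yaml
#     example-sno-site.yaml       # per-cluster values
#     source-crs/                 # reference CRs extracted from the ztp-site-generate container
#     custom-crs/                 # user-provided CRs kept separate from the reference CRs
#
# kustomization.yaml in the same directory as source-crs/, listing the
# PolicyGenTemplate CRs as generators:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
generators:
  - common-ranGen.yaml
  - group-du-sno-ranGen.yaml
  - example-sno-site.yaml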
Limits
300 SiteConfig CRs per ArgoCD application. You can use multiple applications to achieve the maximum number of clusters supported by a single hub cluster.
Content in the /source-crs folder in Git overrides content provided in the GitOps ZTP plugin container. Git takes precedence in the search path.
Add the /source-crs folder in the same directory as the kustomization.yaml file, which includes the PolicyGenTemplate CRs as a generator.
Note: Alternative locations for the /source-crs directory are not supported in this context.
The extraManifestPath field of the SiteConfig CR is deprecated from OpenShift Container Platform 4.15 and later. Use the new extraManifests.searchPaths field instead.
Engineering considerations
For multi-node cluster upgrades, you can pause MachineConfigPool (MCP) CRs during maintenance windows by setting the paused field to true. You can increase the number of nodes per MCP updated simultaneously by configuring the maxUnavailable setting in the MCP CR. The maxUnavailable field defines the percentage of nodes in the pool that can be simultaneously unavailable during a MachineConfig update. Set maxUnavailable to the maximum tolerable value. This reduces the number of reboots in a cluster during upgrades, which results in shorter upgrade times. When you unpause the MCP CR, all of the changed configurations are applied with a single reboot.
During cluster installation, you can pause custom MCP CRs by setting the paused field to true and setting maxUnavailable to 100% to improve installation times.
To avoid confusion or unintentional overwriting of files when updating content, use unique and distinguishable names for user-provided CRs in the /source-crs folder and extra manifests in Git.
The SiteConfig CR allows multiple extra-manifest paths. When files with the same name are found in multiple directory paths, the last file found takes precedence. This allows you to put the full set of version-specific Day 0 manifests (extra-manifests) in Git and reference them from the SiteConfig CR. With this feature, you can deploy multiple OpenShift Container Platform versions to managed clusters simultaneously.
Additional resources
Preparing the GitOps ZTP site configuration repository for version independence
Adding custom content to the GitOps ZTP pipeline
3.2.3.13.4. Agent-based installer
New in this release
No reference design updates in this release
Description
Agent-based installer (ABI) provides installation capabilities without centralized infrastructure. The installation program creates an ISO image that you mount to the server. When the server boots, it installs OpenShift Container Platform and the supplied extra manifests.
Note: You can also use ABI to install OpenShift Container Platform clusters without a hub cluster. An image registry is still required when you use ABI in this manner.
Agent-based installer (ABI) is an optional component.
Limits and requirements
You can supply a limited set of additional manifests at installation time.
You must include MachineConfiguration CRs that are required by the RAN DU use case.
Engineering considerations
ABI provides a baseline OpenShift Container Platform installation. You install Day 2 Operators and the remainder of the RAN DU use case configurations after installation.
Additional resources
Installing an OpenShift Container Platform cluster with the Agent-based Installer
3.2.4. Telco RAN distributed unit (DU) reference configuration CRs
Use the following custom resources (CRs) to configure and deploy OpenShift Container Platform clusters with the telco RAN DU profile. Some of the CRs are optional depending on your requirements. CR fields you can change are annotated in the CR with YAML comments.
Note: You can extract the complete set of RAN DU CRs from the ztp-site-generate container image. See Preparing the GitOps ZTP site configuration repository for more information.
3.2.4.1. Day 2 Operators reference CRs
Table 3.4. Day 2 Operators CRs
Component | Reference CR | Optional | New in this release
Cluster logging | ClusterLogForwarder.yaml | No | No
Cluster logging | ClusterLogNS.yaml | No | No
Cluster logging | ClusterLogOperGroup.yaml | No | No
Cluster logging | ClusterLogServiceAccount.yaml | No | Yes
Cluster logging | ClusterLogServiceAccountAuditBinding.yaml | No | Yes
Cluster logging | ClusterLogServiceAccountInfrastructureBinding.yaml | No | Yes
Cluster logging | ClusterLogSubscription.yaml | No | No
LifeCycle Agent Operator | ImageBasedUpgrade.yaml | Yes | No
LifeCycle Agent Operator | LcaSubscription.yaml | Yes | No
LifeCycle Agent Operator | LcaSubscriptionNS.yaml | Yes | No
LifeCycle Agent Operator | LcaSubscriptionOperGroup.yaml | Yes | No
Local Storage Operator | StorageClass.yaml | Yes | No
Local Storage Operator | StorageLV.yaml | Yes | No
Local Storage Operator | StorageNS.yaml | Yes | No
Local Storage Operator | StorageOperGroup.yaml | Yes | No
Local Storage Operator | StorageSubscription.yaml | Yes | No
LVM Operator | LVMOperatorStatus.yaml | Yes | No
LVM Operator | StorageLVMCluster.yaml | Yes | No
LVM Operator | StorageLVMSubscription.yaml | Yes | No
LVM Operator | StorageLVMSubscriptionNS.yaml | Yes | No
LVM Operator | StorageLVMSubscriptionOperGroup.yaml | Yes | No
Node Tuning Operator | PerformanceProfile.yaml | No | No
Node Tuning Operator | TunedPerformancePatch.yaml | No | No
PTP fast event notifications | PtpConfigBoundaryForEvent.yaml | Yes | No
PTP fast event notifications | PtpConfigForHAForEvent.yaml | Yes | No
PTP fast event notifications | PtpConfigMasterForEvent.yaml | Yes | No
PTP fast event notifications | PtpConfigSlaveForEvent.yaml | Yes | No
PTP Operator - high availability | PtpConfigBoundary.yaml | No | No
PTP Operator - high availability | PtpConfigForHA.yaml | No | No
PTP Operator | PtpConfigDualCardGmWpc.yaml | No | No
PTP Operator | PtpConfigGmWpc.yaml | No | No
PTP Operator | PtpConfigSlave.yaml | No | No
PTP Operator | PtpOperatorConfig.yaml | No | No
PTP Operator | PtpOperatorConfigForEvent.yaml | No | No
PTP Operator | PtpSubscription.yaml | No | No
PTP Operator | PtpSubscriptionNS.yaml | No | No
PTP Operator | PtpSubscriptionOperGroup.yaml | No | No
SR-IOV FEC Operator | AcceleratorsNS.yaml | Yes | No
SR-IOV FEC Operator | AcceleratorsOperGroup.yaml | Yes | No
SR-IOV FEC Operator | AcceleratorsSubscription.yaml | Yes | No
SR-IOV FEC Operator | SriovFecClusterConfig.yaml | Yes | No
SR-IOV Operator | SriovNetwork.yaml | No | No
SR-IOV Operator | SriovNetworkNodePolicy.yaml | No | No
SR-IOV Operator | SriovOperatorConfig.yaml | No | No
SR-IOV Operator | SriovOperatorConfigForSNO.yaml | No | No
SR-IOV Operator | SriovSubscription.yaml | No | No
SR-IOV Operator | SriovSubscriptionNS.yaml | No | No
SR-IOV Operator | SriovSubscriptionOperGroup.yaml | No | No
3.2.4.2. Cluster tuning reference CRs
Table 3.5. Cluster tuning CRs
Component | Reference CR | Optional | New in this release
Composable OpenShift | example-sno.yaml | No | No
Console disable | ConsoleOperatorDisable.yaml | Yes | No
Disconnected registry | 09-openshift-marketplace-ns.yaml | No | No
Disconnected registry | DefaultCatsrc.yaml | No | No
Disconnected registry | DisableOLMPprof.yaml | No | No
Disconnected registry | DisconnectedICSP.yaml | No | No
Disconnected registry | OperatorHub.yaml | OperatorHub is required for single-node OpenShift and optional for multi-node clusters | No
Monitoring configuration | ReduceMonitoringFootprint.yaml | No | No
Network diagnostics disable | DisableSnoNetworkDiag.yaml | No | No
3.2.4.3. Machine configuration reference CRs
Table 3.6. Machine configuration CRs
Component | Reference CR | Optional | New in this release
Container runtime (crun) | enable-crun-master.yaml | No | No
Container runtime (crun) | enable-crun-worker.yaml | No | No
Disable CRI-O wipe | 99-crio-disable-wipe-master.yaml | No | No
Disable CRI-O wipe | 99-crio-disable-wipe-worker.yaml | No | No
Kdump enable | 06-kdump-master.yaml | No | No
Kdump enable | 06-kdump-worker.yaml | No | No
Kubelet configuration / Container mount hiding | 01-container-mount-ns-and-kubelet-conf-master.yaml | No | No
Kubelet configuration / Container mount hiding | 01-container-mount-ns-and-kubelet-conf-worker.yaml | No | No
One-shot time sync | 99-sync-time-once-master.yaml | No | No
One-shot time sync | 99-sync-time-once-worker.yaml | No | No
SCTP | 03-sctp-machine-config-master.yaml | Yes | No
SCTP | 03-sctp-machine-config-worker.yaml | Yes | No
Set RCU normal | 08-set-rcu-normal-master.yaml | No | No
Set RCU normal | 08-set-rcu-normal-worker.yaml | No | No
SR-IOV-related kernel arguments | 07-sriov-related-kernel-args-master.yaml | No | No
SR-IOV-related kernel arguments | 07-sriov-related-kernel-args-worker.yaml | No | No
3.2.4.4. YAML reference
The following is a complete reference for all the custom resources (CRs) that make up the telco RAN DU 4.17 reference configuration.
3.2.4.4.1.
Day 2 Operators reference YAML ClusterLogForwarder.yaml apiVersion: "observability.openshift.io/v1" kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging annotations: {} spec: # outputs: USDoutputs # pipelines: USDpipelines serviceAccount: name: logcollector #apiVersion: "observability.openshift.io/v1" #kind: ClusterLogForwarder #metadata: # name: instance # namespace: openshift-logging # spec: # outputs: # - type: "kafka" # name: kafka-open # # below url is an example # kafka: # url: tcp://10.46.55.190:9092/test # filters: # - name: test-labels # type: openshiftLabels # openshiftLabels: # label1: test1 # label2: test2 # label3: test3 # label4: test4 # pipelines: # - name: all-to-default # inputRefs: # - audit # - infrastructure # filterRefs: # - test-labels # outputRefs: # - kafka-open # serviceAccount: # name: logcollector ClusterLogNS.yaml --- apiVersion: v1 kind: Namespace metadata: name: openshift-logging annotations: workload.openshift.io/allowed: management ClusterLogOperGroup.yaml --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: cluster-logging namespace: openshift-logging annotations: {} spec: targetNamespaces: - openshift-logging ClusterLogServiceAccount.yaml --- apiVersion: v1 kind: ServiceAccount metadata: name: logcollector namespace: openshift-logging annotations: {} ClusterLogServiceAccountAuditBinding.yaml --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: logcollector-audit-logs-binding annotations: {} roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: collect-audit-logs subjects: - kind: ServiceAccount name: logcollector namespace: openshift-logging ClusterLogServiceAccountInfrastructureBinding.yaml --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: logcollector-infrastructure-logs-binding annotations: {} roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: collect-infrastructure-logs subjects: - kind: ServiceAccount name: logcollector namespace: openshift-logging ClusterLogSubscription.yaml apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: cluster-logging namespace: openshift-logging annotations: {} spec: channel: "stable-6.0" name: cluster-logging source: redhat-operators-disconnected sourceNamespace: openshift-marketplace installPlanApproval: Manual status: state: AtLatestKnown ImageBasedUpgrade.yaml apiVersion: lca.openshift.io/v1 kind: ImageBasedUpgrade metadata: name: upgrade spec: stage: Idle # When setting `stage: Prep`, remember to add the seed image reference object below. 
# seedImageRef: # image: USDimage # version: USDversion LcaSubscription.yaml apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: lifecycle-agent namespace: openshift-lifecycle-agent annotations: {} spec: channel: "stable" name: lifecycle-agent source: redhat-operators-disconnected sourceNamespace: openshift-marketplace installPlanApproval: Manual status: state: AtLatestKnown LcaSubscriptionNS.yaml apiVersion: v1 kind: Namespace metadata: name: openshift-lifecycle-agent annotations: workload.openshift.io/allowed: management labels: kubernetes.io/metadata.name: openshift-lifecycle-agent LcaSubscriptionOperGroup.yaml apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: lifecycle-agent namespace: openshift-lifecycle-agent annotations: {} spec: targetNamespaces: - openshift-lifecycle-agent StorageClass.yaml apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: annotations: {} name: example-storage-class provisioner: kubernetes.io/no-provisioner reclaimPolicy: Delete StorageLV.yaml apiVersion: "local.storage.openshift.io/v1" kind: "LocalVolume" metadata: name: "local-disks" namespace: "openshift-local-storage" annotations: {} spec: logLevel: Normal managementState: Managed storageClassDevices: # The list of storage classes and associated devicePaths need to be specified like this example: - storageClassName: "example-storage-class" volumeMode: Filesystem fsType: xfs # The below must be adjusted to the hardware. # For stability and reliability, it's recommended to use persistent # naming conventions for devicePaths, such as /dev/disk/by-path. devicePaths: - /dev/disk/by-path/pci-0000:05:00.0-nvme-1 #--- ## How to verify ## 1. Create a PVC # apiVersion: v1 # kind: PersistentVolumeClaim # metadata: # name: local-pvc-name # spec: # accessModes: # - ReadWriteOnce # volumeMode: Filesystem # resources: # requests: # storage: 100Gi # storageClassName: example-storage-class #--- ## 2. Create a pod that mounts it # apiVersion: v1 # kind: Pod # metadata: # labels: # run: busybox # name: busybox # spec: # containers: # - image: quay.io/quay/busybox:latest # name: busybox # resources: {} # command: ["/bin/sh", "-c", "sleep infinity"] # volumeMounts: # - name: local-pvc # mountPath: /data # volumes: # - name: local-pvc # persistentVolumeClaim: # claimName: local-pvc-name # dnsPolicy: ClusterFirst # restartPolicy: Always ## 3. 
Run the pod on the cluster and verify the size and access of the `/data` mount StorageNS.yaml apiVersion: v1 kind: Namespace metadata: name: openshift-local-storage annotations: workload.openshift.io/allowed: management StorageOperGroup.yaml apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-local-storage namespace: openshift-local-storage annotations: {} spec: targetNamespaces: - openshift-local-storage StorageSubscription.yaml apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: local-storage-operator namespace: openshift-local-storage annotations: {} spec: channel: "stable" name: local-storage-operator source: redhat-operators-disconnected sourceNamespace: openshift-marketplace installPlanApproval: Manual status: state: AtLatestKnown LVMOperatorStatus.yaml # This CR verifies the installation/upgrade of the Sriov Network Operator apiVersion: operators.coreos.com/v1 kind: Operator metadata: name: lvms-operator.openshift-storage annotations: {} status: components: refs: - kind: Subscription namespace: openshift-storage conditions: - type: CatalogSourcesUnhealthy status: "False" - kind: InstallPlan namespace: openshift-storage conditions: - type: Installed status: "True" - kind: ClusterServiceVersion namespace: openshift-storage conditions: - type: Succeeded status: "True" reason: InstallSucceeded StorageLVMCluster.yaml apiVersion: lvm.topolvm.io/v1alpha1 kind: LVMCluster metadata: name: lvmcluster namespace: openshift-storage annotations: {} spec: {} #example: creating a vg1 volume group leveraging all available disks on the node # except the installation disk. # storage: # deviceClasses: # - name: vg1 # thinPoolConfig: # name: thin-pool-1 # sizePercent: 90 # overprovisionRatio: 10 StorageLVMSubscription.yaml apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: lvms-operator namespace: openshift-storage annotations: {} spec: channel: "stable" name: lvms-operator source: redhat-operators-disconnected sourceNamespace: openshift-marketplace installPlanApproval: Manual status: state: AtLatestKnown StorageLVMSubscriptionNS.yaml apiVersion: v1 kind: Namespace metadata: name: openshift-storage labels: workload.openshift.io/allowed: "management" openshift.io/cluster-monitoring: "true" annotations: {} StorageLVMSubscriptionOperGroup.yaml apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: lvms-operator-operatorgroup namespace: openshift-storage annotations: {} spec: targetNamespaces: - openshift-storage PerformanceProfile.yaml apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: # if you change this name make sure the 'include' line in TunedPerformancePatch.yaml # matches this name: include=openshift-node-performance-USD{PerformanceProfile.metadata.name} # Also in file 'validatorCRs/informDuValidator.yaml': # name: 50-performance-USD{PerformanceProfile.metadata.name} name: openshift-node-performance-profile annotations: ran.openshift.io/reference-configuration: "ran-du.redhat.com" spec: additionalKernelArgs: - "rcupdate.rcu_normal_after_boot=0" - "efi=runtime" - "vfio_pci.enable_sriov=1" - "vfio_pci.disable_idle_d3=1" - "module_blacklist=irdma" cpu: isolated: USDisolated reserved: USDreserved hugepages: defaultHugepagesSize: USDdefaultHugepagesSize pages: - size: USDsize count: USDcount node: USDnode machineConfigPoolSelector: pools.operator.machineconfiguration.openshift.io/USDmcp: "" nodeSelector: node-role.kubernetes.io/USDmcp: '' numa: topologyPolicy: "restricted" # To use 
the standard (non-realtime) kernel, set enabled to false realTimeKernel: enabled: true workloadHints: # WorkloadHints defines the set of upper level flags for different type of workloads. # See https://github.com/openshift/cluster-node-tuning-operator/blob/master/docs/performanceprofile/performance_profile.md#workloadhints # for detailed descriptions of each item. # The configuration below is set for a low latency, performance mode. realTime: true highPowerConsumption: false perPodPowerManagement: false TunedPerformancePatch.yaml apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: performance-patch namespace: openshift-cluster-node-tuning-operator annotations: {} spec: profile: - name: performance-patch # Please note: # - The 'include' line must match the associated PerformanceProfile name, following below pattern # include=openshift-node-performance-USD{PerformanceProfile.metadata.name} # - When using the standard (non-realtime) kernel, remove the kernel.timer_migration override from # the [sysctl] section and remove the entire section if it is empty. data: | [main] summary=Configuration changes profile inherited from performance created tuned include=openshift-node-performance-openshift-node-performance-profile [scheduler] group.ice-ptp=0:f:10:*:ice-ptp.* group.ice-gnss=0:f:10:*:ice-gnss.* group.ice-dplls=0:f:10:*:ice-dplls.* [service] service.stalld=start,enable service.chronyd=stop,disable recommend: - machineConfigLabels: machineconfiguration.openshift.io/role: "USDmcp" priority: 19 profile: performance-patch PtpConfigBoundaryForEvent.yaml apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: boundary namespace: openshift-ptp annotations: {} spec: profile: - name: "boundary" ptp4lOpts: "-2 --summary_interval -4" phc2sysOpts: "-a -r -m -n 24 -N 8 -R 16" ptpSchedulingPolicy: SCHED_FIFO ptpSchedulingPriority: 10 ptpSettings: logReduce: "true" ptp4lConf: | # The interface name is hardware-specific [USDiface_slave] masterOnly 0 [USDiface_master_1] masterOnly 1 [USDiface_master_2] masterOnly 1 [USDiface_master_3] masterOnly 1 [global] # # Default Data Set # twoStepFlag 1 slaveOnly 0 priority1 128 priority2 128 domainNumber 24 #utc_offset 37 clockClass 248 clockAccuracy 0xFE offsetScaledLogVariance 0xFFFF free_running 0 freq_est_interval 1 dscp_event 0 dscp_general 0 dataset_comparison G.8275.x G.8275.defaultDS.localPriority 128 # # Port Data Set # logAnnounceInterval -3 logSyncInterval -4 logMinDelayReqInterval -4 logMinPdelayReqInterval -4 announceReceiptTimeout 3 syncReceiptTimeout 0 delayAsymmetry 0 fault_reset_interval -4 neighborPropDelayThresh 20000000 masterOnly 0 G.8275.portDS.localPriority 128 # # Run time options # assume_two_step 0 logging_level 6 path_trace_enabled 0 follow_up_info 0 hybrid_e2e 0 inhibit_multicast_service 0 net_sync_monitor 0 tc_spanning_tree 0 tx_timestamp_timeout 50 unicast_listen 0 unicast_master_table 0 unicast_req_duration 3600 use_syslog 1 verbose 0 summary_interval 0 kernel_leap 1 check_fup_sync 0 clock_class_threshold 135 # # Servo Options # pi_proportional_const 0.0 pi_integral_const 0.0 pi_proportional_scale 0.0 pi_proportional_exponent -0.3 pi_proportional_norm_max 0.7 pi_integral_scale 0.0 pi_integral_exponent 0.4 pi_integral_norm_max 0.3 step_threshold 2.0 first_step_threshold 0.00002 max_frequency 900000000 clock_servo pi sanity_freq_limit 200000000 ntpshm_segment 0 # # Transport options # transportSpecific 0x0 ptp_dst_mac 01:1B:19:00:00:00 p2p_dst_mac 01:80:C2:00:00:0E udp_ttl 1 udp6_scope 0x0E uds_address /var/run/ptp4l # # 
Default interface options # clock_type BC network_transport L2 delay_mechanism E2E time_stamping hardware tsproc_mode filter delay_filter moving_median delay_filter_length 10 egressLatency 0 ingressLatency 0 boundary_clock_jbod 0 # # Clock description # productDescription ;; revisionData ;; manufacturerIdentity 00:00:00 userDescription ; timeSource 0xA0 recommend: - profile: "boundary" priority: 4 match: - nodeLabel: "node-role.kubernetes.io/USDmcp" PtpConfigForHAForEvent.yaml apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: boundary-ha namespace: openshift-ptp annotations: {} spec: profile: - name: "boundary-ha" ptp4lOpts: "" phc2sysOpts: "-a -r -m -n 24 -N 8 -R 16" ptpSchedulingPolicy: SCHED_FIFO ptpSchedulingPriority: 10 ptpSettings: logReduce: "true" haProfiles: "USDprofile1,USDprofile2" recommend: - profile: "boundary-ha" priority: 4 match: - nodeLabel: "node-role.kubernetes.io/USDmcp" PtpConfigMasterForEvent.yaml # The grandmaster profile is provided for testing only # It is not installed on production clusters apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: grandmaster namespace: openshift-ptp annotations: {} spec: profile: - name: "grandmaster" # The interface name is hardware-specific interface: USDinterface ptp4lOpts: "-2 --summary_interval -4" phc2sysOpts: "-a -r -m -n 24 -N 8 -R 16" ptpSchedulingPolicy: SCHED_FIFO ptpSchedulingPriority: 10 ptpSettings: logReduce: "true" ptp4lConf: | [global] # # Default Data Set # twoStepFlag 1 slaveOnly 0 priority1 128 priority2 128 domainNumber 24 #utc_offset 37 clockClass 255 clockAccuracy 0xFE offsetScaledLogVariance 0xFFFF free_running 0 freq_est_interval 1 dscp_event 0 dscp_general 0 dataset_comparison G.8275.x G.8275.defaultDS.localPriority 128 # # Port Data Set # logAnnounceInterval -3 logSyncInterval -4 logMinDelayReqInterval -4 logMinPdelayReqInterval -4 announceReceiptTimeout 3 syncReceiptTimeout 0 delayAsymmetry 0 fault_reset_interval -4 neighborPropDelayThresh 20000000 masterOnly 0 G.8275.portDS.localPriority 128 # # Run time options # assume_two_step 0 logging_level 6 path_trace_enabled 0 follow_up_info 0 hybrid_e2e 0 inhibit_multicast_service 0 net_sync_monitor 0 tc_spanning_tree 0 tx_timestamp_timeout 50 unicast_listen 0 unicast_master_table 0 unicast_req_duration 3600 use_syslog 1 verbose 0 summary_interval 0 kernel_leap 1 check_fup_sync 0 clock_class_threshold 7 # # Servo Options # pi_proportional_const 0.0 pi_integral_const 0.0 pi_proportional_scale 0.0 pi_proportional_exponent -0.3 pi_proportional_norm_max 0.7 pi_integral_scale 0.0 pi_integral_exponent 0.4 pi_integral_norm_max 0.3 step_threshold 2.0 first_step_threshold 0.00002 max_frequency 900000000 clock_servo pi sanity_freq_limit 200000000 ntpshm_segment 0 # # Transport options # transportSpecific 0x0 ptp_dst_mac 01:1B:19:00:00:00 p2p_dst_mac 01:80:C2:00:00:0E udp_ttl 1 udp6_scope 0x0E uds_address /var/run/ptp4l # # Default interface options # clock_type OC network_transport L2 delay_mechanism E2E time_stamping hardware tsproc_mode filter delay_filter moving_median delay_filter_length 10 egressLatency 0 ingressLatency 0 boundary_clock_jbod 0 # # Clock description # productDescription ;; revisionData ;; manufacturerIdentity 00:00:00 userDescription ; timeSource 0xA0 recommend: - profile: "grandmaster" priority: 4 match: - nodeLabel: "node-role.kubernetes.io/USDmcp" PtpConfigSlaveForEvent.yaml apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: du-ptp-slave namespace: openshift-ptp annotations: {} spec: profile: - name: "slave" # 
The interface name is hardware-specific interface: USDinterface ptp4lOpts: "-2 -s --summary_interval -4" phc2sysOpts: "-a -r -m -n 24 -N 8 -R 16" ptpSchedulingPolicy: SCHED_FIFO ptpSchedulingPriority: 10 ptpSettings: logReduce: "true" ptp4lConf: | [global] # # Default Data Set # twoStepFlag 1 slaveOnly 1 priority1 128 priority2 128 domainNumber 24 #utc_offset 37 clockClass 255 clockAccuracy 0xFE offsetScaledLogVariance 0xFFFF free_running 0 freq_est_interval 1 dscp_event 0 dscp_general 0 dataset_comparison G.8275.x G.8275.defaultDS.localPriority 128 # # Port Data Set # logAnnounceInterval -3 logSyncInterval -4 logMinDelayReqInterval -4 logMinPdelayReqInterval -4 announceReceiptTimeout 3 syncReceiptTimeout 0 delayAsymmetry 0 fault_reset_interval -4 neighborPropDelayThresh 20000000 masterOnly 0 G.8275.portDS.localPriority 128 # # Run time options # assume_two_step 0 logging_level 6 path_trace_enabled 0 follow_up_info 0 hybrid_e2e 0 inhibit_multicast_service 0 net_sync_monitor 0 tc_spanning_tree 0 tx_timestamp_timeout 50 unicast_listen 0 unicast_master_table 0 unicast_req_duration 3600 use_syslog 1 verbose 0 summary_interval 0 kernel_leap 1 check_fup_sync 0 clock_class_threshold 7 # # Servo Options # pi_proportional_const 0.0 pi_integral_const 0.0 pi_proportional_scale 0.0 pi_proportional_exponent -0.3 pi_proportional_norm_max 0.7 pi_integral_scale 0.0 pi_integral_exponent 0.4 pi_integral_norm_max 0.3 step_threshold 2.0 first_step_threshold 0.00002 max_frequency 900000000 clock_servo pi sanity_freq_limit 200000000 ntpshm_segment 0 # # Transport options # transportSpecific 0x0 ptp_dst_mac 01:1B:19:00:00:00 p2p_dst_mac 01:80:C2:00:00:0E udp_ttl 1 udp6_scope 0x0E uds_address /var/run/ptp4l # # Default interface options # clock_type OC network_transport L2 delay_mechanism E2E time_stamping hardware tsproc_mode filter delay_filter moving_median delay_filter_length 10 egressLatency 0 ingressLatency 0 boundary_clock_jbod 0 # # Clock description # productDescription ;; revisionData ;; manufacturerIdentity 00:00:00 userDescription ; timeSource 0xA0 recommend: - profile: "slave" priority: 4 match: - nodeLabel: "node-role.kubernetes.io/USDmcp" PtpConfigBoundary.yaml apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: boundary namespace: openshift-ptp annotations: {} spec: profile: - name: "boundary" ptp4lOpts: "-2" phc2sysOpts: "-a -r -n 24" ptpSchedulingPolicy: SCHED_FIFO ptpSchedulingPriority: 10 ptpSettings: logReduce: "true" ptp4lConf: | # The interface name is hardware-specific [USDiface_slave] masterOnly 0 [USDiface_master_1] masterOnly 1 [USDiface_master_2] masterOnly 1 [USDiface_master_3] masterOnly 1 [global] # # Default Data Set # twoStepFlag 1 slaveOnly 0 priority1 128 priority2 128 domainNumber 24 #utc_offset 37 clockClass 248 clockAccuracy 0xFE offsetScaledLogVariance 0xFFFF free_running 0 freq_est_interval 1 dscp_event 0 dscp_general 0 dataset_comparison G.8275.x G.8275.defaultDS.localPriority 128 # # Port Data Set # logAnnounceInterval -3 logSyncInterval -4 logMinDelayReqInterval -4 logMinPdelayReqInterval -4 announceReceiptTimeout 3 syncReceiptTimeout 0 delayAsymmetry 0 fault_reset_interval -4 neighborPropDelayThresh 20000000 masterOnly 0 G.8275.portDS.localPriority 128 # # Run time options # assume_two_step 0 logging_level 6 path_trace_enabled 0 follow_up_info 0 hybrid_e2e 0 inhibit_multicast_service 0 net_sync_monitor 0 tc_spanning_tree 0 tx_timestamp_timeout 50 unicast_listen 0 unicast_master_table 0 unicast_req_duration 3600 use_syslog 1 verbose 0 summary_interval 0 
kernel_leap 1 check_fup_sync 0 clock_class_threshold 135 # # Servo Options # pi_proportional_const 0.0 pi_integral_const 0.0 pi_proportional_scale 0.0 pi_proportional_exponent -0.3 pi_proportional_norm_max 0.7 pi_integral_scale 0.0 pi_integral_exponent 0.4 pi_integral_norm_max 0.3 step_threshold 2.0 first_step_threshold 0.00002 max_frequency 900000000 clock_servo pi sanity_freq_limit 200000000 ntpshm_segment 0 # # Transport options # transportSpecific 0x0 ptp_dst_mac 01:1B:19:00:00:00 p2p_dst_mac 01:80:C2:00:00:0E udp_ttl 1 udp6_scope 0x0E uds_address /var/run/ptp4l # # Default interface options # clock_type BC network_transport L2 delay_mechanism E2E time_stamping hardware tsproc_mode filter delay_filter moving_median delay_filter_length 10 egressLatency 0 ingressLatency 0 boundary_clock_jbod 0 # # Clock description # productDescription ;; revisionData ;; manufacturerIdentity 00:00:00 userDescription ; timeSource 0xA0 recommend: - profile: "boundary" priority: 4 match: - nodeLabel: "node-role.kubernetes.io/USDmcp" PtpConfigForHA.yaml apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: boundary-ha namespace: openshift-ptp annotations: {} spec: profile: - name: "boundary-ha" ptp4lOpts: "" phc2sysOpts: "-a -r -n 24" ptpSchedulingPolicy: SCHED_FIFO ptpSchedulingPriority: 10 ptpSettings: logReduce: "true" haProfiles: "USDprofile1,USDprofile2" recommend: - profile: "boundary-ha" priority: 4 match: - nodeLabel: "node-role.kubernetes.io/USDmcp" PtpConfigDualCardGmWpc.yaml # The grandmaster profile is provided for testing only # It is not installed on production clusters # In this example two cards USDiface_nic1 and USDiface_nic2 are connected via # SMA1 ports by a cable and USDiface_nic2 receives 1PPS signals from USDiface_nic1 apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: grandmaster namespace: openshift-ptp annotations: {} spec: profile: - name: "grandmaster" ptp4lOpts: "-2 --summary_interval -4" phc2sysOpts: -r -u 0 -m -w -N 8 -R 16 -s USDiface_nic1 -n 24 ptpSchedulingPolicy: SCHED_FIFO ptpSchedulingPriority: 10 ptpSettings: logReduce: "true" plugins: e810: enableDefaultConfig: false settings: LocalMaxHoldoverOffSet: 1500 LocalHoldoverTimeout: 14400 MaxInSpecOffset: 100 pins: USDe810_pins # "USDiface_nic1": # "U.FL2": "0 2" # "U.FL1": "0 1" # "SMA2": "0 2" # "SMA1": "2 1" # "USDiface_nic2": # "U.FL2": "0 2" # "U.FL1": "0 1" # "SMA2": "0 2" # "SMA1": "1 1" ublxCmds: - args: #ubxtool -P 29.20 -z CFG-HW-ANT_CFG_VOLTCTRL,1 - "-P" - "29.20" - "-z" - "CFG-HW-ANT_CFG_VOLTCTRL,1" reportOutput: false - args: #ubxtool -P 29.20 -e GPS - "-P" - "29.20" - "-e" - "GPS" reportOutput: false - args: #ubxtool -P 29.20 -d Galileo - "-P" - "29.20" - "-d" - "Galileo" reportOutput: false - args: #ubxtool -P 29.20 -d GLONASS - "-P" - "29.20" - "-d" - "GLONASS" reportOutput: false - args: #ubxtool -P 29.20 -d BeiDou - "-P" - "29.20" - "-d" - "BeiDou" reportOutput: false - args: #ubxtool -P 29.20 -d SBAS - "-P" - "29.20" - "-d" - "SBAS" reportOutput: false - args: #ubxtool -P 29.20 -t -w 5 -v 1 -e SURVEYIN,600,50000 - "-P" - "29.20" - "-t" - "-w" - "5" - "-v" - "1" - "-e" - "SURVEYIN,600,50000" reportOutput: true - args: #ubxtool -P 29.20 -p MON-HW - "-P" - "29.20" - "-p" - "MON-HW" reportOutput: true - args: #ubxtool -P 29.20 -p CFG-MSG,1,38,248 - "-P" - "29.20" - "-p" - "CFG-MSG,1,38,248" reportOutput: true ts2phcOpts: " " ts2phcConf: | [nmea] ts2phc.master 1 [global] use_syslog 0 verbose 1 logging_level 7 ts2phc.pulsewidth 100000000 #cat /dev/GNSS to find available serial port #example 
value of gnss_serialport is /dev/ttyGNSS_1700_0 ts2phc.nmea_serialport USDgnss_serialport leapfile /usr/share/zoneinfo/leap-seconds.list [USDiface_nic1] ts2phc.extts_polarity rising ts2phc.extts_correction 0 [USDiface_nic2] ts2phc.master 0 ts2phc.extts_polarity rising #this is a measured value in nanoseconds to compensate for SMA cable delay ts2phc.extts_correction -10 ptp4lConf: | [USDiface_nic1] masterOnly 1 [USDiface_nic1_1] masterOnly 1 [USDiface_nic1_2] masterOnly 1 [USDiface_nic1_3] masterOnly 1 [USDiface_nic2] masterOnly 1 [USDiface_nic2_1] masterOnly 1 [USDiface_nic2_2] masterOnly 1 [USDiface_nic2_3] masterOnly 1 [global] # # Default Data Set # twoStepFlag 1 priority1 128 priority2 128 domainNumber 24 #utc_offset 37 clockClass 6 clockAccuracy 0x27 offsetScaledLogVariance 0xFFFF free_running 0 freq_est_interval 1 dscp_event 0 dscp_general 0 dataset_comparison G.8275.x G.8275.defaultDS.localPriority 128 # # Port Data Set # logAnnounceInterval -3 logSyncInterval -4 logMinDelayReqInterval -4 logMinPdelayReqInterval 0 announceReceiptTimeout 3 syncReceiptTimeout 0 delayAsymmetry 0 fault_reset_interval -4 neighborPropDelayThresh 20000000 masterOnly 0 G.8275.portDS.localPriority 128 # # Run time options # assume_two_step 0 logging_level 6 path_trace_enabled 0 follow_up_info 0 hybrid_e2e 0 inhibit_multicast_service 0 net_sync_monitor 0 tc_spanning_tree 0 tx_timestamp_timeout 50 unicast_listen 0 unicast_master_table 0 unicast_req_duration 3600 use_syslog 1 verbose 0 summary_interval -4 kernel_leap 1 check_fup_sync 0 clock_class_threshold 7 # # Servo Options # pi_proportional_const 0.0 pi_integral_const 0.0 pi_proportional_scale 0.0 pi_proportional_exponent -0.3 pi_proportional_norm_max 0.7 pi_integral_scale 0.0 pi_integral_exponent 0.4 pi_integral_norm_max 0.3 step_threshold 2.0 first_step_threshold 0.00002 clock_servo pi sanity_freq_limit 200000000 ntpshm_segment 0 # # Transport options # transportSpecific 0x0 ptp_dst_mac 01:1B:19:00:00:00 p2p_dst_mac 01:80:C2:00:00:0E udp_ttl 1 udp6_scope 0x0E uds_address /var/run/ptp4l # # Default interface options # clock_type BC network_transport L2 delay_mechanism E2E time_stamping hardware tsproc_mode filter delay_filter moving_median delay_filter_length 10 egressLatency 0 ingressLatency 0 boundary_clock_jbod 1 # # Clock description # productDescription ;; revisionData ;; manufacturerIdentity 00:00:00 userDescription ; timeSource 0x20 recommend: - profile: "grandmaster" priority: 4 match: - nodeLabel: "node-role.kubernetes.io/USDmcp" PtpConfigGmWpc.yaml # The grandmaster profile is provided for testing only # It is not installed on production clusters apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: grandmaster namespace: openshift-ptp annotations: {} spec: profile: - name: "grandmaster" ptp4lOpts: "-2 --summary_interval -4" phc2sysOpts: -r -u 0 -m -w -N 8 -R 16 -s USDiface_master -n 24 ptpSchedulingPolicy: SCHED_FIFO ptpSchedulingPriority: 10 ptpSettings: logReduce: "true" plugins: e810: enableDefaultConfig: false settings: LocalMaxHoldoverOffSet: 1500 LocalHoldoverTimeout: 14400 MaxInSpecOffset: 100 pins: USDe810_pins # "USDiface_master": # "U.FL2": "0 2" # "U.FL1": "0 1" # "SMA2": "0 2" # "SMA1": "0 1" ublxCmds: - args: #ubxtool -P 29.20 -z CFG-HW-ANT_CFG_VOLTCTRL,1 - "-P" - "29.20" - "-z" - "CFG-HW-ANT_CFG_VOLTCTRL,1" reportOutput: false - args: #ubxtool -P 29.20 -e GPS - "-P" - "29.20" - "-e" - "GPS" reportOutput: false - args: #ubxtool -P 29.20 -d Galileo - "-P" - "29.20" - "-d" - "Galileo" reportOutput: false - args: #ubxtool -P 
29.20 -d GLONASS - "-P" - "29.20" - "-d" - "GLONASS" reportOutput: false - args: #ubxtool -P 29.20 -d BeiDou - "-P" - "29.20" - "-d" - "BeiDou" reportOutput: false - args: #ubxtool -P 29.20 -d SBAS - "-P" - "29.20" - "-d" - "SBAS" reportOutput: false - args: #ubxtool -P 29.20 -t -w 5 -v 1 -e SURVEYIN,600,50000 - "-P" - "29.20" - "-t" - "-w" - "5" - "-v" - "1" - "-e" - "SURVEYIN,600,50000" reportOutput: true - args: #ubxtool -P 29.20 -p MON-HW - "-P" - "29.20" - "-p" - "MON-HW" reportOutput: true - args: #ubxtool -P 29.20 -p CFG-MSG,1,38,248 - "-P" - "29.20" - "-p" - "CFG-MSG,1,38,248" reportOutput: true ts2phcOpts: " " ts2phcConf: | [nmea] ts2phc.master 1 [global] use_syslog 0 verbose 1 logging_level 7 ts2phc.pulsewidth 100000000 #cat /dev/GNSS to find available serial port #example value of gnss_serialport is /dev/ttyGNSS_1700_0 ts2phc.nmea_serialport USDgnss_serialport leapfile /usr/share/zoneinfo/leap-seconds.list [USDiface_master] ts2phc.extts_polarity rising ts2phc.extts_correction 0 ptp4lConf: | [USDiface_master] masterOnly 1 [USDiface_master_1] masterOnly 1 [USDiface_master_2] masterOnly 1 [USDiface_master_3] masterOnly 1 [global] # # Default Data Set # twoStepFlag 1 priority1 128 priority2 128 domainNumber 24 #utc_offset 37 clockClass 6 clockAccuracy 0x27 offsetScaledLogVariance 0xFFFF free_running 0 freq_est_interval 1 dscp_event 0 dscp_general 0 dataset_comparison G.8275.x G.8275.defaultDS.localPriority 128 # # Port Data Set # logAnnounceInterval -3 logSyncInterval -4 logMinDelayReqInterval -4 logMinPdelayReqInterval 0 announceReceiptTimeout 3 syncReceiptTimeout 0 delayAsymmetry 0 fault_reset_interval -4 neighborPropDelayThresh 20000000 masterOnly 0 G.8275.portDS.localPriority 128 # # Run time options # assume_two_step 0 logging_level 6 path_trace_enabled 0 follow_up_info 0 hybrid_e2e 0 inhibit_multicast_service 0 net_sync_monitor 0 tc_spanning_tree 0 tx_timestamp_timeout 50 unicast_listen 0 unicast_master_table 0 unicast_req_duration 3600 use_syslog 1 verbose 0 summary_interval -4 kernel_leap 1 check_fup_sync 0 clock_class_threshold 7 # # Servo Options # pi_proportional_const 0.0 pi_integral_const 0.0 pi_proportional_scale 0.0 pi_proportional_exponent -0.3 pi_proportional_norm_max 0.7 pi_integral_scale 0.0 pi_integral_exponent 0.4 pi_integral_norm_max 0.3 step_threshold 2.0 first_step_threshold 0.00002 clock_servo pi sanity_freq_limit 200000000 ntpshm_segment 0 # # Transport options # transportSpecific 0x0 ptp_dst_mac 01:1B:19:00:00:00 p2p_dst_mac 01:80:C2:00:00:0E udp_ttl 1 udp6_scope 0x0E uds_address /var/run/ptp4l # # Default interface options # clock_type BC network_transport L2 delay_mechanism E2E time_stamping hardware tsproc_mode filter delay_filter moving_median delay_filter_length 10 egressLatency 0 ingressLatency 0 boundary_clock_jbod 0 # # Clock description # productDescription ;; revisionData ;; manufacturerIdentity 00:00:00 userDescription ; timeSource 0x20 recommend: - profile: "grandmaster" priority: 4 match: - nodeLabel: "node-role.kubernetes.io/USDmcp" PtpConfigSlave.yaml apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: ordinary namespace: openshift-ptp annotations: {} spec: profile: - name: "ordinary" # The interface name is hardware-specific interface: USDinterface ptp4lOpts: "-2 -s" phc2sysOpts: "-a -r -n 24" ptpSchedulingPolicy: SCHED_FIFO ptpSchedulingPriority: 10 ptpSettings: logReduce: "true" ptp4lConf: | [global] # # Default Data Set # twoStepFlag 1 slaveOnly 1 priority1 128 priority2 128 domainNumber 24 #utc_offset 37 clockClass 255 
clockAccuracy 0xFE offsetScaledLogVariance 0xFFFF free_running 0 freq_est_interval 1 dscp_event 0 dscp_general 0 dataset_comparison G.8275.x G.8275.defaultDS.localPriority 128 # # Port Data Set # logAnnounceInterval -3 logSyncInterval -4 logMinDelayReqInterval -4 logMinPdelayReqInterval -4 announceReceiptTimeout 3 syncReceiptTimeout 0 delayAsymmetry 0 fault_reset_interval -4 neighborPropDelayThresh 20000000 masterOnly 0 G.8275.portDS.localPriority 128 # # Run time options # assume_two_step 0 logging_level 6 path_trace_enabled 0 follow_up_info 0 hybrid_e2e 0 inhibit_multicast_service 0 net_sync_monitor 0 tc_spanning_tree 0 tx_timestamp_timeout 50 unicast_listen 0 unicast_master_table 0 unicast_req_duration 3600 use_syslog 1 verbose 0 summary_interval 0 kernel_leap 1 check_fup_sync 0 clock_class_threshold 7 # # Servo Options # pi_proportional_const 0.0 pi_integral_const 0.0 pi_proportional_scale 0.0 pi_proportional_exponent -0.3 pi_proportional_norm_max 0.7 pi_integral_scale 0.0 pi_integral_exponent 0.4 pi_integral_norm_max 0.3 step_threshold 2.0 first_step_threshold 0.00002 max_frequency 900000000 clock_servo pi sanity_freq_limit 200000000 ntpshm_segment 0 # # Transport options # transportSpecific 0x0 ptp_dst_mac 01:1B:19:00:00:00 p2p_dst_mac 01:80:C2:00:00:0E udp_ttl 1 udp6_scope 0x0E uds_address /var/run/ptp4l # # Default interface options # clock_type OC network_transport L2 delay_mechanism E2E time_stamping hardware tsproc_mode filter delay_filter moving_median delay_filter_length 10 egressLatency 0 ingressLatency 0 boundary_clock_jbod 0 # # Clock description # productDescription ;; revisionData ;; manufacturerIdentity 00:00:00 userDescription ; timeSource 0xA0 recommend: - profile: "ordinary" priority: 4 match: - nodeLabel: "node-role.kubernetes.io/USDmcp" PtpOperatorConfig.yaml apiVersion: ptp.openshift.io/v1 kind: PtpOperatorConfig metadata: name: default namespace: openshift-ptp annotations: {} spec: daemonNodeSelector: node-role.kubernetes.io/USDmcp: "" PtpOperatorConfigForEvent.yaml apiVersion: ptp.openshift.io/v1 kind: PtpOperatorConfig metadata: name: default namespace: openshift-ptp annotations: {} spec: daemonNodeSelector: node-role.kubernetes.io/USDmcp: "" ptpEventConfig: apiVersion: USDevent_api_version enableEventPublisher: true transportHost: "http://ptp-event-publisher-service-NODE_NAME.openshift-ptp.svc.cluster.local:9043" PtpSubscription.yaml --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: ptp-operator-subscription namespace: openshift-ptp annotations: {} spec: channel: "stable" name: ptp-operator source: redhat-operators-disconnected sourceNamespace: openshift-marketplace installPlanApproval: Manual status: state: AtLatestKnown PtpSubscriptionNS.yaml --- apiVersion: v1 kind: Namespace metadata: name: openshift-ptp annotations: workload.openshift.io/allowed: management labels: openshift.io/cluster-monitoring: "true" PtpSubscriptionOperGroup.yaml apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: ptp-operators namespace: openshift-ptp annotations: {} spec: targetNamespaces: - openshift-ptp AcceleratorsNS.yaml apiVersion: v1 kind: Namespace metadata: name: vran-acceleration-operators annotations: {} AcceleratorsOperGroup.yaml apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: vran-operators namespace: vran-acceleration-operators annotations: {} spec: targetNamespaces: - vran-acceleration-operators AcceleratorsSubscription.yaml apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: 
name: sriov-fec-subscription namespace: vran-acceleration-operators annotations: {} spec: channel: stable name: sriov-fec source: certified-operators sourceNamespace: openshift-marketplace installPlanApproval: Manual status: state: AtLatestKnown SriovFecClusterConfig.yaml apiVersion: sriovfec.intel.com/v2 kind: SriovFecClusterConfig metadata: name: config namespace: vran-acceleration-operators annotations: {} spec: drainSkip: USDdrainSkip # true if SNO, false by default priority: 1 nodeSelector: node-role.kubernetes.io/master: "" acceleratorSelector: pciAddress: USDpciAddress physicalFunction: pfDriver: "vfio-pci" vfDriver: "vfio-pci" vfAmount: 16 bbDevConfig: USDbbDevConfig #Recommended configuration for Intel ACC100 (Mount Bryce) FPGA here: https://github.com/smart-edge-open/openshift-operator/blob/main/spec/openshift-sriov-fec-operator.md#sample-cr-for-wireless-fec-acc100 #Recommended configuration for Intel N3000 FPGA here: https://github.com/smart-edge-open/openshift-operator/blob/main/spec/openshift-sriov-fec-operator.md#sample-cr-for-wireless-fec-n3000 SriovNetwork.yaml apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: "" namespace: openshift-sriov-network-operator annotations: {} spec: # resourceName: "" networkNamespace: openshift-sriov-network-operator # vlan: "" # spoofChk: "" # ipam: "" # linkState: "" # maxTxRate: "" # minTxRate: "" # vlanQoS: "" # trust: "" # capabilities: "" SriovNetworkNodePolicy.yaml apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: USDname namespace: openshift-sriov-network-operator annotations: {} spec: # The attributes for Mellanox/Intel based NICs as below. # deviceType: netdevice/vfio-pci # isRdma: true/false deviceType: USDdeviceType isRdma: USDisRdma nicSelector: # The exact physical function name must match the hardware used pfNames: [USDpfNames] nodeSelector: node-role.kubernetes.io/USDmcp: "" numVfs: USDnumVfs priority: USDpriority resourceName: USDresourceName SriovOperatorConfig.yaml apiVersion: sriovnetwork.openshift.io/v1 kind: SriovOperatorConfig metadata: name: default namespace: openshift-sriov-network-operator annotations: {} spec: configDaemonNodeSelector: "node-role.kubernetes.io/USDmcp": "" # Injector and OperatorWebhook pods can be disabled (set to "false") below # to reduce the number of management pods. It is recommended to start with the # webhook and injector pods enabled, and only disable them after verifying the # correctness of user manifests. # If the injector is disabled, containers using sr-iov resources must explicitly assign # them in the "requests"/"limits" section of the container spec, for example: # containers: # - name: my-sriov-workload-container # resources: # limits: # openshift.io/<resource_name>: "1" # requests: # openshift.io/<resource_name>: "1" enableInjector: false enableOperatorWebhook: false logLevel: 0 SriovOperatorConfigForSNO.yaml apiVersion: sriovnetwork.openshift.io/v1 kind: SriovOperatorConfig metadata: name: default namespace: openshift-sriov-network-operator annotations: {} spec: configDaemonNodeSelector: "node-role.kubernetes.io/USDmcp": "" # Injector and OperatorWebhook pods can be disabled (set to "false") below # to reduce the number of management pods. It is recommended to start with the # webhook and injector pods enabled, and only disable them after verifying the # correctness of user manifests. 
# If the injector is disabled, containers using sr-iov resources must explicitly assign # them in the "requests"/"limits" section of the container spec, for example: # containers: # - name: my-sriov-workload-container # resources: # limits: # openshift.io/<resource_name>: "1" # requests: # openshift.io/<resource_name>: "1" enableInjector: false enableOperatorWebhook: false # Disable drain is needed for Single Node Openshift disableDrain: true logLevel: 0 SriovSubscription.yaml apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: sriov-network-operator-subscription namespace: openshift-sriov-network-operator annotations: {} spec: channel: "stable" name: sriov-network-operator source: redhat-operators-disconnected sourceNamespace: openshift-marketplace installPlanApproval: Manual status: state: AtLatestKnown SriovSubscriptionNS.yaml apiVersion: v1 kind: Namespace metadata: name: openshift-sriov-network-operator annotations: workload.openshift.io/allowed: management SriovSubscriptionOperGroup.yaml apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: sriov-network-operators namespace: openshift-sriov-network-operator annotations: {} spec: targetNamespaces: - openshift-sriov-network-operator 3.2.4.4.2. Cluster tuning reference YAML example-sno.yaml # example-node1-bmh-secret & assisted-deployment-pull-secret need to be created under same namespace example-sno --- apiVersion: ran.openshift.io/v1 kind: SiteConfig metadata: name: "example-sno" namespace: "example-sno" spec: baseDomain: "example.com" pullSecretRef: name: "assisted-deployment-pull-secret" clusterImageSetNameRef: "openshift-4.16" sshPublicKey: "ssh-rsa AAAA..." clusters: - clusterName: "example-sno" networkType: "OVNKubernetes" # installConfigOverrides is a generic way of passing install-config # parameters through the siteConfig. The 'capabilities' field configures # the composable openshift feature. In this 'capabilities' setting, we # remove all the optional set of components. # Notes: # - OperatorLifecycleManager is needed for 4.15 and later # - NodeTuning is needed for 4.13 and later, not for 4.12 and earlier # - Ingress is needed for 4.16 and later installConfigOverrides: | { "capabilities": { "baselineCapabilitySet": "None", "additionalEnabledCapabilities": [ "NodeTuning", "OperatorLifecycleManager", "Ingress" ] } } # It is strongly recommended to include crun manifests as part of the additional install-time manifests for 4.13+. # The crun manifests can be obtained from source-crs/optional-extra-manifest/ and added to the git repo ie.sno-extra-manifest. 
# extraManifestPath: sno-extra-manifest clusterLabels: # These example cluster labels correspond to the bindingRules in the PolicyGenTemplate examples du-profile: "latest" # These example cluster labels correspond to the bindingRules in the PolicyGenTemplate examples in ../policygentemplates: # ../policygentemplates/common-ranGen.yaml will apply to all clusters with 'common: true' common: true # ../policygentemplates/group-du-sno-ranGen.yaml will apply to all clusters with 'group-du-sno: ""' group-du-sno: "" # ../policygentemplates/example-sno-site.yaml will apply to all clusters with 'sites: "example-sno"' # Normally this should match or contain the cluster name so it only applies to a single cluster sites: "example-sno" clusterNetwork: - cidr: 1001:1::/48 hostPrefix: 64 machineNetwork: - cidr: 1111:2222:3333:4444::/64 serviceNetwork: - 1001:2::/112 additionalNTPSources: - 1111:2222:3333:4444::2 # Initiates the cluster for workload partitioning. Setting specific reserved/isolated CPUSets is done via PolicyTemplate # please see Workload Partitioning Feature for a complete guide. cpuPartitioningMode: AllNodes # Optionally; This can be used to override the KlusterletAddonConfig that is created for this cluster: #crTemplates: # KlusterletAddonConfig: "KlusterletAddonConfigOverride.yaml" nodes: - hostName: "example-node1.example.com" role: "master" # Optionally; This can be used to configure desired BIOS setting on a host: #biosConfigRef: # filePath: "example-hw.profile" bmcAddress: "idrac-virtualmedia+https://[1111:2222:3333:4444::bbbb:1]/redfish/v1/Systems/System.Embedded.1" bmcCredentialsName: name: "example-node1-bmh-secret" bootMACAddress: "AA:BB:CC:DD:EE:11" # Use UEFISecureBoot to enable secure boot. bootMode: "UEFISecureBoot" rootDeviceHints: deviceName: "/dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0" # disk partition at `/var/lib/containers` with ignitionConfigOverride. Some values must be updated. 
See DiskPartitionContainer.md for more details ignitionConfigOverride: | { "ignition": { "version": "3.2.0" }, "storage": { "disks": [ { "device": "/dev/disk/by-id/wwn-0x6b07b250ebb9d0002a33509f24af1f62", "partitions": [ { "label": "var-lib-containers", "sizeMiB": 0, "startMiB": 250000 } ], "wipeTable": false } ], "filesystems": [ { "device": "/dev/disk/by-partlabel/var-lib-containers", "format": "xfs", "mountOptions": [ "defaults", "prjquota" ], "path": "/var/lib/containers", "wipeFilesystem": true } ] }, "systemd": { "units": [ { "contents": "# Generated by Butane\n[Unit]\nRequires=systemd-fsck@dev-disk-by\\x2dpartlabel-var\\x2dlib\\x2dcontainers.service\nAfter=systemd-fsck@dev-disk-by\\x2dpartlabel-var\\x2dlib\\x2dcontainers.service\n\n[Mount]\nWhere=/var/lib/containers\nWhat=/dev/disk/by-partlabel/var-lib-containers\nType=xfs\nOptions=defaults,prjquota\n\n[Install]\nRequiredBy=local-fs.target", "enabled": true, "name": "var-lib-containers.mount" } ] } } nodeNetwork: interfaces: - name: eno1 macAddress: "AA:BB:CC:DD:EE:11" config: interfaces: - name: eno1 type: ethernet state: up ipv4: enabled: false ipv6: enabled: true address: # For SNO sites with static IP addresses, the node-specific, # API and Ingress IPs should all be the same and configured on # the interface - ip: 1111:2222:3333:4444::aaaa:1 prefix-length: 64 dns-resolver: config: search: - example.com server: - 1111:2222:3333:4444::2 routes: config: - destination: ::/0 -hop-interface: eno1 -hop-address: 1111:2222:3333:4444::1 table-id: 254 ConsoleOperatorDisable.yaml apiVersion: operator.openshift.io/v1 kind: Console metadata: annotations: include.release.openshift.io/ibm-cloud-managed: "false" include.release.openshift.io/self-managed-high-availability: "false" include.release.openshift.io/single-node-developer: "false" release.openshift.io/create-only: "true" name: cluster spec: logLevel: Normal managementState: Removed operatorLogLevel: Normal 09-openshift-marketplace-ns.yaml # Taken from https://github.com/operator-framework/operator-marketplace/blob/53c124a3f0edfd151652e1f23c87dd39ed7646bb/manifests/01_namespace.yaml # Update it as the source evolves. 
apiVersion: v1 kind: Namespace metadata: annotations: openshift.io/node-selector: "" workload.openshift.io/allowed: "management" labels: openshift.io/cluster-monitoring: "true" pod-security.kubernetes.io/enforce: baseline pod-security.kubernetes.io/enforce-version: v1.25 pod-security.kubernetes.io/audit: baseline pod-security.kubernetes.io/audit-version: v1.25 pod-security.kubernetes.io/warn: baseline pod-security.kubernetes.io/warn-version: v1.25 name: "openshift-marketplace" DefaultCatsrc.yaml apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: default-cat-source namespace: openshift-marketplace annotations: target.workload.openshift.io/management: '{"effect": "PreferredDuringScheduling"}' spec: displayName: default-cat-source image: USDimageUrl publisher: Red Hat sourceType: grpc updateStrategy: registryPoll: interval: 1h status: connectionState: lastObservedState: READY DisableOLMPprof.yaml apiVersion: v1 kind: ConfigMap metadata: name: collect-profiles-config namespace: openshift-operator-lifecycle-manager annotations: {} data: pprof-config.yaml: | disabled: True DisconnectedICSP.yaml apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: name: disconnected-internal-icsp annotations: {} spec: # repositoryDigestMirrors: # - USDmirrors OperatorHub.yaml apiVersion: config.openshift.io/v1 kind: OperatorHub metadata: name: cluster annotations: {} spec: disableAllDefaultSources: true ReduceMonitoringFootprint.yaml apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring annotations: {} data: config.yaml: | alertmanagerMain: enabled: false telemeterClient: enabled: false prometheusK8s: retention: 24h DisableSnoNetworkDiag.yaml apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster annotations: {} spec: disableNetworkDiagnostics: true 3.2.4.4.3. Machine configuration reference YAML enable-crun-master.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: ContainerRuntimeConfig metadata: name: enable-crun-master spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/master: "" containerRuntimeConfig: defaultRuntime: crun enable-crun-worker.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: ContainerRuntimeConfig metadata: name: enable-crun-worker spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: "" containerRuntimeConfig: defaultRuntime: crun 99-crio-disable-wipe-master.yaml # Automatically generated by extra-manifests-builder # Do not make changes directly. apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 99-crio-disable-wipe-master spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,W2NyaW9dCmNsZWFuX3NodXRkb3duX2ZpbGUgPSAiIgo= mode: 420 path: /etc/crio/crio.conf.d/99-crio-disable-wipe.toml 99-crio-disable-wipe-worker.yaml # Automatically generated by extra-manifests-builder # Do not make changes directly. 
apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 99-crio-disable-wipe-worker spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,W2NyaW9dCmNsZWFuX3NodXRkb3duX2ZpbGUgPSAiIgo= mode: 420 path: /etc/crio/crio.conf.d/99-crio-disable-wipe.toml 06-kdump-master.yaml # Automatically generated by extra-manifests-builder # Do not make changes directly. apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 06-kdump-enable-master spec: config: ignition: version: 3.2.0 systemd: units: - enabled: true name: kdump.service kernelArguments: - crashkernel=512M 06-kdump-worker.yaml # Automatically generated by extra-manifests-builder # Do not make changes directly. apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 06-kdump-enable-worker spec: config: ignition: version: 3.2.0 systemd: units: - enabled: true name: kdump.service kernelArguments: - crashkernel=512M 01-container-mount-ns-and-kubelet-conf-master.yaml # Automatically generated by extra-manifests-builder # Do not make changes directly. apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: container-mount-namespace-and-kubelet-conf-master spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,IyEvYmluL2Jhc2gKCmRlYnVnKCkgewogIGVjaG8gJEAgPiYyCn0KCnVzYWdlKCkgewogIGVjaG8gVXNhZ2U6ICQoYmFzZW5hbWUgJDApIFVOSVQgW2VudmZpbGUgW3Zhcm5hbWVdXQogIGVjaG8KICBlY2hvIEV4dHJhY3QgdGhlIGNvbnRlbnRzIG9mIHRoZSBmaXJzdCBFeGVjU3RhcnQgc3RhbnphIGZyb20gdGhlIGdpdmVuIHN5c3RlbWQgdW5pdCBhbmQgcmV0dXJuIGl0IHRvIHN0ZG91dAogIGVjaG8KICBlY2hvICJJZiAnZW52ZmlsZScgaXMgcHJvdmlkZWQsIHB1dCBpdCBpbiB0aGVyZSBpbnN0ZWFkLCBhcyBhbiBlbnZpcm9ubWVudCB2YXJpYWJsZSBuYW1lZCAndmFybmFtZSciCiAgZWNobyAiRGVmYXVsdCAndmFybmFtZScgaXMgRVhFQ1NUQVJUIGlmIG5vdCBzcGVjaWZpZWQiCiAgZXhpdCAxCn0KClVOSVQ9JDEKRU5WRklMRT0kMgpWQVJOQU1FPSQzCmlmIFtbIC16ICRVTklUIHx8ICRVTklUID09ICItLWhlbHAiIHx8ICRVTklUID09ICItaCIgXV07IHRoZW4KICB1c2FnZQpmaQpkZWJ1ZyAiRXh0cmFjdGluZyBFeGVjU3RhcnQgZnJvbSAkVU5JVCIKRklMRT0kKHN5c3RlbWN0bCBjYXQgJFVOSVQgfCBoZWFkIC1uIDEpCkZJTEU9JHtGSUxFI1wjIH0KaWYgW1sgISAtZiAkRklMRSBdXTsgdGhlbgogIGRlYnVnICJGYWlsZWQgdG8gZmluZCByb290IGZpbGUgZm9yIHVuaXQgJFVOSVQgKCRGSUxFKSIKICBleGl0CmZpCmRlYnVnICJTZXJ2aWNlIGRlZmluaXRpb24gaXMgaW4gJEZJTEUiCkVYRUNTVEFSVD0kKHNlZCAtbiAtZSAnL15FeGVjU3RhcnQ9LipcXCQvLC9bXlxcXSQvIHsgcy9eRXhlY1N0YXJ0PS8vOyBwIH0nIC1lICcvXkV4ZWNTdGFydD0uKlteXFxdJC8geyBzL15FeGVjU3RhcnQ9Ly87IHAgfScgJEZJTEUpCgppZiBbWyAkRU5WRklMRSBdXTsgdGhlbgogIFZBUk5BTUU9JHtWQVJOQU1FOi1FWEVDU1RBUlR9CiAgZWNobyAiJHtWQVJOQU1FfT0ke0VYRUNTVEFSVH0iID4gJEVOVkZJTEUKZWxzZQogIGVjaG8gJEVYRUNTVEFSVApmaQo= mode: 493 path: /usr/local/bin/extractExecStart - contents: source: data:text/plain;charset=utf-8;base64,IyEvYmluL2Jhc2gKbnNlbnRlciAtLW1vdW50PS9ydW4vY29udGFpbmVyLW1vdW50LW5hbWVzcGFjZS9tbnQgIiRAIgo= mode: 493 path: /usr/local/bin/nsenterCmns systemd: units: - contents: | [Unit] Description=Manages a mount namespace that both kubelet and crio can use to share their container-specific mounts [Service] Type=oneshot RemainAfterExit=yes RuntimeDirectory=container-mount-namespace Environment=RUNTIME_DIRECTORY=%t/container-mount-namespace Environment=BIND_POINT=%t/container-mount-namespace/mnt ExecStartPre=bash 
-c "findmnt USD{RUNTIME_DIRECTORY} || mount --make-unbindable --bind USD{RUNTIME_DIRECTORY} USD{RUNTIME_DIRECTORY}" ExecStartPre=touch USD{BIND_POINT} ExecStart=unshare --mount=USD{BIND_POINT} --propagation slave mount --make-rshared / ExecStop=umount -R USD{RUNTIME_DIRECTORY} name: container-mount-namespace.service - dropins: - contents: | [Unit] Wants=container-mount-namespace.service After=container-mount-namespace.service [Service] ExecStartPre=/usr/local/bin/extractExecStart %n /%t/%N-execstart.env ORIG_EXECSTART EnvironmentFile=-/%t/%N-execstart.env ExecStart= ExecStart=bash -c "nsenter --mount=%t/container-mount-namespace/mnt \ USD{ORIG_EXECSTART}" name: 90-container-mount-namespace.conf name: crio.service - dropins: - contents: | [Unit] Wants=container-mount-namespace.service After=container-mount-namespace.service [Service] ExecStartPre=/usr/local/bin/extractExecStart %n /%t/%N-execstart.env ORIG_EXECSTART EnvironmentFile=-/%t/%N-execstart.env ExecStart= ExecStart=bash -c "nsenter --mount=%t/container-mount-namespace/mnt \ USD{ORIG_EXECSTART} --housekeeping-interval=30s" name: 90-container-mount-namespace.conf - contents: | [Service] Environment="OPENSHIFT_MAX_HOUSEKEEPING_INTERVAL_DURATION=60s" Environment="OPENSHIFT_EVICTION_MONITORING_PERIOD_DURATION=30s" name: 30-kubelet-interval-tuning.conf name: kubelet.service 01-container-mount-ns-and-kubelet-conf-worker.yaml # Automatically generated by extra-manifests-builder # Do not make changes directly. apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: container-mount-namespace-and-kubelet-conf-worker spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,IyEvYmluL2Jhc2gKCmRlYnVnKCkgewogIGVjaG8gJEAgPiYyCn0KCnVzYWdlKCkgewogIGVjaG8gVXNhZ2U6ICQoYmFzZW5hbWUgJDApIFVOSVQgW2VudmZpbGUgW3Zhcm5hbWVdXQogIGVjaG8KICBlY2hvIEV4dHJhY3QgdGhlIGNvbnRlbnRzIG9mIHRoZSBmaXJzdCBFeGVjU3RhcnQgc3RhbnphIGZyb20gdGhlIGdpdmVuIHN5c3RlbWQgdW5pdCBhbmQgcmV0dXJuIGl0IHRvIHN0ZG91dAogIGVjaG8KICBlY2hvICJJZiAnZW52ZmlsZScgaXMgcHJvdmlkZWQsIHB1dCBpdCBpbiB0aGVyZSBpbnN0ZWFkLCBhcyBhbiBlbnZpcm9ubWVudCB2YXJpYWJsZSBuYW1lZCAndmFybmFtZSciCiAgZWNobyAiRGVmYXVsdCAndmFybmFtZScgaXMgRVhFQ1NUQVJUIGlmIG5vdCBzcGVjaWZpZWQiCiAgZXhpdCAxCn0KClVOSVQ9JDEKRU5WRklMRT0kMgpWQVJOQU1FPSQzCmlmIFtbIC16ICRVTklUIHx8ICRVTklUID09ICItLWhlbHAiIHx8ICRVTklUID09ICItaCIgXV07IHRoZW4KICB1c2FnZQpmaQpkZWJ1ZyAiRXh0cmFjdGluZyBFeGVjU3RhcnQgZnJvbSAkVU5JVCIKRklMRT0kKHN5c3RlbWN0bCBjYXQgJFVOSVQgfCBoZWFkIC1uIDEpCkZJTEU9JHtGSUxFI1wjIH0KaWYgW1sgISAtZiAkRklMRSBdXTsgdGhlbgogIGRlYnVnICJGYWlsZWQgdG8gZmluZCByb290IGZpbGUgZm9yIHVuaXQgJFVOSVQgKCRGSUxFKSIKICBleGl0CmZpCmRlYnVnICJTZXJ2aWNlIGRlZmluaXRpb24gaXMgaW4gJEZJTEUiCkVYRUNTVEFSVD0kKHNlZCAtbiAtZSAnL15FeGVjU3RhcnQ9LipcXCQvLC9bXlxcXSQvIHsgcy9eRXhlY1N0YXJ0PS8vOyBwIH0nIC1lICcvXkV4ZWNTdGFydD0uKlteXFxdJC8geyBzL15FeGVjU3RhcnQ9Ly87IHAgfScgJEZJTEUpCgppZiBbWyAkRU5WRklMRSBdXTsgdGhlbgogIFZBUk5BTUU9JHtWQVJOQU1FOi1FWEVDU1RBUlR9CiAgZWNobyAiJHtWQVJOQU1FfT0ke0VYRUNTVEFSVH0iID4gJEVOVkZJTEUKZWxzZQogIGVjaG8gJEVYRUNTVEFSVApmaQo= mode: 493 path: /usr/local/bin/extractExecStart - contents: source: data:text/plain;charset=utf-8;base64,IyEvYmluL2Jhc2gKbnNlbnRlciAtLW1vdW50PS9ydW4vY29udGFpbmVyLW1vdW50LW5hbWVzcGFjZS9tbnQgIiRAIgo= mode: 493 path: /usr/local/bin/nsenterCmns systemd: units: - contents: | [Unit] Description=Manages a mount namespace that both kubelet and crio can use to share their container-specific mounts [Service] Type=oneshot RemainAfterExit=yes 
RuntimeDirectory=container-mount-namespace Environment=RUNTIME_DIRECTORY=%t/container-mount-namespace Environment=BIND_POINT=%t/container-mount-namespace/mnt ExecStartPre=bash -c "findmnt USD{RUNTIME_DIRECTORY} || mount --make-unbindable --bind USD{RUNTIME_DIRECTORY} USD{RUNTIME_DIRECTORY}" ExecStartPre=touch USD{BIND_POINT} ExecStart=unshare --mount=USD{BIND_POINT} --propagation slave mount --make-rshared / ExecStop=umount -R USD{RUNTIME_DIRECTORY} name: container-mount-namespace.service - dropins: - contents: | [Unit] Wants=container-mount-namespace.service After=container-mount-namespace.service [Service] ExecStartPre=/usr/local/bin/extractExecStart %n /%t/%N-execstart.env ORIG_EXECSTART EnvironmentFile=-/%t/%N-execstart.env ExecStart= ExecStart=bash -c "nsenter --mount=%t/container-mount-namespace/mnt \ USD{ORIG_EXECSTART}" name: 90-container-mount-namespace.conf name: crio.service - dropins: - contents: | [Unit] Wants=container-mount-namespace.service After=container-mount-namespace.service [Service] ExecStartPre=/usr/local/bin/extractExecStart %n /%t/%N-execstart.env ORIG_EXECSTART EnvironmentFile=-/%t/%N-execstart.env ExecStart= ExecStart=bash -c "nsenter --mount=%t/container-mount-namespace/mnt \ USD{ORIG_EXECSTART} --housekeeping-interval=30s" name: 90-container-mount-namespace.conf - contents: | [Service] Environment="OPENSHIFT_MAX_HOUSEKEEPING_INTERVAL_DURATION=60s" Environment="OPENSHIFT_EVICTION_MONITORING_PERIOD_DURATION=30s" name: 30-kubelet-interval-tuning.conf name: kubelet.service 99-sync-time-once-master.yaml # Automatically generated by extra-manifests-builder # Do not make changes directly. apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 99-sync-time-once-master spec: config: ignition: version: 3.2.0 systemd: units: - contents: | [Unit] Description=Sync time once After=network-online.target Wants=network-online.target [Service] Type=oneshot TimeoutStartSec=300 ExecCondition=/bin/bash -c 'systemctl is-enabled chronyd.service --quiet && exit 1 || exit 0' ExecStart=/usr/sbin/chronyd -n -f /etc/chrony.conf -q RemainAfterExit=yes [Install] WantedBy=multi-user.target enabled: true name: sync-time-once.service 99-sync-time-once-worker.yaml # Automatically generated by extra-manifests-builder # Do not make changes directly. apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 99-sync-time-once-worker spec: config: ignition: version: 3.2.0 systemd: units: - contents: | [Unit] Description=Sync time once After=network-online.target Wants=network-online.target [Service] Type=oneshot TimeoutStartSec=300 ExecCondition=/bin/bash -c 'systemctl is-enabled chronyd.service --quiet && exit 1 || exit 0' ExecStart=/usr/sbin/chronyd -n -f /etc/chrony.conf -q RemainAfterExit=yes [Install] WantedBy=multi-user.target enabled: true name: sync-time-once.service 03-sctp-machine-config-master.yaml # Automatically generated by extra-manifests-builder # Do not make changes directly. 
apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: load-sctp-module-master spec: config: ignition: version: 2.2.0 storage: files: - contents: source: data:, verification: {} filesystem: root mode: 420 path: /etc/modprobe.d/sctp-blacklist.conf - contents: source: data:text/plain;charset=utf-8,sctp filesystem: root mode: 420 path: /etc/modules-load.d/sctp-load.conf 03-sctp-machine-config-worker.yaml # Automatically generated by extra-manifests-builder # Do not make changes directly. apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: load-sctp-module-worker spec: config: ignition: version: 2.2.0 storage: files: - contents: source: data:, verification: {} filesystem: root mode: 420 path: /etc/modprobe.d/sctp-blacklist.conf - contents: source: data:text/plain;charset=utf-8,sctp filesystem: root mode: 420 path: /etc/modules-load.d/sctp-load.conf 08-set-rcu-normal-master.yaml # Automatically generated by extra-manifests-builder # Do not make changes directly. apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 08-set-rcu-normal-master spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,IyEvYmluL2Jhc2gKIwojIERpc2FibGUgcmN1X2V4cGVkaXRlZCBhZnRlciBub2RlIGhhcyBmaW5pc2hlZCBib290aW5nCiMKIyBUaGUgZGVmYXVsdHMgYmVsb3cgY2FuIGJlIG92ZXJyaWRkZW4gdmlhIGVudmlyb25tZW50IHZhcmlhYmxlcwojCgojIERlZmF1bHQgd2FpdCB0aW1lIGlzIDYwMHMgPSAxMG06Ck1BWElNVU1fV0FJVF9USU1FPSR7TUFYSU1VTV9XQUlUX1RJTUU6LTYwMH0KCiMgRGVmYXVsdCBzdGVhZHktc3RhdGUgdGhyZXNob2xkID0gMiUKIyBBbGxvd2VkIHZhbHVlczoKIyAgNCAgLSBhYnNvbHV0ZSBwb2QgY291bnQgKCsvLSkKIyAgNCUgLSBwZXJjZW50IGNoYW5nZSAoKy8tKQojICAtMSAtIGRpc2FibGUgdGhlIHN0ZWFkeS1zdGF0ZSBjaGVjawpTVEVBRFlfU1RBVEVfVEhSRVNIT0xEPSR7U1RFQURZX1NUQVRFX1RIUkVTSE9MRDotMiV9CgojIERlZmF1bHQgc3RlYWR5LXN0YXRlIHdpbmRvdyA9IDYwcwojIElmIHRoZSBydW5uaW5nIHBvZCBjb3VudCBzdGF5cyB3aXRoaW4gdGhlIGdpdmVuIHRocmVzaG9sZCBmb3IgdGhpcyB0aW1lCiMgcGVyaW9kLCByZXR1cm4gQ1BVIHV0aWxpemF0aW9uIHRvIG5vcm1hbCBiZWZvcmUgdGhlIG1heGltdW0gd2FpdCB0aW1lIGhhcwojIGV4cGlyZXMKU1RFQURZX1NUQVRFX1dJTkRPVz0ke1NURUFEWV9TVEFURV9XSU5ET1c6LTYwfQoKIyBEZWZhdWx0IHN0ZWFkeS1zdGF0ZSBhbGxvd3MgYW55IHBvZCBjb3VudCB0byBiZSAic3RlYWR5IHN0YXRlIgojIEluY3JlYXNpbmcgdGhpcyB3aWxsIHNraXAgYW55IHN0ZWFkeS1zdGF0ZSBjaGVja3MgdW50aWwgdGhlIGNvdW50IHJpc2VzIGFib3ZlCiMgdGhpcyBudW1iZXIgdG8gYXZvaWQgZmFsc2UgcG9zaXRpdmVzIGlmIHRoZXJlIGFyZSBzb21lIHBlcmlvZHMgd2hlcmUgdGhlCiMgY291bnQgZG9lc24ndCBpbmNyZWFzZSBidXQgd2Uga25vdyB3ZSBjYW4ndCBiZSBhdCBzdGVhZHktc3RhdGUgeWV0LgpTVEVBRFlfU1RBVEVfTUlOSU1VTT0ke1NURUFEWV9TVEFURV9NSU5JTVVNOi0wfQoKIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIwoKd2l0aGluKCkgewogIGxvY2FsIGxhc3Q9JDEgY3VycmVudD0kMiB0aHJlc2hvbGQ9JDMKICBsb2NhbCBkZWx0YT0wIHBjaGFuZ2UKICBkZWx0YT0kKCggY3VycmVudCAtIGxhc3QgKSkKICBpZiBbWyAkY3VycmVudCAtZXEgJGxhc3QgXV07IHRoZW4KICAgIHBjaGFuZ2U9MAogIGVsaWYgW1sgJGxhc3QgLWVxIDAgXV07IHRoZW4KICAgIHBjaGFuZ2U9MTAwMDAwMAogIGVsc2UKICAgIHBjaGFuZ2U9JCgoICggIiRkZWx0YSIgKiAxMDApIC8gbGFzdCApKQogIGZpCiAgZWNobyAtbiAibGFzdDokbGFzdCBjdXJyZW50OiRjdXJyZW50IGRlbHRhOiRkZWx0YSBwY2hhbmdlOiR7cGNoYW5nZX0lOiAiCiAgbG9jYWwgYWJzb2x1dGUgbGltaXQKICBjYXNlICR0aHJlc2hvbGQgaW4KICAgIColKQogICAgICBhYnNvbHV0ZT0ke3BjaGFuZ2UjIy19ICMgYWJzb2x1dGUgdmFsdWUKICAgICAgbGltaXQ9JHt0aHJlc2hvbGQlJSV9CiAgICAgIDs7CiAgICAqKQogICAgICBhYnNvbHV0ZT0ke2RlbHRhIyMtfSAjIGFic29sdXRlIHZhbHVlCiAgICAgI
GxpbWl0PSR0aHJlc2hvbGQKICAgICAgOzsKICBlc2FjCiAgaWYgW1sgJGFic29sdXRlIC1sZSAkbGltaXQgXV07IHRoZW4KICAgIGVjaG8gIndpdGhpbiAoKy8tKSR0aHJlc2hvbGQiCiAgICByZXR1cm4gMAogIGVsc2UKICAgIGVjaG8gIm91dHNpZGUgKCsvLSkkdGhyZXNob2xkIgogICAgcmV0dXJuIDEKICBmaQp9CgpzdGVhZHlzdGF0ZSgpIHsKICBsb2NhbCBsYXN0PSQxIGN1cnJlbnQ9JDIKICBpZiBbWyAkbGFzdCAtbHQgJFNURUFEWV9TVEFURV9NSU5JTVVNIF1dOyB0aGVuCiAgICBlY2hvICJsYXN0OiRsYXN0IGN1cnJlbnQ6JGN1cnJlbnQgV2FpdGluZyB0byByZWFjaCAkU1RFQURZX1NUQVRFX01JTklNVU0gYmVmb3JlIGNoZWNraW5nIGZvciBzdGVhZHktc3RhdGUiCiAgICByZXR1cm4gMQogIGZpCiAgd2l0aGluICIkbGFzdCIgIiRjdXJyZW50IiAiJFNURUFEWV9TVEFURV9USFJFU0hPTEQiCn0KCndhaXRGb3JSZWFkeSgpIHsKICBsb2dnZXIgIlJlY292ZXJ5OiBXYWl0aW5nICR7TUFYSU1VTV9XQUlUX1RJTUV9cyBmb3IgdGhlIGluaXRpYWxpemF0aW9uIHRvIGNvbXBsZXRlIgogIGxvY2FsIHQ9MCBzPTEwCiAgbG9jYWwgbGFzdENjb3VudD0wIGNjb3VudD0wIHN0ZWFkeVN0YXRlVGltZT0wCiAgd2hpbGUgW1sgJHQgLWx0ICRNQVhJTVVNX1dBSVRfVElNRSBdXTsgZG8KICAgIHNsZWVwICRzCiAgICAoKHQgKz0gcykpCiAgICAjIERldGVjdCBzdGVhZHktc3RhdGUgcG9kIGNvdW50CiAgICBjY291bnQ9JChjcmljdGwgcHMgMj4vZGV2L251bGwgfCB3YyAtbCkKICAgIGlmIFtbICRjY291bnQgLWd0IDAgXV0gJiYgc3RlYWR5c3RhdGUgIiRsYXN0Q2NvdW50IiAiJGNjb3VudCI7IHRoZW4KICAgICAgKChzdGVhZHlTdGF0ZVRpbWUgKz0gcykpCiAgICAgIGVjaG8gIlN0ZWFkeS1zdGF0ZSBmb3IgJHtzdGVhZHlTdGF0ZVRpbWV9cy8ke1NURUFEWV9TVEFURV9XSU5ET1d9cyIKICAgICAgaWYgW1sgJHN0ZWFkeVN0YXRlVGltZSAtZ2UgJFNURUFEWV9TVEFURV9XSU5ET1cgXV07IHRoZW4KICAgICAgICBsb2dnZXIgIlJlY292ZXJ5OiBTdGVhZHktc3RhdGUgKCsvLSAkU1RFQURZX1NUQVRFX1RIUkVTSE9MRCkgZm9yICR7U1RFQURZX1NUQVRFX1dJTkRPV31zOiBEb25lIgogICAgICAgIHJldHVybiAwCiAgICAgIGZpCiAgICBlbHNlCiAgICAgIGlmIFtbICRzdGVhZHlTdGF0ZVRpbWUgLWd0IDAgXV07IHRoZW4KICAgICAgICBlY2hvICJSZXNldHRpbmcgc3RlYWR5LXN0YXRlIHRpbWVyIgogICAgICAgIHN0ZWFkeVN0YXRlVGltZT0wCiAgICAgIGZpCiAgICBmaQogICAgbGFzdENjb3VudD0kY2NvdW50CiAgZG9uZQogIGxvZ2dlciAiUmVjb3Zlcnk6IFJlY292ZXJ5IENvbXBsZXRlIFRpbWVvdXQiCn0KCnNldFJjdU5vcm1hbCgpIHsKICBlY2hvICJTZXR0aW5nIHJjdV9ub3JtYWwgdG8gMSIKICBlY2hvIDEgPiAvc3lzL2tlcm5lbC9yY3Vfbm9ybWFsCn0KCm1haW4oKSB7CiAgd2FpdEZvclJlYWR5CiAgZWNobyAiV2FpdGluZyBmb3Igc3RlYWR5IHN0YXRlIHRvb2s6ICQoYXdrICd7cHJpbnQgaW50KCQxLzM2MDApImgiLCBpbnQoKCQxJTM2MDApLzYwKSJtIiwgaW50KCQxJTYwKSJzIn0nIC9wcm9jL3VwdGltZSkiCiAgc2V0UmN1Tm9ybWFsCn0KCmlmIFtbICIke0JBU0hfU09VUkNFWzBdfSIgPSAiJHswfSIgXV07IHRoZW4KICBtYWluICIke0B9IgogIGV4aXQgJD8KZmkK mode: 493 path: /usr/local/bin/set-rcu-normal.sh systemd: units: - contents: | [Unit] Description=Disable rcu_expedited after node has finished booting by setting rcu_normal to 1 [Service] Type=simple ExecStart=/usr/local/bin/set-rcu-normal.sh # Maximum wait time is 600s = 10m: Environment=MAXIMUM_WAIT_TIME=600 # Steady-state threshold = 2% # Allowed values: # 4 - absolute pod count (+/-) # 4% - percent change (+/-) # -1 - disable the steady-state check # Note: '%' must be escaped as '%%' in systemd unit files Environment=STEADY_STATE_THRESHOLD=2%% # Steady-state window = 120s # If the running pod count stays within the given threshold for this time # period, return CPU utilization to normal before the maximum wait time has # expires Environment=STEADY_STATE_WINDOW=120 # Steady-state minimum = 40 # Increasing this will skip any steady-state checks until the count rises above # this number to avoid false positives if there are some periods where the # count doesn't increase but we know we can't be at steady-state yet. Environment=STEADY_STATE_MINIMUM=40 [Install] WantedBy=multi-user.target enabled: true name: set-rcu-normal.service 08-set-rcu-normal-worker.yaml # Automatically generated by extra-manifests-builder # Do not make changes directly. 
apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 08-set-rcu-normal-worker spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,IyEvYmluL2Jhc2gKIwojIERpc2FibGUgcmN1X2V4cGVkaXRlZCBhZnRlciBub2RlIGhhcyBmaW5pc2hlZCBib290aW5nCiMKIyBUaGUgZGVmYXVsdHMgYmVsb3cgY2FuIGJlIG92ZXJyaWRkZW4gdmlhIGVudmlyb25tZW50IHZhcmlhYmxlcwojCgojIERlZmF1bHQgd2FpdCB0aW1lIGlzIDYwMHMgPSAxMG06Ck1BWElNVU1fV0FJVF9USU1FPSR7TUFYSU1VTV9XQUlUX1RJTUU6LTYwMH0KCiMgRGVmYXVsdCBzdGVhZHktc3RhdGUgdGhyZXNob2xkID0gMiUKIyBBbGxvd2VkIHZhbHVlczoKIyAgNCAgLSBhYnNvbHV0ZSBwb2QgY291bnQgKCsvLSkKIyAgNCUgLSBwZXJjZW50IGNoYW5nZSAoKy8tKQojICAtMSAtIGRpc2FibGUgdGhlIHN0ZWFkeS1zdGF0ZSBjaGVjawpTVEVBRFlfU1RBVEVfVEhSRVNIT0xEPSR7U1RFQURZX1NUQVRFX1RIUkVTSE9MRDotMiV9CgojIERlZmF1bHQgc3RlYWR5LXN0YXRlIHdpbmRvdyA9IDYwcwojIElmIHRoZSBydW5uaW5nIHBvZCBjb3VudCBzdGF5cyB3aXRoaW4gdGhlIGdpdmVuIHRocmVzaG9sZCBmb3IgdGhpcyB0aW1lCiMgcGVyaW9kLCByZXR1cm4gQ1BVIHV0aWxpemF0aW9uIHRvIG5vcm1hbCBiZWZvcmUgdGhlIG1heGltdW0gd2FpdCB0aW1lIGhhcwojIGV4cGlyZXMKU1RFQURZX1NUQVRFX1dJTkRPVz0ke1NURUFEWV9TVEFURV9XSU5ET1c6LTYwfQoKIyBEZWZhdWx0IHN0ZWFkeS1zdGF0ZSBhbGxvd3MgYW55IHBvZCBjb3VudCB0byBiZSAic3RlYWR5IHN0YXRlIgojIEluY3JlYXNpbmcgdGhpcyB3aWxsIHNraXAgYW55IHN0ZWFkeS1zdGF0ZSBjaGVja3MgdW50aWwgdGhlIGNvdW50IHJpc2VzIGFib3ZlCiMgdGhpcyBudW1iZXIgdG8gYXZvaWQgZmFsc2UgcG9zaXRpdmVzIGlmIHRoZXJlIGFyZSBzb21lIHBlcmlvZHMgd2hlcmUgdGhlCiMgY291bnQgZG9lc24ndCBpbmNyZWFzZSBidXQgd2Uga25vdyB3ZSBjYW4ndCBiZSBhdCBzdGVhZHktc3RhdGUgeWV0LgpTVEVBRFlfU1RBVEVfTUlOSU1VTT0ke1NURUFEWV9TVEFURV9NSU5JTVVNOi0wfQoKIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIwoKd2l0aGluKCkgewogIGxvY2FsIGxhc3Q9JDEgY3VycmVudD0kMiB0aHJlc2hvbGQ9JDMKICBsb2NhbCBkZWx0YT0wIHBjaGFuZ2UKICBkZWx0YT0kKCggY3VycmVudCAtIGxhc3QgKSkKICBpZiBbWyAkY3VycmVudCAtZXEgJGxhc3QgXV07IHRoZW4KICAgIHBjaGFuZ2U9MAogIGVsaWYgW1sgJGxhc3QgLWVxIDAgXV07IHRoZW4KICAgIHBjaGFuZ2U9MTAwMDAwMAogIGVsc2UKICAgIHBjaGFuZ2U9JCgoICggIiRkZWx0YSIgKiAxMDApIC8gbGFzdCApKQogIGZpCiAgZWNobyAtbiAibGFzdDokbGFzdCBjdXJyZW50OiRjdXJyZW50IGRlbHRhOiRkZWx0YSBwY2hhbmdlOiR7cGNoYW5nZX0lOiAiCiAgbG9jYWwgYWJzb2x1dGUgbGltaXQKICBjYXNlICR0aHJlc2hvbGQgaW4KICAgIColKQogICAgICBhYnNvbHV0ZT0ke3BjaGFuZ2UjIy19ICMgYWJzb2x1dGUgdmFsdWUKICAgICAgbGltaXQ9JHt0aHJlc2hvbGQlJSV9CiAgICAgIDs7CiAgICAqKQogICAgICBhYnNvbHV0ZT0ke2RlbHRhIyMtfSAjIGFic29sdXRlIHZhbHVlCiAgICAgIGxpbWl0PSR0aHJlc2hvbGQKICAgICAgOzsKICBlc2FjCiAgaWYgW1sgJGFic29sdXRlIC1sZSAkbGltaXQgXV07IHRoZW4KICAgIGVjaG8gIndpdGhpbiAoKy8tKSR0aHJlc2hvbGQiCiAgICByZXR1cm4gMAogIGVsc2UKICAgIGVjaG8gIm91dHNpZGUgKCsvLSkkdGhyZXNob2xkIgogICAgcmV0dXJuIDEKICBmaQp9CgpzdGVhZHlzdGF0ZSgpIHsKICBsb2NhbCBsYXN0PSQxIGN1cnJlbnQ9JDIKICBpZiBbWyAkbGFzdCAtbHQgJFNURUFEWV9TVEFURV9NSU5JTVVNIF1dOyB0aGVuCiAgICBlY2hvICJsYXN0OiRsYXN0IGN1cnJlbnQ6JGN1cnJlbnQgV2FpdGluZyB0byByZWFjaCAkU1RFQURZX1NUQVRFX01JTklNVU0gYmVmb3JlIGNoZWNraW5nIGZvciBzdGVhZHktc3RhdGUiCiAgICByZXR1cm4gMQogIGZpCiAgd2l0aGluICIkbGFzdCIgIiRjdXJyZW50IiAiJFNURUFEWV9TVEFURV9USFJFU0hPTEQiCn0KCndhaXRGb3JSZWFkeSgpIHsKICBsb2dnZXIgIlJlY292ZXJ5OiBXYWl0aW5nICR7TUFYSU1VTV9XQUlUX1RJTUV9cyBmb3IgdGhlIGluaXRpYWxpemF0aW9uIHRvIGNvbXBsZXRlIgogIGxvY2FsIHQ9MCBzPTEwCiAgbG9jYWwgbGFzdENjb3VudD0wIGNjb3VudD0wIHN0ZWFkeVN0YXRlVGltZT0wCiAgd2hpbGUgW1sgJHQgLWx0ICRNQVhJTVVNX1dBSVRfVElNRSBdXTsgZG8KICAgIHNsZWVwICRzCiAgICAoKHQgKz0gcykpCiAgICAjIERldGVjdCBzdGVhZHktc3RhdGUgcG9kIGNvdW50CiAgICBjY291bnQ9JChjcmljdGwgcHMgMj4vZGV2L251bGwgfCB3YyAtbCkKICAgIGlmIFtbICRjY291bnQgLWd0IDAgXV0gJiYgc3RlYWR5c3RhdGUgIiRsYXN0Q2NvdW50IiAiJGNjb3VudCI7IHRoZW4KICAgICAgKC
hzdGVhZHlTdGF0ZVRpbWUgKz0gcykpCiAgICAgIGVjaG8gIlN0ZWFkeS1zdGF0ZSBmb3IgJHtzdGVhZHlTdGF0ZVRpbWV9cy8ke1NURUFEWV9TVEFURV9XSU5ET1d9cyIKICAgICAgaWYgW1sgJHN0ZWFkeVN0YXRlVGltZSAtZ2UgJFNURUFEWV9TVEFURV9XSU5ET1cgXV07IHRoZW4KICAgICAgICBsb2dnZXIgIlJlY292ZXJ5OiBTdGVhZHktc3RhdGUgKCsvLSAkU1RFQURZX1NUQVRFX1RIUkVTSE9MRCkgZm9yICR7U1RFQURZX1NUQVRFX1dJTkRPV31zOiBEb25lIgogICAgICAgIHJldHVybiAwCiAgICAgIGZpCiAgICBlbHNlCiAgICAgIGlmIFtbICRzdGVhZHlTdGF0ZVRpbWUgLWd0IDAgXV07IHRoZW4KICAgICAgICBlY2hvICJSZXNldHRpbmcgc3RlYWR5LXN0YXRlIHRpbWVyIgogICAgICAgIHN0ZWFkeVN0YXRlVGltZT0wCiAgICAgIGZpCiAgICBmaQogICAgbGFzdENjb3VudD0kY2NvdW50CiAgZG9uZQogIGxvZ2dlciAiUmVjb3Zlcnk6IFJlY292ZXJ5IENvbXBsZXRlIFRpbWVvdXQiCn0KCnNldFJjdU5vcm1hbCgpIHsKICBlY2hvICJTZXR0aW5nIHJjdV9ub3JtYWwgdG8gMSIKICBlY2hvIDEgPiAvc3lzL2tlcm5lbC9yY3Vfbm9ybWFsCn0KCm1haW4oKSB7CiAgd2FpdEZvclJlYWR5CiAgZWNobyAiV2FpdGluZyBmb3Igc3RlYWR5IHN0YXRlIHRvb2s6ICQoYXdrICd7cHJpbnQgaW50KCQxLzM2MDApImgiLCBpbnQoKCQxJTM2MDApLzYwKSJtIiwgaW50KCQxJTYwKSJzIn0nIC9wcm9jL3VwdGltZSkiCiAgc2V0UmN1Tm9ybWFsCn0KCmlmIFtbICIke0JBU0hfU09VUkNFWzBdfSIgPSAiJHswfSIgXV07IHRoZW4KICBtYWluICIke0B9IgogIGV4aXQgJD8KZmkK mode: 493 path: /usr/local/bin/set-rcu-normal.sh systemd: units: - contents: | [Unit] Description=Disable rcu_expedited after node has finished booting by setting rcu_normal to 1 [Service] Type=simple ExecStart=/usr/local/bin/set-rcu-normal.sh # Maximum wait time is 600s = 10m: Environment=MAXIMUM_WAIT_TIME=600 # Steady-state threshold = 2% # Allowed values: # 4 - absolute pod count (+/-) # 4% - percent change (+/-) # -1 - disable the steady-state check # Note: '%' must be escaped as '%%' in systemd unit files Environment=STEADY_STATE_THRESHOLD=2%% # Steady-state window = 120s # If the running pod count stays within the given threshold for this time # period, return CPU utilization to normal before the maximum wait time has # expires Environment=STEADY_STATE_WINDOW=120 # Steady-state minimum = 40 # Increasing this will skip any steady-state checks until the count rises above # this number to avoid false positives if there are some periods where the # count doesn't increase but we know we can't be at steady-state yet. Environment=STEADY_STATE_MINIMUM=40 [Install] WantedBy=multi-user.target enabled: true name: set-rcu-normal.service 07-sriov-related-kernel-args-master.yaml # Automatically generated by extra-manifests-builder # Do not make changes directly. apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 07-sriov-related-kernel-args-master spec: config: ignition: version: 3.2.0 kernelArguments: - intel_iommu=on - iommu=pt 07-sriov-related-kernel-args-worker.yaml # Automatically generated by extra-manifests-builder # Do not make changes directly. apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 07-sriov-related-kernel-args-worker spec: config: ignition: version: 3.2.0 kernelArguments: - intel_iommu=on - iommu=pt 3.2.5. Telco RAN DU reference configuration software specifications The following information describes the telco RAN DU reference design specification (RDS) validated software versions. 3.2.5.1. Telco RAN DU 4.17 validated software components The Red Hat telco RAN DU 4.17 solution has been validated using the following Red Hat software products for OpenShift Container Platform managed clusters and hub clusters. Table 3.7. 
Telco RAN DU managed cluster validated software components
Component | Software version
Managed cluster version | 4.17
Cluster Logging Operator | 6.0
Local Storage Operator | 4.17
OpenShift API for Data Protection (OADP) | 1.4.1
PTP Operator | 4.17
SRIOV Operator | 4.17
SRIOV-FEC Operator | 2.9
Lifecycle Agent | 4.17
Table 3.8. Hub cluster validated software components
Component | Software version
Hub cluster version | 4.17
Red Hat Advanced Cluster Management (RHACM) | 2.11
GitOps ZTP plugin | 4.17
Red Hat OpenShift GitOps | 1.13
Topology Aware Lifecycle Manager (TALM) | 4.17
3.3. Telco core reference design specification
3.3.1. Telco core 4.17 reference design overview
The telco core reference design specification (RDS) configures an OpenShift Container Platform cluster running on commodity hardware to host telco core workloads.
3.3.1.1. Telco core cluster service-based architecture and networking topology
The Telco core reference design specification (RDS) describes a platform that supports large-scale telco applications, including control plane functions such as signaling and aggregation. It also includes some centralized data plane functions, for example, user plane functions (UPF). These functions generally require scalability, complex networking support, resilient software-defined storage, and performance requirements that are less stringent and constrained than those of far-edge deployments such as RAN.
Figure 3.3. Telco core cluster service-based architecture and networking topology
The telco core cluster service-based architecture consists of the following components:
Network data analytics functions ( NWDAF )
Network slice selection functions ( NSSF )
Authentication server functions ( AUSF )
Unified data management ( UDM )
Network repository functions ( NRF )
Network exposure functions ( NEF )
Application functions ( AF )
Access and mobility functions ( AMF )
Session management functions ( SMF )
Policy control functions ( PCF )
Charging functions ( CHF )
User equipment ( UE )
Radio access network ( RAN )
User plane functions ( UPF )
Data plane networking ( DN )
3.3.2. Telco core 4.17 use model overview
Telco core clusters are configured as standard clusters with three control plane nodes and with worker nodes that run the stock non-real-time (RT) kernel. To support workloads with varying networking and performance requirements, worker nodes are segmented using MachineConfigPool CRs. For example, this is done to separate non-user data plane nodes from high-throughput nodes. To support the required telco operational features, the clusters have a standard set of Operator Lifecycle Manager (OLM) Day 2 Operators installed.
The networking prerequisites for telco core functions are diverse and encompass an array of networking attributes and performance benchmarks. IPv6 is mandatory, with dual-stack configurations being prevalent. Certain functions demand maximum throughput and transaction rates, necessitating user plane networking support such as DPDK. Other functions adhere to conventional cloud-native patterns and can use solutions such as OVN-Kubernetes, kernel networking, and load balancing.
Telco core use model architecture
3.3.2.1. Common baseline model
The following configurations and use model description are applicable to all telco core use cases.
Cluster
The cluster conforms to these requirements:
High-availability (3+ supervisor nodes) control plane
Non-schedulable supervisor nodes
Multiple MachineConfigPool resources
Storage
Core use cases require persistent storage as provided by external OpenShift Data Foundation.
For more information, see the "Storage" subsection in "Reference core design components".
Networking
Telco core cluster networking conforms to these requirements:
Dual stack IPv4/IPv6
Fully disconnected: Clusters do not have access to public networking at any point in their lifecycle.
Multiple networks: Segmented networking provides isolation between OAM, signaling, and storage traffic.
Cluster network type: OVN-Kubernetes is required for IPv6 support.
Core clusters have multiple layers of networking supported by underlying RHCOS, SR-IOV Operator, Load Balancer, and other components detailed in the following "Networking" section. At a high level these layers include:
Cluster networking: The cluster network configuration is defined and applied through the installation configuration. Updates to the configuration can be done at Day 2 through the NMState Operator. Initial configuration can be used to establish:
Host interface configuration
Active/Active Bonding (Link Aggregation Control Protocol (LACP))
Secondary or additional networks: OpenShift CNI is configured through the Network additionalNetworks or NetworkAttachmentDefinition CRs.
MACVLAN
Application Workload: User plane networking runs in cloud-native network functions (CNFs).
Service Mesh
Use of Service Mesh by telco CNFs is very common. It is expected that all core clusters will include a Service Mesh implementation. Service Mesh implementation and configuration is outside the scope of this specification.
3.3.2.1.1. Telco core RDS engineering considerations
The following engineering considerations are relevant for the telco core common use model.
Worker nodes
Worker nodes should run on Intel 3rd Generation Xeon (IceLake) processors or newer.
Note Alternatively, if your worker nodes have Skylake or earlier processors, you must disable the mitigations for silicon security vulnerabilities such as Spectre. Failure to do so can result in a 40% decrease in transaction performance.
Enable IRQ Balancing for worker nodes. Set the globallyDisableIrqLoadBalancing field in the PerformanceProfile custom resource (CR) to false .
Annotate pods with QoS class of Guaranteed to ensure that they are isolated. See "CPU partitioning and performance tuning" for more information.
All nodes in the cluster
Enable Hyper-Threading for all nodes.
Ensure CPU architecture is x86_64 only.
Ensure that nodes are running the stock (non-RT) kernel.
Ensure that nodes are not configured for workload partitioning.
Power management and performance
The balance between power management and maximum performance varies between the MachineConfigPool resources in the cluster.
Cluster scaling
Scale the number of cluster nodes to at least 120 nodes.
CPU partitioning
CPU partitioning is configured using PerformanceProfile CRs, one for every MachineConfigPool CR in the cluster. See "CPU partitioning and performance tuning" for more information.
Additional resources
CPU partitioning and performance tuning
3.3.2.1.2. Application workloads
Application workloads running on core clusters might include a mix of high-performance networking CNFs and traditional best-effort or burstable pod workloads. Guaranteed QoS scheduling is available to pods that require exclusive or dedicated use of CPUs due to performance or security requirements. Pods that host high-performance, low-latency-sensitive cloud-native network functions (CNFs) using user plane networking with DPDK typically require exclusive use of entire CPUs.
This is accomplished through node tuning and guaranteed Quality of Service (QoS) scheduling. For pods that require exclusive use of CPUs, be aware of the potential implications of hyperthreaded systems and configure them to request multiples of 2 CPUs when the entire core (2 hyperthreads) must be allocated to the pod. Pods running network functions that do not require high-throughput, low-latency networking are typically scheduled with best-effort or burstable QoS and do not require dedicated or isolated CPU cores.
Workload limits
CNF applications should conform to the latest version of the Red Hat Best Practices for Kubernetes guide.
Use a mix of best-effort and burstable QoS pods. Guaranteed QoS pods might be used but require correct configuration of reserved and isolated CPUs in the PerformanceProfile .
Guaranteed QoS pods must include annotations for fully isolating CPUs.
Best effort and burstable pods are not guaranteed exclusive use of a CPU. Workloads might be preempted by other workloads, operating system daemons, or kernel tasks.
Exec probes should be avoided unless there is no viable alternative. Do not use exec probes if a CNF is using CPU pinning. Other probe implementations, for example httpGet/tcpSocket , should be used.
Note Startup probes require minimal resources during steady-state operation. The limitation on exec probes applies primarily to liveness and readiness probes.
Signaling workload
Signaling workloads typically use SCTP, REST, gRPC, or similar TCP or UDP protocols. Transaction rates are on the order of hundreds of thousands of transactions per second (TPS) using a secondary CNI (Multus) configured as MACVLAN or SR-IOV. Signaling workloads run in pods with either guaranteed or burstable QoS.
3.3.3. Telco core reference design components
The following sections describe the various OpenShift Container Platform components and configurations that you use to configure and deploy clusters to run telco core workloads.
3.3.3.1. CPU partitioning and performance tuning
New in this release
No reference design updates in this release
Description
CPU partitioning allows for the separation of sensitive workloads from general-purpose workloads, auxiliary processes, interrupts, and driver work queues to achieve improved performance and latency.
Limits and requirements
The operating system needs a certain amount of CPU to perform all the support tasks, including kernel networking. A system with just user plane networking applications (DPDK) needs at least one core (2 hyperthreads when enabled) reserved for the operating system and the infrastructure components.
A system with Hyper-Threading enabled must always put all core sibling threads to the same pool of CPUs.
The set of reserved and isolated cores must include all CPU cores.
Core 0 of each NUMA node must be included in the reserved CPU set.
Isolated cores might be impacted by interrupts.
Specific annotations must be attached to the pod if guaranteed QoS pods require full use of the CPU. When per-pod power management is enabled with PerformanceProfile.workloadHints.perPodPowerManagement , additional annotations must also be attached to the pod. See the annotation sketch below.
Engineering considerations
The minimum reserved capacity ( systemReserved ) required can be found by following the guidance in "Which amount of CPU and memory are recommended to reserve for the system in OpenShift 4 nodes?" The actual required reserved CPU capacity depends on the cluster configuration and workload attributes.
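For reference, the pod annotations referred to under "Limits and requirements" above typically look like the following minimal sketch. The annotation names are taken from the OpenShift low-latency tuning documentation; the pod name, image, runtime class name, and resource values are placeholders, not reference values, and the exact annotation set should be verified against your OpenShift Container Platform release.
apiVersion: v1
kind: Pod
metadata:
  name: example-guaranteed-dpdk-pod   # placeholder name
  annotations:
    cpu-load-balancing.crio.io: "disable"
    cpu-quota.crio.io: "disable"
    irq-load-balancing.crio.io: "disable"
    # The next two annotations apply only when perPodPowerManagement is enabled in the PerformanceProfile:
    cpu-c-states.crio.io: "disable"
    cpu-freq-governor.crio.io: "performance"
spec:
  runtimeClassName: performance-<performance-profile-name>   # created by the performance profile
  containers:
  - name: cnf-app
    image: <registry>/<dpdk-application>:<tag>   # placeholder image
    resources:
      requests:
        cpu: "4"          # whole CPUs, in multiples of 2 on hyperthreaded hosts
        memory: 16Gi
      limits:
        cpu: "4"          # requests equal to limits with integral CPUs gives Guaranteed QoS
        memory: 16Gi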
This reserved CPU value must be rounded up to a full core (2 hyper-thread) alignment. Changes to the CPU partitioning will drain and reboot the nodes in the MCP. The reserved CPUs reduce the pod density, as the reserved CPUs are removed from the allocatable capacity of the OpenShift node. The real-time workload hint should be enabled if the workload is real-time capable. Hardware without Interrupt Request (IRQ) affinity support will impact isolated CPUs. To ensure that pods with guaranteed CPU QoS have full use of allocated CPU, all hardware in the server must support IRQ affinity. OVS dynamically manages its cpuset configuration to adapt to network traffic needs. You do not need to reserve additional CPUs for handling high network throughput on the primary CNI. If workloads running on the cluster require cgroups v1, you can configure nodes to use cgroups v1 as part of the initial cluster deployment. For more information, see "Enabling Linux cgroup v1 during installation". Additional resources Creating a performance profile Configuring host firmware for low latency and high performance Enabling Linux cgroup v1 during installation 3.3.3.2. Service Mesh Description Telco core cloud-native functions (CNFs) typically require a service mesh implementation. Note Specific service mesh features and performance requirements are dependent on the application. The selection of service mesh implementation and configuration is outside the scope of this documentation. You must account for the impact of service mesh on cluster resource usage and performance, including additional latency introduced in pod networking, in your implementation. Additional resources About OpenShift Service Mesh 3.3.3.3. Networking New in this release Telco core validation is now extended with bonding, MACVLAN, IPVLAN and SR-IOV networking scenarios. Description The cluster is configured in dual-stack IP configuration (IPv4 and IPv6). The validated physical network configuration consists of two dual-port NICs. One NIC is shared among the primary CNI (OVN-Kubernetes) and IPVLAN and MACVLAN traffic, the second NIC is dedicated to SR-IOV VF-based Pod traffic. A Linux bonding interface ( bond0 ) is created in an active-active LACP IEEE 802.3ad configuration with the two NIC ports attached. Note The top-of-rack networking equipment must support and be configured for multi-chassis link aggregation (mLAG) technology. VLAN interfaces are created on top of bond0 , including for the primary CNI. Bond and VLAN interfaces are created at install time during network configuration. Apart from the VLAN ( VLAN0 ) used by the primary CNI, the other VLANS can be created on Day 2 using the Kubernetes NMState Operator. MACVLAN and IPVLAN interfaces are created with their corresponding CNIs. They do not share the same base interface. SR-IOV VFs are managed by the SR-IOV Network Operator. The following diagram provides an overview of SR-IOV NIC sharing: Figure 3.4. SR-IOV NIC sharing Additional resources Understanding networking 3.3.3.4. Cluster Network Operator New in this release No reference design updates in this release Description The Cluster Network Operator (CNO) deploys and manages the cluster network components including the default OVN-Kubernetes network plugin during OpenShift Container Platform cluster installation. It allows configuring primary interface MTU settings, OVN gateway modes to use node routing tables for pod egress, and additional secondary networks such as MACVLAN. 
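As an illustration of the gateway mode mentioned in the description above, the following is a minimal, hedged sketch of the cluster Network CR with local gateway mode enabled so that pod egress traffic uses the host routing table (the routingViaHost option also referenced in the "Load balancer" engineering considerations). It shows only the relevant stanza and is not the complete reference CR.
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    type: OVNKubernetes
    ovnKubernetesConfig:
      gatewayConfig:
        routingViaHost: true   # pod egress is routed through the host kernel routing table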
Limits and requirements OVN-Kubernetes is required for IPv6 support. Large MTU cluster support requires connected network equipment to be set to the same or larger value. MACVLAN and IPVLAN cannot co-locate on the same main interface due to their reliance on the same underlying kernel mechanism, specifically the rx_handler . This handler allows a third-party module to process incoming packets before the host processes them, and only one such handler can be registered per network interface. Since both MACVLAN and IPVLAN need to register their own rx_handler to function, they conflict and cannot coexist on the same interface. See ipvlan/ipvlan_main.c#L82 and net/macvlan.c#L1260 for details. Alternative NIC configurations include splitting the shared NIC into multiple NICs or using a single dual-port NIC. Important Splitting the shared NIC into multiple NICs or using a single dual-port NIC has not been validated with the telco core reference design. Single-stack IP cluster not validated. Engineering considerations Pod egress traffic is handled by kernel routing table with the routingViaHost option. Appropriate static routes must be configured in the host. Additional resources Cluster Network Operator 3.3.3.5. Load balancer New in this release In OpenShift Container Platform 4.17, frr-k8s is now the default and fully supported Border Gateway Protocol (BGP) backend. The deprecated frr BGP mode is still available. You should upgrade clusters to use the frr-k8s backend. Description MetalLB is a load-balancer implementation that uses standard routing protocols for bare-metal clusters. It enables a Kubernetes service to get an external IP address which is also added to the host network for the cluster. Note Some use cases might require features not available in MetalLB, for example stateful load balancing. Where necessary, use an external third party load balancer. Selection and configuration of an external load balancer is outside the scope of this document. When you use an external third party load balancer, ensure that it meets all performance and resource utilization requirements. Limits and requirements Stateful load balancing is not supported by MetalLB. An alternate load balancer implementation must be used if this is a requirement for workload CNFs. The networking infrastructure must ensure that the external IP address is routable from clients to the host network for the cluster. Engineering considerations MetalLB is used in BGP mode only for core use case models. For core use models, MetalLB is supported with only the OVN-Kubernetes network provider used in local gateway mode. See routingViaHost in the "Cluster Network Operator" section. BGP configuration in MetalLB varies depending on the requirements of the network and peers. Address pools can be configured as needed, allowing variation in addresses, aggregation length, auto assignment, and other relevant parameters. MetalLB uses BGP for announcing routes only. Only the transmitInterval and minimumTtl parameters are relevant in this mode. Other parameters in the BFD profile should remain close to the default settings. Shorter values might lead to errors and impact performance. Additional resources When to use MetalLB 3.3.3.6. SR-IOV New in this release No reference design updates in this release Description SR-IOV enables physical network interfaces (PFs) to be divided into multiple virtual functions (VFs). VFs can then be assigned to multiple pods to achieve higher throughput performance while keeping the pods isolated. 
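To make the PF-to-VF partitioning just described concrete, here is a hedged sketch of an SriovNetworkNodePolicy that creates VFs on one physical function and exposes them as an allocatable resource. The policy name, interface name, VF count, and device type are illustrative placeholders, not values from the reference configuration.
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: example-vfio-policy            # placeholder name
  namespace: openshift-sriov-network-operator
spec:
  resourceName: dpdk_nic_1             # exposed as openshift.io/dpdk_nic_1 on matching nodes
  nodeSelector:
    node-role.kubernetes.io/worker: ""
  numVfs: 16                           # placeholder VF count
  nicSelector:
    pfNames: ["ens3f0"]                # placeholder PF name
  deviceType: vfio-pci                 # use netdevice for kernel networking workloads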
The SR-IOV Network Operator provisions and manages SR-IOV CNI, network device plugin, and other components of the SR-IOV stack. Limits and requirements Supported network interface controllers are listed in "Supported devices". The SR-IOV Network Operator automatically enables IOMMU on the kernel command line. SR-IOV VFs do not receive link state updates from PF. If link down detection is needed, it must be done at the protocol level. MultiNetworkPolicy CRs can be applied to netdevice networks only. This is because the implementation uses the iptables tool, which cannot manage vfio interfaces. Engineering considerations SR-IOV interfaces in vfio mode are typically used to enable additional secondary networks for applications that require high throughput or low latency. If you exclude the SriovOperatorConfig CR from your deployment, the CR will not be created automatically. NICs that do not support firmware updates under secure boot or kernel lock-down must be pre-configured with enough VFs enabled to support the number of VFs needed by the application workload. Note The SR-IOV Network Operator plugin for these NICs might need to be disabled using the undocumented disablePlugins option. Additional resources About Single Root I/O Virtualization (SR-IOV) hardware networks Supported devices 3.3.3.7. NMState Operator New in this release No reference design updates in this release Description The NMState Operator provides a Kubernetes API for performing network configurations across cluster nodes. Limits and requirements Not applicable Engineering considerations The initial networking configuration is applied using NMStateConfig content in the installation CRs. The NMState Operator is used only when needed for network updates. When SR-IOV virtual functions are used for host networking, the NMState Operator using NodeNetworkConfigurationPolicy is used to configure those VF interfaces, for example, VLANs and the MTU. Additional resources Kubernetes NMState Operator 3.3.3.8. Logging New in this release Cluster Logging Operator 6.0 is new in this release. Update your existing implementation to adapt to the new version of the API. Description The Cluster Logging Operator enables collection and shipping of logs off the node for remote archival and analysis. The reference configuration ships audit and infrastructure logs to a remote archive by using Kafka. Limits and requirements Not applicable Engineering considerations The impact of cluster CPU use is based on the number or size of logs generated and the amount of log filtering configured. The reference configuration does not include shipping of application logs. Inclusion of application logs in the configuration requires evaluation of the application logging rate and sufficient additional CPU resources allocated to the reserved set. Additional resources About logging 3.3.3.9. Power Management New in this release No reference design updates in this release Description Use the Performance Profile to configure clusters with high power mode, low power mode, or mixed mode. The choice of power mode depends on the characteristics of the workloads running on the cluster, particularly how sensitive they are to latency. Limits and requirements Power configuration relies on appropriate BIOS configuration, for example, enabling C-states and P-states. Configuration varies between hardware vendors. 
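The choice between high power, low power, and mixed power modes described above is expressed through the workloadHints stanza of the PerformanceProfile CR. The following is a minimal, hedged sketch of a mixed-mode configuration with per-pod power management; the profile name, node selector, and CPU ranges are placeholders that must be sized for your hardware, and the full reference PerformanceProfile.yaml contains additional fields.
apiVersion: performance.openshift.io/v2
kind: PerformanceProfile
metadata:
  name: example-core-profile           # placeholder name
spec:
  cpu:
    reserved: "0-3,32-35"              # placeholder: cores reserved for OS and infrastructure
    isolated: "4-31,36-63"             # placeholder: cores isolated for workloads
  nodeSelector:
    node-role.kubernetes.io/worker: ""
  workloadHints:
    realTime: false
    highPowerConsumption: false
    perPodPowerManagement: true        # enables per-pod power management annotations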
Engineering considerations Latency: To ensure that latency-sensitive workloads meet their requirements, you will need either a high-power configuration or a per-pod power management configuration. Per-pod power management is only available for Guaranteed QoS Pods with dedicated pinned CPUs. Additional resources Performance Profile Configuring power saving for nodes Configuring power saving for nodes that run colocated high and low priority workloads 3.3.3.10. Storage Cloud native storage services can be provided by multiple solutions including OpenShift Data Foundation from Red Hat or third parties. 3.3.3.10.1. OpenShift Data Foundation New in this release No reference design updates in this release Description Red Hat OpenShift Data Foundation is a software-defined storage service for containers. For Telco core clusters, storage support is provided by OpenShift Data Foundation storage services running externally to the application workload cluster. Limits and requirements In an IPv4/IPv6 dual-stack networking environment, OpenShift Data Foundation uses IPv4 addressing. For more information, see Support OpenShift dual stack with OpenShift Data Foundation using IPv4 . Engineering considerations OpenShift Data Foundation network traffic should be isolated from other traffic on a dedicated network, for example, by using VLAN isolation. Other storage solutions can be used to provide persistent storage for core clusters. Note The configuration and integration of these solutions is outside the scope of the telco core RDS. Integration of the storage solution into the core cluster must include correct sizing and performance analysis to ensure the storage meets overall performance and resource utilization requirements. Additional resources Red Hat OpenShift Data Foundation 3.3.3.11. Telco core deployment components The following sections describe the various OpenShift Container Platform components and configurations that you use to configure the hub cluster with Red Hat Advanced Cluster Management (RHACM). 3.3.3.11.1. Red Hat Advanced Cluster Management New in this release No reference design updates in this release Description Red Hat Advanced Cluster Management (RHACM) provides Multi Cluster Engine (MCE) installation and ongoing lifecycle management functionality for deployed clusters. You manage cluster configuration and upgrades declaratively by applying Policy custom resources (CRs) to clusters during maintenance windows. You apply policies with the RHACM policy controller as managed by Topology Aware Lifecycle Manager (TALM). When installing managed clusters, RHACM applies labels and initial ignition configuration to individual nodes in support of custom disk partitioning, allocation of roles, and allocation to machine config pools. You define these configurations with SiteConfig or ClusterInstance CRs. Limits and requirements Size your cluster according to the limits specified in Sizing your cluster . RHACM scaling limits are described in Performance and scalability . Engineering considerations Use RHACM policy hub-side templating to better scale cluster configuration. You can significantly reduce the number of policies by using a single group policy or small number of general group policies where the group and per-cluster values are substituted into templates. Cluster specific configuration: managed clusters typically have some number of configuration values that are specific to the individual cluster. 
These configurations should be managed using RHACM policy hub-side templating with values pulled from ConfigMap CRs based on the cluster name. Additional resources Using GitOps ZTP to provision clusters at the network far edge Red Hat Advanced Cluster Management for Kubernetes 3.3.3.11.2. Topology Aware Lifecycle Manager New in this release No reference design updates in this release Description Topology Aware Lifecycle Manager (TALM) is an Operator that runs only on the hub cluster for managing how changes including cluster and Operator upgrades, configuration, and so on are rolled out to the network. Limits and requirements TALM supports concurrent cluster deployment in batches of 400. Precaching and backup features are for single-node OpenShift clusters only. Engineering considerations Only policies that have the ran.openshift.io/ztp-deploy-wave annotation are automatically applied by TALM during initial cluster installation. You can create further ClusterGroupUpgrade CRs to control the policies that TALM remediates. Additional resources Updating managed clusters with the Topology Aware Lifecycle Manager 3.3.3.11.3. GitOps and GitOps ZTP plugins New in this release No reference design updates in this release Description GitOps and GitOps ZTP plugins provide a GitOps-based infrastructure for managing cluster deployment and configuration. Cluster definitions and configurations are maintained as a declarative state in Git. You can apply ClusterInstance CRs to the hub cluster where the SiteConfig Operator renders them as installation CRs. Alternatively, you can use the GitOps ZTP plugin to generate installation CRs directly from SiteConfig CRs. The GitOps ZTP plugin supports automatic wrapping of configuration CRs in policies based on PolicyGenTemplate CRs. Note You can deploy and manage multiple versions of OpenShift Container Platform on managed clusters using the baseline reference configuration CRs. You can use custom CRs alongside the baseline CRs. To maintain multiple per-version policies simultaneously, use Git to manage the versions of the source CRs and policy CRs ( PolicyGenTemplate or PolicyGenerator ). Keep reference CRs and custom CRs under different directories. Doing this allows you to patch and update the reference CRs by simple replacement of all directory contents without touching the custom CRs. Limits 300 SiteConfig CRs per ArgoCD application. You can use multiple applications to achieve the maximum number of clusters supported by a single hub cluster. Content in the /source-crs folder in Git overrides content provided in the GitOps ZTP plugin container. Git takes precedence in the search path. Add the /source-crs folder in the same directory as the kustomization.yaml file, which includes the PolicyGenTemplate as a generator. Note Alternative locations for the /source-crs directory are not supported in this context. The extraManifestPath field of the SiteConfig CR is deprecated from OpenShift Container Platform 4.15 and later. Use the new extraManifests.searchPaths field instead. Engineering considerations For multi-node cluster upgrades, you can pause MachineConfigPool ( MCP ) CRs during maintenance windows by setting the paused field to true . You can increase the number of nodes per MCP updated simultaneously by configuring the maxUnavailable setting in the MCP CR. The MaxUnavailable field defines the percentage of nodes in the pool that can be simultaneously unavailable during a MachineConfig update. Set maxUnavailable to the maximum tolerable value. 
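For illustration, a hedged sketch of a custom MachineConfigPool showing the paused and maxUnavailable fields discussed here; the pool name, selectors, and the 25% value are placeholders rather than reference values.
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
  name: worker-example                       # placeholder pool name
spec:
  paused: true                               # holds MachineConfig rollouts until the maintenance window
  maxUnavailable: "25%"                      # number or percentage of nodes updated in parallel
  machineConfigSelector:
    matchExpressions:
    - key: machineconfiguration.openshift.io/role
      operator: In
      values: ["worker", "worker-example"]
  nodeSelector:
    matchLabels:
      node-role.kubernetes.io/worker-example: ""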
This reduces the number of reboots in a cluster during upgrades which results in shorter upgrade times. When you finally unpause the MCP CR, all the changed configurations are applied with a single reboot. During cluster installation, you can pause custom MCP CRs by setting the paused field to true and setting maxUnavailable to 100% to improve installation times. To avoid confusion or unintentional overwriting of files when updating content, use unique and distinguishable names for user-provided CRs in the /source-crs folder and extra manifests in Git. The SiteConfig CR allows multiple extra-manifest paths. When files with the same name are found in multiple directory paths, the last file found takes precedence. This allows you to put the full set of version-specific Day 0 manifests (extra-manifests) in Git and reference them from the SiteConfig CR. With this feature, you can deploy multiple OpenShift Container Platform versions to managed clusters simultaneously. Additional resources Preparing the GitOps ZTP site configuration repository for version independence Adding custom content to the GitOps ZTP pipeline 3.3.3.11.4. Agent-based installer New in this release No reference design updates in this release Description You can install telco core clusters with the Agent-based installer (ABI) on bare-metal servers without requiring additional servers or virtual machines for managing the installation. ABI supports installations in disconnected environments. With ABI, you install clusters by using declarative custom resources (CRs). Note Agent-based installer is an optional component. The recommended installation method is by using Red Hat Advanced Cluster Management or multicluster engine for Kubernetes Operator. Limits and requirements You need to have a disconnected mirror registry with all required content mirrored to do Agent-based installs in a disconnected environment. Engineering considerations Networking configuration should be applied as NMState custom resources (CRs) during cluster installation. Additional resources Installing an OpenShift Container Platform cluster with the Agent-based Installer 3.3.3.12. Monitoring New in this release No reference design updates in this release Description The Cluster Monitoring Operator (CMO) is included by default in OpenShift Container Platform and provides monitoring (metrics, dashboards, and alerting) for the platform components and optionally user projects as well. Note The default handling of pod CPU and memory metrics is based on upstream Kubernetes cAdvisor and makes a tradeoff that prefers handling of stale data over metric accuracy. This leads to spiky data that will create false triggers of alerts over user-specified thresholds. OpenShift supports an opt-in dedicated service monitor feature creating an additional set of pod CPU and memory metrics that do not suffer from the spiky behavior. For additional information, see Dedicated Service Monitors - Questions and Answers . Limits and requirements Monitoring configuration must enable the dedicated service monitor feature for accurate representation of pod metrics Engineering considerations You configure the Prometheus retention period. The value used is a tradeoff between operational requirements for maintaining historical data on the cluster against CPU and storage resources. Longer retention periods increase the need for storage and require additional CPU to manage the indexing of data. Additional resources About OpenShift Container Platform monitoring 3.3.3.13. 
Scheduling New in this release No reference design updates in this release Description The scheduler is a cluster-wide component responsible for selecting the right node for a given workload. It is a core part of the platform and does not require any specific configuration in the common deployment scenarios. However, there are few specific use cases described in the following section. NUMA-aware scheduling can be enabled through the NUMA Resources Operator. For more information, see "Scheduling NUMA-aware workloads". Limits and requirements The default scheduler does not understand the NUMA locality of workloads. It only knows about the sum of all free resources on a worker node. This might cause workloads to be rejected when scheduled to a node with the topology manager policy set to single-numa-node or restricted . For example, consider a pod requesting 6 CPUs and being scheduled to an empty node that has 4 CPUs per NUMA node. The total allocatable capacity of the node is 8 CPUs and the scheduler will place the pod there. The node local admission will fail, however, as there are only 4 CPUs available in each of the NUMA nodes. All clusters with multi-NUMA nodes are required to use the NUMA Resources Operator. Use the machineConfigPoolSelector field in the KubeletConfig CR to select all nodes where NUMA aligned scheduling is needed. All machine config pools must have consistent hardware configuration for example all nodes are expected to have the same NUMA zone count. Engineering considerations Pods might require annotations for correct scheduling and isolation. For more information on annotations, see "CPU partitioning and performance tuning". You can configure SR-IOV virtual function NUMA affinity to be ignored during scheduling by using the excludeTopology field in SriovNetworkNodePolicy CR. Additional resources Controlling pod placement using the scheduler Scheduling NUMA-aware workloads CPU partitioning and performance tuning 3.3.3.14. Node configuration New in this release Container mount namespace encapsulation and kdump are now available in the telco core RDS. Description Container mount namespace encapsulation creates a container mount namespace that reduces system mount scanning and is visible to kubelet and CRI-O. kdump is an optional configuration that is enabled by default that captures debug information when a kernel panic occurs. The reference CRs which enable kdump include an increased memory reservation based on the set of drivers and kernel modules included in the reference configuration. Limits and requirements Use of kdump and container mount namespace encapsulation is made available through additional kernel modules. You should analyze these modules to determine impact on CPU load, system performance, and ability to meet required KPIs. Engineering considerations Install the following kernel modules with MachineConfig CRs. These modules provide extended kernel functionality to cloud-native functions (CNFs). sctp ip_gre ip6_tables ip6t_REJECT ip6table_filter ip6table_mangle iptable_filter iptable_mangle iptable_nat xt_multiport xt_owner xt_REDIRECT xt_statistic xt_TCPMSS Additional resources Automatic kernel crash dumps with kdump Optimizing CPU usage with mount namespace encapsulation 3.3.3.15. Host firmware and boot loader configuration New in this release Secure boot is now recommended for cluster hosts configured with the telco core reference design. Engineering considerations Enabling secure boot is the recommended configuration. 
3.3.3.16. Disconnected environment New in this release No reference design updates in this release Description Telco core clusters are expected to be installed in networks without direct access to the internet. All container images needed to install, configure, and operate the cluster must be available in a disconnected registry. This includes OpenShift Container Platform images, Day 2 Operator Lifecycle Manager (OLM) Operator images, and application workload images. Limits and requirements A unique name is required for all custom CatalogSources. Do not reuse the default catalog names. A valid time source must be configured as part of cluster installation. Additional resources About cluster updates in a disconnected environment 3.3.3.17. Security New in this release The secure boot host firmware setting is now recommended for telco core clusters. For more information, see "Host firmware and boot loader configuration". Description You should harden clusters against multiple attack vectors. In OpenShift Container Platform, there is no single component or feature responsible for securing a cluster. Use the following security-oriented features and configurations to secure your clusters: SecurityContextConstraints (SCC): All workload pods should be run with the restricted-v2 or restricted SCC. Seccomp: All pods should be run with the RuntimeDefault (or stronger) seccomp profile. Rootless DPDK pods: Many user-plane networking (DPDK) CNFs require pods to run with root privileges. With this feature, a conformant DPDK pod can be run without requiring root privileges. Rootless DPDK pods create a tap device in a rootless pod that injects traffic from a DPDK application to the kernel. Storage: The storage network should be isolated and non-routable to other cluster networks. See the "Storage" section for additional details. Limits and requirements Rootless DPDK pods require the following additional configuration steps: Configure the TAP plugin with the container_t SELinux context. Enable the container_use_devices SELinux boolean on the hosts. Engineering considerations For rootless DPDK pod support, the SELinux boolean container_use_devices must be enabled on the host for the TAP device to be created. This introduces a security risk that is acceptable for short to mid-term use. Other solutions will be explored.
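The following is a minimal sketch of a workload pod specification that satisfies the restricted-v2 SCC and RuntimeDefault seccomp expectations described above; the pod name, namespace, and container image are placeholders and are not part of the reference configuration.

apiVersion: v1
kind: Pod
metadata:
  name: example-cnf-workload   # placeholder name
  namespace: example-cnf       # placeholder namespace
spec:
  securityContext:
    runAsNonRoot: true
    seccompProfile:
      type: RuntimeDefault
  containers:
  - name: app
    image: registry.example.com/example-cnf:latest   # placeholder image
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
        - ALL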
Additional resources Managing security context constraints Host firmware and boot loader configuration 3.3.3.18. Scalability New in this release No reference design updates in this release Limits and requirements The cluster should scale to at least 120 nodes. Additional resources Telco core RDS engineering considerations 3.3.4. Telco core 4.17 reference configuration CRs Use the following custom resources (CRs) to configure and deploy OpenShift Container Platform clusters with the telco core profile. Use the CRs to form the common baseline used in all the specific use models unless otherwise indicated. 3.3.4.1. Extracting the telco core reference design configuration CRs You can extract the complete set of custom resources (CRs) for the telco core profile from the telco-core-rds-rhel9 container image. The container image has both the required CRs and the optional CRs for the telco core profile. Prerequisites You have installed podman. Procedure Extract the content from the telco-core-rds-rhel9 container image by running the following commands:
USD mkdir -p ./out
USD podman run -it registry.redhat.io/openshift4/openshift-telco-core-rds-rhel9:v4.17 | base64 -d | tar xv -C out
Verification The out directory has the following folder structure. You can view the telco core CRs in the out/telco-core-rds/ directory. Example output
out/
└── telco-core-rds
    ├── configuration
    │   └── reference-crs
    │       ├── optional
    │       │   ├── logging
    │       │   ├── networking
    │       │   │   └── multus
    │       │   │       └── tap_cni
    │       │   ├── other
    │       │   └── tuning
    │       └── required
    │           ├── networking
    │           │   ├── metallb
    │           │   ├── multinetworkpolicy
    │           │   └── sriov
    │           ├── other
    │           ├── performance
    │           ├── scheduling
    │           └── storage
    │               └── odf-external
    └── install
3.3.4.2. Networking reference CRs Table 3.9. Networking CRs
Component Reference CR Optional New in this release
Baseline Network.yaml Yes No
Baseline networkAttachmentDefinition.yaml Yes No
Load balancer addr-pool.yaml No No
Load balancer bfd-profile.yaml No No
Load balancer bgp-advr.yaml No No
Load balancer bgp-peer.yaml No No
Load balancer community.yaml No No
Load balancer metallb.yaml No No
Load balancer metallbNS.yaml No No
Load balancer metallbOperGroup.yaml No No
Load balancer metallbSubscription.yaml No No
Multus - Tap CNI for rootless DPDK pods mc_rootless_pods_selinux.yaml No No
NMState Operator NMState.yaml No No
NMState Operator NMStateNS.yaml No No
NMState Operator NMStateOperGroup.yaml No No
NMState Operator NMStateSubscription.yaml No No
SR-IOV Network Operator sriovNetwork.yaml No No
SR-IOV Network Operator sriovNetworkNodePolicy.yaml No No
SR-IOV Network Operator SriovOperatorConfig.yaml No No
SR-IOV Network Operator SriovSubscription.yaml No No
SR-IOV Network Operator SriovSubscriptionNS.yaml No No
SR-IOV Network Operator SriovSubscriptionOperGroup.yaml No No
3.3.4.3. Node configuration reference CRs Table 3.10. Node configuration CRs
Component Reference CR Optional New in this release
Additional kernel modules control-plane-load-kernel-modules.yaml Yes No
Additional kernel modules sctp_module_mc.yaml Yes No
Additional kernel modules worker-load-kernel-modules.yaml Yes No
Container mount namespace hiding mount_namespace_config_master.yaml No Yes
Container mount namespace hiding mount_namespace_config_worker.yaml No Yes
Kdump enable kdump-master.yaml No Yes
Kdump enable kdump-worker.yaml No Yes
3.3.4.4. Other reference CRs Table 3.11. Other CRs
Component Reference CR Optional New in this release
Cluster logging ClusterLogForwarder.yaml Yes No
Cluster logging ClusterLogNS.yaml Yes No
Cluster logging ClusterLogOperGroup.yaml Yes No
Cluster logging ClusterLogServiceAccount.yaml Yes Yes
Cluster logging ClusterLogServiceAccountAuditBinding.yaml Yes Yes
Cluster logging ClusterLogServiceAccountInfrastructureBinding.yaml Yes Yes
Cluster logging ClusterLogSubscription.yaml Yes No
Disconnected configuration catalog-source.yaml No No
Disconnected configuration icsp.yaml No No
Disconnected configuration operator-hub.yaml No No
Monitoring and observability monitoring-config-cm.yaml Yes No
Power management PerformanceProfile.yaml No No
3.3.4.5. Resource tuning reference CRs Table 3.12. Resource tuning CRs
Component Reference CR Optional New in this release
System reserved capacity control-plane-system-reserved.yaml Yes No
3.3.4.6. Scheduling reference CRs Table 3.13.
Scheduling CRs Component Reference CR Optional New in this release NUMA-aware scheduler nrop.yaml No No NUMA-aware scheduler NROPSubscription.yaml No No NUMA-aware scheduler NROPSubscriptionNS.yaml No No NUMA-aware scheduler NROPSubscriptionOperGroup.yaml No No NUMA-aware scheduler sched.yaml No No NUMA-aware scheduler Scheduler.yaml No No 3.3.4.7. Storage reference CRs Table 3.14. Storage CRs Component Reference CR Optional New in this release External ODF configuration 01-rook-ceph-external-cluster-details.secret.yaml No No External ODF configuration 02-ocs-external-storagecluster.yaml No No External ODF configuration odfNS.yaml No No External ODF configuration odfOperGroup.yaml No No External ODF configuration odfSubscription.yaml No No 3.3.4.8. YAML reference 3.3.4.8.1. Networking reference YAML Network.yaml # required # count: 1 apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: gatewayConfig: routingViaHost: true # additional networks are optional and may alternatively be specified using NetworkAttachmentDefinition CRs additionalNetworks: [USDadditionalNetworks] # eg #- name: add-net-1 # namespace: app-ns-1 # rawCNIConfig: '{ "cniVersion": "0.3.1", "name": "add-net-1", "plugins": [{"type": "macvlan", "master": "bond1", "ipam": {}}] }' # type: Raw #- name: add-net-2 # namespace: app-ns-1 # rawCNIConfig: '{ "cniVersion": "0.4.0", "name": "add-net-2", "plugins": [ {"type": "macvlan", "master": "bond1", "mode": "private" },{ "type": "tuning", "name": "tuning-arp" }] }' # type: Raw # Enable to use MultiNetworkPolicy CRs useMultiNetworkPolicy: true networkAttachmentDefinition.yaml # optional # copies: 0-N apiVersion: "k8s.cni.cncf.io/v1" kind: NetworkAttachmentDefinition metadata: name: USDname namespace: USDns spec: nodeSelector: kubernetes.io/hostname: USDnodeName config: USDconfig #eg #config: '{ # "cniVersion": "0.3.1", # "name": "external-169", # "type": "vlan", # "master": "ens8f0", # "mode": "bridge", # "vlanid": 169, # "ipam": { # "type": "static", # } #}' addr-pool.yaml # required # count: 1-N apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: name: USDname # eg addresspool3 namespace: metallb-system spec: ############## # Expected variation in this configuration addresses: [USDpools] #- 3.3.3.0/24 autoAssign: true ############## bfd-profile.yaml # required # count: 1-N apiVersion: metallb.io/v1beta1 kind: BFDProfile metadata: name: USDname # e.g. bfdprofile namespace: metallb-system spec: ################ # These values may vary. 
Recommended values are included as default receiveInterval: 150 # default 300ms transmitInterval: 150 # default 300ms #echoInterval: 300 # default 50ms detectMultiplier: 10 # default 3 echoMode: true passiveMode: true minimumTtl: 5 # default 254 # ################ bgp-advr.yaml # required # count: 1-N apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: USDname # eg bgpadvertisement-1 namespace: metallb-system spec: ipAddressPools: [USDpool] # eg: # - addresspool3 peers: [USDpeers] # eg: # - peer-one # communities: [USDcommunities] # Note correlation with address pool, or Community # eg: # - bgpcommunity # - 65535:65282 aggregationLength: 32 aggregationLengthV6: 128 localPref: 100 bgp-peer.yaml # required # count: 1-N apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: name: USDname namespace: metallb-system spec: peerAddress: USDip # eg 192.168.1.2 peerASN: USDpeerasn # eg 64501 myASN: USDmyasn # eg 64500 routerID: USDid # eg 10.10.10.10 bfdProfile: USDbfdprofile # e.g. bfdprofile passwordSecret: {} community.yaml --- apiVersion: metallb.io/v1beta1 kind: Community metadata: name: USDname # e.g. bgpcommunity namespace: metallb-system spec: communities: [USDcomm] metallb.yaml # required # count: 1 apiVersion: metallb.io/v1beta1 kind: MetalLB metadata: name: metallb namespace: metallb-system spec: {} #nodeSelector: # node-role.kubernetes.io/worker: "" metallbNS.yaml # required: yes # count: 1 --- apiVersion: v1 kind: Namespace metadata: name: metallb-system annotations: workload.openshift.io/allowed: management labels: openshift.io/cluster-monitoring: "true" metallbOperGroup.yaml # required: yes # count: 1 --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: metallb-operator namespace: metallb-system metallbSubscription.yaml # required: yes # count: 1 --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: metallb-operator-sub namespace: metallb-system spec: channel: stable name: metallb-operator source: redhat-operators-disconnected sourceNamespace: openshift-marketplace installPlanApproval: Automatic status: state: AtLatestKnown mc_rootless_pods_selinux.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 99-worker-setsebool spec: config: ignition: version: 3.2.0 systemd: units: - contents: | [Unit] Description=Set SELinux boolean for tap cni plugin Before=kubelet.service [Service] Type=oneshot ExecStart=/sbin/setsebool container_use_devices=on RemainAfterExit=true [Install] WantedBy=multi-user.target graphical.target enabled: true name: setsebool.service NMState.yaml apiVersion: nmstate.io/v1 kind: NMState metadata: name: nmstate spec: {} NMStateNS.yaml apiVersion: v1 kind: Namespace metadata: name: openshift-nmstate annotations: workload.openshift.io/allowed: management NMStateOperGroup.yaml apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-nmstate namespace: openshift-nmstate spec: targetNamespaces: - openshift-nmstate NMStateSubscription.yaml apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: kubernetes-nmstate-operator namespace: openshift-nmstate spec: channel: "stable" name: kubernetes-nmstate-operator source: redhat-operators-disconnected sourceNamespace: openshift-marketplace installPlanApproval: Automatic status: state: AtLatestKnown sriovNetwork.yaml # optional (though expected for all) # count: 0-N apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork 
metadata: name: USDname # eg sriov-network-abcd namespace: openshift-sriov-network-operator spec: capabilities: "USDcapabilities" # eg '{"mac": true, "ips": true}' ipam: "USDipam" # eg '{ "type": "host-local", "subnet": "10.3.38.0/24" }' networkNamespace: USDnns # eg cni-test resourceName: USDresource # eg resourceTest sriovNetworkNodePolicy.yaml # optional (though expected in all deployments) # count: 0-N apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: USDname namespace: openshift-sriov-network-operator spec: {} # USDspec # eg #deviceType: netdevice #nicSelector: # deviceID: "1593" # pfNames: # - ens8f0np0#0-9 # rootDevices: # - 0000:d8:00.0 # vendor: "8086" #nodeSelector: # kubernetes.io/hostname: host.sample.lab #numVfs: 20 #priority: 99 #excludeTopology: true #resourceName: resourceNameABCD SriovOperatorConfig.yaml # required # count: 1 --- apiVersion: sriovnetwork.openshift.io/v1 kind: SriovOperatorConfig metadata: name: default namespace: openshift-sriov-network-operator spec: configDaemonNodeSelector: node-role.kubernetes.io/worker: "" enableInjector: true enableOperatorWebhook: true disableDrain: false logLevel: 2 SriovSubscription.yaml # required: yes # count: 1 apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: sriov-network-operator-subscription namespace: openshift-sriov-network-operator spec: channel: "stable" name: sriov-network-operator source: redhat-operators-disconnected sourceNamespace: openshift-marketplace installPlanApproval: Automatic status: state: AtLatestKnown SriovSubscriptionNS.yaml # required: yes # count: 1 apiVersion: v1 kind: Namespace metadata: name: openshift-sriov-network-operator annotations: workload.openshift.io/allowed: management SriovSubscriptionOperGroup.yaml # required: yes # count: 1 apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: sriov-network-operators namespace: openshift-sriov-network-operator spec: targetNamespaces: - openshift-sriov-network-operator 3.3.4.8.2. 
Node configuration reference YAML control-plane-load-kernel-modules.yaml # optional # count: 1 apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 40-load-kernel-modules-control-plane spec: config: # Release info found in https://github.com/coreos/butane/releases ignition: version: 3.2.0 storage: files: - contents: source: data:, mode: 420 overwrite: true path: /etc/modprobe.d/kernel-blacklist.conf - contents: source: data:text/plain;charset=utf-8;base64,aXBfZ3JlCmlwNl90YWJsZXMKaXA2dF9SRUpFQ1QKaXA2dGFibGVfZmlsdGVyCmlwNnRhYmxlX21hbmdsZQppcHRhYmxlX2ZpbHRlcgppcHRhYmxlX21hbmdsZQppcHRhYmxlX25hdAp4dF9tdWx0aXBvcnQKeHRfb3duZXIKeHRfUkVESVJFQ1QKeHRfc3RhdGlzdGljCnh0X1RDUE1TUwo= mode: 420 overwrite: true path: /etc/modules-load.d/kernel-load.conf sctp_module_mc.yaml # optional # count: 1 apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: load-sctp-module spec: config: ignition: version: 2.2.0 storage: files: - contents: source: data:, verification: {} filesystem: root mode: 420 path: /etc/modprobe.d/sctp-blacklist.conf - contents: source: data:text/plain;charset=utf-8;base64,c2N0cA== filesystem: root mode: 420 path: /etc/modules-load.d/sctp-load.conf worker-load-kernel-modules.yaml # optional # count: 1 apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 40-load-kernel-modules-worker spec: config: # Release info found in https://github.com/coreos/butane/releases ignition: version: 3.2.0 storage: files: - contents: source: data:, mode: 420 overwrite: true path: /etc/modprobe.d/kernel-blacklist.conf - contents: source: data:text/plain;charset=utf-8;base64,aXBfZ3JlCmlwNl90YWJsZXMKaXA2dF9SRUpFQ1QKaXA2dGFibGVfZmlsdGVyCmlwNnRhYmxlX21hbmdsZQppcHRhYmxlX2ZpbHRlcgppcHRhYmxlX21hbmdsZQppcHRhYmxlX25hdAp4dF9tdWx0aXBvcnQKeHRfb3duZXIKeHRfUkVESVJFQ1QKeHRfc3RhdGlzdGljCnh0X1RDUE1TUwo= mode: 420 overwrite: true path: /etc/modules-load.d/kernel-load.conf mount_namespace_config_master.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 99-kubens-master spec: config: ignition: version: 3.2.0 systemd: units: - enabled: true name: kubens.service mount_namespace_config_worker.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 99-kubens-worker spec: config: ignition: version: 3.2.0 systemd: units: - enabled: true name: kubens.service kdump-master.yaml # Automatically generated by extra-manifests-builder # Do not make changes directly. apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 06-kdump-enable-master spec: config: ignition: version: 3.2.0 systemd: units: - enabled: true name: kdump.service kernelArguments: - crashkernel=512M kdump-worker.yaml # Automatically generated by extra-manifests-builder # Do not make changes directly. apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 06-kdump-enable-worker spec: config: ignition: version: 3.2.0 systemd: units: - enabled: true name: kdump.service kernelArguments: - crashkernel=512M 3.3.4.8.3. 
Other reference YAML ClusterLogForwarder.yaml apiVersion: "observability.openshift.io/v1" kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: # outputs: USDoutputs # pipelines: USDpipelines serviceAccount: name: collector #apiVersion: "observability.openshift.io/v1" #kind: ClusterLogForwarder #metadata: # name: instance # namespace: openshift-logging # spec: # outputs: # - type: "kafka" # name: kafka-open # # below url is an example # kafka: # url: tcp://10.11.12.13:9092/test # filters: # - name: test-labels # type: openshiftLabels # openshiftLabels: # label1: test1 # label2: test2 # label3: test3 # label4: test4 # pipelines: # - name: all-to-default # inputRefs: # - audit # - infrastructure # filterRefs: # - test-labels # outputRefs: # - kafka-open # serviceAccount: # name: collector ClusterLogNS.yaml --- apiVersion: v1 kind: Namespace metadata: name: openshift-logging annotations: workload.openshift.io/allowed: management ClusterLogOperGroup.yaml --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: cluster-logging namespace: openshift-logging spec: targetNamespaces: - openshift-logging ClusterLogServiceAccount.yaml --- apiVersion: v1 kind: ServiceAccount metadata: name: collector namespace: openshift-logging ClusterLogServiceAccountAuditBinding.yaml --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: logcollector-audit-logs-binding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: collect-audit-logs subjects: - kind: ServiceAccount name: collector namespace: openshift-logging ClusterLogServiceAccountInfrastructureBinding.yaml --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: logcollector-infrastructure-logs-binding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: collect-infrastructure-logs subjects: - kind: ServiceAccount name: collector namespace: openshift-logging ClusterLogSubscription.yaml apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: cluster-logging namespace: openshift-logging spec: channel: "stable-6.0" name: cluster-logging source: redhat-operators-disconnected sourceNamespace: openshift-marketplace installPlanApproval: Automatic status: state: AtLatestKnown catalog-source.yaml # required # count: 1..N apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: redhat-operators-disconnected namespace: openshift-marketplace spec: displayName: Red Hat Disconnected Operators Catalog image: USDimageUrl publisher: Red Hat sourceType: grpc # updateStrategy: # registryPoll: # interval: 1h status: connectionState: lastObservedState: READY icsp.yaml # required # count: 1 apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: name: disconnected-internal-icsp spec: repositoryDigestMirrors: [] # - USDmirrors operator-hub.yaml # required # count: 1 apiVersion: config.openshift.io/v1 kind: OperatorHub metadata: name: cluster spec: disableAllDefaultSources: true monitoring-config-cm.yaml # optional # count: 1 --- apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: retention: 15d volumeClaimTemplate: spec: storageClassName: ocs-external-storagecluster-ceph-rbd resources: requests: storage: 100Gi alertmanagerMain: volumeClaimTemplate: spec: storageClassName: ocs-external-storagecluster-ceph-rbd resources: requests: storage: 20Gi PerformanceProfile.yaml # required # 
count: 1 apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: USDname annotations: # Some pods want the kernel stack to ignore IPv6 router Advertisement. kubeletconfig.experimental: | {"allowedUnsafeSysctls":["net.ipv6.conf.all.accept_ra"]} spec: cpu: # node0 CPUs: 0-17,36-53 # node1 CPUs: 18-34,54-71 # siblings: (0,36), (1,37)... # we want to reserve the first Core of each NUMA socket # # no CPU left behind! all-cpus == isolated + reserved isolated: USDisolated # eg 1-17,19-35,37-53,55-71 reserved: USDreserved # eg 0,18,36,54 # Guaranteed QoS pods will disable IRQ balancing for cores allocated to the pod. # default value of globallyDisableIrqLoadBalancing is false globallyDisableIrqLoadBalancing: false hugepages: defaultHugepagesSize: 1G pages: # 32GB per numa node - count: USDcount # eg 64 size: 1G #machineConfigPoolSelector: {} # pools.operator.machineconfiguration.openshift.io/worker: '' nodeSelector: {} #node-role.kubernetes.io/worker: "" workloadHints: realTime: false highPowerConsumption: false perPodPowerManagement: true realTimeKernel: enabled: false numa: # All guaranteed QoS containers get resources from a single NUMA node topologyPolicy: "single-numa-node" net: userLevelNetworking: false 3.3.4.8.4. Resource tuning reference YAML control-plane-system-reserved.yaml # optional # count: 1 apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: autosizing-master spec: autoSizingReserved: true machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/master: "" 3.3.4.8.5. Scheduling reference YAML nrop.yaml # Optional # count: 1 apiVersion: nodetopology.openshift.io/v1 kind: NUMAResourcesOperator metadata: name: numaresourcesoperator spec: nodeGroups: [] #- config: # # Periodic is the default setting # infoRefreshMode: Periodic # machineConfigPoolSelector: # matchLabels: # # This label must match the pool(s) you want to run NUMA-aligned workloads # pools.operator.machineconfiguration.openshift.io/worker: "" NROPSubscription.yaml # required # count: 1 apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: numaresources-operator namespace: openshift-numaresources spec: channel: "4.17" name: numaresources-operator source: redhat-operators-disconnected sourceNamespace: openshift-marketplace status: state: AtLatestKnown NROPSubscriptionNS.yaml # required: yes # count: 1 apiVersion: v1 kind: Namespace metadata: name: openshift-numaresources annotations: workload.openshift.io/allowed: management NROPSubscriptionOperGroup.yaml # required: yes # count: 1 apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: numaresources-operator namespace: openshift-numaresources spec: targetNamespaces: - openshift-numaresources sched.yaml # Optional # count: 1 apiVersion: nodetopology.openshift.io/v1 kind: NUMAResourcesScheduler metadata: name: numaresourcesscheduler spec: #cacheResyncPeriod: "0" # Image spec should be the latest for the release imageSpec: "registry.redhat.io/openshift4/noderesourcetopology-scheduler-rhel9:v4.17.0" #logLevel: "Trace" schedulerName: topo-aware-scheduler Scheduler.yaml apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster spec: # non-schedulable control plane is the default. This ensures # compliance. mastersSchedulable: false policy: name: "" 3.3.4.8.6. 
Storage reference YAML 01-rook-ceph-external-cluster-details.secret.yaml # required # count: 1 --- apiVersion: v1 kind: Secret metadata: name: rook-ceph-external-cluster-details namespace: openshift-storage type: Opaque data: # encoded content has been made generic external_cluster_details: eyJuYW1lIjoicm9vay1jZXBoLW1vbi1lbmRwb2ludHMiLCJraW5kIjoiQ29uZmlnTWFwIiwiZGF0YSI6eyJkYXRhIjoiY2VwaHVzYTE9MS4yLjMuNDo2Nzg5IiwibWF4TW9uSWQiOiIwIiwibWFwcGluZyI6Int9In19LHsibmFtZSI6InJvb2stY2VwaC1tb24iLCJraW5kIjoiU2VjcmV0IiwiZGF0YSI6eyJhZG1pbi1zZWNyZXQiOiJhZG1pbi1zZWNyZXQiLCJmc2lkIjoiMTExMTExMTEtMTExMS0xMTExLTExMTEtMTExMTExMTExMTExIiwibW9uLXNlY3JldCI6Im1vbi1zZWNyZXQifX0seyJuYW1lIjoicm9vay1jZXBoLW9wZXJhdG9yLWNyZWRzIiwia2luZCI6IlNlY3JldCIsImRhdGEiOnsidXNlcklEIjoiY2xpZW50LmhlYWx0aGNoZWNrZXIiLCJ1c2VyS2V5IjoiYzJWamNtVjAifX0seyJuYW1lIjoibW9uaXRvcmluZy1lbmRwb2ludCIsImtpbmQiOiJDZXBoQ2x1c3RlciIsImRhdGEiOnsiTW9uaXRvcmluZ0VuZHBvaW50IjoiMS4yLjMuNCwxLjIuMy4zLDEuMi4zLjIiLCJNb25pdG9yaW5nUG9ydCI6IjkyODMifX0seyJuYW1lIjoiY2VwaC1yYmQiLCJraW5kIjoiU3RvcmFnZUNsYXNzIiwiZGF0YSI6eyJwb29sIjoib2RmX3Bvb2wifX0seyJuYW1lIjoicm9vay1jc2ktcmJkLW5vZGUiLCJraW5kIjoiU2VjcmV0IiwiZGF0YSI6eyJ1c2VySUQiOiJjc2ktcmJkLW5vZGUiLCJ1c2VyS2V5IjoiIn19LHsibmFtZSI6InJvb2stY3NpLXJiZC1wcm92aXNpb25lciIsImtpbmQiOiJTZWNyZXQiLCJkYXRhIjp7InVzZXJJRCI6ImNzaS1yYmQtcHJvdmlzaW9uZXIiLCJ1c2VyS2V5IjoiYzJWamNtVjAifX0seyJuYW1lIjoicm9vay1jc2ktY2VwaGZzLXByb3Zpc2lvbmVyIiwia2luZCI6IlNlY3JldCIsImRhdGEiOnsiYWRtaW5JRCI6ImNzaS1jZXBoZnMtcHJvdmlzaW9uZXIiLCJhZG1pbktleSI6IiJ9fSx7Im5hbWUiOiJyb29rLWNzaS1jZXBoZnMtbm9kZSIsImtpbmQiOiJTZWNyZXQiLCJkYXRhIjp7ImFkbWluSUQiOiJjc2ktY2VwaGZzLW5vZGUiLCJhZG1pbktleSI6ImMyVmpjbVYwIn19LHsibmFtZSI6ImNlcGhmcyIsImtpbmQiOiJTdG9yYWdlQ2xhc3MiLCJkYXRhIjp7ImZzTmFtZSI6ImNlcGhmcyIsInBvb2wiOiJtYW5pbGFfZGF0YSJ9fQ== 02-ocs-external-storagecluster.yaml # required # count: 1 --- apiVersion: ocs.openshift.io/v1 kind: StorageCluster metadata: name: ocs-external-storagecluster namespace: openshift-storage spec: externalStorage: enable: true labelSelector: {} status: phase: Ready odfNS.yaml # required: yes # count: 1 --- apiVersion: v1 kind: Namespace metadata: name: openshift-storage annotations: workload.openshift.io/allowed: management labels: openshift.io/cluster-monitoring: "true" odfOperGroup.yaml # required: yes # count: 1 --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-storage-operatorgroup namespace: openshift-storage spec: targetNamespaces: - openshift-storage odfSubscription.yaml # required: yes # count: 1 --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: odf-operator namespace: openshift-storage spec: channel: "stable-4.14" name: odf-operator source: redhat-operators-disconnected sourceNamespace: openshift-marketplace installPlanApproval: Automatic status: state: AtLatestKnown 3.3.5. Telco core reference configuration software specifications The following information describes the telco core reference design specification (RDS) validated software versions. 3.3.5.1. Telco core reference configuration software specifications The Red Hat telco core 4.17 solution has been validated using the following Red Hat software products for OpenShift Container Platform clusters. Table 3.15. Telco core cluster validated software components Component Software version Cluster Logging Operator 6.0 OpenShift Data Foundation 4.17 SR-IOV Operator 4.17 MetalLB 4.17 NMState Operator 4.17 NUMA-aware scheduler 4.17 | [
"query=avg_over_time(pod:container_cpu_usage:sum{namespace=\"openshift-kube-apiserver\"}[30m])",
"apiVersion: \"observability.openshift.io/v1\" kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging annotations: {} spec: # outputs: USDoutputs # pipelines: USDpipelines serviceAccount: name: logcollector #apiVersion: \"observability.openshift.io/v1\" #kind: ClusterLogForwarder #metadata: name: instance namespace: openshift-logging spec: outputs: - type: \"kafka\" name: kafka-open # below url is an example kafka: url: tcp://10.46.55.190:9092/test filters: - name: test-labels type: openshiftLabels openshiftLabels: label1: test1 label2: test2 label3: test3 label4: test4 pipelines: - name: all-to-default inputRefs: - audit - infrastructure filterRefs: - test-labels outputRefs: - kafka-open serviceAccount: name: logcollector",
"--- apiVersion: v1 kind: Namespace metadata: name: openshift-logging annotations: workload.openshift.io/allowed: management",
"--- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: cluster-logging namespace: openshift-logging annotations: {} spec: targetNamespaces: - openshift-logging",
"--- apiVersion: v1 kind: ServiceAccount metadata: name: logcollector namespace: openshift-logging annotations: {}",
"--- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: logcollector-audit-logs-binding annotations: {} roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: collect-audit-logs subjects: - kind: ServiceAccount name: logcollector namespace: openshift-logging",
"--- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: logcollector-infrastructure-logs-binding annotations: {} roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: collect-infrastructure-logs subjects: - kind: ServiceAccount name: logcollector namespace: openshift-logging",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: cluster-logging namespace: openshift-logging annotations: {} spec: channel: \"stable-6.0\" name: cluster-logging source: redhat-operators-disconnected sourceNamespace: openshift-marketplace installPlanApproval: Manual status: state: AtLatestKnown",
"apiVersion: lca.openshift.io/v1 kind: ImageBasedUpgrade metadata: name: upgrade spec: stage: Idle # When setting `stage: Prep`, remember to add the seed image reference object below. # seedImageRef: # image: USDimage # version: USDversion",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: lifecycle-agent namespace: openshift-lifecycle-agent annotations: {} spec: channel: \"stable\" name: lifecycle-agent source: redhat-operators-disconnected sourceNamespace: openshift-marketplace installPlanApproval: Manual status: state: AtLatestKnown",
"apiVersion: v1 kind: Namespace metadata: name: openshift-lifecycle-agent annotations: workload.openshift.io/allowed: management labels: kubernetes.io/metadata.name: openshift-lifecycle-agent",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: lifecycle-agent namespace: openshift-lifecycle-agent annotations: {} spec: targetNamespaces: - openshift-lifecycle-agent",
"apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: annotations: {} name: example-storage-class provisioner: kubernetes.io/no-provisioner reclaimPolicy: Delete",
"apiVersion: \"local.storage.openshift.io/v1\" kind: \"LocalVolume\" metadata: name: \"local-disks\" namespace: \"openshift-local-storage\" annotations: {} spec: logLevel: Normal managementState: Managed storageClassDevices: # The list of storage classes and associated devicePaths need to be specified like this example: - storageClassName: \"example-storage-class\" volumeMode: Filesystem fsType: xfs # The below must be adjusted to the hardware. # For stability and reliability, it's recommended to use persistent # naming conventions for devicePaths, such as /dev/disk/by-path. devicePaths: - /dev/disk/by-path/pci-0000:05:00.0-nvme-1 #--- ## How to verify ## 1. Create a PVC apiVersion: v1 kind: PersistentVolumeClaim metadata: name: local-pvc-name spec: accessModes: - ReadWriteOnce volumeMode: Filesystem resources: requests: storage: 100Gi storageClassName: example-storage-class #--- ## 2. Create a pod that mounts it apiVersion: v1 kind: Pod metadata: labels: run: busybox name: busybox spec: containers: - image: quay.io/quay/busybox:latest name: busybox resources: {} command: [\"/bin/sh\", \"-c\", \"sleep infinity\"] volumeMounts: - name: local-pvc mountPath: /data volumes: - name: local-pvc persistentVolumeClaim: claimName: local-pvc-name dnsPolicy: ClusterFirst restartPolicy: Always ## 3. Run the pod on the cluster and verify the size and access of the `/data` mount",
"apiVersion: v1 kind: Namespace metadata: name: openshift-local-storage annotations: workload.openshift.io/allowed: management",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-local-storage namespace: openshift-local-storage annotations: {} spec: targetNamespaces: - openshift-local-storage",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: local-storage-operator namespace: openshift-local-storage annotations: {} spec: channel: \"stable\" name: local-storage-operator source: redhat-operators-disconnected sourceNamespace: openshift-marketplace installPlanApproval: Manual status: state: AtLatestKnown",
"This CR verifies the installation/upgrade of the Sriov Network Operator apiVersion: operators.coreos.com/v1 kind: Operator metadata: name: lvms-operator.openshift-storage annotations: {} status: components: refs: - kind: Subscription namespace: openshift-storage conditions: - type: CatalogSourcesUnhealthy status: \"False\" - kind: InstallPlan namespace: openshift-storage conditions: - type: Installed status: \"True\" - kind: ClusterServiceVersion namespace: openshift-storage conditions: - type: Succeeded status: \"True\" reason: InstallSucceeded",
"apiVersion: lvm.topolvm.io/v1alpha1 kind: LVMCluster metadata: name: lvmcluster namespace: openshift-storage annotations: {} spec: {} #example: creating a vg1 volume group leveraging all available disks on the node except the installation disk. storage: deviceClasses: - name: vg1 thinPoolConfig: name: thin-pool-1 sizePercent: 90 overprovisionRatio: 10",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: lvms-operator namespace: openshift-storage annotations: {} spec: channel: \"stable\" name: lvms-operator source: redhat-operators-disconnected sourceNamespace: openshift-marketplace installPlanApproval: Manual status: state: AtLatestKnown",
"apiVersion: v1 kind: Namespace metadata: name: openshift-storage labels: workload.openshift.io/allowed: \"management\" openshift.io/cluster-monitoring: \"true\" annotations: {}",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: lvms-operator-operatorgroup namespace: openshift-storage annotations: {} spec: targetNamespaces: - openshift-storage",
"apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: # if you change this name make sure the 'include' line in TunedPerformancePatch.yaml # matches this name: include=openshift-node-performance-USD{PerformanceProfile.metadata.name} # Also in file 'validatorCRs/informDuValidator.yaml': # name: 50-performance-USD{PerformanceProfile.metadata.name} name: openshift-node-performance-profile annotations: ran.openshift.io/reference-configuration: \"ran-du.redhat.com\" spec: additionalKernelArgs: - \"rcupdate.rcu_normal_after_boot=0\" - \"efi=runtime\" - \"vfio_pci.enable_sriov=1\" - \"vfio_pci.disable_idle_d3=1\" - \"module_blacklist=irdma\" cpu: isolated: USDisolated reserved: USDreserved hugepages: defaultHugepagesSize: USDdefaultHugepagesSize pages: - size: USDsize count: USDcount node: USDnode machineConfigPoolSelector: pools.operator.machineconfiguration.openshift.io/USDmcp: \"\" nodeSelector: node-role.kubernetes.io/USDmcp: '' numa: topologyPolicy: \"restricted\" # To use the standard (non-realtime) kernel, set enabled to false realTimeKernel: enabled: true workloadHints: # WorkloadHints defines the set of upper level flags for different type of workloads. # See https://github.com/openshift/cluster-node-tuning-operator/blob/master/docs/performanceprofile/performance_profile.md#workloadhints # for detailed descriptions of each item. # The configuration below is set for a low latency, performance mode. realTime: true highPowerConsumption: false perPodPowerManagement: false",
"apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: performance-patch namespace: openshift-cluster-node-tuning-operator annotations: {} spec: profile: - name: performance-patch # Please note: # - The 'include' line must match the associated PerformanceProfile name, following below pattern # include=openshift-node-performance-USD{PerformanceProfile.metadata.name} # - When using the standard (non-realtime) kernel, remove the kernel.timer_migration override from # the [sysctl] section and remove the entire section if it is empty. data: | [main] summary=Configuration changes profile inherited from performance created tuned include=openshift-node-performance-openshift-node-performance-profile [scheduler] group.ice-ptp=0:f:10:*:ice-ptp.* group.ice-gnss=0:f:10:*:ice-gnss.* group.ice-dplls=0:f:10:*:ice-dplls.* [service] service.stalld=start,enable service.chronyd=stop,disable recommend: - machineConfigLabels: machineconfiguration.openshift.io/role: \"USDmcp\" priority: 19 profile: performance-patch",
"apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: boundary namespace: openshift-ptp annotations: {} spec: profile: - name: \"boundary\" ptp4lOpts: \"-2 --summary_interval -4\" phc2sysOpts: \"-a -r -m -n 24 -N 8 -R 16\" ptpSchedulingPolicy: SCHED_FIFO ptpSchedulingPriority: 10 ptpSettings: logReduce: \"true\" ptp4lConf: | # The interface name is hardware-specific [USDiface_slave] masterOnly 0 [USDiface_master_1] masterOnly 1 [USDiface_master_2] masterOnly 1 [USDiface_master_3] masterOnly 1 [global] # # Default Data Set # twoStepFlag 1 slaveOnly 0 priority1 128 priority2 128 domainNumber 24 #utc_offset 37 clockClass 248 clockAccuracy 0xFE offsetScaledLogVariance 0xFFFF free_running 0 freq_est_interval 1 dscp_event 0 dscp_general 0 dataset_comparison G.8275.x G.8275.defaultDS.localPriority 128 # # Port Data Set # logAnnounceInterval -3 logSyncInterval -4 logMinDelayReqInterval -4 logMinPdelayReqInterval -4 announceReceiptTimeout 3 syncReceiptTimeout 0 delayAsymmetry 0 fault_reset_interval -4 neighborPropDelayThresh 20000000 masterOnly 0 G.8275.portDS.localPriority 128 # # Run time options # assume_two_step 0 logging_level 6 path_trace_enabled 0 follow_up_info 0 hybrid_e2e 0 inhibit_multicast_service 0 net_sync_monitor 0 tc_spanning_tree 0 tx_timestamp_timeout 50 unicast_listen 0 unicast_master_table 0 unicast_req_duration 3600 use_syslog 1 verbose 0 summary_interval 0 kernel_leap 1 check_fup_sync 0 clock_class_threshold 135 # # Servo Options # pi_proportional_const 0.0 pi_integral_const 0.0 pi_proportional_scale 0.0 pi_proportional_exponent -0.3 pi_proportional_norm_max 0.7 pi_integral_scale 0.0 pi_integral_exponent 0.4 pi_integral_norm_max 0.3 step_threshold 2.0 first_step_threshold 0.00002 max_frequency 900000000 clock_servo pi sanity_freq_limit 200000000 ntpshm_segment 0 # # Transport options # transportSpecific 0x0 ptp_dst_mac 01:1B:19:00:00:00 p2p_dst_mac 01:80:C2:00:00:0E udp_ttl 1 udp6_scope 0x0E uds_address /var/run/ptp4l # # Default interface options # clock_type BC network_transport L2 delay_mechanism E2E time_stamping hardware tsproc_mode filter delay_filter moving_median delay_filter_length 10 egressLatency 0 ingressLatency 0 boundary_clock_jbod 0 # # Clock description # productDescription ;; revisionData ;; manufacturerIdentity 00:00:00 userDescription ; timeSource 0xA0 recommend: - profile: \"boundary\" priority: 4 match: - nodeLabel: \"node-role.kubernetes.io/USDmcp\"",
"apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: boundary-ha namespace: openshift-ptp annotations: {} spec: profile: - name: \"boundary-ha\" ptp4lOpts: \"\" phc2sysOpts: \"-a -r -m -n 24 -N 8 -R 16\" ptpSchedulingPolicy: SCHED_FIFO ptpSchedulingPriority: 10 ptpSettings: logReduce: \"true\" haProfiles: \"USDprofile1,USDprofile2\" recommend: - profile: \"boundary-ha\" priority: 4 match: - nodeLabel: \"node-role.kubernetes.io/USDmcp\"",
"The grandmaster profile is provided for testing only It is not installed on production clusters apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: grandmaster namespace: openshift-ptp annotations: {} spec: profile: - name: \"grandmaster\" # The interface name is hardware-specific interface: USDinterface ptp4lOpts: \"-2 --summary_interval -4\" phc2sysOpts: \"-a -r -m -n 24 -N 8 -R 16\" ptpSchedulingPolicy: SCHED_FIFO ptpSchedulingPriority: 10 ptpSettings: logReduce: \"true\" ptp4lConf: | [global] # # Default Data Set # twoStepFlag 1 slaveOnly 0 priority1 128 priority2 128 domainNumber 24 #utc_offset 37 clockClass 255 clockAccuracy 0xFE offsetScaledLogVariance 0xFFFF free_running 0 freq_est_interval 1 dscp_event 0 dscp_general 0 dataset_comparison G.8275.x G.8275.defaultDS.localPriority 128 # # Port Data Set # logAnnounceInterval -3 logSyncInterval -4 logMinDelayReqInterval -4 logMinPdelayReqInterval -4 announceReceiptTimeout 3 syncReceiptTimeout 0 delayAsymmetry 0 fault_reset_interval -4 neighborPropDelayThresh 20000000 masterOnly 0 G.8275.portDS.localPriority 128 # # Run time options # assume_two_step 0 logging_level 6 path_trace_enabled 0 follow_up_info 0 hybrid_e2e 0 inhibit_multicast_service 0 net_sync_monitor 0 tc_spanning_tree 0 tx_timestamp_timeout 50 unicast_listen 0 unicast_master_table 0 unicast_req_duration 3600 use_syslog 1 verbose 0 summary_interval 0 kernel_leap 1 check_fup_sync 0 clock_class_threshold 7 # # Servo Options # pi_proportional_const 0.0 pi_integral_const 0.0 pi_proportional_scale 0.0 pi_proportional_exponent -0.3 pi_proportional_norm_max 0.7 pi_integral_scale 0.0 pi_integral_exponent 0.4 pi_integral_norm_max 0.3 step_threshold 2.0 first_step_threshold 0.00002 max_frequency 900000000 clock_servo pi sanity_freq_limit 200000000 ntpshm_segment 0 # # Transport options # transportSpecific 0x0 ptp_dst_mac 01:1B:19:00:00:00 p2p_dst_mac 01:80:C2:00:00:0E udp_ttl 1 udp6_scope 0x0E uds_address /var/run/ptp4l # # Default interface options # clock_type OC network_transport L2 delay_mechanism E2E time_stamping hardware tsproc_mode filter delay_filter moving_median delay_filter_length 10 egressLatency 0 ingressLatency 0 boundary_clock_jbod 0 # # Clock description # productDescription ;; revisionData ;; manufacturerIdentity 00:00:00 userDescription ; timeSource 0xA0 recommend: - profile: \"grandmaster\" priority: 4 match: - nodeLabel: \"node-role.kubernetes.io/USDmcp\"",
"apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: du-ptp-slave namespace: openshift-ptp annotations: {} spec: profile: - name: \"slave\" # The interface name is hardware-specific interface: USDinterface ptp4lOpts: \"-2 -s --summary_interval -4\" phc2sysOpts: \"-a -r -m -n 24 -N 8 -R 16\" ptpSchedulingPolicy: SCHED_FIFO ptpSchedulingPriority: 10 ptpSettings: logReduce: \"true\" ptp4lConf: | [global] # # Default Data Set # twoStepFlag 1 slaveOnly 1 priority1 128 priority2 128 domainNumber 24 #utc_offset 37 clockClass 255 clockAccuracy 0xFE offsetScaledLogVariance 0xFFFF free_running 0 freq_est_interval 1 dscp_event 0 dscp_general 0 dataset_comparison G.8275.x G.8275.defaultDS.localPriority 128 # # Port Data Set # logAnnounceInterval -3 logSyncInterval -4 logMinDelayReqInterval -4 logMinPdelayReqInterval -4 announceReceiptTimeout 3 syncReceiptTimeout 0 delayAsymmetry 0 fault_reset_interval -4 neighborPropDelayThresh 20000000 masterOnly 0 G.8275.portDS.localPriority 128 # # Run time options # assume_two_step 0 logging_level 6 path_trace_enabled 0 follow_up_info 0 hybrid_e2e 0 inhibit_multicast_service 0 net_sync_monitor 0 tc_spanning_tree 0 tx_timestamp_timeout 50 unicast_listen 0 unicast_master_table 0 unicast_req_duration 3600 use_syslog 1 verbose 0 summary_interval 0 kernel_leap 1 check_fup_sync 0 clock_class_threshold 7 # # Servo Options # pi_proportional_const 0.0 pi_integral_const 0.0 pi_proportional_scale 0.0 pi_proportional_exponent -0.3 pi_proportional_norm_max 0.7 pi_integral_scale 0.0 pi_integral_exponent 0.4 pi_integral_norm_max 0.3 step_threshold 2.0 first_step_threshold 0.00002 max_frequency 900000000 clock_servo pi sanity_freq_limit 200000000 ntpshm_segment 0 # # Transport options # transportSpecific 0x0 ptp_dst_mac 01:1B:19:00:00:00 p2p_dst_mac 01:80:C2:00:00:0E udp_ttl 1 udp6_scope 0x0E uds_address /var/run/ptp4l # # Default interface options # clock_type OC network_transport L2 delay_mechanism E2E time_stamping hardware tsproc_mode filter delay_filter moving_median delay_filter_length 10 egressLatency 0 ingressLatency 0 boundary_clock_jbod 0 # # Clock description # productDescription ;; revisionData ;; manufacturerIdentity 00:00:00 userDescription ; timeSource 0xA0 recommend: - profile: \"slave\" priority: 4 match: - nodeLabel: \"node-role.kubernetes.io/USDmcp\"",
"apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: boundary namespace: openshift-ptp annotations: {} spec: profile: - name: \"boundary\" ptp4lOpts: \"-2\" phc2sysOpts: \"-a -r -n 24\" ptpSchedulingPolicy: SCHED_FIFO ptpSchedulingPriority: 10 ptpSettings: logReduce: \"true\" ptp4lConf: | # The interface name is hardware-specific [USDiface_slave] masterOnly 0 [USDiface_master_1] masterOnly 1 [USDiface_master_2] masterOnly 1 [USDiface_master_3] masterOnly 1 [global] # # Default Data Set # twoStepFlag 1 slaveOnly 0 priority1 128 priority2 128 domainNumber 24 #utc_offset 37 clockClass 248 clockAccuracy 0xFE offsetScaledLogVariance 0xFFFF free_running 0 freq_est_interval 1 dscp_event 0 dscp_general 0 dataset_comparison G.8275.x G.8275.defaultDS.localPriority 128 # # Port Data Set # logAnnounceInterval -3 logSyncInterval -4 logMinDelayReqInterval -4 logMinPdelayReqInterval -4 announceReceiptTimeout 3 syncReceiptTimeout 0 delayAsymmetry 0 fault_reset_interval -4 neighborPropDelayThresh 20000000 masterOnly 0 G.8275.portDS.localPriority 128 # # Run time options # assume_two_step 0 logging_level 6 path_trace_enabled 0 follow_up_info 0 hybrid_e2e 0 inhibit_multicast_service 0 net_sync_monitor 0 tc_spanning_tree 0 tx_timestamp_timeout 50 unicast_listen 0 unicast_master_table 0 unicast_req_duration 3600 use_syslog 1 verbose 0 summary_interval 0 kernel_leap 1 check_fup_sync 0 clock_class_threshold 135 # # Servo Options # pi_proportional_const 0.0 pi_integral_const 0.0 pi_proportional_scale 0.0 pi_proportional_exponent -0.3 pi_proportional_norm_max 0.7 pi_integral_scale 0.0 pi_integral_exponent 0.4 pi_integral_norm_max 0.3 step_threshold 2.0 first_step_threshold 0.00002 max_frequency 900000000 clock_servo pi sanity_freq_limit 200000000 ntpshm_segment 0 # # Transport options # transportSpecific 0x0 ptp_dst_mac 01:1B:19:00:00:00 p2p_dst_mac 01:80:C2:00:00:0E udp_ttl 1 udp6_scope 0x0E uds_address /var/run/ptp4l # # Default interface options # clock_type BC network_transport L2 delay_mechanism E2E time_stamping hardware tsproc_mode filter delay_filter moving_median delay_filter_length 10 egressLatency 0 ingressLatency 0 boundary_clock_jbod 0 # # Clock description # productDescription ;; revisionData ;; manufacturerIdentity 00:00:00 userDescription ; timeSource 0xA0 recommend: - profile: \"boundary\" priority: 4 match: - nodeLabel: \"node-role.kubernetes.io/USDmcp\"",
"apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: boundary-ha namespace: openshift-ptp annotations: {} spec: profile: - name: \"boundary-ha\" ptp4lOpts: \"\" phc2sysOpts: \"-a -r -n 24\" ptpSchedulingPolicy: SCHED_FIFO ptpSchedulingPriority: 10 ptpSettings: logReduce: \"true\" haProfiles: \"USDprofile1,USDprofile2\" recommend: - profile: \"boundary-ha\" priority: 4 match: - nodeLabel: \"node-role.kubernetes.io/USDmcp\"",
"The grandmaster profile is provided for testing only It is not installed on production clusters In this example two cards USDiface_nic1 and USDiface_nic2 are connected via SMA1 ports by a cable and USDiface_nic2 receives 1PPS signals from USDiface_nic1 apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: grandmaster namespace: openshift-ptp annotations: {} spec: profile: - name: \"grandmaster\" ptp4lOpts: \"-2 --summary_interval -4\" phc2sysOpts: -r -u 0 -m -w -N 8 -R 16 -s USDiface_nic1 -n 24 ptpSchedulingPolicy: SCHED_FIFO ptpSchedulingPriority: 10 ptpSettings: logReduce: \"true\" plugins: e810: enableDefaultConfig: false settings: LocalMaxHoldoverOffSet: 1500 LocalHoldoverTimeout: 14400 MaxInSpecOffset: 100 pins: USDe810_pins # \"USDiface_nic1\": # \"U.FL2\": \"0 2\" # \"U.FL1\": \"0 1\" # \"SMA2\": \"0 2\" # \"SMA1\": \"2 1\" # \"USDiface_nic2\": # \"U.FL2\": \"0 2\" # \"U.FL1\": \"0 1\" # \"SMA2\": \"0 2\" # \"SMA1\": \"1 1\" ublxCmds: - args: #ubxtool -P 29.20 -z CFG-HW-ANT_CFG_VOLTCTRL,1 - \"-P\" - \"29.20\" - \"-z\" - \"CFG-HW-ANT_CFG_VOLTCTRL,1\" reportOutput: false - args: #ubxtool -P 29.20 -e GPS - \"-P\" - \"29.20\" - \"-e\" - \"GPS\" reportOutput: false - args: #ubxtool -P 29.20 -d Galileo - \"-P\" - \"29.20\" - \"-d\" - \"Galileo\" reportOutput: false - args: #ubxtool -P 29.20 -d GLONASS - \"-P\" - \"29.20\" - \"-d\" - \"GLONASS\" reportOutput: false - args: #ubxtool -P 29.20 -d BeiDou - \"-P\" - \"29.20\" - \"-d\" - \"BeiDou\" reportOutput: false - args: #ubxtool -P 29.20 -d SBAS - \"-P\" - \"29.20\" - \"-d\" - \"SBAS\" reportOutput: false - args: #ubxtool -P 29.20 -t -w 5 -v 1 -e SURVEYIN,600,50000 - \"-P\" - \"29.20\" - \"-t\" - \"-w\" - \"5\" - \"-v\" - \"1\" - \"-e\" - \"SURVEYIN,600,50000\" reportOutput: true - args: #ubxtool -P 29.20 -p MON-HW - \"-P\" - \"29.20\" - \"-p\" - \"MON-HW\" reportOutput: true - args: #ubxtool -P 29.20 -p CFG-MSG,1,38,248 - \"-P\" - \"29.20\" - \"-p\" - \"CFG-MSG,1,38,248\" reportOutput: true ts2phcOpts: \" \" ts2phcConf: | [nmea] ts2phc.master 1 [global] use_syslog 0 verbose 1 logging_level 7 ts2phc.pulsewidth 100000000 #cat /dev/GNSS to find available serial port #example value of gnss_serialport is /dev/ttyGNSS_1700_0 ts2phc.nmea_serialport USDgnss_serialport leapfile /usr/share/zoneinfo/leap-seconds.list [USDiface_nic1] ts2phc.extts_polarity rising ts2phc.extts_correction 0 [USDiface_nic2] ts2phc.master 0 ts2phc.extts_polarity rising #this is a measured value in nanoseconds to compensate for SMA cable delay ts2phc.extts_correction -10 ptp4lConf: | [USDiface_nic1] masterOnly 1 [USDiface_nic1_1] masterOnly 1 [USDiface_nic1_2] masterOnly 1 [USDiface_nic1_3] masterOnly 1 [USDiface_nic2] masterOnly 1 [USDiface_nic2_1] masterOnly 1 [USDiface_nic2_2] masterOnly 1 [USDiface_nic2_3] masterOnly 1 [global] # # Default Data Set # twoStepFlag 1 priority1 128 priority2 128 domainNumber 24 #utc_offset 37 clockClass 6 clockAccuracy 0x27 offsetScaledLogVariance 0xFFFF free_running 0 freq_est_interval 1 dscp_event 0 dscp_general 0 dataset_comparison G.8275.x G.8275.defaultDS.localPriority 128 # # Port Data Set # logAnnounceInterval -3 logSyncInterval -4 logMinDelayReqInterval -4 logMinPdelayReqInterval 0 announceReceiptTimeout 3 syncReceiptTimeout 0 delayAsymmetry 0 fault_reset_interval -4 neighborPropDelayThresh 20000000 masterOnly 0 G.8275.portDS.localPriority 128 # # Run time options # assume_two_step 0 logging_level 6 path_trace_enabled 0 follow_up_info 0 hybrid_e2e 0 inhibit_multicast_service 0 net_sync_monitor 0 tc_spanning_tree 0 
tx_timestamp_timeout 50 unicast_listen 0 unicast_master_table 0 unicast_req_duration 3600 use_syslog 1 verbose 0 summary_interval -4 kernel_leap 1 check_fup_sync 0 clock_class_threshold 7 # # Servo Options # pi_proportional_const 0.0 pi_integral_const 0.0 pi_proportional_scale 0.0 pi_proportional_exponent -0.3 pi_proportional_norm_max 0.7 pi_integral_scale 0.0 pi_integral_exponent 0.4 pi_integral_norm_max 0.3 step_threshold 2.0 first_step_threshold 0.00002 clock_servo pi sanity_freq_limit 200000000 ntpshm_segment 0 # # Transport options # transportSpecific 0x0 ptp_dst_mac 01:1B:19:00:00:00 p2p_dst_mac 01:80:C2:00:00:0E udp_ttl 1 udp6_scope 0x0E uds_address /var/run/ptp4l # # Default interface options # clock_type BC network_transport L2 delay_mechanism E2E time_stamping hardware tsproc_mode filter delay_filter moving_median delay_filter_length 10 egressLatency 0 ingressLatency 0 boundary_clock_jbod 1 # # Clock description # productDescription ;; revisionData ;; manufacturerIdentity 00:00:00 userDescription ; timeSource 0x20 recommend: - profile: \"grandmaster\" priority: 4 match: - nodeLabel: \"node-role.kubernetes.io/USDmcp\"",
"The grandmaster profile is provided for testing only It is not installed on production clusters apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: grandmaster namespace: openshift-ptp annotations: {} spec: profile: - name: \"grandmaster\" ptp4lOpts: \"-2 --summary_interval -4\" phc2sysOpts: -r -u 0 -m -w -N 8 -R 16 -s USDiface_master -n 24 ptpSchedulingPolicy: SCHED_FIFO ptpSchedulingPriority: 10 ptpSettings: logReduce: \"true\" plugins: e810: enableDefaultConfig: false settings: LocalMaxHoldoverOffSet: 1500 LocalHoldoverTimeout: 14400 MaxInSpecOffset: 100 pins: USDe810_pins # \"USDiface_master\": # \"U.FL2\": \"0 2\" # \"U.FL1\": \"0 1\" # \"SMA2\": \"0 2\" # \"SMA1\": \"0 1\" ublxCmds: - args: #ubxtool -P 29.20 -z CFG-HW-ANT_CFG_VOLTCTRL,1 - \"-P\" - \"29.20\" - \"-z\" - \"CFG-HW-ANT_CFG_VOLTCTRL,1\" reportOutput: false - args: #ubxtool -P 29.20 -e GPS - \"-P\" - \"29.20\" - \"-e\" - \"GPS\" reportOutput: false - args: #ubxtool -P 29.20 -d Galileo - \"-P\" - \"29.20\" - \"-d\" - \"Galileo\" reportOutput: false - args: #ubxtool -P 29.20 -d GLONASS - \"-P\" - \"29.20\" - \"-d\" - \"GLONASS\" reportOutput: false - args: #ubxtool -P 29.20 -d BeiDou - \"-P\" - \"29.20\" - \"-d\" - \"BeiDou\" reportOutput: false - args: #ubxtool -P 29.20 -d SBAS - \"-P\" - \"29.20\" - \"-d\" - \"SBAS\" reportOutput: false - args: #ubxtool -P 29.20 -t -w 5 -v 1 -e SURVEYIN,600,50000 - \"-P\" - \"29.20\" - \"-t\" - \"-w\" - \"5\" - \"-v\" - \"1\" - \"-e\" - \"SURVEYIN,600,50000\" reportOutput: true - args: #ubxtool -P 29.20 -p MON-HW - \"-P\" - \"29.20\" - \"-p\" - \"MON-HW\" reportOutput: true - args: #ubxtool -P 29.20 -p CFG-MSG,1,38,248 - \"-P\" - \"29.20\" - \"-p\" - \"CFG-MSG,1,38,248\" reportOutput: true ts2phcOpts: \" \" ts2phcConf: | [nmea] ts2phc.master 1 [global] use_syslog 0 verbose 1 logging_level 7 ts2phc.pulsewidth 100000000 #cat /dev/GNSS to find available serial port #example value of gnss_serialport is /dev/ttyGNSS_1700_0 ts2phc.nmea_serialport USDgnss_serialport leapfile /usr/share/zoneinfo/leap-seconds.list [USDiface_master] ts2phc.extts_polarity rising ts2phc.extts_correction 0 ptp4lConf: | [USDiface_master] masterOnly 1 [USDiface_master_1] masterOnly 1 [USDiface_master_2] masterOnly 1 [USDiface_master_3] masterOnly 1 [global] # # Default Data Set # twoStepFlag 1 priority1 128 priority2 128 domainNumber 24 #utc_offset 37 clockClass 6 clockAccuracy 0x27 offsetScaledLogVariance 0xFFFF free_running 0 freq_est_interval 1 dscp_event 0 dscp_general 0 dataset_comparison G.8275.x G.8275.defaultDS.localPriority 128 # # Port Data Set # logAnnounceInterval -3 logSyncInterval -4 logMinDelayReqInterval -4 logMinPdelayReqInterval 0 announceReceiptTimeout 3 syncReceiptTimeout 0 delayAsymmetry 0 fault_reset_interval -4 neighborPropDelayThresh 20000000 masterOnly 0 G.8275.portDS.localPriority 128 # # Run time options # assume_two_step 0 logging_level 6 path_trace_enabled 0 follow_up_info 0 hybrid_e2e 0 inhibit_multicast_service 0 net_sync_monitor 0 tc_spanning_tree 0 tx_timestamp_timeout 50 unicast_listen 0 unicast_master_table 0 unicast_req_duration 3600 use_syslog 1 verbose 0 summary_interval -4 kernel_leap 1 check_fup_sync 0 clock_class_threshold 7 # # Servo Options # pi_proportional_const 0.0 pi_integral_const 0.0 pi_proportional_scale 0.0 pi_proportional_exponent -0.3 pi_proportional_norm_max 0.7 pi_integral_scale 0.0 pi_integral_exponent 0.4 pi_integral_norm_max 0.3 step_threshold 2.0 first_step_threshold 0.00002 clock_servo pi sanity_freq_limit 200000000 ntpshm_segment 0 # # Transport 
options # transportSpecific 0x0 ptp_dst_mac 01:1B:19:00:00:00 p2p_dst_mac 01:80:C2:00:00:0E udp_ttl 1 udp6_scope 0x0E uds_address /var/run/ptp4l # # Default interface options # clock_type BC network_transport L2 delay_mechanism E2E time_stamping hardware tsproc_mode filter delay_filter moving_median delay_filter_length 10 egressLatency 0 ingressLatency 0 boundary_clock_jbod 0 # # Clock description # productDescription ;; revisionData ;; manufacturerIdentity 00:00:00 userDescription ; timeSource 0x20 recommend: - profile: \"grandmaster\" priority: 4 match: - nodeLabel: \"node-role.kubernetes.io/USDmcp\"",
"apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: ordinary namespace: openshift-ptp annotations: {} spec: profile: - name: \"ordinary\" # The interface name is hardware-specific interface: USDinterface ptp4lOpts: \"-2 -s\" phc2sysOpts: \"-a -r -n 24\" ptpSchedulingPolicy: SCHED_FIFO ptpSchedulingPriority: 10 ptpSettings: logReduce: \"true\" ptp4lConf: | [global] # # Default Data Set # twoStepFlag 1 slaveOnly 1 priority1 128 priority2 128 domainNumber 24 #utc_offset 37 clockClass 255 clockAccuracy 0xFE offsetScaledLogVariance 0xFFFF free_running 0 freq_est_interval 1 dscp_event 0 dscp_general 0 dataset_comparison G.8275.x G.8275.defaultDS.localPriority 128 # # Port Data Set # logAnnounceInterval -3 logSyncInterval -4 logMinDelayReqInterval -4 logMinPdelayReqInterval -4 announceReceiptTimeout 3 syncReceiptTimeout 0 delayAsymmetry 0 fault_reset_interval -4 neighborPropDelayThresh 20000000 masterOnly 0 G.8275.portDS.localPriority 128 # # Run time options # assume_two_step 0 logging_level 6 path_trace_enabled 0 follow_up_info 0 hybrid_e2e 0 inhibit_multicast_service 0 net_sync_monitor 0 tc_spanning_tree 0 tx_timestamp_timeout 50 unicast_listen 0 unicast_master_table 0 unicast_req_duration 3600 use_syslog 1 verbose 0 summary_interval 0 kernel_leap 1 check_fup_sync 0 clock_class_threshold 7 # # Servo Options # pi_proportional_const 0.0 pi_integral_const 0.0 pi_proportional_scale 0.0 pi_proportional_exponent -0.3 pi_proportional_norm_max 0.7 pi_integral_scale 0.0 pi_integral_exponent 0.4 pi_integral_norm_max 0.3 step_threshold 2.0 first_step_threshold 0.00002 max_frequency 900000000 clock_servo pi sanity_freq_limit 200000000 ntpshm_segment 0 # # Transport options # transportSpecific 0x0 ptp_dst_mac 01:1B:19:00:00:00 p2p_dst_mac 01:80:C2:00:00:0E udp_ttl 1 udp6_scope 0x0E uds_address /var/run/ptp4l # # Default interface options # clock_type OC network_transport L2 delay_mechanism E2E time_stamping hardware tsproc_mode filter delay_filter moving_median delay_filter_length 10 egressLatency 0 ingressLatency 0 boundary_clock_jbod 0 # # Clock description # productDescription ;; revisionData ;; manufacturerIdentity 00:00:00 userDescription ; timeSource 0xA0 recommend: - profile: \"ordinary\" priority: 4 match: - nodeLabel: \"node-role.kubernetes.io/USDmcp\"",
"apiVersion: ptp.openshift.io/v1 kind: PtpOperatorConfig metadata: name: default namespace: openshift-ptp annotations: {} spec: daemonNodeSelector: node-role.kubernetes.io/USDmcp: \"\"",
"apiVersion: ptp.openshift.io/v1 kind: PtpOperatorConfig metadata: name: default namespace: openshift-ptp annotations: {} spec: daemonNodeSelector: node-role.kubernetes.io/USDmcp: \"\" ptpEventConfig: apiVersion: USDevent_api_version enableEventPublisher: true transportHost: \"http://ptp-event-publisher-service-NODE_NAME.openshift-ptp.svc.cluster.local:9043\"",
"--- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: ptp-operator-subscription namespace: openshift-ptp annotations: {} spec: channel: \"stable\" name: ptp-operator source: redhat-operators-disconnected sourceNamespace: openshift-marketplace installPlanApproval: Manual status: state: AtLatestKnown",
"--- apiVersion: v1 kind: Namespace metadata: name: openshift-ptp annotations: workload.openshift.io/allowed: management labels: openshift.io/cluster-monitoring: \"true\"",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: ptp-operators namespace: openshift-ptp annotations: {} spec: targetNamespaces: - openshift-ptp",
"apiVersion: v1 kind: Namespace metadata: name: vran-acceleration-operators annotations: {}",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: vran-operators namespace: vran-acceleration-operators annotations: {} spec: targetNamespaces: - vran-acceleration-operators",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: sriov-fec-subscription namespace: vran-acceleration-operators annotations: {} spec: channel: stable name: sriov-fec source: certified-operators sourceNamespace: openshift-marketplace installPlanApproval: Manual status: state: AtLatestKnown",
"apiVersion: sriovfec.intel.com/v2 kind: SriovFecClusterConfig metadata: name: config namespace: vran-acceleration-operators annotations: {} spec: drainSkip: USDdrainSkip # true if SNO, false by default priority: 1 nodeSelector: node-role.kubernetes.io/master: \"\" acceleratorSelector: pciAddress: USDpciAddress physicalFunction: pfDriver: \"vfio-pci\" vfDriver: \"vfio-pci\" vfAmount: 16 bbDevConfig: USDbbDevConfig #Recommended configuration for Intel ACC100 (Mount Bryce) FPGA here: https://github.com/smart-edge-open/openshift-operator/blob/main/spec/openshift-sriov-fec-operator.md#sample-cr-for-wireless-fec-acc100 #Recommended configuration for Intel N3000 FPGA here: https://github.com/smart-edge-open/openshift-operator/blob/main/spec/openshift-sriov-fec-operator.md#sample-cr-for-wireless-fec-n3000",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: \"\" namespace: openshift-sriov-network-operator annotations: {} spec: # resourceName: \"\" networkNamespace: openshift-sriov-network-operator vlan: \"\" spoofChk: \"\" ipam: \"\" linkState: \"\" maxTxRate: \"\" minTxRate: \"\" vlanQoS: \"\" trust: \"\" capabilities: \"\"",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: USDname namespace: openshift-sriov-network-operator annotations: {} spec: # The attributes for Mellanox/Intel based NICs as below. # deviceType: netdevice/vfio-pci # isRdma: true/false deviceType: USDdeviceType isRdma: USDisRdma nicSelector: # The exact physical function name must match the hardware used pfNames: [USDpfNames] nodeSelector: node-role.kubernetes.io/USDmcp: \"\" numVfs: USDnumVfs priority: USDpriority resourceName: USDresourceName",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovOperatorConfig metadata: name: default namespace: openshift-sriov-network-operator annotations: {} spec: configDaemonNodeSelector: \"node-role.kubernetes.io/USDmcp\": \"\" # Injector and OperatorWebhook pods can be disabled (set to \"false\") below # to reduce the number of management pods. It is recommended to start with the # webhook and injector pods enabled, and only disable them after verifying the # correctness of user manifests. # If the injector is disabled, containers using sr-iov resources must explicitly assign # them in the \"requests\"/\"limits\" section of the container spec, for example: # containers: # - name: my-sriov-workload-container # resources: # limits: # openshift.io/<resource_name>: \"1\" # requests: # openshift.io/<resource_name>: \"1\" enableInjector: false enableOperatorWebhook: false logLevel: 0",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovOperatorConfig metadata: name: default namespace: openshift-sriov-network-operator annotations: {} spec: configDaemonNodeSelector: \"node-role.kubernetes.io/USDmcp\": \"\" # Injector and OperatorWebhook pods can be disabled (set to \"false\") below # to reduce the number of management pods. It is recommended to start with the # webhook and injector pods enabled, and only disable them after verifying the # correctness of user manifests. # If the injector is disabled, containers using sr-iov resources must explicitly assign # them in the \"requests\"/\"limits\" section of the container spec, for example: # containers: # - name: my-sriov-workload-container # resources: # limits: # openshift.io/<resource_name>: \"1\" # requests: # openshift.io/<resource_name>: \"1\" enableInjector: false enableOperatorWebhook: false # Disable drain is needed for Single Node Openshift disableDrain: true logLevel: 0",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: sriov-network-operator-subscription namespace: openshift-sriov-network-operator annotations: {} spec: channel: \"stable\" name: sriov-network-operator source: redhat-operators-disconnected sourceNamespace: openshift-marketplace installPlanApproval: Manual status: state: AtLatestKnown",
"apiVersion: v1 kind: Namespace metadata: name: openshift-sriov-network-operator annotations: workload.openshift.io/allowed: management",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: sriov-network-operators namespace: openshift-sriov-network-operator annotations: {} spec: targetNamespaces: - openshift-sriov-network-operator",
"example-node1-bmh-secret & assisted-deployment-pull-secret need to be created under same namespace example-sno --- apiVersion: ran.openshift.io/v1 kind: SiteConfig metadata: name: \"example-sno\" namespace: \"example-sno\" spec: baseDomain: \"example.com\" pullSecretRef: name: \"assisted-deployment-pull-secret\" clusterImageSetNameRef: \"openshift-4.16\" sshPublicKey: \"ssh-rsa AAAA...\" clusters: - clusterName: \"example-sno\" networkType: \"OVNKubernetes\" # installConfigOverrides is a generic way of passing install-config # parameters through the siteConfig. The 'capabilities' field configures # the composable openshift feature. In this 'capabilities' setting, we # remove all the optional set of components. # Notes: # - OperatorLifecycleManager is needed for 4.15 and later # - NodeTuning is needed for 4.13 and later, not for 4.12 and earlier # - Ingress is needed for 4.16 and later installConfigOverrides: | { \"capabilities\": { \"baselineCapabilitySet\": \"None\", \"additionalEnabledCapabilities\": [ \"NodeTuning\", \"OperatorLifecycleManager\", \"Ingress\" ] } } # It is strongly recommended to include crun manifests as part of the additional install-time manifests for 4.13+. # The crun manifests can be obtained from source-crs/optional-extra-manifest/ and added to the git repo ie.sno-extra-manifest. # extraManifestPath: sno-extra-manifest clusterLabels: # These example cluster labels correspond to the bindingRules in the PolicyGenTemplate examples du-profile: \"latest\" # These example cluster labels correspond to the bindingRules in the PolicyGenTemplate examples in ../policygentemplates: # ../policygentemplates/common-ranGen.yaml will apply to all clusters with 'common: true' common: true # ../policygentemplates/group-du-sno-ranGen.yaml will apply to all clusters with 'group-du-sno: \"\"' group-du-sno: \"\" # ../policygentemplates/example-sno-site.yaml will apply to all clusters with 'sites: \"example-sno\"' # Normally this should match or contain the cluster name so it only applies to a single cluster sites: \"example-sno\" clusterNetwork: - cidr: 1001:1::/48 hostPrefix: 64 machineNetwork: - cidr: 1111:2222:3333:4444::/64 serviceNetwork: - 1001:2::/112 additionalNTPSources: - 1111:2222:3333:4444::2 # Initiates the cluster for workload partitioning. Setting specific reserved/isolated CPUSets is done via PolicyTemplate # please see Workload Partitioning Feature for a complete guide. cpuPartitioningMode: AllNodes # Optionally; This can be used to override the KlusterletAddonConfig that is created for this cluster: #crTemplates: # KlusterletAddonConfig: \"KlusterletAddonConfigOverride.yaml\" nodes: - hostName: \"example-node1.example.com\" role: \"master\" # Optionally; This can be used to configure desired BIOS setting on a host: #biosConfigRef: # filePath: \"example-hw.profile\" bmcAddress: \"idrac-virtualmedia+https://[1111:2222:3333:4444::bbbb:1]/redfish/v1/Systems/System.Embedded.1\" bmcCredentialsName: name: \"example-node1-bmh-secret\" bootMACAddress: \"AA:BB:CC:DD:EE:11\" # Use UEFISecureBoot to enable secure boot. bootMode: \"UEFISecureBoot\" rootDeviceHints: deviceName: \"/dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0\" # disk partition at `/var/lib/containers` with ignitionConfigOverride. Some values must be updated. 
See DiskPartitionContainer.md for more details ignitionConfigOverride: | { \"ignition\": { \"version\": \"3.2.0\" }, \"storage\": { \"disks\": [ { \"device\": \"/dev/disk/by-id/wwn-0x6b07b250ebb9d0002a33509f24af1f62\", \"partitions\": [ { \"label\": \"var-lib-containers\", \"sizeMiB\": 0, \"startMiB\": 250000 } ], \"wipeTable\": false } ], \"filesystems\": [ { \"device\": \"/dev/disk/by-partlabel/var-lib-containers\", \"format\": \"xfs\", \"mountOptions\": [ \"defaults\", \"prjquota\" ], \"path\": \"/var/lib/containers\", \"wipeFilesystem\": true } ] }, \"systemd\": { \"units\": [ { \"contents\": \"# Generated by Butane\\n[Unit]\\nRequires=systemd-fsck@dev-disk-by\\\\x2dpartlabel-var\\\\x2dlib\\\\x2dcontainers.service\\nAfter=systemd-fsck@dev-disk-by\\\\x2dpartlabel-var\\\\x2dlib\\\\x2dcontainers.service\\n\\n[Mount]\\nWhere=/var/lib/containers\\nWhat=/dev/disk/by-partlabel/var-lib-containers\\nType=xfs\\nOptions=defaults,prjquota\\n\\n[Install]\\nRequiredBy=local-fs.target\", \"enabled\": true, \"name\": \"var-lib-containers.mount\" } ] } } nodeNetwork: interfaces: - name: eno1 macAddress: \"AA:BB:CC:DD:EE:11\" config: interfaces: - name: eno1 type: ethernet state: up ipv4: enabled: false ipv6: enabled: true address: # For SNO sites with static IP addresses, the node-specific, # API and Ingress IPs should all be the same and configured on # the interface - ip: 1111:2222:3333:4444::aaaa:1 prefix-length: 64 dns-resolver: config: search: - example.com server: - 1111:2222:3333:4444::2 routes: config: - destination: ::/0 next-hop-interface: eno1 next-hop-address: 1111:2222:3333:4444::1 table-id: 254",
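The ignitionConfigOverride shown above must be valid JSON, or host provisioning fails. A minimal pre-commit sketch, assuming you keep the override in a standalone file before embedding it in the SiteConfig; the filename is only an example:

jq empty var-lib-containers-ignition.json && echo "override parses as valid JSON"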
"apiVersion: operator.openshift.io/v1 kind: Console metadata: annotations: include.release.openshift.io/ibm-cloud-managed: \"false\" include.release.openshift.io/self-managed-high-availability: \"false\" include.release.openshift.io/single-node-developer: \"false\" release.openshift.io/create-only: \"true\" name: cluster spec: logLevel: Normal managementState: Removed operatorLogLevel: Normal",
"Taken from https://github.com/operator-framework/operator-marketplace/blob/53c124a3f0edfd151652e1f23c87dd39ed7646bb/manifests/01_namespace.yaml Update it as the source evolves. apiVersion: v1 kind: Namespace metadata: annotations: openshift.io/node-selector: \"\" workload.openshift.io/allowed: \"management\" labels: openshift.io/cluster-monitoring: \"true\" pod-security.kubernetes.io/enforce: baseline pod-security.kubernetes.io/enforce-version: v1.25 pod-security.kubernetes.io/audit: baseline pod-security.kubernetes.io/audit-version: v1.25 pod-security.kubernetes.io/warn: baseline pod-security.kubernetes.io/warn-version: v1.25 name: \"openshift-marketplace\"",
"apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: default-cat-source namespace: openshift-marketplace annotations: target.workload.openshift.io/management: '{\"effect\": \"PreferredDuringScheduling\"}' spec: displayName: default-cat-source image: USDimageUrl publisher: Red Hat sourceType: grpc updateStrategy: registryPoll: interval: 1h status: connectionState: lastObservedState: READY",
"apiVersion: v1 kind: ConfigMap metadata: name: collect-profiles-config namespace: openshift-operator-lifecycle-manager annotations: {} data: pprof-config.yaml: | disabled: True",
"apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: name: disconnected-internal-icsp annotations: {} spec: repositoryDigestMirrors: - USDmirrors",
"apiVersion: config.openshift.io/v1 kind: OperatorHub metadata: name: cluster annotations: {} spec: disableAllDefaultSources: true",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring annotations: {} data: config.yaml: | alertmanagerMain: enabled: false telemeterClient: enabled: false prometheusK8s: retention: 24h",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster annotations: {} spec: disableNetworkDiagnostics: true",
"apiVersion: machineconfiguration.openshift.io/v1 kind: ContainerRuntimeConfig metadata: name: enable-crun-master spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/master: \"\" containerRuntimeConfig: defaultRuntime: crun",
"apiVersion: machineconfiguration.openshift.io/v1 kind: ContainerRuntimeConfig metadata: name: enable-crun-worker spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" containerRuntimeConfig: defaultRuntime: crun",
"Automatically generated by extra-manifests-builder Do not make changes directly. apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 99-crio-disable-wipe-master spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,W2NyaW9dCmNsZWFuX3NodXRkb3duX2ZpbGUgPSAiIgo= mode: 420 path: /etc/crio/crio.conf.d/99-crio-disable-wipe.toml",
"Automatically generated by extra-manifests-builder Do not make changes directly. apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 99-crio-disable-wipe-worker spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,W2NyaW9dCmNsZWFuX3NodXRkb3duX2ZpbGUgPSAiIgo= mode: 420 path: /etc/crio/crio.conf.d/99-crio-disable-wipe.toml",
"Automatically generated by extra-manifests-builder Do not make changes directly. apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 06-kdump-enable-master spec: config: ignition: version: 3.2.0 systemd: units: - enabled: true name: kdump.service kernelArguments: - crashkernel=512M",
"Automatically generated by extra-manifests-builder Do not make changes directly. apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 06-kdump-enable-worker spec: config: ignition: version: 3.2.0 systemd: units: - enabled: true name: kdump.service kernelArguments: - crashkernel=512M",
"Automatically generated by extra-manifests-builder Do not make changes directly. apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: container-mount-namespace-and-kubelet-conf-master spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,IyEvYmluL2Jhc2gKCmRlYnVnKCkgewogIGVjaG8gJEAgPiYyCn0KCnVzYWdlKCkgewogIGVjaG8gVXNhZ2U6ICQoYmFzZW5hbWUgJDApIFVOSVQgW2VudmZpbGUgW3Zhcm5hbWVdXQogIGVjaG8KICBlY2hvIEV4dHJhY3QgdGhlIGNvbnRlbnRzIG9mIHRoZSBmaXJzdCBFeGVjU3RhcnQgc3RhbnphIGZyb20gdGhlIGdpdmVuIHN5c3RlbWQgdW5pdCBhbmQgcmV0dXJuIGl0IHRvIHN0ZG91dAogIGVjaG8KICBlY2hvICJJZiAnZW52ZmlsZScgaXMgcHJvdmlkZWQsIHB1dCBpdCBpbiB0aGVyZSBpbnN0ZWFkLCBhcyBhbiBlbnZpcm9ubWVudCB2YXJpYWJsZSBuYW1lZCAndmFybmFtZSciCiAgZWNobyAiRGVmYXVsdCAndmFybmFtZScgaXMgRVhFQ1NUQVJUIGlmIG5vdCBzcGVjaWZpZWQiCiAgZXhpdCAxCn0KClVOSVQ9JDEKRU5WRklMRT0kMgpWQVJOQU1FPSQzCmlmIFtbIC16ICRVTklUIHx8ICRVTklUID09ICItLWhlbHAiIHx8ICRVTklUID09ICItaCIgXV07IHRoZW4KICB1c2FnZQpmaQpkZWJ1ZyAiRXh0cmFjdGluZyBFeGVjU3RhcnQgZnJvbSAkVU5JVCIKRklMRT0kKHN5c3RlbWN0bCBjYXQgJFVOSVQgfCBoZWFkIC1uIDEpCkZJTEU9JHtGSUxFI1wjIH0KaWYgW1sgISAtZiAkRklMRSBdXTsgdGhlbgogIGRlYnVnICJGYWlsZWQgdG8gZmluZCByb290IGZpbGUgZm9yIHVuaXQgJFVOSVQgKCRGSUxFKSIKICBleGl0CmZpCmRlYnVnICJTZXJ2aWNlIGRlZmluaXRpb24gaXMgaW4gJEZJTEUiCkVYRUNTVEFSVD0kKHNlZCAtbiAtZSAnL15FeGVjU3RhcnQ9LipcXCQvLC9bXlxcXSQvIHsgcy9eRXhlY1N0YXJ0PS8vOyBwIH0nIC1lICcvXkV4ZWNTdGFydD0uKlteXFxdJC8geyBzL15FeGVjU3RhcnQ9Ly87IHAgfScgJEZJTEUpCgppZiBbWyAkRU5WRklMRSBdXTsgdGhlbgogIFZBUk5BTUU9JHtWQVJOQU1FOi1FWEVDU1RBUlR9CiAgZWNobyAiJHtWQVJOQU1FfT0ke0VYRUNTVEFSVH0iID4gJEVOVkZJTEUKZWxzZQogIGVjaG8gJEVYRUNTVEFSVApmaQo= mode: 493 path: /usr/local/bin/extractExecStart - contents: source: data:text/plain;charset=utf-8;base64,IyEvYmluL2Jhc2gKbnNlbnRlciAtLW1vdW50PS9ydW4vY29udGFpbmVyLW1vdW50LW5hbWVzcGFjZS9tbnQgIiRAIgo= mode: 493 path: /usr/local/bin/nsenterCmns systemd: units: - contents: | [Unit] Description=Manages a mount namespace that both kubelet and crio can use to share their container-specific mounts [Service] Type=oneshot RemainAfterExit=yes RuntimeDirectory=container-mount-namespace Environment=RUNTIME_DIRECTORY=%t/container-mount-namespace Environment=BIND_POINT=%t/container-mount-namespace/mnt ExecStartPre=bash -c \"findmnt USD{RUNTIME_DIRECTORY} || mount --make-unbindable --bind USD{RUNTIME_DIRECTORY} USD{RUNTIME_DIRECTORY}\" ExecStartPre=touch USD{BIND_POINT} ExecStart=unshare --mount=USD{BIND_POINT} --propagation slave mount --make-rshared / ExecStop=umount -R USD{RUNTIME_DIRECTORY} name: container-mount-namespace.service - dropins: - contents: | [Unit] Wants=container-mount-namespace.service After=container-mount-namespace.service [Service] ExecStartPre=/usr/local/bin/extractExecStart %n /%t/%N-execstart.env ORIG_EXECSTART EnvironmentFile=-/%t/%N-execstart.env ExecStart= ExecStart=bash -c \"nsenter --mount=%t/container-mount-namespace/mnt USD{ORIG_EXECSTART}\" name: 90-container-mount-namespace.conf name: crio.service - dropins: - contents: | [Unit] Wants=container-mount-namespace.service After=container-mount-namespace.service [Service] ExecStartPre=/usr/local/bin/extractExecStart %n /%t/%N-execstart.env ORIG_EXECSTART EnvironmentFile=-/%t/%N-execstart.env ExecStart= ExecStart=bash -c \"nsenter --mount=%t/container-mount-namespace/mnt USD{ORIG_EXECSTART} --housekeeping-interval=30s\" name: 90-container-mount-namespace.conf - contents: | [Service] Environment=\"OPENSHIFT_MAX_HOUSEKEEPING_INTERVAL_DURATION=60s\" 
Environment=\"OPENSHIFT_EVICTION_MONITORING_PERIOD_DURATION=30s\" name: 30-kubelet-interval-tuning.conf name: kubelet.service",
"Automatically generated by extra-manifests-builder Do not make changes directly. apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: container-mount-namespace-and-kubelet-conf-worker spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,IyEvYmluL2Jhc2gKCmRlYnVnKCkgewogIGVjaG8gJEAgPiYyCn0KCnVzYWdlKCkgewogIGVjaG8gVXNhZ2U6ICQoYmFzZW5hbWUgJDApIFVOSVQgW2VudmZpbGUgW3Zhcm5hbWVdXQogIGVjaG8KICBlY2hvIEV4dHJhY3QgdGhlIGNvbnRlbnRzIG9mIHRoZSBmaXJzdCBFeGVjU3RhcnQgc3RhbnphIGZyb20gdGhlIGdpdmVuIHN5c3RlbWQgdW5pdCBhbmQgcmV0dXJuIGl0IHRvIHN0ZG91dAogIGVjaG8KICBlY2hvICJJZiAnZW52ZmlsZScgaXMgcHJvdmlkZWQsIHB1dCBpdCBpbiB0aGVyZSBpbnN0ZWFkLCBhcyBhbiBlbnZpcm9ubWVudCB2YXJpYWJsZSBuYW1lZCAndmFybmFtZSciCiAgZWNobyAiRGVmYXVsdCAndmFybmFtZScgaXMgRVhFQ1NUQVJUIGlmIG5vdCBzcGVjaWZpZWQiCiAgZXhpdCAxCn0KClVOSVQ9JDEKRU5WRklMRT0kMgpWQVJOQU1FPSQzCmlmIFtbIC16ICRVTklUIHx8ICRVTklUID09ICItLWhlbHAiIHx8ICRVTklUID09ICItaCIgXV07IHRoZW4KICB1c2FnZQpmaQpkZWJ1ZyAiRXh0cmFjdGluZyBFeGVjU3RhcnQgZnJvbSAkVU5JVCIKRklMRT0kKHN5c3RlbWN0bCBjYXQgJFVOSVQgfCBoZWFkIC1uIDEpCkZJTEU9JHtGSUxFI1wjIH0KaWYgW1sgISAtZiAkRklMRSBdXTsgdGhlbgogIGRlYnVnICJGYWlsZWQgdG8gZmluZCByb290IGZpbGUgZm9yIHVuaXQgJFVOSVQgKCRGSUxFKSIKICBleGl0CmZpCmRlYnVnICJTZXJ2aWNlIGRlZmluaXRpb24gaXMgaW4gJEZJTEUiCkVYRUNTVEFSVD0kKHNlZCAtbiAtZSAnL15FeGVjU3RhcnQ9LipcXCQvLC9bXlxcXSQvIHsgcy9eRXhlY1N0YXJ0PS8vOyBwIH0nIC1lICcvXkV4ZWNTdGFydD0uKlteXFxdJC8geyBzL15FeGVjU3RhcnQ9Ly87IHAgfScgJEZJTEUpCgppZiBbWyAkRU5WRklMRSBdXTsgdGhlbgogIFZBUk5BTUU9JHtWQVJOQU1FOi1FWEVDU1RBUlR9CiAgZWNobyAiJHtWQVJOQU1FfT0ke0VYRUNTVEFSVH0iID4gJEVOVkZJTEUKZWxzZQogIGVjaG8gJEVYRUNTVEFSVApmaQo= mode: 493 path: /usr/local/bin/extractExecStart - contents: source: data:text/plain;charset=utf-8;base64,IyEvYmluL2Jhc2gKbnNlbnRlciAtLW1vdW50PS9ydW4vY29udGFpbmVyLW1vdW50LW5hbWVzcGFjZS9tbnQgIiRAIgo= mode: 493 path: /usr/local/bin/nsenterCmns systemd: units: - contents: | [Unit] Description=Manages a mount namespace that both kubelet and crio can use to share their container-specific mounts [Service] Type=oneshot RemainAfterExit=yes RuntimeDirectory=container-mount-namespace Environment=RUNTIME_DIRECTORY=%t/container-mount-namespace Environment=BIND_POINT=%t/container-mount-namespace/mnt ExecStartPre=bash -c \"findmnt USD{RUNTIME_DIRECTORY} || mount --make-unbindable --bind USD{RUNTIME_DIRECTORY} USD{RUNTIME_DIRECTORY}\" ExecStartPre=touch USD{BIND_POINT} ExecStart=unshare --mount=USD{BIND_POINT} --propagation slave mount --make-rshared / ExecStop=umount -R USD{RUNTIME_DIRECTORY} name: container-mount-namespace.service - dropins: - contents: | [Unit] Wants=container-mount-namespace.service After=container-mount-namespace.service [Service] ExecStartPre=/usr/local/bin/extractExecStart %n /%t/%N-execstart.env ORIG_EXECSTART EnvironmentFile=-/%t/%N-execstart.env ExecStart= ExecStart=bash -c \"nsenter --mount=%t/container-mount-namespace/mnt USD{ORIG_EXECSTART}\" name: 90-container-mount-namespace.conf name: crio.service - dropins: - contents: | [Unit] Wants=container-mount-namespace.service After=container-mount-namespace.service [Service] ExecStartPre=/usr/local/bin/extractExecStart %n /%t/%N-execstart.env ORIG_EXECSTART EnvironmentFile=-/%t/%N-execstart.env ExecStart= ExecStart=bash -c \"nsenter --mount=%t/container-mount-namespace/mnt USD{ORIG_EXECSTART} --housekeeping-interval=30s\" name: 90-container-mount-namespace.conf - contents: | [Service] Environment=\"OPENSHIFT_MAX_HOUSEKEEPING_INTERVAL_DURATION=60s\" 
Environment=\"OPENSHIFT_EVICTION_MONITORING_PERIOD_DURATION=30s\" name: 30-kubelet-interval-tuning.conf name: kubelet.service",
"Automatically generated by extra-manifests-builder Do not make changes directly. apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 99-sync-time-once-master spec: config: ignition: version: 3.2.0 systemd: units: - contents: | [Unit] Description=Sync time once After=network-online.target Wants=network-online.target [Service] Type=oneshot TimeoutStartSec=300 ExecCondition=/bin/bash -c 'systemctl is-enabled chronyd.service --quiet && exit 1 || exit 0' ExecStart=/usr/sbin/chronyd -n -f /etc/chrony.conf -q RemainAfterExit=yes [Install] WantedBy=multi-user.target enabled: true name: sync-time-once.service",
"Automatically generated by extra-manifests-builder Do not make changes directly. apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 99-sync-time-once-worker spec: config: ignition: version: 3.2.0 systemd: units: - contents: | [Unit] Description=Sync time once After=network-online.target Wants=network-online.target [Service] Type=oneshot TimeoutStartSec=300 ExecCondition=/bin/bash -c 'systemctl is-enabled chronyd.service --quiet && exit 1 || exit 0' ExecStart=/usr/sbin/chronyd -n -f /etc/chrony.conf -q RemainAfterExit=yes [Install] WantedBy=multi-user.target enabled: true name: sync-time-once.service",
"Automatically generated by extra-manifests-builder Do not make changes directly. apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: load-sctp-module-master spec: config: ignition: version: 2.2.0 storage: files: - contents: source: data:, verification: {} filesystem: root mode: 420 path: /etc/modprobe.d/sctp-blacklist.conf - contents: source: data:text/plain;charset=utf-8,sctp filesystem: root mode: 420 path: /etc/modules-load.d/sctp-load.conf",
"Automatically generated by extra-manifests-builder Do not make changes directly. apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: load-sctp-module-worker spec: config: ignition: version: 2.2.0 storage: files: - contents: source: data:, verification: {} filesystem: root mode: 420 path: /etc/modprobe.d/sctp-blacklist.conf - contents: source: data:text/plain;charset=utf-8,sctp filesystem: root mode: 420 path: /etc/modules-load.d/sctp-load.conf",
"Automatically generated by extra-manifests-builder Do not make changes directly. apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 08-set-rcu-normal-master spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,IyEvYmluL2Jhc2gKIwojIERpc2FibGUgcmN1X2V4cGVkaXRlZCBhZnRlciBub2RlIGhhcyBmaW5pc2hlZCBib290aW5nCiMKIyBUaGUgZGVmYXVsdHMgYmVsb3cgY2FuIGJlIG92ZXJyaWRkZW4gdmlhIGVudmlyb25tZW50IHZhcmlhYmxlcwojCgojIERlZmF1bHQgd2FpdCB0aW1lIGlzIDYwMHMgPSAxMG06Ck1BWElNVU1fV0FJVF9USU1FPSR7TUFYSU1VTV9XQUlUX1RJTUU6LTYwMH0KCiMgRGVmYXVsdCBzdGVhZHktc3RhdGUgdGhyZXNob2xkID0gMiUKIyBBbGxvd2VkIHZhbHVlczoKIyAgNCAgLSBhYnNvbHV0ZSBwb2QgY291bnQgKCsvLSkKIyAgNCUgLSBwZXJjZW50IGNoYW5nZSAoKy8tKQojICAtMSAtIGRpc2FibGUgdGhlIHN0ZWFkeS1zdGF0ZSBjaGVjawpTVEVBRFlfU1RBVEVfVEhSRVNIT0xEPSR7U1RFQURZX1NUQVRFX1RIUkVTSE9MRDotMiV9CgojIERlZmF1bHQgc3RlYWR5LXN0YXRlIHdpbmRvdyA9IDYwcwojIElmIHRoZSBydW5uaW5nIHBvZCBjb3VudCBzdGF5cyB3aXRoaW4gdGhlIGdpdmVuIHRocmVzaG9sZCBmb3IgdGhpcyB0aW1lCiMgcGVyaW9kLCByZXR1cm4gQ1BVIHV0aWxpemF0aW9uIHRvIG5vcm1hbCBiZWZvcmUgdGhlIG1heGltdW0gd2FpdCB0aW1lIGhhcwojIGV4cGlyZXMKU1RFQURZX1NUQVRFX1dJTkRPVz0ke1NURUFEWV9TVEFURV9XSU5ET1c6LTYwfQoKIyBEZWZhdWx0IHN0ZWFkeS1zdGF0ZSBhbGxvd3MgYW55IHBvZCBjb3VudCB0byBiZSAic3RlYWR5IHN0YXRlIgojIEluY3JlYXNpbmcgdGhpcyB3aWxsIHNraXAgYW55IHN0ZWFkeS1zdGF0ZSBjaGVja3MgdW50aWwgdGhlIGNvdW50IHJpc2VzIGFib3ZlCiMgdGhpcyBudW1iZXIgdG8gYXZvaWQgZmFsc2UgcG9zaXRpdmVzIGlmIHRoZXJlIGFyZSBzb21lIHBlcmlvZHMgd2hlcmUgdGhlCiMgY291bnQgZG9lc24ndCBpbmNyZWFzZSBidXQgd2Uga25vdyB3ZSBjYW4ndCBiZSBhdCBzdGVhZHktc3RhdGUgeWV0LgpTVEVBRFlfU1RBVEVfTUlOSU1VTT0ke1NURUFEWV9TVEFURV9NSU5JTVVNOi0wfQoKIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIwoKd2l0aGluKCkgewogIGxvY2FsIGxhc3Q9JDEgY3VycmVudD0kMiB0aHJlc2hvbGQ9JDMKICBsb2NhbCBkZWx0YT0wIHBjaGFuZ2UKICBkZWx0YT0kKCggY3VycmVudCAtIGxhc3QgKSkKICBpZiBbWyAkY3VycmVudCAtZXEgJGxhc3QgXV07IHRoZW4KICAgIHBjaGFuZ2U9MAogIGVsaWYgW1sgJGxhc3QgLWVxIDAgXV07IHRoZW4KICAgIHBjaGFuZ2U9MTAwMDAwMAogIGVsc2UKICAgIHBjaGFuZ2U9JCgoICggIiRkZWx0YSIgKiAxMDApIC8gbGFzdCApKQogIGZpCiAgZWNobyAtbiAibGFzdDokbGFzdCBjdXJyZW50OiRjdXJyZW50IGRlbHRhOiRkZWx0YSBwY2hhbmdlOiR7cGNoYW5nZX0lOiAiCiAgbG9jYWwgYWJzb2x1dGUgbGltaXQKICBjYXNlICR0aHJlc2hvbGQgaW4KICAgIColKQogICAgICBhYnNvbHV0ZT0ke3BjaGFuZ2UjIy19ICMgYWJzb2x1dGUgdmFsdWUKICAgICAgbGltaXQ9JHt0aHJlc2hvbGQlJSV9CiAgICAgIDs7CiAgICAqKQogICAgICBhYnNvbHV0ZT0ke2RlbHRhIyMtfSAjIGFic29sdXRlIHZhbHVlCiAgICAgIGxpbWl0PSR0aHJlc2hvbGQKICAgICAgOzsKICBlc2FjCiAgaWYgW1sgJGFic29sdXRlIC1sZSAkbGltaXQgXV07IHRoZW4KICAgIGVjaG8gIndpdGhpbiAoKy8tKSR0aHJlc2hvbGQiCiAgICByZXR1cm4gMAogIGVsc2UKICAgIGVjaG8gIm91dHNpZGUgKCsvLSkkdGhyZXNob2xkIgogICAgcmV0dXJuIDEKICBmaQp9CgpzdGVhZHlzdGF0ZSgpIHsKICBsb2NhbCBsYXN0PSQxIGN1cnJlbnQ9JDIKICBpZiBbWyAkbGFzdCAtbHQgJFNURUFEWV9TVEFURV9NSU5JTVVNIF1dOyB0aGVuCiAgICBlY2hvICJsYXN0OiRsYXN0IGN1cnJlbnQ6JGN1cnJlbnQgV2FpdGluZyB0byByZWFjaCAkU1RFQURZX1NUQVRFX01JTklNVU0gYmVmb3JlIGNoZWNraW5nIGZvciBzdGVhZHktc3RhdGUiCiAgICByZXR1cm4gMQogIGZpCiAgd2l0aGluICIkbGFzdCIgIiRjdXJyZW50IiAiJFNURUFEWV9TVEFURV9USFJFU0hPTEQiCn0KCndhaXRGb3JSZWFkeSgpIHsKICBsb2dnZXIgIlJlY292ZXJ5OiBXYWl0aW5nICR7TUFYSU1VTV9XQUlUX1RJTUV9cyBmb3IgdGhlIGluaXRpYWxpemF0aW9uIHRvIGNvbXBsZXRlIgogIGxvY2FsIHQ9MCBzPTEwCiAgbG9jYWwgbGFzdENjb3VudD0wIGNjb3VudD0wIHN0ZWFkeVN0YXRlVGltZT0wCiAgd2hpbGUgW1sgJHQgLWx0ICRNQVhJTVVNX1dBSVRfVElNRSBdXTsgZG8KICAgIHNsZWVwICRzCiAgICAoKHQgKz0gcykpCiAgICAjIERldGVjdCBzdGVhZHktc3RhdGUgcG9kIGNvdW50CiAgICBjY291bnQ9JChjcmljdGwgcHMgMj4vZGV2L251bGwgfCB3YyAtbCkKICAgIGlmIFtbICRjY291bnQg
LWd0IDAgXV0gJiYgc3RlYWR5c3RhdGUgIiRsYXN0Q2NvdW50IiAiJGNjb3VudCI7IHRoZW4KICAgICAgKChzdGVhZHlTdGF0ZVRpbWUgKz0gcykpCiAgICAgIGVjaG8gIlN0ZWFkeS1zdGF0ZSBmb3IgJHtzdGVhZHlTdGF0ZVRpbWV9cy8ke1NURUFEWV9TVEFURV9XSU5ET1d9cyIKICAgICAgaWYgW1sgJHN0ZWFkeVN0YXRlVGltZSAtZ2UgJFNURUFEWV9TVEFURV9XSU5ET1cgXV07IHRoZW4KICAgICAgICBsb2dnZXIgIlJlY292ZXJ5OiBTdGVhZHktc3RhdGUgKCsvLSAkU1RFQURZX1NUQVRFX1RIUkVTSE9MRCkgZm9yICR7U1RFQURZX1NUQVRFX1dJTkRPV31zOiBEb25lIgogICAgICAgIHJldHVybiAwCiAgICAgIGZpCiAgICBlbHNlCiAgICAgIGlmIFtbICRzdGVhZHlTdGF0ZVRpbWUgLWd0IDAgXV07IHRoZW4KICAgICAgICBlY2hvICJSZXNldHRpbmcgc3RlYWR5LXN0YXRlIHRpbWVyIgogICAgICAgIHN0ZWFkeVN0YXRlVGltZT0wCiAgICAgIGZpCiAgICBmaQogICAgbGFzdENjb3VudD0kY2NvdW50CiAgZG9uZQogIGxvZ2dlciAiUmVjb3Zlcnk6IFJlY292ZXJ5IENvbXBsZXRlIFRpbWVvdXQiCn0KCnNldFJjdU5vcm1hbCgpIHsKICBlY2hvICJTZXR0aW5nIHJjdV9ub3JtYWwgdG8gMSIKICBlY2hvIDEgPiAvc3lzL2tlcm5lbC9yY3Vfbm9ybWFsCn0KCm1haW4oKSB7CiAgd2FpdEZvclJlYWR5CiAgZWNobyAiV2FpdGluZyBmb3Igc3RlYWR5IHN0YXRlIHRvb2s6ICQoYXdrICd7cHJpbnQgaW50KCQxLzM2MDApImgiLCBpbnQoKCQxJTM2MDApLzYwKSJtIiwgaW50KCQxJTYwKSJzIn0nIC9wcm9jL3VwdGltZSkiCiAgc2V0UmN1Tm9ybWFsCn0KCmlmIFtbICIke0JBU0hfU09VUkNFWzBdfSIgPSAiJHswfSIgXV07IHRoZW4KICBtYWluICIke0B9IgogIGV4aXQgJD8KZmkK mode: 493 path: /usr/local/bin/set-rcu-normal.sh systemd: units: - contents: | [Unit] Description=Disable rcu_expedited after node has finished booting by setting rcu_normal to 1 [Service] Type=simple ExecStart=/usr/local/bin/set-rcu-normal.sh # Maximum wait time is 600s = 10m: Environment=MAXIMUM_WAIT_TIME=600 # Steady-state threshold = 2% # Allowed values: # 4 - absolute pod count (+/-) # 4% - percent change (+/-) # -1 - disable the steady-state check # Note: '%' must be escaped as '%%' in systemd unit files Environment=STEADY_STATE_THRESHOLD=2%% # Steady-state window = 120s # If the running pod count stays within the given threshold for this time # period, return CPU utilization to normal before the maximum wait time has # expires Environment=STEADY_STATE_WINDOW=120 # Steady-state minimum = 40 # Increasing this will skip any steady-state checks until the count rises above # this number to avoid false positives if there are some periods where the # count doesn't increase but we know we can't be at steady-state yet. Environment=STEADY_STATE_MINIMUM=40 [Install] WantedBy=multi-user.target enabled: true name: set-rcu-normal.service",
"Automatically generated by extra-manifests-builder Do not make changes directly. apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 08-set-rcu-normal-worker spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,IyEvYmluL2Jhc2gKIwojIERpc2FibGUgcmN1X2V4cGVkaXRlZCBhZnRlciBub2RlIGhhcyBmaW5pc2hlZCBib290aW5nCiMKIyBUaGUgZGVmYXVsdHMgYmVsb3cgY2FuIGJlIG92ZXJyaWRkZW4gdmlhIGVudmlyb25tZW50IHZhcmlhYmxlcwojCgojIERlZmF1bHQgd2FpdCB0aW1lIGlzIDYwMHMgPSAxMG06Ck1BWElNVU1fV0FJVF9USU1FPSR7TUFYSU1VTV9XQUlUX1RJTUU6LTYwMH0KCiMgRGVmYXVsdCBzdGVhZHktc3RhdGUgdGhyZXNob2xkID0gMiUKIyBBbGxvd2VkIHZhbHVlczoKIyAgNCAgLSBhYnNvbHV0ZSBwb2QgY291bnQgKCsvLSkKIyAgNCUgLSBwZXJjZW50IGNoYW5nZSAoKy8tKQojICAtMSAtIGRpc2FibGUgdGhlIHN0ZWFkeS1zdGF0ZSBjaGVjawpTVEVBRFlfU1RBVEVfVEhSRVNIT0xEPSR7U1RFQURZX1NUQVRFX1RIUkVTSE9MRDotMiV9CgojIERlZmF1bHQgc3RlYWR5LXN0YXRlIHdpbmRvdyA9IDYwcwojIElmIHRoZSBydW5uaW5nIHBvZCBjb3VudCBzdGF5cyB3aXRoaW4gdGhlIGdpdmVuIHRocmVzaG9sZCBmb3IgdGhpcyB0aW1lCiMgcGVyaW9kLCByZXR1cm4gQ1BVIHV0aWxpemF0aW9uIHRvIG5vcm1hbCBiZWZvcmUgdGhlIG1heGltdW0gd2FpdCB0aW1lIGhhcwojIGV4cGlyZXMKU1RFQURZX1NUQVRFX1dJTkRPVz0ke1NURUFEWV9TVEFURV9XSU5ET1c6LTYwfQoKIyBEZWZhdWx0IHN0ZWFkeS1zdGF0ZSBhbGxvd3MgYW55IHBvZCBjb3VudCB0byBiZSAic3RlYWR5IHN0YXRlIgojIEluY3JlYXNpbmcgdGhpcyB3aWxsIHNraXAgYW55IHN0ZWFkeS1zdGF0ZSBjaGVja3MgdW50aWwgdGhlIGNvdW50IHJpc2VzIGFib3ZlCiMgdGhpcyBudW1iZXIgdG8gYXZvaWQgZmFsc2UgcG9zaXRpdmVzIGlmIHRoZXJlIGFyZSBzb21lIHBlcmlvZHMgd2hlcmUgdGhlCiMgY291bnQgZG9lc24ndCBpbmNyZWFzZSBidXQgd2Uga25vdyB3ZSBjYW4ndCBiZSBhdCBzdGVhZHktc3RhdGUgeWV0LgpTVEVBRFlfU1RBVEVfTUlOSU1VTT0ke1NURUFEWV9TVEFURV9NSU5JTVVNOi0wfQoKIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIwoKd2l0aGluKCkgewogIGxvY2FsIGxhc3Q9JDEgY3VycmVudD0kMiB0aHJlc2hvbGQ9JDMKICBsb2NhbCBkZWx0YT0wIHBjaGFuZ2UKICBkZWx0YT0kKCggY3VycmVudCAtIGxhc3QgKSkKICBpZiBbWyAkY3VycmVudCAtZXEgJGxhc3QgXV07IHRoZW4KICAgIHBjaGFuZ2U9MAogIGVsaWYgW1sgJGxhc3QgLWVxIDAgXV07IHRoZW4KICAgIHBjaGFuZ2U9MTAwMDAwMAogIGVsc2UKICAgIHBjaGFuZ2U9JCgoICggIiRkZWx0YSIgKiAxMDApIC8gbGFzdCApKQogIGZpCiAgZWNobyAtbiAibGFzdDokbGFzdCBjdXJyZW50OiRjdXJyZW50IGRlbHRhOiRkZWx0YSBwY2hhbmdlOiR7cGNoYW5nZX0lOiAiCiAgbG9jYWwgYWJzb2x1dGUgbGltaXQKICBjYXNlICR0aHJlc2hvbGQgaW4KICAgIColKQogICAgICBhYnNvbHV0ZT0ke3BjaGFuZ2UjIy19ICMgYWJzb2x1dGUgdmFsdWUKICAgICAgbGltaXQ9JHt0aHJlc2hvbGQlJSV9CiAgICAgIDs7CiAgICAqKQogICAgICBhYnNvbHV0ZT0ke2RlbHRhIyMtfSAjIGFic29sdXRlIHZhbHVlCiAgICAgIGxpbWl0PSR0aHJlc2hvbGQKICAgICAgOzsKICBlc2FjCiAgaWYgW1sgJGFic29sdXRlIC1sZSAkbGltaXQgXV07IHRoZW4KICAgIGVjaG8gIndpdGhpbiAoKy8tKSR0aHJlc2hvbGQiCiAgICByZXR1cm4gMAogIGVsc2UKICAgIGVjaG8gIm91dHNpZGUgKCsvLSkkdGhyZXNob2xkIgogICAgcmV0dXJuIDEKICBmaQp9CgpzdGVhZHlzdGF0ZSgpIHsKICBsb2NhbCBsYXN0PSQxIGN1cnJlbnQ9JDIKICBpZiBbWyAkbGFzdCAtbHQgJFNURUFEWV9TVEFURV9NSU5JTVVNIF1dOyB0aGVuCiAgICBlY2hvICJsYXN0OiRsYXN0IGN1cnJlbnQ6JGN1cnJlbnQgV2FpdGluZyB0byByZWFjaCAkU1RFQURZX1NUQVRFX01JTklNVU0gYmVmb3JlIGNoZWNraW5nIGZvciBzdGVhZHktc3RhdGUiCiAgICByZXR1cm4gMQogIGZpCiAgd2l0aGluICIkbGFzdCIgIiRjdXJyZW50IiAiJFNURUFEWV9TVEFURV9USFJFU0hPTEQiCn0KCndhaXRGb3JSZWFkeSgpIHsKICBsb2dnZXIgIlJlY292ZXJ5OiBXYWl0aW5nICR7TUFYSU1VTV9XQUlUX1RJTUV9cyBmb3IgdGhlIGluaXRpYWxpemF0aW9uIHRvIGNvbXBsZXRlIgogIGxvY2FsIHQ9MCBzPTEwCiAgbG9jYWwgbGFzdENjb3VudD0wIGNjb3VudD0wIHN0ZWFkeVN0YXRlVGltZT0wCiAgd2hpbGUgW1sgJHQgLWx0ICRNQVhJTVVNX1dBSVRfVElNRSBdXTsgZG8KICAgIHNsZWVwICRzCiAgICAoKHQgKz0gcykpCiAgICAjIERldGVjdCBzdGVhZHktc3RhdGUgcG9kIGNvdW50CiAgICBjY291bnQ9JChjcmljdGwgcHMgMj4vZGV2L251bGwgfCB3YyAtbCkKICAgIGlmIFtbICRjY291bnQg
LWd0IDAgXV0gJiYgc3RlYWR5c3RhdGUgIiRsYXN0Q2NvdW50IiAiJGNjb3VudCI7IHRoZW4KICAgICAgKChzdGVhZHlTdGF0ZVRpbWUgKz0gcykpCiAgICAgIGVjaG8gIlN0ZWFkeS1zdGF0ZSBmb3IgJHtzdGVhZHlTdGF0ZVRpbWV9cy8ke1NURUFEWV9TVEFURV9XSU5ET1d9cyIKICAgICAgaWYgW1sgJHN0ZWFkeVN0YXRlVGltZSAtZ2UgJFNURUFEWV9TVEFURV9XSU5ET1cgXV07IHRoZW4KICAgICAgICBsb2dnZXIgIlJlY292ZXJ5OiBTdGVhZHktc3RhdGUgKCsvLSAkU1RFQURZX1NUQVRFX1RIUkVTSE9MRCkgZm9yICR7U1RFQURZX1NUQVRFX1dJTkRPV31zOiBEb25lIgogICAgICAgIHJldHVybiAwCiAgICAgIGZpCiAgICBlbHNlCiAgICAgIGlmIFtbICRzdGVhZHlTdGF0ZVRpbWUgLWd0IDAgXV07IHRoZW4KICAgICAgICBlY2hvICJSZXNldHRpbmcgc3RlYWR5LXN0YXRlIHRpbWVyIgogICAgICAgIHN0ZWFkeVN0YXRlVGltZT0wCiAgICAgIGZpCiAgICBmaQogICAgbGFzdENjb3VudD0kY2NvdW50CiAgZG9uZQogIGxvZ2dlciAiUmVjb3Zlcnk6IFJlY292ZXJ5IENvbXBsZXRlIFRpbWVvdXQiCn0KCnNldFJjdU5vcm1hbCgpIHsKICBlY2hvICJTZXR0aW5nIHJjdV9ub3JtYWwgdG8gMSIKICBlY2hvIDEgPiAvc3lzL2tlcm5lbC9yY3Vfbm9ybWFsCn0KCm1haW4oKSB7CiAgd2FpdEZvclJlYWR5CiAgZWNobyAiV2FpdGluZyBmb3Igc3RlYWR5IHN0YXRlIHRvb2s6ICQoYXdrICd7cHJpbnQgaW50KCQxLzM2MDApImgiLCBpbnQoKCQxJTM2MDApLzYwKSJtIiwgaW50KCQxJTYwKSJzIn0nIC9wcm9jL3VwdGltZSkiCiAgc2V0UmN1Tm9ybWFsCn0KCmlmIFtbICIke0JBU0hfU09VUkNFWzBdfSIgPSAiJHswfSIgXV07IHRoZW4KICBtYWluICIke0B9IgogIGV4aXQgJD8KZmkK mode: 493 path: /usr/local/bin/set-rcu-normal.sh systemd: units: - contents: | [Unit] Description=Disable rcu_expedited after node has finished booting by setting rcu_normal to 1 [Service] Type=simple ExecStart=/usr/local/bin/set-rcu-normal.sh # Maximum wait time is 600s = 10m: Environment=MAXIMUM_WAIT_TIME=600 # Steady-state threshold = 2% # Allowed values: # 4 - absolute pod count (+/-) # 4% - percent change (+/-) # -1 - disable the steady-state check # Note: '%' must be escaped as '%%' in systemd unit files Environment=STEADY_STATE_THRESHOLD=2%% # Steady-state window = 120s # If the running pod count stays within the given threshold for this time # period, return CPU utilization to normal before the maximum wait time has # expires Environment=STEADY_STATE_WINDOW=120 # Steady-state minimum = 40 # Increasing this will skip any steady-state checks until the count rises above # this number to avoid false positives if there are some periods where the # count doesn't increase but we know we can't be at steady-state yet. Environment=STEADY_STATE_MINIMUM=40 [Install] WantedBy=multi-user.target enabled: true name: set-rcu-normal.service",
"Automatically generated by extra-manifests-builder Do not make changes directly. apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 07-sriov-related-kernel-args-master spec: config: ignition: version: 3.2.0 kernelArguments: - intel_iommu=on - iommu=pt",
"Automatically generated by extra-manifests-builder Do not make changes directly. apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 07-sriov-related-kernel-args-worker spec: config: ignition: version: 3.2.0 kernelArguments: - intel_iommu=on - iommu=pt",
"cpu-load-balancing.crio.io: \"disable\" cpu-quota.crio.io: \"disable\" irq-load-balancing.crio.io: \"disable\"",
"cpu-c-states.crio.io: \"disable\" cpu-freq-governor.crio.io: \"performance\"",
"mkdir -p ./out",
"podman run -it registry.redhat.io/openshift4/openshift-telco-core-rds-rhel9:v4.17 | base64 -d | tar xv -C out",
"out/ βββ telco-core-rds βββ configuration β βββ reference-crs β βββ optional β β βββ logging β β βββ networking β β β βββ multus β β β βββ tap_cni β β βββ other β β βββ tuning β βββ required β βββ networking β β βββ metallb β β βββ multinetworkpolicy β β βββ sriov β βββ other β βββ performance β βββ scheduling β βββ storage β βββ odf-external βββ install",
"required count: 1 apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: gatewayConfig: routingViaHost: true # additional networks are optional and may alternatively be specified using NetworkAttachmentDefinition CRs additionalNetworks: [USDadditionalNetworks] # eg #- name: add-net-1 # namespace: app-ns-1 # rawCNIConfig: '{ \"cniVersion\": \"0.3.1\", \"name\": \"add-net-1\", \"plugins\": [{\"type\": \"macvlan\", \"master\": \"bond1\", \"ipam\": {}}] }' # type: Raw #- name: add-net-2 # namespace: app-ns-1 # rawCNIConfig: '{ \"cniVersion\": \"0.4.0\", \"name\": \"add-net-2\", \"plugins\": [ {\"type\": \"macvlan\", \"master\": \"bond1\", \"mode\": \"private\" },{ \"type\": \"tuning\", \"name\": \"tuning-arp\" }] }' # type: Raw # Enable to use MultiNetworkPolicy CRs useMultiNetworkPolicy: true",
"optional copies: 0-N apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: USDname namespace: USDns spec: nodeSelector: kubernetes.io/hostname: USDnodeName config: USDconfig #eg #config: '{ # \"cniVersion\": \"0.3.1\", # \"name\": \"external-169\", # \"type\": \"vlan\", # \"master\": \"ens8f0\", # \"mode\": \"bridge\", # \"vlanid\": 169, # \"ipam\": { # \"type\": \"static\", # } #}'",
"required count: 1-N apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: name: USDname # eg addresspool3 namespace: metallb-system spec: ############## # Expected variation in this configuration addresses: [USDpools] #- 3.3.3.0/24 autoAssign: true ##############",
"required count: 1-N apiVersion: metallb.io/v1beta1 kind: BFDProfile metadata: name: USDname # e.g. bfdprofile namespace: metallb-system spec: ################ # These values may vary. Recommended values are included as default receiveInterval: 150 # default 300ms transmitInterval: 150 # default 300ms #echoInterval: 300 # default 50ms detectMultiplier: 10 # default 3 echoMode: true passiveMode: true minimumTtl: 5 # default 254 # ################",
"required count: 1-N apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: USDname # eg bgpadvertisement-1 namespace: metallb-system spec: ipAddressPools: [USDpool] # eg: # - addresspool3 peers: [USDpeers] # eg: # - peer-one # communities: [USDcommunities] # Note correlation with address pool, or Community # eg: # - bgpcommunity # - 65535:65282 aggregationLength: 32 aggregationLengthV6: 128 localPref: 100",
"required count: 1-N apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: name: USDname namespace: metallb-system spec: peerAddress: USDip # eg 192.168.1.2 peerASN: USDpeerasn # eg 64501 myASN: USDmyasn # eg 64500 routerID: USDid # eg 10.10.10.10 bfdProfile: USDbfdprofile # e.g. bfdprofile passwordSecret: {}",
"--- apiVersion: metallb.io/v1beta1 kind: Community metadata: name: USDname # e.g. bgpcommunity namespace: metallb-system spec: communities: [USDcomm]",
"required count: 1 apiVersion: metallb.io/v1beta1 kind: MetalLB metadata: name: metallb namespace: metallb-system spec: {} #nodeSelector: node-role.kubernetes.io/worker: \"\"",
"required: yes count: 1 --- apiVersion: v1 kind: Namespace metadata: name: metallb-system annotations: workload.openshift.io/allowed: management labels: openshift.io/cluster-monitoring: \"true\"",
"required: yes count: 1 --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: metallb-operator namespace: metallb-system",
"required: yes count: 1 --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: metallb-operator-sub namespace: metallb-system spec: channel: stable name: metallb-operator source: redhat-operators-disconnected sourceNamespace: openshift-marketplace installPlanApproval: Automatic status: state: AtLatestKnown",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 99-worker-setsebool spec: config: ignition: version: 3.2.0 systemd: units: - contents: | [Unit] Description=Set SELinux boolean for tap cni plugin Before=kubelet.service [Service] Type=oneshot ExecStart=/sbin/setsebool container_use_devices=on RemainAfterExit=true [Install] WantedBy=multi-user.target graphical.target enabled: true name: setsebool.service",
"apiVersion: nmstate.io/v1 kind: NMState metadata: name: nmstate spec: {}",
"apiVersion: v1 kind: Namespace metadata: name: openshift-nmstate annotations: workload.openshift.io/allowed: management",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-nmstate namespace: openshift-nmstate spec: targetNamespaces: - openshift-nmstate",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: kubernetes-nmstate-operator namespace: openshift-nmstate spec: channel: \"stable\" name: kubernetes-nmstate-operator source: redhat-operators-disconnected sourceNamespace: openshift-marketplace installPlanApproval: Automatic status: state: AtLatestKnown",
"optional (though expected for all) count: 0-N apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: USDname # eg sriov-network-abcd namespace: openshift-sriov-network-operator spec: capabilities: \"USDcapabilities\" # eg '{\"mac\": true, \"ips\": true}' ipam: \"USDipam\" # eg '{ \"type\": \"host-local\", \"subnet\": \"10.3.38.0/24\" }' networkNamespace: USDnns # eg cni-test resourceName: USDresource # eg resourceTest",
"optional (though expected in all deployments) count: 0-N apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: USDname namespace: openshift-sriov-network-operator spec: {} # USDspec eg #deviceType: netdevice #nicSelector: deviceID: \"1593\" pfNames: - ens8f0np0#0-9 rootDevices: - 0000:d8:00.0 vendor: \"8086\" #nodeSelector: kubernetes.io/hostname: host.sample.lab #numVfs: 20 #priority: 99 #excludeTopology: true #resourceName: resourceNameABCD",
"required count: 1 --- apiVersion: sriovnetwork.openshift.io/v1 kind: SriovOperatorConfig metadata: name: default namespace: openshift-sriov-network-operator spec: configDaemonNodeSelector: node-role.kubernetes.io/worker: \"\" enableInjector: true enableOperatorWebhook: true disableDrain: false logLevel: 2",
"required: yes count: 1 apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: sriov-network-operator-subscription namespace: openshift-sriov-network-operator spec: channel: \"stable\" name: sriov-network-operator source: redhat-operators-disconnected sourceNamespace: openshift-marketplace installPlanApproval: Automatic status: state: AtLatestKnown",
"required: yes count: 1 apiVersion: v1 kind: Namespace metadata: name: openshift-sriov-network-operator annotations: workload.openshift.io/allowed: management",
"required: yes count: 1 apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: sriov-network-operators namespace: openshift-sriov-network-operator spec: targetNamespaces: - openshift-sriov-network-operator",
"optional count: 1 apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 40-load-kernel-modules-control-plane spec: config: # Release info found in https://github.com/coreos/butane/releases ignition: version: 3.2.0 storage: files: - contents: source: data:, mode: 420 overwrite: true path: /etc/modprobe.d/kernel-blacklist.conf - contents: source: data:text/plain;charset=utf-8;base64,aXBfZ3JlCmlwNl90YWJsZXMKaXA2dF9SRUpFQ1QKaXA2dGFibGVfZmlsdGVyCmlwNnRhYmxlX21hbmdsZQppcHRhYmxlX2ZpbHRlcgppcHRhYmxlX21hbmdsZQppcHRhYmxlX25hdAp4dF9tdWx0aXBvcnQKeHRfb3duZXIKeHRfUkVESVJFQ1QKeHRfc3RhdGlzdGljCnh0X1RDUE1TUwo= mode: 420 overwrite: true path: /etc/modules-load.d/kernel-load.conf",
"optional count: 1 apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: load-sctp-module spec: config: ignition: version: 2.2.0 storage: files: - contents: source: data:, verification: {} filesystem: root mode: 420 path: /etc/modprobe.d/sctp-blacklist.conf - contents: source: data:text/plain;charset=utf-8;base64,c2N0cA== filesystem: root mode: 420 path: /etc/modules-load.d/sctp-load.conf",
"optional count: 1 apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 40-load-kernel-modules-worker spec: config: # Release info found in https://github.com/coreos/butane/releases ignition: version: 3.2.0 storage: files: - contents: source: data:, mode: 420 overwrite: true path: /etc/modprobe.d/kernel-blacklist.conf - contents: source: data:text/plain;charset=utf-8;base64,aXBfZ3JlCmlwNl90YWJsZXMKaXA2dF9SRUpFQ1QKaXA2dGFibGVfZmlsdGVyCmlwNnRhYmxlX21hbmdsZQppcHRhYmxlX2ZpbHRlcgppcHRhYmxlX21hbmdsZQppcHRhYmxlX25hdAp4dF9tdWx0aXBvcnQKeHRfb3duZXIKeHRfUkVESVJFQ1QKeHRfc3RhdGlzdGljCnh0X1RDUE1TUwo= mode: 420 overwrite: true path: /etc/modules-load.d/kernel-load.conf",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 99-kubens-master spec: config: ignition: version: 3.2.0 systemd: units: - enabled: true name: kubens.service",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 99-kubens-worker spec: config: ignition: version: 3.2.0 systemd: units: - enabled: true name: kubens.service",
"Automatically generated by extra-manifests-builder Do not make changes directly. apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 06-kdump-enable-master spec: config: ignition: version: 3.2.0 systemd: units: - enabled: true name: kdump.service kernelArguments: - crashkernel=512M",
"Automatically generated by extra-manifests-builder Do not make changes directly. apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 06-kdump-enable-worker spec: config: ignition: version: 3.2.0 systemd: units: - enabled: true name: kdump.service kernelArguments: - crashkernel=512M",
"apiVersion: \"observability.openshift.io/v1\" kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: # outputs: USDoutputs # pipelines: USDpipelines serviceAccount: name: collector #apiVersion: \"observability.openshift.io/v1\" #kind: ClusterLogForwarder #metadata: name: instance namespace: openshift-logging spec: outputs: - type: \"kafka\" name: kafka-open # below url is an example kafka: url: tcp://10.11.12.13:9092/test filters: - name: test-labels type: openshiftLabels openshiftLabels: label1: test1 label2: test2 label3: test3 label4: test4 pipelines: - name: all-to-default inputRefs: - audit - infrastructure filterRefs: - test-labels outputRefs: - kafka-open serviceAccount: name: collector",
"--- apiVersion: v1 kind: Namespace metadata: name: openshift-logging annotations: workload.openshift.io/allowed: management",
"--- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: cluster-logging namespace: openshift-logging spec: targetNamespaces: - openshift-logging",
"--- apiVersion: v1 kind: ServiceAccount metadata: name: collector namespace: openshift-logging",
"--- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: logcollector-audit-logs-binding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: collect-audit-logs subjects: - kind: ServiceAccount name: collector namespace: openshift-logging",
"--- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: logcollector-infrastructure-logs-binding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: collect-infrastructure-logs subjects: - kind: ServiceAccount name: collector namespace: openshift-logging",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: cluster-logging namespace: openshift-logging spec: channel: \"stable-6.0\" name: cluster-logging source: redhat-operators-disconnected sourceNamespace: openshift-marketplace installPlanApproval: Automatic status: state: AtLatestKnown",
"required count: 1..N apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: redhat-operators-disconnected namespace: openshift-marketplace spec: displayName: Red Hat Disconnected Operators Catalog image: USDimageUrl publisher: Red Hat sourceType: grpc updateStrategy: registryPoll: interval: 1h status: connectionState: lastObservedState: READY",
"required count: 1 apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: name: disconnected-internal-icsp spec: repositoryDigestMirrors: [] - USDmirrors",
"required count: 1 apiVersion: config.openshift.io/v1 kind: OperatorHub metadata: name: cluster spec: disableAllDefaultSources: true",
"optional count: 1 --- apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: retention: 15d volumeClaimTemplate: spec: storageClassName: ocs-external-storagecluster-ceph-rbd resources: requests: storage: 100Gi alertmanagerMain: volumeClaimTemplate: spec: storageClassName: ocs-external-storagecluster-ceph-rbd resources: requests: storage: 20Gi",
"required count: 1 apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: USDname annotations: # Some pods want the kernel stack to ignore IPv6 router Advertisement. kubeletconfig.experimental: | {\"allowedUnsafeSysctls\":[\"net.ipv6.conf.all.accept_ra\"]} spec: cpu: # node0 CPUs: 0-17,36-53 # node1 CPUs: 18-34,54-71 # siblings: (0,36), (1,37) # we want to reserve the first Core of each NUMA socket # # no CPU left behind! all-cpus == isolated + reserved isolated: USDisolated # eg 1-17,19-35,37-53,55-71 reserved: USDreserved # eg 0,18,36,54 # Guaranteed QoS pods will disable IRQ balancing for cores allocated to the pod. # default value of globallyDisableIrqLoadBalancing is false globallyDisableIrqLoadBalancing: false hugepages: defaultHugepagesSize: 1G pages: # 32GB per numa node - count: USDcount # eg 64 size: 1G #machineConfigPoolSelector: {} # pools.operator.machineconfiguration.openshift.io/worker: '' nodeSelector: {} #node-role.kubernetes.io/worker: \"\" workloadHints: realTime: false highPowerConsumption: false perPodPowerManagement: true realTimeKernel: enabled: false numa: # All guaranteed QoS containers get resources from a single NUMA node topologyPolicy: \"single-numa-node\" net: userLevelNetworking: false",
"optional count: 1 apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: autosizing-master spec: autoSizingReserved: true machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/master: \"\"",
"Optional count: 1 apiVersion: nodetopology.openshift.io/v1 kind: NUMAResourcesOperator metadata: name: numaresourcesoperator spec: nodeGroups: [] #- config: # # Periodic is the default setting # infoRefreshMode: Periodic # machineConfigPoolSelector: # matchLabels: # # This label must match the pool(s) you want to run NUMA-aligned workloads # pools.operator.machineconfiguration.openshift.io/worker: \"\"",
"required count: 1 apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: numaresources-operator namespace: openshift-numaresources spec: channel: \"4.17\" name: numaresources-operator source: redhat-operators-disconnected sourceNamespace: openshift-marketplace status: state: AtLatestKnown",
"required: yes count: 1 apiVersion: v1 kind: Namespace metadata: name: openshift-numaresources annotations: workload.openshift.io/allowed: management",
"required: yes count: 1 apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: numaresources-operator namespace: openshift-numaresources spec: targetNamespaces: - openshift-numaresources",
"Optional count: 1 apiVersion: nodetopology.openshift.io/v1 kind: NUMAResourcesScheduler metadata: name: numaresourcesscheduler spec: #cacheResyncPeriod: \"0\" # Image spec should be the latest for the release imageSpec: \"registry.redhat.io/openshift4/noderesourcetopology-scheduler-rhel9:v4.17.0\" #logLevel: \"Trace\" schedulerName: topo-aware-scheduler",
"apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster spec: # non-schedulable control plane is the default. This ensures # compliance. mastersSchedulable: false policy: name: \"\"",
"required count: 1 --- apiVersion: v1 kind: Secret metadata: name: rook-ceph-external-cluster-details namespace: openshift-storage type: Opaque data: # encoded content has been made generic external_cluster_details: eyJuYW1lIjoicm9vay1jZXBoLW1vbi1lbmRwb2ludHMiLCJraW5kIjoiQ29uZmlnTWFwIiwiZGF0YSI6eyJkYXRhIjoiY2VwaHVzYTE9MS4yLjMuNDo2Nzg5IiwibWF4TW9uSWQiOiIwIiwibWFwcGluZyI6Int9In19LHsibmFtZSI6InJvb2stY2VwaC1tb24iLCJraW5kIjoiU2VjcmV0IiwiZGF0YSI6eyJhZG1pbi1zZWNyZXQiOiJhZG1pbi1zZWNyZXQiLCJmc2lkIjoiMTExMTExMTEtMTExMS0xMTExLTExMTEtMTExMTExMTExMTExIiwibW9uLXNlY3JldCI6Im1vbi1zZWNyZXQifX0seyJuYW1lIjoicm9vay1jZXBoLW9wZXJhdG9yLWNyZWRzIiwia2luZCI6IlNlY3JldCIsImRhdGEiOnsidXNlcklEIjoiY2xpZW50LmhlYWx0aGNoZWNrZXIiLCJ1c2VyS2V5IjoiYzJWamNtVjAifX0seyJuYW1lIjoibW9uaXRvcmluZy1lbmRwb2ludCIsImtpbmQiOiJDZXBoQ2x1c3RlciIsImRhdGEiOnsiTW9uaXRvcmluZ0VuZHBvaW50IjoiMS4yLjMuNCwxLjIuMy4zLDEuMi4zLjIiLCJNb25pdG9yaW5nUG9ydCI6IjkyODMifX0seyJuYW1lIjoiY2VwaC1yYmQiLCJraW5kIjoiU3RvcmFnZUNsYXNzIiwiZGF0YSI6eyJwb29sIjoib2RmX3Bvb2wifX0seyJuYW1lIjoicm9vay1jc2ktcmJkLW5vZGUiLCJraW5kIjoiU2VjcmV0IiwiZGF0YSI6eyJ1c2VySUQiOiJjc2ktcmJkLW5vZGUiLCJ1c2VyS2V5IjoiIn19LHsibmFtZSI6InJvb2stY3NpLXJiZC1wcm92aXNpb25lciIsImtpbmQiOiJTZWNyZXQiLCJkYXRhIjp7InVzZXJJRCI6ImNzaS1yYmQtcHJvdmlzaW9uZXIiLCJ1c2VyS2V5IjoiYzJWamNtVjAifX0seyJuYW1lIjoicm9vay1jc2ktY2VwaGZzLXByb3Zpc2lvbmVyIiwia2luZCI6IlNlY3JldCIsImRhdGEiOnsiYWRtaW5JRCI6ImNzaS1jZXBoZnMtcHJvdmlzaW9uZXIiLCJhZG1pbktleSI6IiJ9fSx7Im5hbWUiOiJyb29rLWNzaS1jZXBoZnMtbm9kZSIsImtpbmQiOiJTZWNyZXQiLCJkYXRhIjp7ImFkbWluSUQiOiJjc2ktY2VwaGZzLW5vZGUiLCJhZG1pbktleSI6ImMyVmpjbVYwIn19LHsibmFtZSI6ImNlcGhmcyIsImtpbmQiOiJTdG9yYWdlQ2xhc3MiLCJkYXRhIjp7ImZzTmFtZSI6ImNlcGhmcyIsInBvb2wiOiJtYW5pbGFfZGF0YSJ9fQ==",
"required count: 1 --- apiVersion: ocs.openshift.io/v1 kind: StorageCluster metadata: name: ocs-external-storagecluster namespace: openshift-storage spec: externalStorage: enable: true labelSelector: {} status: phase: Ready",
"required: yes count: 1 --- apiVersion: v1 kind: Namespace metadata: name: openshift-storage annotations: workload.openshift.io/allowed: management labels: openshift.io/cluster-monitoring: \"true\"",
"required: yes count: 1 --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-storage-operatorgroup namespace: openshift-storage spec: targetNamespaces: - openshift-storage",
"required: yes count: 1 --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: odf-operator namespace: openshift-storage spec: channel: \"stable-4.14\" name: odf-operator source: redhat-operators-disconnected sourceNamespace: openshift-marketplace installPlanApproval: Automatic status: state: AtLatestKnown"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/scalability_and_performance/reference-design-specifications |
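Each Subscription CR in the list above pins status.state: AtLatestKnown as the expected steady state once the Operator has reconciled. A quick spot check after applying the CRs is to read that field back with oc. This is only a sketch, and it assumes the subscription names and namespaces used in the CRs above:
# A Subscription reports AtLatestKnown when the installed CSV matches the newest available version.
$ oc get subscription numaresources-operator -n openshift-numaresources -o jsonpath='{.status.state}{"\n"}'
AtLatestKnown
$ oc get subscription odf-operator -n openshift-storage -o jsonpath='{.status.state}{"\n"}'
AtLatestKnown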
Chapter 2. Eclipse Temurin features | Chapter 2. Eclipse Temurin features Eclipse Temurin does not contain structural changes from the upstream distribution of OpenJDK. For the list of changes and security fixes that the latest OpenJDK 11 release of Eclipse Temurin includes, see OpenJDK 11.0.24 Released .
New features and enhancements
Review the following release notes to understand new features and feature enhancements included with the Eclipse Temurin 11.0.24 release:
DTLS 1.0 is disabled by default
OpenJDK 9 introduced support for both version 1.0 and version 1.2 of the Datagram Transport Layer Security (DTLS) protocol ( JEP-219 ). DTLSv1.0, which is based on TLS 1.1, is no longer recommended for use, because this protocol is considered weak and insecure by modern standards. In OpenJDK 11.0.24, if you attempt to use DTLSv1.0, the JDK throws an SSLHandshakeException by default. If you want to continue using DTLSv1.0, you can remove DTLSv1.0 from the jdk.tls.disabledAlgorithms security property, either by modifying the java.security configuration file or by using the java.security.properties system property.
Note Continued use of DTLSv1.0 is not recommended and is at the user's own risk. See JDK-8256660 (JDK Bug System) .
RPATH preferred over RUNPATH for $ORIGIN runtime search paths in internal JDK binaries
Native executables and libraries in the JDK use embedded runtime search paths (rpaths) to locate required internal JDK native libraries. On Linux systems, binaries can specify these search paths by using either DT_RPATH or DT_RUNPATH :
If a binary specifies search paths by using DT_RPATH , these paths are searched before any paths that are specified in the LD_LIBRARY_PATH environment variable.
If a binary specifies search paths by using DT_RUNPATH , these paths are searched only after paths that are specified in LD_LIBRARY_PATH . This means that the use of DT_RUNPATH can allow JDK internal libraries to be overridden by any libraries of the same name that are specified in LD_LIBRARY_PATH , which is undesirable from a security perspective.
In earlier releases, the type of runtime search path used was based on the default search path for the dynamic linker. In OpenJDK 11.0.24, to ensure that DT_RPATH is used, the --disable-new-dtags option is explicitly passed to the linker. See JDK-8326891 (JDK Bug System) .
GlobalSign R46 and E46 root certificates added
In OpenJDK 11.0.24, the cacerts truststore includes two GlobalSign TLS root certificates:
Certificate 1
Name: GlobalSign
Alias name: globalsignr46
Distinguished name: CN=GlobalSign Root R46, O=GlobalSign nv-sa, C=BE
Certificate 2
Name: GlobalSign
Alias name: globalsigne46
Distinguished name: CN=GlobalSign Root E46, O=GlobalSign nv-sa, C=BE
See JDK-8316138 (JDK Bug System) .
Revised on 2024-08-02 13:32:25 UTC | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_eclipse_temurin_11.0.24/openjdk-temurin-features-11-0-23_openjdk
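The DTLS note above amounts to one security property edit: remove the DTLSv1.0 token from jdk.tls.disabledAlgorithms. A minimal sketch of doing that without editing the JDK's own configuration file follows; the override file name and application jar are hypothetical, and the algorithm list must be copied from your JDK's <java-home>/conf/security/java.security with only DTLSv1.0 deleted (the exact default list varies by update release):
# dtls-override.properties (hypothetical) -- the default jdk.tls.disabledAlgorithms
# value copied from java.security, minus the DTLSv1.0 entry, for example:
jdk.tls.disabledAlgorithms=SSLv3, TLSv1, TLSv1.1, RC4, DES, MD5withRSA, DH keySize < 1024, EC keySize < 224, 3DES_EDE_CBC, anon, NULL
# A single '=' appends this file to (and overrides entries in) the default
# java.security configuration; '==' would replace that configuration entirely.
$ java -Djava.security.properties=dtls-override.properties -jar your-app.jar
As the release note states, continued use of DTLSv1.0 is at the user's own risk.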
Chapter 4. Installing a cluster on vSphere using the Assisted Installer | Chapter 4. Installing a cluster on vSphere using the Assisted Installer You can install OpenShift Container Platform on on-premise hardware or on-premise VMs by using the Assisted Installer. Installing OpenShift Container Platform by using the Assisted Installer supports x86_64 , AArch64 , ppc64le , and s390x CPU architectures. The Assisted Installer is a user-friendly installation solution offered on the Red Hat Hybrid Cloud Console. The Assisted Installer supports various deployment platforms, with a focus on the following infrastructures:
Bare metal
Nutanix
vSphere
4.1. Additional resources Installing OpenShift Container Platform with the Assisted Installer | null | https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.15/html/installing_on_vsphere/installing-vsphere-assisted-installer