5.271. quagga
5.271. quagga 5.271.1. RHSA-2012:1259 - Moderate: quagga security update Updated quagga packages that fix multiple security issues are now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having moderate security impact. Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. Quagga is a TCP/IP based routing software suite. The Quagga bgpd daemon implements the BGP (Border Gateway Protocol) routing protocol. The Quagga ospfd and ospf6d daemons implement the OSPF (Open Shortest Path First) routing protocol. Security Fixes CVE-2011-3327 A heap-based buffer overflow flaw was found in the way the bgpd daemon processed malformed Extended Communities path attributes. An attacker could send a specially-crafted BGP message, causing bgpd on a target system to crash or, possibly, execute arbitrary code with the privileges of the user running bgpd. The UPDATE message would have to arrive from an explicitly configured BGP peer, but could have originated elsewhere in the BGP network. CVE-2011-3323 A stack-based buffer overflow flaw was found in the way the ospf6d daemon processed malformed Link State Update packets. An OSPF router could use this flaw to crash ospf6d on an adjacent router. CVE-2011-3324 A flaw was found in the way the ospf6d daemon processed malformed link state advertisements. An OSPF neighbor could use this flaw to crash ospf6d on a target system. CVE-2011-3325 A flaw was found in the way the ospfd daemon processed malformed Hello packets. An OSPF neighbor could use this flaw to crash ospfd on a target system. CVE-2011-3326 A flaw was found in the way the ospfd daemon processed malformed link state advertisements. An OSPF router in the autonomous system could use this flaw to crash ospfd on a target system. CVE-2012-0249 An assertion failure was found in the way the ospfd daemon processed certain Link State Update packets. An OSPF router could use this flaw to cause ospfd on an adjacent router to abort. CVE-2012-0250 A buffer overflow flaw was found in the way the ospfd daemon processed certain Link State Update packets. An OSPF router could use this flaw to crash ospfd on an adjacent router. CVE-2012-0255 , CVE-2012-1820 Two flaws were found in the way the bgpd daemon processed certain BGP OPEN messages. A configured BGP peer could cause bgpd on a target system to abort via a specially-crafted BGP OPEN message. Red Hat would like to thank CERT-FI for reporting CVE-2011-3327, CVE-2011-3323, CVE-2011-3324, CVE-2011-3325, and CVE-2011-3326; and the CERT/CC for reporting CVE-2012-0249, CVE-2012-0250, CVE-2012-0255, and CVE-2012-1820. CERT-FI acknowledges Riku Hietamaki, Tuomo Untinen and Jukka Taimisto of the Codenomicon CROSS project as the original reporters of CVE-2011-3327, CVE-2011-3323, CVE-2011-3324, CVE-2011-3325, and CVE-2011-3326. The CERT/CC acknowledges Martin Winter at OpenSourceRouting.org as the original reporter of CVE-2012-0249, CVE-2012-0250, and CVE-2012-0255, and Denis Ovsienko as the original reporter of CVE-2012-1820. Users of quagga should upgrade to these updated packages, which contain backported patches to correct these issues. After installing the updated packages, the bgpd, ospfd, and ospf6d daemons will be restarted automatically.
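As a practical footnote, applying the fix on a registered Red Hat Enterprise Linux 6 host comes down to a package update; the commands below are a generic sketch, not text taken from the advisory:

yum update quagga    # per the advisory, bgpd, ospfd, and ospf6d restart automatically after installation
rpm -q quagga        # confirm the updated package version afterwards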
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/quagga
8.8. Configuring nfsexport and nfsserver Resources
8.8. Configuring nfsexport and nfsserver Resources This section describes the issues and considerations to take into account when configuring an nfsexport or an nfsserver resource. The nfsexport resource agent works with NFSv2 and NFSv3 clients. When using nfsexport , you must do the following: Ensure that nfs and nfslock are enabled at boot. Add RPCNFSDARGS="-N 4" to the /etc/sysconfig/nfs file on all cluster nodes. The "-N 4" option prevents NFSv4 clients from being able to connect to the server. Add STATDARG="-H /usr/sbin/clunfslock" to the /etc/sysconfig/nfs file on all cluster nodes. Add nfslock="1" to the service component in the cluster.conf file. Structure your service as follows: The nfsserver resource agent works with NFSv3 and NFSv4 clients. When using nfsserver , you must do the following: Ensure that nfs and nfslock are disabled at boot Ensure that nfslock="1" is not set for the service. Structure your service as follows: When configuring a system to use the nfsserver resource agent for use with NFSv3 and NFSv4, you must account for the following limitations: Configure only one nfsserver resource per cluster. If you require more, you must use restricted failover domains to ensure that the two services in question can never start on the same host. Do not reference a globally-configured nfsserver resource in more than one service. Do not mix old-style NFS services with the new nfsserver in the same cluster. Older NFS services required the NFS daemons to be running; nfsserver requires the daemons to be stopped when the service is started. When using multiple file systems, you will be unable to use inheritance for the exports; thus reuse of nfsclient resources in services with multiple file systems is limited. You may, however, explicitly define target and path attributes for as many nfsclients as you like.
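To make the two /etc/sysconfig/nfs edits described above concrete, the file would carry lines like the following on every cluster node; this is a sketch that assumes no conflicting options are already set in that file:

RPCNFSDARGS="-N 4"                    # refuse NFSv4 client connections, as required for nfsexport
STATDARG="-H /usr/sbin/clunfslock"    # statd hook used for clustered NFS locking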
[ "<service nfslock=\"1\" ... > <fs name=\"myfs\" ... > <nfsexport name=\"exports\"> <nfsclient ref=\"client1\" /> <nfsclient ref=\"client2\" /> </nfsexport> </fs> <ip address=\"10.1.1.2\" /> </service>", "<service ... > <fs name=\"myfs\" ... > <nfsserver name=\"server\"> <nfsclient ref=\"client1\" /> <nfsclient ref=\"client2\" /> <ip address=\"10.1.1.2\" /> </nfsserver> </fs> </service>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s1-config-nfsexport_nfsserver-cli-CA
4.3. Volume Group Administration
4.3. Volume Group Administration This section describes the commands that perform the various aspects of volume group administration. 4.3.1. Creating Volume Groups To create a volume group from one or more physical volumes, use the vgcreate command. The vgcreate command creates a new volume group by name and adds at least one physical volume to it. The following command creates a volume group named vg1 that contains physical volumes /dev/sdd1 and /dev/sde1 . When physical volumes are used to create a volume group, the volume group's disk space is divided into 4MB extents by default. This extent is the minimum amount by which the logical volume may be increased or decreased in size. Large numbers of extents will have no impact on I/O performance of the logical volume. You can specify the extent size with the -s option to the vgcreate command if the default extent size is not suitable. You can put limits on the number of physical or logical volumes the volume group can have by using the -p and -l arguments of the vgcreate command. By default, a volume group allocates physical extents according to common-sense rules such as not placing parallel stripes on the same physical volume. This is the normal allocation policy. You can use the --alloc argument of the vgcreate command to specify an allocation policy of contiguous , anywhere , or cling . The contiguous policy requires that new extents are adjacent to existing extents. If there are sufficient free extents to satisfy an allocation request but a normal allocation policy would not use them, the anywhere allocation policy will, even if that reduces performance by placing two stripes on the same physical volume. The cling policy places new extents on the same physical volume as existing extents in the same stripe of the logical volume. These policies can be changed using the vgchange command. In general, allocation policies other than normal are required only in special cases where you need to specify unusual or nonstandard extent allocation. LVM volume groups and underlying logical volumes are included in the device special file directory tree in the /dev directory with the following layout: For example, if you create two volume groups myvg1 and myvg2 , each with three logical volumes named lv01 , lv02 , and lv03 , this creates six device special files: The maximum device size with LVM is 8 Exabytes on 64-bit CPUs.
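As an illustration of combining the options above, the following hypothetical command creates a volume group with 8MB extents, caps it at 10 physical and 20 logical volumes, and selects the contiguous allocation policy (the device names and limits are examples only):

vgcreate -s 8M -p 10 -l 20 --alloc contiguous vg2 /dev/sdf1 /dev/sdg1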
[ "vgcreate vg1 /dev/sdd1 /dev/sde1", "/dev/ vg / lv /", "/dev/myvg1/lv01 /dev/myvg1/lv02 /dev/myvg1/lv03 /dev/myvg2/lv01 /dev/myvg2/lv02 /dev/myvg2/lv03" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_logical_volume_manager/VG_admin
Chapter 24. Apache HTTP Server Configuration
Chapter 24. Apache HTTP Server Configuration Red Hat Enterprise Linux provides version 2.0 of the Apache HTTP Server. If you want to migrate an existing configuration file by hand, refer to the migration guide at /usr/share/doc/httpd- <ver> /migration.html or the Reference Guide for details. If you configured the Apache HTTP Server with the HTTP Configuration Tool in previous versions of Red Hat Enterprise Linux and then performed an upgrade, you can use the HTTP Configuration Tool to migrate the configuration file to the new format for version 2.0. Start the HTTP Configuration Tool , make any changes to the configuration, and save it. The configuration file saved will be compatible with version 2.0. The httpd and system-config-httpd RPM packages need to be installed to use the HTTP Configuration Tool . It also requires the X Window System and root access. To start the application, go to the Main Menu Button => System Settings => Server Settings => HTTP or type the command system-config-httpd at a shell prompt (for example, in an XTerm or GNOME Terminal). The HTTP Configuration Tool allows you to configure the /etc/httpd/conf/httpd.conf configuration file for the Apache HTTP Server. It does not use the old srm.conf or access.conf configuration files; leave them empty. Through the graphical interface, you can configure directives such as virtual hosts, logging attributes, and maximum number of connections. Only modules provided with Red Hat Enterprise Linux can be configured with the HTTP Configuration Tool . If additional modules are installed, they cannot be configured using this tool. Warning Do not edit the /etc/httpd/conf/httpd.conf configuration file by hand if you wish to use this tool. The HTTP Configuration Tool generates this file after you save your changes and exit the program. If you want to add additional modules or configuration options that are not available in HTTP Configuration Tool , you cannot use this tool. The general steps for configuring the Apache HTTP Server using the HTTP Configuration Tool are as follows: Configure the basic settings under the Main tab. Click on the Virtual Hosts tab and configure the default settings. Under the Virtual Hosts tab, configure the Default Virtual Host. To serve more than one URL or virtual host, add any additional virtual hosts. Configure the server settings under the Server tab. Configure the connection settings under the Performance Tuning tab. Copy all necessary files to the DocumentRoot and cgi-bin directories. Exit the application and select to save your settings. 24.1. Basic Settings Use the Main tab to configure the basic server settings. Figure 24.1. Basic Settings Enter a fully qualified domain name that you have the right to use in the Server Name text area. This option corresponds to the ServerName directive in httpd.conf . The ServerName directive sets the hostname of the Web server. It is used when creating redirection URLs. If you do not define a server name, the Web server attempts to resolve it from the IP address of the system. The server name does not have to be the domain name resolved from the IP address of the server. For example, you might set the server name to www.example.com while the server's real DNS name is foo.example.com. Enter the email address of the person who maintains the Web server in the Webmaster email address text area. This option corresponds to the ServerAdmin directive in httpd.conf . 
If you configure the server's error pages to contain an email address, this email address is used so that users can report a problem to the server's administrator. The default value is root@localhost. Use the Available Addresses area to define the ports on which the server accepts incoming requests. This option corresponds to the Listen directive in httpd.conf . By default, Red Hat configures the Apache HTTP Server to listen to port 80 for non-secure Web communications. Click the Add button to define additional ports on which to accept requests. A window as shown in Figure 24.2, "Available Addresses" appears. Either choose the Listen to all addresses option to listen to all IP addresses on the defined port or specify a particular IP address over which the server accepts connections in the Address field. Only specify one IP address per port number. To specify more than one IP address with the same port number, create an entry for each IP address. If at all possible, use an IP address instead of a domain name to prevent a DNS lookup failure. Refer to http://httpd.apache.org/docs-2.0/dns-caveats.html for more information about Issues Regarding DNS and Apache . Entering an asterisk (*) in the Address field is the same as choosing Listen to all addresses . Clicking the Edit button in the Available Addresses frame shows the same window as the Add button except with the fields populated for the selected entry. To delete an entry, select it and click the Delete button. Note If you set the server to listen to a port under 1024, you must be root to start it. For port 1024 and above, httpd can be started as a regular user. Figure 24.2. Available Addresses
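The settings on the Main tab map to ordinary directives in /etc/httpd/conf/httpd.conf, so one safe way to review what the tool wrote, without editing the file by hand, is to search for those directives; the pattern and example values below are illustrative, not taken from this chapter:

grep -E '^(ServerName|ServerAdmin|Listen)' /etc/httpd/conf/httpd.conf
# typical output on a host configured as described above:
#   ServerName www.example.com
#   ServerAdmin webmaster@example.com
#   Listen 80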
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/httpd_configuration
B.88. spice-xpi
B.88. spice-xpi B.88.1. RHSA-2011:0426 - Moderate: spice-xpi security update An updated spice-xpi package that fixes two security issues is now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having moderate security impact. Common Vulnerability Scoring System (CVSS) base scores, which give detailed severity ratings, are available for each vulnerability from the CVE link(s) associated with each description below. The Simple Protocol for Independent Computing Environments (SPICE) is a remote display protocol used in Red Hat Enterprise Linux for viewing virtualized guests running on the Kernel-based Virtual Machine (KVM) hypervisor, or on Red Hat Enterprise Virtualization Hypervisor. CVE-2011-1179 The spice-xpi package provides a plug-in that allows the SPICE client to run from within Mozilla Firefox. An uninitialized pointer use flaw was found in the SPICE Firefox plug-in. If a user were tricked into visiting a malicious web page with Firefox while the SPICE plug-in was enabled, it could cause Firefox to crash or, possibly, execute arbitrary code with the privileges of the user running Firefox. CVE-2011-0012 It was found that the SPICE Firefox plug-in used a predictable name for one of its log files. A local attacker could use this flaw to conduct a symbolic link attack, allowing them to overwrite arbitrary files accessible to the user running Firefox. Users of spice-xpi should upgrade to this updated package, which contains backported patches to correct these issues. After installing the update, Firefox must be restarted for the changes to take effect.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.0_technical_notes/spice-xpi
Chapter 134. AutoRestartStatus schema reference
Chapter 134. AutoRestartStatus schema reference Used in: KafkaConnectorStatus , KafkaMirrorMaker2Status Property Description count The number of times the connector or task is restarted. integer connectorName The name of the connector being restarted. string lastRestartTimestamp The last time the automatic restart was attempted. The required format is 'yyyy-MM-ddTHH:mm:ssZ' in the UTC time zone. string
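As a sketch of how these properties might surface on a connector at run time, the status can be queried with a jsonpath expression; the connector name and the exact status path are assumptions, not part of this schema reference:

oc get kafkaconnector my-connector -o jsonpath='{.status.autoRestart}{"\n"}'
# example output:
# {"count":2,"connectorName":"my-connector","lastRestartTimestamp":"2024-05-07T10:15:30Z"}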
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-autorestartstatus-reference
Chapter 10. Scaling Compute nodes with director Operator
Chapter 10. Scaling Compute nodes with director Operator If you require more or fewer compute resources for your overcloud, you can scale the number of Compute nodes according to your requirements. 10.1. Adding Compute nodes to your overcloud with director Operator To add more Compute nodes to your overcloud, you must increase the node count for the compute OpenStackBaremetalSet resource. When a new node is provisioned, you create a new OpenStackConfigGenerator resource to generate a new set of Ansible playbooks, then use the OpenStackConfigVersion to create or update the OpenStackDeploy object to reapply the Ansible configuration to your overcloud. Procedure Check that you have enough hosts in a ready state in the openshift-machine-api namespace: For more information on managing your bare-metal hosts, see Managing bare metal hosts . Increase the count parameter for the compute OpenStackBaremetalSet resource: The OpenStackBaremetalSet resource automatically provisions the new nodes with the Red Hat Enterprise Linux base operating system. Wait until the provisioning process completes. Check the nodes periodically to determine the readiness of the nodes: Optional: Reserve static IP addresses for networks on the new Compute nodes. For more information, see Reserving static IP addresses for added Compute nodes with the OpenStackNetConfig CRD . Generate the Ansible playbooks by using OpenStackConfigGenerator . For more information, see Creating Ansible playbooks for overcloud configuration with the OpenStackConfigGenerator CRD . Apply the overcloud configuration. For more information, see Applying overcloud configuration with director Operator . Additional resources Managing bare metal hosts 10.2. Reserving static IP addresses for added Compute nodes with the OpenStackNetConfig CRD Use the OpenStackNetConfig CRD to define IP addresses that you want to reserve for the Compute node you added to your overcloud. Tip Use the following commands to view the OpenStackNetConfig CRD definition and specification schema: Procedure Open the openstacknetconfig.yaml file for the overcloud on your workstation. Add the following configuration to openstacknetconfig.yaml to create the OpenStackNetConfig custom resource (CR): Reserve static IP addresses for networks on specific nodes: Note Reservations have precedence over any autogenerated IP addresses. Save the openstacknetconfig.yaml definition file. Create the overcloud network configuration: Verification To verify that the overcloud network configuration is created, view the resources for the overcloud network configuration: View the OpenStackNetConfig API and child resources: If you see errors, check the underlying network-attach-definition and node network configuration policies: 10.3. Removing Compute nodes from your overcloud with director Operator To remove a Compute node from your overcloud, you must disable the Compute node, mark it for deletion, and decrease the node count for the compute OpenStackBaremetalSet resource. Note If you scale the overcloud with a new node in the same role, the node reuses the host names starting with lowest ID suffix and corresponding IP reservation. Prerequisites The workloads on the Compute nodes have been migrated to other Compute nodes. For more information, see Migrating virtual machine instances between Compute nodes . 
Procedure Access the remote shell for openstackclient : Identify the Compute node that you want to remove: Disable the Compute service on the node to prevent the node from scheduling new instances: Annotate the bare-metal node to prevent Metal 3 from starting the node: Replace <node> with the name of the BareMetalHost resource. Replace <metal3-pod> with the name of your metal3 pod. Log in to the Compute node as the root user and shut down the bare-metal node: If the Compute node is not accessible, complete the following steps: Log in to a Controller node as the root user. If Instance HA is enabled, disable the STONITH device for the Compute node: Replace <stonith_resource_name> with the name of the STONITH resource that corresponds to the node. The resource name uses the format <resource_agent>-<host_mac> . You can find the resource agent and the host MAC address in the FencingConfig section of the fencing.yaml file. Use IPMI to power off the bare-metal node. For more information, see your hardware vendor documentation. Retrieve the BareMetalHost resource that corresponds to the node that you want to remove: To change the status of the annotatedForDeletion parameter to true in the OpenStackBaremetalSet resource, annotate the BareMetalHost resource with osp-director.openstack.org/delete-host=true : Optional: Confirm that the annotatedForDeletion status has changed to true in the OpenStackBaremetalSet resource: Decrease the count parameter for the compute OpenStackBaremetalSet resource: When you reduce the resource count of the OpenStackBaremetalSet resource, you trigger the corresponding controller to handle the resource deletion, which causes the following actions: Director Operator deletes the corresponding IP reservations from OpenStackIPSet and OpenStackNetConfig for the deleted node. Director Operator flags the IP reservation entry in the OpenStackNet resource as deleted. Optional: To make the IP reservations of the deleted OpenStackBaremetalSet resource available for other roles to use, set the value of the spec.preserveReservations parameter to false in the OpenStackNetConfig object. Access the remote shell for openstackclient : Remove the Compute service entries from the overcloud: Check the Compute network agents entries in the overcloud and remove them if they exist: Exit from openstackclient :
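When waiting for a scale-up or scale-down to settle, the provisioningStatus fields shown in the command output below can also be queried directly; this one-liner is an illustrative addition, not a step from the procedure:

oc get openstackbaremetalset compute -n openstack -o jsonpath='{.status.provisioningStatus.state}{"\n"}'
# expected value once all requested BaremetalHosts are ready: provisioned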
[ "oc get baremetalhosts -n openshift-machine-api", "oc patch openstackbaremetalset compute --type=merge --patch '{\"spec\":{\"count\":3}}' -n openstack", "oc get baremetalhosts -n openshift-machine-api oc get openstackbaremetalset", "oc describe crd openstacknetconfig oc explain openstacknetconfig.spec", "apiVersion: osp-director.openstack.org/v1beta1 kind: OpenStackNetConfig metadata: name: openstacknetconfig", "spec: reservations: controller-0: ipReservations: ctlplane: 172.22.0.120 compute-0: ipReservations: ctlplane: 172.22.0.140 internal_api: 172.17.0.40 storage: 172.18.0.40 tenant: 172.20.0.40 //The key for the ctlplane VIPs controlplane: ipReservations: ctlplane: 172.22.0.110 external: 10.0.0.10 internal_api: 172.17.0.10 storage: 172.18.0.10 storage_mgmt: 172.19.0.10 macReservations: {}", "oc create -f openstacknetconfig.yaml -n openstack", "oc get openstacknetconfig/openstacknetconfig", "oc get openstacknetconfig/openstacknetconfig -n openstack oc get openstacknetattachment -n openstack oc get openstacknet -n openstack", "oc get network-attachment-definitions -n openstack oc get nncp", "oc rsh -n openstack openstackclient", "openstack compute service list", "openstack compute service set <hostname> nova-compute --disable", "oc annotate baremetalhost <node> baremetalhost.metal3.io/detached=true oc logs --since=1h <metal3-pod> metal3-baremetal-operator | grep -i detach oc get baremetalhost <node> -o json | jq .status.operationalStatus \"detached\"", "shutdown -h now", "pcs stonith disable <stonith_resource_name>", "oc get openstackbaremetalset compute -o json | jq '.status.baremetalHosts | to_entries[] | \"\\(.key) => \\(.value | .hostRef)\"' \"compute-0, openshift-worker-3\" \"compute-1, openshift-worker-4\"", "oc annotate -n openshift-machine-api bmh/openshift-worker-3 osp-director.openstack.org/delete-host=true --overwrite", "oc get openstackbaremetalset compute -o json -n openstack | jq .status { \"baremetalHosts\": { \"compute-0\": { \"annotatedForDeletion\": true, \"ctlplaneIP\": \"192.168.25.105/24\", \"hostRef\": \"openshift-worker-3\", \"hostname\": \"compute-0\", \"networkDataSecretName\": \"compute-cloudinit-networkdata-openshift-worker-3\", \"provisioningState\": \"provisioned\", \"userDataSecretName\": \"compute-cloudinit-userdata-openshift-worker-3\" }, \"compute-1\": { \"annotatedForDeletion\": false, \"ctlplaneIP\": \"192.168.25.106/24\", \"hostRef\": \"openshift-worker-4\", \"hostname\": \"compute-1\", \"networkDataSecretName\": \"compute-cloudinit-networkdata-openshift-worker-4\", \"provisioningState\": \"provisioned\", \"userDataSecretName\": \"compute-cloudinit-userdata-openshift-worker-4\" } }, \"provisioningStatus\": { \"readyCount\": 2, \"reason\": \"All requested BaremetalHosts have been provisioned\", \"state\": \"provisioned\" } }", "oc patch openstackbaremetalset compute --type=merge --patch '{\"spec\":{\"count\":1}}' -n openstack", "oc get osnet ctlplane -o json -n openstack | jq .reservations { \"compute-0\": { \"deleted\": true, \"ip\": \"172.22.0.140\" }, \"compute-1\": { \"deleted\": false, \"ip\": \"172.22.0.100\" }, \"controller-0\": { \"deleted\": false, \"ip\": \"172.22.0.120\" }, \"controlplane\": { \"deleted\": false, \"ip\": \"172.22.0.110\" }, \"openstackclient-0\": { \"deleted\": false, \"ip\": \"172.22.0.251\" }", "oc rsh openstackclient -n openstack", "openstack compute service list openstack compute service delete <service-id>", "openstack network agent list for AGENT in USD(openstack network agent list --host <scaled-down-node> -c ID -f 
value) ; do openstack network agent delete USDAGENT ; done", "exit" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/deploying_an_overcloud_in_a_red_hat_openshift_container_platform_cluster_with_director_operator/assembly_scaling-compute-nodes-with-director-operator
Release Notes for AMQ Streams 1.7 on OpenShift
Release Notes for AMQ Streams 1.7 on OpenShift Red Hat AMQ 2021.q2 For use with AMQ Streams on OpenShift Container Platform
null
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q2/html/release_notes_for_amq_streams_1.7_on_openshift/index
Chapter 3. Reviewing a system by using the tuna interface
Chapter 3. Reviewing a system by using the tuna interface The tuna tool reduces the complexity of performing tuning tasks. Use tuna to adjust scheduler tunables, tune thread priority and IRQ handlers, and isolate CPU cores and sockets. By using the tuna tool, you can perform the following operations: List the CPUs on a system. List the interrupt requests (IRQs) currently running on a system. Change policy and priority information about threads. Display the current policies and priorities of a system. 3.1. Installing the tuna tool The tuna tool is designed to be used on a running system. This allows application-specific measurement tools to see and analyze system performance immediately after changes have been made. Procedure Install the tuna tool: Verification Display the available tuna CLI options: Additional resources tuna(8) man page on your system 3.2. Viewing the system status by using the tuna tool You can use the tuna command-line interface (CLI) tool to view the system status. Prerequisites The tuna tool is installed. For more information, see Installing the tuna tool . Procedure View the current policies and priorities: Alternatively, to view a specific thread corresponding to a PID or matching a command name, enter: The pid_or_cmd_list argument is a list of comma-separated PIDs or command-name patterns. Depending on your scenario, perform one of the following actions: To tune CPUs by using the tuna CLI, complete the steps in Tuning CPUs by using the tuna tool . To tune the IRQs by using the tuna tool, complete the steps in Tuning IRQs by using the tuna tool . Save the changed configuration: This command saves only currently running kernel threads. Processes that are not running are not saved. Additional resources tuna(8) man page on your system 3.3. Tuning CPUs by using the tuna tool The tuna tool commands can target individual CPUs. By using the tuna tool, you can perform the following actions: Isolate CPUs All tasks running on the specified CPU move to the available CPU. Isolating a CPU makes this CPU unavailable by removing it from the affinity mask of all threads. Include CPUs Allows tasks to run on the specified CPU. Restore CPUs Restores the specified CPU to its configuration. Prerequisites The tuna tool is installed. For more information, see Installing the tuna tool . Procedure List all CPUs and specify the list of CPUs to be affected by the command: Display the thread list in the tuna interface: Specify the list of CPUs to be affected by a command: The cpu_list argument is a list of comma-separated CPU numbers, for example, --cpus 0,2 . To add a specific CPU to the current cpu_list , use, for example, --cpus +0 . Depending on your scenario, perform one of the following actions: To isolate a CPU, enter: To include a CPU, enter: To use a system with four or more processors, make all ssh threads run on CPU 0 and 1 and all http threads on CPU 2 and 3 : Verification Display the current configuration and verify that the changes were applied: # tuna show_threads -t ssh * pid SCHED_ rtpri affinity voluntary nonvoluntary cmd 855 OTHER 0 0,1 23 15 sshd # tuna show_threads -t http\ * pid SCHED_ rtpri affinity voluntary nonvoluntary cmd 855 OTHER 0 2,3 23 15 http Additional resources /proc/cpuinfo file tuna(8) man page on your system 3.4. Tuning IRQs by using the tuna tool The /proc/interrupts file records the number of interrupts per IRQ, the type of interrupt, and the name of the device that is located at that IRQ. Prerequisites The tuna tool is installed. 
For more information, see Installing the tuna tool . Procedure View the current IRQs and their affinity: Specify the list of IRQs to be affected by a command: The irq_list argument is a list of comma-separated IRQ numbers or user-name patterns. Replace [ command ] with, for example, --spread . Move an interrupt to a specified CPU: Replace 128 with the irq_list argument and 3 with the cpu_list argument. The cpu_list argument is a list of comma-separated CPU numbers, for example, --cpus 0,2 . For more information, see Tuning CPUs by using the tuna tool . Verification Compare the state of the selected IRQs before and after moving any interrupt to a specified CPU: Additional resources /proc/interrupts file tuna(8) man page on your system
[ "dnf install tuna", "tuna -h", "tuna show_threads pid SCHED_ rtpri affinity cmd 1 OTHER 0 0,1 init 2 FIFO 99 0 migration/0 3 OTHER 0 0 ksoftirqd/0 4 FIFO 99 0 watchdog/0", "tuna show_threads -t pid_or_cmd_list", "tuna save filename", "ps ax | awk 'BEGIN { ORS=\",\" }{ print USD1 }' PID,1,2,3,4,5,6,8,10,11,12,13,14,15,16,17,19", "tuna show_threads -t 'thread_list from above cmd'", "*tuna [ command ] --cpus cpu_list *", "tuna isolate --cpus cpu_list", "tuna include --cpus cpu_list", "tuna move --cpus 0,1 -t ssh * tuna move --cpus 2,3 -t http\\ *", "tuna show_threads -t ssh * pid SCHED_ rtpri affinity voluntary nonvoluntary cmd 855 OTHER 0 0,1 23 15 sshd tuna show_threads -t http\\ * pid SCHED_ rtpri affinity voluntary nonvoluntary cmd 855 OTHER 0 2,3 23 15 http", "tuna show_irqs users affinity 0 timer 0 1 i8042 0 7 parport0 0", "tuna [ command ] --irqs irq_list --cpus cpu_list", "tuna show_irqs --irqs 128 users affinity 128 iwlwifi 0,1,2,3 tuna move --irqs 128 --cpus 3", "tuna show_irqs --irqs 128 users affinity 128 iwlwifi 3" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/monitoring_and_managing_system_status_and_performance/reviewing-a-system-using-tuna-interface_monitoring-and-managing-system-status-and-performance
Chapter 292. Scheduler Component
Chapter 292. Scheduler Component Available as of Camel version 2.15 The scheduler: component is used to generate message exchanges when a scheduler fires. This component is similar to the Timer component, but it offers more functionality in terms of scheduling. Also this component uses JDK ScheduledExecutorService . Where as the timer uses a JDK Timer . You can only consume events from this endpoint. 292.1. URI format Where name is the name of the scheduler, which is created and shared across endpoints. So if you use the same name for all your scheduler endpoints, only one scheduler thread pool and thread will be used - but you can configure the thread pool to allow more concurrent threads. You can append query options to the URI in the following format, ?option=value&option=value&... Note: The IN body of the generated exchange is null . So exchange.getIn().getBody() returns null . 292.2. Options The Scheduler component supports 2 options, which are listed below. Name Description Default Type concurrentTasks (scheduler) Number of threads used by the scheduling thread pool. Is by default using a single thread 1 int resolveProperty Placeholders (advanced) Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true boolean The Scheduler endpoint is configured using URI syntax: with the following path and query parameters: 292.2.1. Path Parameters (1 parameters): Name Description Default Type name Required The name of the scheduler String 292.2.2. Query Parameters (20 parameters): Name Description Default Type bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN/ERROR level and ignored. false boolean sendEmptyMessageWhenIdle (consumer) If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead. false boolean exceptionHandler (consumer) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this options is not in use. By default the consumer will deal with exceptions, that will be logged at WARN/ERROR level and ignored. ExceptionHandler exchangePattern (consumer) Sets the default exchange pattern when creating an exchange. ExchangePattern pollStrategy (consumer) A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange have been created and being routed in Camel. In other words the error occurred while the polling was gathering information, for instance access to a file network failed so Camel cannot access it to scan for files. The default implementation will log the caused exception at WARN level and ignore it. PollingConsumerPoll Strategy synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean backoffErrorThreshold (scheduler) The number of subsequent error polls (failed due some error) that should happen before the backoffMultipler should kick-in. 
int backoffIdleThreshold (scheduler) The number of subsequent idle polls that should happen before the backoffMultipler should kick-in. int backoffMultiplier (scheduler) To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the actual attempt is happening again. When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured. int concurrentTasks (scheduler) Number of threads used by the scheduling thread pool. Is by default using a single thread 1 int delay (scheduler) Milliseconds before the poll. The default value is 500. You can also specify time values using units, such as 60s (60 seconds), 5m30s (5 minutes and 30 seconds), and 1h (1 hour). 500 long greedy (scheduler) If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the run polled 1 or more messages. false boolean initialDelay (scheduler) Milliseconds before the first poll starts. The default value is 1000. You can also specify time values using units, such as 60s (60 seconds), 5m30s (5 minutes and 30 seconds), and 1h (1 hour). 1000 long runLoggingLevel (scheduler) The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that. TRACE LoggingLevel scheduledExecutorService (scheduler) Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool. This option allows you to share a thread pool among multiple consumers. ScheduledExecutor Service scheduler (scheduler) Allow to plugin a custom org.apache.camel.spi.ScheduledPollConsumerScheduler to use as the scheduler for firing when the polling consumer runs. The default implementation uses the ScheduledExecutorService and there is a Quartz2, and Spring based which supports CRON expressions. Notice: If using a custom scheduler then the options for initialDelay, useFixedDelay, timeUnit, and scheduledExecutorService may not be in use. Use the text quartz2 to refer to use the Quartz2 scheduler; and use the text spring to use the Spring based; and use the text #myScheduler to refer to a custom scheduler by its id in the Registry. See Quartz2 page for an example. none ScheduledPollConsumer Scheduler schedulerProperties (scheduler) To configure additional properties when using a custom scheduler or any of the Quartz2, Spring based scheduler. Map startScheduler (scheduler) Whether the scheduler should be auto started. true boolean timeUnit (scheduler) Time unit for initialDelay and delay options. MILLISECONDS TimeUnit useFixedDelay (scheduler) Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details. true boolean 292.3. More information This component is a scheduler Polling Consumer where you can find more information about the options above, and examples at the Polling Consumer page. 292.4. Exchange Properties When the timer is fired, it adds the following information as properties to the Exchange : Name Type Description Exchange.TIMER_NAME String The value of the name option. Exchange.TIMER_FIRED_TIME Date The time when the consumer fired. 292.5. 
Sample To set up a route that generates an event every 60 seconds: from("scheduler://foo?delay=60s").to("bean:myBean?method=someMethodName"); The above route will generate an event and then invoke the someMethodName method on the bean called myBean in the Registry such as JNDI or Spring. And the route in Spring DSL: <route> <from uri="scheduler://foo?delay=60s"/> <to uri="bean:myBean?method=someMethodName"/> </route> 292.6. Forcing the scheduler to trigger immediately when completed To let the scheduler trigger as soon as the task is complete, you can set the option greedy=true . But beware that the scheduler will then keep firing all the time. So use this with caution. 292.7. Forcing the scheduler to be idle There can be use cases where you want the scheduler to trigger and be greedy. But sometimes you want to "tell the scheduler" that there was no task to poll, so the scheduler can change into idle mode using the backoff options. To do this you would need to set a property on the exchange with the key Exchange.SCHEDULER_POLLED_MESSAGES to a boolean value of false. This will cause the consumer to indicate that there were no messages polled. The consumer will otherwise, by default, return 1 message polled to the scheduler every time the consumer has completed processing the exchange. 292.8. See Also Timer Quartz
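A short Java DSL sketch of the idle-signalling technique from Section 292.7, combined with the backoff options from the table above; the endpoint parameters and the pollSomeSource() check are placeholders for a real consumer, not code from this reference:

from("scheduler://foo?delay=5s&greedy=true&backoffIdleThreshold=5&backoffMultiplier=10")
    .process(exchange -> {
        boolean foundWork = pollSomeSource(); // placeholder: check whether there was any task to poll
        if (!foundWork) {
            // count this run as an idle poll so the backoff options can kick in
            exchange.setProperty(Exchange.SCHEDULER_POLLED_MESSAGES, false);
        }
    })
    .to("bean:myBean?method=someMethodName");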
[ "scheduler:name[?options]", "scheduler:name", "from(\"scheduler://foo?delay=60s\").to(\"bean:myBean?method=someMethodName\");", "<route> <from uri=\"scheduler://foo?delay=60s\"/> <to uri=\"bean:myBean?method=someMethodName\"/> </route>" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/scheduler-component
Appendix A. Image service command options
Appendix A. Image service command options You can use the following optional arguments with the glance image-create , glance image-create-via-import , and glance image-update commands. Table A.1. Command options Specific to Option Description All --architecture <ARCHITECTURE> Operating system architecture as specified in https://docs.openstack.org/glance/latest/user/common-image-properties.html#architecture All --protected [True_False] If true, image will not be deletable. All --name <NAME> Descriptive name for the image All --instance-uuid <INSTANCE_UUID> Metadata that can be used to record which instance this image is associated with. (Informational only, does not create an instance snapshot.) All --min-disk <MIN_DISK> Amount of disk space (in GB) required to boot image. All --visibility <VISIBILITY> Scope of image accessibility. Valid values: public, private, community, shared All --kernel-id <KERNEL_ID> ID of image stored in the Image service (glance) that should be used as the kernel when booting an AMI-style image. All --os-version <OS_VERSION> Operating system version as specified by the distributor All --disk-format <DISK_FORMAT> Format of the disk. Valid values: None, ami, ari, aki, vhd, vhdx, vmdk, raw, qcow2, vdi, iso, ploop All --os-distro <OS_DISTRO> Common name of operating system distribution as specified in https://docs.openstack.org/glance/latest/user/common-image-properties.html#os-distro All --owner <OWNER> Owner of the image All --ramdisk-id <RAMDISK_ID> ID of image stored in the Image service that should be used as the ramdisk when booting an AMI-style image. All --min-ram <MIN_RAM> Amount of RAM (in MB) required to boot image. All --container-format <CONTAINER_FORMAT> Format of the container. Valid values: None, ami, ari, aki, bare, ovf, ova, docker All --property <key=value> Arbitrary property to associate with image. May be used multiple times. glance image-create --tags <TAGS> [<TAGS> ...] List of strings related to the image glance image-create --id <ID> An identifier for the image glance image-update --remove-property Key name of arbitrary property to remove from the image.
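For illustration, several of the options in the table can be combined in a single call; the image name and values below are hypothetical and use only options documented above:

glance image-create --name "rhel-9-guest" \
    --disk-format qcow2 --container-format bare \
    --min-disk 10 --min-ram 512 \
    --visibility shared \
    --property architecture=x86_64 --property os_distro=rhel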
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/creating_and_managing_images/assembly_image-service-command-options_glance-creating-images
Preface
Preface For OpenShift Data Foundation, node replacement can be performed proactively for an operational node and reactively for a failed node for the following deployments: For Amazon Web Services (AWS) User-provisioned infrastructure Installer-provisioned infrastructure For VMware User-provisioned infrastructure Installer-provisioned infrastructure For Microsoft Azure Installer-provisioned infrastructure For local storage devices Bare metal VMware IBM Power For replacing your storage nodes in external mode, see Red Hat Ceph Storage documentation .
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/replacing_nodes/preface-replacing-nodes
Appendix A. Fence Device Parameters
Appendix A. Fence Device Parameters This appendix provides tables with parameter descriptions of fence devices. You can configure the parameters with luci , by using the ccs command, or by editing the etc/cluster/cluster.conf file. For a comprehensive list and description of the fence device parameters for each fence agent, see the man page for that agent. Note The Name parameter for a fence device specifies an arbitrary name for the device that will be used by Red Hat High Availability Add-On. This is not the same as the DNS name for the device. Note Certain fence devices have an optional Password Script parameter. The Password Script parameter allows you to specify that a fence-device password is supplied from a script rather than from the Password parameter. Using the Password Script parameter supersedes the Password parameter, allowing passwords to not be visible in the cluster configuration file ( /etc/cluster/cluster.conf ). Table A.1, "Fence Device Summary" lists the fence devices, the fence device agents associated with the fence devices, and provides a reference to the table documenting the parameters for the fence devices. Table A.1. Fence Device Summary Fence Device Fence Agent Reference to Parameter Description APC Power Switch (telnet/SSH) fence_apc Table A.2, "APC Power Switch (telnet/SSH)" APC Power Switch over SNMP fence_apc_snmp Table A.3, "APC Power Switch over SNMP" Brocade Fabric Switch fence_brocade Table A.4, "Brocade Fabric Switch" Cisco MDS fence_cisco_mds Table A.5, "Cisco MDS" Cisco UCS fence_cisco_ucs Table A.6, "Cisco UCS" Dell DRAC 5 fence_drac5 Table A.7, "Dell DRAC 5" Dell iDRAC fence_idrac Table A.25, "IPMI (Intelligent Platform Management Interface) LAN, Dell iDrac, IBM Integrated Management Module, HPiLO3, HPiLO4" Eaton Network Power Switch (SNMP Interface) fence_eaton_snmp Table A.8, "Eaton Network Power Controller (SNMP Interface) (Red Hat Enterprise Linux 6.4 and later)" Egenera BladeFrame fence_egenera Table A.9, "Egenera BladeFrame" Emerson Network Power Switch (SNMP Interface) fence_emerson Table A.10, "Emerson Network Power Switch (SNMP interface) (Red Hat Enterprise LInux 6.7 and later) " ePowerSwitch fence_eps Table A.11, "ePowerSwitch" Fence virt (Serial/VMChannel Mode) fence_virt Table A.12, "Fence virt (Serial/VMChannel Mode)" Fence virt (fence_xvm/Multicast Mode) fence_xvm Table A.13, "Fence virt (fence_xvm/Multicast Mode) " Fujitsu Siemens Remoteview Service Board (RSB) fence_rsb Table A.14, "Fujitsu Siemens Remoteview Service Board (RSB)" HP BladeSystem fence_hpblade Table A.15, "HP BladeSystem (Red Hat Enterprise Linux 6.4 and later)" HP iLO Device fence_ilo Table A.16, "HP iLO and HP iLO2" HP iLO over SSH Device fence_ilo_ssh Table A.17, "HP iLO over SSH, HP iLO3 over SSH, HPiLO4 over SSH (Red Hat Enterprise Linux 6.7 and later)" HP iLO2 Device fence_ilo2 Table A.16, "HP iLO and HP iLO2" HP iLO3 Device fence_ilo3 Table A.25, "IPMI (Intelligent Platform Management Interface) LAN, Dell iDrac, IBM Integrated Management Module, HPiLO3, HPiLO4" HP iLO3 over SSH Device fence_ilo3_ssh Table A.17, "HP iLO over SSH, HP iLO3 over SSH, HPiLO4 over SSH (Red Hat Enterprise Linux 6.7 and later)" HP iLO4 Device fence_ilo4 Table A.25, "IPMI (Intelligent Platform Management Interface) LAN, Dell iDrac, IBM Integrated Management Module, HPiLO3, HPiLO4" HP iLO4 over SSH Device fence_ilo4_ssh Table A.17, "HP iLO over SSH, HP iLO3 over SSH, HPiLO4 over SSH (Red Hat Enterprise Linux 6.7 and later)" HP iLO MP fence_ilo_mp Table A.18, "HP iLO MP" HP Moonshot iLO 
fence_ilo_moonshot Table A.19, "HP Moonshot iLO (Red Hat Enterprise Linux 6.7 and later)" IBM BladeCenter fence_bladecenter Table A.20, "IBM BladeCenter" IBM BladeCenter SNMP fence_ibmblade Table A.21, "IBM BladeCenter SNMP" IBM Integrated Management Module fence_imm Table A.25, "IPMI (Intelligent Platform Management Interface) LAN, Dell iDrac, IBM Integrated Management Module, HPiLO3, HPiLO4" IBM iPDU fence_ipdu Table A.22, "IBM iPDU (Red Hat Enterprise Linux 6.4 and later)" IF MIB fence_ifmib Table A.23, "IF MIB" Intel Modular fence_intelmodular Table A.24, "Intel Modular" IPMI (Intelligent Platform Management Interface) Lan fence_ipmilan Table A.25, "IPMI (Intelligent Platform Management Interface) LAN, Dell iDrac, IBM Integrated Management Module, HPiLO3, HPiLO4" Fence kdump fence_kdump Table A.26, "Fence kdump" Multipath Persistent Reservation Fencing fence_mpath Table A.27, "Multipath Persistent Reservation Fencing (Red Hat Enterprise Linux 6.7 and later)" RHEV-M fencing fence_rhevm Table A.28, "RHEV-M fencing (RHEL 6.2 and later against RHEV 3.0 and later)" SCSI Fencing fence_scsi Table A.29, "SCSI Reservation Fencing" VMware Fencing (SOAP Interface) fence_vmware_soap Table A.30, "VMware Fencing (SOAP Interface) (Red Hat Enterprise Linux 6.2 and later)" WTI Power Switch fence_wti Table A.31, "WTI Power Switch" Table A.2, "APC Power Switch (telnet/SSH)" lists the fence device parameters used by fence_apc , the fence agent for APC over telnet/SSH. Table A.2. APC Power Switch (telnet/SSH) luci Field cluster.conf Attribute Description Name name A name for the APC device connected to the cluster into which the fence daemon logs by means of telnet/ssh. IP Address or Hostname ipaddr The IP address or host name assigned to the device. IP Port (optional) ipport The TCP port to use to connect to the device. The default port is 23, or 22 if Use SSH is selected. Login login The login name used to access the device. Password passwd The password used to authenticate the connection to the device. Password Script (optional) passwd_script The script that supplies a password for access to the fence device. Using this supersedes the Password parameter. Power Wait (seconds) power_wait Number of seconds to wait after issuing a power off or power on command. Power Timeout (seconds) power_timeout Number of seconds to continue testing for a status change after issuing a power off or power on command. The default value is 20. Shell Timeout (seconds) shell_timeout Number of seconds to wait for a command prompt after issuing a command. The default value is 3. Login Timeout (seconds) login_timeout Number of seconds to wait for a command prompt after login. The default value is 5. Times to Retry Power On Operation retry_on Number of attempts to retry a power on operation. The default value is 1. Port port The port. Switch (optional) switch The switch number for the APC switch that connects to the node when you have multiple daisy-chained switches. Delay (optional) delay The number of seconds to wait before fencing is started. The default value is 0. Use SSH secure Indicates that system will use SSH to access the device. When using SSH, you must specify either a password, a password script, or an identity file. SSH Options ssh_options SSH options to use. The default value is -1 -c blowfish . Path to SSH Identity File identity_file The identity file for SSH. 
Table A.3, "APC Power Switch over SNMP" lists the fence device parameters used by fence_apc_snmp , the fence agent for APC that logs into the SNP device by means of the SNMP protocol. Table A.3. APC Power Switch over SNMP luci Field cluster.conf Attribute Description Name name A name for the APC device connected to the cluster into which the fence daemon logs by means of the SNMP protocol. IP Address or Hostname ipaddr The IP address or host name assigned to the device. UDP/TCP port udpport The UDP/TCP port to use for connection with the device; the default value is 161. Login login The login name used to access the device. Password passwd The password used to authenticate the connection to the device. Password Script (optional) passwd_script The script that supplies a password for access to the fence device. Using this supersedes the Password parameter. SNMP Version snmp_version The SNMP version to use (1, 2c, 3); the default value is 1. SNMP Community community The SNMP community string; the default value is private . SNMP Security Level snmp_sec_level The SNMP security level (noAuthNoPriv, authNoPriv, authPriv). SNMP Authentication Protocol snmp_auth_prot The SNMP authentication protocol (MD5, SHA). SNMP Privacy Protocol snmp_priv_prot The SNMP privacy protocol (DES, AES). SNMP Privacy Protocol Password snmp_priv_passwd The SNMP privacy protocol password. SNMP Privacy Protocol Script snmp_priv_passwd_script The script that supplies a password for SNMP privacy protocol. Using this supersedes the SNMP privacy protocol password parameter. Power Wait (seconds) power_wait Number of seconds to wait after issuing a power off or power on command. Power Timeout (seconds) power_timeout Number of seconds to continue testing for a status change after issuing a power off or power on command. The default value is 20. Shell Timeout (seconds) shell_timeout Number of seconds to wait for a command prompt after issuing a command. The default value is 3. Login Timeout (seconds) login_timeout Number of seconds to wait for a command prompt after login. The default value is 5. Times to Retry Power On Operation retry_on Number of attempts to retry a power on operation. The default value is 1. Port (Outlet) Number port The port. Delay (optional) delay The number of seconds to wait before fencing is started. The default value is 0. Table A.4, "Brocade Fabric Switch" lists the fence device parameters used by fence_brocade , the fence agent for Brocade FC switches. Table A.4. Brocade Fabric Switch luci Field cluster.conf Attribute Description Name name A name for the Brocade device connected to the cluster. IP Address or Hostname ipaddr The IP address assigned to the device. Login login The login name used to access the device. Password passwd The password used to authenticate the connection to the device. Password Script (optional) passwd_script The script that supplies a password for access to the fence device. Using this supersedes the Password parameter. Force IP Family inet4_only, inet6_only Force the agent to use IPv4 or IPv6 addresses only Force Command Prompt cmd_prompt The command prompt to use. The default value is '\USD'. Power Wait (seconds) power_wait Number of seconds to wait after issuing a power off or power on command. Power Timeout (seconds) power_timeout Number of seconds to continue testing for a status change after issuing a power off or power on command. The default value is 20. Shell Timeout (seconds) shell_timeout Number of seconds to wait for a command prompt after issuing a command. 
The default value is 3. Login Timeout (seconds) login_timeout Number of seconds to wait for a command prompt after login. The default value is 5. Times to Retry Power On Operation retry_on Number of attempts to retry a power on operation. The default value is 1. Port port The switch outlet number. Delay (optional) delay The number of seconds to wait before fencing is started. The default value is 0. Use SSH secure Indicates that the system will use SSH to access the device. When using SSH, you must specify either a password, a password script, or an identity file. SSH Options ssh_options SSH options to use. The default value is -1 -c blowfish . Path to SSH Identity File identity_file The identity file for SSH. Unfencing unfence section of the cluster configuration file When enabled, this ensures that a fenced node is not re-enabled until the node has been rebooted. This is necessary for non-power fence methods (that is, SAN/storage fencing). When you configure a device that requires unfencing, the cluster must first be stopped and the full configuration including devices and unfencing must be added before the cluster is started. For more information about unfencing a node, see the fence_node (8) man page. For information about configuring unfencing in the cluster configuration file, see Section 8.3, "Configuring Fencing" . For information about configuring unfencing with the ccs command, see Section 6.7.2, "Configuring a Single Storage-Based Fence Device for a Node" . Table A.5, "Cisco MDS" lists the fence device parameters used by fence_cisco_mds , the fence agent for Cisco MDS. Table A.5. Cisco MDS luci Field cluster.conf Attribute Description Name name A name for the Cisco MDS 9000 series device with SNMP enabled. IP Address or Hostname ipaddr The IP address or host name assigned to the device. UDP/TCP port (optional) udpport The UDP/TCP port to use for connection with the device; the default value is 161. Login login The login name used to access the device. Password passwd The password used to authenticate the connection to the device. Password Script (optional) passwd_script The script that supplies a password for access to the fence device. Using this supersedes the Password parameter. SNMP Version snmp_version The SNMP version to use (1, 2c, 3). SNMP Community community The SNMP community string. SNMP Security Level snmp_sec_level The SNMP security level (noAuthNoPriv, authNoPriv, authPriv). SNMP Authentication Protocol snmp_auth_prot The SNMP authentication protocol (MD5, SHA). SNMP Privacy Protocol snmp_priv_prot The SNMP privacy protocol (DES, AES). SNMP Privacy Protocol Password snmp_priv_passwd The SNMP privacy protocol password. SNMP Privacy Protocol Script snmp_priv_passwd_script The script that supplies a password for SNMP privacy protocol. Using this supersedes the SNMP privacy protocol password parameter. Power Wait (seconds) power_wait Number of seconds to wait after issuing a power off or power on command. Power Timeout (seconds) power_timeout Number of seconds to continue testing for a status change after issuing a power off or power on command. The default value is 20. Shell Timeout (seconds) shell_timeout Number of seconds to wait for a command prompt after issuing a command. The default value is 3. Login Timeout (seconds) login_timeout Number of seconds to wait for a command prompt after login. The default value is 5. Times to Retry Power On Operation retry_on Number of attempts to retry a power on operation. The default value is 1. 
Port (Outlet) Number port The port. Delay (optional) delay The number of seconds to wait before fencing is started. The default value is 0. Table A.6, "Cisco UCS" lists the fence device parameters used by fence_cisco_ucs , the fence agent for Cisco UCS. Table A.6. Cisco UCS luci Field cluster.conf Attribute Description Name name A name for the Cisco UCS device. IP Address or Hostname ipaddr The IP address or host name assigned to the device. IP port (optional) ipport The TCP port to use to connect to the device. Login login The login name used to access the device. Password passwd The password used to authenticate the connection to the device. Password Script (optional) passwd_script The script that supplies a password for access to the fence device. Using this supersedes the Password parameter. Use SSL ssl Use SSL connections to communicate with the device. Sub-Organization suborg Additional path needed to access suborganization. Power Wait (seconds) power_wait Number of seconds to wait after issuing a power off or power on command. Power Timeout (seconds) power_timeout Number of seconds to continue testing for a status change after issuing a power off or power on command. The default value is 20. Shell Timeout (seconds) shell_timeout Number of seconds to wait for a command prompt after issuing a command. The default value is 3. Login Timeout (seconds) login_timeout Number of seconds to wait for a command prompt after login. The default value is 5. Times to Retry Power On Operation retry_on Number of attempts to retry a power on operation. The default value is 1. Port (Outlet) Number port Name of virtual machine. Delay (optional) delay The number of seconds to wait before fencing is started. The default value is 0. Table A.7, "Dell DRAC 5" lists the fence device parameters used by fence_drac5 , the fence agent for Dell DRAC 5. Table A.7. Dell DRAC 5 luci Field cluster.conf Attribute Description Name name The name assigned to the DRAC. IP Address or Hostname ipaddr The IP address or host name assigned to the DRAC. IP Port (optional) ipport The TCP port to use to connect to the device. Login login The login name used to access the DRAC. Password passwd The password used to authenticate the connection to the DRAC. Password Script (optional) passwd_script The script that supplies a password for access to the fence device. Using this supersedes the Password parameter. Use SSH secure Indicates that the system will use SSH to access the device. When using SSH, you must specify either a password, a password script, or an identity file. SSH Options ssh_options SSH options to use. The default value is -1 -c blowfish . Path to SSH Identity File identity_file The identity file for SSH. Module Name module_name (optional) The module name for the DRAC when you have multiple DRAC modules. Force Command Prompt cmd_prompt The command prompt to use. The default value is '\USD'. Power Wait (seconds) power_wait Number of seconds to wait after issuing a power off or power on command. Delay (seconds) delay The number of seconds to wait before fencing is started. The default value is 0. Power Timeout (seconds) power_timeout Number of seconds to continue testing for a status change after issuing a power off or power on command. The default value is 20. Shell Timeout (seconds) shell_timeout Number of seconds to wait for a command prompt after issuing a command. The default value is 3. Login Timeout (seconds) login_timeout Number of seconds to wait for a command prompt after login. The default value is 5. 
Times to Retry Power On Operation retry_on Number of attempts to retry a power on operation. The default value is 1. Table A.8, "Eaton Network Power Controller (SNMP Interface) (Red Hat Enterprise Linux 6.4 and later)" lists the fence device parameters used by fence_eaton_snmp , the fence agent for the Eaton over SNMP network power switch. Table A.8. Eaton Network Power Controller (SNMP Interface) (Red Hat Enterprise Linux 6.4 and later) luci Field cluster.conf Attribute Description Name name A name for the Eaton network power switch connected to the cluster. IP Address or Hostname ipaddr The IP address or host name assigned to the device. UDP/TCP Port (optional) udpport The UDP/TCP port to use for connection with the device; the default value is 161. Login login The login name used to access the device. Password passwd The password used to authenticate the connection to the device. Password Script (optional) passwd_script The script that supplies a password for access to the fence device. Using this supersedes the Password parameter. SNMP Version snmp_version The SNMP version to use (1, 2c, 3); the default value is 1. SNMP Community community The SNMP community string; the default value is private . SNMP Security Level snmp_sec_level The SNMP security level (noAuthNoPriv, authNoPriv, authPriv). SNMP Authentication Protocol snmp_auth_prot The SNMP authentication protocol (MD5, SHA). SNMP Privacy Protocol snmp_priv_prot The SNMP privacy protocol (DES, AES). SNMP Privacy Protocol Password snmp_priv_passwd The SNMP privacy protocol password. SNMP Privacy Protocol Script snmp_priv_passwd_script The script that supplies a password for SNMP privacy protocol. Using this supersedes the SNMP privacy protocol password parameter. Power wait (seconds) power_wait Number of seconds to wait after issuing a power off or power on command. Power Timeout (seconds) power_timeout Number of seconds to continue testing for a status change after issuing a power off or power on command. The default value is 20. Shell Timeout (seconds) shell_timeout Number of seconds to wait for a command prompt after issuing a command. The default value is 3. Login Timeout (seconds) login_timeout Number of seconds to wait for a command prompt after login. The default value is 5. Times to Retry Power On Operation retry_on Number of attempts to retry a power on operation. The default value is 1. Port (Outlet) Number port Physical plug number or name of virtual machine. This parameter is always required. Delay (optional) delay The number of seconds to wait before fencing is started. The default value is 0. Table A.9, "Egenera BladeFrame" lists the fence device parameters used by fence_egenera , the fence agent for the Egenera BladeFrame. Table A.9. Egenera BladeFrame luci Field cluster.conf Attribute Description Name name A name for the Egenera BladeFrame device connected to the cluster. CServer cserver The host name (and optionally the user name in the form of username@hostname ) assigned to the device. Refer to the fence_egenera (8) man page for more information. ESH Path (optional) esh The path to the esh command on the cserver (default is /opt/panmgr/bin/esh) Username user The login name. The default value is root . lpan lpan The logical process area network (LPAN) of the device. pserver pserver The processing blade (pserver) name of the device. Delay (optional) delay The number of seconds to wait before fencing is started. The default value is 0. 
Unfencing unfence section of the cluster configuration file When enabled, this ensures that a fenced node is not re-enabled until the node has been rebooted. This is necessary for non-power fence methods (that is, SAN/storage fencing). When you configure a device that requires unfencing, the cluster must first be stopped and the full configuration including devices and unfencing must be added before the cluster is started. For more information about unfencing a node, see the fence_node (8) man page. For information about configuring unfencing in the cluster configuration file, see Section 8.3, "Configuring Fencing" . For information about configuring unfencing with the ccs command, see Section 6.7.2, "Configuring a Single Storage-Based Fence Device for a Node" . Table A.10, "Emerson Network Power Switch (SNMP interface) (Red Hat Enterprise LInux 6.7 and later) " lists the fence device parameters used by fence_emerson , the fence agent for Emerson over SNMP. Table A.10. Emerson Network Power Switch (SNMP interface) (Red Hat Enterprise LInux 6.7 and later) luci Field cluster.conf Attribute Description Name name A name for the Emerson Network Power Switch device connected to the cluster. IP Address or Hostname ipaddr The IP address or host name assigned to the device. UDP/TCP Port (optional) udpport UDP/TCP port to use for connections with the device; the default value is 161. Login login The login name used to access the device. Password passwd The password used to authenticate the connection to the device. Password Script (optional) passwd_script The script that supplies a password for access to the fence device. Using this supersedes the Password parameter. SNMP Version snmp_version The SNMP version to use (1, 2c, 3); the default value is 1. SNMP Community community The SNMP community string. SNMP Security Level snmp_sec_level The SNMP security level (noAuthNoPriv, authNoPriv, authPriv). SNMP Authentication Protocol snmp_auth_prot The SNMP authentication protocol (MD5, SHA). SNMP Privacy Protocol snmp_priv_prot The SNMP privacy protocol (DES, AES). SNMP privacy protocol password snmp_priv_passwd The SNMP Privacy Protocol Password. SNMP Privacy Protocol Script snmp_priv_passwd_script The script that supplies a password for SNMP privacy protocol. Using this supersedes the SNMP privacy protocol password parameter. Power Wait (seconds) power_wait Number of seconds to wait after issuing a power off or power on command. Power Timeout (seconds) power_timeout Number of seconds to continue testing for a status change after issuing a power off or power on command. The default value is 20. Shell Timeout (seconds) shell_timeout Number of seconds to wait for a command prompt after issuing a command. The default value is 3. Login Timeout (seconds) login_timeout Number of seconds to wait for a command prompt after login. The default value is 5. Times to Retry Power On Operation retry_on Number of attempts to retry a power on operation. The default value is 1. Port (Outlet) Number port Physical plug number or name of virtual machine. Delay (optional) delay The number of seconds to wait before fencing is started. The default value is 0. Table A.11, "ePowerSwitch" lists the fence device parameters used by fence_eps , the fence agent for ePowerSwitch. Table A.11. ePowerSwitch luci Field cluster.conf Attribute Description Name name A name for the ePowerSwitch device connected to the cluster. IP Address or Hostname ipaddr The IP address or host name assigned to the device. 
Login login The login name used to access the device. Password passwd The password used to authenticate the connection to the device. Password Script (optional) passwd_script The script that supplies a password for access to the fence device. Using this supersedes the Password parameter. Name of Hidden Page hidden_page The name of the hidden page for the device. Times to Retry Power On Operation retry_on Number of attempts to retry a power on operation. The default value is 1. Port (Outlet) Number port Physical plug number or name of virtual machine. Delay (optional) delay The number of seconds to wait before fencing is started. The default value is 0. Table A.12, "Fence virt (Serial/VMChannel Mode)" lists the fence device parameters used by fence_virt , the fence agent for virtual machines using VM channel or serial mode . Table A.12. Fence virt (Serial/VMChannel Mode) luci Field cluster.conf Attribute Description Name name A name for the Fence virt fence device. Serial Device serial_device On the host, the serial device must be mapped in each domain's configuration file. For more information, see the fence_virt man page. If this field is specified, it causes the fence_virt fencing agent to operate in serial mode. Not specifying a value causes the fence_virt fencing agent to operate in VM channel mode. Serial Parameters serial_params The serial parameters. The default is 115200, 8N1. VM Channel IP Address channel_address The channel IP. The default value is 10.0.2.179. Timeout (optional) timeout Fencing timeout, in seconds. The default value is 30. Domain port (formerly domain ) Virtual machine (domain UUID or name) to fence. ipport The channel port. The default value is 1229, which is the value used when configuring this fence device with luci . Delay (optional) delay Fencing delay, in seconds. The fence agent will wait the specified number of seconds before attempting a fencing operation. The default value is 0. Table A.13, "Fence virt (fence_xvm/Multicast Mode) " lists the fence device parameters used by fence_xvm , the fence agent for virtual machines using multicast. Table A.13. Fence virt (fence_xvm/Multicast Mode) luci Field cluster.conf Attribute Description Name name A name for the Fence virt fence device. Timeout (optional) timeout Fencing timeout, in seconds. The default value is 30. Domain port (formerly domain ) Virtual machine (domain UUID or name) to fence. Delay (optional) delay Fencing delay, in seconds. The fence agent will wait the specified number of seconds before attempting a fencing operation. The default value is 0. Table A.14, "Fujitsu Siemens Remoteview Service Board (RSB)" lists the fence device parameters used by fence_rsb , the fence agent for Fujitsu-Siemens RSB. Table A.14. Fujitsu Siemens Remoteview Service Board (RSB) luci Field cluster.conf Attribute Description Name name A name for the RSB to use as a fence device. IP Address or Hostname ipaddr The host name assigned to the device. Login login The login name used to access the device. Password passwd The password used to authenticate the connection to the device. Password Script (optional) passwd_script The script that supplies a password for access to the fence device. Using this supersedes the Password parameter. Use SSH secure Indicates that the system will use SSH to access the device. When using SSH, you must specify either a password, a password script, or an identity file. Path to SSH Identity File identity_file The Identity file for SSH. 
TCP Port ipport The port number on which the telnet service listens. The default value is 3172. Force Command Prompt cmd_prompt The command prompt to use. The default value is '\USD'. Power Wait (seconds) power_wait Number of seconds to wait after issuing a power off or power on command. Delay (seconds) delay The number of seconds to wait before fencing is started. The default value is 0. Power Timeout (seconds) power_timeout Number of seconds to continue testing for a status change after issuing a power off or power on command. The default value is 20. Shell Timeout (seconds) shell_timeout Number of seconds to wait for a command prompt after issuing a command. The default value is 3. Login Timeout (seconds) login_timeout Number of seconds to wait for a command prompt after login. The default value is 5. Times to Retry Power On Operation retry_on Number of attempts to retry a power on operation. The default value is 1. Table A.15, "HP BladeSystem (Red Hat Enterprise Linux 6.4 and later)" lists the fence device parameters used by fence_hpblade , the fence agent for HP BladeSystem. Table A.15. HP BladeSystem (Red Hat Enterprise Linux 6.4 and later) luci Field cluster.conf Attribute Description Name name The name assigned to the HP Bladesystem device connected to the cluster. IP Address or Hostname ipaddr The IP address or host name assigned to the HP BladeSystem device. IP Port (optional) ipport The TCP port to use to connect to the device. Login login The login name used to access the HP BladeSystem device. This parameter is required. Password passwd The password used to authenticate the connection to the fence device. Password Script (optional) passwd_script The script that supplies a password for access to the fence device. Using this supersedes the Password parameter. Force Command Prompt cmd_prompt The command prompt to use. The default value is '\USD'. Missing port returns OFF instead of failure missing_as_off Missing port returns OFF instead of failure. Power Wait (seconds) power_wait Number of seconds to wait after issuing a power off or power on command. Power Timeout (seconds) power_timeout Number of seconds to continue testing for a status change after issuing a power off or power on command. The default value is 20. Shell Timeout (seconds) shell_timeout Number of seconds to wait for a command prompt after issuing a command. The default value is 3. Login Timeout (seconds) login_timeout Number of seconds to wait for a command prompt after login. The default value is 5. Times to Retry Power On Operation retry_on Number of attempts to retry a power on operation. The default value is 1. Use SSH secure Indicates that the system will use SSH to access the device. When using SSH, you must specify either a password, a password script, or an identity file. SSH Options ssh_options SSH options to use. The default value is -1 -c blowfish . Path to SSH Identity File identity_file The identity file for SSH. The fence agents for HP iLO devices ( fence_ilo ) and HP iLO2 devices ( fence_ilo2 ) share the same implementation. Table A.16, "HP iLO and HP iLO2" lists the fence device parameters used by these agents. Table A.16. HP iLO and HP iLO2 luci Field cluster.conf Attribute Description Name name A name for the server with HP iLO support. IP Address or Hostname ipaddr The IP address or host name assigned to the device. IP Port (optional) ipport TCP port to use for connection with the device. The default value is 443. Login login The login name used to access the device. 
Password passwd The password used to authenticate the connection to the device. Password Script (optional) passwd_script The script that supplies a password for access to the fence device. Using this supersedes the Password parameter. Power Wait (seconds) power_wait Number of seconds to wait after issuing a power off or power on command. Delay (seconds) delay The number of seconds to wait before fencing is started. The default value is 0. Power Timeout (seconds) power_timeout Number of seconds to continue testing for a status change after issuing a power off or power on command. The default value is 20. Shell Timeout (seconds) shell_timeout Number of seconds to wait for a command prompt after issuing a command. The default value is 3. Login Timeout (seconds) login_timeout Number of seconds to wait for a command prompt after login. The default value is 5. Times to Retry Power On Operation retry_on Number of attempts to retry a power on operation. The default value is 1. The fence agents for HP iLO devices over SSH ( fence_ilo_ssh ), HP iLO3 devices over SSH ( fence_ilo3_ssh ), and HP iLO4 devices over SSH ( fence_ilo4_ssh ) share the same implementation. Table A.17, "HP iLO over SSH, HP iLO3 over SSH, HPiLO4 over SSH (Red Hat Enterprise Linux 6.7 and later)" lists the fence device parameters used by these agents. Table A.17. HP iLO over SSH, HP iLO3 over SSH, HPiLO4 over SSH (Red Hat Enterprise Linux 6.7 and later) luci Field cluster.conf Attribute Description Name name A name for the server with HP iLO support. IP Address or Hostname ipaddr The IP address or host name assigned to the device. Login login The login name used to access the device. Password passwd The password used to authenticate the connection to the device. Password Script (optional) passwd_script The script that supplies a password for access to the fence device. Using this supersedes the Password parameter. Use SSH secure Indicates that the system will use SSH to access the device. When using SSH, you must specify either a password, a password script, or an identity file. Path to SSH Identity File identity_file The Identity file for SSH. TCP Port ipport UDP/TCP port to use for connections with the device; the default value is 23. Force Command Prompt cmd_prompt The command prompt to use. The default value is 'MP>', 'hpiLO->'. Method to Fence method The method to fence: on/off or cycle Power Wait (seconds) power_wait Number of seconds to wait after issuing a power off or power on command. Delay (seconds) delay The number of seconds to wait before fencing is started. The default value is 0. Power Timeout (seconds) power_timeout Number of seconds to continue testing for a status change after issuing a power off or power on command. The default value is 20. Shell Timeout (seconds) shell_timeout Number of seconds to wait for a command prompt after issuing a command. The default value is 3. Login Timeout (seconds) login_timeout Number of seconds to wait for a command prompt after login. The default value is 5. Times to Retry Power On Operation retry_on Number of attempts to retry a power on operation. The default value is 1. Table A.18, "HP iLO MP" lists the fence device parameters used by fence_ilo_mp , the fence agent for HP iLO MP devices. Table A.18. HP iLO MP luci Field cluster.conf Attribute Description Name name A name for the server with HP iLO support. IP Address or Hostname ipaddr The IP address or host name assigned to the device. IP Port (optional) ipport TCP port to use for connection with the device. 
Login login The login name used to access the device. Password passwd The password used to authenticate the connection to the device. Password Script (optional) passwd_script The script that supplies a password for access to the fence device. Using this supersedes the Password parameter. Use SSH secure Indicates that the system will use SSH to access the device. When using SSH, you must specify either a password, a password script, or an identity file. SSH Options ssh_options SSH options to use. The default value is -1 -c blowfish . Path to SSH Identity File identity_file The Identity file for SSH. Force Command Prompt cmd_prompt The command prompt to use. The default value is 'MP>', 'hpiLO->'. Power Wait (seconds) power_wait Number of seconds to wait after issuing a power off or power on command. Delay (seconds) delay The number of seconds to wait before fencing is started. The default value is 0. Power Timeout (seconds) power_timeout Number of seconds to continue testing for a status change after issuing a power off or power on command. The default value is 20. Shell Timeout (seconds) shell_timeout Number of seconds to wait for a command prompt after issuing a command. The default value is 3. Login Timeout (seconds) login_timeout Number of seconds to wait for a command prompt after login. The default value is 5. Times to Retry Power On Operation retry_on Number of attempts to retry a power on operation. The default value is 1. Table A.19, "HP Moonshot iLO (Red Hat Enterprise Linux 6.7 and later)" lists the fence device parameters used by fence_ilo_moonshot , the fence agent for HP Moonshot iLO devices. Table A.19. HP Moonshot iLO (Red Hat Enterprise Linux 6.7 and later) luci Field cluster.conf Attribute Description Name name A name for the server with HP iLO support. IP Address or Hostname ipaddr The IP address or host name assigned to the device. Login login The login name used to access the device. Password passwd The password used to authenticate the connection to the device. Password Script (optional) passwd_script The script that supplies a password for access to the fence device. Using this supersedes the Password parameter. Use SSH secure Indicates that the system will use SSH to access the device. When using SSH, you must specify either a password, a password script, or an identity file. Path to SSH Identity File identity_file The Identity file for SSH. TCP Port ipport UDP/TCP port to use for connections with the device; the default value is 22. Force Command Prompt cmd_prompt The command prompt to use. The default value is 'MP>', 'hpiLO->'. Power Wait (seconds) power_wait Number of seconds to wait after issuing a power off or power on command. Delay (seconds) delay The number of seconds to wait before fencing is started. The default value is 0. Power Timeout (seconds) power_timeout Number of seconds to continue testing for a status change after issuing a power off or power on command. The default value is 20. Shell Timeout (seconds) shell_timeout Number of seconds to wait for a command prompt after issuing a command. The default value is 3. Login Timeout (seconds) login_timeout Number of seconds to wait for a command prompt after login. The default value is 5. Times to Retry Power On Operation retry_on Number of attempts to retry a power on operation. The default value is 1. Table A.20, "IBM BladeCenter" lists the fence device parameters used by fence_bladecenter , the fence agent for IBM BladeCenter. Table A.20. 
IBM BladeCenter luci Field cluster.conf Attribute Description Name name A name for the IBM BladeCenter device connected to the cluster. IP Address or Hostname ipaddr The IP address or host name assigned to the device. IP port (optional) ipport TCP port to use for connection with the device. Login login The login name used to access the device. Password passwd The password used to authenticate the connection to the device. Password Script (optional) passwd_script The script that supplies a password for access to the fence device. Using this supersedes the Password parameter. Power Wait (seconds) power_wait Number of seconds to wait after issuing a power off or power on command. Power Timeout (seconds) power_timeout Number of seconds to continue testing for a status change after issuing a power off or power on command. The default value is 20. Shell Timeout (seconds) shell_timeout Number of seconds to wait for a command prompt after issuing a command. The default value is 3. Login Timeout (seconds) login_timeout Number of seconds to wait for a command prompt after login. The default value is 5. Times to Retry Power On Operation retry_on Number of attempts to retry a power on operation. The default value is 1. Use SSH secure Indicates that system will use SSH to access the device. When using SSH, you must specify either a password, a password script, or an identity file. SSH Options ssh_options SSH options to use. The default value is -1 -c blowfish . Path to SSH Identity File identity_file The identity file for SSH. Table A.21, "IBM BladeCenter SNMP" lists the fence device parameters used by fence_ibmblade , the fence agent for IBM BladeCenter over SNMP. Table A.21. IBM BladeCenter SNMP luci Field cluster.conf Attribute Description Name name A name for the IBM BladeCenter SNMP device connected to the cluster. IP Address or Hostname ipaddr The IP address or host name assigned to the device. UDP/TCP Port (optional) udpport UDP/TCP port to use for connections with the device; the default value is 161. Login login The login name used to access the device. Password passwd The password used to authenticate the connection to the device. Password Script (optional) passwd_script The script that supplies a password for access to the fence device. Using this supersedes the Password parameter. SNMP Version snmp_version The SNMP version to use (1, 2c, 3); the default value is 1. SNMP Community community The SNMP community string. SNMP Security Level snmp_sec_level The SNMP security level (noAuthNoPriv, authNoPriv, authPriv). SNMP Authentication Protocol snmp_auth_prot The SNMP authentication protocol (MD5, SHA). SNMP Privacy Protocol snmp_priv_prot The SNMP privacy protocol (DES, AES). SNMP privacy protocol password snmp_priv_passwd The SNMP Privacy Protocol Password. SNMP Privacy Protocol Script snmp_priv_passwd_script The script that supplies a password for SNMP privacy protocol. Using this supersedes the SNMP privacy protocol password parameter. Power Wait (seconds) power_wait Number of seconds to wait after issuing a power off or power on command. Power Timeout (seconds) power_timeout Number of seconds to continue testing for a status change after issuing a power off or power on command. The default value is 20. Shell Timeout (seconds) shell_timeout Number of seconds to wait for a command prompt after issuing a command. The default value is 3. Login Timeout (seconds) login_timeout Number of seconds to wait for a command prompt after login. The default value is 5. 
Times to Retry Power On Operation retry_on Number of attempts to retry a power on operation. The default value is 1. Port (Outlet) Number port Physical plug number or name of virtual machine. Delay (optional) delay The number of seconds to wait before fencing is started. The default value is 0. Table A.22, "IBM iPDU (Red Hat Enterprise Linux 6.4 and later)" lists the fence device parameters used by fence_ipdu , the fence agent for iPDU over SNMP devices. Table A.22. IBM iPDU (Red Hat Enterprise Linux 6.4 and later) luci Field cluster.conf Attribute Description Name name A name for the IBM iPDU device connected to the cluster into which the fence daemon logs by means of the SNMP protocol. IP Address or Hostname ipaddr The IP address or host name assigned to the device. UDP/TCP Port udpport The UDP/TCP port to use for connection with the device; the default value is 161. Login login The login name used to access the device. Password passwd The password used to authenticate the connection to the device. Password Script (optional) passwd_script The script that supplies a password for access to the fence device. Using this supersedes the Password parameter. SNMP Version snmp_version The SNMP version to use (1, 2c, 3); the default value is 1. SNMP Community community The SNMP community string; the default value is private . SNMP Security Level snmp_sec_level The SNMP security level (noAuthNoPriv, authNoPriv, authPriv). SNMP Authentication Protocol snmp_auth_prot The SNMP Authentication Protocol (MD5, SHA). SNMP Privacy Protocol snmp_priv_prot The SNMP privacy protocol (DES, AES). SNMP Privacy Protocol Password snmp_priv_passwd The SNMP privacy protocol password. SNMP Privacy Protocol Script snmp_priv_passwd_script The script that supplies a password for SNMP privacy protocol. Using this supersedes the SNMP privacy protocol password parameter. Power Wait (seconds) power_wait Number of seconds to wait after issuing a power off or power on command. Power Timeout (seconds) power_timeout Number of seconds to continue testing for a status change after issuing a power off or power on command. The default value is 20. Shell Timeout (seconds) shell_timeout Number of seconds to wait for a command prompt after issuing a command. The default value is 3. Login Timeout (seconds) login_timeout Number of seconds to wait for a command prompt after login. The default value is 5. Times to Retry Power On Operation retry_on Number of attempts to retry a power on operation. The default value is 1. Port (Outlet) Number port Physical plug number or name of virtual machine. Delay (optional) delay The number of seconds to wait before fencing is started. The default value is 0. Table A.23, "IF MIB" lists the fence device parameters used by fence_ifmib , the fence agent for IF-MIB devices. Table A.23. IF MIB luci Field cluster.conf Attribute Description Name name A name for the IF MIB device connected to the cluster. IP Address or Hostname ipaddr The IP address or host name assigned to the device. UDP/TCP Port (optional) udpport The UDP/TCP port to use for connection with the device; the default value is 161. Login login The login name used to access the device. Password passwd The password used to authenticate the connection to the device. Password Script (optional) passwd_script The script that supplies a password for access to the fence device. Using this supersedes the Password parameter. SNMP Version snmp_version The SNMP version to use (1, 2c, 3); the default value is 1. 
SNMP Community community The SNMP community string. SNMP Security Level snmp_sec_level The SNMP security level (noAuthNoPriv, authNoPriv, authPriv). SNMP Authentication Protocol snmp_auth_prot The SNMP authentication protocol (MD5, SHA). SNMP Privacy Protocol snmp_priv_prot The SNMP privacy protocol (DES, AES). SNMP Privacy Protocol Password snmp_priv_passwd The SNMP privacy protocol password. SNMP Privacy Protocol Script snmp_priv_passwd_script The script that supplies a password for SNMP privacy protocol. Using this supersedes the SNMP privacy protocol password parameter. Power Wait (seconds) power_wait Number of seconds to wait after issuing a power off or power on command. Power Timeout (seconds) power_timeout Number of seconds to continue testing for a status change after issuing a power off or power on command. The default value is 20. Shell Timeout (seconds) shell_timeout Number of seconds to wait for a command prompt after issuing a command. The default value is 3. Login Timeout (seconds) login_timeout Number of seconds to wait for a command prompt after login. The default value is 5. Times to Retry Power On Operation retry_on Number of attempts to retry a power on operation. The default value is 1. Port (Outlet) Number port Physical plug number or name of virtual machine. Delay (optional) delay The number of seconds to wait before fencing is started. The default value is 0. Table A.24, "Intel Modular" lists the fence device parameters used by fence_intelmodular , the fence agent for Intel Modular. Table A.24. Intel Modular luci Field cluster.conf Attribute Description Name name A name for the Intel Modular device connected to the cluster. IP Address or Hostname ipaddr The IP address or host name assigned to the device. UDP/TCP Port (optional) udpport The UDP/TCP port to use for connection with the device; the default value is 161. Login login The login name used to access the device. Password passwd The password used to authenticate the connection to the device. Password Script (optional) passwd_script The script that supplies a password for access to the fence device. Using this supersedes the Password parameter. SNMP Version snmp_version The SNMP version to use (1, 2c, 3); the default value is 1. SNMP Community community The SNMP community string; the default value is private . SNMP Security Level snmp_sec_level The SNMP security level (noAuthNoPriv, authNoPriv, authPriv). SNMP Authentication Protocol snmp_auth_prot The SNMP authentication protocol (MD5, SHA). SNMP Privacy Protocol snmp_priv_prot The SNMP privacy protocol (DES, AES). SNMP Privacy Protocol Password snmp_priv_passwd The SNMP privacy protocol password. SNMP Privacy Protocol Script snmp_priv_passwd_script The script that supplies a password for SNMP privacy protocol. Using this supersedes the SNMP privacy protocol password parameter. Power Wait (seconds) power_wait Number of seconds to wait after issuing a power off or power on command. Power Timeout (seconds) power_timeout Number of seconds to continue testing for a status change after issuing a power off or power on command. The default value is 20. Shell Timeout (seconds) shell_timeout Number of seconds to wait for a command prompt after issuing a command. The default value is 3. Login Timeout (seconds) login_timeout Number of seconds to wait for a command prompt after login. The default value is 5. Times to Retry Power On Operation retry_on Number of attempts to retry a power on operation. The default value is 1. 
Port (Outlet) Number port Physical plug number or name of virtual machine. Delay (optional) delay The number of seconds to wait before fencing is started. The default value is 0. The fence agents for IPMI over LAN ( fence_ipmilan ), Dell iDRAC ( fence_idrac ), IBM Integrated Management Module ( fence_imm ), HP iLO3 devices ( fence_ilo3 ), and HP iLO4 devices ( fence_ilo4 ) share the same implementation. Table A.25, "IPMI (Intelligent Platform Management Interface) LAN, Dell iDrac, IBM Integrated Management Module, HPiLO3, HPiLO4" lists the fence device parameters used by these agents. Table A.25. IPMI (Intelligent Platform Management Interface) LAN, Dell iDrac, IBM Integrated Management Module, HPiLO3, HPiLO4 luci Field cluster.conf Attribute Description Name name A name for the fence device connected to the cluster. IP Address or Hostname ipaddr The IP address or host name assigned to the device. Login login The login name of a user capable of issuing power on/off commands to the given port. Password passwd The password used to authenticate the connection to the port. Password Script (optional) passwd_script The script that supplies a password for access to the fence device. Using this supersedes the Password parameter. Authentication Type auth Authentication type: none , password , or MD5 . Use Lanplus lanplus True or 1 . If blank, then value is False . It is recommended that you enable Lanplus to improve the security of your connection if your hardware supports it. Ciphersuite to use cipher The remote server authentication, integrity, and encryption algorithms to use for IPMIv2 lanplus connections. Privilege level privlvl The privilege level on the device. IPMI Operation Timeout timeout Timeout in seconds for IPMI operation. Power Timeout (seconds) power_timeout Number of seconds to continue testing for a status change after issuing a power off or power on command. The default value is 20. Shell Timeout (seconds) shell_timeout Number of seconds to wait for a command prompt after issuing a command. The default value is 3. Login Timeout (seconds) login_timeout Number of seconds to wait for a command prompt after login. The default value is 5. Times to Retry Power On Operation retry_on Number of attempts to retry a power on operation. The default value is 1. Power Wait (seconds) power_wait Number of seconds to wait after issuing a power off or power on command. The default value is 2 seconds for fence_ipmilan , fence_idrac , fence_imm , and fence_ilo4 . The default value is 4 seconds for fence_ilo3 . Delay (optional) delay The number of seconds to wait before fencing is started. The default value is 0. Method to Fence method The method to fence: on/off or cycle. Table A.26, "Fence kdump" lists the fence device parameters used by fence_kdump , the fence agent for the kdump crash recovery service. Note that fence_kdump is not a replacement for traditional fencing methods; the fence_kdump agent can detect only that a node has entered the kdump crash recovery service. This allows the kdump crash recovery service to complete without being preempted by traditional power fencing methods. Table A.26. Fence kdump luci Field cluster.conf Attribute Description Name name A name for the fence_kdump device. IP Family family IP network family. The default value is auto . IP Port (optional) ipport IP port number that the fence_kdump agent will use to listen for messages. The default value is 7410. Operation Timeout (seconds) (optional) timeout Number of seconds to wait for message from failed node. 
Node name nodename Name or IP address of the node to be fenced. Table A.27, "Multipath Persistent Reservation Fencing (Red Hat Enterprise Linux 6.7 and later)" lists the fence device parameters used by fence_mpath , the fence agent for multipath persistent reservation fencing. Table A.27. Multipath Persistent Reservation Fencing (Red Hat Enterprise Linux 6.7 and later) luci Field cluster.conf Attribute Description Name name A name for the fence_mpath device. Devices (Comma delimited list) devices Comma-separated list of devices to use for the current operation. Each device must support SCSI-3 persistent reservations. Use sudo when calling third-party software sudo Use sudo (without password) when calling 3rd party software. Path to sudo binary (optional) sudo_path Path to sudo binary (default value is /usr/bin/sudo ). Path to mpathpersist binary (optional) mpathpersist_path Path to mpathpersist binary (default value is /sbin/mpathpersist ). Path to a directory where the fence agent can store information (optional) store_path Path to directory where fence agent can store information (default value is /var/run/cluster ). Power Wait (seconds) power_wait Number of seconds to wait after issuing a power off or power on command. Power Timeout (seconds) power_timeout Number of seconds to continue testing for a status change after issuing a power off or power on command. The default value is 20. Shell Timeout (seconds) shell_timeout Number of seconds to wait for a command prompt after issuing a command. The default value is 3. Login Timeout (seconds) login_timeout Number of seconds to wait for a command prompt after login. The default value is 5. Times to Retry Power On Operation retry_on Number of attempts to retry a power on operation. The default value is 1. Unfencing unfence section of the cluster configuration file When enabled, this ensures that a fenced node is not re-enabled until the node has been rebooted. This is necessary for non-power fence methods. When you configure a device that requires unfencing, the cluster must first be stopped and the full configuration including devices and unfencing must be added before the cluster is started. For more information about unfencing a node, see the fence_node (8) man page. For information about configuring unfencing in the cluster configuration file, see Section 8.3, "Configuring Fencing" . For information about configuring unfencing with the ccs command, see Section 6.7.2, "Configuring a Single Storage-Based Fence Device for a Node" . Key for current action key Key to use for the current operation. This key should be unique to a node and written in /etc/multipath.conf . For the "on" action, the key specifies the key used to register the local node. For the "off" action, this key specifies the key to be removed from the device(s). This parameter is always required. Delay (optional) delay The number of seconds to wait before fencing is started. The default value is 0. Table A.28, "RHEV-M fencing (RHEL 6.2 and later against RHEV 3.0 and later)" lists the fence device parameters used by fence_rhevm , the fence agent for RHEV-M fencing. Table A.28. RHEV-M fencing (RHEL 6.2 and later against RHEV 3.0 and later) luci Field cluster.conf Attribute Description Name name Name of the RHEV-M fencing device. IP Address or Hostname ipaddr The IP address or host name assigned to the device. IP Port (optional) ipport The TCP port to use for connection with the device. Login login The login name used to access the device. 
Password passwd The password used to authenticate the connection to the device. Password Script (optional) passwd_script The script that supplies a password for access to the fence device. Using this supersedes the Password parameter. Use SSL ssl Use SSL connections to communicate with the device. Power Wait (seconds) power_wait Number of seconds to wait after issuing a power off or power on command. Power Timeout (seconds) power_timeout Number of seconds to continue testing for a status change after issuing a power off or power on command. The default value is 20. Shell Timeout (seconds) shell_timeout Number of seconds to wait for a command prompt after issuing a command. The default value is 3. Login Timeout (seconds) login_timeout Number of seconds to wait for a command prompt after login. The default value is 5. Times to Retry Power On Operation retry_on Number of attempts to retry a power on operation. The default value is 1. Port (Outlet) Number port Physical plug number or name of virtual machine. Delay (optional) delay The number of seconds to wait before fencing is started. The default value is 0. Table A.29, "SCSI Reservation Fencing" lists the fence device parameters used by fence_scsi , the fence agent for SCSI persistent reservations. Note Use of SCSI persistent reservations as a fence method is supported with the following limitations: When using SCSI fencing, all nodes in the cluster must register with the same devices so that each node can remove another node's registration key from all the devices it is registered with. Devices used for the cluster volumes should be a complete LUN, not partitions. SCSI persistent reservations work on an entire LUN, meaning that access is controlled to each LUN, not individual partitions. It is recommended that devices used for the cluster volumes be specified in the format /dev/disk/by-id/ xxx where possible. Devices specified in this format are consistent among all nodes and will point to the same disk, unlike devices specified in a format such as /dev/sda which can point to different disks from machine to machine and across reboots. Table A.29. SCSI Reservation Fencing luci Field cluster.conf Attribute Description Name name A name for the SCSI fence device. Unfencing unfence section of the cluster configuration file When enabled, this ensures that a fenced node is not re-enabled until the node has been rebooted. This is necessary for non-power fence methods (that is, SAN/storage fencing). When you configure a device that requires unfencing, the cluster must first be stopped and the full configuration including devices and unfencing must be added before the cluster is started. For more information about unfencing a node, see the fence_node (8) man page. For information about configuring unfencing in the cluster configuration file, see Section 8.3, "Configuring Fencing" . For information about configuring unfencing with the ccs command, see Section 6.7.2, "Configuring a Single Storage-Based Fence Device for a Node" . Node name nodename The node name is used to generate the key value used for the current operation. Key for current action key (overrides node name) Key to use for the current operation. This key should be unique to a node. For the "on" action, the key specifies the key use to register the local node. For the "off" action,this key specifies the key to be removed from the device(s). Delay (optional) delay The number of seconds to wait before fencing is started. The default value is 0. 
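To make the unfencing requirement for SCSI reservation fencing concrete, the following cluster.conf fragment is a minimal sketch only; the device name, node name, and placement are illustrative and should be adapted to your own configuration as described in Section 8.3, "Configuring Fencing" :

  <!-- excerpt from the clusternodes section -->
  <clusternode name="node1.example.com" nodeid="1">
    <fence>
      <method name="SCSI">
        <device name="scsi_fence"/>
      </method>
    </fence>
    <unfence>
      <device name="scsi_fence" action="on"/>
    </unfence>
  </clusternode>
  <!-- excerpt from the fencedevices section -->
  <fencedevice agent="fence_scsi" name="scsi_fence"/>

With the unfence section present, the node's registration key is added back to the shared devices (action="on") when the node rejoins the cluster; as noted above, the cluster must be stopped and the full configuration, including devices and unfencing, must be in place before the cluster is started.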
Table A.30, "VMware Fencing (SOAP Interface) (Red Hat Enterprise Linux 6.2 and later)" lists the fence device parameters used by fence_vmware_soap , the fence agent for VMware over SOAP API. Table A.30. VMware Fencing (SOAP Interface) (Red Hat Enterprise Linux 6.2 and later) luci Field cluster.conf Attribute Description Name name Name of the virtual machine fencing device. IP Address or Hostname ipaddr The IP address or host name assigned to the device. IP Port (optional) ipport The TCP port to use for connection with the device. The default port is 80, or 443 if Use SSL is selected. Login login The login name used to access the device. Password passwd The password used to authenticate the connection to the device. Password Script (optional) passwd_script The script that supplies a password for access to the fence device. Using this supersedes the Password parameter. Power Wait (seconds) power_wait Number of seconds to wait after issuing a power off or power on command. Power Timeout (seconds) power_timeout Number of seconds to continue testing for a status change after issuing a power off or power on command. The default value is 20. Shell Timeout (seconds) shell_timeout Number of seconds to wait for a command prompt after issuing a command. The default value is 3. Login Timeout (seconds) login_timeout Number of seconds to wait for a command prompt after login. The default value is 5. Times to Retry Power On Operation retry_on Number of attempts to retry a power on operation. The default value is 1. VM name port Name of virtual machine in inventory path format (for example, /datacenter/vm/Discovered_virtual_machine/myMachine). VM UUID uuid The UUID of the virtual machine to fence. Delay (optional) delay The number of seconds to wait before fencing is started. The default value is 0. Use SSL ssl Use SSL connections to communicate with the device. Table A.31, "WTI Power Switch" lists the fence device parameters used by fence_wti , the fence agent for the WTI network power switch. Table A.31. WTI Power Switch luci Field cluster.conf Attribute Description Name name A name for the WTI power switch connected to the cluster. IP Address or Hostname ipaddr The IP or host name address assigned to the device. IP Port (optional) ipport The TCP port to use to connect to the device. Login login The login name used to access the device. Password passwd The password used to authenticate the connection to the device. Password Script (optional) passwd_script The script that supplies a password for access to the fence device. Using this supersedes the Password parameter. Force command prompt cmd_prompt The command prompt to use. The default value is ['RSM>', '>MPC', 'IPS>', 'TPS>', 'NBB>', 'NPS>', 'VMR>'] Power Wait (seconds) power_wait Number of seconds to wait after issuing a power off or power on command. Power Timeout (seconds) power_timeout Number of seconds to continue testing for a status change after issuing a power off or power on command. The default value is 20. Shell Timeout (seconds) shell_timeout Number of seconds to wait for a command prompt after issuing a command. The default value is 3. Login Timeout (seconds) login_timeout Number of seconds to wait for a command prompt after login. The default value is 5. Times to Retry Power On Operation retry_on Number of attempts to retry a power on operation. The default value is 1. Use SSH secure Indicates that system will use SSH to access the device. When using SSH, you must specify either a password, a password script, or an identity file. 
SSH Options ssh_options SSH options to use. The default value is -1 -c blowfish . Path to SSH Identity File identity_file The identity file for SSH. Port port Physical plug number or name of virtual machine.
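As a worked illustration of how the luci fields in these tables map onto cluster.conf attributes, the following fragment sketches a WTI power switch definition and its use in one node's fence method; the IP address, credentials, and outlet number are placeholder values, not recommendations:

  <!-- excerpt from the fencedevices section: device-wide attributes -->
  <fencedevice agent="fence_wti" name="wti_fence" ipaddr="192.168.0.100" login="admin" passwd="password" power_wait="5"/>

  <!-- excerpt from a clusternode entry: per-node attributes such as port -->
  <fence>
    <method name="WTI">
      <device name="wti_fence" port="3"/>
    </method>
  </fence>

In general, attributes that describe the switch itself (such as ipaddr, login, passwd, and the timeout values) belong on the fencedevice element, while values that differ from node to node (such as port) are given on the device element inside the node's fence method.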
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/ap-fence-device-param-CA
4.4.2. Displaying Quota Limits and Usage
4.4.2. Displaying Quota Limits and Usage Quota limits and current usage can be displayed for a specific user or group using the gfs_quota get command. The entire contents of the quota file can also be displayed using the gfs_quota list command, in which case all IDs with a non-zero hard limit, warn limit, or value are listed. Usage Displaying Quota Limits for a User Displaying Quota Limits for a Group Displaying Entire Quota File User A user ID to display information about a specific user. It can be either a user name from the password file or the UID number. Group A group ID to display information about a specific group. It can be either a group name from the group file or the GID number. MountPoint Specifies the GFS file system to which the actions apply. Command Output GFS quota information from the gfs_quota command is displayed as follows: The LimitSize , WarnSize , and Value numbers (values) are in units of megabytes by default. Adding the -k , -s , or -b flags to the command line changes the units to kilobytes, sectors, or file-system blocks, respectively. User A user name or ID to which the data is associated. Group A group name or ID to which the data is associated. LimitSize The hard limit set for the user or group. This value is zero if no limit has been set. Value The actual amount of disk space used by the user or group. Comments When displaying quota information, the gfs_quota command does not resolve UIDs and GIDs into names if the -n option is added to the command line. Space allocated to GFS's hidden files can be left out of displayed values for the root UID and GID by adding the -d option to the command line. This is useful when trying to match the numbers from gfs_quota with the results of a du command. Examples This example displays quota information for all users and groups that have a limit set or are using any disk space on file system /gfs . This example displays quota information in sectors for group users on file system /gfs .
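For example, with the default megabyte units, running gfs_quota get -u jsmith -f /gfs might produce output like the following line; the user name and numbers are illustrative only:

  user jsmith: limit: 100.0 warn: 90.0 value: 42.5

The gfs_quota list command prints one such line for every user or group ID with a non-zero hard limit, warn limit, or value.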
[ "gfs_quota get -u User -f MountPoint", "gfs_quota get -g Group -f MountPoint", "gfs_quota list -f MountPoint", "user User : limit: LimitSize warn: WarnSize value: Value group Group : limit: LimitSize warn: WarnSize value: Value", "gfs_quota list -f /gfs", "gfs_quota get -g users -f /gfs -s" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/global_file_system/s2-manage-displayquota
Chapter 90. workflow
Chapter 90. workflow This chapter describes the commands under the workflow command. 90.1. workflow create Create new workflow. Usage: Table 90.1. Positional Arguments Value Summary definition Workflow definition file. Table 90.2. Optional Arguments Value Summary -h, --help Show this help message and exit --marker [MARKER] The last execution uuid of the page, displays list of executions after "marker". --limit [LIMIT] Maximum number of entries to return in a single result. --sort_keys [SORT_KEYS] Comma-separated list of sort keys to sort results by. Default: created_at. Example: mistral execution-list --sort_keys=id,description --sort_dirs [SORT_DIRS] Comma-separated list of sort directions. default: asc. Example: mistral execution-list --sort_keys=id,description --sort_dirs=asc,desc --filter FILTERS Filters. can be repeated. --namespace [NAMESPACE] Namespace to create the workflow within. --public With this flag workflow will be marked as "public". Table 90.3. Output Formatters Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 90.4. CSV Formatter Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 90.5. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 90.6. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 90.2. workflow definition show Show workflow definition. Usage: Table 90.7. Positional Arguments Value Summary identifier Workflow id or name. Table 90.8. Optional Arguments Value Summary -h, --help Show this help message and exit --namespace [NAMESPACE] Namespace to get the workflow from. 90.3. workflow delete Delete workflow. Usage: Table 90.9. Positional Arguments Value Summary workflow Name or id of workflow(s). Table 90.10. Optional Arguments Value Summary -h, --help Show this help message and exit --namespace [NAMESPACE] Namespace to delete the workflow from. 90.4. workflow engine service list List all services. Usage: Table 90.11. Optional Arguments Value Summary -h, --help Show this help message and exit --marker [MARKER] The last execution uuid of the page, displays list of executions after "marker". --limit [LIMIT] Maximum number of entries to return in a single result. --sort_keys [SORT_KEYS] Comma-separated list of sort keys to sort results by. Default: created_at. Example: mistral execution-list --sort_keys=id,description --sort_dirs [SORT_DIRS] Comma-separated list of sort directions. default: asc. Example: mistral execution-list --sort_keys=id,description --sort_dirs=asc,desc --filter FILTERS Filters. can be repeated. Table 90.12. 
Output Formatters Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 90.13. CSV Formatter Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 90.14. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 90.15. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 90.5. workflow env create Create new environment. Usage: Table 90.16. Positional Arguments Value Summary file Environment configuration file in json or yaml Table 90.17. Optional Arguments Value Summary -h, --help Show this help message and exit Table 90.18. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 90.19. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 90.20. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 90.21. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 90.6. workflow env delete Delete environment. Usage: Table 90.22. Positional Arguments Value Summary environment Name of environment(s). Table 90.23. Optional Arguments Value Summary -h, --help Show this help message and exit 90.7. workflow env list List all environments. Usage: Table 90.24. Optional Arguments Value Summary -h, --help Show this help message and exit --marker [MARKER] The last execution uuid of the page, displays list of executions after "marker". --limit [LIMIT] Maximum number of entries to return in a single result. --sort_keys [SORT_KEYS] Comma-separated list of sort keys to sort results by. Default: created_at. Example: mistral execution-list --sort_keys=id,description --sort_dirs [SORT_DIRS] Comma-separated list of sort directions. default: asc. Example: mistral execution-list --sort_keys=id,description --sort_dirs=asc,desc --filter FILTERS Filters. can be repeated. Table 90.25. Output Formatters Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 90.26. CSV Formatter Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 90.27. 
JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 90.28. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 90.8. workflow env show Show specific environment. Usage: Table 90.29. Positional Arguments Value Summary environment Environment name Table 90.30. Optional Arguments Value Summary -h, --help Show this help message and exit --export Export the environment suitable for import Table 90.31. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 90.32. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 90.33. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 90.34. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 90.9. workflow env update Update environment. Usage: Table 90.35. Positional Arguments Value Summary file Environment configuration file in json or yaml Table 90.36. Optional Arguments Value Summary -h, --help Show this help message and exit Table 90.37. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 90.38. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 90.39. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 90.40. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 90.10. workflow execution create Create new execution. Usage: Table 90.41. Positional Arguments Value Summary workflow_identifier Workflow id or name. workflow name will be deprecated since Mitaka. workflow_input Workflow input params Workflow additional parameters Table 90.42. Optional Arguments Value Summary -h, --help Show this help message and exit --namespace [NAMESPACE] Workflow namespace. -d DESCRIPTION, --description DESCRIPTION Execution description -s [SOURCE_EXECUTION_ID] Workflow execution id which will allow operators to create a new workflow execution based on the previously successful executed workflow. Example: mistral execution-create -s 123e4567-e89b-12d3-a456-426655440000 Table 90.43. 
Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 90.44. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 90.45. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 90.46. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 90.11. workflow execution delete Delete execution. Usage: Table 90.47. Positional Arguments Value Summary execution Id of execution identifier(s). Table 90.48. Optional Arguments Value Summary -h, --help Show this help message and exit --force Force the deletion of an execution. might cause a cascade of errors if used for running executions. 90.12. workflow execution input show Show execution input data. Usage: Table 90.49. Positional Arguments Value Summary id Execution id Table 90.50. Optional Arguments Value Summary -h, --help Show this help message and exit 90.13. workflow execution list List all executions. Usage: Table 90.51. Optional Arguments Value Summary -h, --help Show this help message and exit --marker [MARKER] The last execution uuid of the page, displays list of executions after "marker". --limit [LIMIT] Maximum number of entries to return in a single result. --sort_keys [SORT_KEYS] Comma-separated list of sort keys to sort results by. Default: created_at. Example: mistral execution-list --sort_keys=id,description --sort_dirs [SORT_DIRS] Comma-separated list of sort directions. default: asc. Example: mistral execution-list --sort_keys=id,description --sort_dirs=asc,desc --filter FILTERS Filters. can be repeated. --oldest Display the executions starting from the oldest entries instead of the newest --task [TASK] Parent task execution id associated with workflow execution list. Table 90.52. Output Formatters Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 90.53. CSV Formatter Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 90.54. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 90.55. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 90.14. workflow execution output show Show execution output data. Usage: Table 90.56. Positional Arguments Value Summary id Execution id Table 90.57. Optional Arguments Value Summary -h, --help Show this help message and exit 90.15. 
workflow execution published show Show workflow global published variables. Usage: Table 90.58. Positional Arguments Value Summary id Workflow id Table 90.59. Optional Arguments Value Summary -h, --help Show this help message and exit 90.16. workflow execution report show Print execution report. Usage: Table 90.60. Positional Arguments Value Summary id Execution id Table 90.61. Optional Arguments Value Summary -h, --help Show this help message and exit --errors-only Only error paths will be included. --no-errors-only Not only error paths will be included. --max-depth [MAX_DEPTH] Maximum depth of the workflow execution tree. if 0, only the root workflow execution and its tasks will be included 90.17. workflow execution show Show specific execution. Usage: Table 90.62. Positional Arguments Value Summary execution Execution identifier Table 90.63. Optional Arguments Value Summary -h, --help Show this help message and exit Table 90.64. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 90.65. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 90.66. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 90.67. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 90.18. workflow execution update Update execution. Usage: Table 90.68. Positional Arguments Value Summary id Execution identifier Table 90.69. Optional Arguments Value Summary -h, --help Show this help message and exit -s {RUNNING,PAUSED,SUCCESS,ERROR,CANCELLED}, --state {RUNNING,PAUSED,SUCCESS,ERROR,CANCELLED} Execution state -e ENV, --env ENV Environment variables -d DESCRIPTION, --description DESCRIPTION Execution description Table 90.70. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 90.71. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 90.72. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 90.73. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 90.19. workflow list List all workflows. Usage: Table 90.74. Optional Arguments Value Summary -h, --help Show this help message and exit --marker [MARKER] The last execution uuid of the page, displays list of executions after "marker". --limit [LIMIT] Maximum number of entries to return in a single result. --sort_keys [SORT_KEYS] Comma-separated list of sort keys to sort results by. Default: created_at. 
Example: mistral execution-list --sort_keys=id,description --sort_dirs [SORT_DIRS] Comma-separated list of sort directions. default: asc. Example: mistral execution-list --sort_keys=id,description --sort_dirs=asc,desc --filter FILTERS Filters. can be repeated. Table 90.75. Output Formatters Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 90.76. CSV Formatter Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 90.77. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 90.78. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 90.20. workflow show Show specific workflow. Usage: Table 90.79. Positional Arguments Value Summary workflow Workflow id or name. Table 90.80. Optional Arguments Value Summary -h, --help Show this help message and exit --namespace [NAMESPACE] Namespace to get the workflow from. Table 90.81. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 90.82. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 90.83. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 90.84. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 90.21. workflow update Update workflow. Usage: Table 90.85. Positional Arguments Value Summary definition Workflow definition Table 90.86. Optional Arguments Value Summary -h, --help Show this help message and exit --marker [MARKER] The last execution uuid of the page, displays list of executions after "marker". --limit [LIMIT] Maximum number of entries to return in a single result. --sort_keys [SORT_KEYS] Comma-separated list of sort keys to sort results by. Default: created_at. Example: mistral execution-list --sort_keys=id,description --sort_dirs [SORT_DIRS] Comma-separated list of sort directions. default: asc. Example: mistral execution-list --sort_keys=id,description --sort_dirs=asc,desc --filter FILTERS Filters. can be repeated. --id ID Workflow id. --namespace [NAMESPACE] Namespace of the workflow. --public With this flag workflow will be marked as "public". Table 90.87. 
Output Formatters Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 90.88. CSV Formatter Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 90.89. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 90.90. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 90.22. workflow validate Validate workflow. Usage: Table 90.91. Positional Arguments Value Summary definition Workflow definition file Table 90.92. Optional Arguments Value Summary -h, --help Show this help message and exit Table 90.93. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 90.94. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 90.95. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 90.96. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show.
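The subcommands in this chapter are typically combined into a short validate, register, and run cycle. The following sketch shows one such cycle; the definition file name, workflow name, and execution ID placeholder are illustrative assumptions rather than values taken from this reference.

Contents of hello.yaml, a minimal Mistral v2 workflow definition (illustrative):

---
version: '2.0'
hello_workflow:
  type: direct
  tasks:
    say_hello:
      action: std.echo output="Hello"

Validate the definition, register it, start an execution, and inspect the result:

openstack workflow validate hello.yaml
openstack workflow create hello.yaml --public
openstack workflow execution create hello_workflow
openstack workflow execution show <execution_id>
openstack workflow execution output show <execution_id>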
[ "openstack workflow create [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--marker [MARKER]] [--limit [LIMIT]] [--sort_keys [SORT_KEYS]] [--sort_dirs [SORT_DIRS]] [--filter FILTERS] [--namespace [NAMESPACE]] [--public] definition", "openstack workflow definition show [-h] [--namespace [NAMESPACE]] identifier", "openstack workflow delete [-h] [--namespace [NAMESPACE]] workflow [workflow ...]", "openstack workflow engine service list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--marker [MARKER]] [--limit [LIMIT]] [--sort_keys [SORT_KEYS]] [--sort_dirs [SORT_DIRS]] [--filter FILTERS]", "openstack workflow env create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] file", "openstack workflow env delete [-h] environment [environment ...]", "openstack workflow env list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--marker [MARKER]] [--limit [LIMIT]] [--sort_keys [SORT_KEYS]] [--sort_dirs [SORT_DIRS]] [--filter FILTERS]", "openstack workflow env show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--export] environment", "openstack workflow env update [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] file", "openstack workflow execution create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--namespace [NAMESPACE]] [-d DESCRIPTION] [-s [SOURCE_EXECUTION_ID]] [workflow_identifier] [workflow_input] [params]", "openstack workflow execution delete [-h] [--force] execution [execution ...]", "openstack workflow execution input show [-h] id", "openstack workflow execution list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--marker [MARKER]] [--limit [LIMIT]] [--sort_keys [SORT_KEYS]] [--sort_dirs [SORT_DIRS]] [--filter FILTERS] [--oldest] [--task [TASK]]", "openstack workflow execution output show [-h] id", "openstack workflow execution published show [-h] id", "openstack workflow execution report show [-h] [--errors-only] [--no-errors-only] [--max-depth [MAX_DEPTH]] id", "openstack workflow execution show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] execution", "openstack workflow execution update [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [-s {RUNNING,PAUSED,SUCCESS,ERROR,CANCELLED}] [-e ENV] [-d DESCRIPTION] id", "openstack workflow list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--marker [MARKER]] [--limit [LIMIT]] [--sort_keys [SORT_KEYS]] [--sort_dirs [SORT_DIRS]] [--filter 
FILTERS]", "openstack workflow show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--namespace [NAMESPACE]] workflow", "openstack workflow update [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--marker [MARKER]] [--limit [LIMIT]] [--sort_keys [SORT_KEYS]] [--sort_dirs [SORT_DIRS]] [--filter FILTERS] [--id ID] [--namespace [NAMESPACE]] [--public] definition", "openstack workflow validate [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] definition" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/command_line_interface_reference/workflow
Chapter 1. Preparing to install with the Agent-based Installer
Chapter 1. Preparing to install with the Agent-based Installer 1.1. About the Agent-based Installer The Agent-based installation method provides the flexibility to boot your on-premises servers in any way that you choose. It combines the ease of use of the Assisted Installation service with the ability to run offline, including in air-gapped environments. Agent-based installation is a subcommand of the OpenShift Container Platform installer. It generates a bootable ISO image containing all of the information required to deploy an OpenShift Container Platform cluster, with an available release image. The configuration is in the same format as for the installer-provisioned infrastructure and user-provisioned infrastructure installation methods. The Agent-based Installer can also optionally generate or accept Zero Touch Provisioning (ZTP) custom resources. ZTP allows you to provision new edge sites with declarative configurations of bare-metal equipment. 1.2. Understanding Agent-based Installer As an OpenShift Container Platform user, you can leverage the advantages of the Assisted Installer hosted service in disconnected environments. The Agent-based installation comprises a bootable ISO that contains the Assisted discovery agent and the Assisted Service. Both are required to perform the cluster installation, but the latter runs on only one of the hosts. The openshift-install agent create image subcommand generates an ephemeral ISO based on the inputs that you provide. You can choose to provide inputs through the following manifests: Preferred: install-config.yaml agent-config.yaml or Optional: ZTP manifests cluster-manifests/cluster-deployment.yaml cluster-manifests/agent-cluster-install.yaml cluster-manifests/pull-secret.yaml cluster-manifests/infraenv.yaml cluster-manifests/cluster-image-set.yaml cluster-manifests/nmstateconfig.yaml mirror/registries.conf mirror/ca-bundle.crt 1.2.1. Agent-based Installer workflow One of the control plane hosts runs the Assisted Service at the start of the boot process and eventually becomes the bootstrap host. This node is called the rendezvous host (node 0). The Assisted Service ensures that all the hosts meet the requirements and triggers an OpenShift Container Platform cluster deployment. All the nodes have the Red Hat Enterprise Linux CoreOS (RHCOS) image written to the disk. The non-bootstrap nodes reboot and initiate a cluster deployment. Once the nodes are rebooted, the rendezvous host reboots and joins the cluster. The bootstrapping is complete and the cluster is deployed. Figure 1.1. Node installation workflow You can install a disconnected OpenShift Container Platform cluster through the openshift-install agent create image subcommand for the following topologies: A single-node OpenShift Container Platform cluster (SNO) : A node that is both a master and worker. A three-node OpenShift Container Platform cluster : A compact cluster that has three master nodes that are also worker nodes. Highly available OpenShift Container Platform cluster (HA) : Three master nodes with any number of worker nodes. 1.2.2. Recommended resources for topologies Recommended cluster resources for the following topologies: Table 1.1. 
Recommended cluster resources Topology Number of master nodes Number of worker nodes vCPU Memory Storage Single-node cluster 1 0 8 vCPUs 16GB of RAM 120GB Compact cluster 3 0 or 1 8 vCPUs 16GB of RAM 120GB HA cluster 3 2 and above 8 vCPUs 16GB of RAM 120GB The following platforms are supported: baremetal vsphere external none Important For platform none : The none option requires the provision of DNS name resolution and load balancing infrastructure in your cluster. See Requirements for a cluster using the platform "none" option in the "Additional resources" section for more information. Review the information in the guidelines for deploying OpenShift Container Platform on non-tested platforms before you attempt to install an OpenShift Container Platform cluster in virtualized or cloud environments. Additional resources Requirements for a cluster using the platform "none" option Increase the network MTU Adding worker nodes to single-node OpenShift clusters 1.3. About FIPS compliance For many OpenShift Container Platform customers, regulatory readiness, or compliance, on some level is required before any systems can be put into production. That regulatory readiness can be imposed by national standards, industry standards or the organization's corporate governance framework. Federal Information Processing Standards (FIPS) compliance is one of the most critical components required in highly secure environments to ensure that only supported cryptographic technologies are allowed on nodes. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 1.4. Configuring FIPS through the Agent-based Installer During a cluster deployment, the Federal Information Processing Standards (FIPS) change is applied when the Red Hat Enterprise Linux CoreOS (RHCOS) machines are deployed in your cluster. For Red Hat Enterprise Linux (RHEL) machines, you must enable FIPS mode when you install the operating system on the machines that you plan to use as worker machines. You can enable FIPS mode through the preferred method of install-config.yaml and agent-config.yaml : You must set the value of the fips field to True in the install-config.yaml file: Sample install-config.yaml file apiVersion: v1 baseDomain: test.example.com metadata: name: sno-cluster fips: True Optional: If you are using the GitOps ZTP manifests, you must set the value of fips as True in the agent-install.openshift.io/install-config-overrides field in the agent-cluster-install.yaml file: Sample agent-cluster-install.yaml file apiVersion: extensions.hive.openshift.io/v1beta1 kind: AgentClusterInstall metadata: annotations: agent-install.openshift.io/install-config-overrides: '{"fips": True}' name: sno-cluster namespace: sno-cluster-test Additional resources OpenShift Security Guide Book Support for FIPS cryptography 1.5. Host configuration You can make additional configurations for each host on the cluster in the agent-config.yaml file, such as network configurations and root device hints.
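Each host entry in the agent-config.yaml file is matched to a machine by the MAC address of one of its interfaces. As a minimal sketch for collecting those addresses ahead of time (the interface name and address shown are illustrative, not values from this document), run the following on a host that is already booted into a Linux operating system:

ip -br link show

Example output (illustrative):

eno1             UP             00:ef:44:21:e6:a5 <BROADCAST,MULTICAST,UP,LOWER_UP>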
Important For each host you configure, you must provide the MAC address of an interface on the host to specify which host you are configuring. 1.5.1. Host roles Each host in the cluster is assigned a role of either master or worker . You can define the role for each host in the agent-config.yaml file by using the role parameter. If you do not assign a role to the hosts, the roles will be assigned at random during installation. It is recommended to explicitly define roles for your hosts. The rendezvousIP must be assigned to a host with the master role. This can be done manually or by allowing the Agent-based Installer to assign the role. Important You do not need to explicitly define the master role for the rendezvous host, however you cannot create configurations that conflict with this assignment. For example, if you have 4 hosts with 3 of the hosts explicitly defined to have the master role, the last host that is automatically assigned the worker role during installation cannot be configured as the rendezvous host. Sample agent-config.yaml file apiVersion: v1beta1 kind: AgentConfig metadata: name: example-cluster rendezvousIP: 192.168.111.80 hosts: - hostname: master-1 role: master interfaces: - name: eno1 macAddress: 00:ef:44:21:e6:a5 - hostname: master-2 role: master interfaces: - name: eno1 macAddress: 00:ef:44:21:e6:a6 - hostname: master-3 role: master interfaces: - name: eno1 macAddress: 00:ef:44:21:e6:a7 - hostname: worker-1 role: worker interfaces: - name: eno1 macAddress: 00:ef:44:21:e6:a8 1.5.2. About root device hints The rootDeviceHints parameter enables the installer to provision the Red Hat Enterprise Linux CoreOS (RHCOS) image to a particular device. The installer examines the devices in the order it discovers them, and compares the discovered values with the hint values. The installer uses the first discovered device that matches the hint value. The configuration can combine multiple hints, but a device must match all hints for the installer to select it. Table 1.2. Subfields Subfield Description deviceName A string containing a Linux device name such as /dev/vda or /dev/disk/by-path/ . It is recommended to use the /dev/disk/by-path/<device_path> link to the storage location. The hint must match the actual value exactly. hctl A string containing a SCSI bus address like 0:0:0:0 . The hint must match the actual value exactly. model A string containing a vendor-specific device identifier. The hint can be a substring of the actual value. vendor A string containing the name of the vendor or manufacturer of the device. The hint can be a sub-string of the actual value. serialNumber A string containing the device serial number. The hint must match the actual value exactly. minSizeGigabytes An integer representing the minimum size of the device in gigabytes. wwn A string containing the unique storage identifier. The hint must match the actual value exactly. If you use the udevadm command to retrieve the wwn value, and the command outputs a value for ID_WWN_WITH_EXTENSION , then you must use this value to specify the wwn subfield. rotational A boolean indicating whether the device should be a rotating disk (true) or not (false). Example usage - name: master-0 role: master rootDeviceHints: deviceName: "/dev/sda" 1.6. About networking The rendezvous IP must be known at the time of generating the agent ISO, so that during the initial boot all the hosts can check in to the assisted service. 
If the IP addresses are assigned using a Dynamic Host Configuration Protocol (DHCP) server, then the rendezvousIP field must be set to an IP address of one of the hosts that will become part of the deployed control plane. In an environment without a DHCP server, you can define IP addresses statically. In addition to static IP addresses, you can apply any network configuration that is in NMState format. This includes VLANs and NIC bonds. 1.6.1. DHCP Preferred method: install-config.yaml and agent-config.yaml You must specify the value for the rendezvousIP field. The networkConfig fields can be left blank: Sample agent-config.yaml file apiVersion: v1alpha1 kind: AgentConfig metadata: name: sno-cluster rendezvousIP: 192.168.111.80 1 1 The IP address for the rendezvous host. 1.6.2. Static networking Preferred method: install-config.yaml and agent-config.yaml Sample agent-config.yaml file cat > agent-config.yaml << EOF apiVersion: v1alpha1 kind: AgentConfig metadata: name: sno-cluster rendezvousIP: 192.168.111.80 1 hosts: - hostname: master-0 interfaces: - name: eno1 macAddress: 00:ef:44:21:e6:a5 2 networkConfig: interfaces: - name: eno1 type: ethernet state: up mac-address: 00:ef:44:21:e6:a5 ipv4: enabled: true address: - ip: 192.168.111.80 3 prefix-length: 23 4 dhcp: false dns-resolver: config: server: - 192.168.111.1 5 routes: config: - destination: 0.0.0.0/0 next-hop-address: 192.168.111.1 6 next-hop-interface: eno1 table-id: 254 EOF 1 If a value is not specified for the rendezvousIP field, one address will be chosen from the static IP addresses specified in the networkConfig fields. 2 The MAC address of an interface on the host, used to determine which host to apply the configuration to. 3 The static IP address of the target bare metal host. 4 The static IP address's subnet prefix for the target bare metal host. 5 The DNS server for the target bare metal host. 6 Next hop address for the node traffic. This must be in the same subnet as the IP address set for the specified interface. Optional method: GitOps ZTP manifests The optional method of the GitOps ZTP custom resources comprises 6 custom resources; you can configure static IPs in the nmstateconfig.yaml file. apiVersion: agent-install.openshift.io/v1beta1 kind: NMStateConfig metadata: name: master-0 namespace: openshift-machine-api labels: cluster0-nmstate-label-name: cluster0-nmstate-label-value spec: config: interfaces: - name: eth0 type: ethernet state: up mac-address: 52:54:01:aa:aa:a1 ipv4: enabled: true address: - ip: 192.168.122.2 1 prefix-length: 23 2 dhcp: false dns-resolver: config: server: - 192.168.122.1 3 routes: config: - destination: 0.0.0.0/0 next-hop-address: 192.168.122.1 4 next-hop-interface: eth0 table-id: 254 interfaces: - name: eth0 macAddress: 52:54:01:aa:aa:a1 5 1 The static IP address of the target bare metal host. 2 The static IP address's subnet prefix for the target bare metal host. 3 The DNS server for the target bare metal host. 4 Next hop address for the node traffic. This must be in the same subnet as the IP address set for the specified interface. 5 The MAC address of an interface on the host, used to determine which host to apply the configuration to. The rendezvous IP is chosen from the static IP addresses specified in the config fields. 1.7. Requirements for a cluster using the platform "none" option This section describes the requirements for an Agent-based OpenShift Container Platform installation that is configured to use the platform none option.
Important Review the information in the guidelines for deploying OpenShift Container Platform on non-tested platforms before you attempt to install an OpenShift Container Platform cluster in virtualized or cloud environments. 1.7.1. Platform "none" DNS requirements In OpenShift Container Platform deployments, DNS name resolution is required for the following components: The Kubernetes API The OpenShift Container Platform application wildcard The control plane and compute machines Reverse DNS resolution is also required for the Kubernetes API, the control plane machines, and the compute machines. DNS A/AAAA or CNAME records are used for name resolution and PTR records are used for reverse name resolution. The reverse records are important because Red Hat Enterprise Linux CoreOS (RHCOS) uses the reverse records to set the hostnames for all the nodes, unless the hostnames are provided by DHCP. Additionally, the reverse records are used to generate the certificate signing requests (CSR) that OpenShift Container Platform needs to operate. Note It is recommended to use a DHCP server to provide the hostnames to each cluster node. The following DNS records are required for an OpenShift Container Platform cluster using the platform none option and they must be in place before installation. In each record, <cluster_name> is the cluster name and <base_domain> is the base domain that you specify in the install-config.yaml file. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 1.3. Required DNS records Component Record Description Kubernetes API api.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the API load balancer. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. api-int.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to internally identify the API load balancer. These records must be resolvable from all the nodes within the cluster. Important The API server must be able to resolve the worker nodes by the hostnames that are recorded in Kubernetes. If the API server cannot resolve the node names, then proxied API calls can fail, and you cannot retrieve logs from pods. Routes *.apps.<cluster_name>.<base_domain>. A wildcard DNS A/AAAA or CNAME record that refers to the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. For example, console-openshift-console.apps.<cluster_name>.<base_domain> is used as a wildcard route to the OpenShift Container Platform console. Control plane machines <master><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the control plane nodes. These records must be resolvable by the nodes within the cluster. Compute machines <worker><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the worker nodes. These records must be resolvable by the nodes within the cluster. Note In OpenShift Container Platform 4.4 and later, you do not need to specify etcd host and SRV records in your DNS configuration. Tip You can use the dig command to verify name and reverse name resolution. 1.7.1.1. 
Example DNS configuration for platform "none" clusters This section provides A and PTR record configuration samples that meet the DNS requirements for deploying OpenShift Container Platform using the platform none option. The samples are not meant to provide advice for choosing one DNS solution over another. In the examples, the cluster name is ocp4 and the base domain is example.com . Example DNS A record configuration for a platform "none" cluster The following example is a BIND zone file that shows sample A records for name resolution in a cluster using the platform none option. Example 1.1. Sample DNS zone database USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; master0.ocp4.example.com. IN A 192.168.1.97 4 master1.ocp4.example.com. IN A 192.168.1.98 5 master2.ocp4.example.com. IN A 192.168.1.99 6 ; worker0.ocp4.example.com. IN A 192.168.1.11 7 worker1.ocp4.example.com. IN A 192.168.1.7 8 ; ;EOF 1 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer. 2 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer and is used for internal cluster communications. 3 Provides name resolution for the wildcard routes. The record refers to the IP address of the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. 4 5 6 Provides name resolution for the control plane machines. 7 8 Provides name resolution for the compute machines. Example DNS PTR record configuration for a platform "none" cluster The following example BIND zone file shows sample PTR records for reverse name resolution in a cluster using the platform none option. Example 1.2. Sample DNS zone database for reverse records USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 97.1.168.192.in-addr.arpa. IN PTR master0.ocp4.example.com. 3 98.1.168.192.in-addr.arpa. IN PTR master1.ocp4.example.com. 4 99.1.168.192.in-addr.arpa. IN PTR master2.ocp4.example.com. 5 ; 11.1.168.192.in-addr.arpa. IN PTR worker0.ocp4.example.com. 6 7.1.168.192.in-addr.arpa. IN PTR worker1.ocp4.example.com. 7 ; ;EOF 1 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer. 2 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer and is used for internal cluster communications. 
3 4 5 Provides reverse DNS resolution for the control plane machines. 6 7 Provides reverse DNS resolution for the compute machines. Note A PTR record is not required for the OpenShift Container Platform application wildcard. 1.7.2. Platform "none" Load balancing requirements Before you install OpenShift Container Platform, you must provision the API and application Ingress load balancing infrastructure. In production scenarios, you can deploy the API and application Ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note These requirements do not apply to single-node OpenShift clusters using the platform none option. Note If you want to deploy the API and application Ingress load balancers with a Red Hat Enterprise Linux (RHEL) instance, you must purchase the RHEL subscription separately. The load balancing infrastructure must meet the following requirements: API load balancer : Provides a common endpoint for users, both human and machine, to interact with and configure the platform. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP, SSL Passthrough, or SSL Bridge mode. If you use SSL Bridge mode, you must enable Server Name Indication (SNI) for the API routes. A stateless load balancing algorithm. The options vary based on the load balancer implementation. Important Do not configure session persistence for an API load balancer. Configure the following ports on both the front and back of the load balancers: Table 1.4. API load balancer Port Back-end machines (pool members) Internal External Description 6443 Control plane. You must configure the /readyz endpoint for the API server health check probe. X X Kubernetes API server 22623 Control plane. X Machine config server Note The load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool. Within the time frame after /readyz returns an error or becomes healthy, the endpoint must have been removed or added. Probing every 5 or 10 seconds, with two successful requests to become healthy and three to become unhealthy, are well-tested values. Application Ingress load balancer : Provides an ingress point for application traffic flowing in from outside the cluster. A working configuration for the Ingress router is required for an OpenShift Container Platform cluster. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP, SSL Passthrough, or SSL Bridge mode. If you use SSL Bridge mode, you must enable Server Name Indication (SNI) for the ingress routes. A connection-based or session-based persistence is recommended, based on the options available and types of applications that will be hosted on the platform. Tip If the true IP address of the client can be seen by the application Ingress load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption. Configure the following ports on both the front and back of the load balancers: Table 1.5. Application Ingress load balancer Port Back-end machines (pool members) Internal External Description 443 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTPS traffic 80 The machines that run the Ingress Controller pods, compute, or worker, by default. 
X X HTTP traffic Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. 1.7.2.1. Example load balancer configuration for platform "none" clusters This section provides an example API and application Ingress load balancer configuration that meets the load balancing requirements for clusters using the platform none option. The sample is an /etc/haproxy/haproxy.cfg configuration for an HAProxy load balancer. The example is not meant to provide advice for choosing one load balancing solution over another. In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you are using HAProxy as a load balancer and SELinux is set to enforcing , you must ensure that the HAProxy service can bind to the configured TCP port by running setsebool -P haproxy_connect_any=1 . Example 1.3. Sample API and application Ingress load balancer configuration global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp server master0 master0.ocp4.example.com:6443 check inter 1s server master1 master1.ocp4.example.com:6443 check inter 1s server master2 master2.ocp4.example.com:6443 check inter 1s listen machine-config-server-22623 2 bind *:22623 mode tcp server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 3 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 4 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s 1 Port 6443 handles the Kubernetes API traffic and points to the control plane machines. 2 Port 22623 handles the machine config server traffic and points to the control plane machines. 3 Port 443 handles the HTTPS traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. 4 Port 80 handles the HTTP traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. Tip If you are using HAProxy as a load balancer, you can check that the haproxy process is listening on ports 6443 , 22623 , 443 , and 80 by running netstat -nltupe on the HAProxy node. 1.8. 
Example: Bonds and VLAN interface node network configuration The following agent-config.yaml file is an example of a manifest for bond and VLAN interfaces. apiVersion: v1alpha1 kind: AgentConfig rendezvousIP: 10.10.10.14 hosts: - hostname: master0 role: master interfaces: - name: enp0s4 macAddress: 00:21:50:90:c0:10 - name: enp0s5 macAddress: 00:21:50:90:c0:20 networkConfig: interfaces: - name: bond0.300 1 type: vlan 2 state: up vlan: base-iface: bond0 id: 300 ipv4: enabled: true address: - ip: 10.10.10.14 prefix-length: 24 dhcp: false - name: bond0 3 type: bond 4 state: up mac-address: 00:21:50:90:c0:10 5 ipv4: enabled: false ipv6: enabled: false link-aggregation: mode: active-backup 6 options: miimon: "150" 7 port: - enp0s4 - enp0s5 dns-resolver: 8 config: server: - 10.10.10.11 - 10.10.10.12 routes: config: - destination: 0.0.0.0/0 next-hop-address: 10.10.10.10 9 next-hop-interface: bond0.300 10 table-id: 254 1 3 Name of the interface. 2 The type of interface. This example creates a VLAN. 4 The type of interface. This example creates a bond. 5 The MAC address of the interface. 6 The mode attribute specifies the bonding mode. 7 Specifies the MII link monitoring frequency in milliseconds. This example inspects the bond link every 150 milliseconds. 8 Optional: Specifies the search and server settings for the DNS server. 9 Next hop address for the node traffic. This must be in the same subnet as the IP address set for the specified interface. 10 Next hop interface for the node traffic. 1.9. Example: Bonds and SR-IOV dual-nic node network configuration Important Support for Day 1 operations associated with enabling NIC partitioning for SR-IOV devices is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope .
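Before you request virtual functions in a manifest like the one that follows, you can confirm that a port exposes SR-IOV support on the running host. A minimal check (the device name eno1 is an assumption) reads the maximum number of VFs that the NIC reports through sysfs:

cat /sys/class/net/eno1/device/sriov_totalvfs

If this file does not exist, the NIC or its driver does not expose SR-IOV, and the sr-iov stanza in the example below cannot take effect.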
The following agent-config.yaml file is an example of a manifest for a dual port NIC with a bond and SR-IOV interfaces: apiVersion: v1alpha1 kind: AgentConfig rendezvousIP: 10.10.10.14 hosts: - hostname: worker-1 interfaces: - name: eno1 macAddress: 0c:42:a1:55:f3:06 - name: eno2 macAddress: 0c:42:a1:55:f3:07 networkConfig: 1 interfaces: 2 - name: eno1 3 type: ethernet 4 state: up mac-address: 0c:42:a1:55:f3:06 ipv4: enabled: true dhcp: false 5 ethernet: sr-iov: total-vfs: 2 6 ipv6: enabled: false - name: sriov:eno1:0 type: ethernet state: up 7 ipv4: enabled: false 8 ipv6: enabled: false dhcp: false - name: sriov:eno1:1 type: ethernet state: down - name: eno2 type: ethernet state: up mac-address: 0c:42:a1:55:f3:07 ipv4: enabled: true ethernet: sr-iov: total-vfs: 2 ipv6: enabled: false - name: sriov:eno2:0 type: ethernet state: up ipv4: enabled: false ipv6: enabled: false - name: sriov:eno2:1 type: ethernet state: down - name: bond0 type: bond state: up min-tx-rate: 100 9 max-tx-rate: 200 10 link-aggregation: mode: active-backup 11 options: primary: sriov:eno1:0 12 port: - sriov:eno1:0 - sriov:eno2:0 ipv4: address: - ip: 10.19.16.57 13 prefix-length: 23 dhcp: false enabled: true ipv6: enabled: false dns-resolver: config: server: - 10.11.5.160 - 10.2.70.215 routes: config: - destination: 0.0.0.0/0 next-hop-address: 10.19.17.254 next-hop-interface: bond0 14 table-id: 254 1 The networkConfig field contains information about the network configuration of the host, with subfields including interfaces , dns-resolver , and routes . 2 The interfaces field is an array of network interfaces defined for the host. 3 The name of the interface. 4 The type of interface. This example creates an ethernet interface. 5 Set this to false to disable DHCP for the physical function (PF) if it is not strictly required. 6 Set this to the number of SR-IOV virtual functions (VFs) to instantiate. 7 Set this to up . 8 Set this to false to disable IPv4 addressing for the VF attached to the bond. 9 Sets a minimum transmission rate, in Mbps, for the VF. This sample value sets a rate of 100 Mbps. This value must be less than or equal to the maximum transmission rate. Intel NICs do not support the min-tx-rate parameter. For more information, see BZ#1772847 . 10 Sets a maximum transmission rate, in Mbps, for the VF. This sample value sets a rate of 200 Mbps. 11 Sets the desired bond mode. 12 Sets the preferred port of the bonding interface. The primary device is the first of the bonding interfaces to be used and is not abandoned unless it fails. This setting is particularly useful when one NIC in the bonding interface is faster and, therefore, able to handle a bigger load. This setting is only valid when the bonding interface is in active-backup mode (mode 1) and balance-tlb (mode 5). 13 Sets a static IP address for the bond interface. This is the node IP address. 14 Sets bond0 as the gateway for the default route. Additional resources Configuring network bonding 1.10. Sample install-config.yaml file for bare metal You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters.
apiVersion: v1 baseDomain: example.com 1 compute: 2 - name: worker replicas: 0 3 controlPlane: 4 name: master replicas: 1 5 metadata: name: sno-cluster 6 networking: clusterNetwork: - cidr: 10.128.0.0/14 7 hostPrefix: 23 8 networkType: OVNKubernetes 9 serviceNetwork: 10 - 172.30.0.0/16 platform: none: {} 11 fips: false 12 pullSecret: '{"auths": ...}' 13 sshKey: 'ssh-ed25519 AAAA...' 14 1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 2 4 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 3 This parameter controls the number of compute machines that the Agent-based installation waits to discover before triggering the installation process. It is the number of compute machines that must be booted with the generated ISO. Note If you are installing a three-node cluster, do not deploy any compute machines when you install the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 5 The number of control plane machines that you add to the cluster. Because the cluster uses these values as the number of etcd endpoints in the cluster, the value must match the number of control plane machines that you deploy. 6 The cluster name that you specified in your DNS records. 7 A block of IP addresses from which pod IP addresses are allocated. This block must not overlap with existing physical networks. These IP addresses are used for the pod network. If you need to access the pods from an external network, you must configure load balancers and routers to manage the traffic. Note Class E CIDR range is reserved for a future use. To use the Class E CIDR range, you must ensure your networking environment accepts the IP addresses within the Class E CIDR range. 8 The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 , then each node is assigned a /23 subnet out of the given cidr , which allows for 510 (2^(32 - 23) - 2) pod IP addresses. If you are required to provide access to nodes from an external network, configure load balancers and routers to manage the traffic. 9 The cluster network plugin to install. The supported values are OVNKubernetes (default value) and OpenShiftSDN . 10 The IP address pool to use for service IP addresses. You can enter only one IP address pool. This block must not overlap with existing physical networks. If you need to access the services from an external network, configure load balancers and routers to manage the traffic. 11 You must set the platform to none for a single-node cluster. You can set the platform to vsphere , baremetal , or none for multi-node clusters. Note If you set the platform to vsphere or baremetal , you can configure IP address endpoints for cluster nodes in three ways: IPv4 IPv6 IPv4 and IPv6 in parallel (dual-stack) Example of dual-stack networking networking: clusterNetwork: - cidr: 172.21.0.0/16 hostPrefix: 23 - cidr: fd02::/48 hostPrefix: 64 machineNetwork: - cidr: 192.168.11.0/16 - cidr: 2001:DB8::/32 serviceNetwork: - 172.22.0.0/16 - fd03::/112 networkType: OVNKubernetes platform: baremetal: apiVIPs: - 192.168.11.3 - 2001:DB8::4 ingressVIPs: - 192.168.11.4 - 2001:DB8::5 12 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. 
If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 13 This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 14 The SSH public key for the core user in Red Hat Enterprise Linux CoreOS (RHCOS). Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 1.11. Validation checks before agent ISO creation The Agent-based Installer performs validation checks on user-defined YAML files before the ISO is created. Once the validations are successful, the agent ISO is created. install-config.yaml baremetal , vsphere and none platforms are supported. The networkType parameter must be OVNKubernetes in the case of the none platform. apiVIPs and ingressVIPs parameters must be set for bare metal and vSphere platforms. Some host-specific fields in the bare metal platform configuration that have equivalents in the agent-config.yaml file are ignored. A warning message is logged if these fields are set. agent-config.yaml Each interface must have a defined MAC address. Additionally, all interfaces must have a different MAC address. At least one interface must be defined for each host. World Wide Name (WWN) vendor extensions are not supported in root device hints. The role parameter in the host object must have a value of either master or worker . 1.11.1. ZTP manifests agent-cluster-install.yaml For IPv6, the only supported value for the networkType parameter is OVNKubernetes . The OpenShiftSDN value can be used only for IPv4. cluster-image-set.yaml The ReleaseImage parameter must match the release defined in the installer. 1.12. Next steps Installing a cluster Installing a cluster with customizations
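As a bridge to the installation procedures referenced above, the following sketch shows how the preferred input manifests described in this chapter are typically consumed. The directory name is an assumption, and the exact procedure is covered in the installation documents:

mkdir ocp-agent
cp install-config.yaml agent-config.yaml ocp-agent/
openshift-install agent create image --dir ocp-agent

The generated ISO, named for the target architecture (for example, agent.x86_64.iso), is then used to boot every host in the cluster, starting with the rendezvous host.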
[ "apiVersion: v1 baseDomain: test.example.com metadata: name: sno-cluster fips: True", "apiVersion: extensions.hive.openshift.io/v1beta1 kind: AgentClusterInstall metadata: annotations: agent-install.openshift.io/install-config-overrides: '{\"fips\": True}' name: sno-cluster namespace: sno-cluster-test", "apiVersion: v1beta1 kind: AgentConfig metadata: name: example-cluster rendezvousIP: 192.168.111.80 hosts: - hostname: master-1 role: master interfaces: - name: eno1 macAddress: 00:ef:44:21:e6:a5 - hostname: master-2 role: master interfaces: - name: eno1 macAddress: 00:ef:44:21:e6:a6 - hostname: master-3 role: master interfaces: - name: eno1 macAddress: 00:ef:44:21:e6:a7 - hostname: worker-1 role: worker interfaces: - name: eno1 macAddress: 00:ef:44:21:e6:a8", "- name: master-0 role: master rootDeviceHints: deviceName: \"/dev/sda\"", "apiVersion: v1alpha1 kind: AgentConfig metadata: name: sno-cluster rendezvousIP: 192.168.111.80 1", "cat > agent-config.yaml << EOF apiVersion: v1alpha1 kind: AgentConfig metadata: name: sno-cluster rendezvousIP: 192.168.111.80 1 hosts: - hostname: master-0 interfaces: - name: eno1 macAddress: 00:ef:44:21:e6:a5 2 networkConfig: interfaces: - name: eno1 type: ethernet state: up mac-address: 00:ef:44:21:e6:a5 ipv4: enabled: true address: - ip: 192.168.111.80 3 prefix-length: 23 4 dhcp: false dns-resolver: config: server: - 192.168.111.1 5 routes: config: - destination: 0.0.0.0/0 next-hop-address: 192.168.111.1 6 next-hop-interface: eno1 table-id: 254 EOF", "apiVersion: agent-install.openshift.io/v1beta1 kind: NMStateConfig metadata: name: master-0 namespace: openshift-machine-api labels: cluster0-nmstate-label-name: cluster0-nmstate-label-value spec: config: interfaces: - name: eth0 type: ethernet state: up mac-address: 52:54:01:aa:aa:a1 ipv4: enabled: true address: - ip: 192.168.122.2 1 prefix-length: 23 2 dhcp: false dns-resolver: config: server: - 192.168.122.1 3 routes: config: - destination: 0.0.0.0/0 next-hop-address: 192.168.122.1 4 next-hop-interface: eth0 table-id: 254 interfaces: - name: eth0 macAddress: 52:54:01:aa:aa:a1 5", "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; master0.ocp4.example.com. IN A 192.168.1.97 4 master1.ocp4.example.com. IN A 192.168.1.98 5 master2.ocp4.example.com. IN A 192.168.1.99 6 ; worker0.ocp4.example.com. IN A 192.168.1.11 7 worker1.ocp4.example.com. IN A 192.168.1.7 8 ; ;EOF", "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 97.1.168.192.in-addr.arpa. IN PTR master0.ocp4.example.com. 3 98.1.168.192.in-addr.arpa. IN PTR master1.ocp4.example.com. 4 99.1.168.192.in-addr.arpa. IN PTR master2.ocp4.example.com. 5 ; 11.1.168.192.in-addr.arpa. IN PTR worker0.ocp4.example.com. 6 7.1.168.192.in-addr.arpa. IN PTR worker1.ocp4.example.com. 
7 ; ;EOF", "global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp server master0 master0.ocp4.example.com:6443 check inter 1s server master1 master1.ocp4.example.com:6443 check inter 1s server master2 master2.ocp4.example.com:6443 check inter 1s listen machine-config-server-22623 2 bind *:22623 mode tcp server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 3 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 4 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s", "apiVersion: v1alpha1 kind: AgentConfig rendezvousIP: 10.10.10.14 hosts: - hostname: master0 role: master interfaces: - name: enp0s4 macAddress: 00:21:50:90:c0:10 - name: enp0s5 macAddress: 00:21:50:90:c0:20 networkConfig: interfaces: - name: bond0.300 1 type: vlan 2 state: up vlan: base-iface: bond0 id: 300 ipv4: enabled: true address: - ip: 10.10.10.14 prefix-length: 24 dhcp: false - name: bond0 3 type: bond 4 state: up mac-address: 00:21:50:90:c0:10 5 ipv4: enabled: false ipv6: enabled: false link-aggregation: mode: active-backup 6 options: miimon: \"150\" 7 port: - enp0s4 - enp0s5 dns-resolver: 8 config: server: - 10.10.10.11 - 10.10.10.12 routes: config: - destination: 0.0.0.0/0 next-hop-address: 10.10.10.10 9 next-hop-interface: bond0.300 10 table-id: 254", "apiVersion: v1alpha1 kind: AgentConfig rendezvousIP: 10.10.10.14 hosts: - hostname: worker-1 interfaces: - name: eno1 macAddress: 0c:42:a1:55:f3:06 - name: eno2 macAddress: 0c:42:a1:55:f3:07 networkConfig: 1 interfaces: 2 - name: eno1 3 type: ethernet 4 state: up mac-address: 0c:42:a1:55:f3:06 ipv4: enabled: true dhcp: false 5 ethernet: sr-iov: total-vfs: 2 6 ipv6: enabled: false - name: sriov:eno1:0 type: ethernet state: up 7 ipv4: enabled: false 8 ipv6: enabled: false dhcp: false - name: sriov:eno1:1 type: ethernet state: down - name: eno2 type: ethernet state: up mac-address: 0c:42:a1:55:f3:07 ipv4: enabled: true ethernet: sr-iov: total-vfs: 2 ipv6: enabled: false - name: sriov:eno2:0 type: ethernet state: up ipv4: enabled: false ipv6: enabled: false - name: sriov:eno2:1 type: ethernet state: down - name: bond0 type: bond state: up min-tx-rate: 100 9 max-tx-rate: 200 10 link-aggregation: mode: active-backup 11 options: primary: sriov:eno1:0 12 port: - sriov:eno1:0 - sriov:eno2:0 ipv4: address: - ip: 10.19.16.57 13 prefix-length: 23 dhcp: false enabled: true ipv6: enabled: false dns-resolver: config: server: - 10.11.5.160 - 10.2.70.215 routes: config: - destination: 0.0.0.0/0 next-hop-address: 10.19.17.254 next-hop-interface: bond0 14 table-id: 254", "apiVersion: v1 baseDomain: example.com 1 compute: 2 - name: worker replicas: 0 3 controlPlane: 4 name: master replicas: 1 5 metadata: name: sno-cluster 6 networking: clusterNetwork: - cidr: 10.128.0.0/14 7 hostPrefix: 23 8 networkType: OVNKubernetes 9 serviceNetwork: 10 - 172.30.0.0/16 platform: none: {} 11 fips: false 12 
pullSecret: '{\"auths\": ...}' 13 sshKey: 'ssh-ed25519 AAAA...' 14", "networking: clusterNetwork: - cidr: 172.21.0.0/16 hostPrefix: 23 - cidr: fd02::/48 hostPrefix: 64 machineNetwork: - cidr: 192.168.11.0/16 - cidr: 2001:DB8::/32 serviceNetwork: - 172.22.0.0/16 - fd03::/112 networkType: OVNKubernetes platform: baremetal: apiVIPs: - 192.168.11.3 - 2001:DB8::4 ingressVIPs: - 192.168.11.4 - 2001:DB8::5" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/installing_an_on-premise_cluster_with_the_agent-based_installer/preparing-to-install-with-agent-based-installer
function::ansi_new_line
function::ansi_new_line Name function::ansi_new_line - Move the cursor to a new line. Synopsis Arguments None General Syntax ansi_new_line Description Sends the ANSI code for a new line.
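A minimal usage sketch, assuming SystemTap and its standard tapset library are installed, follows; the probe simply prints two strings separated by a call to ansi_new_line :

$ stap -e 'probe begin { print("first line"); ansi_new_line(); print("second line"); exit() }'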
[ "function ansi_new_line()" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-ansi-new-line
34.5. Configuring Locations
34.5. Configuring Locations A location is a set of maps, all of which are stored in auto.master ; a single location can store multiple maps. The location entry only works as a container for map entries; it is not an automount configuration in and of itself. Important Identity Management does not set up or configure autofs. That must be done separately. Identity Management works with an existing autofs deployment. 34.5.1. Configuring Locations through the Web UI Click the Policy tab. Click the Automount subtab. Click the Add link at the top of the list of automount locations. Enter the name for the new location. Click the Add and Edit button to go to the map configuration for the new location. Create maps, as described in Section 34.6.1.1, "Configuring Direct Maps from the Web UI" and Section 34.6.2.1, "Configuring Indirect Maps from the Web UI" . 34.5.2. Configuring Locations through the Command Line To create a location, use the automountlocation-add command and give the location name. For example: When a new location is created, two maps are automatically created for it, auto.master and auto.direct . auto.master is the root map for all automount maps for the location. auto.direct is the default map for direct mounts and is mounted on /- . To view all of the maps configured for a location as if they were deployed on a filesystem, use the automountlocation-tofiles command:
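After a location exists, an enrolled IdM client can be pointed at it. The following is a hedged sketch that reuses the raleigh location from the example commands below; it assumes the host is already configured as an IdM client:

$ ipa-client-automount --location=raleigh

If the --location option is omitted, the client falls back to the location named default.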
[ "ipa automountlocation-add location", "ipa automountlocation-add raleigh ---------------------------------- Added automount location \"raleigh\" ---------------------------------- Location: raleigh", "ipa automountlocation-tofiles raleigh /etc/auto.master: /- /etc/auto.direct --------------------------- /etc/auto.direct:" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/adding-locations
Chapter 10. Removing Windows nodes
Chapter 10. Removing Windows nodes You can remove a Windows node by deleting its host Windows machine. 10.1. Deleting a specific machine You can delete a specific machine. Important Do not delete a control plane machine unless your cluster uses a control plane machine set. Prerequisites Install an OpenShift Container Platform cluster. Install the OpenShift CLI ( oc ). Log in to oc as a user with cluster-admin permission. Procedure View the machines that are in the cluster by running the following command: USD oc get machine -n openshift-machine-api The command output contains a list of machines in the <clusterid>-<role>-<cloud_region> format. Identify the machine that you want to delete. Delete the machine by running the following command: USD oc delete machine <machine> -n openshift-machine-api Important By default, the machine controller tries to drain the node that is backed by the machine until it succeeds. In some situations, such as with a misconfigured pod disruption budget, the drain operation might not be able to succeed. If the drain operation fails, the machine controller cannot proceed with removing the machine. You can skip draining the node by adding the machine.openshift.io/exclude-node-draining annotation to a specific machine, as shown in the sketch after this procedure. If the machine that you delete belongs to a machine set, a new machine is immediately created to satisfy the specified number of replicas.
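For example, if a misconfigured pod disruption budget keeps the drain from succeeding, a hedged sketch of skipping the drain and then deleting the machine looks like the following; the machine name is a placeholder, and an empty value is used only because the procedure calls for the annotation itself to be set:

$ oc annotate machine <machine> -n openshift-machine-api machine.openshift.io/exclude-node-draining=""
$ oc delete machine <machine> -n openshift-machine-api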
[ "oc get machine -n openshift-machine-api", "oc delete machine <machine> -n openshift-machine-api" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/windows_container_support_for_openshift/removing-windows-nodes
Chapter 3. Using the Cluster Samples Operator with an alternate registry
Chapter 3. Using the Cluster Samples Operator with an alternate registry You can use the Cluster Samples Operator with an alternate registry by first creating a mirror registry. Important You must have access to the internet to obtain the necessary container images. In this procedure, you place the mirror registry on a mirror host that has access to both your network and the internet. 3.1. About the mirror registry You can mirror the images that are required for OpenShift Container Platform installation and subsequent product updates to a container mirror registry such as Red Hat Quay, JFrog Artifactory, Sonatype Nexus Repository, or Harbor. If you do not have access to a large-scale container registry, you can use the mirror registry for Red Hat OpenShift , a small-scale container registry included with OpenShift Container Platform subscriptions. You can use any container registry that supports Docker v2-2 , such as Red Hat Quay, the mirror registry for Red Hat OpenShift , Artifactory, Sonatype Nexus Repository, or Harbor. Regardless of your chosen registry, the procedure to mirror content from Red Hat hosted sites on the internet to an isolated image registry is the same. After you mirror the content, you configure each cluster to retrieve this content from your mirror registry. Important The OpenShift image registry cannot be used as the target registry because it does not support pushing without a tag, which is required during the mirroring process. If choosing a container registry that is not the mirror registry for Red Hat OpenShift , it must be reachable by every machine in the clusters that you provision. If the registry is unreachable, installation, updating, or normal operations such as workload relocation might fail. For that reason, you must run mirror registries in a highly available way, and the mirror registries must at least match the production availability of your OpenShift Container Platform clusters. When you populate your mirror registry with OpenShift Container Platform images, you can follow two scenarios. If you have a host that can access both the internet and your mirror registry, but not your cluster nodes, you can directly mirror the content from that machine. This process is referred to as connected mirroring . If you have no such host, you must mirror the images to a file system and then bring that host or removable media into your restricted environment. This process is referred to as disconnected mirroring . For mirrored registries, to view the source of pulled images, you must review the Trying to access log entry in the CRI-O logs. Other methods to view the image pull source, such as using the crictl images command on a node, show the non-mirrored image name, even though the image is pulled from the mirrored location. Note Red Hat does not test third party registries with OpenShift Container Platform. Additional information For information on viewing the CRI-O logs to view the image source, see Viewing the image pull source . 3.1.1. Preparing the mirror host Before you create the mirror registry, you must prepare the mirror host. 3.1.2. Installing the OpenShift CLI You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.18. Download and install the new version of oc . 
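Before downloading a new client, you can check which oc version, if any, is already installed; this quick check is not part of the official procedure:

$ oc version --client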
Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.18 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.18 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.18 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.18 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 3.2. Configuring credentials that allow images to be mirrored Create a container image registry credentials file that enables you to mirror images from Red Hat to your mirror. Prerequisites You configured a mirror registry to use in your disconnected environment. Procedure Complete the following steps on the installation host: Download your registry.redhat.io pull secret from Red Hat OpenShift Cluster Manager . Make a copy of your pull secret in JSON format by running the following command: USD cat ./pull-secret | jq . > <path>/<pull_secret_file_in_json> 1 1 Specify the path to the folder to store the pull secret in and a name for the JSON file that you create. Example pull secret { "auths": { "cloud.openshift.com": { "auth": "b3BlbnNo...", "email": "[email protected]" }, "quay.io": { "auth": "b3BlbnNo...", "email": "[email protected]" }, "registry.connect.redhat.com": { "auth": "NTE3Njg5Nj...", "email": "[email protected]" }, "registry.redhat.io": { "auth": "NTE3Njg5Nj...", "email": "[email protected]" } } } Generate the base64-encoded user name and password or token for your mirror registry by running the following command: USD echo -n '<user_name>:<password>' | base64 -w0 1 1 For <user_name> and <password> , specify the user name and password that you configured for your registry. 
Example output BGVtbYk3ZHAtqXs= Edit the JSON file and add a section that describes your registry to it: "auths": { "<mirror_registry>": { 1 "auth": "<credentials>", 2 "email": "[email protected]" } }, 1 Specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example, registry.example.com or registry.example.com:8443 2 Specify the base64-encoded user name and password for the mirror registry. Example modified pull secret { "auths": { "registry.example.com": { "auth": "BGVtbYk3ZHAtqXs=", "email": "[email protected]" }, "cloud.openshift.com": { "auth": "b3BlbnNo...", "email": "[email protected]" }, "quay.io": { "auth": "b3BlbnNo...", "email": "[email protected]" }, "registry.connect.redhat.com": { "auth": "NTE3Njg5Nj...", "email": "[email protected]" }, "registry.redhat.io": { "auth": "NTE3Njg5Nj...", "email": "[email protected]" } } } 3.3. Mirroring the OpenShift Container Platform image repository Mirror the OpenShift Container Platform image repository to your registry to use during cluster installation or upgrade. Prerequisites Your mirror host has access to the internet. You configured a mirror registry to use in your restricted network and can access the certificate and credentials that you configured. You downloaded the pull secret from Red Hat OpenShift Cluster Manager and modified it to include authentication to your mirror repository. If you use self-signed certificates, you have specified a Subject Alternative Name in the certificates. Procedure Complete the following steps on the mirror host: Review the OpenShift Container Platform downloads page to determine the version of OpenShift Container Platform that you want to install and determine the corresponding tag on the Repository Tags page. Set the required environment variables: Export the release version: USD OCP_RELEASE=<release_version> For <release_version> , specify the tag that corresponds to the version of OpenShift Container Platform to install, such as 4.5.4 . Export the local registry name and host port: USD LOCAL_REGISTRY='<local_registry_host_name>:<local_registry_host_port>' For <local_registry_host_name> , specify the registry domain name for your mirror repository, and for <local_registry_host_port> , specify the port that it serves content on. Export the local repository name: USD LOCAL_REPOSITORY='<local_repository_name>' For <local_repository_name> , specify the name of the repository to create in your registry, such as ocp4/openshift4 . Export the name of the repository to mirror: USD PRODUCT_REPO='openshift-release-dev' For a production release, you must specify openshift-release-dev . Export the path to your registry pull secret: USD LOCAL_SECRET_JSON='<path_to_pull_secret>' For <path_to_pull_secret> , specify the absolute path to and file name of the pull secret for your mirror registry that you created. Export the release mirror: USD RELEASE_NAME="ocp-release" For a production release, you must specify ocp-release . Export the type of architecture for your cluster: USD ARCHITECTURE=<cluster_architecture> 1 1 Specify the architecture of the cluster, such as x86_64 , aarch64 , s390x , or ppc64le . Export the path to the directory to host the mirrored images: USD REMOVABLE_MEDIA_PATH=<path> 1 1 Specify the full path, including the initial forward slash (/) character. 
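Put together, the exported variables might look like the following on a mirror host. Every value here is a placeholder chosen for illustration; substitute your own release tag, registry host name, repository, pull secret path, architecture, and media path:

$ OCP_RELEASE=4.18.10
$ LOCAL_REGISTRY='mirror.example.com:8443'
$ LOCAL_REPOSITORY='ocp4/openshift4'
$ PRODUCT_REPO='openshift-release-dev'
$ LOCAL_SECRET_JSON='/home/user/pull-secret.json'
$ RELEASE_NAME="ocp-release"
$ ARCHITECTURE=x86_64
$ REMOVABLE_MEDIA_PATH=/mnt/removable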
Mirror the version images to the mirror registry: If your mirror host does not have internet access, take the following actions: Connect the removable media to a system that is connected to the internet. Review the images and configuration manifests to mirror: USD oc adm release mirror -a USD{LOCAL_SECRET_JSON} \ --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} \ --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} \ --to-release-image=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --dry-run Record the entire imageContentSources section from the output of the command. The information about your mirrors is unique to your mirrored repository, and you must add the imageContentSources section to the install-config.yaml file during installation. Mirror the images to a directory on the removable media: USD oc adm release mirror -a USD{LOCAL_SECRET_JSON} --to-dir=USD{REMOVABLE_MEDIA_PATH}/mirror quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} Take the media to the restricted network environment and upload the images to the local container registry. USD oc image mirror -a USD{LOCAL_SECRET_JSON} --from-dir=USD{REMOVABLE_MEDIA_PATH}/mirror "file://openshift/release:USD{OCP_RELEASE}*" USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} 1 1 For REMOVABLE_MEDIA_PATH , you must use the same path that you specified when you mirrored the images. Important Running oc image mirror might result in the following error: error: unable to retrieve source image . This error occurs when image indexes include references to images that no longer exist on the image registry. Image indexes might retain older references to allow users running those images an upgrade path to newer points on the upgrade graph. As a temporary workaround, you can use the --skip-missing option to bypass the error and continue downloading the image index. For more information, see Service Mesh Operator mirroring failed . If the local container registry is connected to the mirror host, take the following actions: Directly push the release images to the local registry by using following command: USD oc adm release mirror -a USD{LOCAL_SECRET_JSON} \ --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} \ --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} \ --to-release-image=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE} This command pulls the release information as a digest, and its output includes the imageContentSources data that you require when you install your cluster. Record the entire imageContentSources section from the output of the command. The information about your mirrors is unique to your mirrored repository, and you must add the imageContentSources section to the install-config.yaml file during installation. Note The image name gets patched to Quay.io during the mirroring process, and the podman images will show Quay.io in the registry on the bootstrap virtual machine. To create the installation program that is based on the content that you mirrored, extract it and pin it to the release: If your mirror host does not have internet access, run the following command: USD oc adm release extract -a USD{LOCAL_SECRET_JSON} --icsp-file=<file> --command=openshift-install "USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}" \ --insecure=true 1 1 Optional: If you do not want to configure trust for the target registry, add the --insecure=true flag. 
If the local container registry is connected to the mirror host, run the following command: USD oc adm release extract -a USD{LOCAL_SECRET_JSON} --command=openshift-install "USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}" Important To ensure that you use the correct images for the version of OpenShift Container Platform that you selected, you must extract the installation program from the mirrored content. You must perform this step on a machine with an active internet connection. For clusters using installer-provisioned infrastructure, run the following command: USD openshift-install 3.4. Using Cluster Samples Operator image streams with alternate or mirrored registries Most image streams in the openshift namespace managed by the Cluster Samples Operator point to images located in the Red Hat registry at registry.redhat.io . Note The cli , installer , must-gather , and tests image streams, while part of the install payload, are not managed by the Cluster Samples Operator. These are not addressed in this procedure. Important The Cluster Samples Operator must be set to Managed in a disconnected environment. To install the image streams, you have a mirrored registry. Prerequisites Access to the cluster as a user with the cluster-admin role. Create a pull secret for your mirror registry. Procedure Access the images of a specific image stream to mirror, for example: USD oc get is <imagestream> -n openshift -o json | jq .spec.tags[].from.name | grep registry.redhat.io Mirror images from registry.redhat.io associated with any image streams you need USD oc image mirror registry.redhat.io/rhscl/ruby-25-rhel7:latest USD{MIRROR_ADDR}/rhscl/ruby-25-rhel7:latest Create the cluster's image configuration object: USD oc create configmap registry-config --from-file=USD{MIRROR_ADDR_HOSTNAME}..5000=USDpath/ca.crt -n openshift-config Add the required trusted CAs for the mirror in the cluster's image configuration object: USD oc patch image.config.openshift.io/cluster --patch '{"spec":{"additionalTrustedCA":{"name":"registry-config"}}}' --type=merge Update the samplesRegistry field in the Cluster Samples Operator configuration object to contain the hostname portion of the mirror location defined in the mirror configuration: USD oc edit configs.samples.operator.openshift.io -n openshift-cluster-samples-operator Note This is required because the image stream import process does not use the mirror or search mechanism at this time. Add any image streams that are not mirrored into the skippedImagestreams field of the Cluster Samples Operator configuration object. Or if you do not want to support any of the sample image streams, set the Cluster Samples Operator to Removed in the Cluster Samples Operator configuration object. Note The Cluster Samples Operator issues alerts if image stream imports are failing but the Cluster Samples Operator is either periodically retrying or does not appear to be retrying them. Many of the templates in the openshift namespace reference the image streams. So using Removed to purge both the image streams and templates will eliminate the possibility of attempts to use them if they are not functional because of any missing image streams. 3.4.1. Cluster Samples Operator assistance for mirroring During installation, OpenShift Container Platform creates a config map named imagestreamtag-to-image in the openshift-cluster-samples-operator namespace. The imagestreamtag-to-image config map contains an entry, the populating image, for each image stream tag. 
The format of the key for each entry in the data field in the config map is <image_stream_name>_<image_stream_tag_name> . During a disconnected installation of OpenShift Container Platform, the status of the Cluster Samples Operator is set to Removed . If you choose to change it to Managed , it installs samples. Note The use of samples in a network-restricted or disconnected environment may require access to services external to your network. Example services include GitHub, Maven Central, npm, RubyGems, PyPI, and others. Additional steps might be required to allow the Cluster Samples Operator's objects to reach the services they require. You can use this config map as a reference for which images need to be mirrored for your image streams to import, as shown in the sketch after this section. While the Cluster Samples Operator is set to Removed , you can create your mirrored registry, or determine which existing mirrored registry you want to use. Mirror the samples you want to the mirrored registry using the new config map as your guide. Add any of the image streams you did not mirror to the skippedImagestreams list of the Cluster Samples Operator configuration object. Set samplesRegistry of the Cluster Samples Operator configuration object to the mirrored registry. Then set the Cluster Samples Operator to Managed to install the image streams you have mirrored. See Using Cluster Samples Operator image streams with alternate or mirrored registries for a detailed procedure.
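As a hedged sketch of using the imagestreamtag-to-image config map as a mirroring guide, the following commands list the populating images whose keys start with a given image stream name and then mirror one of them; the ruby prefix, the jq filter, and the MIRROR_ADDR variable are illustrative assumptions rather than part of the documented procedure:

$ oc get configmap imagestreamtag-to-image -n openshift-cluster-samples-operator -o json | jq -r '.data | to_entries[] | select(.key | startswith("ruby")) | .value'
$ oc image mirror <source_image> ${MIRROR_ADDR}/<repository>:<tag>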
[ "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "cat ./pull-secret | jq . > <path>/<pull_secret_file_in_json> 1", "{ \"auths\": { \"cloud.openshift.com\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"quay.io\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"registry.connect.redhat.com\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" }, \"registry.redhat.io\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" } } }", "echo -n '<user_name>:<password>' | base64 -w0 1", "BGVtbYk3ZHAtqXs=", "\"auths\": { \"<mirror_registry>\": { 1 \"auth\": \"<credentials>\", 2 \"email\": \"[email protected]\" } },", "{ \"auths\": { \"registry.example.com\": { \"auth\": \"BGVtbYk3ZHAtqXs=\", \"email\": \"[email protected]\" }, \"cloud.openshift.com\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"quay.io\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"registry.connect.redhat.com\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" }, \"registry.redhat.io\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" } } }", "OCP_RELEASE=<release_version>", "LOCAL_REGISTRY='<local_registry_host_name>:<local_registry_host_port>'", "LOCAL_REPOSITORY='<local_repository_name>'", "PRODUCT_REPO='openshift-release-dev'", "LOCAL_SECRET_JSON='<path_to_pull_secret>'", "RELEASE_NAME=\"ocp-release\"", "ARCHITECTURE=<cluster_architecture> 1", "REMOVABLE_MEDIA_PATH=<path> 1", "oc adm release mirror -a USD{LOCAL_SECRET_JSON} --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} --to-release-image=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --dry-run", "oc adm release mirror -a USD{LOCAL_SECRET_JSON} --to-dir=USD{REMOVABLE_MEDIA_PATH}/mirror quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE}", "oc image mirror -a USD{LOCAL_SECRET_JSON} --from-dir=USD{REMOVABLE_MEDIA_PATH}/mirror \"file://openshift/release:USD{OCP_RELEASE}*\" USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} 1", "oc adm release mirror -a USD{LOCAL_SECRET_JSON} --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} --to-release-image=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}", "oc adm release extract -a USD{LOCAL_SECRET_JSON} --icsp-file=<file> --command=openshift-install \"USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}\" --insecure=true 1", "oc adm release extract -a USD{LOCAL_SECRET_JSON} --command=openshift-install \"USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}\"", "openshift-install", "oc get is <imagestream> -n openshift -o json | jq .spec.tags[].from.name | grep registry.redhat.io", "oc image mirror registry.redhat.io/rhscl/ruby-25-rhel7:latest USD{MIRROR_ADDR}/rhscl/ruby-25-rhel7:latest", "oc create configmap registry-config --from-file=USD{MIRROR_ADDR_HOSTNAME}..5000=USDpath/ca.crt -n openshift-config", "oc patch image.config.openshift.io/cluster --patch '{\"spec\":{\"additionalTrustedCA\":{\"name\":\"registry-config\"}}}' --type=merge", "oc edit configs.samples.operator.openshift.io -n openshift-cluster-samples-operator" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/images/samples-operator-alt-registry
Deploying OpenShift Data Foundation using Red Hat OpenShift Service on AWS with hosted control planes
Deploying OpenShift Data Foundation using Red Hat OpenShift Service on AWS with hosted control planes Red Hat OpenShift Data Foundation 4.16 Instructions for deploying OpenShift Data Foundation using Red Hat OpenShift Service on AWS (ROSA) with hosted control planes (HCP) Red Hat Storage Documentation Team Abstract Read this document for instructions about how to install Red Hat OpenShift Data Foundation using Red Hat OpenShift Container Platform on ROSA with hosted control planes (HCP).
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/deploying_openshift_data_foundation_using_red_hat_openshift_service_on_aws_with_hosted_control_planes/index
Chapter 14. Installation configuration parameters for GCP
Chapter 14. Installation configuration parameters for GCP Before you deploy an OpenShift Container Platform cluster on Google Cloud Platform (GCP), you provide parameters to customize your cluster and the platform that hosts it. When you create the install-config.yaml file, you provide values for the required parameters through the command line. You can then modify the install-config.yaml file to customize your cluster further. 14.1. Available installation configuration parameters for GCP The following tables specify the required, optional, and GCP-specific installation configuration parameters that you can set as part of the installation process. Note After installation, you cannot modify these parameters in the install-config.yaml file. 14.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 14.1. Required parameters Parameter Description Values The API version for the install-config.yaml content. The current version is v1 . The installation program may also support older API versions. String The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . The configuration for the specific platform upon which to perform the installation: aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object Get a pull secret from Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 14.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Table 14.2. Network parameters Parameter Description Values The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. The Red Hat OpenShift Networking network plugin to install. OVNKubernetes . OVNKubernetes is a CNI plugin for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OVNKubernetes . The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. 
The prefix length for an IPv4 block is between 0 and 32 . The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . The IP address block for services. The default value is 172.30.0.0/16 . The OVN-Kubernetes network plugins supports only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt and IBM Power(R) Virtual Server. For libvirt, the default value is 192.168.126.0/24 . For IBM Power(R) Virtual Server, the default value is 192.168.0.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 14.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 14.3. Optional parameters Parameter Description Values A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. For more information, see the "Cluster capabilities" page in Installing . String array Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 , v4.12 and vCurrent . The default value is vCurrent . String Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . You may specify multiple capabilities in this parameter. String array Enables workload partitioning, which isolates OpenShift Container Platform services, cluster management workloads, and infrastructure pods to run on a reserved set of CPUs. Workload partitioning can only be enabled during installation and cannot be disabled after installation. While this field enables workload partitioning, it does not configure workloads to use specific CPUs. For more information, see the Workload partitioning page in the Scalability and Performance section. None or AllNodes . None is the default value. The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 and arm64 . String Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled Required if you use compute . The name of the machine pool. worker Required if you use compute . 
Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. aws , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . Enables the cluster for a feature set. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. For more information about enabling a feature set during installation, see "Enabling features using feature gates". String. The name of the feature set to enable, such as TechPreviewNoUpgrade . The configuration for the machines that comprise the control plane. Array of MachinePool objects. Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 and arm64 . String Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled Required if you use controlPlane . The name of the machine pool. master Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. aws , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} The number of control plane machines to provision. Supported values are 3 , or 1 when deploying single-node OpenShift. The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the "Managing cloud provider credentials" entry in the Authentication and authorization content. Mint , Passthrough , Manual or an empty string ( "" ). Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Switching RHEL to FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. 
Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String Specify one or more repositories that may also contain the same images. Array of strings How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . To deploy a private cluster, which cannot be accessed from the internet, set publish to Internal . The default value is External . The SSH key to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . + Note If you are installing on GCP into a shared virtual private cloud (VPC), credentialsMode must be set to Passthrough or Manual . + Important Setting this parameter to Manual enables alternatives to storing administrator-level secrets in the kube-system project, which require additional configuration steps. For more information, see "Alternatives to storing administrator-level secrets in the kube-system project". 14.1.4. Additional Google Cloud Platform (GCP) configuration parameters Additional GCP configuration parameters are described in the following table: Table 14.4. Additional GCP parameters Parameter Description Values Optional. By default, the installation program downloads and installs the Red Hat Enterprise Linux CoreOS (RHCOS) image that is used to boot control plane machines. You can override the default behavior by specifying the location of a custom RHCOS image that the installation program is to use for control plane machines only. String. The name of GCP project where the image is located. The name of the custom RHCOS image that the installation program is to use to boot control plane machines. If you use controlPlane.platform.gcp.osImage.project , this field is required. String. The name of the RHCOS image. Optional. By default, the installation program downloads and installs the RHCOS image that is used to boot compute machines. You can override the default behavior by specifying the location of a custom RHCOS image that the installation program is to use for compute machines only. String. The name of GCP project where the image is located. The name of the custom RHCOS image that the installation program is to use to boot compute machines. If you use compute.platform.gcp.osImage.project , this field is required. String. The name of the RHCOS image. Specifies the email address of a GCP service account to be used during installations. This service account will be used to provision compute machines. String. The email address of the service account. The name of the existing Virtual Private Cloud (VPC) where you want to deploy your cluster. If you want to deploy your cluster into a shared VPC, you must set platform.gcp.networkProjectID with the name of the GCP project that contains the shared VPC. String. Optional. The name of the GCP project that contains the shared VPC where you want to deploy your cluster. String. The name of the GCP project where the installation program installs the cluster. String. The name of the GCP region that hosts your cluster. Any valid region name, such as us-central1 . The name of the existing subnet where you want to deploy your control plane machines. The subnet name. The name of the existing subnet where you want to deploy your compute machines. The subnet name. 
The availability zones where the installation program creates machines. A list of valid GCP availability zones , such as us-central1-a , in a YAML sequence . Important When running your cluster on GCP 64-bit ARM infrastructures, ensure that you use a zone where Ampere Altra Arm CPU's are available. You can find which zones are compatible with 64-bit ARM processors in the "GCP availability zones" link. The size of the disk in gigabytes (GB). Any size between 16 GB and 65536 GB. The GCP disk type . The default disk type for all machines. Valid values are pd-balanced , pd-ssd , pd-standard , or hyperdisk-balanced . The default value is pd-ssd . Control plane machines cannot use the pd-standard disk type, so if you specify pd-standard as the default machine platform disk type, you must specify a different disk type using the controlPlane.platform.gcp.osDisk.diskType parameter. Optional. By default, the installation program downloads and installs the RHCOS image that is used to boot control plane and compute machines. You can override the default behavior by specifying the location of a custom RHCOS image that the installation program is to use for both types of machines. String. The name of GCP project where the image is located. The name of the custom RHCOS image that the installation program is to use to boot control plane and compute machines. If you use platform.gcp.defaultMachinePlatform.osImage.project , this field is required. String. The name of the RHCOS image. Optional. Additional network tags to add to the control plane and compute machines. One or more strings, for example network-tag1 . The GCP machine type for control plane and compute machines. The GCP machine type, for example n1-standard-4 . The name of the customer managed encryption key to be used for machine disk encryption. The encryption key name. The name of the Key Management Service (KMS) key ring to which the KMS key belongs. The KMS key ring name. The GCP location in which the KMS key ring exists. The GCP location. The ID of the project in which the KMS key ring exists. This value defaults to the value of the platform.gcp.projectID parameter if it is not set. The GCP project ID. The GCP service account used for the encryption request for control plane and compute machines. If absent, the Compute Engine default service account is used. For more information about GCP service accounts, see Google's documentation on service accounts . The GCP service account email, for example <service_account_name>@<project_id>.iam.gserviceaccount.com . Whether to enable Shielded VM secure boot for all machines in the cluster. Shielded VMs have additional security protocols such as secure boot, firmware and integrity monitoring, and rootkit protection. For more information on Shielded VMs, see Google's documentation on Shielded VMs . Enabled or Disabled . The default value is Disabled . Whether to use Confidential VMs for all machines in the cluster. Confidential VMs provide encryption for data during processing. For more information on Confidential computing, see Google's documentation on Confidential computing . Enabled or Disabled . The default value is Disabled . Specifies the behavior of all VMs during a host maintenance event, such as a software or hardware update. For Confidential VMs, this parameter must be set to Terminate . Confidential VMs do not support live VM migration. Terminate or Migrate . The default value is Migrate . The name of the customer managed encryption key to be used for control plane machine disk encryption. 
The encryption key name. For control plane machines, the name of the KMS key ring to which the KMS key belongs. The KMS key ring name. For control plane machines, the GCP location in which the key ring exists. For more information about KMS locations, see Google's documentation on Cloud KMS locations . The GCP location for the key ring. For control plane machines, the ID of the project in which the KMS key ring exists. This value defaults to the VM project ID if not set. The GCP project ID. The GCP service account used for the encryption request for control plane machines. If absent, the Compute Engine default service account is used. For more information about GCP service accounts, see Google's documentation on service accounts . The GCP service account email, for example <service_account_name>@<project_id>.iam.gserviceaccount.com . The size of the disk in gigabytes (GB). This value applies to control plane machines. Any integer between 16 and 65536. The GCP disk type for control plane machines. Valid values are pd-balanced , pd-ssd , or hyperdisk-balanced . The default value is pd-ssd . Optional. Additional network tags to add to the control plane machines. If set, this parameter overrides the platform.gcp.defaultMachinePlatform.tags parameter for control plane machines. One or more strings, for example control-plane-tag1 . The GCP machine type for control plane machines. If set, this parameter overrides the platform.gcp.defaultMachinePlatform.type parameter. The GCP machine type, for example n1-standard-4 . The availability zones where the installation program creates control plane machines. A list of valid GCP availability zones , such as us-central1-a , in a YAML sequence . Important When running your cluster on GCP 64-bit ARM infrastructures, ensure that you use a zone where Ampere Altra Arm CPU's are available. You can find which zones are compatible with 64-bit ARM processors in the "GCP availability zones" link. Whether to enable Shielded VM secure boot for control plane machines. Shielded VMs have additional security protocols such as secure boot, firmware and integrity monitoring, and rootkit protection. For more information on Shielded VMs, see Google's documentation on Shielded VMs . Enabled or Disabled . The default value is Disabled . Whether to enable Confidential VMs for control plane machines. Confidential VMs provide encryption for data while it is being processed. For more information on Confidential VMs, see Google's documentation on Confidential Computing . Enabled or Disabled . The default value is Disabled . Specifies the behavior of control plane VMs during a host maintenance event, such as a software or hardware update. For Confidential VMs, this parameter must be set to Terminate . Confidential VMs do not support live VM migration. Terminate or Migrate . The default value is Migrate . Specifies the email address of a GCP service account to be used during installations. This service account will be used to provision control plane machines. Important In the case of shared VPC installations, when the service account is not provided, the installer service account must have the resourcemanager.projects.getIamPolicy and resourcemanager.projects.setIamPolicy permissions in the host project. String. The email address of the service account. The name of the customer managed encryption key to be used for compute machine disk encryption. The encryption key name. For compute machines, the name of the KMS key ring to which the KMS key belongs. The KMS key ring name. 
For compute machines, the GCP location in which the key ring exists. For more information about KMS locations, see Google's documentation on Cloud KMS locations . The GCP location for the key ring. For compute machines, the ID of the project in which the KMS key ring exists. This value defaults to the VM project ID if not set. The GCP project ID. The GCP service account used for the encryption request for compute machines. If this value is not set, the Compute Engine default service account is used. For more information about GCP service accounts, see Google's documentation on service accounts . The GCP service account email, for example <service_account_name>@<project_id>.iam.gserviceaccount.com . The size of the disk in gigabytes (GB). This value applies to compute machines. Any integer between 16 and 65536. The GCP disk type for compute machines. Valid values are pd-balanced , pd-ssd , pd-standard , or hyperdisk-balanced . The default value is pd-ssd . Optional. Additional network tags to add to the compute machines. If set, this parameter overrides the platform.gcp.defaultMachinePlatform.tags parameter for compute machines. One or more strings, for example compute-network-tag1 . The GCP machine type for compute machines. If set, this parameter overrides the platform.gcp.defaultMachinePlatform.type parameter. The GCP machine type, for example n1-standard-4 . The availability zones where the installation program creates compute machines. A list of valid GCP availability zones , such as us-central1-a , in a YAML sequence . Important When running your cluster on GCP 64-bit ARM infrastructures, ensure that you use a zone where Ampere Altra Arm CPU's are available. You can find which zones are compatible with 64-bit ARM processors in the "GCP availability zones" link. Whether to enable Shielded VM secure boot for compute machines. Shielded VMs have additional security protocols such as secure boot, firmware and integrity monitoring, and rootkit protection. For more information on Shielded VMs, see Google's documentation on Shielded VMs . Enabled or Disabled . The default value is Disabled . Whether to enable Confidential VMs for compute machines. Confidential VMs provide encryption for data while it is being processed. For more information on Confidential VMs, see Google's documentation on Confidential Computing . Enabled or Disabled . The default value is Disabled . Specifies the behavior of compute VMs during a host maintenance event, such as a software or hardware update. For Confidential VMs, this parameter must be set to Terminate . Confidential VMs do not support live VM migration. Terminate or Migrate . The default value is Migrate .
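To tie several of these parameters together, the following hedged install-config.yaml fragment shows a GCP platform section with an explicit project, region, default machine type, zones, and disk settings; all values are placeholders, and unrelated required fields such as baseDomain and pullSecret are omitted:

platform:
  gcp:
    projectID: example-gcp-project
    region: us-central1
    defaultMachinePlatform:
      type: n1-standard-4
      zones:
      - us-central1-a
      - us-central1-b
      osDisk:
        diskType: pd-ssd
        diskSizeGB: 128
      secureBoot: Enabled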
[ "apiVersion:", "baseDomain:", "metadata:", "metadata: name:", "platform:", "pullSecret:", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking:", "networking: networkType:", "networking: clusterNetwork:", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: clusterNetwork: cidr:", "networking: clusterNetwork: hostPrefix:", "networking: serviceNetwork:", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork:", "networking: machineNetwork: - cidr: 10.0.0.0/16", "networking: machineNetwork: cidr:", "additionalTrustBundle:", "capabilities:", "capabilities: baselineCapabilitySet:", "capabilities: additionalEnabledCapabilities:", "cpuPartitioningMode:", "compute:", "compute: architecture:", "compute: hyperthreading:", "compute: name:", "compute: platform:", "compute: replicas:", "featureSet:", "controlPlane:", "controlPlane: architecture:", "controlPlane: hyperthreading:", "controlPlane: name:", "controlPlane: platform:", "controlPlane: replicas:", "credentialsMode:", "fips:", "imageContentSources:", "imageContentSources: source:", "imageContentSources: mirrors:", "publish:", "sshKey:", "controlPlane: platform: gcp: osImage: project:", "controlPlane: platform: gcp: osImage: name:", "compute: platform: gcp: osImage: project:", "compute: platform: gcp: osImage: name:", "compute: platform: gcp: serviceAccount:", "platform: gcp: network:", "platform: gcp: networkProjectID:", "platform: gcp: projectID:", "platform: gcp: region:", "platform: gcp: controlPlaneSubnet:", "platform: gcp: computeSubnet:", "platform: gcp: defaultMachinePlatform: zones:", "platform: gcp: defaultMachinePlatform: osDisk: diskSizeGB:", "platform: gcp: defaultMachinePlatform: osDisk: diskType:", "platform: gcp: defaultMachinePlatform: osImage: project:", "platform: gcp: defaultMachinePlatform: osImage: name:", "platform: gcp: defaultMachinePlatform: tags:", "platform: gcp: defaultMachinePlatform: type:", "platform: gcp: defaultMachinePlatform: osDisk: encryptionKey: kmsKey: name:", "platform: gcp: defaultMachinePlatform: osDisk: encryptionKey: kmsKey: keyRing:", "platform: gcp: defaultMachinePlatform: osDisk: encryptionKey: kmsKey: location:", "platform: gcp: defaultMachinePlatform: osDisk: encryptionKey: kmsKey: projectID:", "platform: gcp: defaultMachinePlatform: osDisk: encryptionKey: kmsKeyServiceAccount:", "platform: gcp: defaultMachinePlatform: secureBoot:", "platform: gcp: defaultMachinePlatform: confidentialCompute:", "platform: gcp: defaultMachinePlatform: onHostMaintenance:", "controlPlane: platform: gcp: osDisk: encryptionKey: kmsKey: name:", "controlPlane: platform: gcp: osDisk: encryptionKey: kmsKey: keyRing:", "controlPlane: platform: gcp: osDisk: encryptionKey: kmsKey: location:", "controlPlane: platform: gcp: osDisk: encryptionKey: kmsKey: projectID:", "controlPlane: platform: gcp: osDisk: encryptionKey: kmsKeyServiceAccount:", "controlPlane: platform: gcp: osDisk: diskSizeGB:", "controlPlane: platform: gcp: osDisk: diskType:", "controlPlane: platform: gcp: tags:", "controlPlane: platform: gcp: type:", "controlPlane: platform: gcp: zones:", "controlPlane: platform: gcp: secureBoot:", "controlPlane: platform: gcp: confidentialCompute:", "controlPlane: platform: gcp: onHostMaintenance:", "controlPlane: platform: gcp: serviceAccount:", "compute: platform: gcp: osDisk: encryptionKey: kmsKey: name:", "compute: platform: gcp: osDisk: 
encryptionKey: kmsKey: keyRing:", "compute: platform: gcp: osDisk: encryptionKey: kmsKey: location:", "compute: platform: gcp: osDisk: encryptionKey: kmsKey: projectID:", "compute: platform: gcp: osDisk: encryptionKey: kmsKeyServiceAccount:", "compute: platform: gcp: osDisk: diskSizeGB:", "compute: platform: gcp: osDisk: diskType:", "compute: platform: gcp: tags:", "compute: platform: gcp: type:", "compute: platform: gcp: zones:", "compute: platform: gcp: secureBoot:", "compute: platform: gcp: confidentialCompute:", "compute: platform: gcp: onHostMaintenance:" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/installing_on_gcp/installation-config-parameters-gcp
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_build_of_cryostat/3/html/enabling_dynamic_jfr_recordings_based_on_mbean_custom_triggers/making-open-source-more-inclusive
Chapter 19. API reference
Chapter 19. API reference 19.1. 5.6 Logging API reference 19.1.1. Logging 5.6 API reference 19.1.1.1. ClusterLogForwarder ClusterLogForwarder is an API to configure forwarding logs. You configure forwarding by specifying a list of pipelines , which forward from a set of named inputs to a set of named outputs. There are built-in input names for common log categories, and you can define custom inputs to do additional filtering. There is a built-in output name for the default openshift log store, but you can define your own outputs with a URL and other connection information to forward logs to other stores or processors, inside or outside the cluster. For more details see the documentation on the API fields. Property Type Description spec object Specification of the desired behavior of ClusterLogForwarder status object Status of the ClusterLogForwarder 19.1.1.1.1. .spec 19.1.1.1.1.1. Description ClusterLogForwarderSpec defines how logs should be forwarded to remote targets. 19.1.1.1.1.1.1. Type object Property Type Description inputs array (optional) Inputs are named filters for log messages to be forwarded. outputDefaults object (optional) DEPRECATED OutputDefaults specify forwarder config explicitly for the default store. outputs array (optional) Outputs are named destinations for log messages. pipelines array Pipelines forward the messages selected by a set of inputs to a set of outputs. 19.1.1.1.2. .spec.inputs[] 19.1.1.1.2.1. Description InputSpec defines a selector of log messages. 19.1.1.1.2.1.1. Type array Property Type Description application object (optional) Application, if present, enables named set of application logs that name string Name used to refer to the input of a pipeline . 19.1.1.1.3. .spec.inputs[].application 19.1.1.1.3.1. Description Application log selector. All conditions in the selector must be satisfied (logical AND) to select logs. 19.1.1.1.3.1.1. Type object Property Type Description namespaces array (optional) Namespaces from which to collect application logs. selector object (optional) Selector for logs from pods with matching labels. 19.1.1.1.4. .spec.inputs[].application.namespaces[] 19.1.1.1.4.1. Description 19.1.1.1.4.1.1. Type array 19.1.1.1.5. .spec.inputs[].application.selector 19.1.1.1.5.1. Description A label selector is a label query over a set of resources. 19.1.1.1.5.1.1. Type object Property Type Description matchLabels object (optional) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels 19.1.1.1.6. .spec.inputs[].application.selector.matchLabels 19.1.1.1.6.1. Description 19.1.1.1.6.1.1. Type object 19.1.1.1.7. .spec.outputDefaults 19.1.1.1.7.1. Description 19.1.1.1.7.1.1. Type object Property Type Description elasticsearch object (optional) Elasticsearch OutputSpec default values 19.1.1.1.8. .spec.outputDefaults.elasticsearch 19.1.1.1.8.1. Description ElasticsearchStructuredSpec is spec related to structured log changes to determine the elasticsearch index 19.1.1.1.8.1.1. Type object Property Type Description enableStructuredContainerLogs bool (optional) EnableStructuredContainerLogs enables multi-container structured logs to allow structuredTypeKey string (optional) StructuredTypeKey specifies the metadata key to be used as name of elasticsearch index structuredTypeName string (optional) StructuredTypeName specifies the name of elasticsearch schema 19.1.1.1.9. .spec.outputs[] 19.1.1.1.9.1. Description Output defines a destination for log messages. 19.1.1.1.9.1.1. 
Type array Property Type Description syslog object (optional) fluentdForward object (optional) elasticsearch object (optional) kafka object (optional) cloudwatch object (optional) loki object (optional) googleCloudLogging object (optional) splunk object (optional) name string Name used to refer to the output from a pipeline . secret object (optional) Secret for authentication. tls object TLS contains settings for controlling options on TLS client connections. type string Type of output plugin. url string (optional) URL to send log records to. 19.1.1.1.10. .spec.outputs[].secret 19.1.1.1.10.1. Description OutputSecretSpec is a secret reference containing name only, no namespace. 19.1.1.1.10.1.1. Type object Property Type Description name string Name of a secret in the namespace configured for log forwarder secrets. 19.1.1.1.11. .spec.outputs[].tls 19.1.1.1.11.1. Description OutputTLSSpec contains options for TLS connections that are agnostic to the output type. 19.1.1.1.11.1.1. Type object Property Type Description insecureSkipVerify bool If InsecureSkipVerify is true, then the TLS client will be configured to ignore errors with certificates. 19.1.1.1.12. .spec.pipelines[] 19.1.1.1.12.1. Description PipelinesSpec links a set of inputs to a set of outputs. 19.1.1.1.12.1.1. Type array Property Type Description detectMultilineErrors bool (optional) DetectMultilineErrors enables multiline error detection of container logs inputRefs array InputRefs lists the names ( input.name ) of inputs to this pipeline. labels object (optional) Labels applied to log records passing through this pipeline. name string (optional) Name is optional, but must be unique in the pipelines list if provided. outputRefs array OutputRefs lists the names ( output.name ) of outputs from this pipeline. parse string (optional) Parse enables parsing of log entries into structured logs 19.1.1.1.13. .spec.pipelines[].inputRefs[] 19.1.1.1.13.1. Description 19.1.1.1.13.1.1. Type array 19.1.1.1.14. .spec.pipelines[].labels 19.1.1.1.14.1. Description 19.1.1.1.14.1.1. Type object 19.1.1.1.15. .spec.pipelines[].outputRefs[] 19.1.1.1.15.1. Description 19.1.1.1.15.1.1. Type array 19.1.1.1.16. .status 19.1.1.1.16.1. Description ClusterLogForwarderStatus defines the observed state of ClusterLogForwarder 19.1.1.1.16.1.1. Type object Property Type Description conditions object Conditions of the log forwarder. inputs Conditions Inputs maps input name to condition of the input. outputs Conditions Outputs maps output name to condition of the output. pipelines Conditions Pipelines maps pipeline name to condition of the pipeline. 19.1.1.1.17. .status.conditions 19.1.1.1.17.1. Description 19.1.1.1.17.1.1. Type object 19.1.1.1.18. .status.inputs 19.1.1.1.18.1. Description 19.1.1.1.18.1.1. Type Conditions 19.1.1.1.19. .status.outputs 19.1.1.1.19.1. Description 19.1.1.1.19.1.1. Type Conditions 19.1.1.1.20. .status.pipelines 19.1.1.1.20.1. Description 19.1.1.1.20.1.1. Type Conditions ClusterLogging A Red Hat OpenShift Logging instance. ClusterLogging is the Schema for the clusterloggings API Property Type Description spec object Specification of the desired behavior of ClusterLogging status object Status defines the observed state of ClusterLogging 19.1.1.1.21. .spec 19.1.1.1.21.1. Description ClusterLoggingSpec defines the desired state of ClusterLogging 19.1.1.1.21.1.1. Type object Property Type Description collection object Specification of the Collection component for the cluster curation object (DEPRECATED) (optional) Deprecated.
Specification of the Curation component for the cluster forwarder object (DEPRECATED) (optional) Deprecated. Specification for Forwarder component for the cluster logStore object (optional) Specification of the Log Storage component for the cluster managementState string (optional) Indicator if the resource is 'Managed' or 'Unmanaged' by the operator visualization object (optional) Specification of the Visualization component for the cluster 19.1.1.1.22. .spec.collection 19.1.1.1.22.1. Description This is the struct that will contain information pertinent to Log and event collection 19.1.1.1.22.1.1. Type object Property Type Description resources object (optional) The resource requirements for the collector nodeSelector object (optional) Define which Nodes the Pods are scheduled on. tolerations array (optional) Define the tolerations the Pods will accept fluentd object (optional) Fluentd represents the configuration for forwarders of type fluentd. logs object (DEPRECATED) (optional) Deprecated. Specification of Log Collection for the cluster type string (optional) The type of Log Collection to configure 19.1.1.1.23. .spec.collection.fluentd 19.1.1.1.23.1. Description FluentdForwarderSpec represents the configuration for forwarders of type fluentd. 19.1.1.1.23.1.1. Type object Property Type Description buffer object inFile object 19.1.1.1.24. .spec.collection.fluentd.buffer 19.1.1.1.24.1. Description FluentdBufferSpec represents a subset of fluentd buffer parameters to tune the buffer configuration for all fluentd outputs. It supports a subset of parameters to configure buffer and queue sizing, flush operations and retry flushing. For general parameters refer to: https://docs.fluentd.org/configuration/buffer-section#buffering-parameters For flush parameters refer to: https://docs.fluentd.org/configuration/buffer-section#flushing-parameters For retry parameters refer to: https://docs.fluentd.org/configuration/buffer-section#retries-parameters 19.1.1.1.24.1.1. Type object Property Type Description chunkLimitSize string (optional) ChunkLimitSize represents the maximum size of each chunk. Events will be flushInterval string (optional) FlushInterval represents the time duration to wait between two consecutive flush flushMode string (optional) FlushMode represents the mode of the flushing thread to write chunks. The mode flushThreadCount int (optional) FlushThreadCount represents the number of threads used by the fluentd buffer overflowAction string (optional) OverflowAction represents the action for the fluentd buffer plugin to retryMaxInterval string (optional) RetryMaxInterval represents the maximum time interval for exponential backoff retryTimeout string (optional) RetryTimeout represents the maximum time interval to attempt retries before giving up retryType string (optional) RetryType represents the type of retrying flush operations. Flush operations can retryWait string (optional) RetryWait represents the time duration between two consecutive retries to flush totalLimitSize string (optional) TotalLimitSize represents the threshold of node space allowed per fluentd 19.1.1.1.25. .spec.collection.fluentd.inFile 19.1.1.1.25.1. Description FluentdInFileSpec represents a subset of fluentd in-tail plugin parameters to tune the configuration for all fluentd in-tail inputs. For general parameters refer to: https://docs.fluentd.org/input/tail#parameters 19.1.1.1.25.1.1.
Type object Property Type Description readLinesLimit int (optional) ReadLinesLimit represents the number of lines to read with each I/O operation 19.1.1.1.26. .spec.collection.logs 19.1.1.1.26.1. Description 19.1.1.1.26.1.1. Type object Property Type Description fluentd object Specification of the Fluentd Log Collection component type string The type of Log Collection to configure 19.1.1.1.27. .spec.collection.logs.fluentd 19.1.1.1.27.1. Description CollectorSpec is spec to define scheduling and resources for a collector 19.1.1.1.27.1.1. Type object Property Type Description nodeSelector object (optional) Define which Nodes the Pods are scheduled on. resources object (optional) The resource requirements for the collector tolerations array (optional) Define the tolerations the Pods will accept 19.1.1.1.28. .spec.collection.logs.fluentd.nodeSelector 19.1.1.1.28.1. Description 19.1.1.1.28.1.1. Type object 19.1.1.1.29. .spec.collection.logs.fluentd.resources 19.1.1.1.29.1. Description 19.1.1.1.29.1.1. Type object Property Type Description limits object (optional) Limits describes the maximum amount of compute resources allowed. requests object (optional) Requests describes the minimum amount of compute resources required. 19.1.1.1.30. .spec.collection.logs.fluentd.resources.limits 19.1.1.1.30.1. Description 19.1.1.1.30.1.1. Type object 19.1.1.1.31. .spec.collection.logs.fluentd.resources.requests 19.1.1.1.31.1. Description 19.1.1.1.31.1.1. Type object 19.1.1.1.32. .spec.collection.logs.fluentd.tolerations[] 19.1.1.1.32.1. Description 19.1.1.1.32.1.1. Type array Property Type Description effect string (optional) Effect indicates the taint effect to match. Empty means match all taint effects. key string (optional) Key is the taint key that the toleration applies to. Empty means match all taint keys. operator string (optional) Operator represents a key's relationship to the value. tolerationSeconds int (optional) TolerationSeconds represents the period of time the toleration (which must be value string (optional) Value is the taint value the toleration matches to. 19.1.1.1.33. .spec.collection.logs.fluentd.tolerations[].tolerationSeconds 19.1.1.1.33.1. Description 19.1.1.1.33.1.1. Type int 19.1.1.1.34. .spec.curation 19.1.1.1.34.1. Description This is the struct that will contain information pertinent to Log curation (Curator) 19.1.1.1.34.1.1. Type object Property Type Description curator object The specification of curation to configure type string The kind of curation to configure 19.1.1.1.35. .spec.curation.curator 19.1.1.1.35.1. Description 19.1.1.1.35.1.1. Type object Property Type Description nodeSelector object Define which Nodes the Pods are scheduled on. resources object (optional) The resource requirements for Curator schedule string The cron schedule that the Curator job is run. Defaults to "30 3 * * *" tolerations array 19.1.1.1.36. .spec.curation.curator.nodeSelector 19.1.1.1.36.1. Description 19.1.1.1.36.1.1. Type object 19.1.1.1.37. .spec.curation.curator.resources 19.1.1.1.37.1. Description 19.1.1.1.37.1.1. Type object Property Type Description limits object (optional) Limits describes the maximum amount of compute resources allowed. requests object (optional) Requests describes the minimum amount of compute resources required. 19.1.1.1.38. .spec.curation.curator.resources.limits 19.1.1.1.38.1. Description 19.1.1.1.38.1.1. Type object 19.1.1.1.39. .spec.curation.curator.resources.requests 19.1.1.1.39.1. Description 19.1.1.1.39.1.1. Type object 19.1.1.1.40. 
.spec.curation.curator.tolerations[] 19.1.1.1.40.1. Description 19.1.1.1.40.1.1. Type array Property Type Description effect string (optional) Effect indicates the taint effect to match. Empty means match all taint effects. key string (optional) Key is the taint key that the toleration applies to. Empty means match all taint keys. operator string (optional) Operator represents a key's relationship to the value. tolerationSeconds int (optional) TolerationSeconds represents the period of time the toleration (which must be value string (optional) Value is the taint value the toleration matches to. 19.1.1.1.41. .spec.curation.curator.tolerations[].tolerationSeconds 19.1.1.1.41.1. Description 19.1.1.1.41.1.1. Type int 19.1.1.1.42. .spec.forwarder 19.1.1.1.42.1. Description ForwarderSpec contains global tuning parameters for specific forwarder implementations. This field is not required for general use; it allows performance tuning by users familiar with the underlying forwarder technology. Currently supported: fluentd . 19.1.1.1.42.1.1. Type object Property Type Description fluentd object 19.1.1.1.43. .spec.forwarder.fluentd 19.1.1.1.43.1. Description FluentdForwarderSpec represents the configuration for forwarders of type fluentd. 19.1.1.1.43.1.1. Type object Property Type Description buffer object inFile object 19.1.1.1.44. .spec.forwarder.fluentd.buffer 19.1.1.1.44.1. Description FluentdBufferSpec represents a subset of fluentd buffer parameters to tune the buffer configuration for all fluentd outputs. It supports a subset of parameters to configure buffer and queue sizing, flush operations and retry flushing. For general parameters refer to: https://docs.fluentd.org/configuration/buffer-section#buffering-parameters For flush parameters refer to: https://docs.fluentd.org/configuration/buffer-section#flushing-parameters For retry parameters refer to: https://docs.fluentd.org/configuration/buffer-section#retries-parameters 19.1.1.1.44.1.1. Type object Property Type Description chunkLimitSize string (optional) ChunkLimitSize represents the maximum size of each chunk. Events will be flushInterval string (optional) FlushInterval represents the time duration to wait between two consecutive flush flushMode string (optional) FlushMode represents the mode of the flushing thread to write chunks. The mode flushThreadCount int (optional) FlushThreadCount represents the number of threads used by the fluentd buffer overflowAction string (optional) OverflowAction represents the action for the fluentd buffer plugin to retryMaxInterval string (optional) RetryMaxInterval represents the maximum time interval for exponential backoff retryTimeout string (optional) RetryTimeout represents the maximum time interval to attempt retries before giving up retryType string (optional) RetryType represents the type of retrying flush operations. Flush operations can retryWait string (optional) RetryWait represents the time duration between two consecutive retries to flush totalLimitSize string (optional) TotalLimitSize represents the threshold of node space allowed per fluentd 19.1.1.1.45. .spec.forwarder.fluentd.inFile 19.1.1.1.45.1. Description FluentdInFileSpec represents a subset of fluentd in-tail plugin parameters to tune the configuration for all fluentd in-tail inputs. For general parameters refer to: https://docs.fluentd.org/input/tail#parameters 19.1.1.1.45.1.1.
Type object Property Type Description readLinesLimit int (optional) ReadLinesLimit represents the number of lines to read with each I/O operation 19.1.1.1.46. .spec.logStore 19.1.1.1.46.1. Description The LogStoreSpec contains information about how logs are stored. 19.1.1.1.46.1.1. Type object Property Type Description elasticsearch object Specification of the Elasticsearch Log Store component lokistack object LokiStack contains information about which LokiStack to use for log storage if Type is set to LogStoreTypeLokiStack. retentionPolicy object (optional) Retention policy defines the maximum age for an index after which it should be deleted type string The Type of Log Storage to configure. The operator currently supports either using ElasticSearch 19.1.1.1.47. .spec.logStore.elasticsearch 19.1.1.1.47.1. Description 19.1.1.1.47.1.1. Type object Property Type Description nodeCount int Number of nodes to deploy for Elasticsearch nodeSelector object Define which Nodes the Pods are scheduled on. proxy object Specification of the Elasticsearch Proxy component redundancyPolicy string (optional) resources object (optional) The resource requirements for Elasticsearch storage object (optional) The storage specification for Elasticsearch data nodes tolerations array 19.1.1.1.48. .spec.logStore.elasticsearch.nodeSelector 19.1.1.1.48.1. Description 19.1.1.1.48.1.1. Type object 19.1.1.1.49. .spec.logStore.elasticsearch.proxy 19.1.1.1.49.1. Description 19.1.1.1.49.1.1. Type object Property Type Description resources object 19.1.1.1.50. .spec.logStore.elasticsearch.proxy.resources 19.1.1.1.50.1. Description 19.1.1.1.50.1.1. Type object Property Type Description limits object (optional) Limits describes the maximum amount of compute resources allowed. requests object (optional) Requests describes the minimum amount of compute resources required. 19.1.1.1.51. .spec.logStore.elasticsearch.proxy.resources.limits 19.1.1.1.51.1. Description 19.1.1.1.51.1.1. Type object 19.1.1.1.52. .spec.logStore.elasticsearch.proxy.resources.requests 19.1.1.1.52.1. Description 19.1.1.1.52.1.1. Type object 19.1.1.1.53. .spec.logStore.elasticsearch.resources 19.1.1.1.53.1. Description 19.1.1.1.53.1.1. Type object Property Type Description limits object (optional) Limits describes the maximum amount of compute resources allowed. requests object (optional) Requests describes the minimum amount of compute resources required. 19.1.1.1.54. .spec.logStore.elasticsearch.resources.limits 19.1.1.1.54.1. Description 19.1.1.1.54.1.1. Type object 19.1.1.1.55. .spec.logStore.elasticsearch.resources.requests 19.1.1.1.55.1. Description 19.1.1.1.55.1.1. Type object 19.1.1.1.56. .spec.logStore.elasticsearch.storage 19.1.1.1.56.1. Description 19.1.1.1.56.1.1. Type object Property Type Description size object The max storage capacity for the node to provision. storageClassName string (optional) The name of the storage class to use with creating the node's PVC. 19.1.1.1.57. .spec.logStore.elasticsearch.storage.size 19.1.1.1.57.1. Description 19.1.1.1.57.1.1. Type object Property Type Description Format string Change Format at will. See the comment for Canonicalize for d object d is the quantity in inf.Dec form if d.Dec != nil i int i is the quantity in int64 scaled form, if d.Dec == nil s string s is the generated value of this quantity to avoid recalculation 19.1.1.1.58. .spec.logStore.elasticsearch.storage.size.d 19.1.1.1.58.1. Description 19.1.1.1.58.1.1. Type object Property Type Description Dec object 19.1.1.1.59. 
.spec.logStore.elasticsearch.storage.size.d.Dec 19.1.1.1.59.1. Description 19.1.1.1.59.1.1. Type object Property Type Description scale int unscaled object 19.1.1.1.60. .spec.logStore.elasticsearch.storage.size.d.Dec.unscaled 19.1.1.1.60.1. Description 19.1.1.1.60.1.1. Type object Property Type Description abs Word sign neg bool 19.1.1.1.61. .spec.logStore.elasticsearch.storage.size.d.Dec.unscaled.abs 19.1.1.1.61.1. Description 19.1.1.1.61.1.1. Type Word 19.1.1.1.62. .spec.logStore.elasticsearch.storage.size.i 19.1.1.1.62.1. Description 19.1.1.1.62.1.1. Type int Property Type Description scale int value int 19.1.1.1.63. .spec.logStore.elasticsearch.tolerations[] 19.1.1.1.63.1. Description 19.1.1.1.63.1.1. Type array Property Type Description effect string (optional) Effect indicates the taint effect to match. Empty means match all taint effects. key string (optional) Key is the taint key that the toleration applies to. Empty means match all taint keys. operator string (optional) Operator represents a key's relationship to the value. tolerationSeconds int (optional) TolerationSeconds represents the period of time the toleration (which must be value string (optional) Value is the taint value the toleration matches to. 19.1.1.1.64. .spec.logStore.elasticsearch.tolerations[].tolerationSeconds 19.1.1.1.64.1. Description 19.1.1.1.64.1.1. Type int 19.1.1.1.65. .spec.logStore.lokistack 19.1.1.1.65.1. Description LokiStackStoreSpec is used to set up cluster-logging to use a LokiStack as logging storage. It points to an existing LokiStack in the same namespace. 19.1.1.1.65.1.1. Type object Property Type Description name string Name of the LokiStack resource. 19.1.1.1.66. .spec.logStore.retentionPolicy 19.1.1.1.66.1. Description 19.1.1.1.66.1.1. Type object Property Type Description application object audit object infra object 19.1.1.1.67. .spec.logStore.retentionPolicy.application 19.1.1.1.67.1. Description 19.1.1.1.67.1.1. Type object Property Type Description diskThresholdPercent int (optional) The threshold percentage of ES disk usage that when reached, old indices should be deleted (e.g. 75) maxAge string (optional) namespaceSpec array (optional) The per namespace specification to delete documents older than a given minimum age pruneNamespacesInterval string (optional) How often to run a new prune-namespaces job 19.1.1.1.68. .spec.logStore.retentionPolicy.application.namespaceSpec[] 19.1.1.1.68.1. Description 19.1.1.1.68.1.1. Type array Property Type Description minAge string (optional) Delete the records matching the namespaces which are older than this MinAge (e.g. 1d) namespace string Target Namespace to delete logs older than MinAge (defaults to 7d) 19.1.1.1.69. .spec.logStore.retentionPolicy.audit 19.1.1.1.69.1. Description 19.1.1.1.69.1.1. Type object Property Type Description diskThresholdPercent int (optional) The threshold percentage of ES disk usage that when reached, old indices should be deleted (e.g. 75) maxAge string (optional) namespaceSpec array (optional) The per namespace specification to delete documents older than a given minimum age pruneNamespacesInterval string (optional) How often to run a new prune-namespaces job 19.1.1.1.70. .spec.logStore.retentionPolicy.audit.namespaceSpec[] 19.1.1.1.70.1. Description 19.1.1.1.70.1.1. Type array Property Type Description minAge string (optional) Delete the records matching the namespaces which are older than this MinAge (e.g. 1d) namespace string Target Namespace to delete logs older than MinAge (defaults to 7d) 19.1.1.1.71. 
.spec.logStore.retentionPolicy.infra 19.1.1.1.71.1. Description 19.1.1.1.71.1.1. Type object Property Type Description diskThresholdPercent int (optional) The threshold percentage of ES disk usage that when reached, old indices should be deleted (e.g. 75) maxAge string (optional) namespaceSpec array (optional) The per namespace specification to delete documents older than a given minimum age pruneNamespacesInterval string (optional) How often to run a new prune-namespaces job 19.1.1.1.72. .spec.logStore.retentionPolicy.infra.namespaceSpec[] 19.1.1.1.72.1. Description 19.1.1.1.72.1.1. Type array Property Type Description minAge string (optional) Delete the records matching the namespaces which are older than this MinAge (e.g. 1d) namespace string Target Namespace to delete logs older than MinAge (defaults to 7d) 19.1.1.1.73. .spec.visualization 19.1.1.1.73.1. Description This is the struct that will contain information pertinent to Log visualization (Kibana) 19.1.1.1.73.1.1. Type object Property Type Description kibana object Specification of the Kibana Visualization component type string The type of Visualization to configure 19.1.1.1.74. .spec.visualization.kibana 19.1.1.1.74.1. Description 19.1.1.1.74.1.1. Type object Property Type Description nodeSelector object Define which Nodes the Pods are scheduled on. proxy object Specification of the Kibana Proxy component replicas int Number of instances to deploy for a Kibana deployment resources object (optional) The resource requirements for Kibana tolerations array 19.1.1.1.75. .spec.visualization.kibana.nodeSelector 19.1.1.1.75.1. Description 19.1.1.1.75.1.1. Type object 19.1.1.1.76. .spec.visualization.kibana.proxy 19.1.1.1.76.1. Description 19.1.1.1.76.1.1. Type object Property Type Description resources object 19.1.1.1.77. .spec.visualization.kibana.proxy.resources 19.1.1.1.77.1. Description 19.1.1.1.77.1.1. Type object Property Type Description limits object (optional) Limits describes the maximum amount of compute resources allowed. requests object (optional) Requests describes the minimum amount of compute resources required. 19.1.1.1.78. .spec.visualization.kibana.proxy.resources.limits 19.1.1.1.78.1. Description 19.1.1.1.78.1.1. Type object 19.1.1.1.79. .spec.visualization.kibana.proxy.resources.requests 19.1.1.1.79.1. Description 19.1.1.1.79.1.1. Type object 19.1.1.1.80. .spec.visualization.kibana.replicas 19.1.1.1.80.1. Description 19.1.1.1.80.1.1. Type int 19.1.1.1.81. .spec.visualization.kibana.resources 19.1.1.1.81.1. Description 19.1.1.1.81.1.1. Type object Property Type Description limits object (optional) Limits describes the maximum amount of compute resources allowed. requests object (optional) Requests describes the minimum amount of compute resources required. 19.1.1.1.82. .spec.visualization.kibana.resources.limits 19.1.1.1.82.1. Description 19.1.1.1.82.1.1. Type object 19.1.1.1.83. .spec.visualization.kibana.resources.requests 19.1.1.1.83.1. Description 19.1.1.1.83.1.1. Type object 19.1.1.1.84. .spec.visualization.kibana.tolerations[] 19.1.1.1.84.1. Description 19.1.1.1.84.1.1. Type array Property Type Description effect string (optional) Effect indicates the taint effect to match. Empty means match all taint effects. key string (optional) Key is the taint key that the toleration applies to. Empty means match all taint keys. operator string (optional) Operator represents a key's relationship to the value. 
tolerationSeconds int (optional) TolerationSeconds represents the period of time the toleration (which must be value string (optional) Value is the taint value the toleration matches to. 19.1.1.1.85. .spec.visualization.kibana.tolerations[].tolerationSeconds 19.1.1.1.85.1. Description 19.1.1.1.85.1.1. Type int 19.1.1.1.86. .status 19.1.1.1.86.1. Description ClusterLoggingStatus defines the observed state of ClusterLogging 19.1.1.1.86.1.1. Type object Property Type Description collection object (optional) conditions object (optional) curation object (optional) logStore object (optional) visualization object (optional) 19.1.1.1.87. .status.collection 19.1.1.1.87.1. Description 19.1.1.1.87.1.1. Type object Property Type Description logs object (optional) 19.1.1.1.88. .status.collection.logs 19.1.1.1.88.1. Description 19.1.1.1.88.1.1. Type object Property Type Description fluentdStatus object (optional) 19.1.1.1.89. .status.collection.logs.fluentdStatus 19.1.1.1.89.1. Description 19.1.1.1.89.1.1. Type object Property Type Description clusterCondition object (optional) daemonSet string (optional) nodes object (optional) pods string (optional) 19.1.1.1.90. .status.collection.logs.fluentdStatus.clusterCondition 19.1.1.1.90.1. Description operator-sdk generate crds does not allow map-of-slice, must use a named type. 19.1.1.1.90.1.1. Type object 19.1.1.1.91. .status.collection.logs.fluentdStatus.nodes 19.1.1.1.91.1. Description 19.1.1.1.91.1.1. Type object 19.1.1.1.92. .status.conditions 19.1.1.1.92.1. Description 19.1.1.1.92.1.1. Type object 19.1.1.1.93. .status.curation 19.1.1.1.93.1. Description 19.1.1.1.93.1.1. Type object Property Type Description curatorStatus array (optional) 19.1.1.1.94. .status.curation.curatorStatus[] 19.1.1.1.94.1. Description 19.1.1.1.94.1.1. Type array Property Type Description clusterCondition object (optional) cronJobs string (optional) schedules string (optional) suspended bool (optional) 19.1.1.1.95. .status.curation.curatorStatus[].clusterCondition 19.1.1.1.95.1. Description operator-sdk generate crds does not allow map-of-slice, must use a named type. 19.1.1.1.95.1.1. Type object 19.1.1.1.96. .status.logStore 19.1.1.1.96.1. Description 19.1.1.1.96.1.1. Type object Property Type Description elasticsearchStatus array (optional) 19.1.1.1.97. .status.logStore.elasticsearchStatus[] 19.1.1.1.97.1. Description 19.1.1.1.97.1.1. Type array Property Type Description cluster object (optional) clusterConditions object (optional) clusterHealth string (optional) clusterName string (optional) deployments array (optional) nodeConditions object (optional) nodeCount int (optional) pods object (optional) replicaSets array (optional) shardAllocationEnabled string (optional) statefulSets array (optional) 19.1.1.1.98. .status.logStore.elasticsearchStatus[].cluster 19.1.1.1.98.1. Description 19.1.1.1.98.1.1. Type object Property Type Description activePrimaryShards int The number of Active Primary Shards for the Elasticsearch Cluster activeShards int The number of Active Shards for the Elasticsearch Cluster initializingShards int The number of Initializing Shards for the Elasticsearch Cluster numDataNodes int The number of Data Nodes for the Elasticsearch Cluster numNodes int The number of Nodes for the Elasticsearch Cluster pendingTasks int relocatingShards int The number of Relocating Shards for the Elasticsearch Cluster status string The current Status of the Elasticsearch Cluster unassignedShards int The number of Unassigned Shards for the Elasticsearch Cluster 19.1.1.1.99. 
.status.logStore.elasticsearchStatus[].clusterConditions 19.1.1.1.99.1. Description 19.1.1.1.99.1.1. Type object 19.1.1.1.100. .status.logStore.elasticsearchStatus[].deployments[] 19.1.1.1.100.1. Description 19.1.1.1.100.1.1. Type array 19.1.1.1.101. .status.logStore.elasticsearchStatus[].nodeConditions 19.1.1.1.101.1. Description 19.1.1.1.101.1.1. Type object 19.1.1.1.102. .status.logStore.elasticsearchStatus[].pods 19.1.1.1.102.1. Description 19.1.1.1.102.1.1. Type object 19.1.1.1.103. .status.logStore.elasticsearchStatus[].replicaSets[] 19.1.1.1.103.1. Description 19.1.1.1.103.1.1. Type array 19.1.1.1.104. .status.logStore.elasticsearchStatus[].statefulSets[] 19.1.1.1.104.1. Description 19.1.1.1.104.1.1. Type array 19.1.1.1.105. .status.visualization 19.1.1.1.105.1. Description 19.1.1.1.105.1.1. Type object Property Type Description kibanaStatus array (optional) 19.1.1.1.106. .status.visualization.kibanaStatus[] 19.1.1.1.106.1. Description 19.1.1.1.106.1.1. Type array Property Type Description clusterCondition object (optional) deployment string (optional) pods string (optional) The status for each of the Kibana pods for the Visualization component replicaSets array (optional) replicas int (optional) 19.1.1.1.107. .status.visualization.kibanaStatus[].clusterCondition 19.1.1.1.107.1. Description 19.1.1.1.107.1.1. Type object 19.1.1.1.108. .status.visualization.kibanaStatus[].replicaSets[] 19.1.1.1.108.1. Description 19.1.1.1.108.1.1. Type array
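To make the field listing above concrete, the following ClusterLogForwarder example wires one custom application input to one elasticsearch output through a single pipeline. It is a minimal sketch only: the input, output, pipeline, secret, namespace, and label names and the URL are illustrative assumptions, and the metadata values assume the conventional instance name in the openshift-logging namespace used by this release of the logging subsystem.

apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  inputs:
  - name: my-app-input                    # custom input; name is illustrative
    application:
      namespaces:
      - my-project                        # assumed application namespace
      selector:
        matchLabels:
          environment: production         # assumed pod label
  outputs:
  - name: remote-elasticsearch            # illustrative output name
    type: elasticsearch
    url: https://elasticsearch.example.com:9200
    secret:
      name: es-secret                     # secret in the log forwarder secrets namespace
  pipelines:
  - name: forward-app-logs
    inputRefs:
    - my-app-input
    outputRefs:
    - remote-elasticsearch
    labels:
      team: example-team                  # labels applied to forwarded records

Each pipeline's inputRefs and outputRefs must reference the input.name and output.name values defined above it, and the built-in input and output names described at the start of this section can be used in place of the custom ones.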
null
https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/logging/api-reference
Chapter 8. Management CLI Logging
Chapter 8. Management CLI Logging You can capture output and other management CLI information in a log file. By default, management CLI logging is disabled. You can enable it and configure other logging settings using the EAP_HOME /bin/jboss-cli-logging.properties file. Configure Management CLI Logging Edit the EAP_HOME /bin/jboss-cli-logging.properties file. Uncomment or add the following line to enable logging. Change the log level from OFF to the desired level, such as INFO or ALL . Once you restart the management CLI, output will be logged to the EAP_HOME /bin/jboss-cli.log file. For information on configuring other settings in a logging properties file, see the Configuring logging.properties section of the JBoss EAP Development Guide .
[ "uncomment to enable logging to the file logger.handlers=FILE", "logger.org.jboss.as.cli.level=INFO" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/management_cli_guide/cli_logging
Deploying RHEL 8 on Amazon Web Services
Deploying RHEL 8 on Amazon Web Services Red Hat Enterprise Linux 8 Obtaining RHEL system images and creating RHEL instances on AWS Red Hat Customer Content Services
[ "yum install python3 python3-pip", "pip3 install awscli", "aws configure AWS Access Key ID [None]: AWS Secret Access Key [None]: Default region name [None]: Default output format [None]:", "BUCKET= bucketname aws s3 mb s3://USDBUCKET", "{ \"Version\": \"2022-10-17\", \"Statement\": [{ \"Effect\": \"Allow\", \"Principal\": { \"Service\": \"vmie.amazonaws.com\" }, \"Action\": \"sts:AssumeRole\", \"Condition\": { \"StringEquals\": { \"sts:Externalid\": \"vmimport\" } } }] }", "{ \"Version\": \"2012-10-17\", \"Statement\": [{ \"Effect\": \"Allow\", \"Action\": [\"s3:GetBucketLocation\", \"s3:GetObject\", \"s3:ListBucket\"], \"Resource\": [\"arn:aws:s3:::%s\", \"arn:aws:s3:::%s/ \"] }, { \"Effect\": \"Allow\", \"Action\": [\"ec2:ModifySnapshotAttribute\", \"ec2:CopySnapshot\", \"ec2:RegisterImage\", \"ec2:Describe \"], \"Resource\": \"*\" }] } USDBUCKET USDBUCKET", "aws iam create-role --role-name vmimport --assume-role-policy-document file://trust-policy.json", "aws iam put-role-policy --role-name vmimport --policy-name vmimport --policy-document file://role-policy.json", "provider = \"aws\" [settings] accessKeyID = \" AWS_ACCESS_KEY_ID \" secretAccessKey = \"AWS_SECRET_ACCESS_KEY\" bucket = \"AWS_BUCKET\" region = \"AWS_REGION\" key = \"IMAGE_KEY\"", "composer-cli compose start blueprint-name image-type image-key configuration-file .toml", "composer-cli compose status", "chmod 400 <_your-instance-name.pem_>", "ssh -i <_your-instance-name.pem_> ec2-user@<_your-instance-IP-address_>", "virt-install --name kvmtest --memory 2048 --vcpus 2 --cdrom /home/username/Downloads/rhel8.iso,bus=virtio --os-variant=rhel8.0", "subscription-manager register --auto-attach", "yum install cloud-init systemctl enable --now cloud-init.service", "dracut -f --add-drivers \"nvme xen-netfront xen-blkfront\"", "dracut -f --add-drivers \"nvme\"", "yum install awscli", "aws --version aws-cli/1.19.77 Python/3.6.15 Linux/5.14.16-201.fc34.x86_64 botocore/1.20.77", "aws configure AWS Access Key ID [None]: AWS Secret Access Key [None]: Default region name [None]: Default output format [None]:", "{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"Service\": \"vmie.amazonaws.com\" }, \"Action\": \"sts:AssumeRole\", \"Condition\": { \"StringEquals\":{ \"sts:Externalid\": \"vmimport\" } } } ] }", "aws iam create-role --role-name vmimport --assume-role-policy-document file:///home/sample/ImportService/trust-policy.json", "{ \"Version\":\"2012-10-17\", \"Statement\":[ { \"Effect\":\"Allow\", \"Action\":[ \"s3:GetBucketLocation\", \"s3:GetObject\", \"s3:ListBucket\" ], \"Resource\":[ \"arn:aws:s3:::s3-bucket-name\", \"arn:aws:s3:::s3-bucket-name/*\" ] }, { \"Effect\":\"Allow\", \"Action\":[ \"ec2:ModifySnapshotAttribute\", \"ec2:CopySnapshot\", \"ec2:RegisterImage\", \"ec2:Describe*\" ], \"Resource\":\"*\" } ] }", "aws iam put-role-policy --role-name vmimport --policy-name vmimport --policy-document file:///home/sample/ImportService/role-policy.json", "qemu-img convert -f qcow2 -O raw rhel-8.0-sample.qcow2 rhel-8.0-sample.raw", "aws s3 cp rhel-8.0-sample.raw s3://s3-bucket-name", "{ \"Description\": \"rhel-8.0-sample.raw\", \"Format\": \"raw\", \"UserBucket\": { \"S3Bucket\": \"s3-bucket-name\", \"S3Key\": \"s3-key\" } }", "aws ec2 import-snapshot --disk-container file://containers.json", "{ \"SnapshotTaskDetail\": { \"Status\": \"active\", \"Format\": \"RAW\", \"DiskImageSize\": 0.0, \"UserBucket\": { \"S3Bucket\": \"s3-bucket-name\", \"S3Key\": \"rhel-8.0-sample.raw\" }, \"Progress\": 
\"3\", \"StatusMessage\": \"pending\" }, \"ImportTaskId\": \"import-snap-06cea01fa0f1166a8\" }", "aws ec2 describe-import-snapshot-tasks --import-task-ids import-snap-06cea01fa0f1166a8", "aws ec2 register-image --name \"myimagename\" --description \"myimagedescription\" --architecture x86_64 --virtualization-type hvm --root-device-name \"/dev/sda1\" --ena-support --block-device-mappings \"{\\\"DeviceName\\\": \\\"/dev/sda1\\\",\\\"Ebs\\\": {\\\"SnapshotId\\\": \\\"snap-0ce7f009b69ab274d\\\"}}\"", "subscription-manager register --auto-attach", "insights-client register --display-name <display-name-value>", "subscription-manager identity system identity: fdc46662-c536-43fb-a18a-bbcb283102b7 name: 192.168.122.222 org name: 6340056 org ID: 6340056", "yum install awscli", "aws --version aws-cli/1.19.77 Python/3.6.15 Linux/5.14.16-201.fc34.x86_64 botocore/1.20.77", "aws configure AWS Access Key ID [None]: AWS Secret Access Key [None]: Default region name [None]: Default output format [None]:", "chmod 400 KeyName.pem", "sudo -i yum -y remove rh-amazon-rhui-client *", "subscription-manager register --auto-attach", "subscription-manager repos --disable= *", "subscription-manager repos --enable=rhel-8-for-x86_64-highavailability-rpms", "yum update -y", "yum install pcs pacemaker fence-agents-aws", "passwd hacluster", "firewall-cmd --permanent --add-service=high-availability firewall-cmd --reload", "systemctl start pcsd.service systemctl enable pcsd.service", "systemctl status pcsd.service pcsd.service - PCS GUI and remote configuration interface Loaded: loaded (/usr/lib/systemd/system/pcsd.service; enabled; vendor preset: disabled) Active: active (running) since Thu 2018-03-01 14:53:28 UTC; 28min ago Docs: man:pcsd(8) man:pcs(8) Main PID: 5437 (pcsd) CGroup: /system.slice/pcsd.service └─5437 /usr/bin/ruby /usr/lib/pcsd/pcsd > /dev/null & Mar 01 14:53:27 ip-10-0-0-48.ec2.internal systemd[1]: Starting PCS GUI and remote configuration interface... Mar 01 14:53:28 ip-10-0-0-48.ec2.internal systemd[1]: Started PCS GUI and remote configuration interface.", "pcs host auth <hostname1> <hostname2> <hostname3>", "pcs host auth node01 node02 node03 Username: hacluster Password: node01: Authorized node02: Authorized node03: Authorized", "pcs cluster setup <cluster_name> <hostname1> <hostname2> <hostname3>", "pcs cluster setup new_cluster node01 node02 node03 [...] 
Synchronizing pcsd certificates on nodes node01, node02, node03 node02: Success node03: Success node01: Success Restarting pcsd on the nodes in order to reload the certificates node02: Success node03: Success node01: Success", "pcs cluster enable --all node02: Cluster Enabled node03: Cluster Enabled node01: Cluster Enabled", "pcs cluster start --all node02: Starting Cluster node03: Starting Cluster node01: Starting Cluster", "echo USD(curl -s http://169.254.169.254/latest/meta-data/instance-id)", "echo USD(curl -s http://169.254.169.254/latest/meta-data/instance-id) i-07f1ac63af0ec0ac6", "pcs stonith create <name> fence_aws access_key=access-key secret_key= <secret-access-key> region= <region> pcmk_host_map=\"rhel-hostname-1:Instance-ID-1;rhel-hostname-2:Instance-ID-2;rhel-hostname-3:Instance-ID-3\" power_timeout=240 pcmk_reboot_timeout=480 pcmk_reboot_retries=4", "pcs stonith create clusterfence fence_aws access_key=AKIAI123456MRMJA secret_key=a75EYIG4RVL3hdsdAslK7koQ8dzaDyn5yoIZ/ region=us-east-1 pcmk_host_map=\"ip-10-0-0-48:i-07f1ac63af0ec0ac6;ip-10-0-0-46:i-063fc5fe93b4167b2;ip-10-0-0-58:i-08bd39eb03a6fd2c7\" power_timeout=240 pcmk_reboot_timeout=480 pcmk_reboot_retries=4", "aws ec2 describe-vpcs --output text --filters \"Name=tag:Name,Values= <clustername> -vpc\" --query 'Vpcs[ * ].VpcId' vpc-06bc10ac8f6006664", "aws ec2 describe-instances --output text --filters \"Name=vpc-id,Values=vpc-06bc10ac8f6006664\" --query 'Reservations[ * ].Instances[ * ].{Name:Tags[? Key== Name ]|[0].Value,Instance:InstanceId}' | grep \"\\-node[a-c]\" i-0b02af8927a895137 <clustername> -nodea-vm i-0cceb4ba8ab743b69 <clustername> -nodeb-vm i-0502291ab38c762a5 <clustername> -nodec-vm", "CLUSTER= <clustername> && pcs stonith create fenceUSD{CLUSTER} fence_aws access_key=XXXXXXXXXXXXXXXXXXXX pcmk_host_map=USD(for NODE in node{a..c}; do ssh USD{NODE} \"echo -n \\USD{HOSTNAME}:\\USD(curl -s http://169.254.169.254/latest/meta-data/instance-id)\\;\"; done) pcmk_reboot_retries=4 pcmk_reboot_timeout=480 power_timeout=240 region=xx-xxxx-x secret_key=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX", "pcs stonith config fenceUSD{CLUSTER} Resource: <clustername> (class=stonith type=fence_aws) Attributes: access_key=XXXXXXXXXXXXXXXXXXXX pcmk_host_map=nodea:i-0b02af8927a895137;nodeb:i-0cceb4ba8ab743b69;nodec:i-0502291ab38c762a5; pcmk_reboot_retries=4 pcmk_reboot_timeout=480 power_timeout=240 region=xx-xxxx-x secret_key=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX Operations: monitor interval=60s ( <clustername> -monitor-interval-60s)", "pcs stonith fence <awsnodename>", "pcs stonith fence ip-10-0-0-58 Node: ip-10-0-0-58 fenced", "pcs status", "pcs status Cluster name: newcluster Stack: corosync Current DC: ip-10-0-0-46 (version 1.1.18-11.el7-2b07d5c5a9) - partition with quorum Last updated: Fri Mar 2 19:55:41 2018 Last change: Fri Mar 2 19:24:59 2018 by root via cibadmin on ip-10-0-0-46 3 nodes configured 1 resource configured Online: [ ip-10-0-0-46 ip-10-0-0-48 ] OFFLINE: [ ip-10-0-0-58 ] Full list of resources: clusterfence (stonith:fence_aws): Started ip-10-0-0-46 Daemon Status: corosync: active/disabled pacemaker: active/disabled pcsd: active/enabled", "pcs cluster start <awshostname>", "pcs status", "pcs status Cluster name: newcluster Stack: corosync Current DC: ip-10-0-0-46 (version 1.1.18-11.el7-2b07d5c5a9) - partition with quorum Last updated: Fri Mar 2 20:01:31 2018 Last change: Fri Mar 2 19:24:59 2018 by root via cibadmin on ip-10-0-0-48 3 nodes configured 1 resource configured Online: [ ip-10-0-0-46 ip-10-0-0-48 
ip-10-0-0-58 ] Full list of resources: clusterfence (stonith:fence_aws): Started ip-10-0-0-46 Daemon Status: corosync: active/disabled pacemaker: active/disabled pcsd: active/enabled", "aws ec2 describe-instances --output text --query 'Reservations[ * ].Instances[ * ].[InstanceId,Tags[?Key== Name ].Value]' i-07f1ac63af0ec0ac6 ip-10-0-0-48 i-063fc5fe93b4167b2 ip-10-0-0-46 i-08bd39eb03a6fd2c7 ip-10-0-0-58", "yum install resource-agents", "aws ec2 allocate-address --domain vpc --output text eipalloc-4c4a2c45 vpc 35.169.153.122", "pcs resource describe awseip", "pcs resource create <resource-id> awseip elastic_ip= <Elastic-IP-Address> allocation_id= <Elastic-IP-Association-ID> --group networking-group", "pcs resource create elastic awseip elastic_ip=35.169.153.122 allocation_id=eipalloc-4c4a2c45 --group networking-group", "pcs status", "pcs status Cluster name: newcluster Stack: corosync Current DC: ip-10-0-0-58 (version 1.1.18-11.el7-2b07d5c5a9) - partition with quorum Last updated: Mon Mar 5 16:27:55 2018 Last change: Mon Mar 5 15:57:51 2018 by root via cibadmin on ip-10-0-0-46 3 nodes configured 4 resources configured Online: [ ip-10-0-0-46 ip-10-0-0-48 ip-10-0-0-58 ] Full list of resources: clusterfence (stonith:fence_aws): Started ip-10-0-0-46 Resource Group: networking-group vip (ocf::heartbeat:IPaddr2): Started ip-10-0-0-48 elastic (ocf::heartbeat:awseip): Started ip-10-0-0-48 Daemon Status: corosync: active/disabled pacemaker: active/disabled pcsd: active/enabled", "ssh -l <user-name> -i ~/.ssh/<KeyName>.pem <elastic-IP>", "ssh -l ec2-user -i ~/.ssh/cluster-admin.pem 35.169.153.122", "yum install resource-agents", "pcs resource describe awsvip", "pcs resource create <resource-id> awsvip secondary_private_ip= <Unused-IP-Address> --group <group-name>", "pcs resource create privip awsvip secondary_private_ip=10.0.0.68 --group networking-group", "pcs resource create <resource-id> IPaddr2 ip= <secondary-private-IP> --group <group-name>", "root@ip-10-0-0-48 ~]# pcs resource create vip IPaddr2 ip=10.0.0.68 --group networking-group", "pcs status", "pcs status Cluster name: newcluster Stack: corosync Current DC: ip-10-0-0-46 (version 1.1.18-11.el7-2b07d5c5a9) - partition with quorum Last updated: Fri Mar 2 22:34:24 2018 Last change: Fri Mar 2 22:14:58 2018 by root via cibadmin on ip-10-0-0-46 3 nodes configured 3 resources configured Online: [ ip-10-0-0-46 ip-10-0-0-48 ip-10-0-0-58 ] Full list of resources: clusterfence (stonith:fence_aws): Started ip-10-0-0-46 Resource Group: networking-group privip (ocf::heartbeat:awsvip): Started ip-10-0-0-48 vip (ocf::heartbeat:IPaddr2): Started ip-10-0-0-58 Daemon Status: corosync: active/disabled pacemaker: active/disabled pcsd: active/enabled", "yum install resource-agents", "pcs resource describe aws-vpc-move-ip", "{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Sid\": \"Stmt1424870324000\", \"Effect\": \"Allow\", \"Action\": \"ec2:DescribeRouteTables\", \"Resource\": \"*\" }, { \"Sid\": \"Stmt1424860166260\", \"Action\": [ \"ec2:CreateRoute\", \"ec2:ReplaceRoute\" ], \"Effect\": \"Allow\", \"Resource\": \"arn:aws:ec2:<region>:<account-id>:route-table/<ClusterRouteTableID>\" } ] }", "aws ec2 create-route --route-table-id <ClusterRouteTableID> --destination-cidr-block <NewCIDRblockIP/NetMask> --instance-id <ClusterNodeID>", "pcs resource create vpcip aws-vpc-move-ip ip= 192.168.0.15 interface=eth0 routing_table= <ClusterRouteTableID>", "192.168.0.15 vpcip", "pcs resource move vpcip", "pcs resource clear vpcip", "aws ec2 create-volume 
--availability-zone <availability_zone> --no-encrypted --size 1024 --volume-type io1 --iops 51200 --multi-attach-enabled", "aws ec2 create-volume --availability-zone us-east-1a --no-encrypted --size 1024 --volume-type io1 --iops 51200 --multi-attach-enabled { \"AvailabilityZone\": \"us-east-1a\", \"CreateTime\": \"2020-08-27T19:16:42.000Z\", \"Encrypted\": false, \"Size\": 1024, \"SnapshotId\": \"\", \"State\": \"creating\", \"VolumeId\": \"vol-042a5652867304f09\", \"Iops\": 51200, \"Tags\": [ ], \"VolumeType\": \"io1\" }", "aws ec2 attach-volume --device /dev/xvdd --instance-id <instance_id> --volume-id <volume_id>", "aws ec2 attach-volume --device /dev/xvdd --instance-id i-0eb803361c2c887f2 --volume-id vol-042a5652867304f09 { \"AttachTime\": \"2020-08-27T19:26:16.086Z\", \"Device\": \"/dev/xvdd\", \"InstanceId\": \"i-0eb803361c2c887f2\", \"State\": \"attaching\", \"VolumeId\": \"vol-042a5652867304f09\" }", "ssh <ip_address> \"hostname ; lsblk -d | grep ' 1T '\"", "ssh 198.51.100.3 \"hostname ; lsblk -d | grep ' 1T '\" nodea nvme2n1 259:1 0 1T 0 disk", "ssh <ip_address> \"hostname ; lsblk -d | grep ' 1T ' | awk '{print \\USD1}' | xargs -i udevadm info --query=all --name=/dev/{} | grep '^E: ID_SERIAL='\"", "ssh 198.51.100.3 \"hostname ; lsblk -d | grep ' 1T ' | awk '{print \\USD1}' | xargs -i udevadm info --query=all --name=/dev/{} | grep '^E: ID_SERIAL='\" nodea E: ID_SERIAL=Amazon Elastic Block Store_vol0fa5342e7aedf09f7" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html-single/deploying_rhel_8_on_amazon_web_services/index
Chapter 4. Remediate issues directly from Insights for Red Hat Enterprise Linux
Chapter 4. Remediate issues directly from Insights for Red Hat Enterprise Linux Remote host configuration (rhc) allows you to remediate issues on your Red Hat Enterprise Linux (RHEL) systems directly from Insights for Red Hat Enterprise Linux. Direct remediation is available when you have the rhc client installed on your RHEL 8.5 or later systems. For complete remediations documentation for Red Hat Insights, see the Red Hat Insights Remediations Guide .
null
https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/remote_host_configuration_and_management/remediations_intro-rhc
Chapter 22. Managing user credentials
Chapter 22. Managing user credentials Credentials authenticate the automation controller user when launching jobs against machines, synchronizing with inventory sources, and importing project content from a version control system. You can grant users and teams the ability to use these credentials, without exposing the credential to the user. If a user moves to a different team or leaves the organization, you do not have to re-key all of your systems just because that credential was available in automation controller. Note Automation controller encrypts passwords and key information in the database and never makes secret information visible through the API. For further information, see Configuring automation execution . 22.1. How credentials work Automation controller uses SSH to connect to remote hosts. To pass the key from automation controller to SSH, the key must be decrypted before it can be written to a named pipe. Automation controller uses that pipe to send the key to SSH, so that the key is never written to disk. If passwords are used, automation controller handles them by responding directly to the password prompt and decrypting the password before writing it to the prompt. Note It is possible to create duplicate credentials with the same name and without an organization. However, it is not possible to create two duplicate credentials in the same organization. Example Create two machine credentials with the same name but without an organization. Use the module ansible.controller.export to export the credentials. Use the module ansible.controller.import in a different automation execution node. Check the imported credentials. When you export two duplicate credentials and then import them in a different node, only one credential is imported. 22.2. Creating new credentials Credentials added to a team are made available to all members of the team. You can also add credentials to individual users. As part of the initial setup, two credentials are available for your use: Demo Credential and Ansible Galaxy. Use the Ansible Galaxy credential as a template. You can copy this credential, but not edit it. Add more credentials as needed. Procedure From the navigation panel, select Automation Execution Infrastructure Credentials . On the Credentials page, click Create credential . Enter the following information: Name : the name for your new credential. (Optional) Description : a description for the new credential. Optional Organization : The name of the organization with which the credential is associated. The default is Default . Credential type : enter or select the credential type you want to create. Enter the appropriate details depending on the type of credential selected, as described in Credential types . Click Create credential . 22.3. Adding new users and job templates to existing credentials Procedure From the navigation panel, select Automation Execution Infrastructure Credentials . Select the credential that you want to assign to additional users. Click the User Access tab. You can see users and teams associated with this credential and their roles. If no users exist, add them from the Users menu. For more information, see Users . Click Add roles . Select the user(s) that you want to give access to the credential and click . From the Select roles to apply page, select the roles you want to add to the User. Click . Review your selections and click Finish to add the roles or click Back to make changes. The Add roles window displays stating whether the action was successful. 
If the action is not successful, a warning displays. Click Close . The User Access page displays the summary information. Select the Job templates tab to select a job template to which you want to assign this credential. Choose a job template or select Create job template from the Create template list to assign the credential to additional job templates. For more information about creating new job templates, see the Job templates section. 22.4. Credential types Automation controller supports the following credential types: Amazon Web Services Ansible Galaxy/Automation Hub API Token AWS Secrets Manager Lookup Bitbucket Data Center HTTP Access Token Centrify Vault Credential Provider Lookup Container Registry CyberArk Central Credential Provider Lookup CyberArk Conjur Secrets Manager Lookup GitHub Personal Access Token GitLab Personal Access Token Google Compute Engine GPG Public Key HashiCorp Vault Secret Lookup HashiCorp Vault Signed SSH Insights Machine Microsoft Azure Key Vault Microsoft Azure Resource Manager Network OpenShift or Kubernetes API Bearer Token OpenStack Red Hat Ansible Automation Platform Red Hat Satellite 6 Red Hat Virtualization Source Control Terraform Backend Configuration Thycotic DevOps Secrets Vault Thycotic Secret Server Vault VMware vCenter The credential types associated with AWS Secrets Manager, Centrify, CyberArk, HashiCorp Vault, Microsoft Azure Key Vault, and Thycotic are part of the credential plugins capability that enables an external system to look up your secrets information. For more information, see Secrets Management System . 22.4.1. Amazon Web Services credential type Select this credential to enable synchronization of cloud inventory with Amazon Web Services. Automation controller uses the following environment variables for AWS credentials: AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SECURITY_TOKEN These are fields prompted in the user interface. Amazon Web Services credentials consist of the AWS Access Key and Secret Key . Automation controller provides support for EC2 STS tokens, also known as Identity and Access Management (IAM) STS credentials. Security Token Service (STS) is a web service that enables you to request temporary, limited-privilege credentials for AWS IAM users. Note If the value of your tags in EC2 contains Booleans ( yes/no/true/false ), you must quote them. Warning To use implicit IAM role credentials, do not attach AWS cloud credentials in automation controller when relying on IAM roles to access the AWS API. Attaching your AWS cloud credential to your job template forces the use of your AWS credentials, not your IAM role credentials. Additional resources For more information about the IAM/EC2 STS Token, see Temporary security credentials in IAM . 22.4.1.1. Access Amazon EC2 credentials in an Ansible Playbook You can get AWS credential parameters from a job runtime environment: vars: aws: access_key: '{{ lookup("env", "AWS_ACCESS_KEY_ID") }}' secret_key: '{{ lookup("env", "AWS_SECRET_ACCESS_KEY") }}' security_token: '{{ lookup("env", "AWS_SECURITY_TOKEN") }}' 22.4.2. Ansible Galaxy/Automation Hub API token credential type Select this credential to access Ansible Galaxy or use a collection published on an instance of private automation hub. Enter the Galaxy server URL on this screen. Populate the Galaxy Server URL field with the contents of the Server URL field at Red Hat Hybrid Cloud Console . Populate the Auth Server URL field with the contents of the SSO URL field at Red Hat Hybrid Cloud Console . 
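The UI steps in Creating new credentials also have an automation equivalent. The following is a minimal sketch that creates this Galaxy/hub credential with the ansible.controller collection; the task relies on the collection's usual controller connection settings (for example, the CONTROLLER_HOST and CONTROLLER_OAUTH_TOKEN environment variables), and the inputs keys ( url , auth_url , token ), the example URLs, and the hub_api_token variable are illustrative assumptions to verify against your environment:

---
# A sketch of creating the Ansible Galaxy/Automation Hub API token credential
# with the ansible.controller collection instead of the UI.
# Assumptions: the inputs keys (url, auth_url, token), the example URLs, and
# the hub_api_token variable are illustrative only; connection settings to the
# controller come from the collection's normal environment variables.
- hosts: localhost
  gather_facts: false
  tasks:
    - name: Create an Ansible Galaxy/Automation Hub API token credential
      ansible.controller.credential:
        name: Private automation hub token
        organization: Default
        credential_type: Ansible Galaxy/Automation Hub API Token
        inputs:
          url: https://hub.example.com/api/galaxy/
          auth_url: https://sso.example.com/auth/realms/redhat-external/protocol/openid-connect/token
          token: "{{ hub_api_token }}"

As with the UI flow, the token value is stored encrypted and is not exposed back through the API.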
Additional resources For more information, see Using Collections with automation hub . 22.4.3. AWS Secrets Manager Lookup This is considered part of the secret management capability. For more information, see AWS Secrets Manager Lookup . 22.4.4. Bitbucket Data Center HTTP access token Bitbucket Data Center is a self-hosted Git repository for collaboration and management. Select this credential type to enable you to use HTTP access tokens in place of passwords for Git over HTTPS. For further information, see HTTP access tokens in the Bitbucket Data Center documentation. 22.4.5. Centrify Vault Credential Provider Lookup credential type This is considered part of the secret management capability. For more information, see Centrify Vault Credential Provider Lookup . 22.4.6. Container Registry credential type Select this credential to enable automation controller to access a collection of container images. For more information, see What is a container registry? . You must specify a name. The Authentication URL field is pre-populated with a default value. You can change the value by specifying the authentication endpoint for a different container registry. 22.4.7. CyberArk Central Credential Provider Lookup credential type This is considered part of the secret management capability. For more information, see CyberArk Central Credential Provider (CCP) Lookup . 22.4.8. CyberArk Conjur Secrets Manager Lookup credential type This is considered part of the secret management capability. For more information, see CyberArk Conjur Secrets Manager Lookup . 22.4.9. GitHub Personal Access Token credential type Select this credential to enable you to access GitHub by using a Personal Access Token (PAT), which you can get through GitHub. For more information, see Setting up a GitHub webhook . GitHub PAT credentials require a value in the Token field, which is provided in your GitHub profile settings. Use this credential to establish an API connection to GitHub for use in webhook listener jobs, to post status updates. 22.4.10. GitLab Personal Access Token credential type Select this credential to enable you to access GitLab by using a Personal Access Token (PAT), which you can get through GitLab. For more information, see Setting up a GitLab webhook . GitLab PAT credentials require a value in the Token field, which is provided in your GitLab profile settings. Use this credential to establish an API connection to GitLab for use in webhook listener jobs, to post status updates. 22.4.11. Google Compute Engine credential type Select this credential to enable synchronization of a cloud inventory with Google Compute Engine (GCE). Automation controller uses the following environment variables for GCE credentials: GCE_EMAIL GCE_PROJECT GCE_CREDENTIALS_FILE_PATH These are fields prompted in the user interface: GCE credentials require the following information: Service Account Email Address : The email address assigned to the Google Compute Engine service account . Optional: Project : Provide the GCE assigned identification or the unique project ID that you provided at project creation time. Optional: Service Account JSON File : Upload a GCE service account file. Click Browse to browse for the file that has the special account information that can be used by services and applications running on your GCE instance to interact with other Google Cloud Platform APIs. This grants permissions to the service account and virtual machine instances. RSA Private Key : The PEM file associated with the service account email. 
22.4.11.1. Access Google Compute Engine credentials in an Ansible Playbook You can get GCE credential parameters from a job runtime environment: vars: gce: email: '{{ lookup("env", "GCE_EMAIL") }}' project: '{{ lookup("env", "GCE_PROJECT") }}' pem_file_path: '{{ lookup("env", "GCE_PEM_FILE_PATH") }}' 22.4.12. GPG Public Key credential type Select this credential type to enable automation controller to verify the integrity of the project when synchronizing from source control. For more information about how to generate a valid keypair, how to use the CLI tool to sign content, and how to add the public key to the controller, see Project Signing and Verification . 22.4.13. HashiCorp Vault Secret Lookup credential type This is considered part of the secret management capability. For more information, see HashiCorp Vault Secret Lookup . 22.4.14. HashiCorp Vault Signed SSH credential type This is considered part of the secret management capability. For more information, see HashiCorp Vault Signed SSH . 22.4.15. Insights credential type Select this credential type to enable synchronization of cloud inventory with Red Hat Insights. Insights credentials are the Insights Username and Password , which are the user's Red Hat Customer Portal Account username and password. The extra_vars and env injectors for Insights are as follows: ManagedCredentialType( namespace='insights', .... .... .... injectors={ 'extra_vars': { "scm_username": "{{username}}", "scm_password": "{{password}}", }, 'env': { 'INSIGHTS_USER': '{{username}}', 'INSIGHTS_PASSWORD': '{{password}}', }, 22.4.16. Machine credential type Machine credentials enable automation controller to call Ansible on hosts under your management. You can specify the SSH username, optionally give a password, an SSH key, a key password, or have automation controller prompt the user for their password at deployment time. They define SSH and user-level privilege escalation access for playbooks, and are used when submitting jobs to run playbooks on a remote host. The following network connections use Machine as the credential type: httpapi , netconf , and network_cli . Machine and SSH credentials do not use environment variables. They pass the username through the ansible -u flag, and interactively write the SSH password when the underlying SSH client prompts for it. Machine credentials require the following inputs: Username : The username to use for SSH authentication. Password : The password to use for SSH authentication. This password is stored encrypted in the database, if entered. Alternatively, you can configure automation controller to ask the user for the password at launch time by selecting Prompt on launch . In these cases, a dialog opens when the job is launched, prompting the user to enter the password and password confirmation. SSH Private Key : Copy or drag-and-drop the SSH private key for the machine credential. Private Key Passphrase : If the SSH Private Key used is protected by a password, you can configure a Key Passphrase for the private key. This password is stored encrypted in the database, if entered. You can also configure automation controller to ask the user for the key passphrase at launch time by selecting Prompt on launch . In these cases, a dialog opens when the job is launched, prompting the user to enter the key passphrase and key passphrase confirmation. Privilege Escalation Method : Specifies the type of escalation privilege to assign to specific users. 
This is the same as specifying the --become-method=BECOME_METHOD parameter, where BECOME_METHOD is any of the existing methods, or a custom method you have written. Begin entering the name of the method, and the appropriate name auto-populates. empty selection : If a task or play has become set to yes and is used with an empty selection, then it will default to sudo . sudo : Performs single commands with superuser (root user) privileges. su : Switches to the superuser (root user) account (or to other user accounts). pbrun : Requests that an application or command be run in a controlled account and provides for advanced root privilege delegation and keylogging. pfexec : Executes commands with predefined process attributes, such as specific user or group IDs. dzdo : An enhanced version of sudo that uses RBAC information in Centrify's Active Directory service. For more information, see Centrify's site on DZDO . pmrun : Requests that an application is run in a controlled account. See Privilege Manager for Unix 6.0 . runas : Enables you to run as the current user. enable : Switches to elevated permissions on a network device. doas : Enables your remote/login user to run commands as another user through the doas ("Do as user") utility. ksu : Enables your remote/login user to run commands as another user through Kerberos access. machinectl : Enables you to manage containers through the systemd machine manager. sesu : Enables your remote/login user to run commands as another user through the CA Privileged Access Manager. Note Custom become plugins are available from Ansible 2.8+. For more information, see Understanding Privilege Escalation and the list of Become plugins . Privilege Escalation Username : You see this field only if you selected an option for privilege escalation. Enter the username to use with escalation privileges on the remote system. Privilege Escalation Password : You see this field only if you selected an option for privilege escalation. Enter the password to use to authenticate the user through the selected privilege escalation type on the remote system. This password is stored encrypted in the database. You can also configure automation controller to ask the user for the password at launch time by selecting Prompt on launch . In these cases, a dialog opens when the job is launched, prompting the user to enter the password and password confirmation. Note You must use the sudo password in combination with SSH passwords or SSH Private Keys, because automation controller must first establish an authenticated SSH connection with the host before invoking sudo to change to the sudo user. Warning Credentials that are used in scheduled jobs must not be configured as Prompt on launch . 22.4.16.1. Access machine credentials in an ansible playbook You can get username and password from Ansible facts: vars: machine: username: '{{ ansible_user }}' password: '{{ ansible_password }}' 22.4.17. Microsoft Azure Key Vault credential type This is considered part of the secret management capability. For more information, see Microsoft Azure Key Vault . 22.4.18. Microsoft Azure Resource Manager credential type Select this credential type to enable synchronization of cloud inventory with Microsoft Azure Resource Manager. Microsoft Azure Resource Manager credentials require the following inputs: Subscription ID : The Subscription UUID for the Microsoft Azure account. Username : The username to use to connect to the Microsoft Azure account. 
Password : The password to use to connect to the Microsoft Azure account. Client ID : The Client ID for the Microsoft Azure account. Client Secret : The Client Secret for the Microsoft Azure account. Tenant ID : The Tenant ID for the Microsoft Azure account. Azure Cloud Environment : The variable associated with Azure cloud or Azure stack environments. These fields are equal to the variables in the API. To pass service principal credentials, define the following variables: AZURE_CLIENT_ID AZURE_SECRET AZURE_SUBSCRIPTION_ID AZURE_TENANT AZURE_CLOUD_ENVIRONMENT To pass an Active Directory username and password pair, define the following variables: AZURE_AD_USER AZURE_PASSWORD AZURE_SUBSCRIPTION_ID You can also pass credentials as parameters to a task within a playbook. The order of precedence is parameters, then environment variables, and finally a file found in your home directory. To pass credentials as parameters to a task, use the following parameters for service principal credentials: client_id secret subscription_id tenant azure_cloud_environment Alternatively, pass the following parameters for Active Directory username/password: ad_user password subscription_id 22.4.18.1. Access Microsoft Azure resource manager credentials in an ansible playbook You can get Microsoft Azure credential parameters from a job runtime environment: vars: azure: client_id: '{{ lookup("env", "AZURE_CLIENT_ID") }}' secret: '{{ lookup("env", "AZURE_SECRET") }}' tenant: '{{ lookup("env", "AZURE_TENANT") }}' subscription_id: '{{ lookup("env", "AZURE_SUBSCRIPTION_ID") }}' 22.4.19. Network credential type Note Select the Network credential type if you are using a local connection with provider to use Ansible networking modules to connect to and manage networking devices. When connecting to network devices, the credential type must match the connection type: For local connections using provider , credential type should be Network . For all other network connections ( httpapi , netconf , and network_cli ), the credential type should be Machine . For more information about connection types available for network devices, see Multiple Communication Protocols . Automation controller uses the following environment variables for Network credentials: ANSIBLE_NET_USERNAME ANSIBLE_NET_PASSWORD Provide the following information for network credentials: Username : The username to use in conjunction with the network device. Password : The password to use in conjunction with the network device. SSH Private Key : Copy or drag-and-drop the actual SSH Private Key to be used to authenticate the user to the network through SSH. Private Key Passphrase : The passphrase for the private key to authenticate the user to the network through SSH. Authorize : Select this to control whether or not to enter privileged mode. If Authorize is checked, enter a password in the Authorize Password field to access privileged mode. For more information, see Porting Ansible Network Playbooks with New Connection Plugins . 22.4.20. Access network credentials in an ansible playbook You can get the username and password parameters from a job runtime environment: vars: network: username: '{{ lookup("env", "ANSIBLE_NET_USERNAME") }}' password: '{{ lookup("env", "ANSIBLE_NET_PASSWORD") }}' 22.4.21. OpenShift or Kubernetes API Bearer Token credential type Select this credential type to create instance groups that point to a Kubernetes or OpenShift container. For more information, see Instance and container groups . 
Provide the following information for container credentials: OpenShift or Kubernetes API Endpoint (required): The endpoint used to connect to an OpenShift or Kubernetes container. API authentication bearer token (required): The token used to authenticate the connection. Optional: Verify SSL : You can check this option to verify the server's SSL/TLS certificate is valid and trusted. Environments that use internal or private Certificate Authority (CA) must leave this option unchecked to disable verification. Certificate Authority data : Include the BEGIN CERTIFICATE and END CERTIFICATE lines when pasting the certificate, if provided. A container group is a type of instance group that has an associated credential that enables connection to an OpenShift cluster. To set up a container group, you must have the following items: A namespace you can start into. Although every cluster has a default namespace, you can use a specific namespace. A service account that has the roles that enable it to start and manage pods in this namespace. If you use execution environments in a private registry, and have a container registry credential associated with them in automation controller, the service account also requires the roles to get, create, and delete secrets in the namespace. If you do not want to give these roles to the service account, you can pre-create the ImagePullSecrets and specify them on the pod spec for the container group. In this case, the execution environment must not have a Container Registry credential associated, or automation controller attempts to create the secret for you in the namespace. A token associated with that service account (OpenShift or Kubernetes Bearer Token) A CA certificate associated with the cluster 22.4.21.1. Creating a service account in an OpenShift cluster Create a service account in an OpenShift or Kubernetes cluster to run jobs in a container group through automation controller. After you create the service account, its credentials are provided to automation controller in the form of an OpenShift or Kubernetes API bearer token credential. After you create a service account, use the information in the new service account to configure automation controller. 
Procedure To create a service account, download and use the sample service account, containergroup sa , and change it as needed to obtain the credentials: --- apiVersion: v1 kind: ServiceAccount metadata: name: containergroup-service-account namespace: containergroup-namespace --- kind: Role apiVersion: rbac.authorization.k8s.io/v1 metadata: name: role-containergroup-service-account namespace: containergroup-namespace rules: - apiGroups: [""] resources: ["pods"] verbs: ["get", "list", "watch", "create", "update", "patch", "delete"] - apiGroups: [""] resources: ["pods/log"] verbs: ["get"] - apiGroups: [""] resources: ["pods/attach"] verbs: ["get", "list", "watch", "create"] --- kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: role-containergroup-service-account-binding namespace: containergroup-namespace subjects: - kind: ServiceAccount name: containergroup-service-account namespace: containergroup-namespace roleRef: kind: Role name: role-containergroup-service-account apiGroup: rbac.authorization.k8s.io Apply the configuration from containergroup-sa.yml : oc apply -f containergroup-sa.yml Get the secret name associated with the service account: export SA_SECRET=USD(oc get sa containergroup-service-account -o json | jq '.secrets[0].name' | tr -d '"') Get the token from the secret: oc get secret USD(echo USD{SA_SECRET}) -o json | jq '.data.token' | xargs | base64 --decode > containergroup-sa.token Get the CA cert: oc get secret USDSA_SECRET -o json | jq '.data["ca.crt"]' | xargs | base64 --decode > containergroup-ca.crt Use the contents of containergroup-sa.token and containergroup-ca.crt to provide the information for the OpenShift or Kubernetes API Bearer Token required for the container group. 22.4.22. OpenStack credential type Select this credential type to enable synchronization of cloud inventory with OpenStack. Enter the following information for OpenStack credentials: Username : The username to use to connect to OpenStack. Password (API Key) : The password or API key to use to connect to OpenStack. Host (Authentication URL) : The host to be used for authentication. Project (Tenant Name) : The Tenant name or Tenant ID used for OpenStack. This value is usually the same as the username. Optional: Project (Domain Name) : Give the project name associated with your domain. Optional: Domain Name : Give the FQDN to be used to connect to OpenStack. Optional: Region Name : Give the region name. For some cloud providers, like OVH, the region must be specified. If you are interested in using OpenStack Cloud Credentials, see Use Cloud Credentials with a cloud inventory , which includes a sample playbook. 22.4.23. Red Hat Ansible Automation Platform credential type Select this credential to access another automation controller instance. Ansible Automation Platform credentials require the following inputs: Red Hat Ansible Automation Platform : The base URL or IP address of the other instance to connect to. Username : The username to use to connect to it. Password : The password to use to connect to it. Oauth Token : If username and password are not used, provide an OAuth token to use to authenticate. The env injectors for Ansible Automation Platform are as follows: ManagedCredentialType( namespace='controller', .... .... .... 
injectors={ 'env': { 'TOWER_HOST': '{{host}}', 'TOWER_USERNAME': '{{username}}', 'TOWER_PASSWORD': '{{password}}', 'TOWER_VERIFY_SSL': '{{verify_ssl}}', 'TOWER_OAUTH_TOKEN': '{{oauth_token}}', 'CONTROLLER_HOST': '{{host}}', 'CONTROLLER_USERNAME': '{{username}}', 'CONTROLLER_PASSWORD': '{{password}}', 'CONTROLLER_VERIFY_SSL': '{{verify_ssl}}', 'CONTROLLER_OAUTH_TOKEN': '{{oauth_token}}', } 22.4.23.1. Access automation controller credentials in an Ansible Playbook You can get the host, username, and password parameters from a job runtime environment: vars: controller: host: '{{ lookup("env", "CONTROLLER_HOST") }}' username: '{{ lookup("env", "CONTROLLER_USERNAME") }}' password: '{{ lookup("env", "CONTROLLER_PASSWORD") }}' 22.4.24. Red Hat Satellite 6 credential type Select this credential type to enable synchronization of cloud inventory with Red Hat Satellite 6. Automation controller writes a Satellite configuration file based on fields prompted in the user interface. The absolute path to the file is set in the following environment variable: FOREMAN_INI_PATH Satellite credentials have the following required inputs: Satellite 6 URL : The Satellite 6 URL or IP address to connect to. Username : The username to use to connect to Satellite 6. Password : The password to use to connect to Satellite 6. 22.4.25. Red Hat Virtualization credential type Select this credential to enable automation controller to access Ansible's oVirt4.py dynamic inventory plugin, which is managed by Red Hat Virtualization . Automation controller uses the following environment variables for Red Hat Virtualization credentials. These are fields in the user interface: OVIRT_URL OVIRT_USERNAME OVIRT_PASSWORD Provide the following information for Red Hat Virtualization credentials: Host (Authentication URL) : The host URL or IP address to connect to. To sync with the inventory, the credential URL needs to include the ovirt-engine/api path. Username : The username to use to connect to oVirt4. This must include the domain profile to succeed, for example [email protected] . Password : The password to use to connect to it. Optional: CA File : Provide an absolute path to the oVirt certificate file (it might end in .pem , .cer and .crt extensions, but preferably .pem for consistency) 22.4.25.1. Access virtualization credentials in an Ansible Playbook You can get the Red Hat Virtualization credential parameter from a job runtime environment: vars: ovirt: ovirt_url: '{{ lookup("env", "OVIRT_URL") }}' ovirt_username: '{{ lookup("env", "OVIRT_USERNAME") }}' ovirt_password: '{{ lookup("env", "OVIRT_PASSWORD") }}' The file and env injectors for Red Hat Virtualization are as follows: ManagedCredentialType( namespace='rhv', .... .... .... injectors={ # The duplication here is intentional; the ovirt4 inventory plugin # writes a .ini file for authentication, while the ansible modules for # ovirt4 use a separate authentication process that support # environment variables; by injecting both, we support both 'file': { 'template': '\n'.join( [ '[ovirt]', 'ovirt_url={{host}}', 'ovirt_username={{username}}', 'ovirt_password={{password}}', '{% if ca_file %}ovirt_ca_file={{ca_file}}{% endif %}', ] ) }, 'env': {'OVIRT_INI_PATH': '{{tower.filename}}', 'OVIRT_URL': '{{host}}', 'OVIRT_USERNAME': '{{username}}', 'OVIRT_PASSWORD': '{{password}}'}, }, ) 22.4.26. Source Control credential type Source Control credentials are used with projects to clone and update local source code repositories from a remote revision control system such as Git or Subversion. 
Source Control credentials require the following inputs: Username : The username to use in conjunction with the source control system. Password : The password to use in conjunction with the source control system. SCM Private Key : Copy or drag-and-drop the actual SSH Private Key to be used to authenticate the user to the source control system through SSH. Private Key Passphrase : If the SSH Private Key used is protected by a passphrase, you can configure a Key Passphrase for the private key. Note You cannot configure Source Control credentials as Prompt on launch . If you are using a GitHub account for a Source Control credential and you have Two Factor Authentication (2FA) enabled on your account, you must use your Personal Access Token in the password field rather than your account password. 22.4.27. Terraform backend configuration Terraform is a HashiCorp tool used to automate various infrastructure tasks. Select this credential type to enable synchronization with the Terraform inventory source. The Terraform credential requires the Backend configuration attribute which must contain the data from a Terraform backend block . You can paste, drag a file, browse to upload a file, or click the icon to populate the field from an external Secret Management System . Terraform backend configuration requires the following inputs: Name Credential type: Select Terraform backend configuration . Optional: Organization Optional: Description Backend configuration : Drag a file here or browse to upload. Example configuration for an S3 backend: bucket = "my-terraform-state-bucket" key = "path/to/terraform-state-file" region = "us-east-1" access_key = "my-aws-access-key" secret_key = "my-aws-secret-access-key" Optional: Google Cloud Platform account credentials 22.4.28. Thycotic DevOps Secrets Vault credential type This is considered part of the secret management capability. For more information, see Thycotic DevOps Secrets Vault . 22.4.29. Thycotic secret server credential type This is considered part of the secret management capability. For more information, see Thycotic Secret Server . 22.4.30. Ansible Vault credential type Select this credential type to enable synchronization of inventory with Ansible Vault. Vault credentials require the Vault Password and an optional Vault Identifier if applying multi-Vault credentialing. You can configure automation controller to ask the user for the password at launch time by selecting Prompt on launch . When you select Prompt on launch , a dialog opens when the job is launched, prompting the user to enter the password. Warning Credentials that are used in scheduled jobs must not be configured as Prompt on launch . For more information about Ansible Vault, see Protecting sensitive data with Ansible vault . 22.4.31. VMware vCenter credential type Select this credential type to enable synchronization of inventory with VMware vCenter. Automation controller uses the following environment variables for VMware vCenter credentials: VMWARE_HOST VMWARE_USER VMWARE_PASSWORD VMWARE_VALIDATE_CERTS These are fields prompted in the user interface. VMware credentials require the following inputs: vCenter Host : The vCenter hostname or IP address to connect to. Username : The username to use to connect to vCenter. Password : The password to use to connect to vCenter. Note If the VMware guest tools are not running on the instance, VMware inventory synchronization does not return an IP address for that instance. 22.4.31.1. 
Access VMware vCenter credentials in an ansible playbook You can get VMware vCenter credential parameters from a job runtime environment: vars: vmware: host: '{{ lookup("env", "VMWARE_HOST") }}' username: '{{ lookup("env", "VMWARE_USER") }}' password: '{{ lookup("env", "VMWARE_PASSWORD") }}' 22.5. Use automation controller credentials in a playbook The following playbook is an example of how to use automation controller credentials in your playbook. - hosts: all vars: machine: username: '{{ ansible_user }}' password: '{{ ansible_password }}' controller: host: '{{ lookup("env", "CONTROLLER_HOST") }}' username: '{{ lookup("env", "CONTROLLER_USERNAME") }}' password: '{{ lookup("env", "CONTROLLER_PASSWORD") }}' network: username: '{{ lookup("env", "ANSIBLE_NET_USERNAME") }}' password: '{{ lookup("env", "ANSIBLE_NET_PASSWORD") }}' aws: access_key: '{{ lookup("env", "AWS_ACCESS_KEY_ID") }}' secret_key: '{{ lookup("env", "AWS_SECRET_ACCESS_KEY") }}' security_token: '{{ lookup("env", "AWS_SECURITY_TOKEN") }}' vmware: host: '{{ lookup("env", "VMWARE_HOST") }}' username: '{{ lookup("env", "VMWARE_USER") }}' password: '{{ lookup("env", "VMWARE_PASSWORD") }}' gce: email: '{{ lookup("env", "GCE_EMAIL") }}' project: '{{ lookup("env", "GCE_PROJECT") }}' azure: client_id: '{{ lookup("env", "AZURE_CLIENT_ID") }}' secret: '{{ lookup("env", "AZURE_SECRET") }}' tenant: '{{ lookup("env", "AZURE_TENANT") }}' subscription_id: '{{ lookup("env", "AZURE_SUBSCRIPTION_ID") }}' tasks: - debug: var: machine - debug: var: controller - debug: var: network - debug: var: aws - debug: var: vmware - debug: var: gce - shell: 'cat {{ gce.pem_file_path }}' delegate_to: localhost - debug: var: azure Use 'delegate_to' and any lookup variable - command: somecommand environment: USERNAME: '{{ lookup("env", "USERNAME") }}' PASSWORD: '{{ lookup("env", "PASSWORD") }}' delegate_to: somehost
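When a job does not receive the values you expect, it can help to confirm that a credential's environment variables were actually injected into the job runtime without printing the secret itself. The following is a minimal sketch that uses the AWS access key variable from earlier in this chapter; substitute any of the variable names listed under Credential types :

- hosts: localhost
  gather_facts: false
  tasks:
    # lookup("env", ...) runs on the execution node, which is where
    # automation controller injects credential environment variables.
    - name: Report whether the AWS access key was injected, without echoing it
      debug:
        msg: "AWS_ACCESS_KEY_ID is {{ 'set' if lookup('env', 'AWS_ACCESS_KEY_ID') | length > 0 else 'not set' }}"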
[ "AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SECURITY_TOKEN", "vars: aws: access_key: '{{ lookup(\"env\", \"AWS_ACCESS_KEY_ID\") }}' secret_key: '{{ lookup(\"env\", \"AWS_SECRET_ACCESS_KEY\") }}' security_token: '{{ lookup(\"env\", \"AWS_SECURITY_TOKEN\") }}'", "GCE_EMAIL GCE_PROJECT GCE_CREDENTIALS_FILE_PATH", "vars: gce: email: '{{ lookup(\"env\", \"GCE_EMAIL\") }}' project: '{{ lookup(\"env\", \"GCE_PROJECT\") }}' pem_file_path: '{{ lookup(\"env\", \"GCE_PEM_FILE_PATH\") }}'", "ManagedCredentialType( namespace='insights', . . . injectors={ 'extra_vars': { \"scm_username\": \"{{username}}\", \"scm_password\": \"{{password}}\", }, 'env': { 'INSIGHTS_USER': '{{username}}', 'INSIGHTS_PASSWORD': '{{password}}', },", "vars: machine: username: '{{ ansible_user }}' password: '{{ ansible_password }}'", "AZURE_CLIENT_ID AZURE_SECRET AZURE_SUBSCRIPTION_ID AZURE_TENANT AZURE_CLOUD_ENVIRONMENT", "AZURE_AD_USER AZURE_PASSWORD AZURE_SUBSCRIPTION_ID", "client_id secret subscription_id tenant azure_cloud_environment", "ad_user password subscription_id", "vars: azure: client_id: '{{ lookup(\"env\", \"AZURE_CLIENT_ID\") }}' secret: '{{ lookup(\"env\", \"AZURE_SECRET\") }}' tenant: '{{ lookup(\"env\", \"AZURE_TENANT\") }}' subscription_id: '{{ lookup(\"env\", \"AZURE_SUBSCRIPTION_ID\") }}'", "ANSIBLE_NET_USERNAME ANSIBLE_NET_PASSWORD", "vars: network: username: '{{ lookup(\"env\", \"ANSIBLE_NET_USERNAME\") }}' password: '{{ lookup(\"env\", \"ANSIBLE_NET_PASSWORD\") }}'", "--- apiVersion: v1 kind: ServiceAccount metadata: name: containergroup-service-account namespace: containergroup-namespace --- kind: Role apiVersion: rbac.authorization.k8s.io/v1 metadata: name: role-containergroup-service-account namespace: containergroup-namespace rules: - apiGroups: [\"\"] resources: [\"pods\"] verbs: [\"get\", \"list\", \"watch\", \"create\", \"update\", \"patch\", \"delete\"] - apiGroups: [\"\"] resources: [\"pods/log\"] verbs: [\"get\"] - apiGroups: [\"\"] resources: [\"pods/attach\"] verbs: [\"get\", \"list\", \"watch\", \"create\"] --- kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: role-containergroup-service-account-binding namespace: containergroup-namespace subjects: - kind: ServiceAccount name: containergroup-service-account namespace: containergroup-namespace roleRef: kind: Role name: role-containergroup-service-account apiGroup: rbac.authorization.k8s.io", "apply -f containergroup-sa.yml", "export SA_SECRET=USD(oc get sa containergroup-service-account -o json | jq '.secrets[0].name' | tr -d '\"')", "get secret USD(echo USD{SA_SECRET}) -o json | jq '.data.token' | xargs | base64 --decode > containergroup-sa.token", "get secret USDSA_SECRET -o json | jq '.data[\"ca.crt\"]' | xargs | base64 --decode > containergroup-ca.crt", "ManagedCredentialType( namespace='controller', . . . 
injectors={ 'env': { 'TOWER_HOST': '{{host}}', 'TOWER_USERNAME': '{{username}}', 'TOWER_PASSWORD': '{{password}}', 'TOWER_VERIFY_SSL': '{{verify_ssl}}', 'TOWER_OAUTH_TOKEN': '{{oauth_token}}', 'CONTROLLER_HOST': '{{host}}', 'CONTROLLER_USERNAME': '{{username}}', 'CONTROLLER_PASSWORD': '{{password}}', 'CONTROLLER_VERIFY_SSL': '{{verify_ssl}}', 'CONTROLLER_OAUTH_TOKEN': '{{oauth_token}}', }", "vars: controller: host: '{{ lookup(\"env\", \"CONTROLLER_HOST\") }}' username: '{{ lookup(\"env\", \"CONTROLLER_USERNAME\") }}' password: '{{ lookup(\"env\", \"CONTROLLER_PASSWORD\") }}'", "FOREMAN_INI_PATH", "OVIRT_URL OVIRT_USERNAME OVIRT_PASSWORD", "vars: ovirt: ovirt_url: '{{ lookup(\"env\", \"OVIRT_URL\") }}' ovirt_username: '{{ lookup(\"env\", \"OVIRT_USERNAME\") }}' ovirt_password: '{{ lookup(\"env\", \"OVIRT_PASSWORD\") }}'", "ManagedCredentialType( namespace='rhv', . . . injectors={ # The duplication here is intentional; the ovirt4 inventory plugin # writes a .ini file for authentication, while the ansible modules for # ovirt4 use a separate authentication process that support # environment variables; by injecting both, we support both 'file': { 'template': '\\n'.join( [ '[ovirt]', 'ovirt_url={{host}}', 'ovirt_username={{username}}', 'ovirt_password={{password}}', '{% if ca_file %}ovirt_ca_file={{ca_file}}{% endif %}', ] ) }, 'env': {'OVIRT_INI_PATH': '{{tower.filename}}', 'OVIRT_URL': '{{host}}', 'OVIRT_USERNAME': '{{username}}', 'OVIRT_PASSWORD': '{{password}}'}, }, )", "bucket = \"my-terraform-state-bucket\" key = \"path/to/terraform-state-file\" region = \"us-east-1\" access_key = \"my-aws-access-key\" secret_key = \"my-aws-secret-access-key\"", "VMWARE_HOST VMWARE_USER VMWARE_PASSWORD VMWARE_VALIDATE_CERTS", "vars: vmware: host: '{{ lookup(\"env\", \"VMWARE_HOST\") }}' username: '{{ lookup(\"env\", \"VMWARE_USER\") }}' password: '{{ lookup(\"env\", \"VMWARE_PASSWORD\") }}'", "- hosts: all vars: machine: username: '{{ ansible_user }}' password: '{{ ansible_password }}' controller: host: '{{ lookup(\"env\", \"CONTROLLER_HOST\") }}' username: '{{ lookup(\"env\", \"CONTROLLER_USERNAME\") }}' password: '{{ lookup(\"env\", \"CONTROLLER_PASSWORD\") }}' network: username: '{{ lookup(\"env\", \"ANSIBLE_NET_USERNAME\") }}' password: '{{ lookup(\"env\", \"ANSIBLE_NET_PASSWORD\") }}' aws: access_key: '{{ lookup(\"env\", \"AWS_ACCESS_KEY_ID\") }}' secret_key: '{{ lookup(\"env\", \"AWS_SECRET_ACCESS_KEY\") }}' security_token: '{{ lookup(\"env\", \"AWS_SECURITY_TOKEN\") }}' vmware: host: '{{ lookup(\"env\", \"VMWARE_HOST\") }}' username: '{{ lookup(\"env\", \"VMWARE_USER\") }}' password: '{{ lookup(\"env\", \"VMWARE_PASSWORD\") }}' gce: email: '{{ lookup(\"env\", \"GCE_EMAIL\") }}' project: '{{ lookup(\"env\", \"GCE_PROJECT\") }}' azure: client_id: '{{ lookup(\"env\", \"AZURE_CLIENT_ID\") }}' secret: '{{ lookup(\"env\", \"AZURE_SECRET\") }}' tenant: '{{ lookup(\"env\", \"AZURE_TENANT\") }}' subscription_id: '{{ lookup(\"env\", \"AZURE_SUBSCRIPTION_ID\") }}' tasks: - debug: var: machine - debug: var: controller - debug: var: network - debug: var: aws - debug: var: vmware - debug: var: gce - shell: 'cat {{ gce.pem_file_path }}' delegate_to: localhost - debug: var: azure", "- command: somecommand environment: USERNAME: '{{ lookup(\"env\", \"USERNAME\") }}' PASSWORD: '{{ lookup(\"env\", \"PASSWORD\") }}' delegate_to: somehost" ]
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/using_automation_execution/controller-credentials
25.9.2. Adding a Log File
25.9.2. Adding a Log File To add a log file you want to view in the list, select File → Open . This displays the Open Log window where you can select the directory and file name of the log file you want to view. Figure 25.6, "Log File Viewer - adding a log file" illustrates the Open Log window. Figure 25.6. Log File Viewer - adding a log file Click the Open button to open the file. The file is immediately added to the viewing list where you can select it and view its contents. Note The Log File Viewer also allows you to open log files zipped in the .gz format.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s2-logfiles-adding
Chapter 9. Replacing DistributedComputeHCI nodes
Chapter 9. Replacing DistributedComputeHCI nodes During hardware maintenance you may need to scale down, scale up, or replace a DistributedComputeHCI node at an edge site. To replace a DistributedComputeHCI node, remove services from the node you are replacing, scale the number of nodes down, and then follow the procedures for scaling those nodes back up. 9.1. Removing Red Hat Ceph Storage services Before removing an HCI (hyperconverged) node from a cluster, you must remove Red Hat Ceph Storage services. To remove the Red Hat Ceph services, you must disable and remove the ceph-osd service from the cluster services on the node you are removing, then stop and disable the mon , mgr , and osd services. Procedure On the undercloud, use SSH to connect to the DistributedComputeHCI node that you want to remove: USD ssh tripleo-admin@<dcn-computehci-node> Start a cephadm shell. Use the configuration file and keyring file for the site that the host being removed is in: Record the OSDs (object storage devices) associated with the DistributedComputeHCI node you are removing for reference in a later step: [ceph: root@dcn2-computehci2-1 ~]# ceph osd tree -c /etc/ceph/dcn2.conf ... -3 0.24399 host dcn2-computehci2-1 1 hdd 0.04880 osd.1 up 1.00000 1.00000 7 hdd 0.04880 osd.7 up 1.00000 1.00000 11 hdd 0.04880 osd.11 up 1.00000 1.00000 15 hdd 0.04880 osd.15 up 1.00000 1.00000 18 hdd 0.04880 osd.18 up 1.00000 1.00000 ... Use SSH to connect to another node in the same cluster and remove the monitor from the cluster: Use SSH to log in again to the node that you are removing from the cluster. Stop and disable the mgr service: Start the cephadm shell: Verify that the mgr service for the node is removed from the cluster: 1 The node that the mgr service is removed from is no longer listed when the mgr service is successfully removed. Export the Red Hat Ceph Storage specification: Edit the specifications in the spec.yml file: Remove all instances of the host <dcn-computehci-node> from spec.yml Remove all instances of the <dcn-computehci-node> entry from the following: service_type: osd service_type: mon service_type: host Reapply the Red Hat Ceph Storage specification: Remove the OSDs that you identified using ceph osd tree : Verify the status of the OSDs being removed. Do not continue until the following command returns no output: Verify that no daemons remain on the host you are removing: If daemons are still present, you can remove them with the following command: Remove the <dcn-computehci-node> host from the Red Hat Ceph Storage cluster: 9.2. Removing the Image service (glance) services Remove image services from a node when you remove it from service. Procedure To disable the Image service services, disable them using systemctl on the node you are removing: [root@dcn2-computehci2-1 ~]# systemctl stop tripleo_glance_api.service [root@dcn2-computehci2-1 ~]# systemctl stop tripleo_glance_api_tls_proxy.service [root@dcn2-computehci2-1 ~]# systemctl disable tripleo_glance_api.service Removed /etc/systemd/system/multi-user.target.wants/tripleo_glance_api.service. [root@dcn2-computehci2-1 ~]# systemctl disable tripleo_glance_api_tls_proxy.service Removed /etc/systemd/system/multi-user.target.wants/tripleo_glance_api_tls_proxy.service. 9.3. Removing the Block Storage (cinder) services You must remove the cinder-volume and etcd services from the DistributedComputeHCI node when you remove it from service. 
Procedure Identify and disable the cinder-volume service on the node you are removing: (central) [stack@site-undercloud-0 ~]USD openstack volume service list --service cinder-volume | cinder-volume | dcn2-computehci2-1@tripleo_ceph | az-dcn2 | enabled | up | 2022-03-23T17:41:43.000000 | (central) [stack@site-undercloud-0 ~]USD openstack volume service set --disable dcn2-computehci2-1@tripleo_ceph cinder-volume Log on to a different DistributedComputeHCI node in the stack: USD ssh tripleo-admin@dcn2-computehci2-0 Remove the cinder-volume service associated with the node that you are removing: [root@dcn2-computehci2-0 ~]# podman exec -it cinder_volume cinder-manage service remove cinder-volume dcn2-computehci2-1@tripleo_ceph Service cinder-volume on host dcn2-computehci2-1@tripleo_ceph removed. Stop and disable the tripleo_cinder_volume service on the node that you are removing: 9.4. Delete the DistributedComputeHCI node Set the provisioned parameter to a value of false and remove the node from the stack. Disable the nova-compute service and delete the relevant network agent. Procedure Copy the overcloud-baremetal-deploy.yaml file: Edit the baremetal-deployment-scaledown.yaml file. Identify the host you want to remove and set the provisioned parameter to have a value of false : Remove the node from the stack: Optional: If you are going to reuse the node, use ironic to clean the disk. This is required if the node will host Ceph OSDs: openstack baremetal node manage USDUUID openstack baremetal node clean USDUUID --clean-steps '[{"interface":"deploy", "step": "erase_devices_metadata"}]' openstack baremetal provide USDUUID Redeploy the central site. Include all templates that you used for the initial configuration: 9.5. Replacing a removed DistributedComputeHCI node 9.5.1. Replacing a removed DistributedComputeHCI node To add new HCI nodes to your DCN deployment, you must redeploy the edge stack with the additional node, perform a ceph export of that stack, and then perform a stack update for the central location. A stack update of the central location adds configurations specific to edge sites. Prerequisites The node counts are correct in the nodes_data.yaml file of the stack that you want to replace the node in or add a new node to. Procedure You must set the EtcdInitialClusterState parameter to existing in one of the templates called by your deploy script: Redeploy using the deployment script specific to the stack: Export the Red Hat Ceph Storage data from the stack: Replace dcn_ceph_external.yaml with the newly generated dcn2_scale_up_ceph_external.yaml in the deploy script for the central location. Perform a stack update at central: 9.6. Verify the functionality of a replaced DistributedComputeHCI node Ensure the value of the status field is enabled , and that the value of the State field is up : (central) [stack@site-undercloud-0 ~]USD openstack compute service list -c Binary -c Host -c Zone -c Status -c State +----------------+-----------------------------------------+------------+---------+-------+ | Binary | Host | Zone | Status | State | +----------------+-----------------------------------------+------------+---------+-------+ ... 
| nova-compute | dcn1-compute1-0.redhat.local | az-dcn1 | enabled | up | | nova-compute | dcn1-compute1-1.redhat.local | az-dcn1 | enabled | up | | nova-compute | dcn2-computehciscaleout2-0.redhat.local | az-dcn2 | enabled | up | | nova-compute | dcn2-computehci2-0.redhat.local | az-dcn2 | enabled | up | | nova-compute | dcn2-computescaleout2-0.redhat.local | az-dcn2 | enabled | up | | nova-compute | dcn2-computehci2-2.redhat.local | az-dcn2 | enabled | up | ... Ensure that all network agents are in the up state: (central) [stack@site-undercloud-0 ~]USD openstack network agent list -c "Agent Type" -c Host -c Alive -c State +--------------------+-----------------------------------------+-------+-------+ | Agent Type | Host | Alive | State | +--------------------+-----------------------------------------+-------+-------+ | DHCP agent | dcn3-compute3-1.redhat.local | :-) | UP | | Open vSwitch agent | central-computehci0-1.redhat.local | :-) | UP | | DHCP agent | dcn3-compute3-0.redhat.local | :-) | UP | | DHCP agent | central-controller0-2.redhat.local | :-) | UP | | Open vSwitch agent | dcn3-compute3-1.redhat.local | :-) | UP | | Open vSwitch agent | dcn1-compute1-1.redhat.local | :-) | UP | | Open vSwitch agent | central-computehci0-0.redhat.local | :-) | UP | | DHCP agent | central-controller0-1.redhat.local | :-) | UP | | L3 agent | central-controller0-2.redhat.local | :-) | UP | | Metadata agent | central-controller0-1.redhat.local | :-) | UP | | Open vSwitch agent | dcn2-computescaleout2-0.redhat.local | :-) | UP | | Open vSwitch agent | dcn2-computehci2-5.redhat.local | :-) | UP | | Open vSwitch agent | central-computehci0-2.redhat.local | :-) | UP | | DHCP agent | central-controller0-0.redhat.local | :-) | UP | | Open vSwitch agent | central-controller0-1.redhat.local | :-) | UP | | Open vSwitch agent | dcn2-computehci2-0.redhat.local | :-) | UP | | Open vSwitch agent | dcn1-compute1-0.redhat.local | :-) | UP | ... Verify the status of the Ceph Cluster: Use SSH to connect to the new DistributedComputeHCI node and check the status of the Ceph cluster: [root@dcn2-computehci2-5 ~]# podman exec -it ceph-mon-dcn2-computehci2-5 \ ceph -s -c /etc/ceph/dcn2.conf Verify that both the ceph mon and ceph mgr services exist for the new node: services: mon: 3 daemons, quorum dcn2-computehci2-2,dcn2-computehci2-0,dcn2-computehci2-5 (age 3d) mgr: dcn2-computehci2-2(active, since 3d), standbys: dcn2-computehci2-0, dcn2-computehci2-5 osd: 20 osds: 20 up (since 3d), 20 in (since 3d) Verify the status of the ceph osds with 'ceph osd tree'. 
Ensure all osds for our new node are in STATUS up: [root@dcn2-computehci2-5 ~]# podman exec -it ceph-mon-dcn2-computehci2-5 ceph osd tree -c /etc/ceph/dcn2.conf ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF -1 0.97595 root default -5 0.24399 host dcn2-computehci2-0 0 hdd 0.04880 osd.0 up 1.00000 1.00000 4 hdd 0.04880 osd.4 up 1.00000 1.00000 8 hdd 0.04880 osd.8 up 1.00000 1.00000 13 hdd 0.04880 osd.13 up 1.00000 1.00000 17 hdd 0.04880 osd.17 up 1.00000 1.00000 -9 0.24399 host dcn2-computehci2-2 3 hdd 0.04880 osd.3 up 1.00000 1.00000 5 hdd 0.04880 osd.5 up 1.00000 1.00000 10 hdd 0.04880 osd.10 up 1.00000 1.00000 14 hdd 0.04880 osd.14 up 1.00000 1.00000 19 hdd 0.04880 osd.19 up 1.00000 1.00000 -3 0.24399 host dcn2-computehci2-5 1 hdd 0.04880 osd.1 up 1.00000 1.00000 7 hdd 0.04880 osd.7 up 1.00000 1.00000 11 hdd 0.04880 osd.11 up 1.00000 1.00000 15 hdd 0.04880 osd.15 up 1.00000 1.00000 18 hdd 0.04880 osd.18 up 1.00000 1.00000 -7 0.24399 host dcn2-computehciscaleout2-0 2 hdd 0.04880 osd.2 up 1.00000 1.00000 6 hdd 0.04880 osd.6 up 1.00000 1.00000 9 hdd 0.04880 osd.9 up 1.00000 1.00000 12 hdd 0.04880 osd.12 up 1.00000 1.00000 16 hdd 0.04880 osd.16 up 1.00000 1.00000 Verify the cinder-volume service for the new DistributedComputeHCI node is in Status 'enabled' and in State 'up': (central) [stack@site-undercloud-0 ~]USD openstack volume service list --service cinder-volume -c Binary -c Host -c Zone -c Status -c State +---------------+---------------------------------+------------+---------+-------+ | Binary | Host | Zone | Status | State | +---------------+---------------------------------+------------+---------+-------+ | cinder-volume | hostgroup@tripleo_ceph | az-central | enabled | up | | cinder-volume | dcn1-compute1-1@tripleo_ceph | az-dcn1 | enabled | up | | cinder-volume | dcn1-compute1-0@tripleo_ceph | az-dcn1 | enabled | up | | cinder-volume | dcn2-computehci2-0@tripleo_ceph | az-dcn2 | enabled | up | | cinder-volume | dcn2-computehci2-2@tripleo_ceph | az-dcn2 | enabled | up | | cinder-volume | dcn2-computehci2-5@tripleo_ceph | az-dcn2 | enabled | up | +---------------+---------------------------------+------------+---------+-------+ Note If the State of the cinder-volume service is down , then the service has not been started on the node. Use ssh to connect to the new DistributedComputeHCI node and check the status of the Glance services with 'systemctl': [root@dcn2-computehci2-5 ~]# systemctl --type service | grep glance tripleo_glance_api.service loaded active running glance_api container tripleo_glance_api_healthcheck.service loaded activating start start glance_api healthcheck tripleo_glance_api_tls_proxy.service loaded active running glance_api_tls_proxy container 9.7. Troubleshooting DistributedComputeHCI state down If the replacement node was deployed without the EtcdInitialClusterState parameter value set to existing , then the cinder-volume service of the replaced node shows down when you run openstack volume service list . Procedure Log onto the replacement node and check logs for the etcd service. Check that the logs show the etcd service is reporting a cluster ID mismatch in the /var/log/containers/stdouts/etcd.log log file: Set the EtcdInitialClusterState parameter to the value of existing in your deployment templates and rerun the deployment script. 
Use SSH to connect to the replacement node and run the following commands as root: Recheck the /var/log/containers/stdouts/etcd.log log file to verify that the node successfully joined the cluster: Check the state of the cinder-volume service, and confirm it reads up on the replacement node when you run openstack volume service list .
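For reference, the EtcdInitialClusterState setting described above is typically carried in a small heat environment file that the edge-site deploy script includes with -e. The parameter value matches the setting shown in this chapter; the file name and location are illustrative assumptions:

# Illustrative environment file, for example ~/dcn2/etcd-cluster-state.yaml,
# included by the edge-site deploy script. Only the parameter value is taken
# from this chapter; the file name is an assumption.
parameter_defaults:
  EtcdInitialClusterState: existing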
[ "ssh tripleo-admin@<dcn-computehci-node>", "sudo cephadm shell --config /etc/ceph/dcn2.conf --keyring /etc/ceph/dcn2.client.admin.keyring", "ceph osd tree -c /etc/ceph/dcn2.conf ... -3 0.24399 host dcn2-computehci2-1 1 hdd 0.04880 osd.1 up 1.00000 1.00000 7 hdd 0.04880 osd.7 up 1.00000 1.00000 11 hdd 0.04880 osd.11 up 1.00000 1.00000 15 hdd 0.04880 osd.15 up 1.00000 1.00000 18 hdd 0.04880 osd.18 up 1.00000 1.00000 ...", "sudo cephadm shell --config /etc/ceph/dcn2.conf --keyring /etc/ceph/dcn2.client.admin.keyring ceph mon remove dcn2-computehci2-1 -c /etc/ceph/dcn2.conf removing mon.dcn2-computehci2-1 at [v2:172.23.3.153:3300/0,v1:172.23.3.153:6789/0], there will be 2 monitors", "[tripleo-admin@dcn2-computehci2-1 ~]USD sudo systemctl --type=service | grep ceph [email protected] loaded active running Ceph crash dump collector [email protected] loaded active running Ceph Manager [tripleo-admin@dcn2-computehci2-1 ~]USD sudo systemctl stop ceph-mgr@dcn2-computehci2-1 [tripleo-admin@dcn2-computehci2-1 ~]USD sudo systemctl --type=service | grep ceph [email protected] loaded active running Ceph crash dump collector [tripleo-admin@dcn2-computehci2-1 ~]USD sudo systemctl disable ceph-mgr@dcn2-computehci2-1 Removed /etc/systemd/system/multi-user.target.wants/[email protected].", "sudo cephadm shell --config /etc/ceph/dcn2.conf --keyring /etc/ceph/dcn2.client.admin.keyring", "ceph -s cluster: id: b9b53581-d590-41ac-8463-2f50aa985001 health: HEALTH_WARN 3 pools have too many placement groups mons are allowing insecure global_id reclaim services: mon: 2 daemons, quorum dcn2-computehci2-2,dcn2-computehci2-0 (age 2h) mgr: dcn2-computehci2-2(active, since 20h), standbys: dcn2-computehci2-0 1 osd: 15 osds: 15 up (since 3h), 15 in (since 3h) data: pools: 3 pools, 384 pgs objects: 32 objects, 88 MiB usage: 16 GiB used, 734 GiB / 750 GiB avail pgs: 384 active+clean", "ceph orch ls --export > spec.yml", "ceph orch apply -i spec.yml", "ceph orch osd rm --zap 1 7 11 15 18 Scheduled OSD(s) for removal", "ceph orch osd rm status OSD_ID HOST STATE PG_COUNT REPLACE FORCE DRAIN_STARTED_AT 1 dcn2-computehci2-1 draining 27 False False 2021-04-23 21:35:51.215361 7 dcn2-computehci2-1 draining 8 False False 2021-04-23 21:35:49.111500 11 dcn2-computehci2-1 draining 14 False False 2021-04-23 21:35:50.243762", "ceph orch ps dcn2-computehci2-1", "ceph orch host drain dcn2-computehci2-1", "ceph orch host rm dcn2-computehci2-1 Removed host 'dcn2-computehci2-1'", "systemctl stop tripleo_glance_api.service systemctl stop tripleo_glance_api_tls_proxy.service systemctl disable tripleo_glance_api.service Removed /etc/systemd/system/multi-user.target.wants/tripleo_glance_api.service. 
systemctl disable tripleo_glance_api_tls_proxy.service Removed /etc/systemd/system/multi-user.target.wants/tripleo_glance_api_tls_proxy.service.", "(central) [stack@site-undercloud-0 ~]USD openstack volume service list --service cinder-volume | cinder-volume | dcn2-computehci2-1@tripleo_ceph | az-dcn2 | enabled | up | 2022-03-23T17:41:43.000000 | (central) [stack@site-undercloud-0 ~]USD openstack volume service set --disable dcn2-computehci2-1@tripleo_ceph cinder-volume", "ssh tripleo-admin@dcn2-computehci2-0", "podman exec -it cinder_volume cinder-manage service remove cinder-volume dcn2-computehci2-1@tripleo_ceph Service cinder-volume on host dcn2-computehci2-1@tripleo_ceph removed.", "systemctl stop tripleo_cinder_volume.service systemctl disable tripleo_cinder_volume.service Removed /etc/systemd/system/multi-user.target.wants/tripleo_cinder_volume.service", "cp /home/stack/dcn2/overcloud-baremetal-deploy.yaml /home/stack/dcn2/baremetal-deployment-scaledown.yaml", "instances: - hostname: dcn2-computehci2-1 provisioned: false", "openstack overcloud node delete --stack dcn2 --baremetal-deployment /home/stack/dcn2/baremetal_deployment_scaledown.yaml", "openstack baremetal node manage USDUUID openstack baremetal node clean USDUUID --clean-steps '[{\"interface\":\"deploy\", \"step\": \"erase_devices_metadata\"}]' openstack baremetal provide USDUUID", "openstack overcloud deploy --deployed-server --stack central --templates /usr/share/openstack-tripleo-heat-templates/ -r ~/control-plane/central_roles.yaml -n ~/network-data.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/network-environment.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/dcn-storage.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/cephadm/cephadm.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/nova-az-config.yaml -e /home/stack/central/overcloud-networks-deployed.yaml -e /home/stack/central/overcloud-vip-deployed.yaml -e /home/stack/central/deployed_metal.yaml -e /home/stack/central/deployed_ceph.yaml -e /home/stack/central/dcn_ceph.yaml -e /home/stack/central/glance_update.yaml", "parameter_defaults: EtcdInitialClusterState: existing", "(undercloud) [stack@site-undercloud-0 ~]USD ./overcloud_deploy_dcn2.sh ... 
Overcloud Deployed without error", "(undercloud) [stack@site-undercloud-0 ~]USD sudo -E openstack overcloud export ceph --stack dcn1,dcn2 --config-download-dir /var/lib/mistral --output-file ~/central/dcn2_scale_up_ceph_external.yaml", "(undercloud) [stack@site-undercloud-0 ~]USD ./overcloud_deploy.sh Overcloud Deployed without error", "(central) [stack@site-undercloud-0 ~]USD openstack compute service list -c Binary -c Host -c Zone -c Status -c State +----------------+-----------------------------------------+------------+---------+-------+ | Binary | Host | Zone | Status | State | +----------------+-----------------------------------------+------------+---------+-------+ | nova-compute | dcn1-compute1-0.redhat.local | az-dcn1 | enabled | up | | nova-compute | dcn1-compute1-1.redhat.local | az-dcn1 | enabled | up | | nova-compute | dcn2-computehciscaleout2-0.redhat.local | az-dcn2 | enabled | up | | nova-compute | dcn2-computehci2-0.redhat.local | az-dcn2 | enabled | up | | nova-compute | dcn2-computescaleout2-0.redhat.local | az-dcn2 | enabled | up | | nova-compute | dcn2-computehci2-2.redhat.local | az-dcn2 | enabled | up |", "(central) [stack@site-undercloud-0 ~]USD openstack network agent list -c \"Agent Type\" -c Host -c Alive -c State +--------------------+-----------------------------------------+-------+-------+ | Agent Type | Host | Alive | State | +--------------------+-----------------------------------------+-------+-------+ | DHCP agent | dcn3-compute3-1.redhat.local | :-) | UP | | Open vSwitch agent | central-computehci0-1.redhat.local | :-) | UP | | DHCP agent | dcn3-compute3-0.redhat.local | :-) | UP | | DHCP agent | central-controller0-2.redhat.local | :-) | UP | | Open vSwitch agent | dcn3-compute3-1.redhat.local | :-) | UP | | Open vSwitch agent | dcn1-compute1-1.redhat.local | :-) | UP | | Open vSwitch agent | central-computehci0-0.redhat.local | :-) | UP | | DHCP agent | central-controller0-1.redhat.local | :-) | UP | | L3 agent | central-controller0-2.redhat.local | :-) | UP | | Metadata agent | central-controller0-1.redhat.local | :-) | UP | | Open vSwitch agent | dcn2-computescaleout2-0.redhat.local | :-) | UP | | Open vSwitch agent | dcn2-computehci2-5.redhat.local | :-) | UP | | Open vSwitch agent | central-computehci0-2.redhat.local | :-) | UP | | DHCP agent | central-controller0-0.redhat.local | :-) | UP | | Open vSwitch agent | central-controller0-1.redhat.local | :-) | UP | | Open vSwitch agent | dcn2-computehci2-0.redhat.local | :-) | UP | | Open vSwitch agent | dcn1-compute1-0.redhat.local | :-) | UP |", "podman exec -it ceph-mon-dcn2-computehci2-5 ceph -s -c /etc/ceph/dcn2.conf", "services: mon: 3 daemons, quorum dcn2-computehci2-2,dcn2-computehci2-0,dcn2-computehci2-5 (age 3d) mgr: dcn2-computehci2-2(active, since 3d), standbys: dcn2-computehci2-0, dcn2-computehci2-5 osd: 20 osds: 20 up (since 3d), 20 in (since 3d)", "podman exec -it ceph-mon-dcn2-computehci2-5 ceph osd tree -c /etc/ceph/dcn2.conf ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF -1 0.97595 root default -5 0.24399 host dcn2-computehci2-0 0 hdd 0.04880 osd.0 up 1.00000 1.00000 4 hdd 0.04880 osd.4 up 1.00000 1.00000 8 hdd 0.04880 osd.8 up 1.00000 1.00000 13 hdd 0.04880 osd.13 up 1.00000 1.00000 17 hdd 0.04880 osd.17 up 1.00000 1.00000 -9 0.24399 host dcn2-computehci2-2 3 hdd 0.04880 osd.3 up 1.00000 1.00000 5 hdd 0.04880 osd.5 up 1.00000 1.00000 10 hdd 0.04880 osd.10 up 1.00000 1.00000 14 hdd 0.04880 osd.14 up 1.00000 1.00000 19 hdd 0.04880 osd.19 up 1.00000 1.00000 -3 0.24399 host 
dcn2-computehci2-5 1 hdd 0.04880 osd.1 up 1.00000 1.00000 7 hdd 0.04880 osd.7 up 1.00000 1.00000 11 hdd 0.04880 osd.11 up 1.00000 1.00000 15 hdd 0.04880 osd.15 up 1.00000 1.00000 18 hdd 0.04880 osd.18 up 1.00000 1.00000 -7 0.24399 host dcn2-computehciscaleout2-0 2 hdd 0.04880 osd.2 up 1.00000 1.00000 6 hdd 0.04880 osd.6 up 1.00000 1.00000 9 hdd 0.04880 osd.9 up 1.00000 1.00000 12 hdd 0.04880 osd.12 up 1.00000 1.00000 16 hdd 0.04880 osd.16 up 1.00000 1.00000", "(central) [stack@site-undercloud-0 ~]USD openstack volume service list --service cinder-volume -c Binary -c Host -c Zone -c Status -c State +---------------+---------------------------------+------------+---------+-------+ | Binary | Host | Zone | Status | State | +---------------+---------------------------------+------------+---------+-------+ | cinder-volume | hostgroup@tripleo_ceph | az-central | enabled | up | | cinder-volume | dcn1-compute1-1@tripleo_ceph | az-dcn1 | enabled | up | | cinder-volume | dcn1-compute1-0@tripleo_ceph | az-dcn1 | enabled | up | | cinder-volume | dcn2-computehci2-0@tripleo_ceph | az-dcn2 | enabled | up | | cinder-volume | dcn2-computehci2-2@tripleo_ceph | az-dcn2 | enabled | up | | cinder-volume | dcn2-computehci2-5@tripleo_ceph | az-dcn2 | enabled | up | +---------------+---------------------------------+------------+---------+-------+", "systemctl --type service | grep glance tripleo_glance_api.service loaded active running glance_api container tripleo_glance_api_healthcheck.service loaded activating start start glance_api healthcheck tripleo_glance_api_tls_proxy.service loaded active running glance_api_tls_proxy container", "2022-04-06T18:00:11.834104130+00:00 stderr F 2022-04-06 18:00:11.834045 E | rafthttp: request cluster ID mismatch (got 654f4cf0e2cfb9fd want 918b459b36fe2c0c)", "systemctl stop tripleo_etcd rm -rf /var/lib/etcd/* systemctl start tripleo_etcd", "2022-04-06T18:24:22.130059875+00:00 stderr F 2022-04-06 18:24:22.129395 I | etcdserver/membership: added member 96f61470cd1839e5 [https://dcn2-computehci2-4.internalapi.redhat.local:2380] to cluster 654f4cf0e2cfb9fd" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/deploying_a_distributed_compute_node_dcn_architecture/assembly_replacing-dcnhci-nodes
Chapter 24. OperatorPKI [network.operator.openshift.io/v1]
Chapter 24. OperatorPKI [network.operator.openshift.io/v1] Description OperatorPKI is a simple certificate authority. It is not intended for external use - rather, it is internal to the network operator. The CNO creates a CA and a certificate signed by that CA. The certificate has both ClientAuth and ServerAuth extended usages enabled. More specifically, given an OperatorPKI with <name>, the CNO will manage: - A Secret called <name>-ca with two data keys: - tls.key - the private key - tls.crt - the CA certificate - A ConfigMap called <name>-ca with a single data key: - cabundle.crt - the CA certificate(s) - A Secret called <name>-cert with two data keys: - tls.key - the private key - tls.crt - the certificate, signed by the CA The CA certificate will have a validity of 10 years, rotated after 9. The target certificate will have a validity of 6 months, rotated after 3 The CA certificate will have a CommonName of "<namespace>_<name>-ca@<timestamp>", where <timestamp> is the last rotation time. Type object Required spec 24.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object OperatorPKISpec is the PKI configuration. status object OperatorPKIStatus is not implemented. 24.1.1. .spec Description OperatorPKISpec is the PKI configuration. Type object Required targetCert Property Type Description targetCert object targetCert configures the certificate signed by the CA. It will have both ClientAuth and ServerAuth enabled 24.1.2. .spec.targetCert Description targetCert configures the certificate signed by the CA. It will have both ClientAuth and ServerAuth enabled Type object Required commonName Property Type Description commonName string commonName is the value in the certificate's CN 24.1.3. .status Description OperatorPKIStatus is not implemented. Type object 24.2. API endpoints The following API endpoints are available: /apis/network.operator.openshift.io/v1/operatorpkis GET : list objects of kind OperatorPKI /apis/network.operator.openshift.io/v1/namespaces/{namespace}/operatorpkis DELETE : delete collection of OperatorPKI GET : list objects of kind OperatorPKI POST : create an OperatorPKI /apis/network.operator.openshift.io/v1/namespaces/{namespace}/operatorpkis/{name} DELETE : delete an OperatorPKI GET : read the specified OperatorPKI PATCH : partially update the specified OperatorPKI PUT : replace the specified OperatorPKI 24.2.1. /apis/network.operator.openshift.io/v1/operatorpkis Table 24.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. 
Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. 
It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list objects of kind OperatorPKI Table 24.2. HTTP responses HTTP code Reponse body 200 - OK OperatorPKIList schema 401 - Unauthorized Empty 24.2.2. /apis/network.operator.openshift.io/v1/namespaces/{namespace}/operatorpkis Table 24.3. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 24.4. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of OperatorPKI Table 24.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. 
Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 24.6. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind OperatorPKI Table 24.7. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. 
fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 24.8. HTTP responses HTTP code Reponse body 200 - OK OperatorPKIList schema 401 - Unauthorized Empty HTTP method POST Description create an OperatorPKI Table 24.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 24.10. Body parameters Parameter Type Description body OperatorPKI schema Table 24.11. HTTP responses HTTP code Reponse body 200 - OK OperatorPKI schema 201 - Created OperatorPKI schema 202 - Accepted OperatorPKI schema 401 - Unauthorized Empty 24.2.3. /apis/network.operator.openshift.io/v1/namespaces/{namespace}/operatorpkis/{name} Table 24.12. Global path parameters Parameter Type Description name string name of the OperatorPKI namespace string object name and auth scope, such as for teams and projects Table 24.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete an OperatorPKI Table 24.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 24.15. Body parameters Parameter Type Description body DeleteOptions schema Table 24.16. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified OperatorPKI Table 24.17. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. 
See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 24.18. HTTP responses HTTP code Reponse body 200 - OK OperatorPKI schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified OperatorPKI Table 24.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 24.20. Body parameters Parameter Type Description body Patch schema Table 24.21. HTTP responses HTTP code Reponse body 200 - OK OperatorPKI schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified OperatorPKI Table 24.22. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 24.23. Body parameters Parameter Type Description body OperatorPKI schema Table 24.24. HTTP responses HTTP code Reponse body 200 - OK OperatorPKI schema 201 - Created OperatorPKI schema 401 - Unauthorized Empty
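For illustration, a minimal OperatorPKI manifest might look like the following sketch; the name and namespace are placeholders, and only the required targetCert.commonName field is set. Given this object, the CNO would manage the example-pki-ca Secret and ConfigMap and the example-pki-cert Secret described above.
apiVersion: network.operator.openshift.io/v1
kind: OperatorPKI
metadata:
  name: example-pki                      # placeholder name; yields example-pki-ca and example-pki-cert
  namespace: openshift-network-operator  # assumed namespace for illustration; the resource is internal to the network operator
spec:
  targetCert:
    commonName: example-pki-target       # required: the CN placed in the signed target certificate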
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/operator_apis/operatorpki-network-operator-openshift-io-v1
4.2. Header Section
4.2. Header Section The Header section contains a hyperlink that, when activated, opens the CND Notation Preference page. Also, if the CND being edited has validation errors, the header section will have another hyperlink that identifies the total number of validation errors found. Clicking the errors hyperlink will open a dialog that lists the specific validation errors and provides a way to export those validation messages to a file.
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/user_guide_volume_2_modeshape_tools/header_section
A.15. Guest Virtual Machine Fails to Shutdown
A.15. Guest Virtual Machine Fails to Shutdown Traditionally, executing a virsh shutdown command causes a power button ACPI event to be sent, mimicking the action of pressing a power button on a physical machine. On a physical machine, it is up to the OS to handle this event. In the past, operating systems would just silently shut down. Today, the most usual action is to show a dialog asking what should be done. Some operating systems even ignore this event completely, especially when no users are logged in. When such operating systems are installed on a guest virtual machine, running virsh shutdown simply does not work (it is either ignored or a dialog is shown on a virtual display). However, if a qemu-guest-agent channel is added to a guest virtual machine and this agent is running inside the guest virtual machine's OS, the virsh shutdown command asks the agent to shut down the guest OS instead of sending the ACPI event. The agent calls for a shutdown from inside the guest virtual machine OS and everything works as expected. Procedure A.7. Configuring the guest agent channel in a guest virtual machine Stop the guest virtual machine. Open the Domain XML for the guest virtual machine and add the following snippet: <channel type='unix'> <source mode='bind'/> <target type='virtio' name='org.qemu.guest_agent.0'/> </channel> Figure A.2. Configuring the guest agent channel Start the guest virtual machine by running virsh start [domain]. Install qemu-guest-agent on the guest virtual machine (yum install qemu-guest-agent) and make it run automatically at every boot as a service (qemu-guest-agent.service).
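The following is a minimal sketch of the host-side and guest-side commands involved; the guest name rhel7-guest is a placeholder, and the guest is assumed to be a systemd-based system.
# On the host: explicitly request an agent-initiated shutdown instead of the ACPI event
virsh shutdown --mode=agent rhel7-guest
# Inside the guest: install the agent and make it start on every boot
yum install qemu-guest-agent
systemctl enable qemu-guest-agent.service
systemctl start qemu-guest-agent.service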
[ "<channel type='unix'> <source mode='bind'/> <target type='virtio' name='org.qemu.guest_agent.0'/> </channel>" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-qemu-agent-vish-shutdown
Chapter 11. Supported and Unsupported features for IBM Power and IBM Z
Chapter 11. Supported and Unsupported features for IBM Power and IBM Z
Table 11.1. List of supported and unsupported features on IBM Power and IBM Z
Features | IBM Power | IBM Z
Compact deployment | Unsupported | Unsupported
Dynamic storage devices | Unsupported | Supported
Stretched Cluster - Arbiter | Supported | Unsupported
Federal Information Processing Standard Publication (FIPS) | Unsupported | Unsupported
Ability to view pool compression metrics | Supported | Unsupported
Automated scaling of Multicloud Object Gateway (MCG) endpoint pods | Supported | Unsupported
Alerts to control overprovision | Supported | Unsupported
Alerts when Ceph Monitor runs out of space | Supported | Unsupported
Extended OpenShift Data Foundation control plane which allows pluggable external storage such as IBM Flashsystem | Unsupported | Unsupported
IPV6 support | Unsupported | Unsupported
Multus | Unsupported | Unsupported
Multicloud Object Gateway (MCG) bucket replication | Supported | Unsupported
Quota support for object data | Supported | Unsupported
Minimum deployment | Unsupported | Unsupported
Regional-Disaster Recovery (Regional-DR) with Red Hat Advanced Cluster Management (RHACM) | Supported | Unsupported
Metro-Disaster Recovery (Metro-DR) multiple clusters with RHACM | Supported | Supported
Single Node solution for Radio Access Network (RAN) | Unsupported | Unsupported
Support for network file system (NFS) services | Supported | Unsupported
Ability to change Multicloud Object Gateway (MCG) account credentials | Supported | Unsupported
Multicluster monitoring in Red Hat Advanced Cluster Management console | Supported | Unsupported
Deletion of expired objects in Multicloud Object Gateway lifecycle | Supported | Unsupported
Agnostic deployment of OpenShift Data Foundation on any Openshift supported platform | Unsupported | Unsupported
Installer provisioned deployment of OpenShift Data Foundation using bare metal infrastructure | Unsupported | Unsupported
Openshift dual stack with OpenShift Data Foundation using IPv4 | Unsupported | Unsupported
Ability to disable Multicloud Object Gateway external service during deployment | Unsupported | Unsupported
Ability to allow overriding of default NooBaa backing store | Supported | Unsupported
Allowing ocs-operator to deploy two MGR pods, one active and one standby | Supported | Unsupported
Disaster Recovery for brownfield deployments | Unsupported | Supported
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/planning_your_deployment/unsupported-features
Appendix D. Additional Resources
Appendix D. Additional Resources To learn more about virtualization and Red Hat Enterprise Linux, see the following resources. D.1. Online Resources http://www.libvirt.org/ is the official upstream website for the libvirt virtualization API. https://virt-manager.org/ is the upstream project website for the Virtual Machine Manager (virt-manager), the graphical application for managing virtual machines. Red Hat Virtualization - http://www.redhat.com/products/cloud-computing/virtualization/ Red Hat product documentation - https://access.redhat.com/documentation/en/ Virtualization technologies overview - http://virt.kernelnewbies.org
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/appe-Additional_resources
Chapter 4. Alerts
Chapter 4. Alerts 4.1. Setting up alerts For internal Mode clusters, various alerts related to the storage metrics services, storage cluster, disk devices, cluster health, cluster capacity, and so on are displayed in the Block and File, and the Object dashboards. These alerts are not available for external Mode. Note It might take a few minutes for alerts to be shown in the alert panel because only firing alerts are visible in this panel. You can also view alerts with additional details and customize the display of alerts in the OpenShift Container Platform. For more information, see Managing alerts.
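As a quick, hedged way to inspect which alerting rules back these dashboard alerts, you can list the PrometheusRule objects from the command line; the openshift-storage namespace below is the usual internal Mode namespace and is assumed here for illustration.
# List the alerting and recording rule objects created for OpenShift Data Foundation
oc get prometheusrules -n openshift-storage
# Show the alert names defined in those rules
oc get prometheusrules -n openshift-storage -o yaml | grep 'alert:'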
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/monitoring_openshift_data_foundation/alerts
Chapter 3. Creating an application using .NET 6.0
Chapter 3. Creating an application using .NET 6.0 Learn how to create a C# hello-world application. Procedure Create a new Console application in a directory called my-app: The output returns: A simple Hello World console application is created from a template. The application is stored in the specified my-app directory. Verification steps Run the project: The output returns:
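For reference, a sketch of what the generated Program.cs might contain; the exact template contents vary by SDK version, and this version matches the "Hello World!" output shown in the commands below.
using System;

namespace my_app
{
    class Program
    {
        static void Main(string[] args)
        {
            // Printed when the project is run with `dotnet run`
            Console.WriteLine("Hello World!");
        }
    }
}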
[ "dotnet new console --output my-app", "The template \"Console Application\" was created successfully. Processing post-creation actions Running 'dotnet restore' on my-app /my-app.csproj Determining projects to restore Restored /home/ username / my-app /my-app.csproj (in 67 ms). Restore succeeded.", "dotnet run --project my-app", "Hello World!" ]
https://docs.redhat.com/en/documentation/net/6.0/html/getting_started_with_.net_on_rhel_8/creating-an-application-using-dotnet_getting-started-with-dotnet-on-rhel-8
Chapter 136. KafkaBridgeHttpCors schema reference
Chapter 136. KafkaBridgeHttpCors schema reference Used in: KafkaBridgeHttpConfig Property Property type Description allowedOrigins string array List of allowed origins. Java regular expressions can be used. allowedMethods string array List of allowed HTTP methods.
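A hedged sketch of how these properties might appear under a KafkaBridge resource's HTTP configuration; the resource name, origin, and methods are placeholder values, and other required bridge settings are omitted.
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaBridge
metadata:
  name: my-bridge                        # placeholder name
spec:
  # ... other required bridge settings (replicas, bootstrapServers, and so on) omitted
  http:
    port: 8080
    cors:
      allowedOrigins:
        - "https://apps.example.com"     # Java regular expressions can also be used
      allowedMethods:
        - GET
        - POST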
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-KafkaBridgeHttpCors-reference
8.3. Using ID Views to Define AD User Attributes
8.3. Using ID Views to Define AD User Attributes With ID views, you can change the user attribute values defined in AD. For a complete list of the attributes, see Attributes an ID View Can Override. For example: If you are managing a mixed Linux-Windows environment and want to manually define POSIX attributes or SSH login attributes for an AD user, but the AD policy does not allow it, you can use ID views to override the attribute values. When the AD user authenticates to clients running SSSD or authenticates using a compat LDAP tree, the new values are used in the authentication process. Note Only IdM users can manage ID views. AD users cannot. The process for overriding the attribute values follows these steps: Create a new ID view. Add a user ID override in the ID view, and specify the required attribute value. Apply the ID view to a specific host. For details on how to perform these steps, see Defining a Different Attribute Value for a User Account on Different Hosts in the Linux Domain Identity, Authentication, and Policy Guide.
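The following is a minimal sketch of these steps using the ipa command-line tools; the view name, AD user, host, and attribute values are hypothetical and chosen only for illustration.
# 1. Create a new ID view
ipa idview-add example_view --desc "POSIX overrides for AD users"
# 2. Add a user ID override for an AD user and set the required attribute values
ipa idoverrideuser-add example_view [email protected] --shell=/bin/bash --homedir=/home/ad_user
# 3. Apply the ID view to a specific host
ipa idview-apply example_view --hosts=client.idm.example.com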
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/windows_integration_guide/id-views-store-host-specific
2.13. RHEA-2011:0553 - new package: iwl6000g2b-firmware
2.13. RHEA-2011:0553 - new package: iwl6000g2b-firmware A new iwl6000g2b-firmware package that works with the iwlagn driver in the latest Red Hat Enterprise Linux kernels to enable support for Intel Wireless WiFi Link 6030 Series AGN Adapters is now available. iwlagn is a kernel driver module for the Intel Wireless WiFi Link series of devices. The iwlagn driver requires firmware loaded on the device in order to function. This new iwl6000g2b-firmware package provides the firmware required by iwlagn to enable Intel Wireless WiFi Link 6030 Series AGN Adapters. (BZ# 664520 ) All users of the iwlagn driver, especially those requiring iwl6000g2b support, should install this new package, which provides this enhancement.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.1_technical_notes/iwl6000g2b-firmware_new
Chapter 2. Authentication [operator.openshift.io/v1]
Chapter 2. Authentication [operator.openshift.io/v1] Description Authentication provides information to configure an operator to manage authentication. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 2.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object status object 2.1.1. .spec Description Type object Property Type Description logLevel string logLevel is an intent based logging for an overall component. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for their operands. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". managementState string managementState indicates whether and how the operator should manage the component observedConfig `` observedConfig holds a sparse config that controller has observed from the cluster state. It exists in spec because it is an input to the level for the operator operatorLogLevel string operatorLogLevel is an intent based logging for the operator itself. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for themselves. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". unsupportedConfigOverrides `` unsupportedConfigOverrides overrides the final configuration that was computed by the operator. Red Hat does not support the use of this field. Misuse of this field could lead to unexpected behavior or conflict with other configuration options. Seek guidance from the Red Hat support before using this field. Use of this property blocks cluster upgrades, it must be removed before upgrading your cluster. 2.1.2. .status Description Type object Property Type Description conditions array conditions is a list of conditions and their status conditions[] object OperatorCondition is just the standard condition fields. generations array generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. generations[] object GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. latestAvailableRevision integer latestAvailableRevision is the deploymentID of the most recent deployment oauthAPIServer object OAuthAPIServer holds status specific only to oauth-apiserver observedGeneration integer observedGeneration is the last generation change you've dealt with readyReplicas integer readyReplicas indicates how many replicas are ready and at the desired state version string version is the level this availability applies to 2.1.3. 
.status.conditions Description conditions is a list of conditions and their status Type array 2.1.4. .status.conditions[] Description OperatorCondition is just the standard condition fields. Type object Required lastTransitionTime status type Property Type Description lastTransitionTime string lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. message string reason string status string status of the condition, one of True, False, Unknown. type string type of condition in CamelCase or in foo.example.com/CamelCase. 2.1.5. .status.generations Description generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. Type array 2.1.6. .status.generations[] Description GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. Type object Required group name namespace resource Property Type Description group string group is the group of the thing you're tracking hash string hash is an optional field set for resources without generation that are content sensitive like secrets and configmaps lastGeneration integer lastGeneration is the last generation of the workload controller involved name string name is the name of the thing you're tracking namespace string namespace is where the thing you're tracking is resource string resource is the resource type of the thing you're tracking 2.1.7. .status.oauthAPIServer Description OAuthAPIServer holds status specific only to oauth-apiserver Type object Property Type Description latestAvailableRevision integer LatestAvailableRevision is the latest revision used as suffix of revisioned secrets like encryption-config. A new revision causes a new deployment of pods. 2.2. API endpoints The following API endpoints are available: /apis/operator.openshift.io/v1/authentications DELETE : delete collection of Authentication GET : list objects of kind Authentication POST : create an Authentication /apis/operator.openshift.io/v1/authentications/{name} DELETE : delete an Authentication GET : read the specified Authentication PATCH : partially update the specified Authentication PUT : replace the specified Authentication /apis/operator.openshift.io/v1/authentications/{name}/status GET : read status of the specified Authentication PATCH : partially update status of the specified Authentication PUT : replace status of the specified Authentication 2.2.1. /apis/operator.openshift.io/v1/authentications HTTP method DELETE Description delete collection of Authentication Table 2.1. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind Authentication Table 2.2. HTTP responses HTTP code Reponse body 200 - OK AuthenticationList schema 401 - Unauthorized Empty HTTP method POST Description create an Authentication Table 2.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.4. Body parameters Parameter Type Description body Authentication schema Table 2.5. HTTP responses HTTP code Reponse body 200 - OK Authentication schema 201 - Created Authentication schema 202 - Accepted Authentication schema 401 - Unauthorized Empty 2.2.2. /apis/operator.openshift.io/v1/authentications/{name} Table 2.6. Global path parameters Parameter Type Description name string name of the Authentication HTTP method DELETE Description delete an Authentication Table 2.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 2.8. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Authentication Table 2.9. HTTP responses HTTP code Reponse body 200 - OK Authentication schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Authentication Table 2.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.11. HTTP responses HTTP code Reponse body 200 - OK Authentication schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Authentication Table 2.12. 
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.13. Body parameters Parameter Type Description body Authentication schema Table 2.14. HTTP responses HTTP code Reponse body 200 - OK Authentication schema 201 - Created Authentication schema 401 - Unauthorized Empty 2.2.3. /apis/operator.openshift.io/v1/authentications/{name}/status Table 2.15. Global path parameters Parameter Type Description name string name of the Authentication HTTP method GET Description read status of the specified Authentication Table 2.16. HTTP responses HTTP code Reponse body 200 - OK Authentication schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Authentication Table 2.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.18. HTTP responses HTTP code Reponse body 200 - OK Authentication schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Authentication Table 2.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.20. Body parameters Parameter Type Description body Authentication schema Table 2.21. HTTP responses HTTP code Reponse body 200 - OK Authentication schema 201 - Created Authentication schema 401 - Unauthorized Empty
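For illustration, a minimal Authentication manifest using the spec fields described above might look like the following sketch; the field values are examples rather than recommendations.
apiVersion: operator.openshift.io/v1
kind: Authentication
metadata:
  name: cluster            # the cluster-scoped singleton is conventionally named "cluster"
spec:
  managementState: Managed # how the operator should manage the component
  logLevel: Normal         # operand logging verbosity: Normal, Debug, Trace, or TraceAll
  operatorLogLevel: Normal # operator logging verbosity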
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/operator_apis/authentication-operator-openshift-io-v1
Chapter 2. Getting Started with NetworkManager
Chapter 2. Getting Started with NetworkManager 2.1. Overview of NetworkManager In Red Hat Enterprise Linux 7, the default networking service is provided by NetworkManager, a dynamic network control and configuration daemon that keeps network devices and connections up and active when they are available. The traditional ifcfg type configuration files are still supported. See Section 2.6, "Using NetworkManager with Network Scripts" for more information. 2.1.1. Benefits of Using NetworkManager The main benefits of using NetworkManager are: Making network management easier: NetworkManager ensures that network connectivity works. When it detects that there is no network configuration in a system but there are network devices, NetworkManager creates temporary connections to provide connectivity. Providing easy connection setup for the user: NetworkManager offers management through different tools: a GUI, nmtui, and nmcli. See Section 2.5, "NetworkManager Tools" . Supporting configuration flexibility: for example, when configuring a Wi-Fi interface, NetworkManager scans for and shows the available Wi-Fi networks. You can select a network, and NetworkManager prompts for the required credentials and provides automatic connection after the reboot process. NetworkManager can configure network aliases, IP addresses, static routes, DNS information, and VPN connections, as well as many connection-specific parameters. You can modify the configuration options to reflect your needs. Offering an API through D-Bus that allows applications to query and control network configuration and state. In this way, applications can check or configure networking through D-Bus. For example, the web console interface, which monitors and configures servers through a web browser, uses the NetworkManager D-Bus interface to configure networking. Maintaining the state of devices after the reboot process and taking over interfaces that are set into managed mode during restart. Handling devices that are not explicitly set to unmanaged but are controlled manually by the user or another network service.
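The nmcli tool mentioned above is a quick way to inspect what NetworkManager is managing. The following commands are illustrative only; device and connection names vary from system to system:
USD nmcli general status
USD nmcli device status
USD nmcli connection show
USD nmcli device wifi list
The first command reports overall NetworkManager state and connectivity, the next two list the managed devices and the configured connection profiles, and the last one shows the Wi-Fi networks found during a scan, matching the Wi-Fi example described in the benefits list.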
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/networking_guide/getting_started_with_networkmanager
Knative CLI
Knative CLI Red Hat OpenShift Serverless 1.33 Overview of CLI commands for Knative Functions, Serving, and Eventing Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_openshift_serverless/1.33/html/knative_cli/index
7.250. systemtap
7.250. systemtap 7.250.1. RHBA-2013:0345 - systemtap bug fix and enhancement update Updated systemtap packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. SystemTap is a tracing and probing tool to analyze and monitor activities of the operating system, including the kernel. It provides a wide range of filtering and analysis options. Note The systemtap packages have been upgraded to upstream version 1.8, which provides a number of bug fixes and enhancements over the previous version. (BZ#843123) Bug Fixes BZ#746334 Many of the SystemTap examples for memory used tracepoints that did not exist in some versions of the kernel. Consequently, if the user tried to run the mmanonpage.stp, mmfilepage.stp, or mmwriteback.stp files, this process failed. The examples have been updated to work with the memory tracepoints available in Red Hat Enterprise Linux 6, and SystemTap now works as expected. BZ# 822503 Previously, support for the IPv6 protocol was missing. Consequently, an attempt to execute a script that evaluates a tapset variable containing an IPv6 address, or call a tapset function returning an IPv6 address, was unsuccessful, and the address field was filled with the "Unsupported Address Family" message instead of a valid IPv6 address. This update adds support for the IPv6 protocol. BZ#824311 Previously, changes in the include/trace/events/sunrpc.h file were referenced, but were not defined by the #include directive. As a consequence, the rpc tracepoint was missing. This tracepoint has been defined using #include, and SystemTap works correctly in this situation. BZ#828103 In earlier kernels and versions of SystemTap, the nfsd.open probe-alias in the nfsd tapset referred to the "access" parameter, which was later renamed to "may_flags" in the kernel. Consequently, semantic errors occurred and the stap command failed to execute. This update allows the nfsd.open probe-alias to check under both names when setting the "access" script-level variable, and stap now works as expected in the described scenario. BZ#884951 Recent kernel updates required updates to some of the NFS tapset definitions to find certain context variables. With this update, the tapset aliases now search both old and new locations. All users of systemtap are advised to upgrade to these updated packages, which fix these bugs and add these enhancements.
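As an illustration of how tapset resolution problems such as the nfsd.open issue surface, you can ask stap to stop after the semantic analysis pass (-p2) instead of running a full script; a semantic error at this stage reports unresolved probe points or variables. The probe points and variables below are examples only; adjust them to the tapsets installed on your system:
USD stap -p2 -e 'probe nfsd.open { println(access) }'
USD stap -v -e 'probe syscall.open { printf("%s opened %s\n", execname(), filename); exit() }'
The first command only checks that the nfsd.open alias and its "access" variable resolve; the second runs a simple script that prints the process name and file name for the first open system call and then exits.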
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/systemtap
Chapter 8. Postinstallation storage configuration
Chapter 8. Postinstallation storage configuration After installing OpenShift Container Platform, you can further expand and customize your cluster to your requirements, including storage configuration. By default, containers operate by using the ephemeral storage or transient local storage. The ephemeral storage has a lifetime limitation. To store the data for a long time, you must configure persistent storage. You can configure storage by using one of the following methods: Dynamic provisioning You can dynamically provision storage on-demand by defining and creating storage classes that control different levels of storage, including storage access. Static provisioning You can use Kubernetes persistent volumes to make existing storage available to a cluster. Static provisioning can support various device configurations and mount options. 8.1. Dynamic provisioning Dynamic Provisioning allows you to create storage volumes on-demand, eliminating the need for cluster administrators to pre-provision storage. See Dynamic provisioning . 8.2. Recommended configurable storage technology The following table summarizes the recommended and configurable storage technologies for the given OpenShift Container Platform cluster application. Table 8.1. Recommended and configurable storage technology
Storage type | Block | File | Object
ROX [1]: Yes [4] | Yes [4] | Yes
RWX [2]: No | Yes | Yes
Registry: Configurable | Configurable | Recommended
Scaled registry: Not configurable | Configurable | Recommended
Metrics [3]: Recommended | Configurable [5] | Not configurable
Elasticsearch Logging: Recommended | Configurable [6] | Not supported [6]
Loki Logging: Not configurable | Not configurable | Recommended
Apps: Recommended | Recommended | Not configurable [7]
Footnotes: [1] ReadOnlyMany. [2] ReadWriteMany. [3] Prometheus is the underlying technology used for metrics. [4] This does not apply to physical disk, VM physical disk, VMDK, loopback over NFS, AWS EBS, and Azure Disk. [5] For metrics, using file storage with the ReadWriteMany (RWX) access mode is unreliable. If you use file storage, do not configure the RWX access mode on any persistent volume claims (PVCs) that are configured for use with metrics. [6] For logging, review the recommended storage solution in the Configuring persistent storage for the log store section. Using NFS storage as a persistent volume or through NAS, such as Gluster, can corrupt the data. Hence, NFS is not supported for Elasticsearch storage and LokiStack log store in OpenShift Container Platform Logging. You must use one persistent volume type per log store. [7] Object storage is not consumed through OpenShift Container Platform's PVs or PVCs. Apps must integrate with the object storage REST API.
Note A scaled registry is an OpenShift image registry where two or more pod replicas are running. 8.2.1. Specific application storage recommendations Important Testing shows issues with using the NFS server on Red Hat Enterprise Linux (RHEL) as a storage backend for core services. This includes the OpenShift Container Registry and Quay, Prometheus for monitoring storage, and Elasticsearch for logging storage. Therefore, using RHEL NFS to back PVs used by core services is not recommended. Other NFS implementations in the marketplace might not have these issues. Contact the individual NFS implementation vendor for more information on any testing that was possibly completed against these OpenShift Container Platform core components. 8.2.1.1.
Registry In a non-scaled/high-availability (HA) OpenShift image registry cluster deployment: The storage technology does not have to support RWX access mode. The storage technology must ensure read-after-write consistency. The preferred storage technology is object storage followed by block storage. File storage is not recommended for OpenShift image registry cluster deployment with production workloads. 8.2.1.2. Scaled registry In a scaled/HA OpenShift image registry cluster deployment: The storage technology must support RWX access mode. The storage technology must ensure read-after-write consistency. The preferred storage technology is object storage. Red Hat OpenShift Data Foundation (ODF), Amazon Simple Storage Service (Amazon S3), Google Cloud Storage (GCS), Microsoft Azure Blob Storage, and OpenStack Swift are supported. Object storage should be S3 or Swift compliant. For non-cloud platforms, such as vSphere and bare metal installations, the only configurable technology is file storage. Block storage is not configurable. The use of Network File System (NFS) storage with OpenShift Container Platform is supported. However, the use of NFS storage with a scaled registry can cause known issues. For more information, see the Red Hat Knowledgebase solution, Is NFS supported for OpenShift cluster internal components in Production? . 8.2.1.3. Metrics In an OpenShift Container Platform hosted metrics cluster deployment: The preferred storage technology is block storage. Object storage is not configurable. Important It is not recommended to use file storage for a hosted metrics cluster deployment with production workloads. 8.2.1.4. Logging In an OpenShift Container Platform hosted logging cluster deployment: Loki Operator: The preferred storage technology is S3 compatible Object storage. Block storage is not configurable. OpenShift Elasticsearch Operator: The preferred storage technology is block storage. Object storage is not supported. Note As of logging version 5.4.3 the OpenShift Elasticsearch Operator is deprecated and is planned to be removed in a future release. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed. As an alternative to using the OpenShift Elasticsearch Operator to manage the default log storage, you can use the Loki Operator. 8.2.1.5. Applications Application use cases vary from application to application, as described in the following examples: Storage technologies that support dynamic PV provisioning have low mount time latencies, and are not tied to nodes to support a healthy cluster. Application developers are responsible for knowing and understanding the storage requirements for their application, and how it works with the provided storage to ensure that issues do not occur when an application scales or interacts with the storage layer. 8.2.2. Other specific application storage recommendations Important It is not recommended to use RAID configurations on Write intensive workloads, such as etcd . If you are running etcd with a RAID configuration, you might be at risk of encountering performance issues with your workloads. Red Hat OpenStack Platform (RHOSP) Cinder: RHOSP Cinder tends to be adept in ROX access mode use cases. Databases: Databases (RDBMSs, NoSQL DBs, etc.) tend to perform best with dedicated block storage. The etcd database must have enough storage and adequate performance capacity to enable a large cluster. 
Information about monitoring and benchmarking tools to establish ample storage and a high-performance environment is described in Recommended etcd practices . 8.3. Deploy Red Hat OpenShift Data Foundation Red Hat OpenShift Data Foundation is a provider of agnostic persistent storage for OpenShift Container Platform supporting file, block, and object storage, either in-house or in hybrid clouds. As a Red Hat storage solution, Red Hat OpenShift Data Foundation is completely integrated with OpenShift Container Platform for deployment, management, and monitoring. For more information, see the Red Hat OpenShift Data Foundation documentation . Important OpenShift Data Foundation on top of Red Hat Hyperconverged Infrastructure (RHHI) for Virtualization, which uses hyperconverged nodes that host virtual machines installed with OpenShift Container Platform, is not a supported configuration. For more information about supported platforms, see the Red Hat OpenShift Data Foundation Supportability and Interoperability Guide . If you are looking for Red Hat OpenShift Data Foundation information about... See the following Red Hat OpenShift Data Foundation documentation: What's new, known issues, notable bug fixes, and Technology Previews OpenShift Data Foundation 4.12 Release Notes Supported workloads, layouts, hardware and software requirements, sizing and scaling recommendations Planning your OpenShift Data Foundation 4.12 deployment Instructions on deploying OpenShift Data Foundation to use an external Red Hat Ceph Storage cluster Deploying OpenShift Data Foundation 4.12 in external mode Instructions on deploying OpenShift Data Foundation to local storage on bare metal infrastructure Deploying OpenShift Data Foundation 4.12 using bare metal infrastructure Instructions on deploying OpenShift Data Foundation on Red Hat OpenShift Container Platform VMware vSphere clusters Deploying OpenShift Data Foundation 4.12 on VMware vSphere Instructions on deploying OpenShift Data Foundation using Amazon Web Services for local or cloud storage Deploying OpenShift Data Foundation 4.12 using Amazon Web Services Instructions on deploying and managing OpenShift Data Foundation on existing Red Hat OpenShift Container Platform Google Cloud clusters Deploying and managing OpenShift Data Foundation 4.12 using Google Cloud Instructions on deploying and managing OpenShift Data Foundation on existing Red Hat OpenShift Container Platform Azure clusters Deploying and managing OpenShift Data Foundation 4.12 using Microsoft Azure Instructions on deploying OpenShift Data Foundation to use local storage on IBM Power(R) infrastructure Deploying OpenShift Data Foundation on IBM Power(R) Instructions on deploying OpenShift Data Foundation to use local storage on IBM Z(R) infrastructure Deploying OpenShift Data Foundation on IBM Z(R) infrastructure Allocating storage to core services and hosted applications in Red Hat OpenShift Data Foundation, including snapshot and clone Managing and allocating resources Managing storage resources across a hybrid cloud or multicloud environment using the Multicloud Object Gateway (NooBaa) Managing hybrid and multicloud resources Safely replacing storage devices for Red Hat OpenShift Data Foundation Replacing devices Safely replacing a node in a Red Hat OpenShift Data Foundation cluster Replacing nodes Scaling operations in Red Hat OpenShift Data Foundation Scaling storage Monitoring a Red Hat OpenShift Data Foundation 4.12 cluster Monitoring Red Hat OpenShift Data Foundation 4.12 Resolve issues 
encountered during operations Troubleshooting OpenShift Data Foundation 4.12 Migrating your OpenShift Container Platform cluster from version 3 to version 4 Migration
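The dynamic provisioning model summarized in Section 8.1 is driven by storage classes and persistent volume claims. The following is a minimal sketch of requesting dynamically provisioned storage; the storage class name gp3-csi, the namespace, and the claim name are assumptions that you must replace with values valid for your cluster:
USD oc get storageclass
USD oc apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-claim        # placeholder claim name
  namespace: example-project # placeholder namespace
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: gp3-csi  # assumed class; use a class listed by "oc get storageclass"
EOF
When the claim is created, the provisioner associated with the referenced storage class creates a matching persistent volume on demand and binds it to the claim.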
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/postinstallation_configuration/post-install-storage-configuration
Red Hat Data Grid
Red Hat Data Grid Data Grid is a high-performance, distributed in-memory data store. Schemaless data structure: flexibility to store different objects as key-value pairs. Grid-based data storage: designed to distribute and replicate data across clusters. Elastic scaling: dynamically adjust the number of nodes to meet demand without service disruption. Data interoperability: store, retrieve, and query data in the grid from different endpoints.
null
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/data_grid_spring_boot_starter/red-hat-data-grid
Chapter 1. Overview
Chapter 1. Overview JBoss Operations Network (JON) is an enterprise, Java-based administration and management platform that you can use to develop, test, deploy, and monitor JBoss middleware applications. JON is based on RHQ. When using the JON platform to manage AMQ Brokers, the system comprises three components: JON server JON agent Plug-in pack The AMQ Broker plug-in is a connector that enables the JON agent to collect information about the message brokers running in your JBoss environment. 1.1. Key Features With the AMQ Broker plug-in for JON, you can: Discover and maintain an inventory of AMQ Brokers. Store, manage, and update AMQ Broker configurations. Detect configuration changes, correlate them with performance history, and roll back changes. Automate and schedule the execution of operations for managed resources and resource groups. 1.2. Supported Configurations The supported configurations for both AMQ Broker and the JON server apply when you are using them together. Ensure that your systems align with the configurations documented in the following Knowledgebase articles: AMQ 7 Supported Configurations JON Supported Configurations and Components
null
https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/using_jon_with_amq_broker/jon-install
Chapter 4. ConsoleLink [console.openshift.io/v1]
Chapter 4. ConsoleLink [console.openshift.io/v1] Description ConsoleLink is an extension for customizing OpenShift web console links. Compatibility level 2: Stable within a major release for a minimum of 9 months or 3 minor releases (whichever is longer). Type object Required spec 4.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object ConsoleLinkSpec is the desired console link configuration. 4.1.1. .spec Description ConsoleLinkSpec is the desired console link configuration. Type object Required href location text Property Type Description applicationMenu object applicationMenu holds information about section and icon used for the link in the application menu, and it is applicable only when location is set to ApplicationMenu. href string href is the absolute secure URL for the link (must use https) location string location determines which location in the console the link will be appended to (ApplicationMenu, HelpMenu, UserMenu, NamespaceDashboard). namespaceDashboard object namespaceDashboard holds information about namespaces in which the dashboard link should appear, and it is applicable only when location is set to NamespaceDashboard. If not specified, the link will appear in all namespaces. text string text is the display text for the link 4.1.2. .spec.applicationMenu Description applicationMenu holds information about section and icon used for the link in the application menu, and it is applicable only when location is set to ApplicationMenu. Type object Required section Property Type Description imageURL string imageUrl is the URL for the icon used in front of the link in the application menu. The URL must be an HTTPS URL or a Data URI. The image should be square and will be shown at 24x24 pixels. section string section is the section of the application menu in which the link should appear. This can be any text that will appear as a subheading in the application menu dropdown. A new section will be created if the text does not match text of an existing section. 4.1.3. .spec.namespaceDashboard Description namespaceDashboard holds information about namespaces in which the dashboard link should appear, and it is applicable only when location is set to NamespaceDashboard. If not specified, the link will appear in all namespaces. Type object Property Type Description namespaceSelector object namespaceSelector is used to select the Namespaces that should contain dashboard link by label. If the namespace labels match, dashboard link will be shown for the namespaces. namespaces array (string) namespaces is an array of namespace names in which the dashboard link should appear. 4.1.4. .spec.namespaceDashboard.namespaceSelector Description namespaceSelector is used to select the Namespaces that should contain dashboard link by label. 
If the namespace labels match, dashboard link will be shown for the namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 4.1.5. .spec.namespaceDashboard.namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 4.1.6. .spec.namespaceDashboard.namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 4.2. API endpoints The following API endpoints are available: /apis/console.openshift.io/v1/consolelinks DELETE : delete collection of ConsoleLink GET : list objects of kind ConsoleLink POST : create a ConsoleLink /apis/console.openshift.io/v1/consolelinks/{name} DELETE : delete a ConsoleLink GET : read the specified ConsoleLink PATCH : partially update the specified ConsoleLink PUT : replace the specified ConsoleLink /apis/console.openshift.io/v1/consolelinks/{name}/status GET : read status of the specified ConsoleLink PATCH : partially update status of the specified ConsoleLink PUT : replace status of the specified ConsoleLink 4.2.1. /apis/console.openshift.io/v1/consolelinks Table 4.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of ConsoleLink Table 4.2. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. 
Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. 
The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 4.3. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind ConsoleLink Table 4.4. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. 
Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 4.5. HTTP responses HTTP code Reponse body 200 - OK ConsoleLinkList schema 401 - Unauthorized Empty HTTP method POST Description create a ConsoleLink Table 4.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.7. Body parameters Parameter Type Description body ConsoleLink schema Table 4.8. HTTP responses HTTP code Reponse body 200 - OK ConsoleLink schema 201 - Created ConsoleLink schema 202 - Accepted ConsoleLink schema 401 - Unauthorized Empty 4.2.2. /apis/console.openshift.io/v1/consolelinks/{name} Table 4.9. Global path parameters Parameter Type Description name string name of the ConsoleLink Table 4.10. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a ConsoleLink Table 4.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 4.12. Body parameters Parameter Type Description body DeleteOptions schema Table 4.13. 
HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ConsoleLink Table 4.14. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 4.15. HTTP responses HTTP code Reponse body 200 - OK ConsoleLink schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ConsoleLink Table 4.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 4.17. Body parameters Parameter Type Description body Patch schema Table 4.18. HTTP responses HTTP code Reponse body 200 - OK ConsoleLink schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ConsoleLink Table 4.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.20. Body parameters Parameter Type Description body ConsoleLink schema Table 4.21. HTTP responses HTTP code Reponse body 200 - OK ConsoleLink schema 201 - Created ConsoleLink schema 401 - Unauthorized Empty 4.2.3. /apis/console.openshift.io/v1/consolelinks/{name}/status Table 4.22. Global path parameters Parameter Type Description name string name of the ConsoleLink Table 4.23. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified ConsoleLink Table 4.24. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 4.25. HTTP responses HTTP code Reponse body 200 - OK ConsoleLink schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified ConsoleLink Table 4.26. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. 
force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 4.27. Body parameters Parameter Type Description body Patch schema Table 4.28. HTTP responses HTTP code Reponse body 200 - OK ConsoleLink schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified ConsoleLink Table 4.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.30. Body parameters Parameter Type Description body ConsoleLink schema Table 4.31. HTTP responses HTTP code Reponse body 200 - OK ConsoleLink schema 201 - Created ConsoleLink schema 401 - Unauthorized Empty
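As an illustration of the spec fields described in section 4.1 and the create endpoint listed under 4.2.1, the following manifest adds a link to the web console help menu. This is a minimal sketch; the name, text, and URL are placeholder values:
USD oc apply -f - <<EOF
apiVersion: console.openshift.io/v1
kind: ConsoleLink
metadata:
  name: example-help-link        # placeholder name
spec:
  href: https://docs.example.com # must be an https URL
  location: HelpMenu             # one of ApplicationMenu, HelpMenu, UserMenu, NamespaceDashboard
  text: Example Documentation
EOF
USD oc get consolelinks
The second command lists the cluster-scoped ConsoleLink objects so that you can confirm the new link was created.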
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/console_apis/consolelink-console-openshift-io-v1
Chapter 15. Provisioning real-time and low latency workloads
Chapter 15. Provisioning real-time and low latency workloads Many organizations need high performance computing and low, predictable latency, especially in the financial and telecommunications industries. OpenShift Container Platform provides the Node Tuning Operator to implement automatic tuning to achieve low latency performance and consistent response time for OpenShift Container Platform applications. You use the performance profile configuration to make these changes. You can update the kernel to kernel-rt, reserve CPUs for cluster and operating system housekeeping duties, including pod infra containers, isolate CPUs for application containers to run the workloads, and disable unused CPUs to reduce power consumption. Note When writing your applications, follow the general recommendations described in RHEL for Real Time processes and threads . Additional resources Creating a performance profile 15.1. Scheduling a low latency workload onto a worker with real-time capabilities You can schedule low latency workloads onto a worker node where a performance profile that configures real-time capabilities is applied. Note To schedule the workload on specific nodes, use label selectors in the Pod custom resource (CR). The label selectors must match the nodes that are attached to the machine config pool that was configured for low latency by the Node Tuning Operator. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in as a user with cluster-admin privileges. You have applied a performance profile in the cluster that tunes worker nodes for low latency workloads. Procedure Create a Pod CR for the low latency workload and apply it in the cluster, for example: Example Pod spec configured to use real-time processing apiVersion: v1 kind: Pod metadata: name: dynamic-low-latency-pod annotations: cpu-quota.crio.io: "disable" 1 cpu-load-balancing.crio.io: "disable" 2 irq-load-balancing.crio.io: "disable" 3 spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: dynamic-low-latency-pod image: "registry.redhat.io/openshift4/cnf-tests-rhel8:v4.17" command: ["sleep", "10h"] resources: requests: cpu: 2 memory: "200M" limits: cpu: 2 memory: "200M" securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] nodeSelector: node-role.kubernetes.io/worker-cnf: "" 4 runtimeClassName: performance-dynamic-low-latency-profile 5 # ... 1 Disables the CPU completely fair scheduler (CFS) quota at the pod run time. 2 Disables CPU load balancing. 3 Opts the pod out of interrupt handling on the node. 4 The nodeSelector label must match the label that you specify in the Node CR. 5 runtimeClassName must match the name of the performance profile configured in the cluster. Enter the pod runtimeClassName in the form performance-<profile_name>, where <profile_name> is the name from the PerformanceProfile YAML. In the example, the name is performance-dynamic-low-latency-profile . Ensure the pod is running correctly. Status should be running , and the correct cnf-worker node should be set: USD oc get pod -o wide Expected output NAME READY STATUS RESTARTS AGE IP NODE dynamic-low-latency-pod 1/1 Running 0 5h33m 10.131.0.10 cnf-worker.example.com Get the CPUs that the pod configured for IRQ dynamic load balancing runs on: USD oc exec -it dynamic-low-latency-pod -- /bin/bash -c "grep Cpus_allowed_list /proc/self/status | awk '{print USD2}'" Expected output Cpus_allowed_list: 2-3 Verification Ensure the node configuration is applied correctly. 
Log in to the node to verify the configuration. USD oc debug node/<node-name> Verify that you can use the node file system: sh-4.4# chroot /host Expected output sh-4.4# Ensure the default system CPU affinity mask does not include the dynamic-low-latency-pod CPUs, for example, CPUs 2 and 3. sh-4.4# cat /proc/irq/default_smp_affinity Example output 33 Ensure the system IRQs are not configured to run on the dynamic-low-latency-pod CPUs: sh-4.4# find /proc/irq/ -name smp_affinity_list -exec sh -c 'i="USD1"; mask=USD(cat USDi); file=USD(echo USDi); echo USDfile: USDmask' _ {} \; Example output /proc/irq/0/smp_affinity_list: 0-5 /proc/irq/1/smp_affinity_list: 5 /proc/irq/2/smp_affinity_list: 0-5 /proc/irq/3/smp_affinity_list: 0-5 /proc/irq/4/smp_affinity_list: 0 /proc/irq/5/smp_affinity_list: 0-5 /proc/irq/6/smp_affinity_list: 0-5 /proc/irq/7/smp_affinity_list: 0-5 /proc/irq/8/smp_affinity_list: 4 /proc/irq/9/smp_affinity_list: 4 /proc/irq/10/smp_affinity_list: 0-5 /proc/irq/11/smp_affinity_list: 0 /proc/irq/12/smp_affinity_list: 1 /proc/irq/13/smp_affinity_list: 0-5 /proc/irq/14/smp_affinity_list: 1 /proc/irq/15/smp_affinity_list: 0 /proc/irq/24/smp_affinity_list: 1 /proc/irq/25/smp_affinity_list: 1 /proc/irq/26/smp_affinity_list: 1 /proc/irq/27/smp_affinity_list: 5 /proc/irq/28/smp_affinity_list: 1 /proc/irq/29/smp_affinity_list: 0 /proc/irq/30/smp_affinity_list: 0-5 Warning When you tune nodes for low latency, the usage of execution probes in conjunction with applications that require guaranteed CPUs can cause latency spikes. Use other probes, such as a properly configured set of network probes, as an alternative. Additional resources Placing pods on specific nodes using node selectors Assigning pods to nodes 15.2. Creating a pod with a guaranteed QoS class Keep the following in mind when you create a pod that is given a QoS class of Guaranteed : Every container in the pod must have a memory limit and a memory request, and they must be the same. Every container in the pod must have a CPU limit and a CPU request, and they must be the same. The following example shows the configuration file for a pod that has one container. The container has a memory limit and a memory request, both equal to 200 MiB. The container has a CPU limit and a CPU request, both equal to 1 CPU. apiVersion: v1 kind: Pod metadata: name: qos-demo namespace: qos-example spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: qos-demo-ctr image: <image-pull-spec> resources: limits: memory: "200Mi" cpu: "1" requests: memory: "200Mi" cpu: "1" securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] Create the pod: USD oc apply -f qos-pod.yaml --namespace=qos-example View detailed information about the pod: USD oc get pod qos-demo --namespace=qos-example --output=yaml Example output spec: containers: ... status: qosClass: Guaranteed Note If you specify a memory limit for a container, but do not specify a memory request, OpenShift Container Platform automatically assigns a memory request that matches the limit. Similarly, if you specify a CPU limit for a container, but do not specify a CPU request, OpenShift Container Platform automatically assigns a CPU request that matches the limit. 15.3. Disabling CPU load balancing in a Pod Functionality to disable or enable CPU load balancing is implemented on the CRI-O level. The code under the CRI-O disables or enables CPU load balancing only when the following requirements are met. 
The pod must use the performance-<profile-name> runtime class. You can get the proper name by looking at the status of the performance profile, as shown here: apiVersion: performance.openshift.io/v2 kind: PerformanceProfile ... status: ... runtimeClass: performance-manual The Node Tuning Operator is responsible for the creation of the high-performance runtime handler config snippet under relevant nodes and for creation of the high-performance runtime class under the cluster. It will have the same content as the default runtime handler except that it enables the CPU load balancing configuration functionality. To disable the CPU load balancing for the pod, the Pod specification must include the following fields: apiVersion: v1 kind: Pod metadata: #... annotations: #... cpu-load-balancing.crio.io: "disable" #... #... spec: #... runtimeClassName: performance-<profile_name> #... Note Only disable CPU load balancing when the CPU manager static policy is enabled and for pods with guaranteed QoS that use whole CPUs. Otherwise, disabling CPU load balancing can affect the performance of other containers in the cluster. 15.4. Disabling power saving mode for high priority pods You can configure pods to ensure that high priority workloads are unaffected when you configure power saving for the node that the workloads run on. When you configure a node with a power saving configuration, you must configure high priority workloads with performance configuration at the pod level, which means that the configuration applies to all the cores used by the pod. By disabling P-states and C-states at the pod level, you can configure high priority workloads for best performance and lowest latency. Table 15.1. Configuration for high priority workloads Annotation Possible Values Description cpu-c-states.crio.io: "enable" "disable" "max_latency:microseconds" This annotation allows you to enable or disable C-states for each CPU. Alternatively, you can also specify a maximum latency in microseconds for the C-states. For example, enable C-states with a maximum latency of 10 microseconds with the setting cpu-c-states.crio.io : "max_latency:10" . Set the value to "disable" to provide the best performance for a pod. cpu-freq-governor.crio.io: Any supported cpufreq governor . Sets the cpufreq governor for each CPU. The "performance" governor is recommended for high priority workloads. Prerequisites You have configured power saving in the performance profile for the node where the high priority workload pods are scheduled. Procedure Add the required annotations to your high priority workload pods. The annotations override the default settings. Example high priority workload annotation apiVersion: v1 kind: Pod metadata: #... annotations: #... cpu-c-states.crio.io: "disable" cpu-freq-governor.crio.io: "performance" #... #... spec: #... runtimeClassName: performance-<profile_name> #... Restart the pods to apply the annotation. Additional resources Configuring power saving for nodes that run colocated high and low priority workloads 15.5. Disabling CPU CFS quota To eliminate CPU throttling for pinned pods, create a pod with the cpu-quota.crio.io: "disable" annotation. This annotation disables the CPU completely fair scheduler (CFS) quota when the pod runs. Example pod specification with cpu-quota.crio.io disabled apiVersion: v1 kind: Pod metadata: annotations: cpu-quota.crio.io: "disable" spec: runtimeClassName: performance-<profile_name> #... 
Note Only disable CPU CFS quota when the CPU manager static policy is enabled and for pods with guaranteed QoS that use whole CPUs, for example, pods that contain CPU-pinned containers. Otherwise, disabling CPU CFS quota can affect the performance of other containers in the cluster. Additional resources Recommended firmware configuration for vDU cluster hosts 15.6. Disabling interrupt processing for CPUs where pinned containers are running To achieve low latency for workloads, some containers require that the CPUs they are pinned to do not process device interrupts. The irq-load-balancing.crio.io pod annotation defines whether device interrupts are processed on the CPUs where the pinned containers are running. When configured, CRI-O disables device interrupts where the pod containers are running. To disable interrupt processing for CPUs where containers belonging to individual pods are pinned, ensure that globallyDisableIrqLoadBalancing is set to false in the performance profile. Then, in the pod specification, set the irq-load-balancing.crio.io pod annotation to disable . The following pod specification contains this annotation: apiVersion: v1 kind: Pod metadata: annotations: irq-load-balancing.crio.io: "disable" spec: runtimeClassName: performance-<profile_name> ... Additional resources Managing device interrupt processing for guaranteed pod isolated CPUs
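As a quick check before relying on the irq-load-balancing.crio.io annotation, the following shell sketch, which is not part of the original procedure, confirms that the performance profile permits per-pod interrupt handling and that the corresponding runtime class exists. The profile name dynamic-low-latency-profile is an assumption carried over from the earlier example.
# Sketch only: substitute the name of your own performance profile.
oc get performanceprofile dynamic-low-latency-profile \
  -o jsonpath='{.spec.globallyDisableIrqLoadBalancing}{"\n"}'
# An empty value or "false" means per-pod IRQ load balancing can be disabled.
oc get runtimeclass performance-dynamic-low-latency-profile
# The runtime class is created by the Node Tuning Operator for the profile.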
[ "apiVersion: v1 kind: Pod metadata: name: dynamic-low-latency-pod annotations: cpu-quota.crio.io: \"disable\" 1 cpu-load-balancing.crio.io: \"disable\" 2 irq-load-balancing.crio.io: \"disable\" 3 spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: dynamic-low-latency-pod image: \"registry.redhat.io/openshift4/cnf-tests-rhel8:v4.17\" command: [\"sleep\", \"10h\"] resources: requests: cpu: 2 memory: \"200M\" limits: cpu: 2 memory: \"200M\" securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] nodeSelector: node-role.kubernetes.io/worker-cnf: \"\" 4 runtimeClassName: performance-dynamic-low-latency-profile 5", "oc get pod -o wide", "NAME READY STATUS RESTARTS AGE IP NODE dynamic-low-latency-pod 1/1 Running 0 5h33m 10.131.0.10 cnf-worker.example.com", "oc exec -it dynamic-low-latency-pod -- /bin/bash -c \"grep Cpus_allowed_list /proc/self/status | awk '{print USD2}'\"", "Cpus_allowed_list: 2-3", "oc debug node/<node-name>", "sh-4.4# chroot /host", "sh-4.4#", "sh-4.4# cat /proc/irq/default_smp_affinity", "33", "sh-4.4# find /proc/irq/ -name smp_affinity_list -exec sh -c 'i=\"USD1\"; mask=USD(cat USDi); file=USD(echo USDi); echo USDfile: USDmask' _ {} \\;", "/proc/irq/0/smp_affinity_list: 0-5 /proc/irq/1/smp_affinity_list: 5 /proc/irq/2/smp_affinity_list: 0-5 /proc/irq/3/smp_affinity_list: 0-5 /proc/irq/4/smp_affinity_list: 0 /proc/irq/5/smp_affinity_list: 0-5 /proc/irq/6/smp_affinity_list: 0-5 /proc/irq/7/smp_affinity_list: 0-5 /proc/irq/8/smp_affinity_list: 4 /proc/irq/9/smp_affinity_list: 4 /proc/irq/10/smp_affinity_list: 0-5 /proc/irq/11/smp_affinity_list: 0 /proc/irq/12/smp_affinity_list: 1 /proc/irq/13/smp_affinity_list: 0-5 /proc/irq/14/smp_affinity_list: 1 /proc/irq/15/smp_affinity_list: 0 /proc/irq/24/smp_affinity_list: 1 /proc/irq/25/smp_affinity_list: 1 /proc/irq/26/smp_affinity_list: 1 /proc/irq/27/smp_affinity_list: 5 /proc/irq/28/smp_affinity_list: 1 /proc/irq/29/smp_affinity_list: 0 /proc/irq/30/smp_affinity_list: 0-5", "apiVersion: v1 kind: Pod metadata: name: qos-demo namespace: qos-example spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: qos-demo-ctr image: <image-pull-spec> resources: limits: memory: \"200Mi\" cpu: \"1\" requests: memory: \"200Mi\" cpu: \"1\" securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]", "oc apply -f qos-pod.yaml --namespace=qos-example", "oc get pod qos-demo --namespace=qos-example --output=yaml", "spec: containers: status: qosClass: Guaranteed", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile status: runtimeClass: performance-manual", "apiVersion: v1 kind: Pod metadata: # annotations: # cpu-load-balancing.crio.io: \"disable\" # # spec: # runtimeClassName: performance-<profile_name> #", "apiVersion: v1 kind: Pod metadata: # annotations: # cpu-c-states.crio.io: \"disable\" cpu-freq-governor.crio.io: \"performance\" # # spec: # runtimeClassName: performance-<profile_name> #", "apiVersion: v1 kind: Pod metadata: annotations: cpu-quota.crio.io: \"disable\" spec: runtimeClassName: performance-<profile_name> #", "apiVersion: performance.openshift.io/v2 kind: Pod metadata: annotations: irq-load-balancing.crio.io: \"disable\" spec: runtimeClassName: performance-<profile_name>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/scalability_and_performance/cnf-provisioning-low-latency-workloads
Configuring Red Hat build of OpenJDK 21 on RHEL
Configuring Red Hat build of OpenJDK 21 on RHEL Red Hat build of OpenJDK 21 Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/21/html/configuring_red_hat_build_of_openjdk_21_on_rhel/index
19.6. Displaying Guest Details
19.6. Displaying Guest Details You can use the Virtual Machine Monitor to view activity information for any virtual machines on your system. To view a virtual system's details: In the Virtual Machine Manager main window, highlight the virtual machine that you want to view. Figure 19.12. Selecting a virtual machine to display From the Virtual Machine Manager Edit menu, select Virtual Machine Details . When the Virtual Machine details window opens, there may be a console displayed. Should this happen, click View and then select Details . The Overview window opens first by default. To go back to this window, select Overview from the navigation pane on the left-hand side. The Overview view shows a summary of configuration details for the guest. Figure 19.13. Displaying guest details overview Select CPUs from the navigation pane on the left-hand side. The CPUs view allows you to view or change the current processor allocation. It is also possible to increase the number of virtual CPUs (vCPUs) while the virtual machine is running, which is referred to as hot plugging . Important Hot unplugging vCPUs is not supported in Red Hat Enterprise Linux 7. Figure 19.14. Processor allocation panel Select Memory from the navigation pane on the left-hand side. The Memory view allows you to view or change the current memory allocation. Figure 19.15. Displaying memory allocation Select Boot Options from the navigation pane on the left-hand side. The Boot Options view allows you to view or change the boot options including whether or not the virtual machine starts when the host boots and the boot device order for the virtual machine. Figure 19.16. Displaying boot options Each virtual disk attached to the virtual machine is displayed in the navigation pane. click a virtual disk to modify or remove it. Figure 19.17. Displaying disk configuration Each virtual network interface attached to the virtual machine is displayed in the navigation pane. click a virtual network interface to modify or remove it. Figure 19.18. Displaying network configuration
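Roughly the same guest details can also be read from the command line with the virsh utility. The following sketch is not part of the Virtual Machine Manager procedure above, and the guest name guest1 is a placeholder.
virsh dominfo guest1      # overview: state, vCPU count, memory allocation
virsh vcpucount guest1    # current and maximum virtual CPU allocation
virsh domblklist guest1   # virtual disks attached to the guest
virsh domiflist guest1    # virtual network interfaces attached to the guest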
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-managing_guests_with_the_virtual_machine_manager_virt_manager-displaying_guest_details
Chapter 6. Using .NET 6.0 on OpenShift Container Platform
Chapter 6. Using .NET 6.0 on OpenShift Container Platform 6.1. Overview NET images are added to OpenShift by importing imagestream definitions from s2i-dotnetcore . The imagestream definitions includes the dotnet imagestream which contains sdk images for different supported versions of .NET. .NET Life Cycle provides an up-to-date overview of supported versions. Version Tag Alias .NET Core 3.1 dotnet:3.1-el7 dotnet:3.1 dotnet:3.1-ubi8 .NET 5 dotnet:5.0-ubi8 dotnet:5.0 .NET 6 dotnet:6.0-ubi8 dotnet:6.0 The sdk images have corresponding runtime images which are defined under the dotnet-runtime imagestream. The container images work across different versions of Red Hat Enterprise Linux and OpenShift. The RHEL7-based (suffix -el7) are hosted on the registry.redhat.io image repository. Authentication is required to pull these images. These credentials are configured by adding a pull secret to the OpenShift namespace. The UBI-8 based images (suffix -ubi8) are hosted on the registry.access.redhat.com and do not require authentication. 6.2. Installing .NET image streams To install .NET image streams, use image stream definitions from s2i-dotnetcore with the OpenShift Client ( oc ) binary. Image streams can be installed from Linux, Mac, and Windows. A script enables you to install, update or remove the image streams. You can define .NET image streams in the global openshift namespace or locally in a project namespace. Sufficient permissions are required to update the openshift namespace definitions. 6.2.1. Installing image streams using OpenShift Client You can use OpenShift Client ( oc ) to install .NET image streams. Prerequisites An existing pull secret must be present in the namespace. If no pull secret is present in the namespace. Add one by following the instructions in the Red Hat Container Registry Authentication guide. Procedure List the available .NET image streams: The output shows installed images. If no images are installed, the Error from server (NotFound) message is displayed. If the Error from server (NotFound) message is displayed: Install the .NET image streams: If the Error from server (NotFound) message is not displayed: Include newer versions of existing .NET image streams: 6.2.2. Installing image streams on Linux and macOS You can use this script to install, upgrade, or remove the image streams on Linux and macOS. Procedure Download the script. On Linux use: On Mac use: Make the script executable: Log in to the OpenShift cluster: Install image streams and add a pull secret for authentication against the registry.redhat.io : Replace subscription_username with the name of the user, and replace subscription_password with the user's password. The credentials may be omitted if you do not plan to use the RHEL7-based images. If the pull secret is already present, the --user and --password arguments are ignored. Additional information ./install-imagestreams.sh --help 6.2.3. Installing image streams on Windows You can use this script to install, upgrade, or remove the image streams on Windows. Procedure Download the script. Log in to the OpenShift cluster: Install image streams and add a pull secret for authentication against the registry.redhat.io : Replace subscription_username with the name of the user, and replace subscription_password with the user's password. The credentials may be omitted if you do not plan to use the RHEL7-based images. If the pull secret is already present, the -User and -Password arguments are ignored. 
Note The PowerShell ExecutionPolicy may prohibit executing this script. To relax the policy, run Set-ExecutionPolicy -Scope Process -ExecutionPolicy Bypass -Force . Additional information Get-Help .\install-imagestreams.ps1 6.3. Deploying applications from source using oc The following example demonstrates how to deploy the example-app application using oc , which is in the app folder on the {dotnet-branch} branch of the redhat-developer/s2i-dotnetcore-ex GitHub repository: Procedure Create a new OpenShift project: Add the ASP.NET Core application: Track the progress of the build: View the deployed application once the build is finished: The application is now accessible within the project. Optional : Make the project accessible externally: Obtain the shareable URL: 6.4. Deploying applications from binary artifacts using oc You can use .NET Source-to-Image (S2I) builder image to build applications using binary artifacts that you provide. Prerequisites Published application. For more information, see Publishing applications with .NET 6.0 . Procedure Create a new binary build: Start the build and specify the path to the binary artifacts on your local machine: Create a new application: 6.5. Environment variables for .NET 6.0 The .NET images support several environment variables to control the build behavior of your .NET application. You can set these variables as part of the build configuration, or add them to the .s2i/environment file in the application source code repository. Variable Name Description Default DOTNET_STARTUP_PROJECT Selects the project to run. This must be a project file (for example, csproj or fsproj ) or a folder containing a single project file. . DOTNET_ASSEMBLY_NAME Selects the assembly to run. This must not include the .dll extension. Set this to the output assembly name specified in csproj (PropertyGroup/AssemblyName). The name of the csproj file DOTNET_PUBLISH_READYTORUN When set to true , the application will be compiled ahead of time. This reduces startup time by reducing the amount of work the JIT needs to perform when the application is loading. false DOTNET_RESTORE_SOURCES Specifies the space-separated list of NuGet package sources used during the restore operation. This overrides all of the sources specified in the NuGet.config file. This variable cannot be combined with DOTNET_RESTORE_CONFIGFILE . DOTNET_RESTORE_CONFIGFILE Specifies a NuGet.Config file to be used for restore operations. This variable cannot be combined with DOTNET_RESTORE_SOURCES . DOTNET_TOOLS Specifies a list of .NET tools to install before building the app. It is possible to install a specific version by post pending the package name with @<version> . DOTNET_NPM_TOOLS Specifies a list of NPM packages to install before building the application. DOTNET_TEST_PROJECTS Specifies the list of test projects to test. This must be project files or folders containing a single project file. dotnet test is invoked for each item. DOTNET_CONFIGURATION Runs the application in Debug or Release mode. This value should be either Release or Debug . Release DOTNET_VERBOSITY Specifies the verbosity of the dotnet build commands. When set, the environment variables are printed at the start of the build. This variable can be set to one of the msbuild verbosity values ( q[uiet] , m[inimal] , n[ormal] , d[etailed] , and diag[nostic] ). HTTP_PROXY, HTTPS_PROXY Configures the HTTP or HTTPS proxy used when building and running the application, respectively. 
DOTNET_RM_SRC When set to true , the source code will not be included in the image. DOTNET_SSL_DIRS Specifies a list of folders or files with additional SSL certificates to trust. The certificates are trusted by each process that runs during the build and all processes that run in the image after the build (including the application that was built). The items can be absolute paths (starting with / ) or paths in the source repository (for example, certificates). NPM_MIRROR Uses a custom NPM registry mirror to download packages during the build process. ASPNETCORE_URLS This variable is set to http://*:8080 to configure ASP.NET Core to use the port exposed by the image. Changing this is not recommended. http://*:8080 DOTNET_RESTORE_DISABLE_PARALLEL When set to true , disables restoring multiple projects in parallel. This reduces restore timeout errors when the build container is running with low CPU limits. false DOTNET_INCREMENTAL When set to true , the NuGet packages will be kept so they can be re-used for an incremental build. false DOTNET_PACK When set to true , creates a tar.gz file at /opt/app-root/app.tar.gz that contains the published application. 6.6. Creating the MVC sample application s2i-dotnetcore-ex is the default Model, View, Controller (MVC) template application for .NET. This application is used as the example application by the .NET S2I image and can be created directly from the OpenShift UI using the Try Example link. The application can also be created with the OpenShift client binary ( oc ). Procedure To create the sample application using oc : Add the .NET application: Make the application accessible externally: Obtain the sharable URL: Additional resources s2i-dotnetcore-ex application repository on GitHub 6.7. Creating the CRUD sample application s2i-dotnetcore-persistent-ex is a simple Create, Read, Update, Delete (CRUD) .NET web application that stores data in a PostgreSQL database. Procedure To create the sample application using oc : Add the database: Add the .NET application: Add environment variables from the postgresql secret and database service name environment variable: Make the application accessible externally: Obtain the sharable URL: Additional resources s2i-dotnetcore-ex application repository on GitHub
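For orientation, the following shell sketch condenses the source deployment flow described in this chapter into one sequence. It is an illustration rather than an exact copy of the procedures above; the project and application names are placeholders, and {dotnet-branch} stands for the branch used in the original examples.
oc new-project sample-project
oc new-app --name=example-app \
  'dotnet:6.0-ubi8~https://github.com/redhat-developer/s2i-dotnetcore-ex#{dotnet-branch}' \
  --build-env DOTNET_STARTUP_PROJECT=app \
  --build-env DOTNET_CONFIGURATION=Release   # optional: any variable from the table above can be set this way
oc logs -f bc/example-app                    # follow the S2I build
oc expose svc/example-app                    # optional: expose the service externally
oc get route example-app                     # obtain the shareable URL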
[ "oc describe is dotnet", "oc create -f https://raw.githubusercontent.com/redhat-developer/s2i-dotnetcore/master/dotnet_imagestreams.json", "oc replace -f https://raw.githubusercontent.com/redhat-developer/s2i-dotnetcore/master/dotnet_imagestreams.json", "wget https://raw.githubusercontent.com/redhat-developer/s2i-dotnetcore/master/install-imagestreams.sh", "curl https://raw.githubusercontent.com/redhat-developer/s2i-dotnetcore/master/install-imagestreams.sh -o install-imagestreams.sh", "chmod +x install-imagestreams.sh", "oc login", "./install-imagestreams.sh --os rhel [--user subscription_username --password subscription_password ]", "Invoke-WebRequest https://raw.githubusercontent.com/redhat-developer/s2i-dotnetcore/master/install-imagestreams.ps1 -UseBasicParsing -OutFile install-imagestreams.ps1", "oc login", ".\\install-imagestreams.ps1 --OS rhel [-User subscription_username -Password subscription_password ]", "oc new-project sample-project", "oc new-app --name= example-app 'dotnet:6.0-ubi8~https://github.com/redhat-developer/s2i-dotnetcore-ex#{dotnet-branch}' --build-env DOTNET_STARTUP_PROJECT=app", "oc logs -f bc/ example-app", "oc logs -f dc/ example-app", "oc expose svc/ example-app", "oc get routes", "oc new-build --name= my-web-app dotnet:6.0-ubi8 --binary=true", "oc start-build my-web-app --from-dir= bin/Release/net6.0/publish", "oc new-app my-web-app", "oc new-app dotnet:6.0-ubi8~https://github.com/redhat-developer/s2i-dotnetcore-ex#{dotnet-branch} --context-dir=app", "oc expose service s2i-dotnetcore-ex", "oc get route s2i-dotnetcore-ex", "oc new-app postgresql-ephemeral", "oc new-app dotnet:6.0-ubi8~https://github.com/redhat-developer/s2i-dotnetcore-persistent-ex#{dotnet-branch} --context-dir app", "oc set env dc/s2i-dotnetcore-persistent-ex --from=secret/postgresql -e database-service=postgresql", "oc expose service s2i-dotnetcore-persistent-ex", "oc get route s2i-dotnetcore-persistent-ex" ]
https://docs.redhat.com/en/documentation/net/6.0/html/getting_started_with_.net_on_rhel_8/using_net_6_0_on_openshift_container_platform
Chapter 11. Disabling Windows container workloads
Chapter 11. Disabling Windows container workloads You can disable the capability to run Windows container workloads by uninstalling the Windows Machine Config Operator (WMCO) and deleting the namespace that was added by default when you installed the WMCO. 11.1. Uninstalling the Windows Machine Config Operator You can uninstall the Windows Machine Config Operator (WMCO) from your cluster. Prerequisites Delete the Windows Machine objects hosting your Windows workloads. Procedure From the Operators OperatorHub page, use the Filter by keyword box to search for Red Hat Windows Machine Config Operator . Click the Red Hat Windows Machine Config Operator tile. The Operator tile indicates it is installed. In the Windows Machine Config Operator descriptor page, click Uninstall . 11.2. Deleting the Windows Machine Config Operator namespace You can delete the namespace that was generated for the Windows Machine Config Operator (WMCO) by default. Prerequisites The WMCO is removed from your cluster. Procedure Remove all Windows workloads that were created in the openshift-windows-machine-config-operator namespace: USD oc delete --all pods --namespace=openshift-windows-machine-config-operator Verify that all pods in the openshift-windows-machine-config-operator namespace are deleted or are reporting a terminating state: USD oc get pods --namespace openshift-windows-machine-config-operator Delete the openshift-windows-machine-config-operator namespace: USD oc delete namespace openshift-windows-machine-config-operator Additional resources Deleting Operators from a cluster Removing Windows nodes
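As an optional check that is not part of the procedures above, you can confirm from the CLI that the Operator's objects are gone before and after removing the namespace; this sketch assumes the WMCO was installed through OperatorHub (OLM).
oc get subscription,csv --namespace openshift-windows-machine-config-operator
# Both lists should be empty once the Operator is uninstalled.
oc get namespace openshift-windows-machine-config-operator
# Returns "NotFound" after the namespace has been deleted.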
[ "oc delete --all pods --namespace=openshift-windows-machine-config-operator", "oc get pods --namespace openshift-windows-machine-config-operator", "oc delete namespace openshift-windows-machine-config-operator" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/windows_container_support_for_openshift/disabling-windows-container-workloads
Appendix B. Troubleshooting
Appendix B. Troubleshooting The troubleshooting information in the following sections might be helpful when diagnosing issues after the installation process. The following sections are for all supported architectures. However, if an issue is for a particular architecture, it is specified at the start of the section. B.1. Resuming an interrupted download attempt You can resume an interrupted download using the curl command. Prerequisite You have navigated to the Product Downloads section of the Red Hat Customer Portal at https://access.redhat.com/downloads , and selected the required variant, version, and architecture. You have right-clicked on the required ISO file, and selected Copy Link Location to copy the URL of the ISO image file to your clipboard. Procedure Download the ISO image from the new link. Add the --continue-at - option to automatically resume the download: Use a checksum utility such as sha256sum to verify the integrity of the image file after the download finishes: Compare the output with reference checksums provided on the Red Hat Enterprise Linux Product Download web page. Example B.1. Resuming an interrupted download attempt The following is an example of a curl command for a partially downloaded ISO image: B.2. Disks are not detected If the installation program cannot find a writable storage device to install to, it returns the following error message in the Installation Destination window: No disks detected. Please shut down the computer, connect at least one disk, and restart to complete installation. Check the following items: Your system has at least one storage device attached. If your system uses a hardware RAID controller; verify that the controller is properly configured and working as expected. See your controller's documentation for instructions. If you are installing into one or more iSCSI devices and there is no local storage present on the system, verify that all required LUNs are presented to the appropriate Host Bus Adapter (HBA). If the error message is still displayed after rebooting the system and starting the installation process, the installation program failed to detect the storage. In many cases the error message is a result of attempting to install on an iSCSI device that is not recognized by the installation program. In this scenario, you must perform a driver update before starting the installation. Check your hardware vendor's website to determine if a driver update is available. For more general information about driver updates, see the Updating drivers during installation . You can also consult the Red Hat Hardware Compatibility List, available at https://access.redhat.com/ecosystem/search/#/category/Server . B.3. Cannot boot with a RAID card If you cannot boot your system after the installation, you might need to reinstall and repartition your system's storage. Some BIOS types do not support booting from RAID cards. After you finish the installation and reboot the system for the first time, a text-based screen displays the boot loader prompt (for example, grub> ) and a flashing cursor might be displayed. If this is the case, you must repartition your system and move your /boot partition and the boot loader outside of the RAID array. The /boot partition and the boot loader must be on the same drive. Once these changes have been made, you should be able to finish your installation and boot the system properly. B.4. 
Graphical boot sequence is not responding When rebooting your system for the first time after installation, the system might be unresponsive during the graphical boot sequence. If this occurs, a reset is required. In this scenario, the boot loader menu is displayed successfully, but selecting any entry and attempting to boot the system results in a halt. This usually indicates that there is a problem with the graphical boot sequence. To resolve the issue, you must disable the graphical boot by temporarily altering the setting at boot time before changing it permanently. Procedure: Disabling the graphical boot temporarily Start your system and wait until the boot loader menu is displayed. If you set your boot timeout period to 0 , press the Esc key to access it. From the boot loader menu, use your cursor keys to highlight the entry you want to boot. Press the Tab key on BIOS-based systems or the e key on UEFI-based systems to edit the selected entry options. In the list of options, find the kernel line - that is, the line beginning with the keyword linux . On this line, locate and delete rhgb . Press F10 or Ctrl + X to boot your system with the edited options. If the system started successfully, you can log in normally. However, if you do not disable graphical boot permanently, you must perform this procedure every time the system boots. Procedure: Disabling the graphical boot permanently Log in to the root account on your system. Use the grubby tool to find the default GRUB kernel: Use the grubby tool to remove the rhgb boot option from the default kernel in your GRUB configuration. For example: Reboot the system. The graphical boot sequence is no longer used. If you want to enable the graphical boot sequence, follow the same procedure, replacing the --remove-args="rhgb" parameter with the --args="rhgb" parameter. This restores the rhgb boot option to the default kernel in your GRUB configuration. B.5. X server fails after log in An X server is a program in the X Window System that runs on local machines, that is, the computers used directly by users. X server handles all access to the graphics cards, display screens and input devices, typically a keyboard and mouse on those computers. The X Window System, often referred to as X, is a complete, cross-platform and free client-server system for managing GUIs on single computers and on networks of computers. The client-server model is an architecture that divides the work between two separate but linked applications, referred to as clients and servers.* If X server crashes after login, one or more of the file systems might be full. To troubleshoot the issue, execute the following command: The output verifies which partition is full - in most cases, the problem is on the /home partition. The following is a sample output of the df command: In the example, you can see that the /home partition is full, which causes the failure. Remove any unwanted files. After you free up some disk space, start X using the startx command. For additional information about df and an explanation of the options available, such as the -h option used in this example, see the df(1) man page on your system. *Source: http://www.linfo.org/x_server.html B.6. RAM is not recognized In some scenarios, the kernel does not recognize all memory (RAM), which causes the system to use less memory than is installed. If the total amount of memory that your system reports does not match your expectations, it is likely that at least one of your memory modules is faulty. 
On BIOS-based systems, you can use the Memtest86+ utility to test your system's memory. Some hardware configurations have part of the system's RAM reserved, and as a result, it is unavailable to the system. Some laptop computers with integrated graphics cards reserve a portion of memory for the GPU. For example, a laptop with 4 GiB of RAM and an integrated Intel graphics card shows roughly 3.7 GiB of available memory. Additionally, the kdump crash kernel dumping mechanism, which is enabled by default on most Red Hat Enterprise Linux systems, reserves some memory for the secondary kernel used in case of a primary kernel failure. This reserved memory is not displayed as available. Use this procedure to manually set the amount of memory. Procedure Check the amount of memory that your system currently reports in MiB: Reboot your system and wait until the boot loader menu is displayed. If your boot timeout period is set to 0 , press the Esc key to access the menu. From the boot loader menu, use your cursor keys to highlight the entry you want to boot, and press the Tab key on BIOS-based systems or the e key on UEFI-based systems to edit the selected entry options. In the list of options, find the kernel line: that is, the line beginning with the keyword linux . Append the following option to the end of this line: Replace xx with the amount of RAM you have in MiB. Press F10 or Ctrl + X to boot your system with the edited options. Wait for the system to boot, log in, and open a command line. Check the amount of memory that your system reports in MiB: If the total amount of RAM displayed by the command now matches your expectations, make the change permanent: B.7. System is displaying signal 11 errors A signal 11 error, commonly known as a segmentation fault, means that a program accessed a memory location that it was not assigned. A signal 11 error can occur due to a bug in one of the software programs that are installed, or faulty hardware. If you receive a signal 11 error during the installation process, verify that you are using the most recent installation images and prompt the installation program to verify them to ensure they are not corrupt. For more information, see Verifying Boot media . Faulty installation media (such as an improperly burned or scratched optical disk) are a common cause of signal 11 errors. Verify the integrity of the installation media before every installation. For information about obtaining the most recent installation media, refer to the Product Downloads page. To perform a media check before the installation starts, append the rd.live.check boot option at the boot menu. If you performed a media check without any errors and you still have issues with segmentation faults, it usually indicates that your system encountered a hardware error. In this scenario, the problem is most likely in the system's memory (RAM). This can be a problem even if you previously used a different operating system on the same computer without any errors. Note For AMD and Intel 64-bit and 64-bit ARM architectures: On BIOS-based systems, you can use the Memtest86+ memory testing module included on the installation media to perform a thorough test of your system's memory. For more information, see Detecting memory faults using the Memtest86 application . Other possible causes are beyond this document's scope. Consult your hardware manufacturer's documentation and also see the Red Hat Hardware Compatibility List, available online at https://access.redhat.com/ecosystem/search/#/category/Server . 
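The following is a condensed sketch of the workaround from the "RAM is not recognized" section above; the 4096M value is a placeholder for the amount of memory actually installed, and the change should only be made permanent after the temporary boot test succeeds.
free -m                                          # check how much memory the kernel currently reports
grubby --update-kernel=ALL --args="mem=4096M"    # persist the boot-time override tested earlier
reboot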
B.8. Unable to IPL from network storage space on IBM Power Systems If you experience difficulties when trying to IPL from Network Storage Space (*NWSSTG), it is most likely due to a missing PReP partition. In this scenario, you must reinstall the system and create this partition during the partitioning phase or in the Kickstart file. B.9. Using XDMCP There are scenarios where you have installed the X Window System and want to log in to your Red Hat Enterprise Linux system using a graphical login manager. Use this procedure to enable the X Display Manager Control Protocol (XDMCP) and remotely log in to a desktop environment from any X-compatible client, such as a network-connected workstation or X11 terminal. Note XDMCP is not supported by the Wayland protocol. Procedure Open the /etc/gdm/custom.conf configuration file in a plain text editor such as vi or nano . In the custom.conf file, locate the section starting with [xdmcp] . In this section, add the following line: If you are using XDMCP, ensure that WaylandEnable=false is present in the /etc/gdm/custom.conf file. Save the file and exit the text editor. Restart the X Window System. To do this, either reboot the system, or restart the GNOME Display Manager using the following command as root: Warning Restarting the gdm service terminates all currently running GNOME sessions of all desktop users who are logged in. This might result in users losing unsaved data. Wait for the login prompt and log in using your user name and password. The X Window System is now configured for XDMCP. You can connect to it from another workstation (client) by starting a remote X session using the X command on the client workstation. For example: Replace address with the host name of the remote X11 server. The command connects to the remote X11 server using XDMCP and displays the remote graphical login screen on display :1 of the X11 server system (usually accessible by pressing Ctrl-Alt-F8 ). You can also access remote desktop sessions using a nested X11 server, which opens the remote desktop as a window in your current X11 session. You can use Xnest to open a remote desktop nested in a local X11 session. For example, run Xnest using the following command, replacing address with the host name of the remote X11 server: Additional resources X Window System documentation B.10. Using rescue mode The installation program's rescue mode is a minimal Linux environment that can be booted from the Red Hat Enterprise Linux DVD or other boot media. It contains command-line utilities for repairing a wide variety of issues. Rescue mode can be accessed from the Troubleshooting menu of the boot menu. In this mode, you can mount file systems as read-only, blacklist or add a driver provided on a driver disc, install or upgrade system packages, or manage partitions. Note The installation program's rescue mode is different from rescue mode (an equivalent to single-user mode) and emergency mode, which are provided as parts of the systemd system and service manager. To boot into rescue mode, you must be able to boot the system using one of the Red Hat Enterprise Linux boot media, such as a minimal boot disc or USB drive, or a full installation DVD. Important Advanced storage, such as iSCSI or zFCP devices, must be configured either using dracut boot options such as rd.zfcp= or root=iscsi: options , or in the CMS configuration file on 64-bit IBM Z. It is not possible to configure these storage devices interactively after booting into rescue mode. 
For information about dracut boot options, see the dracut.cmdline(7) man page on your system. B.10.1. Booting into rescue mode This procedure describes how to boot into rescue mode. Procedure Boot the system from either minimal boot media, or a full installation DVD or USB drive, and wait for the boot menu to be displayed. From the boot menu, either select Troubleshooting > Rescue a Red Hat Enterprise Linux system option, or append the inst.rescue option to the boot command line. To enter the boot command line, press the Tab key on BIOS-based systems or the e key on UEFI-based systems. Optional: If your system requires a third-party driver provided on a driver disc to boot, append the inst.dd=driver_name to the boot command line: Optional: If a driver that is part of the Red Hat Enterprise Linux distribution prevents the system from booting, append the modprobe.blacklist= option to the boot command line: Press Enter (BIOS-based systems) or Ctrl + X (UEFI-based systems) to boot the modified option. Wait until the following message is displayed: If you select 1 , the installation program attempts to mount your file system under the directory /mnt/sysroot/ . You are notified if it fails to mount a partition. If you select 2 , it attempts to mount your file system under the directory /mnt/sysroot/ , but in read-only mode. If you select 3 , your file system is not mounted. For the system root, the installer supports two mount points /mnt/sysimage and /mnt/sysroot . The /mnt/sysroot path is used to mount / of the target system. Usually, the physical root and the system root are the same, so /mnt/sysroot is attached to the same file system as /mnt/sysimage . The only exceptions are rpm-ostree systems, where the system root changes based on the deployment. Then, /mnt/sysroot is attached to a subdirectory of /mnt/sysimage . Use /mnt/sysroot for chroot. Select 1 to continue. Once your system is in rescue mode, a prompt appears on VC (virtual console) 1 and VC 2. Use the Ctrl+Alt+F1 key combination to access VC 1 and Ctrl+Alt+F2 to access VC 2: Even if your file system is mounted, the default root partition while in rescue mode is a temporary root partition, not the root partition of the file system used during normal user mode ( multi-user.target or graphical.target ). If you selected to mount your file system and it mounted successfully, you can change the root partition of the rescue mode environment to the root partition of your file system by executing the following command: This is useful if you need to run commands, such as rpm , that require your root partition to be mounted as / . To exit the chroot environment, type exit to return to the prompt. If you selected 3 , you can still try to mount a partition or LVM2 logical volume manually inside rescue mode by creating a directory, such as /directory/ , and typing the following command: In the above command, /directory/ is the directory that you created and /dev/mapper/VolGroup00-LogVol02 is the LVM2 logical volume you want to mount. If the partition is a different type than XFS, replace the xfs string with the correct type (such as ext4). If you do not know the names of all physical partitions, use the following command to list them: If you do not know the names of all LVM2 physical volumes, volume groups, or logical volumes, use the pvdisplay , vgdisplay or lvdisplay commands. B.10.2. 
Using an SOS report in rescue mode The sosreport command-line utility collects configuration and diagnostic information, such as the running kernel version, loaded modules, and system and service configuration files from the system. The utility output is stored in a tar archive in the /var/tmp/ directory. The sosreport utility is useful for analyzing system errors and troubleshooting. Use this procedure to capture an sosreport output in rescue mode. Prerequisites You have booted into rescue mode. You have mounted the installed system / (root) partition in read-write mode. You have contacted Red Hat Support about your case and received a case number. Procedure Change the root directory to the /mnt/sysroot/ directory: Execute sosreport to generate an archive with system configuration and diagnostic information: sosreport prompts you to enter your name and the case number you received from Red Hat Support. Use only letters and numbers because adding any of the following characters or spaces could render the report unusable: # % & { } \ < > > * ? / USD ~ ' " : @ + ` | = Optional: If you want to transfer the generated archive to a new location using the network, it is necessary to have a network interface configured. In this scenario, use the dynamic IP addressing as no other steps required. However, when using static addressing, enter the following command to assign an IP address (for example 10.13.153.64/23) to a network interface, for example dev eth0: Exit the chroot environment: Store the generated archive in a new location, from where it can be easily accessible: For transferring the archive through the network, use the scp utility: Additional resources What is an sosreport and how to create one in Red Hat Enterprise Linux? (Red Hat Knowledgebase) How to generate sosreport from the rescue environment (Red Hat Knowledgebase) How do I make sosreport write to an alternative location? (Red Hat Knowledgebase) Sosreport fails. What data should I provide in its place? (Red Hat Knowledgebase) B.10.3. Reinstalling the GRUB boot loader In some scenarios, the GRUB boot loader is mistakenly deleted, corrupted, or replaced by other operating systems. In that case, reinstall GRUB on the master boot record (MBR) on AMD64 and Intel 64 systems with BIOS. Prerequisites You have booted into rescue mode. You have mounted the installed system / (root) partition in read-write mode. You have mounted the /boot mount point in read-write mode. Procedure Change the root partition: Reinstall the GRUB boot loader, where the install_device block device was installed: Important Running the grub2-install command could lead to the machine being unbootable if all the following conditions apply: The system is an AMD64 or Intel 64 with Extensible Firmware Interface (EFI). Secure Boot is enabled. After you run the grub2-install command, you cannot boot the AMD64 or Intel 64 systems that have Extensible Firmware Interface (EFI) and Secure Boot enabled. This issue occurs because the grub2-install command installs an unsigned GRUB image that boots directly instead of using the shim application. When the system boots, the shim application validates the image signature, which when not found fails to boot the system. Reboot the system. B.10.4. Using yum to add or remove a driver Missing or malfunctioning drivers cause problems when booting the system. Rescue mode provides an environment in which you can add or remove a driver even when the system fails to boot. 
Wherever possible, use the yum package manager to remove malfunctioning drivers or to add updated or missing drivers. Important When you install a driver from a driver disc, the driver disc updates all initramfs images on the system to use this driver. If a problem with a driver prevents a system from booting, you cannot rely on booting the system from another initramfs image. B.10.4.1. Adding a driver using yum Use this procedure to add a driver. Prerequisites You have booted into rescue mode. You have mounted the installed system in read-write mode. Procedure Make the RPM package that contains the driver available. For example, mount a CD or USB flash drive and copy the RPM package to a location of your choice under /mnt/sysroot/ , for example: /mnt/sysroot/root/drivers/ . Change the root directory to /mnt/sysroot/ : Use the yum install command to install the driver package. For example, run the following command to install the xorg-x11-drv-wacom driver package from /root/drivers/ : Note The /root/drivers/ directory in this chroot environment is the /mnt/sysroot/root/drivers/ directory in the original rescue environment. Exit the chroot environment: B.10.4.2. Removing a driver using yum Use this procedure to remove a driver. Prerequisites You have booted into rescue mode. You have mounted the installed system in read-write mode. Procedure Change the root directory to the /mnt/sysroot/ directory: Use the yum remove command to remove the driver package. For example, to remove the xorg-x11-drv-wacom driver package, run: Exit the chroot environment: If you cannot remove a malfunctioning driver for some reason, you can instead blocklist the driver so that it does not load at boot time. When you have finished adding and removing drivers, reboot the system. B.11. ip= boot option returns an error Using the ip= boot option format ip=[ip address] for example, ip=192.168.1.1 returns the error message Fatal for argument 'ip=[insert ip here]'\n sorry, unknown value [ip address] refusing to continue . In releases of Red Hat Enterprise Linux, the boot option format was: However, in Red Hat Enterprise Linux 8, the boot option format is: To resolve the issue, use the format: ip=ip::gateway:netmask:hostname:interface:none where: ip specifies the client ip address. You can specify IPv6 addresses in square brackets, for example, [2001:DB8::1] . gateway is the default gateway. IPv6 addresses are also accepted. netmask is the netmask to be used. This can be either a full netmask, for example, 255.255.255.0, or a prefix, for example, 64 . hostname is the host name of the client system. This parameter is optional. Additional resources Network boot options B.12. Cannot boot into the graphical installation on iLO or iDRAC devices The graphical installer for a remote ISO installation on iLO or iDRAC devices may not be available due to a slow internet connection. To proceed with the installation in this case, you can choose one of the following methods: Avoid the timeout. To do so: Press the Tab key in case of BIOS usage, or the e key in case of UEFI usage when booting from an installation media. That will allow you to modify the kernel command line arguments. To proceed with the installation, append the rd.live.ram=1 and press Enter in case of BIOS usage, or Ctrl+x in case of UEFI usage. This might take longer to load the installation program. Another option to extend the loading time for the graphical installer is to set the inst.xtimeout kernel argument in seconds. You can install the system in text mode. 
For more details, see Installing RHEL8 in text mode . In the remote management console, such as iLO or iDRAC, instead of a local media source, use the direct URL to the installation ISO file from the Download center on the Red Hat Customer Portal. You must be logged in to access this section. B.13. Rootfs image is not initramfs If you get the following message on the console during booting the installer, the transfer of the installer initrd.img might have had errors: To resolve this issue, download initrd again or run the sha256sum with initrd.img and compare it with the checksum stored in the .treeinfo file on the installation medium, for example, To view the checksum in .treeinfo : Despite having correct initrd.img , if you get the following kernel messages during booting the installer, often a boot parameter is missing or mis-spelled, and the installer could not load stage2 , typically referred to by the inst.repo= parameter, providing the full installer initial ramdisk for its in-memory root file system: To resolve this issue, check if the installation source specified is correct on the kernel command line ( inst.repo= ) or in the kickstart file the network configuration is specified on the kernel command line (if the installation source is specified as network) the network installation source is accessible from another system
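A short sketch of the initrd.img integrity check described in the "Rootfs image is not initramfs" section; the mount point /mnt/dvd is an assumption for wherever the installation medium is available on your system.
sha256sum /mnt/dvd/images/pxeboot/initrd.img
grep initrd.img /mnt/dvd/.treeinfo    # compare against the checksum recorded on the installation medium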
[ "curl --output directory-path/filename.iso 'new_copied_link_location' --continue-at -", "sha256sum rhel-x.x-x86_64-dvd.iso `85a...46c rhel-x.x-x86_64-dvd.iso`", "curl --output _rhel-x.x-x86_64-dvd.iso 'https://access.cdn.redhat.com//content/origin/files/sha256/85/85a...46c/rhel-x.x-x86_64-dvd.iso?_auth =141...963' --continue-at -", "grubby --default-kernel /boot/vmlinuz-4.18.0-94.el8.x86_64", "grubby --remove-args=\"rhgb\" --update-kernel /boot/vmlinuz-4.18.0-94.el8.x86_64", "df -h", "Filesystem Size Used Avail Use% Mounted on devtmpfs 396M 0 396M 0% /dev tmpfs 411M 0 411M 0% /dev/shm tmpfs 411M 6.7M 405M 2% /run tmpfs 411M 0 411M 0% /sys/fs/cgroup /dev/mapper/rhel-root 17G 4.1G 13G 25% / /dev/sda1 1014M 173M 842M 17% /boot tmpfs 83M 20K 83M 1% /run/user/42 tmpfs 83M 84K 83M 1% /run/user/1000 /dev/dm-4 90G 90G 0 100% /home", "free -m", "mem= xx M", "free -m", "grubby --update-kernel=ALL --args=\"mem= xx M\"", "Enable=true", "systemctl restart gdm.service", "X :1 -query address", "Xnest :1 -query address", "inst.rescue inst.dd=driver_name", "inst.rescue modprobe.blacklist=driver_name", "The rescue environment will now attempt to find your Linux installation and mount it under the directory: /mnt/sysroot/. You can then make any changes required to your system. Choose 1 to proceed with this step. You can choose to mount your file systems read-only instead of read-write by choosing 2 . If for some reason this process does not work choose 3 to skip directly to a shell. 1) Continue 2) Read-only mount 3) Skip to shell 4) Quit (Reboot)", "sh-4.2#", "sh-4.2# chroot /mnt/sysroot", "sh-4.2# mount -t xfs /dev/mapper/VolGroup00-LogVol02 /directory", "sh-4.2# fdisk -l", "sh-4.2# chroot /mnt/sysroot/", "sh-4.2# sosreport", "bash-4.2# ip addr add 10.13.153.64/23 dev eth0", "sh-4.2# exit", "sh-4.2# cp /mnt/sysroot/var/tmp/sosreport new_location", "sh-4.2# scp /mnt/sysroot/var/tmp/sosreport username@hostname:sosreport", "sh-4.2# chroot /mnt/sysroot/", "sh-4.2# /sbin/grub2-install install_device", "sh-4.2# chroot /mnt/sysroot/", "sh-4.2# yum install /root/drivers/xorg-x11-drv-wacom-0.23.0-6.el7.x86_64.rpm", "sh-4.2# exit", "sh-4.2# chroot /mnt/sysroot/", "sh-4.2# yum remove xorg-x11-drv-wacom", "sh-4.2# exit", "ip=192.168.1.15 netmask=255.255.255.0 gateway=192.168.1.254 nameserver=192.168.1.250 hostname=myhost1", "ip=192.168.1.15::192.168.1.254:255.255.255.0:myhost1::none: nameserver=192.168.1.250", "inst.xtimeout= N", "[ ...] rootfs image is not initramfs", "sha256sum dvd/images/pxeboot/initrd.img fdb1a70321c06e25a1ed6bf3d8779371b768d5972078eb72b2c78c925067b5d8 dvd/images/pxeboot/initrd.img", "grep sha256 dvd/.treeinfo images/efiboot.img = sha256: d357d5063b96226d643c41c9025529554a422acb43a4394e4ebcaa779cc7a917 images/install.img = sha256: 8c0323572f7fc04e34dd81c97d008a2ddfc2cfc525aef8c31459e21bf3397514 images/pxeboot/initrd.img = sha256: fdb1a70321c06e25a1ed6bf3d8779371b768d5972078eb72b2c78c925067b5d8 images/pxeboot/vmlinuz = sha256: b9510ea4212220e85351cbb7f2ebc2b1b0804a6d40ccb93307c165e16d1095db", "[ ...] No filesystem could mount root, tried: [ ...] Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(1,0) [ ...] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.14.0-55.el9.s390x #1 [ ...] [ ...] Call Trace: [ ...] ([<...>] show_trace+0x.../0x...) [ ...] [<...>] show_stack+0x.../0x [ ...] [<...>] panic+0x.../0x [ ...] [<...>] mount_block_root+0x.../0x [ ...] [<...>] prepare_namespace+0x.../0x [ ...] [<...>] kernel_init_freeable+0x.../0x [ ...] [<...>] kernel_init+0x.../0x [ ...] 
[<...>] kernel_thread_starter+0x.../0x [ ...] [<...>] kernel_thread_starter+0x.../0x..." ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/interactively_installing_rhel_over_the_network/troubleshooting-after-installation_rhel-installer
25.15. Scanning iSCSI Interconnects
25.15. Scanning iSCSI Interconnects For iSCSI, if the targets send an iSCSI async event indicating new storage is added, then the scan is done automatically. However, if the targets do not send an iSCSI async event, you need to manually scan them using the iscsiadm utility. Before doing so, however, you need to first retrieve the proper --targetname and the --portal values. If your device model supports only a single logical unit and portal per target, use iscsiadm to issue a sendtargets command to the host, as in: The output will appear in the following format: Example 25.11. Using iscsiadm to issue a sendtargets Command For example, on a target with a proper_target_name of iqn.1992-08.com.netapp:sn.33615311 and a target_IP:port of 10.15.85.19:3260 , the output may appear as: In this example, the target has two portals, each using target_ip:port s of 10.15.84.19:3260 and 10.15.85.19:3260 . To see which iface configuration will be used for each session, add the -P 1 option. This option will print also session information in tree format, as in: Example 25.12. View iface Configuration For example, with iscsiadm -m discovery -t sendtargets -p 10.15.85.19:3260 -P 1 , the output may appear as: This means that the target iqn.1992-08.com.netapp:sn.33615311 will use iface2 as its iface configuration. With some device models a single target may have multiple logical units and portals. In this case, issue a sendtargets command to the host first to find new portals on the target. Then, rescan the existing sessions using: You can also rescan a specific session by specifying the session's SID value, as in: If your device supports multiple targets, you will need to issue a sendtargets command to the hosts to find new portals for each target. Rescan existing sessions to discover new logical units on existing sessions using the --rescan option. Important The sendtargets command used to retrieve --targetname and --portal values overwrites the contents of the /var/lib/iscsi/nodes database. This database will then be repopulated using the settings in /etc/iscsi/iscsid.conf . However, this will not occur if a session is currently logged in and in use. To safely add new targets/portals or delete old ones, use the -o new or -o delete options, respectively. For example, to add new targets/portals without overwriting /var/lib/iscsi/nodes , use the following command: To delete /var/lib/iscsi/nodes entries that the target did not display during discovery, use: You can also perform both tasks simultaneously, as in: The sendtargets command will yield the following output: Example 25.13. Output of the sendtargets Command For example, given a device with a single target, logical unit, and portal, with equallogic-iscsi1 as your target_name , the output should appear similar to the following: Note that proper_target_name and ip:port,target_portal_group_tag are identical to the values of the same name in Section 25.7.1, "iSCSI API" . At this point, you now have the proper --targetname and --portal values needed to manually scan for iSCSI devices. To do so, run the following command: Example 25.14. Full iscsiadm Command Using our example (where proper_target_name is equallogic-iscsi1 ), the full command would be: [7] For information on how to retrieve a session's SID value, refer to Section 25.7.1, "iSCSI API" . [8] This is a single command split into multiple lines, to accommodate printed and PDF versions of this document. 
All concatenated lines, each preceded by the backslash (\), should be treated as one command with the backslashes removed.
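Taken together, the discovery, rescan, and login steps described above can be sketched as the following sequence; the portal address and target name are the example values used earlier in this section, not values from a real system.
iscsiadm -m discovery -t sendtargets -p 10.16.41.155:3260 -o new   # find portals without overwriting existing node records
iscsiadm -m session --rescan                                       # rescan existing sessions for new logical units
iscsiadm --mode node \
  --targetname iqn.2001-05.com.equallogic:6-8a0900-ac3fe0101-63aff113e344a4a2-dl585-03-1 \
  --portal 10.16.41.155:3260,0 --login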
[ "iscsiadm -m discovery -t sendtargets -p target_IP:port [5]", "target_IP:port , target_portal_group_tag proper_target_name", "10.15.84.19:3260,2 iqn.1992-08.com.netapp:sn.33615311 10.15.85.19:3260,3 iqn.1992-08.com.netapp:sn.33615311", "Target: proper_target_name Portal: target_IP:port , target_portal_group_tag Iface Name: iface_name", "Target: iqn.1992-08.com.netapp:sn.33615311 Portal: 10.15.84.19:3260,2 Iface Name: iface2 Portal: 10.15.85.19:3260,3 Iface Name: iface2", "iscsiadm -m session --rescan", "iscsiadm -m session -r SID --rescan [7]", "iscsiadm -m discovery -t st -p target_IP -o new", "iscsiadm -m discovery -t st -p target_IP -o delete", "iscsiadm -m discovery -t st -p target_IP -o delete -o new", "ip:port,target_portal_group_tag proper_target_name", "10.16.41.155:3260,0 iqn.2001-05.com.equallogic:6-8a0900-ac3fe0101-63aff113e344a4a2-dl585-03-1", "iscsiadm --mode node --targetname proper_target_name --portal ip:port,target_portal_group_tag \\ --login [8]", "iscsiadm --mode node --targetname \\ iqn.2001-05.com.equallogic:6-8a0900-ac3fe0101-63aff113e344a4a2-dl585-03-1 \\ --portal 10.16.41.155:3260,0 --login [8]" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/storage_administration_guide/iscsi-scanning-interconnects
Chapter 352. Twitter Components
Chapter 352. Twitter Components Available as of Camel version 2.10 The camel-twitter consists of 4 components: Twitter Direct Message Twitter Search Twitter Streaming Twitter Timeline The Twitter components enable the most useful features of the Twitter API by encapsulating Twitter4J . It allows direct, polling, or event-driven consumption of timelines, users, trends, and direct messages. Also, it supports producing messages as status updates or direct messages. Twitter now requires the use of OAuth for all client application authentication. In order to use camel-twitter with your account, you'll need to create a new application within Twitter at https://dev.twitter.com/apps/new and grant the application access to your account. Finally, generate your access token and secret. Maven users will need to add the following dependency to their pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-twitter</artifactId> <version>USD{camel-version}</version> </dependency> 352.1. Consumer endpoints Rather than the endpoints returning a List through one single route exchange, camel-twitter creates one route exchange per returned object. As an example, if "timeline/home" results in five statuses, the route will be executed five times (one for each Status). Endpoint Context Body Type Notice twitter-directmessage direct, polling twitter4j.DirectMessage twitter-search direct, polling twitter4j.Status twitter-streaming event, polling twitter4j.Status twitter-timeline direct, polling twitter4j.Status 352.2. Producer endpoints Endpoint Body Type Notice twitter-directmessage String twitter-search List<twitter4j.Status> twitter-timeline String Only 'user' timelineType is supported for producer 352.3. Message headers Name Description CamelTwitterKeywords This header is used by the search producer to change the search key words dynamically. CamelTwitterSearchLanguage Camel 2.11.0: This header can override the option of lang which set the search language for the search endpoint dynamically CamelTwitterCount Camel 2.11.0 This header can override the option of count which sets the max twitters that will be returned. CamelTwitterNumberOfPages Camel 2.11.0 This header can override the option of numberOfPages which sets how many pages we want to twitter returns. 352.4. Message body All message bodies utilize objects provided by the Twitter4J API. 352.5. Use cases Note API Rate Limits: Twitter REST APIs encapsulated by Twitter4J are subjected to API Rate Limiting . You can find the per method limits in the API Rate Limits documentation. Note that endpoints/resources not listed in that page are default to 15 requests per allotted user per window. 352.5.1. To create a status update within your Twitter profile, send this producer a String body: from("direct:foo") .to("twitter-timeline://user?consumerKey=[s]&consumerSecret=[s]&accessToken=[s]&accessTokenSecret=[s]); 352.5.2. To poll, every 60 sec., all statuses on your home timeline: from("twitter-timeline://home?type=polling&delay=60&consumerKey=[s]&consumerSecret=[s]&accessToken=[s]&accessTokenSecret=[s]") .to("bean:blah"); 352.5.3. To search for all statuses with the keyword 'camel' only once: from("twitter-search://foo?type=polling&keywords=camel&consumerKey=[s]&consumerSecret=[s]&accessToken=[s]&accessTokenSecret=[s]") .to("bean:blah"); 352.5.4. 
Searching using a producer with static keywords: from("direct:foo") .to("twitter-search://foo?keywords=camel&consumerKey=[s]&consumerSecret=[s]&accessToken=[s]&accessTokenSecret=[s]"); 352.5.5. Searching using a producer with dynamic keywords from a header: The bar header carries the keywords to search for, so its value is copied into the CamelTwitterKeywords header: from("direct:foo") .setHeader("CamelTwitterKeywords", header("bar")) .to("twitter-search://foo?consumerKey=[s]&consumerSecret=[s]&accessToken=[s]&accessTokenSecret=[s]"); 352.6. Example See also the Twitter Websocket Example and the self-contained route sketched below. 352.7. See Also Configuring Camel Component Endpoint Getting Started Twitter Websocket Example
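The route fragments in this chapter assume an existing RouteBuilder. The sketch below shows one way to run a polling search consumer such as the one in section 352.5.3 as a standalone Java program. It is an illustration only: the class name and the five-minute run time are arbitrary, and [s] stands for your own OAuth values, as in the snippets above.

import org.apache.camel.CamelContext;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;

public class TwitterSearchExample {
    public static void main(String[] args) throws Exception {
        CamelContext context = new DefaultCamelContext();
        context.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                // Poll the Twitter search API every 60 seconds for the keyword 'camel'.
                // camel-twitter delivers each matching twitter4j.Status as its own exchange.
                from("twitter-search://foo"
                        + "?type=polling&delay=60&keywords=camel"
                        + "&consumerKey=[s]&consumerSecret=[s]"
                        + "&accessToken=[s]&accessTokenSecret=[s]")
                    .log("Tweet from ${body.user.screenName}: ${body.text}");
            }
        });
        context.start();
        Thread.sleep(5 * 60 * 1000);   // let the route poll for five minutes
        context.stop();
    }
}

The same RouteBuilder can equally be added to a Spring or OSGi Blueprint Camel context; only the route definition matters, and the surrounding bootstrap code is incidental.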
[ "<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-twitter</artifactId> <version>USD{camel-version}</version> </dependency>", "from(\"direct:foo\") .to(\"twitter-timeline://user?consumerKey=[s]&consumerSecret=[s]&accessToken=[s]&accessTokenSecret=[s]);", "from(\"twitter-timeline://home?type=polling&delay=60&consumerKey=[s]&consumerSecret=[s]&accessToken=[s]&accessTokenSecret=[s]\") .to(\"bean:blah\");", "from(\"twitter-search://foo?type=polling&keywords=camel&consumerKey=[s]&consumerSecret=[s]&accessToken=[s]&accessTokenSecret=[s]\") .to(\"bean:blah\");", "from(\"direct:foo\") .to(\"twitter-search://foo?keywords=camel&consumerKey=[s]&consumerSecret=[s]&accessToken=[s]&accessTokenSecret=[s]\");", "from(\"direct:foo\") .setHeader(\"CamelTwitterKeywords\", header(\"bar\")) .to(\"twitter-search://foo?consumerKey=[s]&consumerSecret=[s]&accessToken=[s]&accessTokenSecret=[s]\");" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/twitter_components
Chapter 9. nova
Chapter 9. nova The following chapter contains information about the configuration options in the nova service. 9.1. nova.conf This section contains options for the /etc/nova/nova.conf file. 9.1.1. DEFAULT The following table outlines the options available under the [DEFAULT] group in the /etc/nova/nova.conf file. . Configuration option = Default value Type Description allow_resize_to_same_host = False boolean value Allow destination machine to match source for resize. Useful when testing in single-host environments. By default it is not allowed to resize to the same host. Setting this option to true will add the same host to the destination options. Also set to true if you allow the ServerGroupAffinityFilter and need to resize. arq_binding_timeout = 300 integer value Timeout for Accelerator Request (ARQ) bind event message arrival. Number of seconds to wait for ARQ bind resolution event to arrive. The event indicates that every ARQ for an instance has either bound successfully or failed to bind. If it does not arrive, instance bringup is aborted with an exception. backdoor_port = None string value Enable eventlet backdoor. Acceptable values are 0, <port>, and <start>:<end>, where 0 results in listening on a random tcp port number; <port> results in listening on the specified port number (and not enabling backdoor if that port is in use); and <start>:<end> results in listening on the smallest unused port number within the specified range of port numbers. The chosen port is displayed in the service's log file. backdoor_socket = None string value Enable eventlet backdoor, using the provided path as a unix socket that can receive connections. This option is mutually exclusive with backdoor_port in that only one should be provided. If both are provided then the existence of this option overrides the usage of that option. Inside the path {pid} will be replaced with the PID of the current process. block_device_allocate_retries = 60 integer value The number of times to check for a volume to be "available" before attaching it during server create. When creating a server with block device mappings where source_type is one of blank , image or snapshot and the destination_type is volume , the nova-compute service will create a volume and then attach it to the server. Before the volume can be attached, it must be in status "available". This option controls how many times to check for the created volume to be "available" before it is attached. If the operation times out, the volume will be deleted if the block device mapping delete_on_termination value is True. It is recommended to configure the image cache in the block storage service to speed up this operation. See https://docs.openstack.org/cinder/latest/admin/blockstorage-image-volume-cache.html for details. Possible values: 60 (default) If value is 0, then one attempt is made. For any value > 0, total attempts are (value + 1) Related options: block_device_allocate_retries_interval - controls the interval between checks block_device_allocate_retries_interval = 3 integer value Interval (in seconds) between block device allocation retries on failures. This option allows the user to specify the time interval between consecutive retries. The block_device_allocate_retries option specifies the maximum number of retries. Possible values: 0: Disables the option. Any positive integer in seconds enables the option. Related options: block_device_allocate_retries - controls the number of retries cert = self.pem string value Path to SSL certificate file. 
Related options: key ssl_only [console] ssl_ciphers [console] ssl_minimum_version compute_driver = None string value Defines which driver to use for controlling virtualization. Possible values: libvirt.LibvirtDriver fake.FakeDriver ironic.IronicDriver vmwareapi.VMwareVCDriver hyperv.HyperVDriver powervm.PowerVMDriver zvm.ZVMDriver compute_monitors = [] list value A comma-separated list of monitors that can be used for getting compute metrics. You can use the alias/name from the setuptools entry points for nova.compute.monitors.* namespaces. If no namespace is supplied, the "cpu." namespace is assumed for backwards-compatibility. Note Only one monitor per namespace (For example: cpu) can be loaded at a time. Possible values: An empty list will disable the feature (Default). An example value that would enable the CPU bandwidth monitor that uses the virt driver variant compute_monitors = cpu.virt_driver config_drive_format = iso9660 string value Config drive format. Config drive format that will contain metadata attached to the instance when it boots. Related options: This option is meaningful when one of the following alternatives occur: force_config_drive option set to true the REST API call to create the instance contains an enable flag for config drive option the image used to create the instance requires a config drive, this is defined by img_config_drive property for that image. A compute node running Hyper-V hypervisor can be configured to attach config drive as a CD drive. To attach the config drive as a CD drive, set the [hyperv] config_drive_cdrom option to true. Deprecated since: 19.0.0 Reason: This option was originally added as a workaround for bug in libvirt, #1246201, that was resolved in libvirt v1.2.17. As a result, this option is no longer necessary or useful. conn_pool_min_size = 2 integer value The pool size limit for connections expiration policy conn_pool_ttl = 1200 integer value The time-to-live in sec of idle connections in the pool console_host = <based on operating system> string value Console proxy host to be used to connect to instances on this host. It is the publicly visible name for the console host. Possible values: Current hostname (default) or any string representing hostname. control_exchange = nova string value The default exchange under which topics are scoped. May be overridden by an exchange name specified in the transport_url option. cpu_allocation_ratio = None floating point value Virtual CPU to physical CPU allocation ratio. This option is used to influence the hosts selected by the Placement API by configuring the allocation ratio for VCPU inventory. note:: note:: Possible values: Any valid positive integer or float value Related options: initial_cpu_allocation_ratio daemon = False boolean value Run as a background process. debug = False boolean value If set to true, the logging level will be set to DEBUG instead of the default INFO level. default_access_ip_network_name = None string value Name of the network to be used to set access IPs for instances. If there are multiple IPs to choose from, an arbitrary one will be chosen. Possible values: None (default) Any string representing network name. default_availability_zone = nova string value Default availability zone for compute services. This option determines the default availability zone for nova-compute services, which will be used if the service(s) do not belong to aggregates with availability zone metadata. Possible values: Any string representing an existing availability zone name. 
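As an illustration of how the allocation-retry and allocation-ratio options described above fit together in /etc/nova/nova.conf (the values are examples only, not recommendations from this guide):

[DEFAULT]
# Report 4 vCPUs of VCPU inventory to Placement per physical CPU (over-commit).
cpu_allocation_ratio = 4.0
# When booting from a volume created by nova-compute, check up to 61 times
# (value + 1), 3 seconds apart, for the volume to reach the "available" state.
block_device_allocate_retries = 60
block_device_allocate_retries_interval = 3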
default_ephemeral_format = None string value The default format an ephemeral_volume will be formatted with on creation. Possible values: ext2 ext3 ext4 xfs ntfs (only for Windows guests) default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] list value List of package logging levels in logger=LEVEL pairs. This option is ignored if log_config_append is set. default_schedule_zone = None string value Default availability zone for instances. This option determines the default availability zone for instances, which will be used when a user does not specify one when creating an instance. The instance(s) will be bound to this availability zone for their lifetime. Possible values: Any string representing an existing availability zone name. None, which means that the instance can move from one availability zone to another during its lifetime if it is moved from one compute node to another. Related options: [cinder]/cross_az_attach disk_allocation_ratio = None floating point value Virtual disk to physical disk allocation ratio. This option is used to influence the hosts selected by the Placement API by configuring the allocation ratio for DISK_GB inventory. When configured, a ratio greater than 1.0 will result in over-subscription of the available physical disk, which can be useful for more efficiently packing instances created with images that do not use the entire virtual disk, such as sparse or compressed images. It can be set to a value between 0.0 and 1.0 in order to preserve a percentage of the disk for uses other than instances. note:: note:: Possible values: Any valid positive integer or float value Related options: initial_disk_allocation_ratio enable_new_services = True boolean value Enable new nova-compute services on this host automatically. When a new nova-compute service starts up, it gets registered in the database as an enabled service. Sometimes it can be useful to register new compute services in disabled state and then enabled them at a later point in time. This option only sets this behavior for nova-compute services, it does not auto-disable other services like nova-conductor, nova-scheduler, or nova-osapi_compute. Possible values: True : Each new compute service is enabled as soon as it registers itself. False : Compute services must be enabled via an os-services REST API call or with the CLI with nova service-enable <hostname> <binary> , otherwise they are not ready to use. enabled_apis = ['osapi_compute', 'metadata'] list value List of APIs to be enabled by default. enabled_ssl_apis = [] list value List of APIs with enabled SSL. Nova provides SSL support for the API servers. enabled_ssl_apis option allows configuring the SSL support. executor_thread_pool_size = 64 integer value Size of executor thread pool when executor is threading or eventlet. fatal_deprecations = False boolean value Enables or disables fatal status of deprecations. flat_injected = False boolean value This option determines whether the network setup information is injected into the VM before it is booted. 
While it was originally designed to be used only by nova-network, it is also used by the vmware virt driver to control whether network information is injected into a VM. The libvirt virt driver also uses it when we use config_drive to configure network to control whether network information is injected into a VM. force_config_drive = False boolean value Force injection to take place on a config drive When this option is set to true config drive functionality will be forced enabled by default, otherwise users can still enable config drives via the REST API or image metadata properties. Launched instances are not affected by this option. Possible values: True: Force to use of config drive regardless the user's input in the REST API call. False: Do not force use of config drive. Config drives can still be enabled via the REST API or image metadata properties. Related options: Use the mkisofs_cmd flag to set the path where you install the genisoimage program. If genisoimage is in same path as the nova-compute service, you do not need to set this flag. To use a config drive with Hyper-V, you must set the mkisofs_cmd value to the full path to an mkisofs.exe installation. Additionally, you must set the qemu_img_cmd value in the hyperv configuration section to the full path to an qemu-img command installation. force_raw_images = True boolean value Force conversion of backing images to raw format. Possible values: True: Backing image files will be converted to raw image format False: Backing image files will not be converted Related options: compute_driver : Only the libvirt driver uses this option. [libvirt]/images_type : If images_type is rbd, setting this option to False is not allowed. See the bug https://bugs.launchpad.net/nova/+bug/1816686 for more details. graceful_shutdown_timeout = 60 integer value Specify a timeout after which a gracefully shutdown server will exit. Zero value means endless wait. heal_instance_info_cache_interval = 60 integer value Interval between instance network information cache updates. Number of seconds after which each compute node runs the task of querying Neutron for all of its instances networking information, then updates the Nova db with that information. Nova will never update it's cache if this option is set to 0. If we don't update the cache, the metadata service and nova-api endpoints will be proxying incorrect network data about the instance. So, it is not recommended to set this option to 0. Possible values: Any positive integer in seconds. Any value ⇐0 will disable the sync. This is not recommended. host = <based on operating system> string value Hostname, FQDN or IP address of this host. Used as: the oslo.messaging queue name for nova-compute worker we use this value for the binding_host sent to neutron. This means if you use a neutron agent, it should have the same value for host. cinder host attachment information Must be valid within AMQP key. Possible values: String with hostname, FQDN or IP address. Default is hostname of this host. initial_cpu_allocation_ratio = 16.0 floating point value Initial virtual CPU to physical CPU allocation ratio. This is only used when initially creating the computes_nodes table record for a given nova-compute service. See https://docs.openstack.org/nova/latest/admin/configuration/schedulers.html for more details and usage scenarios. Related options: cpu_allocation_ratio initial_disk_allocation_ratio = 1.0 floating point value Initial virtual disk to physical disk allocation ratio. 
This is only used when initially creating the computes_nodes table record for a given nova-compute service. See https://docs.openstack.org/nova/latest/admin/configuration/schedulers.html for more details and usage scenarios. Related options: disk_allocation_ratio initial_ram_allocation_ratio = 1.5 floating point value Initial virtual RAM to physical RAM allocation ratio. This is only used when initially creating the computes_nodes table record for a given nova-compute service. See https://docs.openstack.org/nova/latest/admin/configuration/schedulers.html for more details and usage scenarios. Related options: ram_allocation_ratio injected_network_template = USDpybasedir/nova/virt/interfaces.template string value Path to /etc/network/interfaces template. The path to a template file for the /etc/network/interfaces -style file, which will be populated by nova and subsequently used by cloudinit. This provides a method to configure network connectivity in environments without a DHCP server. The template will be rendered using Jinja2 template engine, and receive a top-level key called interfaces . This key will contain a list of dictionaries, one for each interface. Refer to the cloudinit documentaion for more information: Possible values: A path to a Jinja2-formatted template for a Debian /etc/network/interfaces file. This applies even if using a non Debian-derived guest. Related options: flat_inject : This must be set to True to ensure nova embeds network configuration information in the metadata provided through the config drive. instance_build_timeout = 0 integer value Maximum time in seconds that an instance can take to build. If this timer expires, instance status will be changed to ERROR. Enabling this option will make sure an instance will not be stuck in BUILD state for a longer period. Possible values: 0: Disables the option (default) Any positive integer in seconds: Enables the option. instance_delete_interval = 300 integer value Interval for retrying failed instance file deletes. This option depends on maximum_instance_delete_attempts . This option specifies how often to retry deletes whereas maximum_instance_delete_attempts specifies the maximum number of retry attempts that can be made. Possible values: 0: Will run at the default periodic interval. Any value < 0: Disables the option. Any positive integer in seconds. Related options: maximum_instance_delete_attempts from instance_cleaning_opts group. `instance_format = [instance: %(uuid)s] ` string value The format for an instance that is passed with the log message. instance_name_template = instance-%08x string value Template string to be used to generate instance names. This template controls the creation of the database name of an instance. This is not the display name you enter when creating an instance (via Horizon or CLI). For a new deployment it is advisable to change the default value (which uses the database autoincrement) to another value which makes use of the attributes of an instance, like instance-%(uuid)s . If you already have instances in your deployment when you change this, your deployment will break. Possible values: A string which either uses the instance database ID (like the default) A string with a list of named database columns, for example %(id)d or %(uuid)s or %(hostname)s . instance_usage_audit = False boolean value This option enables periodic compute.instance.exists notifications. Each compute node must be configured to generate system usage data. 
These notifications are consumed by OpenStack Telemetry service. instance_usage_audit_period = month string value Time period to generate instance usages for. It is possible to define optional offset to given period by appending @ character followed by a number defining offset. Possible values: period, example: hour , day , month or year period with offset, example: month@15 will result in monthly audits starting on 15th day of month. `instance_uuid_format = [instance: %(uuid)s] ` string value The format for an instance UUID that is passed with the log message. instances_path = USDstate_path/instances string value Specifies where instances are stored on the hypervisor's disk. It can point to locally attached storage or a directory on NFS. Possible values: USDstate_path/instances where state_path is a config option that specifies the top-level directory for maintaining nova's state. (default) or Any string representing directory path. Related options: [workarounds]/ensure_libvirt_rbd_instance_dir_cleanup internal_service_availability_zone = internal string value Availability zone for internal services. This option determines the availability zone for the various internal nova services, such as nova-scheduler , nova-conductor , etc. Possible values: Any string representing an existing availability zone name. key = None string value SSL key file (if separate from cert). Related options: cert live_migration_retry_count = 30 integer value Maximum number of 1 second retries in live_migration. It specifies number of retries to iptables when it complains. It happens when an user continuously sends live-migration request to same host leading to concurrent request to iptables. Possible values: Any positive integer representing retry count. log-config-append = None string value The name of a logging configuration file. This file is appended to any existing logging configuration files. For details about logging configuration files, see the Python logging module documentation. Note that when logging configuration files are used then all logging configuration is set in the configuration file and other logging configuration options are ignored (for example, log-date-format). log-date-format = %Y-%m-%d %H:%M:%S string value Defines the format string for %%(asctime)s in log records. Default: %(default)s . This option is ignored if log_config_append is set. log-dir = None string value (Optional) The base directory used for relative log_file paths. This option is ignored if log_config_append is set. log-file = None string value (Optional) Name of log file to send logging output to. If no default is set, logging will go to stderr as defined by use_stderr. This option is ignored if log_config_append is set. log_options = True boolean value Enables or disables logging values of all registered options when starting a service (at DEBUG level). log_rotate_interval = 1 integer value The amount of time before the log files are rotated. This option is ignored unless log_rotation_type is setto "interval". log_rotate_interval_type = days string value Rotation interval type. The time of the last file change (or the time when the service was started) is used when scheduling the rotation. log_rotation_type = none string value Log rotation type. logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s string value Format string to use for log messages with context. 
Used by oslo_log.formatters.ContextFormatter logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d string value Additional data to append to log message when logging level for the message is DEBUG. Used by oslo_log.formatters.ContextFormatter logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s string value Format string to use for log messages when context is undefined. Used by oslo_log.formatters.ContextFormatter logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s string value Prefix each line of exception output with this format. Used by oslo_log.formatters.ContextFormatter logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s string value Defines the format string for %(user_identity)s that is used in logging_context_format_string. Used by oslo_log.formatters.ContextFormatter long_rpc_timeout = 1800 integer value This option allows setting an alternate timeout value for RPC calls that have the potential to take a long time. If set, RPC calls to other services will use this value for the timeout (in seconds) instead of the global rpc_response_timeout value. Operations with RPC calls that utilize this value: live migration scheduling enabling/disabling a compute service image pre-caching snapshot-based / cross-cell resize resize / cold migration volume attach Related options: rpc_response_timeout max_concurrent_builds = 10 integer value Limits the maximum number of instance builds to run concurrently by nova-compute. Compute service can attempt to build an infinite number of instances, if asked to do so. This limit is enforced to avoid building unlimited instance concurrently on a compute node. This value can be set per compute node. Possible Values: 0 : treated as unlimited. Any positive integer representing maximum concurrent builds. max_concurrent_live_migrations = 1 integer value Maximum number of live migrations to run concurrently. This limit is enforced to avoid outbound live migrations overwhelming the host/network and causing failures. It is not recommended that you change this unless you are very sure that doing so is safe and stable in your environment. Possible values: 0 : treated as unlimited. Any positive integer representing maximum number of live migrations to run concurrently. max_concurrent_snapshots = 5 integer value Maximum number of instance snapshot operations to run concurrently. This limit is enforced to prevent snapshots overwhelming the host/network/storage and causing failure. This value can be set per compute node. Possible Values: 0 : treated as unlimited. Any positive integer representing maximum concurrent snapshots. max_local_block_devices = 3 integer value Maximum number of devices that will result in a local image being created on the hypervisor node. A negative number means unlimited. Setting max_local_block_devices to 0 means that any request that attempts to create a local disk will fail. This option is meant to limit the number of local discs (so root local disc that is the result of imageRef being used when creating a server, and any other ephemeral and swap disks). 0 does not mean that images will be automatically converted to volumes and boot instances from volumes - it just means that all requests that attempt to create a local disk will fail. Possible values: 0: Creating a local disk is not allowed. Negative number: Allows unlimited number of local discs. 
Positive number: Allows only these many number of local discs. max_logfile_count = 30 integer value Maximum number of rotated log files. max_logfile_size_mb = 200 integer value Log file maximum size in MB. This option is ignored if "log_rotation_type" is not set to "size". maximum_instance_delete_attempts = 5 integer value The number of times to attempt to reap an instance's files. This option specifies the maximum number of retry attempts that can be made. Possible values: Any positive integer defines how many attempts are made. Related options: [DEFAULT] instance_delete_interval can be used to disable this option. metadata_listen = 0.0.0.0 string value IP address on which the metadata API will listen. The metadata API service listens on this IP address for incoming requests. metadata_listen_port = 8775 port value Port on which the metadata API will listen. The metadata API service listens on this port number for incoming requests. metadata_workers = <based on operating system> integer value Number of workers for metadata service. If not specified the number of available CPUs will be used. The metadata service can be configured to run as multi-process (workers). This overcomes the problem of reduction in throughput when API request concurrency increases. The metadata service will run in the specified number of processes. Possible Values: Any positive integer None (default value) migrate_max_retries = -1 integer value Number of times to retry live-migration before failing. Possible values: If == -1, try until out of hosts (default) If == 0, only try once, no retries Integer greater than 0 mkisofs_cmd = genisoimage string value Name or path of the tool used for ISO image creation. Use the mkisofs_cmd flag to set the path where you install the genisoimage program. If genisoimage is on the system path, you do not need to change the default value. To use a config drive with Hyper-V, you must set the mkisofs_cmd value to the full path to an mkisofs.exe installation. Additionally, you must set the qemu_img_cmd value in the hyperv configuration section to the full path to an qemu-img command installation. Possible values: Name of the ISO image creator program, in case it is in the same directory as the nova-compute service Path to ISO image creator program Related options: This option is meaningful when config drives are enabled. To use config drive with Hyper-V, you must set the qemu_img_cmd value in the hyperv configuration section to the full path to an qemu-img command installation. my_block_storage_ip = USDmy_ip string value The IP address which is used to connect to the block storage network. Possible values: String with valid IP address. Default is IP address of this host. Related options: my_ip - if my_block_storage_ip is not set, then my_ip value is used. my_ip = <based on operating system> string value The IP address which the host is using to connect to the management network. Possible values: String with valid IP address. Default is IPv4 address of this host. Related options: my_block_storage_ip network_allocate_retries = 0 integer value Number of times to retry network allocation. It is required to attempt network allocation retries if the virtual interface plug fails. Possible values: Any positive integer representing retry count. non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] list value Image properties that should not be inherited from the instance when taking a snapshot. 
This option gives an opportunity to select which image-properties should not be inherited by newly created snapshots. note:: cinder_encryption_key_id cinder_encryption_key_deletion_policy img_signature img_signature_hash_method img_signature_key_type img_signature_certificate_uuid Possible values: A comma-separated list whose item is an image property. Usually only the image properties that are only needed by base images can be included here, since the snapshots that are created from the base images don't need them. Default list: cache_in_nova, bittorrent osapi_compute_listen = 0.0.0.0 string value IP address on which the OpenStack API will listen. The OpenStack API service listens on this IP address for incoming requests. osapi_compute_listen_port = 8774 port value Port on which the OpenStack API will listen. The OpenStack API service listens on this port number for incoming requests. `osapi_compute_unique_server_name_scope = ` string value Sets the scope of the check for unique instance names. The default doesn't check for unique names. If a scope for the name check is set, a launch of a new instance or an update of an existing instance with a duplicate name will result in an 'InstanceExists ' error. The uniqueness is case-insensitive. Setting this option can increase the usability for end users as they don't have to distinguish among instances with the same name by their IDs. osapi_compute_workers = None integer value Number of workers for OpenStack API service. The default will be the number of CPUs available. OpenStack API services can be configured to run as multi-process (workers). This overcomes the problem of reduction in throughput when API request concurrency increases. OpenStack API service will run in the specified number of processes. Possible Values: Any positive integer None (default value) password_length = 12 integer value Length of generated instance admin passwords. periodic_enable = True boolean value Enable periodic tasks. If set to true, this option allows services to periodically run tasks on the manager. In case of running multiple schedulers or conductors you may want to run periodic tasks on only one host - in this case disable this option for all hosts but one. periodic_fuzzy_delay = 60 integer value Number of seconds to randomly delay when starting the periodic task scheduler to reduce stampeding. When compute workers are restarted in unison across a cluster, they all end up running the periodic tasks at the same time causing problems for the external services. To mitigate this behavior, periodic_fuzzy_delay option allows you to introduce a random initial delay when starting the periodic task scheduler. Possible Values: Any positive integer (in seconds) 0 : disable the random delay pointer_model = usbtablet string value Generic property to specify the pointer type. Input devices allow interaction with a graphical framebuffer. For example to provide a graphic tablet for absolute cursor movement. If set, either the hw_input_bus or hw_pointer_model image metadata properties will take precedence over this configuration option. Related options: usbtablet must be configured with VNC enabled or SPICE enabled and SPICE agent disabled. When used with libvirt the instance mode should be configured as HVM. preallocate_images = none string value The image preallocation mode to use. Image preallocation allows storage for instance images to be allocated up front when the instance is initially provisioned. 
This ensures immediate feedback is given if enough space isn't available. In addition, it should significantly improve performance on writes to new blocks and may even improve I/O performance to prewritten blocks due to reduced fragmentation. publish_errors = False boolean value Enables or disables publication of error events. pybasedir = /usr/lib/python3.9/site-packages string value The directory where the Nova python modules are installed. This directory is used to store template files for networking and remote console access. It is also the default path for other config options which need to persist Nova internal data. It is very unlikely that you need to change this option from its default value. Possible values: The full path to a directory. Related options: state_path ram_allocation_ratio = None floating point value Virtual RAM to physical RAM allocation ratio. This option is used to influence the hosts selected by the Placement API by configuring the allocation ratio for MEMORY_MB inventory. note:: Possible values: Any valid positive integer or float value Related options: initial_ram_allocation_ratio rate_limit_burst = 0 integer value Maximum number of logged messages per rate_limit_interval. rate_limit_except_level = CRITICAL string value Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG or empty string. Logs with level greater or equal to rate_limit_except_level are not filtered. An empty string means that all levels are filtered. rate_limit_interval = 0 integer value Interval, number of seconds, of log rate limiting. reboot_timeout = 0 integer value Time interval after which an instance is hard rebooted automatically. When doing a soft reboot, it is possible that a guest kernel is completely hung in a way that causes the soft reboot task to not ever finish. Setting this option to a time period in seconds will automatically hard reboot an instance if it has been stuck in a rebooting state longer than N seconds. Possible values: 0: Disables the option (default). Any positive integer in seconds: Enables the option. reclaim_instance_interval = 0 integer value Interval for reclaiming deleted instances. A value greater than 0 will enable SOFT_DELETE of instances. This option decides whether the server to be deleted will be put into the SOFT_DELETED state. If this value is greater than 0, the deleted server will not be deleted immediately, instead it will be put into a queue until it's too old (deleted time greater than the value of reclaim_instance_interval). The server can be recovered from the delete queue by using the restore action. If the deleted server remains longer than the value of reclaim_instance_interval, it will be deleted by a periodic task in the compute service automatically. Note that this option is read from both the API and compute nodes, and must be set globally otherwise servers could be put into a soft deleted state in the API and never actually reclaimed (deleted) on the compute node. note:: When using this option, you should also configure the [cinder] auth options, e.g. auth_type , auth_url , username , etc. Since the reclaim happens in a periodic task, there is no user token to cleanup volumes attached to any SOFT_DELETED servers so nova must be configured with administrator role access to cleanup those resources in cinder. Possible values: Any positive integer(in seconds) greater than 0 will enable this option. Any value ⇐0 will disable the option. 
Related options: [cinder] auth options for cleaning up volumes attached to servers during the reclaim process record = None string value Filename that will be used for storing websocket frames received and sent by a proxy service (like VNC, spice, serial) running on this host. If this is not set, no recording will be done. report_interval = 10 integer value Number of seconds indicating how frequently the state of services on a given hypervisor is reported. Nova needs to know this to determine the overall health of the deployment. Related Options: service_down_time report_interval should be less than service_down_time. If service_down_time is less than report_interval, services will routinely be considered down, because they report in too rarely. rescue_timeout = 0 integer value Interval to wait before un-rescuing an instance stuck in RESCUE. Possible values: 0: Disables the option (default) Any positive integer in seconds: Enables the option. reserved_host_cpus = 0 integer value Number of host CPUs to reserve for host processes. The host resources usage is reported back to the scheduler continuously from nova-compute running on the compute node. This value is used to determine the reserved value reported to placement. This option cannot be set if the [compute] cpu_shared_set or [compute] cpu_dedicated_set config options have been defined. When these options are defined, any host CPUs not included in these values are considered reserved for the host. Possible values: Any positive integer representing number of physical CPUs to reserve for the host. Related options: [compute] cpu_shared_set [compute] cpu_dedicated_set reserved_host_disk_mb = 0 integer value Amount of disk resources in MB to make them always available to host. The disk usage gets reported back to the scheduler from nova-compute running on the compute nodes. To prevent the disk resources from being considered as available, this option can be used to reserve disk space for that host. Possible values: Any positive integer representing amount of disk in MB to reserve for the host. reserved_host_memory_mb = 512 integer value Amount of memory in MB to reserve for the host so that it is always available to host processes. The host resources usage is reported back to the scheduler continuously from nova-compute running on the compute node. To prevent the host memory from being considered as available, this option is used to reserve memory for the host. Possible values: Any positive integer representing amount of memory in MB to reserve for the host. reserved_huge_pages = None dict value Number of huge/large memory pages to reserved per NUMA host cell. Possible values: A list of valid key=value which reflect NUMA node ID, page size (Default unit is KiB) and number of pages to be reserved. For example reserved_huge_pages = node:0,size:2048,count:64 reserved_huge_pages = node:1,size:1GB,count:1 resize_confirm_window = 0 integer value Automatically confirm resizes after N seconds. Resize functionality will save the existing server before resizing. After the resize completes, user is requested to confirm the resize. The user has the opportunity to either confirm or revert all changes. Confirm resize removes the original server and changes server status from resized to active. Setting this option to a time period (in seconds) will automatically confirm the resize if the server is in resized state longer than that time. Possible values: 0: Disables the option (default) Any positive integer in seconds: Enables the option. 
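Taken together, the timer options described above can be combined in /etc/nova/nova.conf along the following lines (example values only; each of these options defaults to 0, which disables the behaviour):

[DEFAULT]
# Hard-reboot guests stuck in a rebooting state for more than 10 minutes.
reboot_timeout = 600
# Un-rescue instances left in the RESCUE state for more than 30 minutes.
rescue_timeout = 1800
# Automatically confirm resizes that stay unconfirmed for one hour.
resize_confirm_window = 3600
# Keep deleted servers in SOFT_DELETED (restorable) for one hour before reclaiming;
# this must be set on both API and compute nodes.
reclaim_instance_interval = 3600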
resize_fs_using_block_device = False boolean value Enable resizing of filesystems via a block device. If enabled, attempt to resize the filesystem by accessing the image over a block device. This is done by the host and may not be necessary if the image contains a recent version of cloud-init. Possible mechanisms require the nbd driver (for qcow and raw), or loop (for raw). resume_guests_state_on_host_boot = False boolean value This option specifies whether to start guests that were running before the host rebooted. It ensures that all of the instances on a Nova compute node resume their state each time the compute node boots or restarts. rootwrap_config = /etc/nova/rootwrap.conf string value Path to the rootwrap configuration file. Goal of the root wrapper is to allow a service-specific unprivileged user to run a number of actions as the root user in the safest manner possible. The configuration file used here must match the one defined in the sudoers entry. rpc_conn_pool_size = 30 integer value Size of RPC connection pool. rpc_ping_enabled = False boolean value Add an endpoint to answer to ping calls. Endpoint is named oslo_rpc_server_ping rpc_response_timeout = 60 integer value Seconds to wait for a response from a call. run_external_periodic_tasks = True boolean value Some periodic tasks can be run in a separate process. Should we run them here? running_deleted_instance_action = reap string value The compute service periodically checks for instances that have been deleted in the database but remain running on the compute node. The above option enables action to be taken when such instances are identified. Related options: running_deleted_instance_poll_interval running_deleted_instance_timeout running_deleted_instance_poll_interval = 1800 integer value Time interval in seconds to wait between runs for the clean up action. If set to 0, above check will be disabled. If "running_deleted_instance _action" is set to "log" or "reap", a value greater than 0 must be set. Possible values: Any positive integer in seconds enables the option. 0: Disables the option. 1800: Default value. Related options: running_deleted_instance_action running_deleted_instance_timeout = 0 integer value Time interval in seconds to wait for the instances that have been marked as deleted in database to be eligible for cleanup. Possible values: Any positive integer in seconds(default is 0). Related options: "running_deleted_instance_action" scheduler_instance_sync_interval = 120 integer value Interval between sending the scheduler a list of current instance UUIDs to verify that its view of instances is in sync with nova. If the CONF option scheduler_tracks_instance_changes is False, the sync calls will not be made. So, changing this option will have no effect. If the out of sync situations are not very common, this interval can be increased to lower the number of RPC messages being sent. Likewise, if sync issues turn out to be a problem, the interval can be lowered to check more frequently. Possible values: 0: Will run at the default periodic interval. Any value < 0: Disables the option. Any positive integer in seconds. Related options: This option has no impact if scheduler_tracks_instance_changes is set to False. service_down_time = 60 integer value Maximum time in seconds since last check-in for up service Each compute node periodically updates their database status based on the specified report interval. 
If the compute node hasn't updated the status for more than service_down_time, then the compute node is considered down. Related Options: report_interval (service_down_time should not be less than report_interval) servicegroup_driver = db string value This option specifies the driver to be used for the servicegroup service. ServiceGroup API in nova enables checking status of a compute node. When a compute worker running the nova-compute daemon starts, it calls the join API to join the compute group. Services like nova scheduler can query the ServiceGroup API to check if a node is alive. Internally, the ServiceGroup client driver automatically updates the compute worker status. There are multiple backend implementations for this service: Database ServiceGroup driver and Memcache ServiceGroup driver. Related Options: service_down_time (maximum time since last check-in for up service) shelved_offload_time = 0 integer value Time before a shelved instance is eligible for removal from a host. By default this option is set to 0 and the shelved instance will be removed from the hypervisor immediately after shelve operation. Otherwise, the instance will be kept for the value of shelved_offload_time(in seconds) so that during the time period the unshelve action will be faster, then the periodic task will remove the instance from hypervisor after shelved_offload_time passes. Possible values: 0: Instance will be immediately offloaded after being shelved. Any value < 0: An instance will never offload. Any positive integer in seconds: The instance will exist for the specified number of seconds before being offloaded. shelved_poll_interval = 3600 integer value Interval for polling shelved instances to offload. The periodic task runs for every shelved_poll_interval number of seconds and checks if there are any shelved instances. If it finds a shelved instance, based on the shelved_offload_time config value it offloads the shelved instances. Check shelved_offload_time config option description for details. Possible values: Any value ⇐ 0: Disables the option. Any positive integer in seconds. Related options: shelved_offload_time shutdown_timeout = 60 integer value Total time to wait in seconds for an instance to perform a clean shutdown. It determines the overall period (in seconds) a VM is allowed to perform a clean shutdown. While performing stop, rescue and shelve, rebuild operations, configuring this option gives the VM a chance to perform a controlled shutdown before the instance is powered off. The default timeout is 60 seconds. A value of 0 (zero) means the guest will be powered off immediately with no opportunity for guest OS clean-up. The timeout value can be overridden on a per image basis by means of os_shutdown_timeout that is an image metadata setting allowing different types of operating systems to specify how much time they need to shut down cleanly. Possible values: A positive integer or 0 (default value is 60). source_is_ipv6 = False boolean value Set to True if source host is addressed with IPv6. ssl_only = False boolean value Disallow non-encrypted connections. Related options: cert key state_path = USDpybasedir string value The top-level directory for maintaining Nova's state. This directory is used to store Nova's internal state. It is used by a variety of other config options which derive from this. In some scenarios (for example migrations) it makes sense to use a storage location which is shared between multiple compute hosts (for example via NFS). 
Unless the option instances_path gets overwritten, this directory can grow very large. Possible values: The full path to a directory. Defaults to value provided in pybasedir . sync_power_state_interval = 600 integer value Interval to sync power states between the database and the hypervisor. The interval that Nova checks the actual virtual machine power state and the power state that Nova has in its database. If a user powers down their VM, Nova updates the API to report the VM has been powered down. Should something turn on the VM unexpectedly, Nova will turn the VM back off to keep the system in the expected state. Possible values: 0: Will run at the default periodic interval. Any value < 0: Disables the option. Any positive integer in seconds. Related options: If handle_virt_lifecycle_events in the workarounds group is false and this option is negative, then instances that get out of sync between the hypervisor and the Nova database will have to be synchronized manually. sync_power_state_pool_size = 1000 integer value Number of greenthreads available for use to sync power states. This option can be used to reduce the number of concurrent requests made to the hypervisor or system with real instance power states for performance reasons, for example, with Ironic. Possible values: Any positive integer representing greenthreads count. syslog-log-facility = LOG_USER string value Syslog facility to receive log lines. This option is ignored if log_config_append is set. tempdir = None string value Explicitly specify the temporary working directory. timeout_nbd = 10 integer value Amount of time, in seconds, to wait for NBD device start up. transport_url = rabbit:// string value The network address and optional user credentials for connecting to the messaging backend, in URL format. The expected format is: driver://[user:pass@]host:port[,[userN:passN@]hostN:portN]/virtual_host?query Example: rabbit://rabbitmq:[email protected]:5672// For full details on the fields in the URL see the documentation of oslo_messaging.TransportURL at https://docs.openstack.org/oslo.messaging/latest/reference/transport.html update_resources_interval = 0 integer value Interval for updating compute resources. This option specifies how often the update_available_resource periodic task should run. A number less than 0 means to disable the task completely. Leaving this at the default of 0 will cause this to run at the default periodic interval. Setting it to any positive value will cause it to run at approximately that number of seconds. Possible values: 0: Will run at the default periodic interval. Any value < 0: Disables the option. Any positive integer in seconds. use-journal = False boolean value Enable journald for logging. If running in a systemd environment you may wish to enable journal support. Doing so will use the journal native protocol which includes structured metadata in addition to log messages.This option is ignored if log_config_append is set. use-json = False boolean value Use JSON formatting for logging. This option is ignored if log_config_append is set. use-syslog = False boolean value Use syslog for logging. Existing syslog format is DEPRECATED and will be changed later to honor RFC5424. This option is ignored if log_config_append is set. use_cow_images = True boolean value Enable use of copy-on-write (cow) images. QEMU/KVM allow the use of qcow2 as backing files. By disabling this, backing files will not be used. use_eventlog = False boolean value Log output to Windows Event Log. 
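A minimal sketch of the service liveness settings discussed above, in /etc/nova/nova.conf (these are the documented defaults, shown together only to make the relationship explicit):

[DEFAULT]
# Each service checks in every 10 seconds ...
report_interval = 10
# ... and is considered down if it has not checked in for 60 seconds.
# service_down_time must stay larger than report_interval, otherwise services
# are routinely flagged as down because they report in too rarely.
service_down_time = 60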
use_rootwrap_daemon = False boolean value Start and use a daemon that can run the commands that need to be run with root privileges. This option is usually enabled on nodes that run nova compute processes. use_stderr = False boolean value Log output to standard error. This option is ignored if log_config_append is set. vcpu_pin_set = None string value Mask of host CPUs that can be used for VCPU resources. The behavior of this option depends on the definition of the [compute] cpu_dedicated_set option and affects the behavior of the [compute] cpu_shared_set option. If [compute] cpu_dedicated_set is defined, defining this option will result in an error. If [compute] cpu_dedicated_set is not defined, this option will be used to determine inventory for VCPU resources and to limit the host CPUs that both pinned and unpinned instances can be scheduled to, overriding the [compute] cpu_shared_set option. Possible values: A comma-separated list of physical CPU numbers that virtual CPUs can be allocated from. Each element should be either a single CPU number, a range of CPU numbers, or a caret followed by a CPU number to be excluded from a range. For example vcpu_pin_set = "4-12,^8,15" Related options: [compute] cpu_dedicated_set [compute] cpu_shared_set Deprecated since: 20.0.0 Reason: This option has been superseded by the ``[compute] cpu_dedicated_set`` and ``[compute] cpu_shared_set`` options, which allow things like the co-existence of pinned and unpinned instances on the same host (for the libvirt driver). vif_plugging_is_fatal = True boolean value Determine if instance should boot or fail on VIF plugging timeout. Nova sends a port update to Neutron after an instance has been scheduled, providing Neutron with the necessary information to finish setup of the port. Once completed, Neutron notifies Nova that it has finished setting up the port, at which point Nova resumes the boot of the instance since network connectivity is now supposed to be present. A timeout will occur if the reply is not received after a given interval. This option determines what Nova does when the VIF plugging timeout event happens. When enabled, the instance will error out. When disabled, the instance will continue to boot on the assumption that the port is ready. Possible values: True: Instances should fail after VIF plugging timeout False: Instances should continue booting after VIF plugging timeout vif_plugging_timeout = 300 integer value Timeout for Neutron VIF plugging event message arrival. Number of seconds to wait for Neutron vif plugging events to arrive before continuing or failing (see vif_plugging_is_fatal ). If you are hitting timeout failures at scale, consider running rootwrap in "daemon mode" in the neutron agent via the [agent]/root_helper_daemon neutron configuration option. Related options: vif_plugging_is_fatal - If vif_plugging_timeout is set to zero and vif_plugging_is_fatal is False, events should not be expected to arrive at all. virt_mkfs = [] multi valued Name of the mkfs commands for ephemeral device. The format is <os_type>=<mkfs command> volume_usage_poll_interval = 0 integer value Interval for gathering volume usages. This option updates the volume usage cache for every volume_usage_poll_interval number of seconds. Possible values: Any positive integer(in seconds) greater than 0 will enable this option. Any value ⇐0 will disable the option. watch-log-file = False boolean value Uses logging handler designed to watch file system. 
When log file is moved or removed this handler will open a new log file with specified path instantaneously. It makes sense only if log_file option is specified and Linux platform is used. This option is ignored if log_config_append is set. web = /usr/share/spice-html5 string value Path to directory with content which will be served by a web server. 9.1.2. api The following table outlines the options available under the [api] group in the /etc/nova/nova.conf file. Table 9.1. api Configuration option = Default value Type Description auth_strategy = keystone string value Determine the strategy to use for authentication. Deprecated since: 21.0.0 Reason: The only non-default choice, ``noauth2``, is for internal development and testing purposes only and should not be used in deployments. This option and its middleware, NoAuthMiddleware[V2_18], will be removed in a future release. compute_link_prefix = None string value This string is prepended to the normal URL that is returned in links to the OpenStack Compute API. If it is empty (the default), the URLs are returned unchanged. Possible values: Any string, including an empty string (the default). config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 string value When gathering the existing metadata for a config drive, the EC2-style metadata is returned for all versions that don't appear in this option. As of the Liberty release, the available versions are: 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 2009-04-04 The option is in the format of a single string, with each version separated by a space. Possible values: Any string that represents zero or more versions, separated by spaces. dhcp_domain = novalocal string value Domain name used to configure FQDN for instances. Configure a fully-qualified domain name for instance hostnames. If unset, only the hostname without a domain will be configured. Possible values: Any string that is a valid domain name. enable_instance_password = True boolean value Enables returning of the instance password by the relevant server API calls such as create, rebuild, evacuate, or rescue. If the hypervisor does not support password injection, then the password returned will not be correct, so if your hypervisor does not support password injection, set this to False. glance_link_prefix = None string value This string is prepended to the normal URL that is returned in links to Glance resources. If it is empty (the default), the URLs are returned unchanged. Possible values: Any string, including an empty string (the default). instance_list_cells_batch_fixed_size = 100 integer value This controls the batch size of instances requested from each cell database if instance_list_cells_batch_strategy` is set to fixed . This integral value will define the limit issued to each cell every time a batch of instances is requested, regardless of the number of cells in the system or any other factors. Per the general logic called out in the documentation for instance_list_cells_batch_strategy , the minimum value for this is 100 records per batch. Related options: instance_list_cells_batch_strategy max_limit instance_list_cells_batch_strategy = distributed string value This controls the method by which the API queries cell databases in smaller batches during large instance list operations. 
If batching is performed, a large instance list operation will request some fraction of the overall API limit from each cell database initially, and will re-request that same batch size as records are consumed (returned) from each cell as necessary. Larger batches mean less chattiness between the API and the database, but potentially more wasted effort processing the results from the database which will not be returned to the user. Any strategy will yield a batch size of at least 100 records, to avoid a user causing many tiny database queries in their request. Related options: instance_list_cells_batch_fixed_size max_limit instance_list_per_project_cells = False boolean value When enabled, this will cause the API to only query cell databases in which the tenant has mapped instances. This requires an additional (fast) query in the API database before each list, but also (potentially) limits the number of cell databases that must be queried to provide the result. If you have a small number of cells, or tenants are likely to have instances in all cells, then this should be False. If you have many cells, especially if you confine tenants to a small subset of those cells, this should be True. list_records_by_skipping_down_cells = True boolean value When set to False, this will cause the API to return a 500 error if there is an infrastructure failure like non-responsive cells. If you want the API to skip the down cells and return the results from the up cells set this option to True. Note that from API microversion 2.69 there could be transient conditions in the deployment where certain records are not available and the results could be partial for certain requests containing those records. In those cases this option will be ignored. See "Handling Down Cells" section of the Compute API guide ( https://docs.openstack.org/api-guide/compute/down_cells.html ) for more information. local_metadata_per_cell = False boolean value Indicates that the nova-metadata API service has been deployed per-cell, so that we can have better performance and data isolation in a multi-cell deployment. Users should consider the use of this configuration depending on how neutron is setup. If you have networks that span cells, you might need to run nova-metadata API service globally. If your networks are segmented along cell boundaries, then you can run nova-metadata API service per cell. When running nova-metadata API service per cell, you should also configure each Neutron metadata-agent to point to the corresponding nova-metadata API service. max_limit = 1000 integer value As a query can potentially return many thousands of items, you can limit the maximum number of items in a single response by setting this option. metadata_cache_expiration = 15 integer value This option is the time (in seconds) to cache metadata. When set to 0, metadata caching is disabled entirely; this is generally not recommended for performance reasons. Increasing this setting should improve response times of the metadata API when under heavy load. Higher values may increase memory usage, and result in longer times for host metadata changes to take effect. neutron_default_tenant_id = default string value Tenant ID for getting the default network from Neutron API (also referred in some places as the project ID ) to use. Related options: use_neutron_default_nets use_forwarded_for = False boolean value When True, the X-Forwarded-For header is treated as the canonical remote address. When False (the default), the remote_address header is used. 
You should only enable this if you have an HTML sanitizing proxy. use_neutron_default_nets = False boolean value When True, the TenantNetworkController will query the Neutron API to get the default networks to use. Related options: neutron_default_tenant_id vendordata_dynamic_connect_timeout = 5 integer value Maximum wait time for an external REST service to connect. Possible values: Any integer with a value greater than three (the TCP packet retransmission timeout). Note that instance start may be blocked during this wait time, so this value should be kept small. Related options: vendordata_providers vendordata_dynamic_targets vendordata_dynamic_ssl_certfile vendordata_dynamic_read_timeout vendordata_dynamic_failure_fatal vendordata_dynamic_failure_fatal = False boolean value Should failures to fetch dynamic vendordata be fatal to instance boot? Related options: vendordata_providers vendordata_dynamic_targets vendordata_dynamic_ssl_certfile vendordata_dynamic_connect_timeout vendordata_dynamic_read_timeout vendordata_dynamic_read_timeout = 5 integer value Maximum wait time for an external REST service to return data once connected. Possible values: Any integer. Note that instance start is blocked during this wait time, so this value should be kept small. Related options: vendordata_providers vendordata_dynamic_targets vendordata_dynamic_ssl_certfile vendordata_dynamic_connect_timeout vendordata_dynamic_failure_fatal `vendordata_dynamic_ssl_certfile = ` string value Path to an optional certificate file or CA bundle to verify dynamic vendordata REST services ssl certificates against. Possible values: An empty string, or a path to a valid certificate file Related options: vendordata_providers vendordata_dynamic_targets vendordata_dynamic_connect_timeout vendordata_dynamic_read_timeout vendordata_dynamic_failure_fatal vendordata_dynamic_targets = [] list value A list of targets for the dynamic vendordata provider. These targets are of the form <name>@<url> . The dynamic vendordata provider collects metadata by contacting external REST services and querying them for information about the instance. This behaviour is documented in the vendordata.rst file in the nova developer reference. vendordata_jsonfile_path = None string value Cloud providers may store custom data in vendor data file that will then be available to the instances via the metadata service, and to the rendering of config-drive. The default class for this, JsonFileVendorData, loads this information from a JSON file, whose path is configured by this option. If there is no path set by this option, the class returns an empty dictionary. Note that when using this to provide static vendor data to a configuration drive, the nova-compute service must be configured with this option and the file must be accessible from the nova-compute host. Possible values: Any string representing the path to the data file, or an empty string (default). vendordata_providers = ['StaticJSON'] list value A list of vendordata providers. vendordata providers are how deployers can provide metadata via configdrive and metadata that is specific to their deployment. For more information on the requirements for implementing a vendordata dynamic endpoint, please see the vendordata.rst file in the nova developer reference. Related options: vendordata_dynamic_targets vendordata_dynamic_ssl_certfile vendordata_dynamic_connect_timeout vendordata_dynamic_read_timeout vendordata_dynamic_failure_fatal 9.1.3. 
api_database The following table outlines the options available under the [api_database] group in the /etc/nova/nova.conf file. Table 9.2. api_database Configuration option = Default value Type Description connection = None string value The SQLAlchemy connection string to use to connect to the database. Do not set this for the nova-compute service. connection_debug = 0 integer value Verbosity of SQL debugging information: 0=None, 100=Everything. `connection_parameters = ` string value Optional URL parameters to append onto the connection URL at connect time; specify as param1=value1&param2=value2&... connection_recycle_time = 3600 integer value Connections which have been present in the connection pool longer than this number of seconds will be replaced with a new one the time they are checked out from the pool. connection_trace = False boolean value Add Python stack traces to SQL as comment strings. max_overflow = None integer value If set, use this value for max_overflow with SQLAlchemy. max_pool_size = None integer value Maximum number of SQL connections to keep open in a pool. Setting a value of 0 indicates no limit. max_retries = 10 integer value Maximum number of database connection retries during startup. Set to -1 to specify an infinite retry count. mysql_sql_mode = TRADITIONAL string value The SQL mode to be used for MySQL sessions. This option, including the default, overrides any server-set SQL mode. To use whatever SQL mode is set by the server configuration, set this to no value. Example: mysql_sql_mode= pool_timeout = None integer value If set, use this value for pool_timeout with SQLAlchemy. retry_interval = 10 integer value Interval between retries of opening a SQL connection. slave_connection = None string value The SQLAlchemy connection string to use to connect to the slave database. sqlite_synchronous = True boolean value If True, SQLite uses synchronous mode. 9.1.4. barbican The following table outlines the options available under the [barbican] group in the /etc/nova/nova.conf file. Table 9.3. barbican Configuration option = Default value Type Description auth_endpoint = http://localhost/identity/v3 string value Use this endpoint to connect to Keystone barbican_api_version = None string value Version of the Barbican API, for example: "v1" barbican_endpoint = None string value Use this endpoint to connect to Barbican, for example: "http://localhost:9311/" barbican_endpoint_type = public string value Specifies the type of endpoint. Allowed values are: public, private, and admin number_of_retries = 60 integer value Number of times to retry poll for key creation completion retry_delay = 1 integer value Number of seconds to wait before retrying poll for key creation completion verify_ssl = True boolean value Specifies if insecure TLS (https) requests. If False, the server's certificate will not be validated, if True, we can set the verify_ssl_path config meanwhile. verify_ssl_path = None string value A path to a bundle or CA certs to check against, or None for requests to attempt to locate and use certificates which verify_ssh is True. If verify_ssl is False, this is ignored. 9.1.5. cache The following table outlines the options available under the [cache] group in the /etc/nova/nova.conf file. Table 9.4. cache Configuration option = Default value Type Description backend = dogpile.cache.null string value Cache backend module. For eventlet-based or environments with hundreds of threaded servers, Memcache with pooling (oslo_cache.memcache_pool) is recommended. 
For environments with fewer than 100 threaded servers, Memcached (dogpile.cache.memcached) or Redis (dogpile.cache.redis) is recommended. Test environments with a single instance of the server can use the dogpile.cache.memory backend. backend_argument = [] multi valued Arguments supplied to the backend module. Specify this option once per argument to be passed to the dogpile.cache backend. Example format: "<argname>:<value>". config_prefix = cache.oslo string value Prefix for building the configuration dictionary for the cache region. This should not need to be changed unless there is another dogpile.cache region with the same configuration name. dead_timeout = 60 floating point value Time in seconds before attempting to add a node back in the pool in the HashClient's internal mechanisms. debug_cache_backend = False boolean value Extra debugging from the cache backend (cache keys, get/set/delete/etc calls). This is only really useful if you need to see the specific cache-backend get/set/delete calls with the keys/values. Typically this should be left set to false. enable_retry_client = False boolean value Enable retry client mechanisms to handle failure. Those mechanisms can be used to wrap all kinds of pymemcache clients. The wrapper allows you to define how many attempts to make and how long to wait between attempts. enable_socket_keepalive = False boolean value Global toggle for the socket keepalive of dogpile's pymemcache backend enabled = False boolean value Global toggle for caching. expiration_time = 600 integer value Default TTL, in seconds, for any cached item in the dogpile.cache region. This applies to any cached method that doesn't have an explicit cache expiration time defined for it. hashclient_retry_attempts = 2 integer value Number of times a client should be tried before it is marked dead and removed from the pool in the HashClient's internal mechanisms. hashclient_retry_delay = 1 floating point value Time in seconds that should pass between retry attempts in the HashClient's internal mechanisms. memcache_dead_retry = 300 integer value Number of seconds a memcached server is considered dead before it is tried again. (dogpile.cache.memcache and oslo_cache.memcache_pool backends only). memcache_pool_connection_get_timeout = 10 integer value Number of seconds that an operation will wait to get a memcache client connection. memcache_pool_flush_on_reconnect = False boolean value Global toggle if memcache will be flushed on reconnect. (oslo_cache.memcache_pool backend only). memcache_pool_maxsize = 10 integer value Max total number of open connections to every memcached server. (oslo_cache.memcache_pool backend only). memcache_pool_unused_timeout = 60 integer value Number of seconds a connection to memcached is held unused in the pool before it is closed. (oslo_cache.memcache_pool backend only). memcache_servers = ['localhost:11211'] list value Memcache servers in the format of "host:port". (dogpile.cache.memcached and oslo_cache.memcache_pool backends only). If a given host or domain refers to an IPv6 address, you should prefix the given address with the address family ( inet6 ) (e.g. inet6:[::1]:11211 , inet6:[fd12:3456:789a:1::1]:11211 , inet6:[controller-0.internalapi]:11211 ). If the address family is not given, the default address family used will be inet , which corresponds to IPv4. memcache_socket_timeout = 1.0 floating point value Timeout in seconds for every call to a server. (dogpile.cache.memcache and oslo_cache.memcache_pool backends only).
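For illustration only, a minimal caching configuration that enables the pooled memcache backend might look like the following sketch; the server addresses and TTL shown are placeholders, not defaults or recommendations:
[cache]
enabled = True
backend = oslo_cache.memcache_pool
memcache_servers = 192.0.2.10:11211,192.0.2.11:11211
expiration_time = 600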
proxies = [] list value Proxy classes to import that will affect the way the dogpile.cache backend functions. See the dogpile.cache documentation on changing-backend-behavior. retry_attempts = 2 integer value Number of times to attempt an action before failing. retry_delay = 0 floating point value Number of seconds to sleep between each attempt. socket_keepalive_count = 1 integer value The maximum number of keepalive probes TCP should send before dropping the connection. Should be a positive integer greater than zero. socket_keepalive_idle = 1 integer value The time (in seconds) the connection needs to remain idle before TCP starts sending keepalive probes. Should be a positive integer greater than zero. socket_keepalive_interval = 1 integer value The time (in seconds) between individual keepalive probes. Should be a positive integer greater than zero. tls_allowed_ciphers = None string value Set the available ciphers for sockets created with the TLS context. It should be a string in the OpenSSL cipher list format. If not specified, all OpenSSL enabled ciphers will be available. tls_cafile = None string value Path to a file of concatenated CA certificates in PEM format necessary to establish the caching servers' authenticity. If tls_enabled is False, this option is ignored. tls_certfile = None string value Path to a single file in PEM format containing the client's certificate as well as any number of CA certificates needed to establish the certificate's authenticity. This file is only required when client side authentication is necessary. If tls_enabled is False, this option is ignored. tls_enabled = False boolean value Global toggle for TLS usage when communicating with the caching servers. tls_keyfile = None string value Path to a single file containing the client's private key. Otherwise, the private key will be taken from the file specified in tls_certfile. If tls_enabled is False, this option is ignored. 9.1.6. cinder The following table outlines the options available under the [cinder] group in the /etc/nova/nova.conf file. Table 9.5. cinder Configuration option = Default value Type Description auth-url = None string value Authentication URL auth_section = None string value Config Section from which to load plugin specific options auth_type = None string value Authentication type to load cafile = None string value PEM encoded Certificate Authority to use when verifying HTTPs connections. catalog_info = volumev3::publicURL string value Info to match when looking for cinder in the service catalog. The <service_name> is optional and omitted by default since it should not be necessary in most deployments. Possible values: Format is separated values of the form: <service_type>:<service_name>:<endpoint_type> Note: Nova does not support the Cinder v2 API since the Nova 17.0.0 Queens release. Related options: endpoint_template - Setting this option will override catalog_info certfile = None string value PEM encoded client certificate cert file collect-timing = False boolean value Collect per-API call timing information. cross_az_attach = True boolean value Allow attach between instance and volume in different availability zones. If False, volumes attached to an instance must be in the same availability zone in Cinder as the instance availability zone in Nova. This also means care should be taken when booting an instance from a volume where source is not "volume" because Nova will attempt to create a volume using the same availability zone as what is assigned to the instance.
If that AZ is not in Cinder (or allow_availability_zone_fallback=False in cinder.conf), the volume create request will fail and the instance will fail the build request. By default there is no availability zone restriction on volume attach. Related options: [DEFAULT]/default_schedule_zone default-domain-id = None string value Optional domain ID to use with v3 and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. default-domain-name = None string value Optional domain name to use with v3 API and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. domain-id = None string value Domain ID to scope to domain-name = None string value Domain name to scope to endpoint_template = None string value If this option is set then it will override service catalog lookup with this template for cinder endpoint Possible values: URL for cinder endpoint API e.g. http://localhost:8776/v3/%(project_id)s Note: Nova does not support the Cinder v2 API since the Nova 17.0.0 Queens release. Related options: catalog_info - If endpoint_template is not set, catalog_info will be used. http_retries = 3 integer value Number of times cinderclient should retry on any failed http call. 0 means connection is attempted only once. Setting it to any positive integer means that on failure connection is retried that many times e.g. setting it to 3 means total attempts to connect will be 4. Possible values: Any integer value. 0 means connection is attempted only once insecure = False boolean value Verify HTTPS connections. keyfile = None string value PEM encoded client certificate key file os_region_name = None string value Region name of this node. This is used when picking the URL in the service catalog. Possible values: Any string representing region name password = None string value User's password project-domain-id = None string value Domain ID containing project project-domain-name = None string value Domain name containing project project-id = None string value Project ID to scope to project-name = None string value Project name to scope to split-loggers = False boolean value Log requests to multiple loggers. system-scope = None string value Scope for system operations tenant-id = None string value Tenant ID tenant-name = None string value Tenant Name timeout = None integer value Timeout value for http requests trust-id = None string value Trust ID user-domain-id = None string value User's domain id user-domain-name = None string value User's domain name user-id = None string value User ID username = None string value Username 9.1.7. compute The following table outlines the options available under the [compute] group in the /etc/nova/nova.conf file. Table 9.6. compute Configuration option = Default value Type Description consecutive_build_service_disable_threshold = 10 integer value Enables reporting of build failures to the scheduler. Any nonzero value will enable sending build failure statistics to the scheduler for use by the BuildFailureWeigher. Possible values: Any positive integer enables reporting build failures. Zero to disable reporting build failures. Related options: [filter_scheduler]/build_failure_weight_multiplier cpu_dedicated_set = None string value Mask of host CPUs that can be used for PCPU resources. The behavior of this option affects the behavior of the deprecated vcpu_pin_set option. If this option is defined, defining vcpu_pin_set will result in an error. 
If this option is not defined, vcpu_pin_set will be used to determine inventory for VCPU resources and to limit the host CPUs that both pinned and unpinned instances can be scheduled to. This behavior will be simplified in a future release when vcpu_pin_set is removed. Possible values: A comma-separated list of physical CPU numbers that instance VCPUs can be allocated from. Each element should be either a single CPU number, a range of CPU numbers, or a caret followed by a CPU number to be excluded from a range. For example cpu_dedicated_set = "4-12,^8,15" Related options: [compute] cpu_shared_set : This is the counterpart option for defining where VCPU resources should be allocated from. vcpu_pin_set : A legacy option that this option partially replaces. cpu_shared_set = None string value Mask of host CPUs that can be used for VCPU resources and offloaded emulator threads. The behavior of this option depends on the definition of the deprecated vcpu_pin_set option. If vcpu_pin_set is not defined, [compute] cpu_shared_set will be be used to provide VCPU inventory and to determine the host CPUs that unpinned instances can be scheduled to. It will also be used to determine the host CPUS that instance emulator threads should be offloaded to for instances configured with the share emulator thread policy ( hw:emulator_threads_policy=share ). If vcpu_pin_set is defined, [compute] cpu_shared_set will only be used to determine the host CPUs that instance emulator threads should be offloaded to for instances configured with the share emulator thread policy ( hw:emulator_threads_policy=share ). vcpu_pin_set will be used to provide VCPU inventory and to determine the host CPUs that both pinned and unpinned instances can be scheduled to. This behavior will be simplified in a future release when vcpu_pin_set is removed. Possible values: A comma-separated list of physical CPU numbers that instance VCPUs can be allocated from. Each element should be either a single CPU number, a range of CPU numbers, or a caret followed by a CPU number to be excluded from a range. For example cpu_shared_set = "4-12,^8,15" Related options: [compute] cpu_dedicated_set : This is the counterpart option for defining where PCPU resources should be allocated from. vcpu_pin_set : A legacy option whose definition may change the behavior of this option. image_type_exclude_list = [] list value A list of image formats that should not be advertised as supported by this compute node. In some situations, it may be desirable to have a compute node refuse to support an expensive or complex image format. This factors into the decisions made by the scheduler about which compute node to select when booted with a given image. Possible values: Any glance image disk_format name (i.e. raw , qcow2 , etc) Related options: [scheduler]query_placement_for_image_type_support - enables filtering computes based on supported image types, which is required to be enabled for this to take effect. live_migration_wait_for_vif_plug = True boolean value Determine if the source compute host should wait for a network-vif-plugged event from the (neutron) networking service before starting the actual transfer of the guest to the destination compute host. Note that this option is read on the destination host of a live migration. If you set this option the same on all of your compute hosts, which you should do if you use the same networking backend universally, you do not have to worry about this. 
Before starting the transfer of the guest, some setup occurs on the destination compute host, including plugging virtual interfaces. Depending on the networking backend on the destination host, a network-vif-plugged event may be triggered and then received on the source compute host and the source compute can wait for that event to ensure networking is set up on the destination host before starting the guest transfer in the hypervisor. Possible values: True: wait for network-vif-plugged events before starting guest transfer False: do not wait for network-vif-plugged events before starting guest transfer (this is the legacy behavior) Related options: [DEFAULT]/vif_plugging_is_fatal: if live_migration_wait_for_vif_plug is True and vif_plugging_timeout is greater than 0, and a timeout is reached, the live migration process will fail with an error but the guest transfer will not have started to the destination host [DEFAULT]/vif_plugging_timeout: if live_migration_wait_for_vif_plug is True, this controls the amount of time to wait before timing out and either failing if vif_plugging_is_fatal is True, or simply continuing with the live migration max_concurrent_disk_ops = 0 integer value Number of concurrent disk-IO-intensive operations (glance image downloads, image format conversions, etc.) that we will do in parallel. If this is set too high then response time suffers. The default value of 0 means no limit. max_disk_devices_to_attach = -1 integer value Maximum number of disk devices allowed to attach to a single server. Note that the number of disks supported by a server depends on the bus used. For example, the ide disk bus is limited to 4 attached devices. The configured maximum is enforced during server create, rebuild, evacuate, unshelve, live migrate, and attach volume. Usually, disk bus is determined automatically from the device type or disk device, and the virtualization type. However, disk bus can also be specified via a block device mapping or an image property. See the disk_bus field in the /user/block-device-mapping documentation for more information about specifying disk bus in a block device mapping, and see https://docs.openstack.org/glance/latest/admin/useful-image-properties.html for more information about the hw_disk_bus image property. Operators changing the [compute]/max_disk_devices_to_attach on a compute service that is hosting servers should be aware that it could cause rebuilds to fail, if the maximum is decreased lower than the number of devices already attached to servers. For example, if server A has 26 devices attached and an operator changes [compute]/max_disk_devices_to_attach to 20, a request to rebuild server A will fail and go into ERROR state because 26 devices are already attached and exceed the new configured maximum of 20. Operators setting [compute]/max_disk_devices_to_attach should also be aware that during a cold migration, the configured maximum is only enforced in-place and the destination is not checked before the move. This means if an operator has set a maximum of 26 on compute host A and a maximum of 20 on compute host B, a cold migration of a server with 26 attached devices from compute host A to compute host B will succeed. Then, once the server is on compute host B, a subsequent request to rebuild the server will fail and go into ERROR state because 26 devices are already attached and exceed the configured maximum of 20 on compute host B. The configured maximum is not enforced on shelved offloaded servers, as they have no compute host.
Warning: If this option is set to 0, the nova-compute service will fail to start, as 0 disk devices is an invalid configuration that would prevent instances from being able to boot. Possible values: -1 means unlimited Any integer >= 1 represents the maximum allowed. A value of 0 will cause the nova-compute service to fail to start, as 0 disk devices is an invalid configuration that would prevent instances from being able to boot. provider_config_location = /etc/nova/provider_config/ string value Location of YAML files containing resource provider configuration data. These files allow the operator to specify additional custom inventory and traits to assign to one or more resource providers. Additional documentation is available here: resource_provider_association_refresh = 300 integer value Interval for updating nova-compute-side cache of the compute node resource provider's inventories, aggregates, and traits. This option specifies the number of seconds between attempts to update a provider's inventories, aggregates and traits in the local cache of the compute node. A value of zero disables cache refresh completely. The cache can be cleared manually at any time by sending SIGHUP to the compute process, causing it to be repopulated the next time the data is accessed. Possible values: Any positive integer in seconds, or zero to disable refresh. shutdown_retry_interval = 10 integer value Time to wait in seconds before resending an ACPI shutdown signal to instances. The overall time to wait is set by shutdown_timeout . Possible values: Any integer greater than 0 in seconds Related options: shutdown_timeout vmdk_allowed_types = ['streamOptimized', 'monolithicSparse'] list value A list of strings describing the VMDK "create-type" subformats that will be allowed. This is recommended to only include single-file-with-sparse-header variants to avoid potential host file exposure due to processing named extents. If this list is empty, then no form of VMDK image will be allowed. 9.1.8. conductor The following table outlines the options available under the [conductor] group in the /etc/nova/nova.conf file. Table 9.7. conductor Configuration option = Default value Type Description workers = None integer value Number of workers for OpenStack Conductor service. The default will be the number of CPUs available. 9.1.9. console The following table outlines the options available under the [console] group in the /etc/nova/nova.conf file. Table 9.8. console Configuration option = Default value Type Description allowed_origins = [] list value Adds list of allowed origins to the console websocket proxy to allow connections from other origin hostnames. Websocket proxy matches the host header with the origin header to prevent cross-site requests. This list specifies any values other than the host that are allowed in the origin header. Possible values: A list where each element is an allowed origin hostname, or an empty list ssl_ciphers = None string value OpenSSL cipher preference string that specifies what ciphers to allow for TLS connections from clients. See the man page for the OpenSSL ciphers command for details of the cipher preference string format and allowed values. Related options: [DEFAULT] cert [DEFAULT] key ssl_minimum_version = default string value Minimum allowed SSL/TLS protocol version. Related options: [DEFAULT] cert [DEFAULT] key 9.1.10. consoleauth The following table outlines the options available under the [consoleauth] group in the /etc/nova/nova.conf file. Table 9.9.
consoleauth Configuration option = Default value Type Description token_ttl = 600 integer value The lifetime of a console auth token (in seconds). A console auth token is used in authorizing console access for a user. Once the auth token time to live count has elapsed, the token is considered expired. Expired tokens are then deleted. 9.1.11. cors The following table outlines the options available under the [cors] group in the /etc/nova/nova.conf file. Table 9.10. cors Configuration option = Default value Type Description allow_credentials = True boolean value Indicate that the actual request can include user credentials allow_headers = ['X-Auth-Token', 'X-Openstack-Request-Id', 'X-Identity-Status', 'X-Roles', 'X-Service-Catalog', 'X-User-Id', 'X-Tenant-Id', 'X-OpenStack-Nova-API-Version', 'OpenStack-API-Version'] list value Indicate which header field names may be used during the actual request. allow_methods = ['GET', 'PUT', 'POST', 'DELETE', 'PATCH'] list value Indicate which methods can be used during the actual request. allowed_origin = None list value Indicate whether this resource may be shared with the domain received in the requests "origin" header. Format: "<protocol>://<host>[:<port>]", no trailing slash. Example: https://horizon.example.com expose_headers = ['X-Auth-Token', 'X-Openstack-Request-Id', 'X-Subject-Token', 'X-Service-Token', 'X-OpenStack-Nova-API-Version', 'OpenStack-API-Version'] list value Indicate which headers are safe to expose to the API. Defaults to HTTP Simple Headers. max_age = 3600 integer value Maximum cache age of CORS preflight requests. 9.1.12. cyborg The following table outlines the options available under the [cyborg] group in the /etc/nova/nova.conf file. Table 9.11. cyborg Configuration option = Default value Type Description cafile = None string value PEM encoded Certificate Authority to use when verifying HTTPs connections. certfile = None string value PEM encoded client certificate cert file collect-timing = False boolean value Collect per-API call timing information. connect-retries = None integer value The maximum number of retries that should be attempted for connection errors. connect-retry-delay = None floating point value Delay (in seconds) between two retries for connection errors. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used. endpoint-override = None string value Always use this endpoint URL for requests for this client. NOTE: The unversioned endpoint should be specified here; to request a particular API version, use the version , min-version , and/or max-version options. insecure = False boolean value Verify HTTPS connections. keyfile = None string value PEM encoded client certificate key file region-name = None string value The default region_name for endpoint URL discovery. service-name = None string value The default service_name for endpoint URL discovery. service-type = accelerator string value The default service_type for endpoint URL discovery. split-loggers = False boolean value Log requests to multiple loggers. status-code-retries = None integer value The maximum number of retries that should be attempted for retriable HTTP status codes. status-code-retry-delay = None floating point value Delay (in seconds) between two retries for retriable status codes. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used. 
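As an illustrative sketch only (the region name and retry values below are assumptions, not recommended defaults), a [cyborg] client section that retries failed connections a few times might look like:
[cyborg]
service-type = accelerator
region-name = regionOne
connect-retries = 3
connect-retry-delay = 1.0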
timeout = None integer value Timeout value for http requests valid-interfaces = ['internal', 'public'] list value List of interfaces, in order of preference, for endpoint URL. 9.1.13. database The following table outlines the options available under the [database] group in the /etc/nova/nova.conf file. Table 9.12. database Configuration option = Default value Type Description backend = sqlalchemy string value The back end to use for the database. connection = None string value The SQLAlchemy connection string to use to connect to the database. connection_debug = 0 integer value Verbosity of SQL debugging information: 0=None, 100=Everything. `connection_parameters = ` string value Optional URL parameters to append onto the connection URL at connect time; specify as param1=value1&param2=value2&... connection_recycle_time = 3600 integer value Connections which have been present in the connection pool longer than this number of seconds will be replaced with a new one the time they are checked out from the pool. connection_trace = False boolean value Add Python stack traces to SQL as comment strings. db_inc_retry_interval = True boolean value If True, increases the interval between retries of a database operation up to db_max_retry_interval. db_max_retries = 20 integer value Maximum retries in case of connection error or deadlock error before error is raised. Set to -1 to specify an infinite retry count. db_max_retry_interval = 10 integer value If db_inc_retry_interval is set, the maximum seconds between retries of a database operation. db_retry_interval = 1 integer value Seconds between retries of a database transaction. max_overflow = 50 integer value If set, use this value for max_overflow with SQLAlchemy. max_pool_size = 5 integer value Maximum number of SQL connections to keep open in a pool. Setting a value of 0 indicates no limit. max_retries = 10 integer value Maximum number of database connection retries during startup. Set to -1 to specify an infinite retry count. mysql_enable_ndb = False boolean value If True, transparently enables support for handling MySQL Cluster (NDB). mysql_sql_mode = TRADITIONAL string value The SQL mode to be used for MySQL sessions. This option, including the default, overrides any server-set SQL mode. To use whatever SQL mode is set by the server configuration, set this to no value. Example: mysql_sql_mode= pool_timeout = None integer value If set, use this value for pool_timeout with SQLAlchemy. retry_interval = 10 integer value Interval between retries of opening a SQL connection. slave_connection = None string value The SQLAlchemy connection string to use to connect to the slave database. sqlite_synchronous = True boolean value If True, SQLite uses synchronous mode. use_db_reconnect = False boolean value Enable the experimental use of database reconnect on connection lost. use_tpool = False boolean value Enable the experimental use of thread pooling for all DB API calls 9.1.14. devices The following table outlines the options available under the [devices] group in the /etc/nova/nova.conf file. Table 9.13. devices Configuration option = Default value Type Description enabled_vgpu_types = [] list value The vGPU types enabled in the compute node. Some pGPUs (e.g. NVIDIA GRID K1) support different vGPU types. User can use this option to specify a list of enabled vGPU types that may be assigned to a guest instance. 
If more than one vGPU type is provided, then for each vGPU type an additional section, [vgpu_$(VGPU_TYPE)] , must be added to the configuration file. Each section then must be configured with a single configuration option, device_addresses , which should be a list of PCI addresses corresponding to the physical GPU(s) to assign to this type. If one or more sections are missing (meaning that a specific type is not intended to be used for at least one physical GPU) or if no device addresses are provided, then Nova will only use the first type that was provided by [devices]/enabled_vgpu_types . If the same PCI address is provided for two different types, nova-compute will return an InvalidLibvirtGPUConfig exception at restart. For example: [devices] enabled_vgpu_types = nvidia-35, nvidia-36 9.1.15. ephemeral_storage_encryption The following table outlines the options available under the [ephemeral_storage_encryption] group in the /etc/nova/nova.conf file. Table 9.14. ephemeral_storage_encryption Configuration option = Default value Type Description cipher = aes-xts-plain64 string value Cipher-mode string to be used. The cipher and mode to be used to encrypt ephemeral storage. The set of cipher-mode combinations available depends on kernel support. According to the dm-crypt documentation, the cipher is expected to be in the format: "<cipher>-<chainmode>-<ivmode>". Possible values: Any crypto option listed in /proc/crypto . enabled = False boolean value Enables/disables LVM ephemeral storage encryption. key_size = 512 integer value Encryption key length in bits. The bit length of the encryption key to be used to encrypt ephemeral storage. In XTS mode only half of the bits are used for the encryption key. 9.1.16. filter_scheduler The following table outlines the options available under the [filter_scheduler] group in the /etc/nova/nova.conf file. Table 9.15. filter_scheduler Configuration option = Default value Type Description aggregate_image_properties_isolation_namespace = None string value Image property namespace for use in the host aggregate. Images and hosts can be configured so that certain images can only be scheduled to hosts in a particular aggregate. This is done with metadata values set on the host aggregate that are identified by beginning with the value of this option. If the host is part of an aggregate with such a metadata key, the image in the request spec must have the value of that metadata in its properties in order for the scheduler to consider the host as acceptable. Note that this setting only affects scheduling if the AggregateImagePropertiesIsolation filter is enabled.
Possible values: A string, where the string corresponds to an image property namespace separator character Related options: [filter_scheduler] aggregate_image_properties_isolation_namespace available_filters = ['nova.scheduler.filters.all_filters'] multi valued Filters that the scheduler can use. An unordered list of the filter classes the nova scheduler may apply. Only the filters specified in the [filter_scheduler] enabled_filters option will be used, but any filter appearing in that option must also be included in this list. By default, this is set to all filters that are included with nova. Possible values: A list of zero or more strings, where each string corresponds to the name of a filter that may be used for selecting a host Related options: [filter_scheduler] enabled_filters build_failure_weight_multiplier = 1000000.0 floating point value Multiplier used for weighing hosts that have had recent build failures. This option determines how much weight is placed on a compute node with recent build failures. Build failures may indicate a failing, misconfigured, or otherwise ailing compute node, and avoiding it during scheduling may be beneficial. The weight is inversely proportional to the number of recent build failures the compute node has experienced. This value should be set to some high value to offset weight given by other enabled weighers due to available resources. To disable weighing compute hosts by the number of recent failures, set this to zero. Note that this setting only affects scheduling if the BuildFailureWeigher weigher is enabled. Possible values: An integer or float value, where the value corresponds to the multiplier ratio for this weigher. Related options: [compute] consecutive_build_service_disable_threshold - Must be nonzero for a compute to report data considered by this weigher. [filter_scheduler] weight_classes cpu_weight_multiplier = 1.0 floating point value CPU weight multiplier ratio. Multiplier used for weighting free vCPUs. Negative numbers indicate stacking rather than spreading. Note that this setting only affects scheduling if the CPUWeigher weigher is enabled. Possible values: An integer or float value, where the value corresponds to the multipler ratio for this weigher. Related options: [filter_scheduler] weight_classes cross_cell_move_weight_multiplier = 1000000.0 floating point value Multiplier used for weighing hosts during a cross-cell move. This option determines how much weight is placed on a host which is within the same source cell when moving a server, for example during cross-cell resize. By default, when moving an instance, the scheduler will prefer hosts within the same cell since cross-cell move operations can be slower and riskier due to the complicated nature of cross-cell migrations. Note that this setting only affects scheduling if the CrossCellWeigher weigher is enabled. If your cloud is not configured to support cross-cell migrations, then this option has no effect. The value of this configuration option can be overridden per host aggregate by setting the aggregate metadata key with the same name ( cross_cell_move_weight_multiplier ). Possible values: An integer or float value, where the value corresponds to the multiplier ratio for this weigher. Positive values mean the weigher will prefer hosts within the same cell in which the instance is currently running. Negative values mean the weigher will prefer hosts in other cells from which the instance is currently running. 
Related options: [filter_scheduler] weight_classes disk_weight_multiplier = 1.0 floating point value Disk weight multiplier ratio. Multiplier used for weighing free disk space. Negative numbers indicate stacking rather than spreading. Note that this setting only affects scheduling if the DiskWeigher weigher is enabled. Possible values: An integer or float value, where the value corresponds to the multiplier ratio for this weigher. enabled_filters = ['AvailabilityZoneFilter', 'ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] list value Filters that the scheduler will use. An ordered list of filter class names that will be used for filtering hosts. These filters will be applied in the order they are listed, so place your most restrictive filters first to make the filtering process more efficient. All of the filters in this option must be present in the [filter_scheduler] available_filters option, or a SchedulerHostFilterNotFound exception will be raised. Possible values: A list of zero or more strings, where each string corresponds to the name of a filter to be used for selecting a host Related options: [filter_scheduler] available_filters host_subset_size = 1 integer value Size of subset of best hosts selected by scheduler. New instances will be scheduled on a host chosen randomly from a subset of the N best hosts, where N is the value set by this option. Setting this to a value greater than 1 will reduce the chance that multiple scheduler processes handling similar requests will select the same host, creating a potential race condition. By selecting a host randomly from the N hosts that best fit the request, the chance of a conflict is reduced. However, the higher you set this value, the less optimal the chosen host may be for a given request. Possible values: An integer, where the integer corresponds to the size of a host subset. hypervisor_version_weight_multiplier = 1.0 floating point value Hypervisor Version weight multiplier ratio. The multiplier is used for weighting hosts based on the reported hypervisor version. Negative numbers indicate preferring older hosts; the default is to prefer newer hosts to aid with upgrades. Possible values: An integer or float value, where the value corresponds to the multiplier ratio for this weigher. For example, the multiplier can be set to a large negative value to strongly prefer older hosts, to a small positive value to moderately prefer newer hosts, or to zero to disable the weigher's influence. Related options: [filter_scheduler] weight_classes image_properties_default_architecture = None string value The default architecture to be used when using the image properties filter. When using the ImagePropertiesFilter , it is possible that you want to define a default architecture to make the user experience easier and avoid having something like x86_64 images landing on AARCH64 compute nodes because the user did not specify the hw_architecture property in Glance. Possible values: CPU Architectures such as x86_64, aarch64, s390x. io_ops_weight_multiplier = -1.0 floating point value IO operations weight multiplier ratio. This option determines how hosts with differing workloads are weighed. Negative values, such as the default, will result in the scheduler preferring hosts with lighter workloads whereas positive values will prefer hosts with heavier workloads.
Another way to look at it is that positive values for this option will tend to schedule instances onto hosts that are already busy, while negative values will tend to distribute the workload across more hosts. The absolute value, whether positive or negative, controls how strong the io_ops weigher is relative to other weighers. Note that this setting only affects scheduling if the IoOpsWeigher weigher is enabled. Possible values: An integer or float value, where the value corresponds to the multipler ratio for this weigher. Related options: [filter_scheduler] weight_classes isolated_hosts = [] list value List of hosts that can only run certain images. If there is a need to restrict some images to only run on certain designated hosts, list those host names here. Note that this setting only affects scheduling if the IsolatedHostsFilter filter is enabled. Possible values: A list of strings, where each string corresponds to the name of a host Related options: [filter_scheduler] isolated_images [filter_scheduler] restrict_isolated_hosts_to_isolated_images isolated_images = [] list value List of UUIDs for images that can only be run on certain hosts. If there is a need to restrict some images to only run on certain designated hosts, list those image UUIDs here. Note that this setting only affects scheduling if the IsolatedHostsFilter filter is enabled. Possible values: A list of UUID strings, where each string corresponds to the UUID of an image Related options: [filter_scheduler] isolated_hosts [filter_scheduler] restrict_isolated_hosts_to_isolated_images max_instances_per_host = 50 integer value Maximum number of instances that can exist on a host. If you need to limit the number of instances on any given host, set this option to the maximum number of instances you want to allow. The NumInstancesFilter and AggregateNumInstancesFilter will reject any host that has at least as many instances as this option's value. Note that this setting only affects scheduling if the NumInstancesFilter or AggregateNumInstancesFilter filter is enabled. Possible values: An integer, where the integer corresponds to the max instances that can be scheduled on a host. Related options: [filter_scheduler] enabled_filters max_io_ops_per_host = 8 integer value The number of instances that can be actively performing IO on a host. Instances performing IO includes those in the following states: build, resize, snapshot, migrate, rescue, unshelve. Note that this setting only affects scheduling if the IoOpsFilter filter is enabled. Possible values: An integer, where the integer corresponds to the max number of instances that can be actively performing IO on any given host. Related options: [filter_scheduler] enabled_filters pci_weight_multiplier = 1.0 floating point value PCI device affinity weight multiplier. The PCI device affinity weighter computes a weighting based on the number of PCI devices on the host and the number of PCI devices requested by the instance. Note that this setting only affects scheduling if the PCIWeigher weigher and NUMATopologyFilter filter are enabled. Possible values: A positive integer or float value, where the value corresponds to the multiplier ratio for this weigher. Related options: [filter_scheduler] weight_classes ram_weight_multiplier = 1.0 floating point value RAM weight multipler ratio. This option determines how hosts with more or less available RAM are weighed. 
A positive value will result in the scheduler preferring hosts with more available RAM, and a negative number will result in the scheduler preferring hosts with less available RAM. Another way to look at it is that positive values for this option will tend to spread instances across many hosts, while negative values will tend to fill up (stack) hosts as much as possible before scheduling to a less-used host. The absolute value, whether positive or negative, controls how strong the RAM weigher is relative to other weighers. Note that this setting only affects scheduling if the RAMWeigher weigher is enabled. Possible values: An integer or float value, where the value corresponds to the multipler ratio for this weigher. Related options: [filter_scheduler] weight_classes restrict_isolated_hosts_to_isolated_images = True boolean value Prevent non-isolated images from being built on isolated hosts. Note that this setting only affects scheduling if the IsolatedHostsFilter filter is enabled. Even then, this option doesn't affect the behavior of requests for isolated images, which will always be restricted to isolated hosts. Related options: [filter_scheduler] isolated_images [filter_scheduler] isolated_hosts shuffle_best_same_weighed_hosts = False boolean value Enable spreading the instances between hosts with the same best weight. Enabling it is beneficial for cases when [filter_scheduler] host_subset_size is 1 (default), but there is a large number of hosts with same maximal weight. This scenario is common in Ironic deployments where there are typically many baremetal nodes with identical weights returned to the scheduler. In such case enabling this option will reduce contention and chances for rescheduling events. At the same time it will make the instance packing (even in unweighed case) less dense. soft_affinity_weight_multiplier = 1.0 floating point value Multiplier used for weighing hosts for group soft-affinity. Note that this setting only affects scheduling if the ServerGroupSoftAffinityWeigher weigher is enabled. Possible values: A non-negative integer or float value, where the value corresponds to weight multiplier for hosts with group soft affinity. Related options: [filter_scheduler] weight_classes soft_anti_affinity_weight_multiplier = 1.0 floating point value Multiplier used for weighing hosts for group soft-anti-affinity. Note that this setting only affects scheduling if the ServerGroupSoftAntiAffinityWeigher weigher is enabled. Possible values: A non-negative integer or float value, where the value corresponds to weight multiplier for hosts with group soft anti-affinity. Related options: [filter_scheduler] weight_classes track_instance_changes = True boolean value Enable querying of individual hosts for instance information. The scheduler may need information about the instances on a host in order to evaluate its filters and weighers. The most common need for this information is for the (anti-)affinity filters, which need to choose a host based on the instances already running on a host. If the configured filters and weighers do not need this information, disabling this option will improve performance. It may also be disabled when the tracking overhead proves too heavy, although this will cause classes requiring host usage data to query the database on each request instead. 
note:: Related options: [filter_scheduler] enabled_filters [workarounds] disable_group_policy_check_upcall weight_classes = ['nova.scheduler.weights.all_weighers'] list value Weighers that the scheduler will use. Only hosts which pass the filters are weighed. The weight for any host starts at 0, and the weighers order these hosts by adding to or subtracting from the weight assigned by the weigher. Weights may become negative. An instance will be scheduled to one of the N most-weighted hosts, where N is [filter_scheduler] host_subset_size . By default, this is set to all weighers that are included with Nova. Possible values: A list of zero or more strings, where each string corresponds to the name of a weigher that will be used for selecting a host 9.1.17. glance The following table outlines the options available under the [glance] group in the /etc/nova/nova.conf file. Table 9.16. glance Configuration option = Default value Type Description api_servers = None list value List of glance api servers endpoints available to nova. https is used for ssl-based glance api servers. Note The preferred mechanism for endpoint discovery is via keystoneauth1 loading options. Only use api_servers if you need multiple endpoints and are unable to use a load balancer for some reason. Possible values: A list of any fully qualified url of the form "scheme://hostname:port[/path]" (i.e. "http://10.0.1.0:9292" or "https://my.glance.server/image"). Deprecated since: 21.0.0 Reason: Support for image service configuration via standard keystoneauth1 Adapter options was added in the 17.0.0 Queens release. The api_servers option was retained temporarily to allow consumers time to cut over to a real load balancing solution. cafile = None string value PEM encoded Certificate Authority to use when verifying HTTPs connections. certfile = None string value PEM encoded client certificate cert file collect-timing = False boolean value Collect per-API call timing information. connect-retries = None integer value The maximum number of retries that should be attempted for connection errors. connect-retry-delay = None floating point value Delay (in seconds) between two retries for connection errors. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used. debug = False boolean value Enable or disable debug logging with glanceclient. default_trusted_certificate_ids = [] list value List of certificate IDs for certificates that should be trusted. May be used as a default list of trusted certificate IDs for certificate validation. The value of this option will be ignored if the user provides a list of trusted certificate IDs with an instance API request. The value of this option will be persisted with the instance data if signature verification and certificate validation are enabled and if the user did not provide an alternative list. If left empty when certificate validation is enabled the user must provide a list of trusted certificate IDs otherwise certificate validation will fail. Related options: The value of this option may be used if both verify_glance_signatures and enable_certificate_validation are enabled. enable_certificate_validation = False boolean value Enable certificate validation for image signature verification. During image signature verification nova will first verify the validity of the image's signing certificate using the set of trusted certificates associated with the instance. 
If certificate validation fails, signature verification will not be performed and the instance will be placed into an error state. This provides end users with stronger assurances that the image data is unmodified and trustworthy. If left disabled, image signature verification can still occur but the end user will not have any assurance that the signing certificate used to generate the image signature is still trustworthy. Related options: This option only takes effect if verify_glance_signatures is enabled. The value of default_trusted_certificate_ids may be used when this option is enabled. Deprecated since: 16.0.0 Reason: This option is intended to ease the transition for deployments leveraging image signature verification. The intended state long-term is for signature verification and certificate validation to always happen together. enable_rbd_download = False boolean value Enable download of Glance images directly via RBD. Allow compute hosts to quickly download and cache images locally, directly from Ceph, rather than relying on slow downloads from the Glance API. This can reduce download time for images in the tens to hundreds of GB from tens of minutes to tens of seconds, but requires a Ceph-based deployment and access from the compute nodes to Ceph. Related options: [glance] rbd_user [glance] rbd_connect_timeout [glance] rbd_pool [glance] rbd_ceph_conf endpoint-override = None string value Always use this endpoint URL for requests for this client. NOTE: The unversioned endpoint should be specified here; to request a particular API version, use the version , min-version , and/or max-version options. insecure = False boolean value Verify HTTPS connections. keyfile = None string value PEM encoded client certificate key file num_retries = 3 integer value Enable glance operation retries. Specifies the number of retries when uploading / downloading an image to / from glance. 0 means no retries. `rbd_ceph_conf = ` string value Path to the ceph configuration file to use. Related options: This option is only used if [glance] enable_rbd_download is set to True. rbd_connect_timeout = 5 integer value The RADOS client timeout in seconds when initially connecting to the cluster. Related options: This option is only used if [glance] enable_rbd_download is set to True. `rbd_pool = ` string value The RADOS pool in which the Glance images are stored as rbd volumes. Related options: This option is only used if [glance] enable_rbd_download is set to True. `rbd_user = ` string value The RADOS client name for accessing Glance images stored as rbd volumes. Related options: This option is only used if [glance] enable_rbd_download is set to True. region-name = None string value The default region_name for endpoint URL discovery. service-name = None string value The default service_name for endpoint URL discovery. service-type = image string value The default service_type for endpoint URL discovery. split-loggers = False boolean value Log requests to multiple loggers. status-code-retries = None integer value The maximum number of retries that should be attempted for retriable HTTP status codes. status-code-retry-delay = None floating point value Delay (in seconds) between two retries for retriable status codes. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used. timeout = None integer value Timeout value for http requests valid-interfaces = ['internal', 'public'] list value List of interfaces, in order of preference, for endpoint URL.
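As a minimal illustration of the direct RBD download options described above, a deployment that turns on enable_rbd_download might combine them roughly as follows; the user, pool and ceph.conf path shown are placeholder assumptions, not required values:

[glance]
# Fetch and cache images directly from Ceph instead of the Glance API
enable_rbd_download = True
# Placeholder Ceph client settings; substitute the values for your own cluster
rbd_user = glance
rbd_pool = images
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_connect_timeout = 5

All of the rbd_* options in this sketch are only read when enable_rbd_download is set to True.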
verify_glance_signatures = False boolean value Enable image signature verification. nova uses the image signature metadata from glance and verifies the signature of a signed image while downloading that image. If the image signature cannot be verified or if the image signature metadata is either incomplete or unavailable, then nova will not boot the image and instead will place the instance into an error state. This provides end users with stronger assurances of the integrity of the image data they are using to create servers. Related options: The options in the key_manager group, as the key_manager is used for the signature validation. Both enable_certificate_validation and default_trusted_certificate_ids below depend on this option being enabled. 9.1.18. guestfs The following table outlines the options available under the [guestfs] group in the /etc/nova/nova.conf file. Table 9.17. guestfs Configuration option = Default value Type Description debug = False boolean value Enables/disables guestfs logging. This configures guestfs to emit debug messages and push them to the OpenStack logging system. When set to True, it traces libguestfs API calls and enables verbose debug messages. In order to use the above feature, the "libguestfs" package must be installed. Related options: Since libguestfs accesses and modifies VMs managed by libvirt, the options below should be set to give access to those VMs. libvirt.inject_key libvirt.inject_partition libvirt.inject_password 9.1.19. healthcheck The following table outlines the options available under the [healthcheck] group in the /etc/nova/nova.conf file. Table 9.18. healthcheck Configuration option = Default value Type Description backends = [] list value Additional backends that can perform health checks and report that information back as part of a request. detailed = False boolean value Show more detailed information as part of the response. Security note: Enabling this option may expose sensitive details about the service being monitored. Be sure to verify that it will not violate your security policies. disable_by_file_path = None string value Check the presence of a file to determine if an application is running on a port. Used by DisableByFileHealthcheck plugin. disable_by_file_paths = [] list value Check the presence of a file based on a port to determine if an application is running on a port. Expects a "port:path" list of strings. Used by DisableByFilesPortsHealthcheck plugin. path = /healthcheck string value The path to respond to healthcheck requests on. 9.1.20. hyperv The following table outlines the options available under the [hyperv] group in the /etc/nova/nova.conf file. Table 9.19. hyperv Configuration option = Default value Type Description config_drive_cdrom = False boolean value Mount config drive as a CD drive. OpenStack can be configured to write instance metadata to a config drive, which is then attached to the instance before it boots. The config drive can be attached as a disk drive (default) or as a CD drive. Related options: This option is meaningful when the force_config_drive option is set to True or when the REST API call to create an instance includes the --config-drive=True flag. The config_drive_format option must be set to iso9660 in order to use a CD drive as the config drive image. To use config drive with Hyper-V, you must set the mkisofs_cmd value to the full path to an mkisofs.exe installation. Additionally, you must set the qemu_img_cmd value to the full path to a qemu-img command installation.
You can configure the Compute service to always create a configuration drive by setting the force_config_drive option to True . config_drive_inject_password = False boolean value Inject password to config drive. When enabled, the admin password will be available from the config drive image. Related options: This option is meaningful when used with other options that enable config drive usage with Hyper-V, such as force_config_drive . dynamic_memory_ratio = 1.0 floating point value Dynamic memory ratio Enables dynamic memory allocation (ballooning) when set to a value greater than 1. The value expresses the ratio between the total RAM assigned to an instance and its startup RAM amount. For example, a ratio of 2.0 for an instance with 1024MB of RAM implies 512MB of RAM allocated at startup. Possible values: 1.0: Disables dynamic memory allocation (Default). Float values greater than 1.0: Enables allocation of total implied RAM divided by this value for startup. enable_instance_metrics_collection = False boolean value Enable instance metrics collection Enables metrics collection for an instance by using Hyper-V's metric APIs. Collected data can be retrieved by other apps and services, e.g.: Ceilometer. enable_remotefx = False boolean value Enable RemoteFX feature This requires at least one DirectX 11 capable graphics adapter for Windows / Hyper-V Server 2012 R2 or newer and the RDS-Virtualization feature has to be enabled. Instances with RemoteFX can be requested with the following flavor extra specs: os:resolution . Guest VM screen resolution size. Acceptable values: 1024x768, 1280x1024, 1600x1200, 1920x1200, 2560x1600, 3840x2160 3840x2160 is only available on Windows / Hyper-V Server 2016. os:monitors . Guest VM number of monitors. Acceptable values: [1, 4] - Windows / Hyper-V Server 2012 R2 [1, 8] - Windows / Hyper-V Server 2016 os:vram . Guest VM VRAM amount. Only available on Windows / Hyper-V Server 2016. `instances_path_share = ` string value Instances path share The name of a Windows share mapped to the "instances_path" dir and used by the resize feature to copy files to the target host. If left blank, an administrative share (hidden network share) will be used, looking for the same "instances_path" used locally. Possible values: "": An administrative share will be used (Default). Name of a Windows share. Related options: "instances_path": The directory which will be used if this option is left blank. iscsi_initiator_list = [] list value List of iSCSI initiators that will be used for establishing iSCSI sessions. If none are specified, the Microsoft iSCSI initiator service will choose the initiator. limit_cpu_features = False boolean value Limit CPU features This flag is needed to support live migration to hosts with different CPU features and is checked during instance creation in order to limit the CPU features used by the instance. mounted_disk_query_retry_count = 10 integer value Mounted disk query retry count The number of times to retry checking for a mounted disk. The query runs until the device can be found or the retry count is reached. Possible values: Positive integer values. Values greater than 1 are recommended (Default: 10). Related options: The time interval between disk mount retries is declared with the "mounted_disk_query_retry_interval" option. mounted_disk_query_retry_interval = 5 integer value Mounted disk query retry interval Interval between checks for a mounted disk, in seconds. Possible values: Time in seconds (Default: 5).
Related options: This option is meaningful when the mounted_disk_query_retry_count is greater than 1. The retry loop runs with mounted_disk_query_retry_count and mounted_disk_query_retry_interval configuration options. power_state_check_timeframe = 60 integer value Power state check timeframe The timeframe to be checked for instance power state changes. This option is used to fetch the state of the instance from Hyper-V through the WMI interface, within the specified timeframe. Possible values: Timeframe in seconds (Default: 60). power_state_event_polling_interval = 2 integer value Power state event polling interval Instance power state change event polling frequency. Sets the listener interval for power state events to the given value. This option enhances the internal lifecycle notifications of instances that reboot themselves. It is unlikely that an operator has to change this value. Possible values: Time in seconds (Default: 2). qemu_img_cmd = qemu-img.exe string value qemu-img command qemu-img is required for some of the image-related operations like converting between different image types. You can get it from here: ( http://qemu.weilnetz.de/ ) or you can install the Cloudbase OpenStack Hyper-V Compute Driver ( https://cloudbase.it/openstack-hyperv-driver/ ) which automatically sets the proper path for this config option. You can either give the full path of qemu-img.exe or set its path in the PATH environment variable and leave this option at the default value. Possible values: Name of the qemu-img executable, in case it is in the same directory as the nova-compute service or its path is in the PATH environment variable (Default). Path of qemu-img command (DRIVELETTER:\PATH\TO\QEMU-IMG\COMMAND). Related options: If the config_drive_cdrom option is False, qemu-img will be used to convert the ISO to a VHD, otherwise the config drive will remain an ISO. To use config drive with Hyper-V, you must set the mkisofs_cmd value to the full path to an mkisofs.exe installation. use_multipath_io = False boolean value Use multipath connections when attaching iSCSI or FC disks. This requires the Multipath IO Windows feature to be enabled. MPIO must be configured to claim such devices. volume_attach_retry_count = 10 integer value Volume attach retry count The number of times to retry attaching a volume. Volume attachment is retried until success or the given retry count is reached. Possible values: Positive integer values (Default: 10). Related options: The time interval between attachment attempts is declared with the volume_attach_retry_interval option. volume_attach_retry_interval = 5 integer value Volume attach retry interval Interval between volume attachment attempts, in seconds. Possible values: Time in seconds (Default: 5). Related options: This option is meaningful when volume_attach_retry_count is greater than 1. The retry loop runs with volume_attach_retry_count and volume_attach_retry_interval configuration options. vswitch_name = None string value External virtual switch name The Hyper-V Virtual Switch is a software-based layer-2 Ethernet network switch that is available with the installation of the Hyper-V server role. The switch includes programmatically managed and extensible capabilities to connect virtual machines to both virtual networks and the physical network. In addition, Hyper-V Virtual Switch provides policy enforcement for security, isolation, and service levels. The vSwitch represented by this config option must be an external one (not internal or private).
Possible values: If not provided, the first of a list of available vswitches is used. This list is queried using WQL. Virtual switch name. wait_soft_reboot_seconds = 60 integer value Wait soft reboot seconds Number of seconds to wait for an instance to shut down after a soft reboot request is made. We fall back to hard reboot if the instance does not shut down within this window. Possible values: Time in seconds (Default: 60). 9.1.21. image_cache The following table outlines the options available under the [image_cache] group in the /etc/nova/nova.conf file. Table 9.20. image_cache Configuration option = Default value Type Description manager_interval = 2400 integer value Number of seconds to wait between runs of the image cache manager. Note that when using shared storage for the [DEFAULT]/instances_path configuration option across multiple nova-compute services, this periodic task could process a large number of instances. Similarly, using a compute driver that manages a cluster (like vmwareapi.VMwareVCDriver) could result in processing a large number of instances. Therefore you may need to adjust the time interval for the anticipated load, or only run on one nova-compute service within a shared storage aggregate. Possible values: 0: run at the default interval of 60 seconds (not recommended) -1: disable Any other value Related options: [DEFAULT]/compute_driver [DEFAULT]/instances_path precache_concurrency = 1 integer value Maximum number of compute hosts to trigger image precaching in parallel. When an image precache request is made, compute nodes will be contacted to initiate the download. This number constrains the number of those that will happen in parallel. Higher numbers will cause more computes to work in parallel and may result in reduced time to complete the operation, but may also DDoS the image service. Lower numbers will result in more sequential operation, lower image service load, but likely longer runtime to completion. remove_unused_base_images = True boolean value Should unused base images be removed? remove_unused_original_minimum_age_seconds = 86400 integer value Unused unresized base images younger than this will not be removed. remove_unused_resized_minimum_age_seconds = 3600 integer value Unused resized base images younger than this will not be removed. subdirectory_name = _base string value Location of cached images. This is NOT the full path - just a folder name relative to $instances_path . For per-compute-host cached images, set to _base_$my_ip 9.1.22. ironic The following table outlines the options available under the [ironic] group in the /etc/nova/nova.conf file. Table 9.21. ironic Configuration option = Default value Type Description api_max_retries = 60 integer value The number of times to retry when a request conflicts. If set to 0, only try once, no retries. Related options: api_retry_interval api_retry_interval = 2 integer value The number of seconds to wait before retrying the request. Related options: api_max_retries auth-url = None string value Authentication URL auth_section = None string value Config Section from which to load plugin specific options auth_type = None string value Authentication type to load cafile = None string value PEM encoded Certificate Authority to use when verifying HTTPs connections. certfile = None string value PEM encoded client certificate cert file collect-timing = False boolean value Collect per-API call timing information.
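As a minimal sketch of how the ironic retry options described above interact, the defaults retry a conflicting request every 2 seconds up to 60 times, i.e. for roughly two minutes before giving up (the values shown are the documented defaults, repeated here only for illustration):

[ironic]
# Retry conflicting requests up to 60 times, waiting 2 seconds between attempts
api_max_retries = 60
api_retry_interval = 2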
connect-retries = None integer value The maximum number of retries that should be attempted for connection errors. connect-retry-delay = None floating point value Delay (in seconds) between two retries for connection errors. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used. domain-id = None string value Domain ID to scope to domain-name = None string value Domain name to scope to endpoint-override = None string value Always use this endpoint URL for requests for this client. NOTE: The unversioned endpoint should be specified here; to request a particular API version, use the version , min-version , and/or max-version options. insecure = False boolean value Verify HTTPS connections. keyfile = None string value PEM encoded client certificate key file partition_key = None string value Case-insensitive key to limit the set of nodes that may be managed by this service to the set of nodes in Ironic which have a matching conductor_group property. If unset, all available nodes will be eligible to be managed by this service. Note that setting this to the empty string ( "" ) will match the default conductor group, and is different than leaving the option unset. password = None string value User's password peer_list = [] list value List of hostnames for all nova-compute services (including this host) with this partition_key config value. Nodes matching the partition_key value will be distributed between all services specified here. If partition_key is unset, this option is ignored. project-domain-id = None string value Domain ID containing project project-domain-name = None string value Domain name containing project project-id = None string value Project ID to scope to project-name = None string value Project name to scope to region-name = None string value The default region_name for endpoint URL discovery. serial_console_state_timeout = 10 integer value Timeout (seconds) to wait for node serial console state changed. Set to 0 to disable timeout. service-name = None string value The default service_name for endpoint URL discovery. service-type = baremetal string value The default service_type for endpoint URL discovery. split-loggers = False boolean value Log requests to multiple loggers. status-code-retries = None integer value The maximum number of retries that should be attempted for retriable HTTP status codes. status-code-retry-delay = None floating point value Delay (in seconds) between two retries for retriable status codes. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used. system-scope = None string value Scope for system operations timeout = None integer value Timeout value for http requests trust-id = None string value Trust ID user-domain-id = None string value User's domain id user-domain-name = None string value User's domain name user-id = None string value User ID username = None string value Username valid-interfaces = ['internal', 'public'] list value List of interfaces, in order of preference, for endpoint URL. 9.1.23. key_manager The following table outlines the options available under the [key_manager] group in the /etc/nova/nova.conf file. Table 9.22. key_manager Configuration option = Default value Type Description auth_type = None string value The type of authentication credential to create. Possible values are token , password , keystone_token , and keystone_password . Required if no context is passed to the credential factory. 
auth_url = None string value Use this endpoint to connect to Keystone. backend = barbican string value Specify the key manager implementation. Options are "barbican" and "vault". Default is "barbican". Will support the values earlier set using [key_manager]/api_class for some time. domain_id = None string value Domain ID for domain scoping. Optional for keystone_token and keystone_password auth_type. domain_name = None string value Domain name for domain scoping. Optional for keystone_token and keystone_password auth_type. fixed_key = None string value Fixed key returned by key manager, specified in hex. Possible values: Empty string or a key in hex value password = None string value Password for authentication. Required for password and keystone_password auth_type. project_domain_id = None string value Project's domain ID for project. Optional for keystone_token and keystone_password auth_type. project_domain_name = None string value Project's domain name for project. Optional for keystone_token and keystone_password auth_type. project_id = None string value Project ID for project scoping. Optional for keystone_token and keystone_password auth_type. project_name = None string value Project name for project scoping. Optional for keystone_token and keystone_password auth_type. reauthenticate = True boolean value Allow fetching a new token if the current one is going to expire. Optional for keystone_token and keystone_password auth_type. token = None string value Token for authentication. Required for token and keystone_token auth_type if no context is passed to the credential factory. trust_id = None string value Trust ID for trust scoping. Optional for keystone_token and keystone_password auth_type. user_domain_id = None string value User's domain ID for authentication. Optional for keystone_token and keystone_password auth_type. user_domain_name = None string value User's domain name for authentication. Optional for keystone_token and keystone_password auth_type. user_id = None string value User ID for authentication. Optional for keystone_token and keystone_password auth_type. username = None string value Username for authentication. Required for password auth_type. Optional for the keystone_password auth_type. 9.1.24. keystone The following table outlines the options available under the [keystone] group in the /etc/nova/nova.conf file. Table 9.23. keystone Configuration option = Default value Type Description cafile = None string value PEM encoded Certificate Authority to use when verifying HTTPs connections. certfile = None string value PEM encoded client certificate cert file collect-timing = False boolean value Collect per-API call timing information. connect-retries = None integer value The maximum number of retries that should be attempted for connection errors. connect-retry-delay = None floating point value Delay (in seconds) between two retries for connection errors. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used. endpoint-override = None string value Always use this endpoint URL for requests for this client. NOTE: The unversioned endpoint should be specified here; to request a particular API version, use the version , min-version , and/or max-version options. insecure = False boolean value Verify HTTPS connections. keyfile = None string value PEM encoded client certificate key file region-name = None string value The default region_name for endpoint URL discovery. 
service-name = None string value The default service_name for endpoint URL discovery. service-type = identity string value The default service_type for endpoint URL discovery. split-loggers = False boolean value Log requests to multiple loggers. status-code-retries = None integer value The maximum number of retries that should be attempted for retriable HTTP status codes. status-code-retry-delay = None floating point value Delay (in seconds) between two retries for retriable status codes. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used. timeout = None integer value Timeout value for http requests valid-interfaces = ['internal', 'public'] list value List of interfaces, in order of preference, for endpoint URL. 9.1.25. keystone_authtoken The following table outlines the options available under the [keystone_authtoken] group in the /etc/nova/nova.conf file. Table 9.24. keystone_authtoken Configuration option = Default value Type Description auth_section = None string value Config Section from which to load plugin specific options auth_type = None string value Authentication type to load auth_uri = None string value Complete "public" Identity API endpoint. This endpoint should not be an "admin" endpoint, as it should be accessible by all end users. Unauthenticated clients are redirected to this endpoint to authenticate. Although this endpoint should ideally be unversioned, client support in the wild varies. If you're using a versioned v2 endpoint here, then this should not be the same endpoint the service user utilizes for validating tokens, because normal end users may not be able to reach that endpoint. This option is deprecated in favor of www_authenticate_uri and will be removed in the S release. Deprecated since: Queens Reason: The auth_uri option is deprecated in favor of www_authenticate_uri and will be removed in the S release. auth_version = None string value API version of the Identity API endpoint. cache = None string value Request environment key where the Swift cache object is stored. When auth_token middleware is deployed with a Swift cache, use this option to have the middleware share a caching backend with swift. Otherwise, use the memcached_servers option instead. cafile = None string value A PEM encoded Certificate Authority to use when verifying HTTPs connections. Defaults to system CAs. certfile = None string value Required if identity server requires client certificate delay_auth_decision = False boolean value Do not handle authorization requests within the middleware, but delegate the authorization decision to downstream WSGI components. enforce_token_bind = permissive string value Used to control the use and type of token binding. Can be set to: "disabled" to not check token binding. "permissive" (default) to validate binding information if the bind type is of a form known to the server and ignore it if not. "strict" like "permissive" but if the bind type is unknown the token will be rejected. "required" any form of token binding is needed to be allowed. Finally the name of a binding method that must be present in tokens. http_connect_timeout = None integer value Request timeout value for communicating with Identity API server. http_request_max_retries = 3 integer value How many times to retry connecting when communicating with the Identity API Server. include_service_catalog = True boolean value (Optional) Indicate whether to set the X-Service-Catalog header.
If False, middleware will not ask for service catalog on token validation and will not set the X-Service-Catalog header. insecure = False boolean value Verify HTTPS connections. interface = internal string value Interface to use for the Identity API endpoint. Valid values are "public", "internal" (default) or "admin". keyfile = None string value Required if identity server requires client certificate memcache_pool_conn_get_timeout = 10 integer value (Optional) Number of seconds that an operation will wait to get a memcached client connection from the pool. memcache_pool_dead_retry = 300 integer value (Optional) Number of seconds memcached server is considered dead before it is tried again. memcache_pool_maxsize = 10 integer value (Optional) Maximum total number of open connections to every memcached server. memcache_pool_socket_timeout = 3 integer value (Optional) Socket timeout in seconds for communicating with a memcached server. memcache_pool_unused_timeout = 60 integer value (Optional) Number of seconds a connection to memcached is held unused in the pool before it is closed. memcache_secret_key = None string value (Optional, mandatory if memcache_security_strategy is defined) This string is used for key derivation. memcache_security_strategy = None string value (Optional) If defined, indicate whether token data should be authenticated or authenticated and encrypted. If MAC, token data is authenticated (with HMAC) in the cache. If ENCRYPT, token data is encrypted and authenticated in the cache. If the value is not one of these options or empty, auth_token will raise an exception on initialization. memcache_use_advanced_pool = False boolean value (Optional) Use the advanced (eventlet safe) memcached client pool. The advanced pool will only work under python 2.x. memcached_servers = None list value Optionally specify a list of memcached server(s) to use for caching. If left undefined, tokens will instead be cached in-process. region_name = None string value The region in which the identity server can be found. service_token_roles = ['service'] list value A choice of roles that must be present in a service token. Service tokens are allowed to request that an expired token can be used and so this check should tightly control that only actual services should be sending this token. Roles here are applied as an ANY check so any role in this list must be present. For backwards compatibility reasons this currently only affects the allow_expired check. service_token_roles_required = False boolean value For backwards compatibility reasons we must let valid service tokens pass that don't pass the service_token_roles check as valid. Setting this true will become the default in a future release and should be enabled if possible. service_type = None string value The name or type of the service as it appears in the service catalog. This is used to validate tokens that have restricted access rules. token_cache_time = 300 integer value In order to prevent excessive effort spent validating tokens, the middleware caches previously-seen tokens for a configurable duration (in seconds). Set to -1 to disable caching completely. www_authenticate_uri = None string value Complete "public" Identity API endpoint. This endpoint should not be an "admin" endpoint, as it should be accessible by all end users. Unauthenticated clients are redirected to this endpoint to authenticate. Although this endpoint should ideally be unversioned, client support in the wild varies. 
If you're using a versioned v2 endpoint here, then this should not be the same endpoint the service user utilizes for validating tokens, because normal end users may not be able to reach that endpoint. 9.1.26. libvirt The following table outlines the options available under the [libvirt] group in the /etc/nova/nova.conf file. Table 9.25. libvirt Configuration option = Default value Type Description `connection_uri = ` string value Overrides the default libvirt URI of the chosen virtualization type. If set, Nova will use this URI to connect to libvirt. Possible values: A URI like qemu:///system . Related options: virt_type : Influences what is used as default value here. cpu_mode = None string value Used to set the CPU mode an instance should have. If virt_type="kvm|qemu" , it will default to host-model , otherwise it will default to none . Related options: cpu_models : This should be set ONLY when cpu_mode is set to custom . Otherwise, it would result in an error and the instance launch will fail. cpu_model_extra_flags = [] list value Enable or disable guest CPU flags. To explicitly enable or disable CPU flags, use the +flag or -flag notation - the + sign will enable the CPU flag for the guest, while a - sign will disable it. If neither + nor - is specified, the flag will be enabled, which is the default behaviour. For example, if you specify the flags -hle , -rtm , +ssbd and mtrr (assuming the said CPU model and features are supported by the host hardware and software), Nova will disable the hle and rtm flags for the guest; and it will enable ssbd and mtrr (the latter because it was specified with neither + nor - prefix). The CPU flags are case-insensitive. In another example, the pdpe1gb flag can be disabled for the guest while the vmx and pcid flags are enabled by specifying -pdpe1gb , +vmx , pcid . Specifying extra CPU flags is valid in combination with all three possible values of the cpu_mode config attribute: custom (this also requires an explicit CPU model to be specified via the cpu_models config attribute), host-model , or host-passthrough . There can be scenarios where you may need to configure extra CPU flags even for host-passthrough CPU mode, because sometimes QEMU may disable certain CPU features. An example of this is Intel's "invtsc" (Invariable Time Stamp Counter) CPU flag - if you need to expose this flag to a Nova instance, you need to explicitly enable it. The possible values for cpu_model_extra_flags depend on the CPU model in use. Refer to /usr/share/libvirt/cpu_map/*.xml for possible CPU feature flags for a given CPU model. A special note on a particular CPU flag: pcid (an Intel processor feature that alleviates guest performance degradation as a result of applying the Meltdown CVE fixes). When configuring this flag with the custom CPU mode, not all CPU models (as defined by QEMU and libvirt) need it: The only virtual CPU models that include the pcid capability are Intel "Haswell", "Broadwell", and "Skylake" variants. The libvirt / QEMU CPU models "Nehalem", "Westmere", "SandyBridge", and "IvyBridge" will not expose the pcid capability by default, even if the host CPUs by the same name include it. I.e. PCID needs to be explicitly specified when using the said virtual CPU models. The libvirt driver's default CPU mode, host-model , will do the right thing with respect to handling the PCID CPU flag for the guest - assuming you are running updated processor microcode, host and guest kernel, libvirt, and QEMU.
The other mode, host-passthrough , checks if PCID is available in the hardware, and if so directly passes it through to the Nova guests. Thus, in the context of PCID , with either of these CPU modes ( host-model or host-passthrough ), there is no need to use the cpu_model_extra_flags . Related options: cpu_mode cpu_models cpu_models = [] list value An ordered list of CPU models the host supports. It is expected that the list is ordered so that the more common and less advanced CPU models are listed earlier. Here is an example: SandyBridge,IvyBridge,Haswell,Broadwell , where the latter CPU model's feature set is richer than that of the former. Possible values: The named CPU models can be found via virsh cpu-models ARCH , where ARCH is your host architecture. Related options: cpu_mode : This should be set to custom ONLY when you want to configure (via cpu_models ) a specific named CPU model. Otherwise, it would result in an error and the instance launch will fail. virt_type : Only the virtualization types kvm and qemu use this. Note: Be careful to only specify models which can be fully supported in hardware. device_detach_attempts = 8 integer value Maximum number of attempts the driver tries to detach a device in libvirt. Related options: [libvirt] device_detach_timeout device_detach_timeout = 20 integer value Maximum number of seconds the driver waits for the success or the failure event from libvirt for a given device detach attempt before it re-triggers the detach. Related options: [libvirt] device_detach_attempts disk_cachemodes = [] list value Specific cache modes to use for different disk types. For example: file=directsync,block=none,network=writeback For local or direct-attached storage, it is recommended that you use writethrough (default) mode, as it ensures data integrity and has acceptable I/O performance for applications running in the guest, especially for read operations. However, caching mode none is recommended for remote NFS storage, because direct I/O operations (O_DIRECT) perform better than synchronous I/O operations (with O_SYNC). Caching mode none effectively turns all guest I/O operations into direct I/O operations on the host, which is the NFS client in this environment. Possible cache modes: default: "It Depends" - For Nova-managed disks, none , if the host file system is capable of Linux's O_DIRECT semantics; otherwise writeback . For volume drivers, the default is driver-dependent: none for everything except for SMBFS and Virtuzzo (which use writeback ). none: With caching mode set to none, the host page cache is disabled, but the disk write cache is enabled for the guest. In this mode, the write performance in the guest is optimal because write operations bypass the host page cache and go directly to the disk write cache. If the disk write cache is battery-backed, or if the applications or storage stack in the guest transfer data properly (either through fsync operations or file system barriers), then data integrity can be ensured. However, because the host page cache is disabled, the read performance in the guest would not be as good as in the modes where the host page cache is enabled, such as writethrough mode. Shareable disk devices, like for a multi-attachable block storage volume, will have their cache mode set to none regardless of configuration. writethrough: With caching set to writethrough mode, the host page cache is enabled, but the disk write cache is disabled for the guest.
Consequently, this caching mode ensures data integrity even if the applications and storage stack in the guest do not transfer data to permanent storage properly (either through fsync operations or file system barriers). Because the host page cache is enabled in this mode, the read performance for applications running in the guest is generally better. However, the write performance might be reduced because the disk write cache is disabled. writeback: With caching set to writeback mode, both the host page cache and the disk write cache are enabled for the guest. Because of this, the I/O performance for applications running in the guest is good, but the data is not protected in a power failure. As a result, this caching mode is recommended only for temporary data where potential data loss is not a concern. NOTE: Certain backend disk mechanisms may provide safe writeback cache semantics. Specifically those that bypass the host page cache, such as QEMU's integrated RBD driver. Ceph documentation recommends setting this to writeback for maximum performance while maintaining data safety. directsync: Like "writethrough", but it bypasses the host page cache. unsafe: Caching mode of unsafe ignores cache transfer operations completely. As its name implies, this caching mode should be used only for temporary data where data loss is not a concern. This mode can be useful for speeding up guest installations, but you should switch to another caching mode in production environments. disk_prefix = None string value Override the default disk prefix for the devices attached to an instance. If set, this is used to identify a free disk device name for a bus. Possible values: Any prefix which will result in a valid disk device name like sda or hda for example. This is only necessary if the device names differ from the commonly known device name prefixes for a virtualization type such as: sd, xvd, uvd, vd. Related options: virt_type : Influences which device type is used, which determines the default disk prefix. enabled_perf_events = [] list value Performance events to monitor and collect statistics for. This will allow you to specify a list of events to monitor low-level performance of guests, and collect related statistics via the libvirt driver, which in turn uses the Linux kernel's perf infrastructure. With this config attribute set, Nova will generate libvirt guest XML to monitor the specified events. For example, you can enable events that monitor the count of CPU cycles (total/elapsed) and the count of cache misses. Possible values: A string list. The list of supported events can be found at https://libvirt.org/formatdomain.html#elementsPerf . Note that the Intel CMT events ( cmt , mbmbt and mbml ) are unsupported by recent Linux kernel versions (4.14+) and will be ignored by nova. file_backed_memory = 0 integer value Available capacity in MiB for file-backed memory. Set to 0 to disable file-backed memory. When enabled, instances will create memory files in the directory specified in /etc/libvirt/qemu.conf 's memory_backing_dir option. The default location is /var/lib/libvirt/qemu/ram . When enabled, the value defined for this option is reported as the node memory capacity. Compute node system memory will be used as a cache for file-backed memory, via the kernel's pagecache mechanism. Note: This feature is not compatible with hugepages. Note: This feature is not compatible with memory overcommit. Related options: virt_type must be set to kvm or qemu . ram_allocation_ratio must be set to 1.0.
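A minimal sketch of enabling file-backed memory as described above; the 1048576 MiB (1 TiB) capacity is an arbitrary placeholder, and the related options follow the constraints listed in the entry:

[libvirt]
# Report 1 TiB of file-backed memory as the node memory capacity (placeholder size)
file_backed_memory = 1048576
virt_type = kvm

[DEFAULT]
# Required when file-backed memory is enabled
ram_allocation_ratio = 1.0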
gid_maps = [] list value List of gid targets and ranges. Syntax is guest-gid:host-gid:count. Maximum of 5 allowed. hw_disk_discard = None string value Discard option for nova managed disks. Requires: Libvirt >= 1.0.6 Qemu >= 1.5 (raw format) Qemu >= 1.6 (qcow2 format) hw_machine_type = None list value For qemu or KVM guests, set this option to specify a default machine type per host architecture. You can find a list of supported machine types in your environment by checking the output of the virsh capabilities command. The format of the value for this config option is host-arch=machine-type . For example: x86_64=machinetype1,armv7l=machinetype2 . `images_rbd_ceph_conf = ` string value Path to the ceph configuration file to use images_rbd_glance_copy_poll_interval = 15 integer value The interval in seconds with which to poll Glance after asking for it to copy an image to the local rbd store. This affects how often we ask Glance to report on copy completion, and thus should be short enough that we notice quickly, but not so aggressive that we generate undue load on the Glance server. Related options: images_type - must be set to rbd images_rbd_glance_store_name - must be set to a store name images_rbd_glance_copy_timeout = 600 integer value The overall maximum time we will wait for Glance to complete an image copy to our local rbd store. This should be long enough to allow large images to be copied over the network link between our local store and the one where images typically reside. The downside of setting this too long is that it takes longer to notice the case where the image copy is stalled or proceeding too slowly to be useful. Actual errors will be reported by Glance and noticed according to the poll interval. Related options: images_type - must be set to rbd images_rbd_glance_store_name - must be set to a store name images_rbd_glance_copy_poll_interval - controls the failure time-to-notice `images_rbd_glance_store_name = ` string value The name of the Glance store that represents the rbd cluster in use by this node. If set, this will allow Nova to request that Glance copy an image from an existing non-local store into the one named by this option before booting so that proper Copy-on-Write behavior is maintained. Related options: images_type - must be set to rbd images_rbd_glance_copy_poll_interval - controls the status poll frequency images_rbd_glance_copy_timeout - controls the overall copy timeout images_rbd_pool = rbd string value The RADOS pool in which rbd volumes are stored images_type = default string value VM Images format. If default is specified, then use_cow_images flag is used instead of this one. Related options: compute.use_cow_images images_volume_group [workarounds]/ensure_libvirt_rbd_instance_dir_cleanup compute.force_raw_images images_volume_group = None string value LVM Volume Group that is used for VM images, when you specify images_type=lvm Related options: images_type inject_key = False boolean value Allow the injection of an SSH key at boot time. There is no agent needed within the image to do this. If libguestfs is available on the host, it will be used. Otherwise nbd is used. The file system of the image will be mounted and the SSH key, which is provided in the REST API call, will be injected as SSH key for the root user and appended to the authorized_keys of that user. The SELinux context will be set if necessary. Be aware that the injection is not possible when the instance gets launched from a volume.
This config option will enable directly modifying the instance disk and does not affect what cloud-init may do using data from config_drive option or the metadata service. Linux distribution guest only. Related options: inject_partition : That option will decide about the discovery and usage of the file system. It also can disable the injection at all. inject_partition = -2 integer value Determines how the file system is chosen to inject data into it. libguestfs is used to inject data. If libguestfs is not able to determine the root partition (because there are more or less than one root partition) or cannot mount the file system it will result in an error and the instance won't boot. Possible values: -2 ⇒ disable the injection of data. -1 ⇒ find the root partition with the file system to mount with libguestfs 0 ⇒ The image is not partitioned >0 ⇒ The number of the partition to use for the injection Linux distribution guest only. Related options: inject_key : If this option allows the injection of a SSH key it depends on value greater or equal to -1 for inject_partition . inject_password : If this option allows the injection of an admin password it depends on value greater or equal to -1 for inject_partition . [guestfs]/debug You can enable the debug log level of libguestfs with this config option. A more verbose output will help in debugging issues. virt_type : If you use lxc as virt_type it will be treated as a single partition image inject_password = False boolean value Allow the injection of an admin password for instance only at create and rebuild process. There is no agent needed within the image to do this. If libguestfs is available on the host, it will be used. Otherwise nbd is used. The file system of the image will be mounted and the admin password, which is provided in the REST API call will be injected as password for the root user. If no root user is available, the instance won't be launched and an error is thrown. Be aware that the injection is not possible when the instance gets launched from a volume. Linux distribution guest only. Possible values: True: Allows the injection. False: Disallows the injection. Any via the REST API provided admin password will be silently ignored. Related options: inject_partition : That option will decide about the discovery and usage of the file system. It also can disable the injection at all. iscsi_iface = None string value The iSCSI transport iface to use to connect to target in case offload support is desired. Default format is of the form <transport_name>.<hwaddress> , where <transport_name> is one of ( be2iscsi , bnx2i , cxgb3i , cxgb4i , qla4xxx , ocs , tcp ) and <hwaddress> is the MAC address of the interface and can be generated via the iscsiadm -m iface command. Do not confuse the iscsi_iface parameter to be provided here with the actual transport name. iser_use_multipath = False boolean value Use multipath connection of the iSER volume. iSER volumes can be connected as multipath devices. This will provide high availability and fault tolerance. live_migration_bandwidth = 0 integer value Maximum bandwidth(in MiB/s) to be used during migration. If set to 0, the hypervisor will choose a suitable default. Some hypervisors do not support this feature and will return an error if bandwidth is not 0. Please refer to the libvirt documentation for further details. live_migration_completion_timeout = 800 integer value Time to wait, in seconds, for migration to successfully complete transferring data before aborting the operation. 
Value is per GiB of guest RAM + disk to be transferred, with lower bound of a minimum of 2 GiB. Should usually be larger than downtime delay * downtime steps. Set to 0 to disable timeouts. Related options: live_migration_downtime live_migration_downtime_steps live_migration_downtime_delay live_migration_downtime = 500 integer value Maximum permitted downtime, in milliseconds, for live migration switchover. Will be rounded up to a minimum of 100ms. You can increase this value if you want to allow live-migrations to complete faster, or avoid live-migration timeout errors by allowing the guest to be paused for longer during the live-migration switch over. Related options: live_migration_completion_timeout live_migration_downtime_delay = 75 integer value Time to wait, in seconds, between each step increase of the migration downtime. Minimum delay is 3 seconds. Value is per GiB of guest RAM + disk to be transferred, with lower bound of a minimum of 2 GiB per device. live_migration_downtime_steps = 10 integer value Number of incremental steps to reach max downtime value. Will be rounded up to a minimum of 3 steps. live_migration_inbound_addr = None host address value IP address used as the live migration address for this host. This option indicates the IP address which should be used as the target for live migration traffic when migrating to this hypervisor. This metadata is then used by the source of the live migration traffic to construct a migration URI. If this option is set to None, the hostname of the migration target compute node will be used. This option is useful in environments where the live-migration traffic can impact the network plane significantly. A separate network for live-migration traffic can then use this config option and avoids the impact on the management network. live_migration_permit_auto_converge = False boolean value This option allows nova to start live migration with auto converge on. Auto converge throttles down CPU if a progress of on-going live migration is slow. Auto converge will only be used if this flag is set to True and post copy is not permitted or post copy is unavailable due to the version of libvirt and QEMU in use. Related options: live_migration_permit_post_copy live_migration_permit_post_copy = False boolean value This option allows nova to switch an on-going live migration to post-copy mode, i.e., switch the active VM to the one on the destination node before the migration is complete, therefore ensuring an upper bound on the memory that needs to be transferred. Post-copy requires libvirt>=1.3.3 and QEMU>=2.5.0. When permitted, post-copy mode will be automatically activated if we reach the timeout defined by live_migration_completion_timeout and live_migration_timeout_action is set to force_complete . Note if you change to no timeout or choose to use abort , i.e. live_migration_completion_timeout = 0 , then there will be no automatic switch to post-copy. The live-migration force complete API also uses post-copy when permitted. If post-copy mode is not available, force complete falls back to pausing the VM to ensure the live-migration operation will complete. When using post-copy mode, if the source and destination hosts lose network connectivity, the VM being live-migrated will need to be rebooted. For more details, please see the Administration guide. Related options: live_migration_permit_auto_converge live_migration_timeout_action live_migration_scheme = None string value URI scheme for live migration used by the source of live migration traffic. 
Override the default libvirt live migration scheme (which is dependent on virt_type). If this option is set to None, nova will automatically choose a sensible default based on the hypervisor. It is not recommended that you change this unless you are very sure that the hypervisor supports a particular scheme. Related options: virt_type : This option is meaningful only when virt_type is set to kvm or qemu . live_migration_uri : If live_migration_uri value is not None, the scheme used for live migration is taken from live_migration_uri instead. live_migration_timeout_action = abort string value This option will be used to determine what action will be taken against a VM after live_migration_completion_timeout expires. By default, the live migrate operation will be aborted after completion timeout. If it is set to force_complete , the compute service will either pause the VM or trigger post-copy depending on whether post copy is enabled and available ( live_migration_permit_post_copy is set to True). Related options: live_migration_completion_timeout live_migration_permit_post_copy live_migration_tunnelled = False boolean value Enable tunnelled migration. This option enables the tunnelled migration feature, where migration data is transported over the libvirtd connection. If enabled, we use the VIR_MIGRATE_TUNNELLED migration flag, avoiding the need to configure the network to allow direct hypervisor to hypervisor communication. If False, use the native transport. If not set, Nova will choose a sensible default based on, for example, the availability of native encryption support in the hypervisor. Enabling this option will definitely impact performance massively. Note that this option is NOT compatible with use of block migration. Deprecated since: 23.0.0 Reason: The "tunnelled live migration" has two inherent limitations: it cannot handle live migration of disks in a non-shared storage setup; and it has a huge performance cost. Both these problems are solved by live_migration_with_native_tls (requires a pre-configured TLS environment), which is the recommended approach for securing all live migration streams. live_migration_uri = None string value Live migration target URI used by the source of live migration traffic. Override the default libvirt live migration target URI (which is dependent on virt_type). Any included "%s" is replaced with the migration target hostname, or live_migration_inbound_addr if set. If this option is set to None (which is the default), Nova will automatically generate the live_migration_uri value based on the supported virt_type values in the following list: kvm : qemu+tcp://%s/system qemu : qemu+tcp://%s/system parallels : parallels+tcp://%s/system Related options: live_migration_inbound_addr : If live_migration_inbound_addr value is not None and live_migration_tunnelled is False, the ip/hostname address of target compute node is used instead of live_migration_uri as the uri for live migration. live_migration_scheme : If live_migration_uri is not set, the scheme used for live migration is taken from live_migration_scheme instead. Deprecated since: 15.0.0 Reason: live_migration_uri is deprecated for removal in favor of two other options that allow changing the live migration scheme and target URI: live_migration_scheme and live_migration_inbound_addr respectively. live_migration_with_native_tls = False boolean value Use QEMU-native TLS encryption when live migrating.
This option will allow both migration stream (guest RAM plus device state) and disk stream to be transported over native TLS, i.e. TLS support built into QEMU. Prerequisite: TLS environment is configured correctly on all relevant Compute nodes. This means that the Certificate Authority (CA), server and client certificates, their corresponding keys, and their file permissions are in place, and are validated. Notes: To have encryption for migration stream and disk stream (also called: "block migration"), live_migration_with_native_tls is the preferred config attribute instead of live_migration_tunnelled . The live_migration_tunnelled option will be deprecated in the long term for two main reasons: (a) it incurs a huge performance penalty; and (b) it is not compatible with block migration. Therefore, if your compute nodes have at least libvirt 4.4.0 and QEMU 2.11.0, it is strongly recommended to use live_migration_with_native_tls . The live_migration_tunnelled and live_migration_with_native_tls should not be used at the same time. Unlike live_migration_tunnelled , live_migration_with_native_tls is compatible with block migration. That is, with this option, the NBD stream, over which disks are migrated to a target host, will be encrypted. Related options: live_migration_tunnelled : This transports migration stream (but not disk stream) over libvirtd. max_queues = None integer value The maximum number of virtio queue pairs that can be enabled when creating a multiqueue guest. The number of virtio queues allocated will be the lesser of the CPUs requested by the guest and the max value defined. By default, this value is set to none meaning the legacy limits based on the reported kernel major version will be used. mem_stats_period_seconds = 10 integer value The number of seconds in a memory usage statistics period. A zero or negative value disables memory usage statistics. nfs_mount_options = None string value Mount options passed to the NFS client. See the nfs man page for details. Mount options control the way the filesystem is mounted and how the NFS client behaves when accessing files on this mount point. Possible values: Any string representing mount options separated by commas. Example string: vers=3,lookupcache=pos nfs_mount_point_base = $state_path/mnt string value Directory where the NFS volume is mounted on the compute node. The default is the mnt directory of the location where nova's Python module is installed. NFS provides shared storage for the OpenStack Block Storage service. Possible values: A string representing the absolute path of the mount point. num_aoe_discover_tries = 3 integer value Number of times to rediscover AoE target to find volume. Nova provides support for block storage attaching to hosts via AOE (ATA over Ethernet). This option allows the user to specify the maximum number of retry attempts that can be made to discover the AoE device. num_iser_scan_tries = 5 integer value Number of times to scan iSER target to find volume. iSER is a server network protocol that extends the iSCSI protocol to use Remote Direct Memory Access (RDMA). This option allows the user to specify the maximum number of scan attempts that can be made to find the iSER volume. num_memory_encrypted_guests = None integer value Maximum number of guests with encrypted memory which can run concurrently on this compute host. For now this is only relevant for AMD machines which support SEV (Secure Encrypted Virtualization). Such machines have a limited number of slots in their memory controller for storing encryption keys.
Each running guest with encrypted memory will consume one of these slots. The option may be reused for other equivalent technologies in the future. If the machine does not support memory encryption, the option will be ignored and inventory will be set to 0. If the machine does support memory encryption, for now a value of None means an effectively unlimited inventory, i.e. no limit will be imposed by Nova on the number of SEV guests which can be launched, even though the underlying hardware will enforce its own limit. However it is expected that in the future, auto-detection of the inventory from the hardware will become possible, at which point None will cause auto-detection to automatically impose the correct limit. Note: Related options: libvirt.virt_type must be set to kvm . It's recommended to consider including x86_64=q35 in libvirt.hw_machine_type ; see the documentation on deploying SEV-capable infrastructure for more on this. num_nvme_discover_tries = 5 integer value Number of times to rediscover NVMe target to find volume. Nova provides support for block storage attaching to hosts via NVMe (Non-Volatile Memory Express). This option allows the user to specify the maximum number of retry attempts that can be made to discover the NVMe device. num_pcie_ports = 0 integer value The number of PCIe ports an instance will get. Libvirt allows a custom number of PCIe ports (pcie-root-port controllers) a target instance will get. Some will be used by default; the rest will be available for hotplug use. By default we have just 1-2 free ports which limits hotplug. More info: https://github.com/qemu/qemu/blob/master/docs/pcie.txt Due to QEMU limitations, for aarch64/virt the maximum value is set to 28 . The default value 0 leaves the calculation of the number of ports to libvirt. num_volume_scan_tries = 5 integer value Number of times to scan given storage protocol to find volume. pmem_namespaces = [] list value Configure persistent memory (pmem) namespaces. These namespaces must have been already created on the host. This config option is in the format $LABEL:$NSNAME[|$NSNAME][,$LABEL:$NSNAME[|$NSNAME]] , where $NSNAME is the name of the pmem namespace and $LABEL represents one resource class; the label is used to generate the resource class name as CUSTOM_PMEM_NAMESPACE_$LABEL . For example [libvirt] pmem_namespaces=128G:ns0|ns1|ns2|ns3,262144MB:ns4|ns5,MEDIUM:ns6|ns7 quobyte_client_cfg = None string value Path to a Quobyte Client configuration file. quobyte_mount_point_base = $state_path/mnt string value Directory where the Quobyte volume is mounted on the compute node. Nova supports the Quobyte volume driver that enables storing Block Storage service volumes on a Quobyte storage back end. This option specifies the path of the directory where the Quobyte volume is mounted. Possible values: A string representing the absolute path of the mount point. rbd_connect_timeout = 5 integer value The RADOS client timeout in seconds when initially connecting to the cluster. rbd_destroy_volume_retries = 12 integer value Number of retries to destroy an RBD volume. Related options: [libvirt]/images_type = rbd rbd_destroy_volume_retry_interval = 5 integer value Number of seconds to wait between each consecutive retry to destroy an RBD volume. Related options: [libvirt]/images_type = rbd rbd_secret_uuid = None string value The libvirt UUID of the secret for the rbd_user volumes. rbd_user = None string value The RADOS client name for accessing rbd(RADOS Block Devices) volumes. Libvirt will refer to this user when connecting and authenticating with the Ceph RBD server.
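For illustration, a minimal [libvirt] sketch that combines the rbd_* options above with images_type = rbd to store instance disks in Ceph; the client name and secret UUID shown are hypothetical placeholders, not values mandated by this guide:

[libvirt]
# Store ephemeral disks in Ceph RBD (assumed cluster; see rbd_* options above).
images_type = rbd
rbd_user = nova
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337
rbd_connect_timeout = 5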
realtime_scheduler_priority = 1 integer value In a realtime host context, vCPUs for the guest will run at this scheduling priority. The priority range depends on the host kernel (usually 1-99) remote_filesystem_transport = ssh string value libvirt's transport method for remote file operations. Because libvirt cannot use RPC to copy files over the network to/from other compute nodes, another method must be used for: creating a directory on the remote host; creating a file on the remote host; removing a file from the remote host; copying a file to the remote host rescue_image_id = None string value The ID of the image to boot from to rescue data from a corrupted instance. If the rescue REST API operation doesn't provide an ID of an image to use, the image which is referenced by this ID is used. If this option is not set, the image from the instance is used. Possible values: An ID of an image or nothing. If it points to an Amazon Machine Image (AMI), consider setting the config options rescue_kernel_id and rescue_ramdisk_id too. If nothing is set, the image of the instance is used. Related options: rescue_kernel_id : If the chosen rescue image allows the separate definition of its kernel disk, the value of this option is used, if specified. This is the case when Amazon's AMI/AKI/ARI image format is used for the rescue image. rescue_ramdisk_id : If the chosen rescue image allows the separate definition of its RAM disk, the value of this option is used, if specified. This is the case when Amazon's AMI/AKI/ARI image format is used for the rescue image. rescue_kernel_id = None string value The ID of the kernel (AKI) image to use with the rescue image. If the chosen rescue image allows the separate definition of its kernel disk, the value of this option is used, if specified. This is the case when Amazon's AMI/AKI/ARI image format is used for the rescue image. Possible values: An ID of a kernel image or nothing. If nothing is specified, the kernel disk from the instance is used if it was launched with one. Related options: rescue_image_id : If that option points to an image in Amazon's AMI/AKI/ARI image format, it's useful to use rescue_kernel_id too. rescue_ramdisk_id = None string value The ID of the RAM disk (ARI) image to use with the rescue image. If the chosen rescue image allows the separate definition of its RAM disk, the value of this option is used, if specified. This is the case when Amazon's AMI/AKI/ARI image format is used for the rescue image. Possible values: An ID of a RAM disk image or nothing. If nothing is specified, the RAM disk from the instance is used if it was launched with one. Related options: rescue_image_id : If that option points to an image in Amazon's AMI/AKI/ARI image format, it's useful to use rescue_ramdisk_id too. rng_dev_path = /dev/urandom string value The path to an RNG (Random Number Generator) device that will be used as the source of entropy on the host. Since libvirt 1.3.4, any path (that returns random numbers when read) is accepted. The recommended source of entropy is /dev/urandom - it is non-blocking, therefore relatively fast; and avoids the limitations of /dev/random , which is a legacy interface. For more details (and a comparison between different RNG sources), refer to the "Usage" section in the Linux kernel API documentation for [u]random : http://man7.org/linux/man-pages/man4/urandom.4.html and http://man7.org/linux/man-pages/man7/random.7.html . rx_queue_size = None integer value Configure virtio rx queue size.
This option is only usable for virtio-net devices with the vhost and vhost-user backends. Available only with QEMU/KVM. Requires libvirt v2.3 and QEMU v2.7. `smbfs_mount_options = ` string value Mount options passed to the SMBFS client. Provide SMBFS options as a single string containing all parameters. See the mount.cifs man page for details. Note that the libvirt-qemu uid and gid must be specified. smbfs_mount_point_base = $state_path/mnt string value Directory where the SMBFS shares are mounted on the compute node. snapshot_compression = False boolean value Enable snapshot compression for qcow2 images. Note: you can set snapshot_image_format to qcow2 to force all snapshots to be in qcow2 format, independently of their original image type. Related options: snapshot_image_format snapshot_image_format = None string value Determine the snapshot image format when sending to the image service. If set, this decides what format is used when sending the snapshot to the image service. If not set, defaults to the same type as the source image. snapshots_directory = $instances_path/snapshots string value Location where the libvirt driver will store snapshots before uploading them to the image service sparse_logical_volumes = False boolean value Create sparse logical volumes (with virtualsize) if this flag is set to True. Deprecated since: 18.0.0 Reason: Sparse logical volumes is a feature that is not tested hence not supported. LVM logical volumes are preallocated by default. If you want thin provisioning, use Cinder thin-provisioned volumes. swtpm_enabled = False boolean value Enable emulated TPM (Trusted Platform Module) in guests. swtpm_group = tss string value Group that the swtpm binary runs as. When using emulated TPM, the swtpm binary will run to emulate a TPM device. The user this binary runs as depends on libvirt configuration, with tss being the default. In order to support cold migration and resize, nova needs to know what group the swtpm binary is running as in order to ensure that files get the proper ownership after being moved between nodes. Related options: swtpm_user must also be set. swtpm_user = tss string value User that the swtpm binary runs as. When using emulated TPM, the swtpm binary will run to emulate a TPM device. The user this binary runs as depends on libvirt configuration, with tss being the default. In order to support cold migration and resize, nova needs to know what user the swtpm binary is running as in order to ensure that files get the proper ownership after being moved between nodes. Related options: swtpm_group must also be set. sysinfo_serial = unique string value The data source used to populate the host "serial" UUID exposed to the guest in the virtual BIOS. All choices except unique will change the serial when migrating the instance to another host. Changing the choice of this option will also affect existing instances on this host once they are stopped and started again. It is recommended to use the default choice ( unique ) since that will not change when an instance is migrated. However, if you have a need for per-host serials in addition to per-instance serial numbers, then consider restricting flavors via host aggregates. tx_queue_size = None integer value Configure virtio tx queue size. This option is only usable for virtio-net devices with the vhost-user backend. Available only with QEMU/KVM. Requires libvirt v3.7 and QEMU v2.10. uid_maps = [] list value List of uid targets and ranges. Syntax is guest-uid:host-uid:count. Maximum of 5 allowed.
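A hedged [libvirt] sketch combining several of the options above (emulated TPM, virtio queue sizes, and uid_maps); all values are illustrative assumptions rather than recommendations:

[libvirt]
# Emulated TPM; swtpm_user and swtpm_group must match what libvirt runs swtpm as.
swtpm_enabled = True
swtpm_user = tss
swtpm_group = tss
# virtio rx/tx queue sizes (only with QEMU/KVM, see version requirements above).
rx_queue_size = 512
tx_queue_size = 512
# guest-uid:host-uid:count, at most 5 entries.
uid_maps = 0:10000:1,1:20000:1000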
use_virtio_for_bridges = True boolean value Use virtio for bridge interfaces with KVM/QEMU virt_type = kvm string value Describes the virtualization type (or so called domain type) libvirt should use. The choice of this type must match the underlying virtualization strategy you have chosen for this host. Related options: connection_uri : depends on this disk_prefix : depends on this cpu_mode : depends on this cpu_models : depends on this volume_clear = zero string value Method used to wipe ephemeral disks when they are deleted. Only takes effect if LVM is set as backing storage. Related options: images_type - must be set to lvm volume_clear_size volume_clear_size = 0 integer value Size of area in MiB, counting from the beginning of the allocated volume, that will be cleared using the method set in the volume_clear option. Possible values: 0 - clear whole volume >0 - clear specified amount of MiB Related options: images_type - must be set to lvm volume_clear - must be set and the value must be different than none for this option to have any impact volume_use_multipath = False boolean value Use multipath connection of the iSCSI or FC volume. Volumes can be connected in the LibVirt as multipath devices. This will provide high availability and fault tolerance. vzstorage_cache_path = None string value Path to the SSD cache file. You can attach an SSD drive to a client and configure the drive to store a local cache of frequently accessed data. By having a local cache on a client's SSD drive, you can increase the overall cluster performance by up to 10 times or more. WARNING! There are a lot of SSD models which are not server grade and may lose an arbitrary set of data changes on power loss. Such SSDs should not be used in Vstorage and are dangerous as they may lead to data corruption and inconsistencies. Please consult the manual on which SSD models are known to be safe, or verify it using the vstorage-hwflush-check(1) utility. This option defines the path, which should include the "%(cluster_name)s" template to separate caches from multiple shares. Related options: vzstorage_mount_opts may include more detailed cache options. vzstorage_log_path = /var/log/vstorage/%(cluster_name)s/nova.log.gz string value Path to vzstorage client log. This option defines the log of cluster operations; it should include the "%(cluster_name)s" template to separate logs from multiple shares. Related options: vzstorage_mount_opts may include more detailed logging options. vzstorage_mount_group = qemu string value Mount owner group name. This option defines the owner group of the Vzstorage cluster mountpoint. Related options: vzstorage_mount_* group of parameters vzstorage_mount_opts = [] list value Extra mount options for pstorage-mount. For a full description of them, see https://static.openvz.org/vz-man/man1/pstorage-mount.1.gz.html Format is a python string representation of arguments list, like: "['-v', '-R', '500']" Shouldn't include -c, -l, -C, -u, -g and -m as those have explicit vzstorage_* options. Related options: All other vzstorage_* options vzstorage_mount_perms = 0770 string value Mount access mode. This option defines the access bits of the Vzstorage cluster mountpoint, in a format similar to that of the chmod(1) utility, for example: 0770. It consists of one to four digits ranging from 0 to 7, with missing lead digits assumed to be 0's. Related options: vzstorage_mount_* group of parameters vzstorage_mount_point_base = $state_path/mnt string value Directory where the Virtuozzo Storage clusters are mounted on the compute node.
This option defines a non-standard mountpoint for the Vzstorage cluster. Related options: vzstorage_mount_* group of parameters vzstorage_mount_user = stack string value Mount owner user name. This option defines the owner user of the Vzstorage cluster mountpoint. Related options: vzstorage_mount_* group of parameters wait_soft_reboot_seconds = 120 integer value Number of seconds to wait for an instance to shut down after a soft reboot request is made. We fall back to hard reboot if the instance does not shut down within this window. 9.1.27. metrics The following table outlines the options available under the [metrics] group in the /etc/nova/nova.conf file. Table 9.26. metrics Configuration option = Default value Type Description required = True boolean value Whether metrics are required. This setting determines how any unavailable metrics are treated. If this option is set to True, any hosts for which a metric is unavailable will raise an exception, so it is recommended to also use the MetricFilter to filter out those hosts before weighing. Possible values: A boolean value, where False ensures any metric being unavailable for a host will set the host weight to [metrics] weight_of_unavailable . Related options: [metrics] weight_of_unavailable weight_multiplier = 1.0 floating point value Multiplier used for weighing hosts based on reported metrics. When using metrics to weight the suitability of a host, you can use this option to change how the calculated weight influences the weight assigned to a host as follows: >1.0 : increases the effect of the metric on overall weight 1.0 : no change to the calculated weight >0.0,<1.0 : reduces the effect of the metric on overall weight 0.0 : the metric value is ignored, and the value of the [metrics] weight_of_unavailable option is returned instead >-1.0,<0.0 : the effect is reduced and reversed -1.0 : the effect is reversed <-1.0 : the effect is increased proportionally and reversed Possible values: An integer or float value, where the value corresponds to the multiplier ratio for this weigher. Related options: [filter_scheduler] weight_classes [metrics] weight_of_unavailable weight_of_unavailable = -10000.0 floating point value Default weight for unavailable metrics. When any of the following conditions are met, this value will be used in place of any actual metric value: One of the metrics named in [metrics] weight_setting is not available for a host, and the value of required is False . The ratio specified for a metric in [metrics] weight_setting is 0. The [metrics] weight_multiplier option is set to 0. Possible values: An integer or float value, where the value corresponds to the multiplier ratio for this weigher. Related options: [metrics] weight_setting [metrics] required [metrics] weight_multiplier weight_setting = [] list value Mapping of metric to weight modifier. This setting specifies the metrics to be weighed and the relative ratios for each metric. This should be a single string value, consisting of a series of one or more name=ratio pairs, separated by commas, where name is the name of the metric to be weighed, and ratio is the relative weight for that metric. Note that if the ratio is set to 0, the metric value is ignored, and instead the weight will be set to the value of the [metrics] weight_of_unavailable option.
As an example, let's consider the case where this option is set to weight_setting = metric1=1.0, metric2=-1.3 . The final weight will then be (metric1.value * 1.0) + (metric2.value * -1.3) . Possible values: A list of zero or more key/value pairs separated by commas, where the key is a string representing the name of a metric and the value is a numeric weight for that metric. If any value is set to 0, the value is ignored and the weight will be set to the value of the [metrics] weight_of_unavailable option. Related options: [metrics] weight_of_unavailable 9.1.28. mks The following table outlines the options available under the [mks] group in the /etc/nova/nova.conf file. Table 9.27. mks Configuration option = Default value Type Description enabled = False boolean value Enables graphical console access for virtual machines. mksproxy_base_url = http://127.0.0.1:6090/ uri value Location of the MKS web console proxy. The URL in the response points to a WebMKS proxy which starts proxying between the client and the corresponding vCenter server where the instance runs. In order to use the web based console access, the WebMKS proxy should be installed and configured. Possible values: Must be a valid URL of the form: http://host:port/ or https://host:port/ 9.1.29. neutron The following table outlines the options available under the [neutron] group in the /etc/nova/nova.conf file. Table 9.28. neutron Configuration option = Default value Type Description auth-url = None string value Authentication URL auth_section = None string value Config Section from which to load plugin specific options auth_type = None string value Authentication type to load cafile = None string value PEM encoded Certificate Authority to use when verifying HTTPs connections. certfile = None string value PEM encoded client certificate cert file collect-timing = False boolean value Collect per-API call timing information. connect-retries = None integer value The maximum number of retries that should be attempted for connection errors. connect-retry-delay = None floating point value Delay (in seconds) between two retries for connection errors. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used. default-domain-id = None string value Optional domain ID to use with v3 and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. default-domain-name = None string value Optional domain name to use with v3 API and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. default_floating_pool = nova string value Default name for the floating IP pool. Specifies the name of the floating IP pool used for allocating floating IPs. This option is only used if Neutron does not specify the floating IP pool name in port binding responses. domain-id = None string value Domain ID to scope to domain-name = None string value Domain name to scope to endpoint-override = None string value Always use this endpoint URL for requests for this client. NOTE: The unversioned endpoint should be specified here; to request a particular API version, use the version , min-version , and/or max-version options. extension_sync_interval = 600 integer value Integer value representing the number of seconds to wait before querying Neutron for extensions. After this number of seconds, the next time Nova needs to create a resource in Neutron it will requery Neutron for the extensions that it has loaded. Setting this value to 0 will refresh the extensions with no wait. http_retries = 3 integer value Number of times neutronclient should retry on any failed http call.
0 means connection is attempted only once. Setting it to any positive integer means that on failure connection is retried that many times, e.g. setting it to 3 means total attempts to connect will be 4. Possible values: Any integer value. 0 means connection is attempted only once insecure = False boolean value Verify HTTPS connections. keyfile = None string value PEM encoded client certificate key file `metadata_proxy_shared_secret = ` string value This option holds the shared secret string used to validate proxy requests to Neutron metadata requests. In order to be used, the X-Metadata-Provider-Signature header must be supplied in the request. Related options: service_metadata_proxy ovs_bridge = br-int string value Default name for the Open vSwitch integration bridge. Specifies the name of an integration bridge interface used by OpenvSwitch. This option is only used if Neutron does not specify the OVS bridge name in port binding responses. password = None string value User's password physnets = [] list value List of physnets present on this host. For each physnet listed, an additional section, [neutron_physnet_$PHYSNET] , will be added to the configuration file. Each section must be configured with a single configuration option, numa_nodes , which should be a list of node IDs for all NUMA nodes this physnet is associated with. For example, see the configuration sketch below. Any physnet that is not listed using this option will be treated as having no particular NUMA node affinity. Tunnelled networks (VXLAN, GRE, ... ) cannot be accounted for in this way and are instead configured using the [neutron_tunnel] group; see the [neutron_tunnel] section in the sketch below. Related options: [neutron_tunnel] numa_nodes can be used to configure NUMA affinity for all tunneled networks [neutron_physnet_$PHYSNET] numa_nodes must be configured for each value of $PHYSNET specified by this option project-domain-id = None string value Domain ID containing project project-domain-name = None string value Domain name containing project project-id = None string value Project ID to scope to project-name = None string value Project name to scope to region-name = None string value The default region_name for endpoint URL discovery. service-name = None string value The default service_name for endpoint URL discovery. service-type = network string value The default service_type for endpoint URL discovery. service_metadata_proxy = False boolean value When set to True, this option indicates that Neutron will be used to proxy metadata requests and resolve instance ids. Otherwise, the instance ID must be passed to the metadata request in the X-Instance-ID header. Related options: metadata_proxy_shared_secret split-loggers = False boolean value Log requests to multiple loggers. status-code-retries = None integer value The maximum number of retries that should be attempted for retriable HTTP status codes. status-code-retry-delay = None floating point value Delay (in seconds) between two retries for retriable status codes. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used.
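To illustrate the physnets option described above, a host attached to two physnets could be configured as in the following sketch; the physnet names (foo, bar) and NUMA node IDs are made-up examples, not values from this guide:

[neutron]
physnets = foo,bar

[neutron_physnet_foo]
numa_nodes = 0

[neutron_physnet_bar]
numa_nodes = 0,1

[neutron_tunnel]
# NUMA affinity for all tunnelled (VXLAN, GRE, ...) networks.
numa_nodes = 1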
system-scope = None string value Scope for system operations tenant-id = None string value Tenant ID tenant-name = None string value Tenant Name timeout = None integer value Timeout value for http requests trust-id = None string value Trust ID user-domain-id = None string value User's domain id user-domain-name = None string value User's domain name user-id = None string value User ID username = None string value Username valid-interfaces = ['internal', 'public'] list value List of interfaces, in order of preference, for endpoint URL. 9.1.30. notifications The following table outlines the options available under the [notifications] group in the /etc/nova/nova.conf file. Table 9.29. notifications Configuration option = Default value Type Description bdms_in_notifications = False boolean value If enabled, include block device information in the versioned notification payload. Sending block device information is disabled by default as providing that information can incur some overhead on the system since the information may need to be loaded from the database. default_level = INFO string value Default notification level for outgoing notifications. notification_format = unversioned string value Specifies which notification format shall be emitted by nova. The versioned notification interface is in feature parity with the legacy interface and the versioned interface is actively developed, so new consumers should use the versioned interface. However, the legacy interface is heavily used by ceilometer and other mature OpenStack components so it remains the default. Note that notifications can be completely disabled by setting driver=noop in the [oslo_messaging_notifications] group. The list of versioned notifications is visible in https://docs.openstack.org/nova/latest/reference/notifications.html notify_on_state_change = None string value If set, send compute.instance.update notifications on instance state changes. Please refer to https://docs.openstack.org/nova/latest/reference/notifications.html for additional information on notifications. versioned_notifications_topics = ['versioned_notifications'] list value Specifies the topics for the versioned notifications issued by nova. The default value is fine for most deployments and rarely needs to be changed. However, if you have a third-party service that consumes versioned notifications, it might be worth getting a topic for that service. Nova will send a message containing a versioned notification payload to each topic queue in this list. The list of versioned notifications is visible in https://docs.openstack.org/nova/latest/reference/notifications.html 9.1.31. oslo_concurrency The following table outlines the options available under the [oslo_concurrency] group in the /etc/nova/nova.conf file. Table 9.30. oslo_concurrency Configuration option = Default value Type Description disable_process_locking = False boolean value Enables or disables inter-process locks. lock_path = None string value Directory to use for lock files. For security, the specified directory should only be writable by the user running the processes that need locking. Defaults to the environment variable OSLO_LOCK_PATH. If external locks are used, a lock path must be set. 9.1.32. oslo_messaging_amqp The following table outlines the options available under the [oslo_messaging_amqp] group in the /etc/nova/nova.conf file. Table 9.31.
oslo_messaging_amqp Configuration option = Default value Type Description addressing_mode = dynamic string value Indicates the addressing mode used by the driver. Permitted values: legacy - use legacy non-routable addressing routable - use routable addresses dynamic - use legacy addresses if the message bus does not support routing otherwise use routable addressing anycast_address = anycast string value Appended to the address prefix when sending to a group of consumers. Used by the message bus to identify messages that should be delivered in a round-robin fashion across consumers. broadcast_prefix = broadcast string value address prefix used when broadcasting to all servers connection_retry_backoff = 2 integer value Increase the connection_retry_interval by this many seconds after each unsuccessful failover attempt. connection_retry_interval = 1 integer value Seconds to pause before attempting to re-connect. connection_retry_interval_max = 30 integer value Maximum limit for connection_retry_interval + connection_retry_backoff container_name = None string value Name for the AMQP container. must be globally unique. Defaults to a generated UUID default_notification_exchange = None string value Exchange name used in notification addresses. Exchange name resolution precedence: Target.exchange if set else default_notification_exchange if set else control_exchange if set else notify default_notify_timeout = 30 integer value The deadline for a sent notification message delivery. Only used when caller does not provide a timeout expiry. default_reply_retry = 0 integer value The maximum number of attempts to re-send a reply message which failed due to a recoverable error. default_reply_timeout = 30 integer value The deadline for an rpc reply message delivery. default_rpc_exchange = None string value Exchange name used in RPC addresses. Exchange name resolution precedence: Target.exchange if set else default_rpc_exchange if set else control_exchange if set else rpc default_send_timeout = 30 integer value The deadline for an rpc cast or call message delivery. Only used when caller does not provide a timeout expiry. default_sender_link_timeout = 600 integer value The duration to schedule a purge of idle sender links. Detach link after expiry. group_request_prefix = unicast string value address prefix when sending to any server in group idle_timeout = 0 integer value Timeout for inactive connections (in seconds) link_retry_delay = 10 integer value Time to pause between re-connecting an AMQP 1.0 link that failed due to a recoverable error. multicast_address = multicast string value Appended to the address prefix when sending a fanout message. Used by the message bus to identify fanout messages. notify_address_prefix = openstack.org/om/notify string value Address prefix for all generated Notification addresses notify_server_credit = 100 integer value Window size for incoming Notification messages pre_settled = ['rpc-cast', 'rpc-reply'] multi valued Send messages of this type pre-settled. Pre-settled messages will not receive acknowledgement from the peer. Note well: pre-settled messages may be silently discarded if the delivery fails. Permitted values: rpc-call - send RPC Calls pre-settled rpc-reply - send RPC Replies pre-settled rpc-cast - Send RPC Casts pre-settled notify - Send Notifications pre-settled pseudo_vhost = True boolean value Enable virtual host support for those message buses that do not natively support virtual hosting (such as qpidd). 
When set to true the virtual host name will be added to all message bus addresses, effectively creating a private subnet per virtual host. Set to False if the message bus supports virtual hosting using the hostname field in the AMQP 1.0 Open performative as the name of the virtual host. reply_link_credit = 200 integer value Window size for incoming RPC Reply messages. rpc_address_prefix = openstack.org/om/rpc string value Address prefix for all generated RPC addresses rpc_server_credit = 100 integer value Window size for incoming RPC Request messages `sasl_config_dir = ` string value Path to directory that contains the SASL configuration `sasl_config_name = ` string value Name of configuration file (without .conf suffix) `sasl_default_realm = ` string value SASL realm to use if no realm present in username `sasl_mechanisms = ` string value Space separated list of acceptable SASL mechanisms server_request_prefix = exclusive string value address prefix used when sending to a specific server ssl = False boolean value Attempt to connect via SSL. If no other ssl-related parameters are given, it will use the system's CA-bundle to verify the server's certificate. `ssl_ca_file = ` string value CA certificate PEM file used to verify the server's certificate `ssl_cert_file = ` string value Self-identifying certificate PEM file for client authentication `ssl_key_file = ` string value Private key PEM file used to sign ssl_cert_file certificate (optional) ssl_key_password = None string value Password for decrypting ssl_key_file (if encrypted) ssl_verify_vhost = False boolean value By default SSL checks that the name in the server's certificate matches the hostname in the transport_url. In some configurations it may be preferable to use the virtual hostname instead, for example if the server uses the Server Name Indication TLS extension (rfc6066) to provide a certificate per virtual host. Set ssl_verify_vhost to True if the server's SSL certificate uses the virtual host name instead of the DNS name. trace = False boolean value Debug: dump AMQP frames to stdout unicast_address = unicast string value Appended to the address prefix when sending to a particular RPC/Notification server. Used by the message bus to identify messages sent to a single destination. 9.1.33. oslo_messaging_kafka The following table outlines the options available under the [oslo_messaging_kafka] group in the /etc/nova/nova.conf file. Table 9.32. oslo_messaging_kafka Configuration option = Default value Type Description compression_codec = none string value The compression codec for all data generated by the producer. If not set, compression will not be used. Note that the allowed values of this depend on the kafka version conn_pool_min_size = 2 integer value The pool size limit for connections expiration policy conn_pool_ttl = 1200 integer value The time-to-live in sec of idle connections in the pool consumer_group = oslo_messaging_consumer string value Group id for Kafka consumer. 
Consumers in one group will coordinate message consumption enable_auto_commit = False boolean value Enable asynchronous consumer commits kafka_consumer_timeout = 1.0 floating point value Default timeout(s) for Kafka consumers kafka_max_fetch_bytes = 1048576 integer value Max fetch bytes of Kafka consumer max_poll_records = 500 integer value The maximum number of records returned in a poll call pool_size = 10 integer value Pool Size for Kafka Consumers producer_batch_size = 16384 integer value Size of batch for the producer async send producer_batch_timeout = 0.0 floating point value Upper bound on the delay for KafkaProducer batching in seconds sasl_mechanism = PLAIN string value Mechanism when security protocol is SASL security_protocol = PLAINTEXT string value Protocol used to communicate with brokers `ssl_cafile = ` string value CA certificate PEM file used to verify the server certificate `ssl_client_cert_file = ` string value Client certificate PEM file used for authentication. `ssl_client_key_file = ` string value Client key PEM file used for authentication. `ssl_client_key_password = ` string value Client key password file used for authentication. 9.1.34. oslo_messaging_notifications The following table outlines the options available under the [oslo_messaging_notifications] group in the /etc/nova/nova.conf file. Table 9.33. oslo_messaging_notifications Configuration option = Default value Type Description driver = [] multi valued The driver(s) to handle sending notifications. Possible values are messaging, messagingv2, routing, log, test, noop retry = -1 integer value The maximum number of attempts to re-send a notification message which failed to be delivered due to a recoverable error. 0 - No retry, -1 - indefinite topics = ['notifications'] list value AMQP topic used for OpenStack notifications. transport_url = None string value A URL representing the messaging driver to use for notifications. If not set, we fall back to the same configuration used for RPC. 9.1.35. oslo_messaging_rabbit The following table outlines the options available under the [oslo_messaging_rabbit] group in the /etc/nova/nova.conf file. Table 9.34. oslo_messaging_rabbit Configuration option = Default value Type Description amqp_auto_delete = False boolean value Auto-delete queues in AMQP. amqp_durable_queues = False boolean value Use durable queues in AMQP. direct_mandatory_flag = True boolean value (DEPRECATED) Enable/Disable the RabbitMQ mandatory flag for direct send. The direct send is used as reply, so the MessageUndeliverable exception is raised in case the client queue does not exist. The MessageUndeliverable exception will be used to loop for a timeout to give the sender a chance to recover. This flag is deprecated and it will not be possible to deactivate this functionality anymore enable_cancel_on_failover = False boolean value Enable the x-cancel-on-ha-failover flag so that the rabbitmq server will cancel and notify consumers when the queue is down heartbeat_in_pthread = False boolean value Run the health check heartbeat thread through a native python thread by default. If this option is equal to False then the health check heartbeat will inherit the execution model from the parent process. For example if the parent process has monkey patched the stdlib by using eventlet/greenlet then the heartbeat will be run through a green thread. This option should be set to True only for the wsgi services. heartbeat_rate = 2 integer value How many times during the heartbeat_timeout_threshold we check the heartbeat.
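Tying together the [oslo_messaging_notifications] and [oslo_messaging_rabbit] options described above, a deployment that emits notifications over RabbitMQ might carry a snippet like the following sketch; the values are illustrative assumptions, not recommendations from this guide:

[oslo_messaging_notifications]
driver = messagingv2
topics = notifications

[oslo_messaging_rabbit]
amqp_durable_queues = True
heartbeat_rate = 2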
heartbeat_timeout_threshold = 60 integer value Number of seconds after which the Rabbit broker is considered down if heartbeat's keep-alive fails (0 disables heartbeat). kombu_compression = None string value EXPERIMENTAL: Possible values are: gzip, bz2. If not set compression will not be used. This option may not be available in future versions. kombu_failover_strategy = round-robin string value Determines how the RabbitMQ node is chosen in case the one we are currently connected to becomes unavailable. Takes effect only if more than one RabbitMQ node is provided in config. kombu_missing_consumer_retry_timeout = 60 integer value How long to wait a missing client before abandoning to send it its replies. This value should not be longer than rpc_response_timeout. kombu_reconnect_delay = 1.0 floating point value How long to wait before reconnecting in response to an AMQP consumer cancel notification. rabbit_ha_queues = False boolean value Try to use HA queues in RabbitMQ (x-ha-policy: all). If you change this option, you must wipe the RabbitMQ database. In RabbitMQ 3.0, queue mirroring is no longer controlled by the x-ha-policy argument when declaring a queue. If you just want to make sure that all queues (except those with auto-generated names) are mirrored across all nodes, run: "rabbitmqctl set_policy HA ^(?!amq\.).* {"ha-mode": "all"} " rabbit_interval_max = 30 integer value Maximum interval of RabbitMQ connection retries. Default is 30 seconds. rabbit_login_method = AMQPLAIN string value The RabbitMQ login method. rabbit_qos_prefetch_count = 0 integer value Specifies the number of messages to prefetch. Setting to zero allows unlimited messages. rabbit_retry_backoff = 2 integer value How long to backoff for between retries when connecting to RabbitMQ. rabbit_retry_interval = 1 integer value How frequently to retry connecting with RabbitMQ. rabbit_transient_queues_ttl = 1800 integer value Positive integer representing duration in seconds for queue TTL (x-expires). Queues which are unused for the duration of the TTL are automatically deleted. The parameter affects only reply and fanout queues. ssl = False boolean value Connect over SSL. `ssl_ca_file = ` string value SSL certification authority file (valid only if SSL enabled). `ssl_cert_file = ` string value SSL cert file (valid only if SSL enabled). `ssl_key_file = ` string value SSL key file (valid only if SSL enabled). `ssl_version = ` string value SSL version to use (valid only if SSL enabled). Valid values are TLSv1 and SSLv23. SSLv2, SSLv3, TLSv1_1, and TLSv1_2 may be available on some distributions. 9.1.36. oslo_middleware The following table outlines the options available under the [oslo_middleware] group in the /etc/nova/nova.conf file. Table 9.35. oslo_middleware Configuration option = Default value Type Description enable_proxy_headers_parsing = False boolean value Whether the application is behind a proxy or not. This determines if the middleware should parse the headers or not. max_request_body_size = 114688 integer value The maximum body size for each request, in bytes. secure_proxy_ssl_header = X-Forwarded-Proto string value The HTTP Header that will be used to determine what the original request protocol scheme was, even if it was hidden by a SSL termination proxy. 9.1.37. oslo_policy The following table outlines the options available under the [oslo_policy] group in the /etc/nova/nova.conf file. Table 9.36. 
oslo_policy Configuration option = Default value Type Description enforce_new_defaults = False boolean value This option controls whether or not to use old deprecated defaults when evaluating policies. If True , the old deprecated defaults are not going to be evaluated. This means if any existing token is allowed for old defaults but is disallowed for new defaults, it will be disallowed. It is encouraged to enable this flag along with the enforce_scope flag so that you can get the benefits of new defaults and scope_type together enforce_scope = False boolean value This option controls whether or not to enforce scope when evaluating policies. If True , the scope of the token used in the request is compared to the scope_types of the policy being enforced. If the scopes do not match, an InvalidScope exception will be raised. If False , a message will be logged informing operators that policies are being invoked with mismatching scope. policy_default_rule = default string value Default rule. Enforced when a requested rule is not found. policy_dirs = ['policy.d'] multi valued Directories where policy configuration files are stored. They can be relative to any directory in the search path defined by the config_dir option, or absolute paths. The file defined by policy_file must exist for these directories to be searched. Missing or empty directories are ignored. policy_file = policy.yaml string value The relative or absolute path of a file that maps roles to permissions for a given service. Relative paths must be specified in relation to the configuration file setting this option. remote_content_type = application/x-www-form-urlencoded string value Content Type to send and receive data for REST based policy check remote_ssl_ca_crt_file = None string value Absolute path to ca cert file for REST based policy check remote_ssl_client_crt_file = None string value Absolute path to client cert for REST based policy check remote_ssl_client_key_file = None string value Absolute path client key file REST based policy check remote_ssl_verify_server_crt = False boolean value server identity verification for REST based policy check 9.1.38. pci The following table outlines the options available under the [pci] group in the /etc/nova/nova.conf file. Table 9.37. pci Configuration option = Default value Type Description alias = [] multi valued An alias for a PCI passthrough device requirement. This allows users to specify the alias in the extra specs for a flavor, without needing to repeat all the PCI property requirements. This should be configured for the nova-api service and, assuming you wish to use move operations, for each nova-compute service. Possible Values: A dictionary of JSON values which describe the aliases. For example:: Supports multiple aliases by repeating the option (not by specifying a list value) alias = { "name": "QuickAssist-1", "product_id": "0443", "vendor_id": "8086", "device_type": "type-PCI", "numa_policy": "required" } alias = { "name": "QuickAssist-2", "product_id": "0444", "vendor_id": "8086", "device_type": "type-PCI", "numa_policy": "required" } passthrough_whitelist = [] multi valued White list of PCI devices available to VMs. Possible values: A JSON dictionary which describe a whitelisted PCI device. 
It should take the following format: ["vendor_id": "<id>",] ["product_id": "<id>",] ["address": "[[[[<domain>]:]<bus>]:][<slot>][.[<function>]]" | "devname": "<name>",] {"<tag>": "<tag_value>",} The maximum values for the address fields are: domain - 0xFFFF, bus - 0xFF, slot - 0x1F, function - 0x7. Supported <tag> values are physical_network and trusted . Valid examples are: passthrough_whitelist = {"devname":"eth0", "physical_network":"physnet"} passthrough_whitelist = {"address":"*:0a:00.*"} passthrough_whitelist = {"address":":0a:00.", "physical_network":"physnet1"} passthrough_whitelist = {"vendor_id":"1137", "product_id":"0071"} passthrough_whitelist = {"vendor_id":"1137", "product_id":"0071", "address": "0000:0a:00.1", "physical_network":"physnet1"} passthrough_whitelist = {"address":{"domain": ".*", "bus": "02", "slot": "01", "function": "[2-7]"}, "physical_network":"physnet1"} passthrough_whitelist = {"address":{"domain": ".*", "bus": "02", "slot": "0[1-2]", "function": ".*"}, "physical_network":"physnet1"} passthrough_whitelist = {"devname": "eth0", "physical_network":"physnet1", "trusted": "true"} The following are invalid, as they specify mutually exclusive options: passthrough_whitelist = {"devname":"eth0", "physical_network":"physnet", "address":"*:0a:00.*"} A JSON list of JSON dictionaries corresponding to the above format. For example: passthrough_whitelist = [{"product_id":"0001", "vendor_id":"8086"}, {"product_id":"0002", "vendor_id":"8086"}] 9.1.39. placement The following table outlines the options available under the [placement] group in the /etc/nova/nova.conf file. Table 9.38. placement Configuration option = Default value Type Description auth-url = None string value Authentication URL auth_section = None string value Config Section from which to load plugin specific options auth_type = None string value Authentication type to load cafile = None string value PEM encoded Certificate Authority to use when verifying HTTPs connections. certfile = None string value PEM encoded client certificate cert file collect-timing = False boolean value Collect per-API call timing information. connect-retries = None integer value The maximum number of retries that should be attempted for connection errors. connect-retry-delay = None floating point value Delay (in seconds) between two retries for connection errors. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used. default-domain-id = None string value Optional domain ID to use with v3 and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. default-domain-name = None string value Optional domain name to use with v3 API and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. domain-id = None string value Domain ID to scope to domain-name = None string value Domain name to scope to endpoint-override = None string value Always use this endpoint URL for requests for this client. NOTE: The unversioned endpoint should be specified here; to request a particular API version, use the version , min-version , and/or max-version options. insecure = False boolean value Verify HTTPS connections.
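For reference, a typical Keystone password-based [placement] configuration built from the options in this table might look like the following sketch (option names are written with underscores in nova.conf); every URL, name and password shown is a placeholder, not a value mandated by this guide:

[placement]
auth_type = password
auth_url = http://keystone.example.com:5000/v3
project_name = service
project_domain_name = Default
username = placement
user_domain_name = Default
password = PLACEMENT_PASS
region_name = RegionOne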
keyfile = None string value PEM encoded client certificate key file password = None string value User's password project-domain-id = None string value Domain ID containing project project-domain-name = None string value Domain name containing project project-id = None string value Project ID to scope to project-name = None string value Project name to scope to region-name = None string value The default region_name for endpoint URL discovery. service-name = None string value The default service_name for endpoint URL discovery. service-type = placement string value The default service_type for endpoint URL discovery. split-loggers = False boolean value Log requests to multiple loggers. status-code-retries = None integer value The maximum number of retries that should be attempted for retriable HTTP status codes. status-code-retry-delay = None floating point value Delay (in seconds) between two retries for retriable status codes. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used. system-scope = None string value Scope for system operations tenant-id = None string value Tenant ID tenant-name = None string value Tenant Name timeout = None integer value Timeout value for http requests trust-id = None string value Trust ID user-domain-id = None string value User's domain id user-domain-name = None string value User's domain name user-id = None string value User ID username = None string value Username valid-interfaces = ['internal', 'public'] list value List of interfaces, in order of preference, for endpoint URL. 9.1.40. powervm The following table outlines the options available under the [powervm] group in the /etc/nova/nova.conf file. Table 9.39. powervm Configuration option = Default value Type Description disk_driver = localdisk string value The disk driver to use for PowerVM disks. PowerVM provides support for localdisk and PowerVM Shared Storage Pool disk drivers. Related options: volume_group_name - required when using localdisk proc_units_factor = 0.1 floating point value Factor used to calculate the amount of physical processor compute power given to each vCPU. E.g. A value of 1.0 means a whole physical processor, whereas 0.05 means 1/20th of a physical processor. `volume_group_name = ` string value Volume Group to use for block device operations. If disk_driver is localdisk, then this attribute must be specified. It is strongly recommended NOT to use rootvg since that is used by the management partition and filling it will cause failures. 9.1.41. privsep The following table outlines the options available under the [privsep] group in the /etc/nova/nova.conf file. Table 9.40. privsep Configuration option = Default value Type Description capabilities = [] list value List of Linux capabilities retained by the privsep daemon. group = None string value Group that the privsep daemon should run as. helper_command = None string value Command to invoke to start the privsep daemon if not using the "fork" method. If not specified, a default is generated using "sudo privsep-helper" and arguments designed to recreate the current configuration. This command must accept suitable --privsep_context and --privsep_sock_path arguments. logger_name = oslo_privsep.daemon string value Logger name to use for this privsep context. By default all contexts log with oslo_privsep.daemon. thread_pool_size = <based on operating system> integer value The number of threads available for privsep to concurrently run processes. Defaults to the number of CPU cores in the system. 
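As a hedged illustration of the [powervm] options above, a host using the localdisk disk driver might be configured as follows; the volume group name is a hypothetical placeholder (as noted above, do not use rootvg), and the processor factor is only an example:

[powervm]
disk_driver = localdisk
# Required when disk_driver is localdisk.
volume_group_name = novavg
proc_units_factor = 0.1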
user = None string value User that the privsep daemon should run as. 9.1.42. profiler The following table outlines the options available under the [profiler] group in the /etc/nova/nova.conf file. Table 9.41. profiler Configuration option = Default value Type Description connection_string = messaging:// string value Connection string for a notifier backend. Default value is messaging:// which sets the notifier to oslo_messaging. Examples of possible values: messaging:// - use oslo_messaging driver for sending spans. redis://127.0.0.1:6379 - use redis driver for sending spans. mongodb://127.0.0.1:27017 - use mongodb driver for sending spans. elasticsearch://127.0.0.1:9200 - use elasticsearch driver for sending spans. jaeger://127.0.0.1:6831 - use jaeger tracing as driver for sending spans. enabled = False boolean value Enable the profiling for all services on this node. Default value is False (fully disable the profiling feature). Possible values: True: Enables the feature False: Disables the feature. The profiling cannot be started via this project's operations. If the profiling is triggered by another project, this project part will be empty. es_doc_type = notification string value Document type for notification indexing in elasticsearch. es_scroll_size = 10000 integer value Elasticsearch splits large requests in batches. This parameter defines the maximum size of each batch (for example: es_scroll_size=10000). es_scroll_time = 2m string value This parameter is a time value parameter (for example: es_scroll_time=2m), indicating for how long the nodes that participate in the search will maintain relevant resources in order to continue and support it. filter_error_trace = False boolean value Enable filtering of traces that contain an error/exception into a separate place. Default value is set to False. Possible values: True: Enable filtering of traces that contain an error/exception. False: Disable the filter. hmac_keys = SECRET_KEY string value Secret key(s) to use for encrypting context data for performance profiling. This string value should have the following format: <key1>[,<key2>,... <keyn>], where each key is some random string. A user who triggers the profiling via the REST API has to set one of these keys in the headers of the REST API call to include profiling results of this node for this particular project. Both "enabled" flag and "hmac_keys" config options should be set to enable profiling. Also, to generate correct profiling information across all services at least one key needs to be consistent between OpenStack projects. This ensures it can be used from client side to generate the trace, containing information from all possible resources. sentinel_service_name = mymaster string value Redis sentinel uses a service name to identify a master redis service. This parameter defines the name (for example: sentinel_service_name=mymaster ). socket_timeout = 0.1 floating point value Redis sentinel provides a timeout option on the connections. This parameter defines that timeout (for example: socket_timeout=0.1). trace_sqlalchemy = False boolean value Enable SQL requests profiling in services. Default value is False (SQL requests won't be traced). Possible values: True: Enables SQL requests profiling. Each SQL query will be part of the trace and can then be analyzed by how much time was spent for that. False: Disables SQL requests profiling. The spent time is only shown on a higher level of operations. Single SQL queries cannot be analyzed this way. 9.1.43.
quota The following table outlines the options available under the [quota] group in the /etc/nova/nova.conf file. Table 9.42. quota Configuration option = Default value Type Description cores = 20 integer value The number of instance cores or vCPUs allowed per project. Possible values: A positive integer or 0. -1 to disable the quota. count_usage_from_placement = False boolean value Enable the counting of quota usage from the placement service. Starting in Train, it is possible to count quota usage for cores and ram from the placement service and instances from the API database instead of counting from cell databases. This works well if there is only one Nova deployment running per placement deployment. However, if an operator is running more than one Nova deployment sharing a placement deployment, they should not set this option to True because currently the placement service has no way to partition resource providers per Nova deployment. When this option is left as the default or set to False, Nova will use the legacy counting method to count quota usage for instances, cores, and ram from its cell databases. Note that quota usage behavior related to resizes will be affected if this option is set to True. Placement resource allocations are claimed on the destination while holding allocations on the source during a resize, until the resize is confirmed or reverted. During this time, when the server is in VERIFY_RESIZE state, quota usage will reflect resource consumption on both the source and the destination. This can be beneficial as it reserves space for a revert of a downsize, but it also means quota usage will be inflated until a resize is confirmed or reverted. Behavior will also be different for unscheduled servers in ERROR state. A server in ERROR state that has never been scheduled to a compute host will not have placement allocations, so it will not consume quota usage for cores and ram. Behavior will be different for servers in SHELVED_OFFLOADED state. A server in SHELVED_OFFLOADED state will not have placement allocations, so it will not consume quota usage for cores and ram. Note that because of this, it will be possible for a request to unshelve a server to be rejected if the user does not have enough quota available to support the cores and ram needed by the server to be unshelved. The populate_queued_for_delete and populate_user_id online data migrations must be completed before usage can be counted from placement. Until the data migration is complete, the system will fall back to legacy quota usage counting from cell databases depending on the result of an EXISTS database query during each quota check, if this configuration option is set to True. Operators who want to avoid the performance hit from the EXISTS queries should wait to set this configuration option to True until after they have completed their online data migrations via nova-manage db online_data_migrations . driver = nova.quota.DbQuotaDriver string value Provides abstraction for quota checks. Users can configure a specific driver to use for quota checks. injected_file_content_bytes = 10240 integer value The number of bytes allowed per injected file. Possible values: A positive integer or 0. -1 to disable the quota. injected_file_path_length = 255 integer value The maximum allowed injected file path length. Possible values: A positive integer or 0. -1 to disable the quota. injected_files = 5 integer value The number of injected files allowed. 
File injection allows users to customize the personality of an instance by injecting data into it upon boot. Only text file injection is permitted: binary or ZIP files are not accepted. During file injection, any existing files that match specified files are renamed to include a .bak extension appended with a timestamp. Possible values: A positive integer or 0. -1 to disable the quota. instances = 10 integer value The number of instances allowed per project. Possible values: A positive integer or 0. -1 to disable the quota. key_pairs = 100 integer value The maximum number of key pairs allowed per user. Users can create at least one key pair for each project and use the key pair for multiple instances that belong to that project. Possible values: A positive integer or 0. -1 to disable the quota. metadata_items = 128 integer value The number of metadata items allowed per instance. Users can associate metadata with an instance during instance creation. This metadata takes the form of key-value pairs. Possible values: A positive integer or 0. -1 to disable the quota. ram = 51200 integer value The number of megabytes of instance RAM allowed per project. Possible values: A positive integer or 0. -1 to disable the quota. recheck_quota = True boolean value Recheck quota after resource creation to prevent allowing quota to be exceeded. This defaults to True (recheck quota after resource creation) but can be set to False to avoid additional load if allowing quota to be exceeded because of racing requests is considered acceptable. For example, when set to False, if a user makes highly parallel REST API requests to create servers, it will be possible for them to create more servers than their allowed quota during the race. If their quota is 10 servers, they might be able to create 50 during the burst. After the burst, they will not be able to create any more servers, but they will be able to keep their 50 servers until they delete them. The initial quota check is done before resources are created, so if multiple parallel requests arrive at the same time, all could pass the quota check and create resources, potentially exceeding quota. When recheck_quota is True, quota will be checked a second time after resources have been created and, if the resource is over quota, it will be deleted and OverQuota will be raised, usually resulting in a 403 response to the REST API user. This makes it impossible for a user to exceed their quota, with the caveat that it will, however, be possible for a REST API user to be rejected with a 403 response in the event of a collision close to reaching their quota limit, even if the user has enough quota available when they made the request. server_group_members = 10 integer value The maximum number of servers per server group. Possible values: A positive integer or 0. -1 to disable the quota. server_groups = 10 integer value The maximum number of server groups per project. Server groups are used to control the affinity and anti-affinity scheduling policy for a group of servers or instances. Reducing the quota will not affect any existing group, but new servers will not be allowed into groups that have become over quota. Possible values: A positive integer or 0. -1 to disable the quota. 9.1.44. rdp The following table outlines the options available under the [rdp] group in the /etc/nova/nova.conf file. Table 9.43. rdp Configuration option = Default value Type Description enabled = False boolean value Enable Remote Desktop Protocol (RDP) related features. 
Hyper-V, unlike the majority of the hypervisors employed on Nova compute nodes, uses RDP instead of VNC and SPICE as a desktop sharing protocol to provide instance console access. This option enables RDP for graphical console access for virtual machines created by Hyper-V. Note: RDP should only be enabled on compute nodes that support the Hyper-V virtualization platform. Related options: compute_driver : Must be hyperv. html5_proxy_base_url = http://127.0.0.1:6083/ uri value The URL an end user would use to connect to the RDP HTML5 console proxy. The console proxy service is called with this token-embedded URL and establishes the connection to the proper instance. An RDP HTML5 console proxy service will need to be configured to listen on the address configured here. Typically the console proxy service would be run on a controller node. The localhost address used as default would only work in a single node environment i.e. devstack. An RDP HTML5 proxy allows a user to access via the web the text or graphical console of any Windows server or workstation using RDP. RDP HTML5 console proxy services include FreeRDP, wsgate. See https://github.com/FreeRDP/FreeRDP-WebConnect Possible values: <scheme>://<ip-address>:<port-number>/ Related options: rdp.enabled : Must be set to True for html5_proxy_base_url to be effective. 9.1.45. remote_debug The following table outlines the options available under the [remote_debug] group in the /etc/nova/nova.conf file. Table 9.44. remote_debug Configuration option = Default value Type Description host = None host address value Debug host (IP or name) to connect to. This command line parameter is used when you want to connect to a nova service via a debugger running on a different host. Note that using the remote debug option changes how nova uses the eventlet library to support async IO. This could result in failures that do not occur under normal operation. Use at your own risk. Possible Values: IP address of a remote host as a command line parameter to a nova service. For example nova-compute --config-file /etc/nova/nova.conf --remote_debug-host <IP address of the debugger> port = None port value Debug port to connect to. This command line parameter allows you to specify the port you want to use to connect to a nova service via a debugger running on different host. Note that using the remote debug option changes how nova uses the eventlet library to support async IO. This could result in failures that do not occur under normal operation. Use at your own risk. Possible Values: Port number you want to use as a command line parameter to a nova service. For example nova-compute --config-file /etc/nova/nova.conf --remote_debug-host <IP address of the debugger> --remote_debug-port <port debugger is listening on>. 9.1.46. scheduler The following table outlines the options available under the [scheduler] group in the /etc/nova/nova.conf file. Table 9.45. scheduler Configuration option = Default value Type Description discover_hosts_in_cells_interval = -1 integer value Periodic task interval. This value controls how often (in seconds) the scheduler should attempt to discover new hosts that have been added to cells. If negative (the default), no automatic discovery will occur. Deployments where compute nodes come and go frequently may want this enabled, where others may prefer to manually discover hosts when one is added to avoid any overhead from constantly checking. 
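As a minimal sketch (the interval below is an assumed example value, not a recommendation), enabling automatic discovery every five minutes would look like:

[scheduler]
# Assumed example: look for unmapped compute hosts every 300 seconds
discover_hosts_in_cells_interval = 300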
If enabled, every time this runs, we will select any unmapped hosts out of each cell database on every run. Possible values: An integer, where the integer corresponds to periodic task interval in seconds. 0 uses the default interval (60 seconds). A negative value disables periodic tasks. enable_isolated_aggregate_filtering = False boolean value Restrict use of aggregates to instances with matching metadata. This setting allows the scheduler to restrict hosts in aggregates based on matching required traits in the aggregate metadata and the instance flavor/image. If an aggregate is configured with a property with key trait:USDTRAIT_NAME and value required , the instance flavor extra_specs and/or image metadata must also contain trait:USDTRAIT_NAME=required to be eligible to be scheduled to hosts in that aggregate. More technical details at https://docs.openstack.org/nova/latest/reference/isolate-aggregates.html Possible values: A boolean value. image_metadata_prefilter = False boolean value Use placement to filter hosts based on image metadata. This setting causes the scheduler to transform well known image metadata properties into placement required traits to filter host based on image metadata. This feature requires host support and is currently supported by the following compute drivers: libvirt.LibvirtDriver (since Ussuri (21.0.0)) Possible values: A boolean value. Related options: [compute] compute_driver limit_tenants_to_placement_aggregate = False boolean value Restrict tenants to specific placement aggregates. This setting causes the scheduler to look up a host aggregate with the metadata key of filter_tenant_id set to the project of an incoming request, and request results from placement be limited to that aggregate. Multiple tenants may be added to a single aggregate by appending a serial number to the key, such as filter_tenant_id:123 . The matching aggregate UUID must be mirrored in placement for proper operation. If no host aggregate with the tenant id is found, or that aggregate does not match one in placement, the result will be the same as not finding any suitable hosts for the request. Possible values: A boolean value. Related options: [scheduler] placement_aggregate_required_for_tenants max_attempts = 3 integer value The maximum number of schedule attempts. This is the maximum number of attempts that will be made for a given instance build/move operation. It limits the number of alternate hosts returned by the scheduler. When that list of hosts is exhausted, a MaxRetriesExceeded exception is raised and the instance is set to an error state. Possible values: A positive integer, where the integer corresponds to the max number of attempts that can be made when building or moving an instance. max_placement_results = 1000 integer value The maximum number of placement results to request. This setting determines the maximum limit on results received from the placement service during a scheduling operation. It effectively limits the number of hosts that may be considered for scheduling requests that match a large number of candidates. A value of 1 (the minimum) will effectively defer scheduling to the placement service strictly on "will it fit" grounds. A higher value will put an upper cap on the number of results the scheduler will consider during the filtering and weighing process. Large deployments may need to set this lower than the total number of hosts available to limit memory consumption, network traffic, etc. of the scheduler. 
Possible values: An integer, where the integer corresponds to the number of placement results to return. placement_aggregate_required_for_tenants = False boolean value Require a placement aggregate association for all tenants. This setting, when limit_tenants_to_placement_aggregate=True, will control whether or not a tenant with no aggregate affinity will be allowed to schedule to any available node. If aggregates are used to limit some tenants but not all, then this should be False. If all tenants should be confined via aggregate, then this should be True to prevent them from receiving unrestricted scheduling to any available node. Possible values: A boolean value. Related options: [scheduler] placement_aggregate_required_for_tenants query_placement_for_availability_zone = False boolean value Use placement to determine availability zones. This setting causes the scheduler to look up a host aggregate with the metadata key of availability_zone set to the value provided by an incoming request, and request results from placement be limited to that aggregate. The matching aggregate UUID must be mirrored in placement for proper operation. If no host aggregate with the availability_zone key is found, or that aggregate does not match one in placement, the result will be the same as not finding any suitable hosts. Note that if you enable this flag, you can disable the (less efficient) AvailabilityZoneFilter in the scheduler. Possible values: A boolean value. Related options: [filter_scheduler] enabled_filters query_placement_for_image_type_support = False boolean value Use placement to determine host support for the instance's image type. This setting causes the scheduler to ask placement only for compute hosts that support the disk_format of the image used in the request. Possible values: A boolean value. query_placement_for_routed_network_aggregates = False boolean value Enable the scheduler to filter compute hosts affined to routed network segment aggregates. See https://docs.openstack.org/neutron/latest/admin/config-routed-networks.html for details. workers = None integer value Number of workers for the nova-scheduler service. Defaults to the number of CPUs available. Possible values: An integer, where the integer corresponds to the number of worker processes. 9.1.47. serial_console The following table outlines the options available under the [serial_console] group in the /etc/nova/nova.conf file. Table 9.46. serial_console Configuration option = Default value Type Description base_url = ws://127.0.0.1:6083/ uri value The URL an end user would use to connect to the nova-serialproxy service. The nova-serialproxy service is called with this token enriched URL and establishes the connection to the proper instance. Related options: The IP address must be identical to the address to which the nova-serialproxy service is listening (see option serialproxy_host in this section). The port must be the same as in the option serialproxy_port of this section. If you choose to use a secured websocket connection, then start this option with wss:// instead of the unsecured ws:// . The options cert and key in the [DEFAULT] section have to be set for that. enabled = False boolean value Enable the serial console feature. In order to use this feature, the service nova-serialproxy needs to run. This service is typically executed on the controller node. port_range = 10000:20000 string value A range of TCP ports a guest can use for its backend. Each instance which gets created will use one port out of this range. 
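For example, a node that should expose serial consoles over a narrower, firewall-friendly range might be configured as follows (the range below is an illustrative assumption):

[serial_console]
# Illustrative example: enable the feature and reserve 1000 backend ports
enabled = True
port_range = 30000:31000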
If the range is not big enough to provide another port for a new instance, this instance won't get launched. Possible values: Each string which passes the regex ^\d+:\d+$ , for example 10000:20000. Be sure that the first port number is lower than the second port number and that both are in range from 0 to 65535. proxyclient_address = 127.0.0.1 string value The IP address to which proxy clients (like nova-serialproxy ) should connect to get the serial console of an instance. This is typically the IP address of the host of a nova-compute service. serialproxy_host = 0.0.0.0 string value The IP address which is used by the nova-serialproxy service to listen for incoming requests. The nova-serialproxy service listens on this IP address for incoming connection requests to instances which expose a serial console. Related options: Ensure that this is the same IP address which is defined in the option base_url of this section or use 0.0.0.0 to listen on all addresses. serialproxy_port = 6083 port value The port number which is used by the nova-serialproxy service to listen for incoming requests. The nova-serialproxy service listens on this port number for incoming connection requests to instances which expose a serial console. Related options: Ensure that this is the same port number which is defined in the option base_url of this section. 9.1.48. service_user The following table outlines the options available under the [service_user] group in the /etc/nova/nova.conf file. Table 9.47. service_user Configuration option = Default value Type Description auth-url = None string value Authentication URL auth_section = None string value Config Section from which to load plugin specific options auth_type = None string value Authentication type to load cafile = None string value PEM encoded Certificate Authority to use when verifying HTTPS connections. certfile = None string value PEM encoded client certificate cert file collect-timing = False boolean value Collect per-API call timing information. default-domain-id = None string value Optional domain ID to use with v3 and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. default-domain-name = None string value Optional domain name to use with v3 API and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. domain-id = None string value Domain ID to scope to domain-name = None string value Domain name to scope to insecure = False boolean value Verify HTTPS connections. keyfile = None string value PEM encoded client certificate key file password = None string value User's password project-domain-id = None string value Domain ID containing project project-domain-name = None string value Domain name containing project project-id = None string value Project ID to scope to project-name = None string value Project name to scope to send_service_user_token = False boolean value When True, if sending a user token to a REST API, also send a service token. Nova often reuses the user token provided to the nova-api to talk to other REST APIs, such as Cinder, Glance and Neutron. It is possible that while the user token was valid when the request was made to Nova, the token may expire before it reaches the other service. To avoid any failures, and to make it clear it is Nova calling the service on the user's behalf, we include a service token along with the user token. 
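A minimal sketch of such a configuration, assuming a dedicated nova service user exists in Keystone (every credential value below is a placeholder, not a real default):

[service_user]
send_service_user_token = True
auth_type = password
# Placeholder endpoint and credentials; substitute your own
auth_url = http://keystone.example.com:5000/v3
username = nova
password = SERVICE_PASSWORD
project_name = service
user_domain_name = Default
project_domain_name = Default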
Should the user's token have expired, a valid service token ensures the REST API request will still be accepted by the keystone middleware. split-loggers = False boolean value Log requests to multiple loggers. system-scope = None string value Scope for system operations tenant-id = None string value Tenant ID tenant-name = None string value Tenant Name timeout = None integer value Timeout value for http requests trust-id = None string value Trust ID user-domain-id = None string value User's domain id user-domain-name = None string value User's domain name user-id = None string value User ID username = None string value Username 9.1.49. spice The following table outlines the options available under the [spice] group in the /etc/nova/nova.conf file. Table 9.48. spice Configuration option = Default value Type Description agent_enabled = True boolean value Enable the SPICE guest agent support on the instances. The Spice agent works with the Spice protocol to offer a better guest console experience. However, the Spice console can still be used without the Spice Agent. With the Spice agent installed the following features are enabled: Copy & Paste of text and images between the guest and client machine Automatic adjustment of resolution when the client screen changes - e.g. if you make the Spice console full screen the guest resolution will adjust to match it rather than letterboxing. Better mouse integration - The mouse can be captured and released without needing to click inside the console or press keys to release it. The performance of mouse movement is also improved. enabled = False boolean value Enable SPICE related features. Related options: VNC must be explicitly disabled to get access to the SPICE console. Set the enabled option to False in the [vnc] section to disable the VNC console. html5proxy_base_url = http://127.0.0.1:6082/spice_auto.html uri value Location of the SPICE HTML5 console proxy. End user would use this URL to connect to the nova-spicehtml5proxy service. This service will forward request to the console of an instance. In order to use SPICE console, the service nova-spicehtml5proxy should be running. This service is typically launched on the controller node. Possible values: Must be a valid URL of the form: http://host:port/spice_auto.html where host is the node running nova-spicehtml5proxy and the port is typically 6082. Consider not using default value as it is not well defined for any real deployment. Related options: This option depends on html5proxy_host and html5proxy_port options. The access URL returned by the compute node must have the host and port where the nova-spicehtml5proxy service is listening. html5proxy_host = 0.0.0.0 host address value IP address or a hostname on which the nova-spicehtml5proxy service listens for incoming requests. Related options: This option depends on the html5proxy_base_url option. The nova-spicehtml5proxy service must be listening on a host that is accessible from the HTML5 client. html5proxy_port = 6082 port value Port on which the nova-spicehtml5proxy service listens for incoming requests. Related options: This option depends on the html5proxy_base_url option. The nova-spicehtml5proxy service must be listening on a port that is accessible from the HTML5 client. server_listen = 127.0.0.1 string value The address where the SPICE server running on the instances should listen. Typically, the nova-spicehtml5proxy proxy client runs on the controller node and connects over the private network to this address on the compute node(s). 
Possible values: IP address to listen on. server_proxyclient_address = 127.0.0.1 string value The address used by nova-spicehtml5proxy client to connect to instance console. Typically, the nova-spicehtml5proxy proxy client runs on the controller node and connects over the private network to this address on the compute node(s). Possible values: Any valid IP address on the compute node. Related options: This option depends on the server_listen option. The proxy client must be able to access the address specified in server_listen using the value of this option. 9.1.50. upgrade_levels The following table outlines the options available under the [upgrade_levels] group in the /etc/nova/nova.conf file. Table 9.49. upgrade_levels Configuration option = Default value Type Description baseapi = None string value Base API RPC API version cap. Possible values: By default send the latest version the client knows about A string representing a version number in the format N.N ; for example, possible values might be 1.12 or 2.0 . An OpenStack release name, in lower case, such as mitaka or liberty . cert = None string value Cert RPC API version cap. Possible values: By default send the latest version the client knows about A string representing a version number in the format N.N ; for example, possible values might be 1.12 or 2.0 . An OpenStack release name, in lower case, such as mitaka or liberty . Deprecated since: 18.0.0 Reason: The nova-cert service was removed in 16.0.0 (Pike) so this option is no longer used. compute = None string value Compute RPC API version cap. By default, we always send messages using the most recent version the client knows about. Where you have old and new compute services running, you should set this to the lowest deployed version. This is to guarantee that all services never send messages that one of the compute nodes can't understand. Note that we only support upgrading from release N to release N+1. Set this option to "auto" if you want to let the compute RPC module automatically determine what version to use based on the service versions in the deployment. Possible values: By default send the latest version the client knows about auto : Automatically determines what version to use based on the service versions in the deployment. A string representing a version number in the format N.N ; for example, possible values might be 1.12 or 2.0 . An OpenStack release name, in lower case, such as mitaka or liberty . conductor = None string value Conductor RPC API version cap. Possible values: By default send the latest version the client knows about A string representing a version number in the format N.N ; for example, possible values might be 1.12 or 2.0 . An OpenStack release name, in lower case, such as mitaka or liberty . scheduler = None string value Scheduler RPC API version cap. Possible values: By default send the latest version the client knows about A string representing a version number in the format N.N ; for example, possible values might be 1.12 or 2.0 . An OpenStack release name, in lower case, such as mitaka or liberty . 9.1.51. vault The following table outlines the options available under the [vault] group in the /etc/nova/nova.conf file. Table 9.50. 
vault Configuration option = Default value Type Description approle_role_id = None string value AppRole role_id for authentication with vault approle_secret_id = None string value AppRole secret_id for authentication with vault kv_mountpoint = secret string value Mountpoint of KV store in Vault to use, for example: secret kv_version = 2 integer value Version of KV store in Vault to use, for example: 2 root_token_id = None string value root token for vault ssl_ca_crt_file = None string value Absolute path to ca cert file use_ssl = False boolean value SSL Enabled/Disabled vault_url = http://127.0.0.1:8200 string value Use this endpoint to connect to Vault, for example: "http://127.0.0.1:8200" 9.1.52. vendordata_dynamic_auth The following table outlines the options available under the [vendordata_dynamic_auth] group in the /etc/nova/nova.conf file. Table 9.51. vendordata_dynamic_auth Configuration option = Default value Type Description auth-url = None string value Authentication URL auth_section = None string value Config Section from which to load plugin specific options auth_type = None string value Authentication type to load cafile = None string value PEM encoded Certificate Authority to use when verifying HTTPs connections. certfile = None string value PEM encoded client certificate cert file collect-timing = False boolean value Collect per-API call timing information. default-domain-id = None string value Optional domain ID to use with v3 and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. default-domain-name = None string value Optional domain name to use with v3 API and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. domain-id = None string value Domain ID to scope to domain-name = None string value Domain name to scope to insecure = False boolean value Verify HTTPS connections. keyfile = None string value PEM encoded client certificate key file password = None string value User's password project-domain-id = None string value Domain ID containing project project-domain-name = None string value Domain name containing project project-id = None string value Project ID to scope to project-name = None string value Project name to scope to split-loggers = False boolean value Log requests to multiple loggers. system-scope = None string value Scope for system operations tenant-id = None string value Tenant ID tenant-name = None string value Tenant Name timeout = None integer value Timeout value for http requests trust-id = None string value Trust ID user-domain-id = None string value User's domain id user-domain-name = None string value User's domain name user-id = None string value User ID username = None string value Username 9.1.53. vmware The following table outlines the options available under the [vmware] group in the /etc/nova/nova.conf file. Table 9.52. vmware Configuration option = Default value Type Description api_retry_count = 10 integer value Number of times VMware vCenter server API must be retried on connection failures, e.g. socket error, etc. ca_file = None string value Specifies the CA bundle file to be used in verifying the vCenter server certificate. cache_prefix = None string value This option adds a prefix to the folder where cached images are stored This is not the full path - just a folder prefix. This should only be used when a datastore cache is shared between compute nodes. 
Note: This should only be used when the compute nodes are running on same host or they have a shared file system. Possible values: Any string representing the cache prefix to the folder cluster_name = None string value Name of a VMware Cluster ComputeResource. connection_pool_size = 10 integer value This option sets the http connection pool size The connection pool size is the maximum number of connections from nova to vSphere. It should only be increased if there are warnings indicating that the connection pool is full, otherwise, the default should suffice. console_delay_seconds = None integer value Set this value if affected by an increased network latency causing repeated characters when typing in a remote console. datastore_regex = None string value Regular expression pattern to match the name of datastore. The datastore_regex setting specifies the datastores to use with Compute. For example, datastore_regex="nas.*" selects all the data stores that have a name starting with "nas". Note If no regex is given, it just picks the datastore with the most freespace. Possible values: Any matching regular expression to a datastore must be given host_ip = None host address value Hostname or IP address for connection to VMware vCenter host. host_password = None string value Password for connection to VMware vCenter host. host_port = 443 port value Port for connection to VMware vCenter host. host_username = None string value Username for connection to VMware vCenter host. insecure = False boolean value If true, the vCenter server certificate is not verified. If false, then the default CA truststore is used for verification. Related options: * ca_file: This option is ignored if "ca_file" is set. integration_bridge = None string value This option should be configured only when using the NSX-MH Neutron plugin. This is the name of the integration bridge on the ESXi server or host. This should not be set for any other Neutron plugin. Hence the default value is not set. Possible values: Any valid string representing the name of the integration bridge maximum_objects = 100 integer value This option specifies the limit on the maximum number of objects to return in a single result. A positive value will cause the operation to suspend the retrieval when the count of objects reaches the specified limit. The server may still limit the count to something less than the configured value. Any remaining objects may be retrieved with additional requests. pbm_default_policy = None string value This option specifies the default policy to be used. If pbm_enabled is set and there is no defined storage policy for the specific request, then this policy will be used. Possible values: Any valid storage policy such as VSAN default storage policy Related options: pbm_enabled pbm_enabled = False boolean value This option enables or disables storage policy based placement of instances. Related options: pbm_default_policy pbm_wsdl_location = None string value This option specifies the PBM service WSDL file location URL. Setting this will disable storage policy based placement of instances. Possible values: Any valid file path e.g file:///opt/SDK/spbm/wsdl/pbmService.wsdl serial_log_dir = /opt/vmware/vspc string value Specifies the directory where the Virtual Serial Port Concentrator is storing console log files. It should match the serial_log_dir config value of VSPC. serial_port_proxy_uri = None uri value Identifies a proxy service that provides network access to the serial_port_service_uri. 
Possible values: Any valid URI (The scheme is telnet or telnets .) Related options: This option is ignored if serial_port_service_uri is not specified. * serial_port_service_uri serial_port_service_uri = None string value Identifies the remote system where the serial port traffic will be sent. This option adds a virtual serial port which sends console output to a configurable service URI. At the service URI address there will be virtual serial port concentrator that will collect console logs. If this is not set, no serial ports will be added to the created VMs. Possible values: Any valid URI task_poll_interval = 0.5 floating point value Time interval in seconds to poll remote tasks invoked on VMware VC server. use_linked_clone = True boolean value This option enables/disables the use of linked clone. The ESX hypervisor requires a copy of the VMDK file in order to boot up a virtual machine. The compute driver must download the VMDK via HTTP from the OpenStack Image service to a datastore that is visible to the hypervisor and cache it. Subsequent virtual machines that need the VMDK use the cached version and don't have to copy the file again from the OpenStack Image service. If set to false, even with a cached VMDK, there is still a copy operation from the cache location to the hypervisor file directory in the shared datastore. If set to true, the above copy operation is avoided as it creates copy of the virtual machine that shares virtual disks with its parent VM. vnc_keymap = en-us string value Keymap for VNC. The keyboard mapping (keymap) determines which keyboard layout a VNC session should use by default. Possible values: A keyboard layout which is supported by the underlying hypervisor on this node. This is usually an IETF language tag (for example en-us ). vnc_port = 5900 port value This option specifies VNC starting port. Every VM created by ESX host has an option of enabling VNC client for remote connection. Above option vnc_port helps you to set default starting port for the VNC client. Possible values: Any valid port number within 5900 -(5900 + vnc_port_total) Related options: Below options should be set to enable VNC client. * vnc.enabled = True * vnc_port_total vnc_port_total = 10000 integer value Total number of VNC ports. 9.1.54. vnc The following table outlines the options available under the [vnc] group in the /etc/nova/nova.conf file. Table 9.53. vnc Configuration option = Default value Type Description auth_schemes = ['none'] list value The authentication schemes to use with the compute node. Control what RFB authentication schemes are permitted for connections between the proxy and the compute host. If multiple schemes are enabled, the first matching scheme will be used, thus the strongest schemes should be listed first. Related options: [vnc]vencrypt_client_key , [vnc]vencrypt_client_cert : must also be set enabled = True boolean value Enable VNC related features. Guests will get created with graphical devices to support this. Clients (for example Horizon) can then establish a VNC connection to the guest. novncproxy_base_url = http://127.0.0.1:6080/vnc_auto.html uri value Public address of noVNC VNC console proxy. The VNC proxy is an OpenStack component that enables compute service users to access their instances through VNC clients. noVNC provides VNC support through a websocket-based client. This option sets the public base URL to which client systems will connect. noVNC clients can use this address to connect to the noVNC instance and, by extension, the VNC sessions. 
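For instance, a deployment whose noVNC proxy is reachable at a public hostname might set the following (the hostname is a placeholder):

[vnc]
enabled = True
# Placeholder hostname; use an address that client browsers can actually reach
novncproxy_base_url = http://novnc.example.com:6080/vnc_auto.html
novncproxy_host = 0.0.0.0
novncproxy_port = 6080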
If using noVNC >= 1.0.0, you should use vnc_lite.html instead of vnc_auto.html . Related options: novncproxy_host novncproxy_port novncproxy_host = 0.0.0.0 string value IP address that the noVNC console proxy should bind to. The VNC proxy is an OpenStack component that enables compute service users to access their instances through VNC clients. noVNC provides VNC support through a websocket-based client. This option sets the private address to which the noVNC console proxy service should bind to. Related options: novncproxy_port novncproxy_base_url novncproxy_port = 6080 port value Port that the noVNC console proxy should bind to. The VNC proxy is an OpenStack component that enables compute service users to access their instances through VNC clients. noVNC provides VNC support through a websocket-based client. This option sets the private port to which the noVNC console proxy service should bind to. Related options: novncproxy_host novncproxy_base_url server_listen = 127.0.0.1 host address value The IP address or hostname on which an instance should listen to for incoming VNC connection requests on this node. server_proxyclient_address = 127.0.0.1 host address value Private, internal IP address or hostname of VNC console proxy. The VNC proxy is an OpenStack component that enables compute service users to access their instances through VNC clients. This option sets the private address to which proxy clients, such as nova-novncproxy , should connect to. vencrypt_ca_certs = None string value The path to the CA certificate PEM file The fully qualified path to a PEM file containing one or more x509 certificates for the certificate authorities used by the compute node VNC server. Related options: vnc.auth_schemes : must include vencrypt vencrypt_client_cert = None string value The path to the client key file (for x509) The fully qualified path to a PEM file containing the x509 certificate which the VNC proxy server presents to the compute node during VNC authentication. Realted options: vnc.auth_schemes : must include vencrypt vnc.vencrypt_client_key : must also be set vencrypt_client_key = None string value The path to the client certificate PEM file (for x509) The fully qualified path to a PEM file containing the private key which the VNC proxy server presents to the compute node during VNC authentication. Related options: vnc.auth_schemes : must include vencrypt vnc.vencrypt_client_cert : must also be set 9.1.55. workarounds The following table outlines the options available under the [workarounds] group in the /etc/nova/nova.conf file. Table 9.54. workarounds Configuration option = Default value Type Description disable_compute_service_check_for_ffu = False boolean value If this is set, the normal safety check for old compute services will be treated as a warning instead of an error. This is only to be enabled to facilitate a Fast-Forward upgrade where new control services are being started before compute nodes have been able to update their service record. In an FFU, the service records in the database will be more than one version old until the compute nodes start up, but control services need to be online first. disable_fallback_pcpu_query = False boolean value Disable fallback request for VCPU allocations when using pinned instances. Starting in Train, compute nodes using the libvirt virt driver can report PCPU inventory and will use this for pinned instances. 
The scheduler will automatically translate requests using the legacy CPU pinning-related flavor extra specs, hw:cpu_policy and hw:cpu_thread_policy , their image metadata property equivalents, and the emulator threads pinning flavor extra spec, hw:emulator_threads_policy , to new placement requests. However, compute nodes require additional configuration in order to report PCPU inventory and this configuration may not be present immediately after an upgrade. To ensure pinned instances can be created without this additional configuration, the scheduler will make a second request to placement for old-style VCPU -based allocations and fallback to these allocation candidates if necessary. This has a slight performance impact and is not necessary on new or upgraded deployments where the new configuration has been set on all hosts. By setting this option, the second lookup is disabled and the scheduler will only request PCPU -based allocations. Deprecated since: 20.0.0 *Reason:*None disable_group_policy_check_upcall = False boolean value Disable the server group policy check upcall in compute. In order to detect races with server group affinity policy, the compute service attempts to validate that the policy was not violated by the scheduler. It does this by making an upcall to the API database to list the instances in the server group for one that it is booting, which violates our api/cell isolation goals. Eventually this will be solved by proper affinity guarantees in the scheduler and placement service, but until then, this late check is needed to ensure proper affinity policy. Operators that desire api/cell isolation over this check should enable this flag, which will avoid making that upcall from compute. Related options: [filter_scheduler]/track_instance_changes also relies on upcalls from the compute service to the scheduler service. disable_libvirt_livesnapshot = False boolean value Disable live snapshots when using the libvirt driver. Live snapshots allow the snapshot of the disk to happen without an interruption to the guest, using coordination with a guest agent to quiesce the filesystem. When using libvirt 1.2.2 live snapshots fail intermittently under load (likely related to concurrent libvirt/qemu operations). This config option provides a mechanism to disable live snapshot, in favor of cold snapshot, while this is resolved. Cold snapshot causes an instance outage while the guest is going through the snapshotting process. For more information, refer to the bug report: Possible values: True: Live snapshot is disabled when using libvirt False: Live snapshots are always used when snapshotting (as long as there is a new enough libvirt and the backend storage supports it) Deprecated since: 19.0.0 Reason: This option was added to work around issues with libvirt 1.2.2. We no longer support this version of libvirt, which means this workaround is no longer necessary. It will be removed in a future release. disable_native_luksv1 = False boolean value When attaching encrypted LUKSv1 Cinder volumes to instances the Libvirt driver configures the encrypted disks to be natively decrypted by QEMU. A performance issue has been discovered in the libgcrypt library used by QEMU that serverly limits the I/O performance in this scenario. 
For more information please refer to the following bug report: RFE: hardware accelerated AES-XTS mode https://bugzilla.redhat.com/show_bug.cgi?id=1762765 Enabling this workaround option will cause Nova to use the legacy dm-crypt based os-brick encryptor to decrypt the LUKSv1 volume. Note that enabling this option while using volumes that do not provide a host block device such as Ceph will result in a failure to boot from or attach the volume to an instance. See the [workarounds]/rbd_block_device option for a way to avoid this for RBD. Related options: compute_driver (libvirt) rbd_block_device (workarounds) Deprecated since: 23.0.0 Reason: The underlying performance regression within libgcrypt that prompted this workaround has been resolved as of 1.8.5 disable_rootwrap = False boolean value Use sudo instead of rootwrap. Allow fallback to sudo for performance reasons. For more information, refer to the bug report: Possible values: True: Use sudo instead of rootwrap False: Use rootwrap as usual Interdependencies to other options: Any options that affect rootwrap will be ignored. enable_numa_live_migration = False boolean value Enable live migration of instances with NUMA topologies. Live migration of instances with NUMA topologies when using the libvirt driver is only supported in deployments that have been fully upgraded to Train. In versions, or in mixed Stein/Train deployments with a rolling upgrade in progress, live migration of instances with NUMA topologies is disabled by default when using the libvirt driver. This includes live migration of instances with CPU pinning or hugepages. CPU pinning and huge page information for such instances is not currently re-calculated, as noted in `bug #1289064`_. This means that if instances were already present on the destination host, the migrated instance could be placed on the same dedicated cores as these instances or use hugepages allocated for another instance. Alternately, if the host platforms were not homogeneous, the instance could be assigned to non-existent cores or be inadvertently split across host NUMA nodes. Despite these known issues, there may be cases where live migration is necessary. By enabling this option, operators that are aware of the issues and are willing to manually work around them can enable live migration support for these instances. Related options: compute_driver : Only the libvirt driver is affected. _bug #1289064: https://bugs.launchpad.net/nova/+bug/1289064 Deprecated since: 20.0.0 *Reason:*This option was added to mitigate known issues when live migrating instances with a NUMA topology with the libvirt driver. Those issues are resolved in Train. Clouds using the libvirt driver and fully upgraded to Train support NUMA-aware live migration. This option will be removed in a future release. enable_qemu_monitor_announce_self = False boolean value If it is set to True the libvirt driver will try as a best effort to send the announce-self command to the QEMU monitor so that it generates RARP frames to update network switches in the post live migration phase on the destination. Please note that this causes the domain to be considered tainted by libvirt. Related options: :oslo.config:option: DEFAULT.compute_driver (libvirt) ensure_libvirt_rbd_instance_dir_cleanup = False boolean value Ensure the instance directory is removed during clean up when using rbd. When enabled this workaround will ensure that the instance directory is always removed during cleanup on hosts using [libvirt]/images_type=rbd . 
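A sketch of enabling this cleanup on an RBD-backed compute node (assuming the host already uses the rbd image backend):

[workarounds]
ensure_libvirt_rbd_instance_dir_cleanup = True

[libvirt]
images_type = rbd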
This avoids the following bugs with evacuation and revert resize clean up that lead to the instance directory remaining on the host: https://bugs.launchpad.net/nova/+bug/1414895 https://bugs.launchpad.net/nova/+bug/1761062 Both of these bugs can then result in DestinationDiskExists errors being raised if the instances ever attempt to return to the host. warning:: Operators will need to ensure that the instance directory itself, specified by [DEFAULT]/instances_path , is not shared between computes before enabling this workaround otherwise the console.log, kernels, ramdisks and any additional files being used by the running instance will be lost. Related options: compute_driver (libvirt) [libvirt]/images_type (rbd) instances_path handle_virt_lifecycle_events = True boolean value Enable handling of events emitted from compute drivers. Many compute drivers emit lifecycle events, which are events that occur when, for example, an instance is starting or stopping. If the instance is going through task state changes due to an API operation, like resize, the events are ignored. This is an advanced feature which allows the hypervisor to signal to the compute service that an unexpected state change has occurred in an instance and that the instance can be shutdown automatically. Unfortunately, this can race in some conditions, for example in reboot operations or when the compute service or when host is rebooted (planned or due to an outage). If such races are common, then it is advisable to disable this feature. Care should be taken when this feature is disabled and sync_power_state_interval is set to a negative value. In this case, any instances that get out of sync between the hypervisor and the Nova database will have to be synchronized manually. For more information, refer to the bug report: https://bugs.launchpad.net/bugs/1444630 Interdependencies to other options: If sync_power_state_interval is negative and this feature is disabled, then instances that get out of sync between the hypervisor and the Nova database will have to be synchronized manually. libvirt_disable_apic = False boolean value With some kernels initializing the guest apic can result in a kernel hang that renders the guest unusable. This happens as a result of a kernel bug. In most cases the correct fix it to update the guest image kernel to one that is patched however in some cases this is not possible. This workaround allows the emulation of an apic to be disabled per host however it is not recommended to use outside of a CI or developer cloud. never_download_image_if_on_rbd = False boolean value When booting from an image on a ceph-backed compute node, if the image does not already reside on the ceph cluster (as would be the case if glance is also using the same cluster), nova will download the image from glance and upload it to ceph itself. If using multiple ceph clusters, this may cause nova to unintentionally duplicate the image in a non-COW-able way in the local ceph deployment, wasting space. For more information, refer to the bug report: https://bugs.launchpad.net/nova/+bug/1858877 Enabling this option will cause nova to refuse to boot an instance if it would require downloading the image from glance and uploading it to ceph itself. Related options: compute_driver (libvirt) [libvirt]/images_type (rbd) rbd_volume_local_attach = False boolean value Attach RBD Cinder volumes to the compute as host block devices. 
When enabled this option instructs os-brick to connect RBD volumes locally on the compute host as block devices instead of natively through QEMU. This workaround does not currently support extending attached volumes. This can be used with the disable_native_luksv1 workaround configuration option to avoid the recently discovered performance issues found within the libgcrypt library. This workaround is temporary and will be removed during the W release once all impacted distributions have been able to update their versions of the libgcrypt library. Related options: compute_driver (libvirt) disable_qemu_native_luksv1 (workarounds) Deprecated since: 23.0.0 Reason: The underlying performance regression within libgcrypt that prompted this workaround has been resolved as of 1.8.5 reserve_disk_resource_for_image_cache = False boolean value If it is set to True then the libvirt driver will reserve DISK_GB resource for the images stored in the image cache. If the :oslo.config:option: DEFAULT.instances_path is on different disk partition than the image cache directory then the driver will not reserve resource for the cache. Such disk reservation is done by a periodic task in the resource tracker that runs every :oslo.config:option: update_resources_interval seconds. So the reservation is not updated immediately when an image is cached. Related options: :oslo.config:option: DEFAULT.instances_path :oslo.config:option: image_cache.subdirectory_name :oslo.config:option: update_resources_interval skip_cpu_compare_at_startup = False boolean value This will skip the CPU comparison call at the startup of Compute service and lets libvirt handle it. skip_cpu_compare_on_dest = False boolean value When this is enabled, it will skip CPU comparison on the destination host. When using QEMU >= 2.9 and libvirt >= 4.4.0, libvirt will do the correct thing with respect to checking CPU compatibility on the destination host during live migration. skip_hypervisor_version_check_on_lm = False boolean value When this is enabled, it will skip version-checking of hypervisors during live migration. wait_for_vif_plugged_event_during_hard_reboot = [] list value The libvirt virt driver implements power on and hard reboot by tearing down every vif of the instance being rebooted then plug them again. By default nova does not wait for network-vif-plugged event from neutron before it lets the instance run. This can cause the instance to requests the IP via DHCP before the neutron backend has a chance to set up the networking backend after the vif plug. This flag defines which vifs nova expects network-vif-plugged events from during hard reboot. The possible values are neutron port vnic types: normal direct macvtap baremetal direct-physical virtio-forwarder smart-nic vdpa accelerator-direct accelerator-direct-physical Adding a vnic_type to this configuration makes Nova wait for a network-vif-plugged event for each of the instance's vifs having the specific vnic_type before unpausing the instance, similarly to how new instance creation works. Please note that not all neutron networking backends send plug time events, for certain vnic_type therefore this config is empty by default. The ml2/ovs and the networking-odl backends are known to send plug time events for ports with normal vnic_type so it is safe to add normal to this config if you are using only those backends in the compute host. 
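For example, on compute hosts that only use the ml2/ovs backend, a plausible setting (an assumption to verify against your own networking backend) would be:

[workarounds]
# Wait for network-vif-plugged events for ports with the "normal" vnic_type
wait_for_vif_plugged_event_during_hard_reboot = normal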
The neutron in-tree SRIOV backend does not reliably send network-vif-plugged event during plug time for ports with direct vnic_type and never sends that event for port with direct-physical vnic_type during plug time. For other vnic_type and backend pairs, please consult the developers of the backend. Related options: :oslo.config:option: DEFAULT.vif_plugging_timeout 9.1.56. wsgi The following table outlines the options available under the [wsgi] group in the /etc/nova/nova.conf file. Table 9.55. wsgi Configuration option = Default value Type Description api_paste_config = api-paste.ini string value This option represents a file name for the paste.deploy config for nova-api. Possible values: A string representing file name for the paste.deploy config. client_socket_timeout = 900 integer value This option specifies the timeout for client connections' socket operations. If an incoming connection is idle for this number of seconds it will be closed. It indicates timeout on individual read/writes on the socket connection. To wait forever set to 0. default_pool_size = 1000 integer value This option specifies the size of the pool of greenthreads used by wsgi. It is possible to limit the number of concurrent connections using this option. keep_alive = True boolean value This option allows using the same TCP connection to send and receive multiple HTTP requests/responses, as opposed to opening a new one for every single request/response pair. HTTP keep-alive indicates HTTP connection reuse. Possible values: True : reuse HTTP connection. False : closes the client socket connection explicitly. Related options: tcp_keepidle max_header_line = 16384 integer value This option specifies the maximum line size of message headers to be accepted. max_header_line may need to be increased when using large tokens (typically those generated by the Keystone v3 API with big service catalogs). Since TCP is a stream based protocol, in order to reuse a connection, the HTTP has to have a way to indicate the end of the response and beginning of the . Hence, in a keep_alive case, all messages must have a self-defined message length. secure_proxy_ssl_header = None string value This option specifies the HTTP header used to determine the protocol scheme for the original request, even if it was removed by a SSL terminating proxy. Possible values: None (default) - the request scheme is not influenced by any HTTP headers Valid HTTP header, like HTTP_X_FORWARDED_PROTO Warning Do not set this unless you know what you are doing. Make sure ALL of the following are true before setting this (assuming the values from the example above): Your API is behind a proxy. Your proxy strips the X-Forwarded-Proto header from all incoming requests. In other words, if end users include that header in their requests, the proxy will discard it. Your proxy sets the X-Forwarded-Proto header and sends it to API, but only for requests that originally come in via HTTPS. If any of those are not true, you should keep this setting set to None. ssl_ca_file = None string value This option allows setting path to the CA certificate file that should be used to verify connecting clients. Possible values: String representing path to the CA certificate file. Related options: enabled_ssl_apis ssl_cert_file = None string value This option allows setting path to the SSL certificate of API server. Possible values: String representing path to the SSL certificate. 
Related options: enabled_ssl_apis ssl_key_file = None string value This option specifies the path to the file where SSL private key of API server is stored when SSL is in effect. Possible values: String representing path to the SSL private key. Related options: enabled_ssl_apis tcp_keepidle = 600 integer value This option sets the value of TCP_KEEPIDLE in seconds for each server socket. It specifies the duration of time to keep connection active. TCP generates a KEEPALIVE transmission for an application that requests to keep connection active. Not supported on OS X. Related options: keep_alive wsgi_log_format = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f string value It represents a python format string that is used as the template to generate log lines. The following values can be formatted into it: client_ip, date_time, request_line, status_code, body_length, wall_seconds. This option is used for building custom request loglines when running nova-api under eventlet. If used under uwsgi or apache, this option has no effect. Possible values: %(client_ip)s "%(request_line)s" status: %(status_code)s ' 'len: %(body_length)s time: %(wall_seconds).7f (default) Any formatted string formed by specific values. Deprecated since: 16.0.0 Reason: This option only works when running nova-api under eventlet, and encodes very eventlet specific pieces of information. Starting in Pike the preferred model for running nova-api is under uwsgi or apache mod_wsgi. 9.1.57. zvm The following table outlines the options available under the [zvm] group in the /etc/nova/nova.conf file. Table 9.56. zvm Configuration option = Default value Type Description ca_file = None string value CA certificate file to be verified in httpd server with TLS enabled A string, it must be a path to a CA bundle to use. cloud_connector_url = None uri value URL to be used to communicate with z/VM Cloud Connector. image_tmp_path = USDstate_path/images string value The path at which images will be stored (snapshot, deploy, etc). Images used for deploy and images captured via snapshot need to be stored on the local disk of the compute host. This configuration identifies the directory location. Possible values: A file system path on the host running the compute service. reachable_timeout = 300 integer value Timeout (seconds) to wait for an instance to start. The z/VM driver relies on communication between the instance and cloud connector. After an instance is created, it must have enough time to wait for all the network info to be written into the user directory. The driver will keep rechecking network status to the instance with the timeout value, If setting network failed, it will notify the user that starting the instance failed and put the instance in ERROR state. The underlying z/VM guest will then be deleted. Possible Values: Any positive integer. Recommended to be at least 300 seconds (5 minutes), but it will vary depending on instance and system load. A value of 0 is used for debug. In this case the underlying z/VM guest will not be deleted when the instance is marked in ERROR state.
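To tie the [zvm] options together, a minimal sketch of a z/VM compute node configuration might look like the following (the connector URL is a placeholder and the timeout is an illustrative value):

[zvm]
# Placeholder endpoint for the z/VM Cloud Connector
cloud_connector_url = https://zvm-connector.example.com:8080
image_tmp_path = $state_path/images
reachable_timeout = 300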
[ "This option does not affect `PCPU` inventory, which cannot be overcommitted.", "If this option is set to something *other than* `None` or `0.0`, the allocation ratio will be overwritten by the value of this option, otherwise, the allocation ratio will not change. Once set to a non-default value, it is not possible to \"unset\" the config to get back to the default behavior. If you want to reset back to the initial value, explicitly specify it to the value of `initial_cpu_allocation_ratio`.", "If the value is set to `>1`, we recommend keeping track of the free disk space, as the value approaching `0` may result in the incorrect functioning of instances using it at the moment.", "If this option is set to something *other than* `None` or `0.0`, the allocation ratio will be overwritten by the value of this option, otherwise, the allocation ratio will not change. Once set to a non-default value, it is not possible to \"unset\" the config to get back to the default behavior. If you want to reset back to the initial value, explicitly specify it to the value of `initial_disk_allocation_ratio`.", "https://cloudinit.readthedocs.io/en/latest/topics/datasources.html", "The following image properties are *never* inherited regardless of whether they are listed in this configuration option or not:", "If this option is set to something *other than* `None` or `0.0`, the allocation ratio will be overwritten by the value of this option, otherwise, the allocation ratio will not change. Once set to a non-default value, it is not possible to \"unset\" the config to get back to the default behavior. If you want to reset back to the initial value, explicitly specify it to the value of `initial_ram_allocation_ratio`.", "In this example we are reserving on NUMA node 0 64 pages of 2MiB and on NUMA node 1 1 page of 1GiB.", "The compute service cannot reliably determine which types of virtual interfaces (`port.binding:vif_type`) will send `network-vif-plugged` events without an accompanying port `binding:host_id` change. Open vSwitch and linuxbridge should be OK, but OpenDaylight is at least one known backend that will not currently work in this case, see bug https://launchpad.net/bugs/1755890 for more details.", "https://docs.openstack.org/nova/latest/admin/managing-resource-providers.html", "ssl_ciphers = \"kEECDH+aECDSA+AES:kEECDH+AES+aRSA:kEDH+aRSA+AES\"", "https://www.openssl.org/docs/man1.1.0/man1/ciphers.html", "[vgpu_nvidia-35] device_addresses = 0000:84:00.0,0000:85:00.0", "[vgpu_nvidia-36] device_addresses = 0000:86:00.0", "[filter_scheduler] hypervisor_version_weight_multiplier=-1000", "[filter_scheduler] hypervisor_version_weight_multiplier=2.5", "[filter_scheduler] hypervisor_version_weight_multiplier=0", "In a multi-cell (v2) setup where the cell MQ is separated from the top-level, computes cannot directly communicate with the scheduler. Thus, this option cannot be enabled in that scenario. 
See also the `[workarounds] disable_group_policy_check_upcall` option.", "64, 128, 256, 512, 1024", "This is only necessary if the URI differs to the commonly known URIs for the chosen virtualization type.", "[libvirt] cpu_mode = custom cpu_models = Cascadelake-Server cpu_model_extra_flags = -hle, -rtm, +ssbd, mtrr", "[libvirt] cpu_mode = custom cpu_models = Haswell-noTSX-IBRS cpu_model_extra_flags = -PDPE1GB, +VMX, pcid", "[libvirt] enabled_perf_events = cpu_clock, cache_misses", "It is recommended to read :ref:`the deployment documentation's section on this option <num_memory_encrypted_guests>` before deciding whether to configure this setting or leave it at the default.", "\"USDLABEL:USDNSNAME[&verbar;USDNSNAME][,USDLABEL:USDNSNAME[&verbar;USDNSNAME]]\"", "`name1=1.0, name2=-1.3`", "`(name1.value * 1.0) + (name2.value * -1.3)`", "[neutron] physnets = foo, bar", "[neutron_physnet_foo] numa_nodes = 0", "[neutron_physnet_bar] numa_nodes = 0,1", "[neutron_tunnel] numa_nodes = 1", "alias = { \"name\": \"QuickAssist\", \"product_id\": \"0443\", \"vendor_id\": \"8086\", \"device_type\": \"type-PCI\", \"numa_policy\": \"required\" }", "This defines an alias for the Intel QuickAssist card. (multi valued). Valid key values are :", "`name` Name of the PCI alias.", "`product_id` Product ID of the device in hexadecimal.", "`vendor_id` Vendor ID of the device in hexadecimal.", "`device_type` Type of PCI device. Valid values are: `type-PCI`, `type-PF` and `type-VF`. Note that `\"device_type\": \"type-PF\"` **must** be specified if you wish to passthrough a device that supports SR-IOV in its entirety.", "`numa_policy` Required NUMA affinity of device. Valid values are: `legacy`, `preferred` and `required`.", "Where `[` indicates zero or one occurrences, `{` indicates zero or multiple occurrences, and `&verbar;` mutually exclusive options. Note that any missing fields are automatically wildcarded.", "Valid key values are :", "`vendor_id` Vendor ID of the device in hexadecimal.", "`product_id` Product ID of the device in hexadecimal.", "`address` PCI address of the device. Both traditional glob style and regular expression syntax is supported. Please note that the address fields are restricted to the following maximum values:", "`devname` Device name of the device (for e.g. interface name). Not all PCI devices have a name.", "`<tag>` Additional `<tag>` and `<tag_value>` used for matching PCI devices. Supported `<tag>` values are :", "The scheme must be identical to the scheme configured for the RDP HTML5 console proxy service. It is `http` or `https`.", "The IP address must be identical to the address on which the RDP HTML5 console proxy service is listening.", "The port must be identical to the port on which the RDP HTML5 console proxy service is listening.", "https://bugs.launchpad.net/nova/+bug/1334398", "https://bugs.launchpad.net/nova/+bug/1415106" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/configuration_reference/nova_4
function::str_replace
function::str_replace Name function::str_replace - Replaces all instances of a substring with another. Synopsis Arguments prnt_str The string to search and replace in. srch_str The substring to search for in the prnt_str string. rplc_str The substring that replaces each occurrence of srch_str. General Syntax str_replace:string(prnt_str:string, srch_str:string, rplc_str:string) Description This function returns the given string with every occurrence of the search substring replaced.
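A minimal usage sketch, not part of the tapset reference itself: the probe below uses str_replace to collapse doubled slashes in a path before printing it. The input string is invented for illustration.
probe begin {
  # "/usr//bin//stap" becomes "/usr/bin/stap"
  printf("%s\n", str_replace("/usr//bin//stap", "//", "/"))
  exit()
}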
[ "function str_replace:string(prnt_str:string,srch_str:string,rplc_str:string)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-str-replace
Release notes for Red Hat build of OpenJDK 8.0.442
Release notes for Red Hat build of OpenJDK 8.0.442 Red Hat build of OpenJDK 8 Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/release_notes_for_red_hat_build_of_openjdk_8.0.442/index
Chapter 10. Uninstalling a cluster on OpenStack
Chapter 10. Uninstalling a cluster on OpenStack You can remove a cluster that you deployed to Red Hat OpenStack Platform (RHOSP). 10.1. Removing a cluster that uses installer-provisioned infrastructure You can remove a cluster that uses installer-provisioned infrastructure from your cloud. Note If you deployed your cluster to the AWS C2S Secret Region, the installation program does not support destroying the cluster; you must manually remove the cluster resources. Note After uninstallation, check your cloud provider for any resources not removed properly, especially with user-provisioned infrastructure clusters. There might be resources that the installation program did not create or that the installation program is unable to access. For example, some Google Cloud resources require IAM permissions in shared VPC host projects, or there might be unused health checks that must be deleted . Prerequisites You have a copy of the installation program that you used to deploy the cluster. You have the files that the installation program generated when you created your cluster. Procedure On the computer that you used to install the cluster, go to the directory that contains the installation program, and run the following command: USD ./openshift-install destroy cluster \ --dir <installation_directory> --log-level info 1 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different details, specify warn , debug , or error instead of info . Note You must specify the directory that contains the cluster definition files for your cluster. The installation program requires the metadata.json file in this directory to delete the cluster. Optional: Delete the <installation_directory> directory and the OpenShift Container Platform installation program.
[ "./openshift-install destroy cluster --dir <installation_directory> --log-level info 1 2" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/installing_on_openstack/uninstalling-cluster-openstack
function::remote_uri
function::remote_uri Name function::remote_uri - The name of this instance in a remote execution. Synopsis Arguments None Description This function returns the remote host used to invoke this particular script execution from a swarm of " stap --remote " runs. It may not be unique among the swarm. The function returns an empty string if the script was not launched with " stap --remote " .
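As an illustrative sketch only, a script can test the returned value to tell a remote run from a local one; the messages are arbitrary.
probe begin {
  if (remote_uri() != "")
    printf("invoked via stap --remote on %s\n", remote_uri())
  else
    printf("not a remote execution\n")
  exit()
}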
[ "remote_uri:string()" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-remote-uri
Chapter 1. Using the Ansible plug-ins
Chapter 1. Using the Ansible plug-ins You can use Ansible plug-ins for Red Hat Developer Hub (RHDH) to learn about Ansible, create automation projects, and access opinionated workflows and tools to develop and test your automation code. From the Red Hat Developer Hub UI, you can navigate to your Ansible Automation Platform instance, where you can configure and run automation jobs. This document describes how to use the Ansible plug-ins for Red Hat Developer Hub. It presents a worked example of developing a playbook project for automating updates to your firewall configuration on RHEL systems. 1.1. Optional requirement The Ansible plug-ins for Red Hat Developer Hub link to Learning Paths on the Red Hat developer portal, developers.redhat.com/learn . To access the Learning Paths, you must have a Red Hat account and you must be able to log in to developers.redhat.com . 1.2. Dashboard navigation When you log in to Red Hat Developer Hub (RHDH), the main RHDH menu and dashboard are displayed. To view the dashboard for Ansible plug-ins for Red Hat Developer Hub, click Ansible in the Red Hat Developer Hub navigation panel. The plug-in dashboard illustrates the steps you need to take from learning about Ansible to deploying automation jobs from Ansible Automation Platform: Overview displays the main dashboard page. Learn provides links to resources curated by Red Hat that introduce you to Ansible and provide step-by-step examples to get you started. For more information, see Learning about Ansible . Discover existing collections links to private automation hub, if configured in the plug-ins, or to automation hub hosted on the Red Hat Hybrid Cloud Console. Automation hub stores existing collections and execution environments that you can use in your projects. For more information, see Discovering existing collections . Create creates new projects in your configured Source Control Management platforms such as GitHub. For more information, see Creating a project . Develop links you to OpenShift Dev Spaces, if configured in the Ansible plug-ins installation. OpenShift Dev Spaces provides on-demand, web-based Integrated Development Environments (IDEs), where you can develop automation content. For more information, see Developing projects . Operate connects you to Ansible Automation Platform, where you can create and run automation jobs that use the projects you have developed. For more information, see Setting up a controller project to run your playbook project . 1.3. Learning about Ansible To learn more about getting started with automation, click Learn from the Overview page of the plug-in dashboard. The Learn page provides the following options for learning: Learning Paths lists a curated selection of learning tools hosted on developers.redhat.com that guide you through the foundations of working with Ansible, the Ansible VS Code extension, and using YAML. You can select other Ansible learning paths from the Useful links section. Labs are self-led labs that are designed to give you hands-on experience in writing Ansible content and using Ansible development tools. 1.4. Discovering existing collections From the Overview page in the Ansible plug-ins dashboard on Red Hat Developer Hub, click Discover Existing Collections . The links in this pane provide access to the source of reusable automation content collections that you configured during plug-in installation. 
If you configured private automation hub when installing the plug-in, you can click Go to Automation Hub to view the collections and execution environments that your enterprise has curated. If you did not configure a private automation hub URL when installing the plug-in, the Discover existing collection pane provides a link to Red Hat automation hub on console.redhat.com. You can explore certified and validated Ansible content collections on this site. 1.5. Creating a project Prerequisite Ensure you have the correct access (RBAC) to view the templates in Red Hat Developer Hub. Ask your administrator to assign access to you if necessary. Procedure: Log in to your Red Hat Developer Hub UI. Click the Ansible A icon in the Red Hat Developer Hub navigation panel. Navigate to the Overview page. Click Create . Click Create Ansible Git Project . The Available Templates page opens. Click Create Ansible Playbook project . In the Create Ansible Playbook Project page, enter information for your new project in the form. You can see sample values for this form in the Example project. Field Description Source code repository organization name or username The name of your source code repository username or organization name Playbook repository name The name of your new Git repository Playbook description (Optional) A description of the new playbook project Playbook project's collection namespace The new playbook Git project creates an example collection folder for you. Enter a value for the collection namespace. Playbook project's collection name The name of the collection Catalog Owner Name The name of the Developer Hub catalog item owner. This is a Red Hat Developer Hub field. Source code repository organization name or username The name of your source code repository username or organization name Playbook repository name The name of your new Git repository Playbook description (Optional) A description of the new playbook project System (Optional) This is a Red Hat Developer Hub field Note Collection namespaces must follow Python module naming conventions. Collections must have short, all lowercase names. You can use underscores in the collection name if it improves readability. For more information, see the Ansible Collection naming conventions documentation . Click Review . 1.6. Viewing your projects To view the projects that you have created in the plug-in, navigate to the Overview page for the Ansible plug-in and click My Items . 1.7. Developing projects 1.7.1. Developing projects on Dev Spaces OpenShift Dev Spaces is not included with your Ansible Automation Platform subscription or the Ansible plug-ins for Red Hat Developer Hub. The plug-ins provide context-aware links to edit your project in Dev Spaces. The Dev Spaces instance provides a default configuration that installs the Ansible VS Code extension and provides the Ansible command line tools. You can activate Ansible Lightspeed in the Ansible VS Code extension. For more information, refer to the Red Hat Ansible Lightspeed with IBM watsonx Code Assistant User Guide . 1.7.2. Executing automation tasks in Dev Spaces The default configuration for Dev Spaces provides access to the Ansible command line tools. To execute an automation task in Dev Spaces from the VSCode user interface, right-click a playbook name in the list of files and select Run Ansible Playbook via ansible-navigator run or Run playbook via ansible-playbook . 1.8. 
Setting up a controller project to run your playbook project Procedure The Ansible plug-ins provide a link to Ansible Automation Platform. Log in to your Red Hat Developer Hub UI. Click the Ansible A icon in the Red Hat Developer Hub navigation panel. Click Operate to display a link to your Ansible Automation Platform instance. If automation controller was not included in your plug-in installation, a link to the product feature page is displayed. Click Go to Ansible Automation Platform to open your platform instance in a new browser tab. Alternatively, if your platform instance was not configured during the Ansible plug-in installation, navigate to your automation controller instance in a browser and log in. Log in to automation controller. Create a project in Ansible Automation Platform for the GitHub repository where you stored your playbook project. Refer to the Projects chapter of the Automation controller user guide . Create a job template that uses a playbook from the project that you created. Refer to the Job Templates chapter of the Automation controller user guide .
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/using_ansible_plug-ins_for_red_hat_developer_hub/rhdh-using_aap-plugin-rhdh-using
Chapter 8. Configuring the node port service range
Chapter 8. Configuring the node port service range As a cluster administrator, you can expand the available node port range. If your cluster uses of a large number of node ports, you might need to increase the number of available ports. The default port range is 30000-32767 . You can never reduce the port range, even if you first expand it beyond the default range. 8.1. Prerequisites Your cluster infrastructure must allow access to the ports that you specify within the expanded range. For example, if you expand the node port range to 30000-32900 , the inclusive port range of 32768-32900 must be allowed by your firewall or packet filtering configuration. 8.2. Expanding the node port range You can expand the node port range for the cluster. Prerequisites Install the OpenShift CLI ( oc ). Log in to the cluster with a user with cluster-admin privileges. Procedure To expand the node port range, enter the following command. Replace <port> with the largest port number in the new range. USD oc patch network.config.openshift.io cluster --type=merge -p \ '{ "spec": { "serviceNodePortRange": "30000-<port>" } }' Tip You can alternatively apply the following YAML to update the node port range: apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: serviceNodePortRange: "30000-<port>" Example output network.config.openshift.io/cluster patched To confirm that the configuration is active, enter the following command. It can take several minutes for the update to apply. USD oc get configmaps -n openshift-kube-apiserver config \ -o jsonpath="{.data['config\.yaml']}" | \ grep -Eo '"service-node-port-range":["[[:digit:]]+-[[:digit:]]+"]' Example output "service-node-port-range":["30000-33000"] 8.3. Additional resources Configuring ingress cluster traffic using a NodePort Network [config.openshift.io/v1 ] Service [core/v1 ]
[ "oc patch network.config.openshift.io cluster --type=merge -p '{ \"spec\": { \"serviceNodePortRange\": \"30000-<port>\" } }'", "apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: serviceNodePortRange: \"30000-<port>\"", "network.config.openshift.io/cluster patched", "oc get configmaps -n openshift-kube-apiserver config -o jsonpath=\"{.data['config\\.yaml']}\" | grep -Eo '\"service-node-port-range\":[\"[[:digit:]]+-[[:digit:]]+\"]'", "\"service-node-port-range\":[\"30000-33000\"]" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/networking/configuring-node-port-service-range
Chapter 5. Transaction Support
Chapter 5. Transaction Support 5.1. Transaction Support JBoss Data Virtualization uses XA transactions for participating in global transactions and for demarcating its local and command-scoped transactions. Refer to the Red Hat JBoss Data Virtualization Development Guide Volume 1: Client Development for more information about the transaction subsystem. Table 5.1. JBoss Data Virtualization Transaction Scopes Scope Description Command Treats the user command as if all source commands are executed within the scope of the same transaction. The AutoCommitTxn execution property controls the behavior of command-level transactions. Local The transaction boundary is local, defined by a single client session. Global JBoss Data Virtualization participates in a global transaction as an XA Resource. The default transaction isolation level for JBoss Data Virtualization is READ_COMMITTED.
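To make the command scope concrete, the following hedged sketch shows how a JDBC client might set the AutoCommitTxn behavior on a connection. The host, port, VDB name, credentials, and the exact autoCommitTxn URL property syntax are assumptions and should be verified against the Client Development guide before use.
// Sketch only: connection details and property syntax are assumptions.
import java.sql.Connection;
import java.sql.DriverManager;

public class TxnExample {
    public static void main(String[] args) throws Exception {
        // autoCommitTxn controls command-scoped transaction wrapping (values such as ON, OFF, DETECT).
        Connection conn = DriverManager.getConnection(
            "jdbc:teiid:MyVDB@mm://dvhost.example.com:31000;autoCommitTxn=OFF",
            "user", "password");
        conn.close();
    }
}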
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_3_reference_material/chap-Transaction_Support
function::task_ns_euid
function::task_ns_euid Name function::task_ns_euid - The effective user identifier of the task Synopsis Arguments task task_struct pointer Description This function returns the effective user id of the given task.
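A minimal sketch (not from the tapset reference) that simply prints the value for the current task; task_current() is another tapset function assumed to be available here.
probe begin {
  printf("effective uid of the current task: %d\n", task_ns_euid(task_current()))
  exit()
}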
[ "task_ns_euid:long(task:long)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-task-ns-euid
Chapter 6. PodSecurityPolicySubjectReview [security.openshift.io/v1]
Chapter 6. PodSecurityPolicySubjectReview [security.openshift.io/v1] Description PodSecurityPolicySubjectReview checks whether a particular user/SA tuple can create the PodTemplateSpec. Compatibility level 2: Stable within a major release for a minimum of 9 months or 3 minor releases (whichever is longer). Type object Required spec 6.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds spec object PodSecurityPolicySubjectReviewSpec defines specification for PodSecurityPolicySubjectReview status object PodSecurityPolicySubjectReviewStatus contains information/status for PodSecurityPolicySubjectReview. 6.1.1. .spec Description PodSecurityPolicySubjectReviewSpec defines specification for PodSecurityPolicySubjectReview Type object Required template Property Type Description groups array (string) groups is the groups you're testing for. template PodTemplateSpec template is the PodTemplateSpec to check. If template.spec.serviceAccountName is empty it will not be defaulted. If its non-empty, it will be checked. user string user is the user you're testing for. If you specify "user" but not "group", then is it interpreted as "What if user were not a member of any groups. If user and groups are empty, then the check is performed using only the serviceAccountName in the template. 6.1.2. .status Description PodSecurityPolicySubjectReviewStatus contains information/status for PodSecurityPolicySubjectReview. Type object Property Type Description allowedBy ObjectReference allowedBy is a reference to the rule that allows the PodTemplateSpec. A rule can be a SecurityContextConstraint or a PodSecurityPolicy A nil , indicates that it was denied. reason string A machine-readable description of why this operation is in the "Failure" status. If this value is empty there is no information available. template PodTemplateSpec template is the PodTemplateSpec after the defaulting is applied. 6.2. API endpoints The following API endpoints are available: /apis/security.openshift.io/v1/namespaces/{namespace}/podsecuritypolicysubjectreviews POST : create a PodSecurityPolicySubjectReview 6.2.1. /apis/security.openshift.io/v1/namespaces/{namespace}/podsecuritypolicysubjectreviews Table 6.1. Global query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. HTTP method POST Description create a PodSecurityPolicySubjectReview Table 6.2. Body parameters Parameter Type Description body PodSecurityPolicySubjectReview schema Table 6.3. HTTP responses HTTP code Reponse body 200 - OK PodSecurityPolicySubjectReview schema 201 - Created PodSecurityPolicySubjectReview schema 202 - Accepted PodSecurityPolicySubjectReview schema 401 - Unauthorized Empty
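For orientation, a request body for this endpoint might look roughly like the following; the namespace, service account, and image are placeholders, and the exact fields should be checked against the schema above.
apiVersion: security.openshift.io/v1
kind: PodSecurityPolicySubjectReview
spec:
  user: system:serviceaccount:my-namespace:default
  template:
    spec:
      containers:
      - name: test
        image: registry.example.com/test:latest
It could then be submitted with something like oc create -f review.yaml -o yaml -n my-namespace to see which rule, if any, would admit the pod template.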
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/security_apis/podsecuritypolicysubjectreview-security-openshift-io-v1
Chapter 3. Configuring messaging protocols in network connections
Chapter 3. Configuring messaging protocols in network connections AMQ Broker has a pluggable protocol architecture, so that you can easily enable one or more protocols for a network connection. The broker supports the following protocols: AMQP MQTT OpenWire STOMP Note In addition to the protocols above, the broker also supports its own native protocol known as "Core". Past versions of this protocol were known as "HornetQ" and used by Red Hat JBoss Enterprise Application Platform. 3.1. Configuring a network connection to use a messaging protocol You must associate a protocol with a network connection before you can use it. (See Configuring acceptors and connectors in network connections for more information about how to create and configure network connections.) The default configuration, located in the file <broker_instance_dir> /etc/broker.xml , includes several connections already defined. For convenience, AMQ Broker includes an acceptor for each supported protocol, plus a default acceptor that supports all protocols. Overview of default acceptors Shown below are the acceptors included by default in the broker.xml configuration file. <configuration> <core> ... <acceptors> <!-- All-protocols acceptor --> <acceptor name="artemis">tcp://0.0.0.0:61616?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=CORE,AMQP,STOMP,HORNETQ,MQTT,OPENWIRE;useEpoll=true;amqpCredits=1000;amqpLowCredits=300</acceptor> <!-- AMQP Acceptor. Listens on default AMQP port for AMQP traffic --> <acceptor name="amqp">tcp://0.0.0.0:5672?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=AMQP;useEpoll=true;amqpCredits=1000;amqpLowCredits=300</acceptor> <!-- STOMP Acceptor --> <acceptor name="stomp">tcp://0.0.0.0:61613?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=STOMP;useEpoll=true</acceptor> <!-- HornetQ Compatibility Acceptor. Enables HornetQ Core and STOMP for legacy HornetQ clients. --> <acceptor name="hornetq">tcp://0.0.0.0:5445?anycastPrefix=jms.queue.;multicastPrefix=jms.topic.;protocols=HORNETQ,STOMP;useEpoll=true</acceptor> <!-- MQTT Acceptor --> <acceptor name="mqtt">tcp://0.0.0.0:1883?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=MQTT;useEpoll=true</acceptor> </acceptors> ... </core> </configuration> The only requirement to enable a protocol on a given network connnection is to add the protocols parameter to the URI for the acceptor. The value of the parameter must be a comma separated list of protocol names. If the protocol parameter is omitted from the URI, all protocols are enabled. For example, to create an acceptor for receiving messages on port 3232 using the AMQP protocol, follow these steps: Open the <broker_instance_dir> /etc/broker.xml configuration file. Add the following line to the <acceptors> stanza: <acceptor name="ampq">tcp://0.0.0.0:3232?protocols=AMQP</acceptor> Additional parameters in default acceptors In a minimal acceptor configuration, you specify a protocol as part of the connection URI. However, the default acceptors in the broker.xml configuration file have some additional parameters configured. The following table details the additional parameters configured for the default acceptors. Acceptor(s) Parameter Description All-protocols acceptor AMQP STOMP tcpSendBufferSize Size of the TCP send buffer in bytes. The default value is 32768 . tcpReceiveBufferSize Size of the TCP receive buffer in bytes. The default value is 32768 . TCP buffer sizes should be tuned according to the bandwidth and latency of your network. 
In summary TCP send/receive buffer sizes should be calculated as: buffer_size = bandwidth * RTT. Where bandwidth is in bytes per second and network round trip time (RTT) is in seconds. RTT can be easily measured using the ping utility. For fast networks you may want to increase the buffer sizes from the defaults. All-protocols acceptor AMQP STOMP HornetQ MQTT useEpoll Use Netty epoll if using a system (Linux) that supports it. The Netty native transport offers better performance than the NIO transport. The default value of this option is true . If you set the option to false , NIO is used. All-protocols acceptor AMQP amqpCredits Maximum number of messages that an AMQP producer can send, regardless of the total message size. The default value is 1000 . To learn more about how credits are used to block AMQP messages, see Section 7.3.2, "Blocking AMQP producers" . All-protocols acceptor AMQP amqpLowCredits Lower threshold at which the broker replenishes producer credits. The default value is 300 . When the producer reaches this threshold, the broker sends the producer sufficient credits to restore the amqpCredits value. To learn more about how credits are used to block AMQP messages, see Section 7.3.2, "Blocking AMQP producers" . HornetQ compatibility acceptor anycastPrefix Prefix that clients use to specify the anycast routing type when connecting to an address that uses both anycast and multicast . The default value is jms.queue . For more information about configuring a prefix to enable clients to specify a routing type when connecting to an address, see Section 4.6, "Adding a routing type to an acceptor configuration" . multicastPrefix Prefix that clients use to specify the multicast routing type when connecting to an address that uses both anycast and multicast . The default value is jms.topic . For more information about configuring a prefix to enable clients to specify a routing type when connecting to an address, see Section 4.6, "Adding a routing type to an acceptor configuration" . Additional resources For information about other parameters that you can configure for Netty network connections, see Appendix A, Acceptor and Connector Configuration Parameters . 3.2. Using AMQP with a network connection The broker supports the AMQP 1.0 specification. An AMQP link is a uni-directional protocol for messages between a source and a target, that is, a client and the broker. Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. Add or configure an acceptor to receive AMQP clients by including the protocols parameter with a value of AMQP as part of the URI, as shown in the following example: <acceptors> <acceptor name="amqp-acceptor">tcp://localhost:5672?protocols=AMQP</acceptor> ... </acceptors> In the preceding example, the broker accepts AMQP 1.0 clients on port 5672, which is the default AMQP port. An AMQP link has two endpoints, a sender and a receiver. When senders transmit a message, the broker converts it into an internal format, so it can be forwarded to its destination on the broker. Receivers connect to the destination at the broker and convert the messages back into AMQP before they are delivered. If an AMQP link is dynamic, a temporary queue is created and either the remote source or the remote target address is set to the name of the temporary queue. If the link is not dynamic, the address of the remote target or source is used for the queue. If the remote target or source does not exist, an exception is sent. 
A link target can also be a Coordinator, which is used to handle the underlying session as a transaction, either rolling it back or committing it. Note AMQP allows the use of multiple transactions per session, amqp:multi-txns-per-ssn , however the current version of AMQ Broker will support only single transactions per session. Note The details of distributed transactions (XA) within AMQP are not provided in the 1.0 version of the specification. If your environment requires support for distributed transactions, it is recommended that you use the AMQ Core Protocol JMS. See the AMQP 1.0 specification for more information about the protocol and its features. 3.2.1. Using an AMQP Link as a Topic Unlike JMS, the AMQP protocol does not include topics. However, it is still possible to treat AMQP consumers or receivers as subscriptions rather than just consumers on a queue. By default, any receiving link that attaches to an address with the prefix jms.topic. is treated as a subscription, and a subscription queue is created. The subscription queue is made durable or volatile, depending on how the Terminus Durability is configured, as captured in the following table: To create this kind of subscription for a multicast-only queue... Set Terminus Durability to this... Durable UNSETTLED_STATE or CONFIGURATION Non-durable NONE Note The name of a durable queue is composed of the container ID and the link name, for example my-container-id:my-link-name . AMQ Broker also supports the qpid-jms client and will respect its use of topics regardless of the prefix used for the address. 3.2.2. Configuring AMQP security The broker supports AMQP SASL Authentication. See Security for more information about how to configure SASL-based authentication on the broker. 3.3. Using MQTT with a network connection The broker supports MQTT v3.1.1 (and also the older v3.1 code message format). MQTT is a lightweight, client to server, publish/subscribe messaging protocol. MQTT reduces messaging overhead and network traffic, as well as a client's code footprint. For these reasons, MQTT is ideally suited to constrained devices such as sensors and actuators and is quickly becoming the de facto standard communication protocol for Internet of Things(IoT). Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. Add an acceptor with the MQTT protocol enabled. For example: <acceptors> <acceptor name="mqtt">tcp://localhost:1883?protocols=MQTT</acceptor> ... </acceptors> MQTT comes with a number of useful features including: Quality of Service Each message can define a quality of service that is associated with it. The broker will attempt to deliver messages to subscribers at the highest quality of service level defined. Retained Messages Messages can be retained for a particular address. New subscribers to that address receive the last-sent retained message before any other messages, even if the retained message was sent before the client connected. Wild card subscriptions MQTT addresses are hierarchical, similar to the hierarchy of a file system. Clients are able to subscribe to specific topics or to whole branches of a hierarchy. Will Messages Clients are able to set a "will message" as part of their connect packet. If the client abnormally disconnects, the broker will publish the will message to the specified address. Other subscribers receive the will message and can react accordingly. The best source of information about the MQTT protocol is in the specification. 
The MQTT v3.1.1 specification can be downloaded from the OASIS website . 3.4. Using OpenWire with a network connection The broker supports the OpenWire protocol , which allows a JMS client to talk directly to a broker. Use this protocol to communicate with older versions of AMQ Broker. Currently AMQ Broker supports OpenWire clients that use standard JMS APIs only. Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. Add or modify an acceptor so that it includes OPENWIRE as part of the protocol parameter, as shown in the following example: <acceptors> <acceptor name="openwire-acceptor">tcp://localhost:61616?protocols=OPENWIRE</acceptor> ... </acceptors> In the preceding example, the broker will listen on port 61616 for incoming OpenWire commands. For more details, see the examples located under <install_dir> /examples/protocols/openwire . 3.5. Using STOMP with a network connection STOMP is a text-orientated wire protocol that allows STOMP clients to communicate with STOMP Brokers. The broker supports STOMP 1.0, 1.1 and 1.2. STOMP clients are available for several languages and platforms making it a good choice for interoperability. Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. Configure an existing acceptor or create a new one and include a protocols parameter with a value of STOMP , as below. <acceptors> <acceptor name="stomp-acceptor">tcp://localhost:61613?protocols=STOMP</acceptor> ... </acceptors> In the preceding example, the broker accepts STOMP connections on the port 61613 , which is the default. See the stomp example located under <install_dir> /examples/protocols for an example of how to configure a broker with STOMP. 3.5.1. STOMP limitations When using STOMP, the following limitations apply: The broker currently does not support virtual hosting, which means the host header in CONNECT frames are ignored. Message acknowledgements are not transactional. The ACK frame cannot be part of a transaction, and it is ignored if its transaction header is set). 3.5.2. Providing IDs for STOMP Messages When receiving STOMP messages through a JMS consumer or a QueueBrowser, the messages do not contain any JMS properties, for example JMSMessageID , by default. However, you can set a message ID on each incoming STOMP message by using a broker paramater. Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. Set the stompEnableMessageId parameter to true for the acceptor used for STOMP connections, as shown in the following example: <acceptors> <acceptor name="stomp-acceptor">tcp://localhost:61613?protocols=STOMP;stompEnableMessageId=true</acceptor> ... </acceptors> By using the stompEnableMessageId parameter, each stomp message sent using this acceptor has an extra property added. The property key is amq-message-id and the value is a String representation of an internal message id prefixed with "STOMP", as shown in the following example: If stompEnableMessageId is not specified in the configuration, the default value is false . 3.5.3. Setting a connection time to live STOMP clients must send a DISCONNECT frame before closing their connections. This allows the broker to close any server-side resources, such as sessions and consumers. However, if STOMP clients exit without sending a DISCONNECT frame, or if they fail, the broker will have no way of knowing immediately whether the client is still alive. STOMP connections therefore are configured to have a "Time to Live" (TTL) of 1 minute. 
The means that the broker stops the connection to the STOMP client if it has been idle for more than one minute. Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. Add the connectionTTL parameter to URI of the acceptor used for STOMP connections, as shown in the following example: <acceptors> <acceptor name="stomp-acceptor">tcp://localhost:61613?protocols=STOMP;connectionTTL=20000</acceptor> ... </acceptors> In the preceding example, any stomp connection that using the stomp-acceptor will have its TTL set to 20 seconds. Note Version 1.0 of the STOMP protocol does not contain any heartbeat frame. It is therefore the user's responsibility to make sure data is sent within connection-ttl or the broker will assume the client is dead and clean up server-side resources. With version 1.1, you can use heart-beats to maintain the life cycle of stomp connections. Overriding the broker default time to live As noted, the default TTL for a STOMP connection is one minute. You can override this value by adding the connection-ttl-override attribute to the broker configuration. Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. Add the connection-ttl-override attribute and provide a value in milliseconds for the new default. It belongs inside the <core> stanza, as below. <configuration ...> ... <core ...> ... <connection-ttl-override>30000</connection-ttl-override> ... </core> <configuration> In the preceding example, the default Time to Live (TTL) for a STOMP connection is set to 30 seconds, 30000 milliseconds. 3.5.4. Sending and consuming STOMP messages from JMS STOMP is mainly a text-orientated protocol. To make it simpler to interoperate with JMS, the STOMP implementation checks for presence of the content-length header to decide how to map a STOMP message to JMS. If you want a STOMP message to map to a ... The message should... . JMS TextMessage Not include a content-length header. JMS BytesMessage Include a content-length header. The same logic applies when mapping a JMS message to STOMP. A STOMP client can confirm the presence of the content-length header to determine the type of the message body (string or bytes). See the STOMP specification for more information about message headers. 3.5.5. Mapping STOMP destinations to AMQ Broker addresses and queues When sending messages and subscribing, STOMP clients typically include a destination header. Destination names are string values, which are mapped to a destination on the broker. In AMQ Broker, these destinations are mapped to addresses and queues . See the STOMP specification for more information about the destination frame. Take for example a STOMP client that sends the following message (headers and body included): In this case, the broker will forward the message to any queues associated with the address /my/stomp/queue . For example, when a STOMP client sends a message (by using a SEND frame), the specified destination is mapped to an address. It works the same way when the client sends a SUBSCRIBE or UNSUBSCRIBE frame, but in this case AMQ Broker maps the destination to a queue. In the preceding example, the broker will map the destination to the queue /other/stomp/queue . Mapping STOMP destinations to JMS destinations JMS destinations are also mapped to broker addresses and queues. If you want to use STOMP to send messages to JMS destinations, the STOMP destinations must follow the same convention: Send or subscribe to a JMS Queue by prepending the queue name by jms.queue. . 
For example, to send a message to the orders JMS Queue, the STOMP client must send the frame: Send or subscribe to a JMS Topic by prepending the topic name by jms.topic. . For example, to subscribe to the stocks JMS Topic, the STOMP client must send a frame similar to the following:
[ "<configuration> <core> <acceptors> <!-- All-protocols acceptor --> <acceptor name=\"artemis\">tcp://0.0.0.0:61616?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=CORE,AMQP,STOMP,HORNETQ,MQTT,OPENWIRE;useEpoll=true;amqpCredits=1000;amqpLowCredits=300</acceptor> <!-- AMQP Acceptor. Listens on default AMQP port for AMQP traffic --> <acceptor name=\"amqp\">tcp://0.0.0.0:5672?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=AMQP;useEpoll=true;amqpCredits=1000;amqpLowCredits=300</acceptor> <!-- STOMP Acceptor --> <acceptor name=\"stomp\">tcp://0.0.0.0:61613?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=STOMP;useEpoll=true</acceptor> <!-- HornetQ Compatibility Acceptor. Enables HornetQ Core and STOMP for legacy HornetQ clients. --> <acceptor name=\"hornetq\">tcp://0.0.0.0:5445?anycastPrefix=jms.queue.;multicastPrefix=jms.topic.;protocols=HORNETQ,STOMP;useEpoll=true</acceptor> <!-- MQTT Acceptor --> <acceptor name=\"mqtt\">tcp://0.0.0.0:1883?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=MQTT;useEpoll=true</acceptor> </acceptors> </core> </configuration>", "<acceptor name=\"ampq\">tcp://0.0.0.0:3232?protocols=AMQP</acceptor>", "<acceptors> <acceptor name=\"amqp-acceptor\">tcp://localhost:5672?protocols=AMQP</acceptor> </acceptors>", "<acceptors> <acceptor name=\"mqtt\">tcp://localhost:1883?protocols=MQTT</acceptor> </acceptors>", "<acceptors> <acceptor name=\"openwire-acceptor\">tcp://localhost:61616?protocols=OPENWIRE</acceptor> </acceptors>", "<acceptors> <acceptor name=\"stomp-acceptor\">tcp://localhost:61613?protocols=STOMP</acceptor> </acceptors>", "<acceptors> <acceptor name=\"stomp-acceptor\">tcp://localhost:61613?protocols=STOMP;stompEnableMessageId=true</acceptor> </acceptors>", "amq-message-id : STOMP12345", "<acceptors> <acceptor name=\"stomp-acceptor\">tcp://localhost:61613?protocols=STOMP;connectionTTL=20000</acceptor> </acceptors>", "<configuration ...> <core ...> <connection-ttl-override>30000</connection-ttl-override> </core> <configuration>", "SEND destination:/my/stomp/queue hello queue a ^@", "SUBSCRIBE destination: /other/stomp/queue ack: client ^@", "SEND destination:jms.queue.orders hello queue orders ^@", "SUBSCRIBE destination:jms.topic.stocks ^@" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/configuring_amq_broker/protocols
5.6. Searching for Objects Using Tags
5.6. Searching for Objects Using Tags Enter a search query using tag as the property and the desired value or set of values as criteria for the search. The objects tagged with the specified criteria are listed in the results list. Note If you search for objects using tag as the property and the inequality operator ( != ), for example, Host: Vms.tag!=server1 , the results list does not include untagged objects.
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/administration_guide/searching_for_objects_using_tags
4.3. Configuration File Defaults
4.3. Configuration File Defaults The /etc/multipath.conf configuration file includes a defaults section that sets the user_friendly_names parameter to yes , as follows. This overwrites the default value of the user_friendly_names parameter. The configuration file includes a template of configuration defaults. This section is commented out, as follows. To overwrite the default value for any of the configuration parameters, you can copy the relevant line from this template into the defaults section and uncomment it. For example, to overwrite the path_grouping_policy parameter so that it is multibus rather than the default value of failover , copy the appropriate line from the template to the initial defaults section of the configuration file, and uncomment it, as follows. Table 4.1, "Multipath Configuration Defaults" describes the attributes that are set in the defaults section of the multipath.conf configuration file. These values are used by DM-Multipath unless they are overwritten by the attributes specified in the devices and multipaths sections of the multipath.conf file. Table 4.1. Multipath Configuration Defaults Attribute Description udev_dir Specifies the directory where udev device nodes are created. The default value is /udev . polling_interval Specifies the interval between two path checks in seconds. The default value is 5. selector Specifies the default algorithm to use in determining what path to use for the I/O operation. The default value is round-robin 0 . path_grouping_policy Specifies the default path grouping policy to apply to unspecified multipaths. Possible values include: failover = 1 path per priority group multibus = all valid paths in 1 priority group group_by_serial = 1 priority group per detected serial number group_by_prio = 1 priority group per path priority value group_by_node_name = 1 priority group per target node name The default value is failover . getuid_callout Specifies the default program and arguments to call out to obtain a unique path identifier. An absolute path is required. The default value is /sbin/scsi_id -g -u -s . prio_callout Specifies the default program and arguments to call out to obtain a path priority value. For example, the ALUA bits in SPC-3 provide an exploitable prio value for example. "none" is a valid value. The default value is no callout, indicating all paths are equal features Specifies the default extra features of multipath devices. The only existing feature is queue_if_no_path . The default value is (null). path_checker Specifies the default method used to determine the state of the paths. Possible values include: readsector0 , tur , emc_clariion , hp_sw , and directio . The default value is readsector0 . failback Specifies path group failback. A value of 0 or immediate specifies that as soon as there is a path group with a higher priority than the current path group the system switches to that path group. A numeric value greater than zero specifies deferred failback, expressed in seconds. A value of manual specifies that failback can happen only with operator intervention. The default value is manual . rr_min_io Specifies the number of I/O requests to route to a path before switching to the path in the current path group. The default value is 1000. max_fds (RHEL 4.7 and later) Sets the maximum number of open file descriptors for the multipathd process. A value of max sets the number of open file descriptors to the system maximum. 
rr_weight If set to priorities , then instead of sending rr_min_io requests to a path before calling selector to choose the path, the number of requests to send is determined by rr_min_io times the path's priority, as determined by the prio_callout program. Currently, there are priority callouts only for devices that use the group_by_prio path grouping policy, which means that all the paths in a path group will always have the same priority. If set to uniform , all path weights are equal. The default value is uniform . no_path_retry A numeric value for this attribute specifies the number of times the system should attempt to use a failed path before disabling queueing. A value of fail indicates immediate failure, without queuing. A value of queue indicates that queuing should not stop until the path is fixed. The default value is (null). flush_on_last_del (RHEL 4.7 and later) If set to yes , the multipathd daemon will disable queueing when the last path to a device has been deleted. The default value is no . user_friendly_names If set to yes , specifies that the system should use the bindings file /var/lib/multipath/bindings to assign a persistent and unique alias to the multipath, in the form of mpath n . If set to no , specifies that the system should use the WWID as the alias for the multipath. In either case, what is specified here will be overridden by any device-specific aliases you specify in the multipaths section of the configuration file. The default value is no . bindings_file (RHEL 4.6 and later) The location of the bindings file that is used with the user_friendly_names option. The default value is /var/lib/multipath/bindings . mode (RHEL 4.7 and later) The mode to use for the multipath device nodes, in octal. The default value is determined by the process. uid (RHEL 4.7 and later) The user ID to use for the multipath device nodes. You must use the numeric user ID. The default value is determined by the process. gid (RHEL 4.7 and later) The group ID to use for the multipath device nodes. You must use the numeric group ID. The default value is determined by the process.
[ "defaults { user_friendly_names yes }", "#defaults { udev_dir /dev polling_interval 10 selector \"round-robin 0\" path_grouping_policy multibus getuid_callout \"/sbin/scsi_id -g -u -s /block/%n\" prio_callout /bin/true path_checker readsector0 rr_min_io 100 rr_weight priorities failback immediate no_path_retry fail user_friendly_name yes #}", "defaults { user_friendly_names yes path_grouping_policy multibus }" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/dm_multipath/config_file_defaults
probe::nfsd.proc.write
probe::nfsd.proc.write Name probe::nfsd.proc.write - NFS server writing data to file for client Synopsis nfsd.proc.write Values offset the offset of file gid requester's group id vlen read blocks fh file handle (the first part is the length of the file handle) size read bytes vec struct kvec, includes buf address in kernel address and length of each buffer stable argp->stable version nfs version uid requester's user id count read bytes client_ip the ip address of client proto transfer protocol
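As an illustrative sketch only, a probe on this point can report basic information about each write; only numeric values from the list above are printed here to keep the example simple.
probe nfsd.proc.write {
  printf("nfsd write: %d bytes at offset %d (uid %d, gid %d)\n", count, offset, uid, gid)
}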
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-nfsd-proc-write
Using Cryostat to manage a JFR recording
Using Cryostat to manage a JFR recording Red Hat build of Cryostat 2 Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_build_of_cryostat/2/html/using_cryostat_to_manage_a_jfr_recording/index
Red Hat Quay Release Notes
Red Hat Quay Release Notes Red Hat Quay 3.10 Red Hat Quay Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_quay/3.10/html/red_hat_quay_release_notes/index
Chapter 101. ReplicasChangeStatus schema reference
Chapter 101. ReplicasChangeStatus schema reference Used in: KafkaTopicStatus Property Property type Description targetReplicas integer The target replicas value requested by the user. This may be different from .spec.replicas when a change is ongoing. state string (one of [ongoing, pending]) Current state of the replicas change operation. This can be pending , when the change has been requested, or ongoing , when the change has been successfully submitted to Cruise Control. message string Message for the user related to the replicas change request. This may contain transient error messages that would disappear on periodic reconciliations. sessionId string The session identifier for replicas change requests pertaining to this KafkaTopic resource. This is used by the Topic Operator to track the status of ongoing replicas change operations.
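To illustrate how these properties surface, a KafkaTopic status fragment for an in-flight change might look roughly like this; the replicasChange field name under status, the replica count, and the session identifier are shown as assumptions for illustration.
status:
  replicasChange:
    state: ongoing
    targetReplicas: 3
    sessionId: 1aa418ca-53ed-4b93-b0a4-58413c4fc0cb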
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-replicaschangestatus-reference
10.5. Quorum Devices
10.5. Quorum Devices Red Hat Enterprise Linux 7.4 provides full support for the ability to configure a separate quorum device which acts as a third-party arbitration device for the cluster. Its primary use is to allow a cluster to sustain more node failures than standard quorum rules allow. A quorum device is recommended for clusters with an even number of nodes. With two-node clusters, the use of a quorum device can better determine which node survives in a split-brain situation. You must take the following into account when configuring a quorum device. It is recommended that a quorum device be run on a different physical network at the same site as the cluster that uses the quorum device. Ideally, the quorum device host should be in a separate rack than the main cluster, or at least on a separate PSU and not on the same network segment as the corosync ring or rings. You cannot use more than one quorum device in a cluster at the same time. Although you cannot use more than one quorum device in a cluster at the same time, a single quorum device may be used by several clusters at the same time. Each cluster using that quorum device can use different algorithms and quorum options, as those are stored on the cluster nodes themselves. For example, a single quorum device can be used by one cluster with an ffsplit (fifty/fifty split) algorithm and by a second cluster with an lms (last man standing) algorithm. A quorum device should not be run on an existing cluster node. 10.5.1. Installing Quorum Device Packages Configuring a quorum device for a cluster requires that you install the following packages: Install corosync-qdevice on the nodes of an existing cluster. Install pcs and corosync-qnetd on the quorum device host. Start the pcsd service and enable pcsd at system start on the quorum device host. 10.5.2. Configuring a Quorum Device This section provides a sample procedure to configure a quorum device in a Red Hat high availability cluster. The following procedure configures a quorum device and adds it to the cluster. In this example: The node used for a quorum device is qdevice . The quorum device model is net , which is currently the only supported model. The net model supports the following algorithms: ffsplit : fifty-fifty split. This provides exactly one vote to the partition with the highest number of active nodes. lms : last-man-standing. If the node is the only one left in the cluster that can see the qnetd server, then it returns a vote. Warning The LMS algorithm allows the cluster to remain quorate even with only one remaining node, but it also means that the voting power of the quorum device is great since it is the same as number_of_nodes - 1. Losing connection with the quorum device means losing number_of_nodes - 1 votes, which means that only a cluster with all nodes active can remain quorate (by overvoting the quorum device); any other cluster becomes inquorate. For more detailed information on the implementation of these algorithms, see the corosync-qdevice (8) man page. The cluster nodes are node1 and node2 . The following procedure configures a quorum device and adds that quorum device to a cluster. On the node that you will use to host your quorum device, configure the quorum device with the following command. This command configures and starts the quorum device model net and configures the device to start on boot. After configuring the quorum device, you can check its status. 
This should show that the corosync-qnetd daemon is running and, at this point, there are no clients connected to it. The --full command option provides detailed output. Enable the ports on the firewall needed by the pcsd daemon and the net quorum device by enabling the high-availability service on firewalld with following commands. From one of the nodes in the existing cluster, authenticate user hacluster on the node that is hosting the quorum device. Add the quorum device to the cluster. Before adding the quorum device, you can check the current configuration and status for the quorum device for later comparison. The output for these commands indicates that the cluster is not yet using a quorum device. The following command adds the quorum device that you have previously created to the cluster. You cannot use more than one quorum device in a cluster at the same time. However, one quorum device can be used by several clusters at the same time. This example command configures the quorum device to use the ffsplit algorithm. For information on the configuration options for the quorum device, see the corosync-qdevice (8) man page. Check the configuration status of the quorum device. From the cluster side, you can execute the following commands to see how the configuration has changed. The pcs quorum config shows the quorum device that has been configured. The pcs quorum status command shows the quorum runtime status, indicating that the quorum device is in use. The pcs quorum device status shows the quorum device runtime status. From the quorum device side, you can execute the following status command, which shows the status of the corosync-qnetd daemon. 10.5.3. Managing the Quorum Device Service PCS provides the ability to manage the quorum device service on the local host ( corosync-qnetd ), as shown in the following example commands. Note that these commands affect only the corosync-qnetd service. 10.5.4. Managing the Quorum Device Settings in a Cluster The following sections describe the PCS commands that you can use to manage the quorum device settings in a cluster, showing examples that are based on the quorum device configuration in Section 10.5.2, "Configuring a Quorum Device" . 10.5.4.1. Changing Quorum Device Settings You can change the setting of a quorum device with the pcs quorum device update command. Warning To change the host option of quorum device model net , use the pcs quorum device remove and the pcs quorum device add commands to set up the configuration properly, unless the old and the new host are the same machine. The following command changes the quorum device algorithm to lms . 10.5.4.2. Removing a Quorum Device Use the following command to remove a quorum device configured on a cluster node. After you have removed a quorum device, you should see the following error message when displaying the quorum device status. 10.5.4.3. Destroying a Quorum Device To disable and stop a quorum device on the quorum device host and delete all of its configuration files, use the following command.
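For reference only, after the pcs quorum device add command described above, the quorum section of /etc/corosync/corosync.conf on the cluster nodes typically ends up looking similar to the following sketch; the host name qdevice and the ffsplit algorithm are taken from this example and the exact layout can vary between corosync versions:
quorum {
    provider: corosync_votequorum
    device {
        model: net
        votes: 1
        net {
            host: qdevice
            algorithm: ffsplit
        }
    }
}
You normally do not edit this section by hand; pcs maintains it for you.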
[ "yum install corosync-qdevice yum install corosync-qdevice", "yum install pcs corosync-qnetd", "systemctl start pcsd.service systemctl enable pcsd.service", "pcs qdevice setup model net --enable --start Quorum device 'net' initialized quorum device enabled Starting quorum device quorum device started", "pcs qdevice status net --full QNetd address: *:5403 TLS: Supported (client certificate required) Connected clients: 0 Connected clusters: 0 Maximum send/receive size: 32768/32768 bytes", "firewall-cmd --permanent --add-service=high-availability firewall-cmd --add-service=high-availability", "pcs cluster auth qdevice Username: hacluster Password: qdevice: Authorized", "pcs quorum config Options:", "pcs quorum status Quorum information ------------------ Date: Wed Jun 29 13:15:36 2016 Quorum provider: corosync_votequorum Nodes: 2 Node ID: 1 Ring ID: 1/8272 Quorate: Yes Votequorum information ---------------------- Expected votes: 2 Highest expected: 2 Total votes: 2 Quorum: 1 Flags: 2Node Quorate Membership information ---------------------- Nodeid Votes Qdevice Name 1 1 NR node1 (local) 2 1 NR node2", "pcs quorum device add model net host=qdevice algorithm=ffsplit Setting up qdevice certificates on nodes node2: Succeeded node1: Succeeded Enabling corosync-qdevice node1: corosync-qdevice enabled node2: corosync-qdevice enabled Sending updated corosync.conf to nodes node1: Succeeded node2: Succeeded Corosync configuration reloaded Starting corosync-qdevice node1: corosync-qdevice started node2: corosync-qdevice started", "pcs quorum config Options: Device: Model: net algorithm: ffsplit host: qdevice", "pcs quorum status Quorum information ------------------ Date: Wed Jun 29 13:17:02 2016 Quorum provider: corosync_votequorum Nodes: 2 Node ID: 1 Ring ID: 1/8272 Quorate: Yes Votequorum information ---------------------- Expected votes: 3 Highest expected: 3 Total votes: 3 Quorum: 2 Flags: Quorate Qdevice Membership information ---------------------- Nodeid Votes Qdevice Name 1 1 A,V,NMW node1 (local) 2 1 A,V,NMW node2 0 1 Qdevice", "pcs quorum device status Qdevice information ------------------- Model: Net Node ID: 1 Configured node list: 0 Node ID = 1 1 Node ID = 2 Membership node list: 1, 2 Qdevice-net information ---------------------- Cluster name: mycluster QNetd host: qdevice:5403 Algorithm: ffsplit Tie-breaker: Node with lowest node ID State: Connected", "pcs qdevice status net --full QNetd address: *:5403 TLS: Supported (client certificate required) Connected clients: 2 Connected clusters: 1 Maximum send/receive size: 32768/32768 bytes Cluster \"mycluster\": Algorithm: ffsplit Tie-breaker: Node with lowest node ID Node ID 2: Client address: ::ffff:192.168.122.122:50028 HB interval: 8000ms Configured node list: 1, 2 Ring ID: 1.2050 Membership node list: 1, 2 TLS active: Yes (client certificate verified) Vote: ACK (ACK) Node ID 1: Client address: ::ffff:192.168.122.121:48786 HB interval: 8000ms Configured node list: 1, 2 Ring ID: 1.2050 Membership node list: 1, 2 TLS active: Yes (client certificate verified) Vote: ACK (ACK)", "pcs qdevice start net pcs qdevice stop net pcs qdevice enable net pcs qdevice disable net pcs qdevice kill net", "pcs quorum device update model algorithm=lms Sending updated corosync.conf to nodes node1: Succeeded node2: Succeeded Corosync configuration reloaded Reloading qdevice configuration on nodes node1: corosync-qdevice stopped node2: corosync-qdevice stopped node1: corosync-qdevice started node2: corosync-qdevice started", "pcs quorum device remove Sending 
updated corosync.conf to nodes node1: Succeeded node2: Succeeded Corosync configuration reloaded Disabling corosync-qdevice node1: corosync-qdevice disabled node2: corosync-qdevice disabled Stopping corosync-qdevice node1: corosync-qdevice stopped node2: corosync-qdevice stopped Removing qdevice certificates from nodes node1: Succeeded node2: Succeeded", "pcs quorum device status Error: Unable to get quorum status: corosync-qdevice-tool: Can't connect to QDevice socket (is QDevice running?): No such file or directory", "pcs qdevice destroy net Stopping quorum device quorum device stopped quorum device disabled Quorum device 'net' configuration files removed" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/high_availability_add-on_reference/s1-quorumdev-HAAR
E.2.6. /proc/dma
E.2.6. /proc/dma This file contains a list of the registered ISA DMA channels in use. A sample /proc/dma file looks like the following:
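For example, you can display the registered channels directly; the output shown here matches the sample above and will differ between systems:
cat /proc/dma
 4: cascade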
[ "4: cascade" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s2-proc-dma
13.2.16. Domain Options: Enabling Offline Authentication
13.2.16. Domain Options: Enabling Offline Authentication User identities are always cached, as well as information about the domain services. However, user credentials are not cached by default. This means that SSSD always checks with the back end identity provider for authentication requests. If the identity provider is offline or unavailable, there is no way to process those authentication requests, so user authentication could fail. It is possible to enable offline credentials caching , which stores credentials (after successful login) as part of the user account in the SSSD cache. Therefore, even if an identity provider is unavailable, users can still authenticate, using their stored credentials. Offline credentials caching is primarily configured in each individual domain entry, but there are some optional settings that can be set in the PAM service section, because credentials caching interacts with the local PAM service as well as the remote domain. There are optional parameters that set when those credentials expire. Expiration is useful because it can prevent a user with a potentially outdated account or credentials from accessing local services indefinitely. The credentials expiration itself is set in the PAM service, which processes authentication requests for the system. offline_credentials_expiration sets the number of days after a successful login that a single credentials entry for a user is preserved in cache. Setting this to zero (0) means that entries are kept forever. While not related to the credentials cache specifically, each domain has configuration options on when individual user and service caches expire: account_cache_expiration sets the number of days after a successful login that the entire user account entry is removed from the SSSD cache. This must be equal to or longer than the individual offline credentials cache expiration period. entry_cache_timeout sets a validity period, in seconds, for all entries stored in the cache before SSSD requests updated information from the identity provider. There are also individual cache timeout parameters for group, service, netgroup, sudo, and autofs entries; these are listed in the sssd.conf man page. The default time is 5400 seconds (90 minutes). For example:
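Separately from the expiration settings themselves, cached entries can also be invalidated manually with the sss_cache utility shipped with SSSD, which forces SSSD to look up fresh data on the next request; for example (the user name is a placeholder):
sss_cache -E
sss_cache -u jsmith
The first command invalidates all cached entries, the second only the entry for the specified user.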
[ "[domain/EXAMPLE] cache_credentials = true", "[sssd] services = nss,pam [pam] offline_credentials_expiration = 3 [domain/EXAMPLE] cache_credentials = true", "[sssd] services = nss,pam [pam] offline_credentials_expiration = 3 [domain/EXAMPLE] cache_credentials = true account_cache_expiration = 7 entry_cache_timeout = 14400" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sssd-cache-cred
Chapter 10. General Updates
Chapter 10. General Updates Matahari packages deprecated The Matahari agent framework ( matahari-* ) packages are deprecated starting with the Red Hat Enterprise Linux 6.3 release. Focus for remote systems management has shifted towards the use of the CIM infrastructure. This infrastructure relies on an already existing standard which provides a greater degree of interoperability for all users. It is strongly recommended that users discontinue the use of the matahari packages and other packages which depend on the Matahari infrastructure (specifically, libvirt-qmf and fence-virtd-libvirt-qpid ). It is recommended that users uninstall Matahari from their systems to remove any possibility of security issues being exposed. Users who choose to continue to use the Matahari agents should note the following: The matahari packages are not installed by default starting with Red Hat Enterprise Linux 6.3 and are not enabled by default to start on boot when they are installed. Manual action is needed to both install and enable the matahari services. The default configuration for qpid (the transport agent used by Matahari) does not enable access control lists (ACLs) or SSL. Without ACLs/SSL, the Matahari infrastructure is not secure. Configuring Matahari without ACLs/SSL is not recommended and may reduce your system's security. The matahari-services agent is specifically designed to allow remote manipulation of services (start, stop). Granting a user access to Matahari services is equivalent to providing a remote user with root access. Using Matahari agents should be treated as equivalent to providing remote root SSH access to a host. By default in Red Hat Enterprise Linux, the Matahari broker ( qpidd running on port 49000 ) does not require authentication. However, the Matahari broker is not remotely accessible unless the firewall is disabled, or a rule is added to make it accessible. Given the capabilities exposed by Matahari agents, if Matahari is enabled, system administrators should be extremely cautious with the options that affect remote access to Matahari. Note that Matahari will not be shipped in future releases of Red Hat Enterprise Linux (including Red Hat Enterprise Linux 7), and may be considered for formal removal in a future release of Red Hat Enterprise Linux 6. Software Collections utilities Red Hat Enterprise Linux 6.3 includes an scl-utils package which provides a runtime utility and packaging macros for packaging Software Collections. Software Collections allow users to concurrently install multiple versions of the same RPM packages on the system. Using the scl utility, users may enable specific versions of RPMs which are installed in the /opt directory. For more information on Software Collections, refer to the Software Collections Guide . The openssl-ibmca package is now part of the IBM System z default installation With Red Hat Enterprise Linux 6.3, the openssl-ibmca package is part of the System z default installation. This avoids the need for manual installation steps. MySQL InnoDB plug-in Red Hat Enterprise Linux 6.3 provides the MySQL InnoDB storage engine as a plug-in for AMD64 and Intel 64 architectures. The plugin offers additional features and better performance than the built-in InnoDB storage engine. OpenJDK 7 Red Hat Enterprise Linux 6.3 includes full support for OpenJDK 7 as an alternative to OpenJDK 6. The java-1.7.0-openjdk packages provide the OpenJDK 7 Java Runtime Environment and the OpenJDK 7 Java Software Development Kit. 
New Java 7 packages The java-1.7.0-oracle and java-1.7.0-ibm packages are now available in Red Hat Enterprise Linux 6.3. Setting the NIS domain name via initscripts The initscripts package has been updated to allow users to set the NIS domain name. This is done by configuring the NISDOMAIN parameter in the /etc/sysconfig/network file, or other relevant configuration files. ACL support for logrotate Previously, when certain groups were permitted to access all logs via ACLs, these ACLs were removed when the logs were rotated. In Red Hat Enterprise Linux 6.3, the logrotate utility supports ACLs, and logs that are rotated preserve any ACL settings. The wacomcpl package deprecated The wacomcpl package has been deprecated and has been removed from the package set. The wacomcpl package provided graphical configuration of Wacom tablet settings. This functionality is now integrated into the GNOME Control Center. Updated NumPy package The NumPy package, which is designed to manipulate large multi-dimensional arrays of arbitrary records, has been updated to version 1.4.1. This updated version includes these changes: When operating on 0-d arrays, numpy.max and other functions accept only the following parameters: axis=0 , axis=-1 , and axis=None . Using out-of-bounds axes indicates a bug, for which NumPy now raises an error. Specifying the axis > MAX_DIMS parameter is no longer allowed; NumPy now raises an error, instead of behaving the same as when axis=None was specified. Rsyslog updated to major version 5 The rsyslog package has been upgraded to major version 5. This upgrade introduces various enhancements and fixes multiple bugs. The following are the most important changes: The $HUPisRestart directive has been removed and is no longer supported. Restart-type HUP processing is therefore no longer available. Now, when the SIGHUP signal is received, outputs (log files in most cases) are only re-opened to support log rotation. The format of the spool files (for example, disk-assisted queues) has changed. In order to switch to the new format, drain the spool files, for example, by shutting down rsyslogd . Then, proceed with the Rsyslog upgrade, and start rsyslogd again. Once upgraded, the new format is automatically used. When the rsyslogd daemon was running in the debug mode (using the -d option), it ran in the foreground. This has been fixed and the daemon is now forked and runs in the background, as is expected. For more information on changes introduced in this version of Rsyslog, refer to http://www.rsyslog.com/doc/v5compatibility.html .
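As a brief illustration of the two configuration mechanisms mentioned above (the domain and collection names are placeholders): the NIS domain name can be set persistently by adding a line such as the following to /etc/sysconfig/network:
NISDOMAIN=example.com
and a Software Collection can be enabled for a single shell or command with the scl runtime utility, for example:
scl enable mycollection 'bash'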
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_release_notes/general_updates
4.3.4. Scanning Disks for Volume Groups to Build the Cache File
4.3.4. Scanning Disks for Volume Groups to Build the Cache File The vgscan command scans all supported disk devices in the system looking for LVM physical volumes and volume groups. This builds the LVM cache in the /etc/lvm/.cache file, which maintains a listing of current LVM devices. LVM runs the vgscan command automatically at system startup and at other times during LVM operation, such as when you execute a vgcreate command or when LVM detects an inconsistency. You may need to run the vgscan command manually when you change your hardware configuration, causing new devices to be visible to the system that were not present at system bootup. This may be necessary, for example, when you add new disks to the system on a SAN or hotplug a new disk that has been labeled as a physical volume. You can define a filter in the lvm.conf file to restrict the scan to avoid specific devices. For information on using filters to control which devices are scanned, see Section 4.6, "Controlling LVM Device Scans with Filters" . The following example shows the output of a vgscan command.
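As an illustration of such a filter (the patterns are examples only and should be adapted to your environment), the following line in the devices section of /etc/lvm/lvm.conf accepts all SCSI disks and rejects every other device:
filter = [ "a|/dev/sd.*|", "r|.*|" ]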
[ "vgscan Reading all physical volumes. This may take a while Found volume group \"new_vg\" using metadata type lvm2 Found volume group \"officevg\" using metadata type lvm2" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_logical_volume_manager/vgscan
Chapter 4. Creating a standalone broker
Chapter 4. Creating a standalone broker You can get started quickly with AMQ Broker by creating a standalone broker instance on your local machine, starting it, and producing and consuming some test messages. Prerequisites AMQ Broker must be installed. For more information, see Chapter 3, Installing AMQ Broker . 4.1. Creating a broker instance A broker instance is a directory containing the configuration and runtime data for a broker. To create a new broker instance, you first create a directory for the broker instance, and then use the artemis create command to create the broker instance. This procedure demonstrates how to create a simple, standalone broker on your local machine. The broker uses a basic, default configuration, and accepts connections from clients using any of the supported messaging protocols. Procedure Create a directory for the broker instance. If you are using... Do this... Red Hat Enterprise Linux Create a new directory to serve as the location for the broker instance. USD sudo mkdir /var/opt/amq-broker Assign the user that you created during installation. USD sudo chown -R amq-broker:amq-broker /var/opt/amq-broker Windows Use Windows Explorer to create a new folder to serve as the location for the broker instance. Use the artemis create command to create the broker. If you are using... Do this... Red Hat Enterprise Linux Switch to the user account you created during installation. USD su - amq-broker Change to the directory you just created for the broker instance. USD cd /var/opt/amq-broker From the broker instance's directory, create the broker instance. USD <install_dir> /bin/artemis create mybroker Windows Open a command prompt from the directory you just created for the broker instance. From the broker instance's directory, create the broker instance. > <install_dir> \bin\artemis.cmd create mybroker Follow the artemis create prompts to configure the broker instance. Example 4.1. Configuring a broker instance using artemis create USD /opt/redhat/amq-broker/bin/artemis create mybroker Creating ActiveMQ Artemis instance at: /var/opt/amq-broker/mybroker --user: is mandatory with this configuration: Please provide the default username: admin --password: is mandatory with this configuration: Please provide the default password: --role: is mandatory with this configuration: Please provide the default role: amq --allow-anonymous | --require-login: is mandatory with this configuration: Allow anonymous access? (Y/N): Y Auto tuning journal ... done! Your system can make 19.23 writes per millisecond, your journal-buffer-timeout will be 52000 You can now start the broker by executing: "/var/opt/amq-broker/mybroker/bin/artemis" run Or you can run the broker in the background using: "/var/opt/amq-broker/mybroker/bin/artemis-service" start 4.2. Starting the broker instance After the broker instance is created, you use the artemis run command to start it. Procedure Switch to the user account you created during installation. USD su - amq-broker Use the artemis run command to start the broker instance. The broker starts and displays log output with the following information: The location of the transaction logs and cluster configuration. The type of journal being used for message persistence (AIO in this case). The URI(s) that can accept client connections. By default, port 61616 can accept connections from any of the supported protocols (CORE, MQTT, AMQP, STOMP, HORNETQ, and OPENWIRE). There are separate, individual ports for each protocol as well. 
The web console is available at http://localhost:8161 . The Jolokia service (JMX over REST) is available at http://localhost:8161/jolokia . 4.3. Producing and consuming test messages After starting the broker, you should verify that it is running properly. This involves producing a few test messages, sending them to the broker, and then consuming them. Procedure Use the artemis producer command to produce a few test messages and send them to the broker. This command sends 100 messages to the helloworld address, which is created automatically on the broker. The producer connects to the broker by using the default port 61616, which accepts all supported messaging protocols. USD /opt/redhat/amq-broker/amq-broker-7.2.0/bin/artemis producer --destination helloworld --message-count 100 --url tcp://localhost:61616 Producer ActiveMQQueue[helloworld], thread=0 Started to calculate elapsed time ... Producer ActiveMQQueue[helloworld], thread=0 Produced: 100 messages Producer ActiveMQQueue[helloworld], thread=0 Elapsed time in second : 1 s Producer ActiveMQQueue[helloworld], thread=0 Elapsed time in milli second : 1289 milli seconds Use the web console to see the messages stored in the broker. In a web browser, navigate to http://localhost:8161 . Log into the console using the default username and default password that you created when you created the broker instance. The Attributes tab is displayed. On the Attributes tab, navigate to addresses → helloworld → queues → "anycast" → helloworld . In the previous step, you sent messages to the helloworld address. This created a new anycast helloworld address with a queue (also named helloworld ). The Message count attribute shows that all 100 messages that were sent to helloworld are currently stored in this queue. Figure 4.1. Message count Use the artemis consumer command to consume 50 of the messages stored on the broker. This command consumes 50 of the messages that you sent to the broker previously. USD /opt/redhat/amq-broker/amq-broker-7.2.0/bin/artemis consumer --destination helloworld --message-count 50 --url tcp://localhost:61616 Consumer:: filter = null Consumer ActiveMQQueue[helloworld], thread=0 wait until 50 messages are consumed Consumer ActiveMQQueue[helloworld], thread=0 Consumed: 50 messages Consumer ActiveMQQueue[helloworld], thread=0 Consumer thread finished In the web console, verify that the Message count is now 50. 50 of the messages were consumed, which leaves 50 messages stored in the helloworld queue. Stop the broker and verify that the 50 remaining messages are still stored in the helloworld queue. In the terminal in which the broker is running, press Ctrl + C to stop the broker. Restart the broker. USD /var/opt/amq-broker/mybroker/bin/artemis run In the web console, navigate back to the helloworld queue and verify that there are still 50 messages stored in the queue. Consume the remaining 50 messages. USD /opt/redhat/amq-broker/amq-broker-7.2.0/bin/artemis consumer --destination helloworld --message-count 50 --url tcp://localhost:61616 Consumer:: filter = null Consumer ActiveMQQueue[helloworld], thread=0 wait until 50 messages are consumed Consumer ActiveMQQueue[helloworld], thread=0 Consumed: 50 messages Consumer ActiveMQQueue[helloworld], thread=0 Consumer thread finished In the web console, verify that the Message count is 0. All of the messages stored in the helloworld queue were consumed, and the queue is now empty. 4.4.
Stopping the broker instance After creating the standalone broker and producing and consuming test messages, you can stop the broker instance. This procedure manually stops the broker, which forcefully closes all client connections. In a production environment, you should configure the broker to stop gracefully so that client connections can be closed properly. Procedure Use the artemis stop command to stop the broker instance: USD /var/opt/amq-broker/mybroker/bin/artemis stop 2018-12-03 14:37:30,630 INFO [org.apache.activemq.artemis.core.server] AMQ221002: Apache ActiveMQ Artemis Message Broker version 2.6.1.amq-720004-redhat-1 [b6c244ef-f1cb-11e8-a2d7-0800271b03bd] stopped, uptime 35 minutes Server stopped!
[ "sudo mkdir /var/opt/amq-broker", "sudo chown -R amq-broker:amq-broker /var/opt/amq-broker", "su - amq-broker", "cd /var/opt/amq-broker", "<install_dir> /bin/artemis create mybroker", "> <install_dir> \\bin\\artemis.cmd create mybroker", "/opt/redhat/amq-broker/bin/artemis create mybroker Creating ActiveMQ Artemis instance at: /var/opt/amq-broker/mybroker --user: is mandatory with this configuration: Please provide the default username: admin --password: is mandatory with this configuration: Please provide the default password: --role: is mandatory with this configuration: Please provide the default role: amq --allow-anonymous | --require-login: is mandatory with this configuration: Allow anonymous access? (Y/N): Y Auto tuning journal done! Your system can make 19.23 writes per millisecond, your journal-buffer-timeout will be 52000 You can now start the broker by executing: \"/var/opt/amq-broker/mybroker/bin/artemis\" run Or you can run the broker in the background using: \"/var/opt/amq-broker/mybroker/bin/artemis-service\" start", "su - amq-broker", "/var/opt/amq-broker/mybroker/bin/artemis run __ __ ____ ____ _ /\\ | \\/ |/ __ \\ | _ \\ | | / \\ | \\ / | | | | | |_) |_ __ ___ | | _____ _ __ / /\\ \\ | |\\/| | | | | | _ <| '__/ _ \\| |/ / _ \\ '__| / ____ \\| | | | |__| | | |_) | | | (_) | < __/ | /_/ \\_\\_| |_|\\___\\_\\ |____/|_| \\___/|_|\\_\\___|_| Red Hat JBoss AMQ 7.2.1.GA 10:53:43,959 INFO [org.apache.activemq.artemis.integration.bootstrap] AMQ101000: Starting ActiveMQ Artemis Server 10:53:44,076 INFO [org.apache.activemq.artemis.core.server] AMQ221000: live Message Broker is starting with configuration Broker Configuration (clustered=false,journalDirectory=./data/journal,bindingsDirectory=./data/bindings,largeMessagesDirectory=./data/large-messages,pagingDirectory=./data/paging) 10:53:44,099 INFO [org.apache.activemq.artemis.core.server] AMQ221012: Using AIO Journal", "/opt/redhat/amq-broker/amq-broker-7.2.0/bin/artemis producer --destination helloworld --message-count 100 --url tcp://localhost:61616 Producer ActiveMQQueue[helloworld], thread=0 Started to calculate elapsed time Producer ActiveMQQueue[helloworld], thread=0 Produced: 100 messages Producer ActiveMQQueue[helloworld], thread=0 Elapsed time in second : 1 s Producer ActiveMQQueue[helloworld], thread=0 Elapsed time in milli second : 1289 milli seconds", "/opt/redhat/amq-broker/amq-broker-7.2.0/bin/artemis consumer --destination helloworld --message-count 50 --url tcp://localhost:61616 Consumer:: filter = null Consumer ActiveMQQueue[helloworld], thread=0 wait until 50 messages are consumed Consumer ActiveMQQueue[helloworld], thread=0 Consumed: 50 messages Consumer ActiveMQQueue[helloworld], thread=0 Consumer thread finished", "/var/opt/amq-broker/mybroker/bin/artemis run", "/opt/redhat/amq-broker/amq-broker-7.2.0/bin/artemis consumer --destination helloworld --message-count 50 --url tcp://localhost:61616 Consumer:: filter = null Consumer ActiveMQQueue[helloworld], thread=0 wait until 50 messages are consumed Consumer ActiveMQQueue[helloworld], thread=0 Consumed: 50 messages Consumer ActiveMQQueue[helloworld], thread=0 Consumer thread finished", "/var/opt/amq-broker/mybroker/bin/artemis stop 2018-12-03 14:37:30,630 INFO [org.apache.activemq.artemis.core.server] AMQ221002: Apache ActiveMQ Artemis Message Broker version 2.6.1.amq-720004-redhat-1 [b6c244ef-f1cb-11e8-a2d7-0800271b03bd] stopped, uptime 35 minutes Server stopped!" ]
https://docs.redhat.com/en/documentation/red_hat_amq_broker/7.12/html/getting_started_with_amq_broker/creating-standalone-getting-started
Chapter 25. Device Mapper Multipathing and Virtual Storage
Chapter 25. Device Mapper Multipathing and Virtual Storage Red Hat Enterprise Linux 6 also supports DM-Multipath and virtual storage . Both features are documented in detail in the Red Hat books DM Multipath and Virtualization Administration Guide . 25.1. Virtual Storage Red Hat Enterprise Linux 6 supports the following file systems/online storage methods for virtual storage: Fibre Channel iSCSI NFS GFS2 Virtualization in Red Hat Enterprise Linux 6 uses libvirt to manage virtual instances. The libvirt utility uses the concept of storage pools to manage storage for virtualized guests. A storage pool is storage that can be divided up into smaller volumes or allocated directly to a guest. Volumes of a storage pool can be allocated to virtualized guests. There are two categories of storage pools available: Local storage pools Local storage covers storage devices, files or directories directly attached to a host. Local storage includes local directories, directly attached disks, and LVM Volume Groups. Networked (shared) storage pools Networked storage covers storage devices shared over a network using standard protocols. It includes shared storage devices using Fibre Channel, iSCSI, NFS, GFS2, and SCSI RDMA protocols, and is a requirement for migrating virtualized guests between hosts. Important For comprehensive information on the deployment and configuration of virtual storage instances in your environment, refer to the Virtualization Storage section of the Virtualization guide provided by Red Hat.
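As a brief, illustrative example of the storage pool concept (the pool name and target path are placeholders), a simple directory-based local storage pool can be created and started with the virsh utility roughly as follows:
virsh pool-define-as guest_images dir --target /var/lib/libvirt/images
virsh pool-build guest_images
virsh pool-start guest_images
virsh pool-autostart guest_images
Refer to the Virtualization guide mentioned above for the options supported by networked pool types such as NFS or iSCSI.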
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/storage_administration_guide/ch-device-mapper-multipathing-virt-storage
Chapter 1. Federal Information Processing Standard (FIPS) readiness and compliance
Chapter 1. Federal Information Processing Standard (FIPS) readiness and compliance The Federal Information Processing Standard (FIPS) developed by the National Institute of Standards and Technology (NIST) is a highly regarded standard for securing and encrypting sensitive data, notably in heavily regulated areas such as banking, healthcare, and the public sector. Red Hat Enterprise Linux (RHEL) and OpenShift Container Platform support FIPS by providing a FIPS mode , in which the system only allows usage of specific FIPS-validated cryptographic modules like openssl . This ensures FIPS compliance. 1.1. Enabling FIPS compliance Use the following procedure to enable FIPS compliance on your Red Hat Quay deployment. Prerequisites If you are running a standalone deployment of Red Hat Quay, your Red Hat Enterprise Linux (RHEL) deployment is version 8 or later and FIPS-enabled. If you are using the Red Hat Quay Operator, OpenShift Container Platform is version 4.10 or later. Your Red Hat Quay version is 3.5.0 or later. You have administrative privileges for your Red Hat Quay deployment. Procedure In your Red Hat Quay config.yaml file, set the FEATURE_FIPS configuration field to true . For example: --- FEATURE_FIPS: true --- With FEATURE_FIPS set to true , Red Hat Quay runs using FIPS-compliant hash functions.
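Before setting the field, you may want to confirm that the underlying RHEL host or OpenShift nodes are actually running in FIPS mode; on RHEL 8 and later this can be checked, for example, with one of the following commands (a crypto.fips_enabled value of 1 means FIPS mode is on):
fips-mode-setup --check
sysctl crypto.fips_enabled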
[ "--- FEATURE_FIPS = true ---" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3.9/html/red_hat_quay_operator_features/fips-overview
Chapter 9. Enabling and configuring Data Grid statistics and JMX monitoring
Chapter 9. Enabling and configuring Data Grid statistics and JMX monitoring Data Grid can provide Cache Manager and cache statistics as well as export JMX MBeans. 9.1. Enabling statistics in remote caches Data Grid Server automatically enables statistics for the default Cache Manager. However, you must explicitly enable statistics for your caches. Procedure Open your Data Grid configuration for editing. Add the statistics attribute or field and specify true as the value. Save and close your Data Grid configuration. Remote cache statistics XML <distributed-cache statistics="true" /> JSON { "distributed-cache": { "statistics": "true" } } YAML distributedCache: statistics: true 9.2. Enabling Hot Rod client statistics Hot Rod Java clients can provide statistics that include remote cache and near-cache hits and misses as well as connection pool usage. Procedure Open your Hot Rod Java client configuration for editing. Set true as the value for the statistics property or invoke the statistics().enable() methods. Export JMX MBeans for your Hot Rod client with the jmx and jmx_domain properties or invoke the jmxEnable() and jmxDomain() methods. Save and close your client configuration. Hot Rod Java client statistics ConfigurationBuilder ConfigurationBuilder builder = new ConfigurationBuilder(); builder.statistics().enable() .jmxEnable() .jmxDomain("my.domain.org") .addServer() .host("127.0.0.1") .port(11222); RemoteCacheManager remoteCacheManager = new RemoteCacheManager(builder.build()); hotrod-client.properties infinispan.client.hotrod.statistics = true infinispan.client.hotrod.jmx = true infinispan.client.hotrod.jmx_domain = my.domain.org 9.3. Configuring Data Grid metrics Data Grid generates metrics that are compatible with any monitoring system. Gauges provide values such as the average number of nanoseconds for write operations or JVM uptime. Histograms provide details about operation execution times such as read, write, and remove times. By default, Data Grid generates gauges when you enable statistics but you can also configure it to generate histograms. Note Data Grid metrics are provided at the vendor scope. Metrics related to the JVM are provided in the base scope. Procedure Open your Data Grid configuration for editing. Add the metrics element or object to the cache container. Enable or disable gauges with the gauges attribute or field. Enable or disable histograms with the histograms attribute or field. Save and close your client configuration. Metrics configuration XML <infinispan> <cache-container statistics="true"> <metrics gauges="true" histograms="true" /> </cache-container> </infinispan> JSON { "infinispan" : { "cache-container" : { "statistics" : "true", "metrics" : { "gauges" : "true", "histograms" : "true" } } } } YAML infinispan: cacheContainer: statistics: "true" metrics: gauges: "true" histograms: "true" Verification Data Grid Server exposes statistics through the metrics endpoint that you can collect with monitoring tools such as Prometheus. To verify that statistics are exported to the metrics endpoint, you can do the following: Prometheus format OpenMetrics format Note Data Grid no longer provides metrics in MicroProfile JSON format. Additional resources Micrometer Prometheus 9.4. Registering JMX MBeans Data Grid can register JMX MBeans that you can use to collect statistics and perform administrative operations. You must also enable statistics otherwise Data Grid provides 0 values for all statistic attributes in JMX MBeans. 
Procedure Open your Data Grid configuration for editing. Add the jmx element or object to the cache container and specify true as the value for the enabled attribute or field. Add the domain attribute or field and specify the domain where JMX MBeans are exposed, if required. Save and close your client configuration. JMX configuration XML <infinispan> <cache-container statistics="true"> <jmx enabled="true" domain="example.com"/> </cache-container> </infinispan> JSON { "infinispan" : { "cache-container" : { "statistics" : "true", "jmx" : { "enabled" : "true", "domain" : "example.com" } } } } YAML infinispan: cacheContainer: statistics: "true" jmx: enabled: "true" domain: "example.com" 9.4.1. Enabling JMX remote ports Provide unique remote JMX ports to expose Data Grid MBeans through connections in JMXServiceURL format. Note Data Grid Server does not expose JMX remotely via the single port endpoint. If you want to remotely access Data Grid Server via JMX you must enable a remote port. You can enable remote JMX ports using one of the following approaches: Enable remote JMX ports that require authentication to one of the Data Grid Server security realms. Enable remote JMX ports manually using the standard Java management configuration options. Prerequisites For remote JMX with authentication, define JMX specific user roles using the default security realm. Users must have controlRole with read/write access or the monitorRole with read-only access to access any JMX resources. Procedure Start Data Grid Server with a remote JMX port enabled using one of the following ways: Enable remote JMX through port 9999 . Warning Using remote JMX with SSL disabled is not intended for production environments. Pass the following system properties to Data Grid Server at startup. Warning Enabling remote JMX with no authentication or SSL is not secure and not recommended in any environment. Disabling authentication and SSL allows unauthorized users to connect to your server and access the data hosted there. Additional resources Creating security realms 9.4.2. Data Grid MBeans Data Grid exposes JMX MBeans that represent manageable resources. org.infinispan:type=Cache Attributes and operations available for cache instances. org.infinispan:type=CacheManager Attributes and operations available for Cache Managers, including Data Grid cache and cluster health statistics. For a complete list of available JMX MBeans along with descriptions and available operations and attributes, see the Data Grid JMX Components documentation. Additional resources Data Grid JMX Components 9.4.3. Registering MBeans in custom MBean servers Data Grid includes an MBeanServerLookup interface that you can use to register MBeans in custom MBeanServer instances. Prerequisites Create an implementation of MBeanServerLookup so that the getMBeanServer() method returns the custom MBeanServer instance. Configure Data Grid to register JMX MBeans. Procedure Open your Data Grid configuration for editing. Add the mbean-server-lookup attribute or field to the JMX configuration for the Cache Manager. Specify fully qualified name (FQN) of your MBeanServerLookup implementation. Save and close your client configuration. 
JMX MBean server lookup configuration XML <infinispan> <cache-container statistics="true"> <jmx enabled="true" domain="example.com" mbean-server-lookup="com.example.MyMBeanServerLookup"/> </cache-container> </infinispan> JSON { "infinispan" : { "cache-container" : { "statistics" : "true", "jmx" : { "enabled" : "true", "domain" : "example.com", "mbean-server-lookup" : "com.example.MyMBeanServerLookup" } } } } YAML infinispan: cacheContainer: statistics: "true" jmx: enabled: "true" domain: "example.com" mbeanServerLookup: "com.example.MyMBeanServerLookup" 9.5. Exporting metrics during a state transfer operation You can export time metrics for clustered caches that Data Grid redistributes across nodes. A state transfer operation occurs when a clustered cache topology changes, such as a node joining or leaving a cluster. During a state transfer operation, Data Grid exports metrics from each cache, so that you can determine a cache's status. A state transfer exposes attributes as properties, so that Data Grid can export metrics from each cache. Note You cannot perform a state transfer operation in invalidation mode. Data Grid generates time metrics that are compatible with the REST API and the JMX API. Prerequisites Configure Data Grid metrics. Enable metrics for your cache type, such as embedded cache or remote cache. Initiate a state transfer operation by changing your clustered cache topology. Procedure Choose one of the following methods: Configure Data Grid to use the REST API to collect metrics. Configure Data Grid to use the JMX API to collect metrics. Additional resources Enabling and configuring Data Grid statistics and JMX monitoring (Data Grid caches) StateTransferManager (Data Grid 14.0 API) 9.6. Monitoring the status of cross-site replication Monitor the site status of your backup locations to detect interruptions in the communication between the sites. When a remote site status changes to offline , Data Grid stops replicating your data to the backup location. Your data become out of sync and you must fix the inconsistencies before bringing the clusters back online. Monitoring cross-site events is necessary for early problem detection. Use one of the following monitoring strategies: Monitoring cross-site replication with the REST API Monitoring cross-site replication with the Prometheus metrics or any other monitoring system Monitoring cross-site replication with the REST API Monitor the status of cross-site replication for all caches using the REST endpoint. You can implement a custom script to poll the REST endpoint or use the following example. Prerequisites Enable cross-site replication. Procedure Implement a script to poll the REST endpoint. The following example demonstrates how you can use a Python script to poll the site status every five seconds. 
#!/usr/bin/python3 import time import requests from requests.auth import HTTPDigestAuth class InfinispanConnection: def __init__(self, server: str = 'http://localhost:11222', cache_manager: str = 'default', auth: tuple = ('admin', 'change_me')) -> None: super().__init__() self.__url = f'{server}/rest/v2/cache-managers/{cache_manager}/x-site/backups/' self.__auth = auth self.__headers = { 'accept': 'application/json' } def get_sites_status(self): try: rsp = requests.get(self.__url, headers=self.__headers, auth=HTTPDigestAuth(self.__auth[0], self.__auth[1])) if rsp.status_code != 200: return None return rsp.json() except: return None # Specify credentials for Data Grid user with permission to access the REST endpoint USERNAME = 'admin' PASSWORD = 'change_me' # Set an interval between cross-site status checks POLL_INTERVAL_SEC = 5 # Provide a list of servers SERVERS = [ InfinispanConnection('http://127.0.0.1:11222', auth=(USERNAME, PASSWORD)), InfinispanConnection('http://127.0.0.1:12222', auth=(USERNAME, PASSWORD)) ] #Specify the names of remote sites REMOTE_SITES = [ 'nyc' ] #Provide a list of caches to monitor CACHES = [ 'work', 'sessions' ] def on_event(site: str, cache: str, old_status: str, new_status: str): # TODO implement your handling code here print(f'site={site} cache={cache} Status changed {old_status} -> {new_status}') def __handle_mixed_state(state: dict, site: str, site_status: dict): if site not in state: state[site] = {c: 'online' if c in site_status['online'] else 'offline' for c in CACHES} return for cache in CACHES: __update_cache_state(state, site, cache, 'online' if cache in site_status['online'] else 'offline') def __handle_online_or_offline_state(state: dict, site: str, new_status: str): if site not in state: state[site] = {c: new_status for c in CACHES} return for cache in CACHES: __update_cache_state(state, site, cache, new_status) def __update_cache_state(state: dict, site: str, cache: str, new_status: str): old_status = state[site].get(cache) if old_status != new_status: on_event(site, cache, old_status, new_status) state[site][cache] = new_status def update_state(state: dict): rsp = None for conn in SERVERS: rsp = conn.get_sites_status() if rsp: break if rsp is None: print('Unable to fetch site status from any server') return for site in REMOTE_SITES: site_status = rsp.get(site, {}) new_status = site_status.get('status') if new_status == 'mixed': __handle_mixed_state(state, site, site_status) else: __handle_online_or_offline_state(state, site, new_status) if __name__ == '__main__': _state = {} while True: update_state(_state) time.sleep(POLL_INTERVAL_SEC) When a site status changes from online to offline or vice-versa, the function on_event is invoked. If you want to use this script, you must specify the following variables: USERNAME and PASSWORD : The username and password of Data Grid user with permission to access the REST endpoint. POLL_INTERVAL_SEC : The number of seconds between polls. SERVERS : The list of Data Grid Servers at this site. The script only requires a single valid response but the list is provided to allow fail over. REMOTE_SITES : The list of remote sites to monitor on these servers. CACHES : The list of cache names to monitor. Additional resources REST API: Getting status of backup locations Monitoring cross-site replication with the Prometheus metrics Prometheus, and other monitoring systems, let you configure alerts to detect when a site status changes to offline . 
Tip Monitoring cross-site latency metrics can help you to discover potential issues. Prerequisites Enable cross-site replication. Procedure Configure Data Grid metrics. Configure alerting rules using the Prometheus metrics format. For the site status, use 1 for online and 0 for offline . For the expr field, use the following format: vendor_cache_manager_default_cache_<cache name>_x_site_admin_<site name>_status . In the following example, Prometheus alerts you when the NYC site goes offline for the cache named work or sessions . groups: - name: Cross Site Rules rules: - alert: Cache Work and Site NYC expr: vendor_cache_manager_default_cache_work_x_site_admin_nyc_status == 0 - alert: Cache Sessions and Site NYC expr: vendor_cache_manager_default_cache_sessions_x_site_admin_nyc_status == 0 The following image shows an alert that the NYC site is offline for the cache named work . Figure 9.1. Prometheus Alert Additional resources Configuring Data Grid metrics Prometheus Alerting Overview Grafana Alerting Documentation OpenShift Managing Alerts
[ "<distributed-cache statistics=\"true\" />", "{ \"distributed-cache\": { \"statistics\": \"true\" } }", "distributedCache: statistics: true", "ConfigurationBuilder builder = new ConfigurationBuilder(); builder.statistics().enable() .jmxEnable() .jmxDomain(\"my.domain.org\") .addServer() .host(\"127.0.0.1\") .port(11222); RemoteCacheManager remoteCacheManager = new RemoteCacheManager(builder.build());", "infinispan.client.hotrod.statistics = true infinispan.client.hotrod.jmx = true infinispan.client.hotrod.jmx_domain = my.domain.org", "<infinispan> <cache-container statistics=\"true\"> <metrics gauges=\"true\" histograms=\"true\" /> </cache-container> </infinispan>", "{ \"infinispan\" : { \"cache-container\" : { \"statistics\" : \"true\", \"metrics\" : { \"gauges\" : \"true\", \"histograms\" : \"true\" } } } }", "infinispan: cacheContainer: statistics: \"true\" metrics: gauges: \"true\" histograms: \"true\"", "curl -v http://localhost:11222/metrics --digest -u username:password", "curl -v http://localhost:11222/metrics --digest -u username:password -H \"Accept: application/openmetrics-text\"", "<infinispan> <cache-container statistics=\"true\"> <jmx enabled=\"true\" domain=\"example.com\"/> </cache-container> </infinispan>", "{ \"infinispan\" : { \"cache-container\" : { \"statistics\" : \"true\", \"jmx\" : { \"enabled\" : \"true\", \"domain\" : \"example.com\" } } } }", "infinispan: cacheContainer: statistics: \"true\" jmx: enabled: \"true\" domain: \"example.com\"", "bin/server.sh --jmx 9999", "bin/server.sh -Dcom.sun.management.jmxremote.port=9999 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false", "<infinispan> <cache-container statistics=\"true\"> <jmx enabled=\"true\" domain=\"example.com\" mbean-server-lookup=\"com.example.MyMBeanServerLookup\"/> </cache-container> </infinispan>", "{ \"infinispan\" : { \"cache-container\" : { \"statistics\" : \"true\", \"jmx\" : { \"enabled\" : \"true\", \"domain\" : \"example.com\", \"mbean-server-lookup\" : \"com.example.MyMBeanServerLookup\" } } } }", "infinispan: cacheContainer: statistics: \"true\" jmx: enabled: \"true\" domain: \"example.com\" mbeanServerLookup: \"com.example.MyMBeanServerLookup\"", "#!/usr/bin/python3 import time import requests from requests.auth import HTTPDigestAuth class InfinispanConnection: def __init__(self, server: str = 'http://localhost:11222', cache_manager: str = 'default', auth: tuple = ('admin', 'change_me')) -> None: super().__init__() self.__url = f'{server}/rest/v2/cache-managers/{cache_manager}/x-site/backups/' self.__auth = auth self.__headers = { 'accept': 'application/json' } def get_sites_status(self): try: rsp = requests.get(self.__url, headers=self.__headers, auth=HTTPDigestAuth(self.__auth[0], self.__auth[1])) if rsp.status_code != 200: return None return rsp.json() except: return None Specify credentials for Data Grid user with permission to access the REST endpoint USERNAME = 'admin' PASSWORD = 'change_me' Set an interval between cross-site status checks POLL_INTERVAL_SEC = 5 Provide a list of servers SERVERS = [ InfinispanConnection('http://127.0.0.1:11222', auth=(USERNAME, PASSWORD)), InfinispanConnection('http://127.0.0.1:12222', auth=(USERNAME, PASSWORD)) ] #Specify the names of remote sites REMOTE_SITES = [ 'nyc' ] #Provide a list of caches to monitor CACHES = [ 'work', 'sessions' ] def on_event(site: str, cache: str, old_status: str, new_status: str): # TODO implement your handling code here print(f'site={site} cache={cache} Status changed {old_status} 
-> {new_status}') def __handle_mixed_state(state: dict, site: str, site_status: dict): if site not in state: state[site] = {c: 'online' if c in site_status['online'] else 'offline' for c in CACHES} return for cache in CACHES: __update_cache_state(state, site, cache, 'online' if cache in site_status['online'] else 'offline') def __handle_online_or_offline_state(state: dict, site: str, new_status: str): if site not in state: state[site] = {c: new_status for c in CACHES} return for cache in CACHES: __update_cache_state(state, site, cache, new_status) def __update_cache_state(state: dict, site: str, cache: str, new_status: str): old_status = state[site].get(cache) if old_status != new_status: on_event(site, cache, old_status, new_status) state[site][cache] = new_status def update_state(state: dict): rsp = None for conn in SERVERS: rsp = conn.get_sites_status() if rsp: break if rsp is None: print('Unable to fetch site status from any server') return for site in REMOTE_SITES: site_status = rsp.get(site, {}) new_status = site_status.get('status') if new_status == 'mixed': __handle_mixed_state(state, site, site_status) else: __handle_online_or_offline_state(state, site, new_status) if __name__ == '__main__': _state = {} while True: update_state(_state) time.sleep(POLL_INTERVAL_SEC)", "groups: - name: Cross Site Rules rules: - alert: Cache Work and Site NYC expr: vendor_cache_manager_default_cache_work_x_site_admin_nyc_status == 0 - alert: Cache Sessions and Site NYC expr: vendor_cache_manager_default_cache_sessions_x_site_admin_nyc_status == 0" ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/data_grid_server_guide/statistics-jmx
Chapter 2. Certificate-based authentication between Red Hat Quay and SQL
Chapter 2. Certificate-based authentication between Red Hat Quay and SQL Red Hat Quay administrators can configure certificate-based authentication between Red Hat Quay and SQL (PostgreSQL and GCP CloudSQL) by supplying their own SSL/TLS certificates for client-side authentication. This provides enhanced security and allows for easier automation for your Red Hat Quay registry. The following sections show you how to configure certificate-based authentication between Red Hat Quay and PostgreSQL, and Red Hat Quay and CloudSQL. 2.1. Configuring certificate-based authentication with SQL The following procedure demonstrates how to connect Red Hat Quay with an SQL database using secure client-side certificates. This method ensures both connectivity and authentication through Certificate Trust Verification, as it verifies the SQL server's certificate against a trusted Certificate Authority (CA). This enhances the security of the connection between Red Hat Quay and your SQL server while simplifying automation for your deployment. Although the example uses Google Cloud Platform's CloudSQL, the procedure also applies to PostgreSQL and other supported databases. Prerequisites You have generated custom Certificate Authorities (CAs), and your SSL/TLS certificates and keys, which will be used to establish an SSL connection with your CloudSQL database, are available in PEM format. For more information, see SSL and TLS for Red Hat Quay . You have base64 decoded the original config bundle into a config.yaml file. For more information, see Downloading the existing configuration . You are using an externally managed PostgreSQL or CloudSQL database. For more information, see Using an existing PostgreSQL database with the DB_URI variable set. Your externally managed PostgreSQL or CloudSQL database is configured for SSL/TLS. The postgres component of your QuayRegistry CRD is set to managed: false , and your CloudSQL database is set with the DB_URI configuration variable. The following procedure uses postgresql://<cloudsql_username>:<dbpassword>@<database_host>:<port>/<database_name> . Procedure After you have generated the CAs and SSL/TLS certificates and keys for your CloudSQL database and ensured that they are in .pem format, test the SSL connection to your CloudSQL server: Initiate a connection to your CloudSQL server by entering the following command: USD psql "sslmode=verify-ca sslrootcert=<ssl_server_certificate_authority>.pem sslcert=<ssl_client_certificate>.pem sslkey=<ssl_client_key>.pem hostaddr=<database_host> port=<5432> user=<cloudsql_username> dbname=<cloudsql_database_name>" In your Red Hat Quay directory, create a new YAML file, for example, quay-config-bundle.yaml , by running the following command: USD touch quay-config-bundle.yaml Create a postgresql-client-certs resource by entering the following command: USD oc -n <quay_namespace> create secret generic postgresql-client-certs \ --from-file config.yaml=<path/to/config.yaml> 1 --from-file=tls.crt=<path/to/ssl_client_certificate.pem> 2 --from-file=tls.key=<path/to/ssl_client_key.pem> 3 --from-file=ca.crt=<path/to/ssl_server_certificate.pem> 4 1 Where <config.yaml> is your base64 decoded config.yaml file. 2 Where ssl_client_certificate.pem is your SSL certificate in .pem format. 3 Where ssl_client_key.pem is your SSL key in .pem format. 4 Where ssl_server_certificate.pem is your SSL root CA in .pem format.
Edit your quay-config-bundle.yaml file to include the following database connection settings: Important The information included in the DB_CONNECTION_ARGS variable, for example, sslmode , sslrootcert , sslcert , and sslkey must match the information appended to the DB_URI variable. Failure to match might result in a failed connection. You cannot specify custom filenames or paths. Certificate file paths for sslrootcert , sslcert , and sslkey are hardcoded defaults and mounted into the Quay pod from the Kubernetes secret. You must adhere to the following naming conventions or it will result in a failed connection. DB_CONNECTION_ARGS: autorollback: true sslmode: verify-ca 1 sslrootcert: /.postgresql/root.crt 2 sslcert: /.postgresql/postgresql.crt 3 sslkey: /.postgresql/postgresql.key 4 threadlocals: true 5 DB_URI: postgresql://<dbusername>:<dbpassword>@<database_host>:<port>/<database_name>?sslmode=verify-full&sslrootcert=/.postgresql/root.crt&sslcert=/.postgresql/postgresql.crt&sslkey=/.postgresql/postgresql.key 6 1 Using verify-ca ensures that the database connection uses SSL/TLS and verifies the server certificate against a trusted CA. This can work with both trusted CA and self-signed CA certificates. However, this mode does not verify the hostname of the server. For full hostname and certificate verification, use verify-full . For more information about the configuration options available, see PostgreSQL SSL/TLS connection arguments . 2 The root.crt file contains the root certificate used to verify the SSL/TLS connection with your CloudSQL database. This file is mounted in the Quay pod from the Kubernetes secret. 3 The postgresql.crt file contains the client certificate used to authenticate the connection to your CloudSQL database. This file is mounted in the Quay pod from the Kubernetes secret. 4 The postgresql.key file contains the private key associated with the client certificate. This file is mounted in the Quay pod from the Kubernetes secret. 5 Enables auto-rollback for connections. 6 The URI that accesses your CloudSQL database. Must be appended with the sslmode type, your root.crt , postgresql.crt , and postgresql.key files. The SSL/TLS information included in DB_URI must match the information provided in DB_CONNECTION_ARGS . If you are using CloudSQL, you must include your database username and password in this variable. Create the configBundleSecret resource by entering the following command: USD oc create -n <namespace> -f quay-config-bundle.yaml Example output secret/quay-config-bundle created Update the QuayRegistry YAML file to reference the quay-config-bundle object by entering the following command: USD oc patch quayregistry <registry_name> -n <namespace> --type=merge -p '{"spec":{"configBundleSecret":"quay-config-bundle"}}' Example output quayregistry.quay.redhat.com/example-registry patched Ensure that your QuayRegistry YAML file has been updated to use the extra CA certificate configBundleSecret resource by entering the following command: USD oc get quayregistry <registry_name> -n <namespace> -o yaml Example output # ... configBundleSecret: quay-config-bundle # ...
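As an optional verification step (the pod name is a placeholder), you can confirm that the files from the postgresql-client-certs secret are mounted at the hardcoded paths inside the Quay pod; the listing should include root.crt, postgresql.crt, and postgresql.key:
oc exec -n <namespace> <quay_pod_name> -- ls -l /.postgresql/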
[ "psql \"sslmode=verify-ca sslrootcert=<ssl_server_certificate_authority>.pem sslcert=<ssl_client_certificate>.pem sslkey=<ssl_client_key>.pem hostaddr=<database_host> port=<5432> user=<cloudsql_username> dbname=<cloudsql_database_name>\"", "touch quay-config-bundle.yaml", "oc -n <quay_namespace> create secret generic postgresql-client-certs --from-file config.yaml=<path/to/config.yaml> 1 --from-file=tls.crt=<path/to/ssl_client_certificate.pem> 2 --from-file=tls.key=<path/to/ssl_client_key.pem> 3 --from-file=ca.crt=<path/to/ssl_server_certificate.pem> 4", "DB_CONNECTION_ARGS: autorollback: true sslmode: verify-ca 1 sslrootcert: /.postgresql/root.crt 2 sslcert: /.postgresql/postgresql.crt 3 sslkey: /.postgresql/postgresql.key 4 threadlocals: true 5 DB_URI: postgresql://<dbusername>:<dbpassword>@<database_host>:<port>/<database_name>?sslmode=verify-full&sslrootcert=/.postgresql/root.crt&sslcert=/.postgresql/postgresql.crt&sslkey=/.postgresql/postgresql.key 6", "oc create -n <namespace> -f quay-config-bundle.yaml", "secret/quay-config-bundle created", "oc patch quayregistry <registry_name> -n <namespace> --type=merge -p '{\"spec\":{\"configBundleSecret\":\"quay-config-bundle\"}}'", "quayregistry.quay.redhat.com/example-registry patched", "oc get quayregistry <registry_name> -n <namespace> -o yaml", "configBundleSecret: quay-config-bundle" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3/html/securing_red_hat_quay/cert-based-auth-quay-sql
Chapter 7. Performing and configuring basic builds
Chapter 7. Performing and configuring basic builds The following sections provide instructions for basic build operations, including starting and canceling builds, editing BuildConfigs , deleting BuildConfigs , viewing build details, and accessing build logs. 7.1. Starting a build You can manually start a new build from an existing build configuration in your current project. Procedure To start a build manually, enter the following command: USD oc start-build <buildconfig_name> 7.1.1. Re-running a build You can manually re-run a build using the --from-build flag. Procedure To manually re-run a build, enter the following command: USD oc start-build --from-build=<build_name> 7.1.2. Streaming build logs You can specify the --follow flag to stream the build's logs in stdout . Procedure To manually stream a build's logs in stdout , enter the following command: USD oc start-build <buildconfig_name> --follow 7.1.3. Setting environment variables when starting a build You can specify the --env flag to set any desired environment variable for the build. Procedure To specify a desired environment variable, enter the following command: USD oc start-build <buildconfig_name> --env=<key>=<value> 7.1.4. Starting a build with source Rather than relying on a Git source pull or a Dockerfile for a build, you can also start a build by directly pushing your source, which could be the contents of a Git or SVN working directory, a set of pre-built binary artifacts you want to deploy, or a single file. This can be done by specifying one of the following options for the start-build command: Option Description --from-dir=<directory> Specifies a directory that will be archived and used as a binary input for the build. --from-file=<file> Specifies a single file that will be the only file in the build source. The file is placed in the root of an empty directory with the same file name as the original file provided. --from-repo=<local_source_repo> Specifies a path to a local repository to use as the binary input for a build. Add the --commit option to control which branch, tag, or commit is used for the build. When passing any of these options directly to the build, the contents are streamed to the build and override the current build source settings. Note Builds triggered from binary input will not preserve the source on the server, so rebuilds triggered by base image changes will use the source specified in the build configuration. Procedure To start a build from a source code repository and send the contents of a local Git repository as an archive from the tag v2 , enter the following command: USD oc start-build hello-world --from-repo=../hello-world --commit=v2 7.2. Canceling a build You can cancel a build using the web console, or with the following CLI command. Procedure To manually cancel a build, enter the following command: USD oc cancel-build <build_name> 7.2.1. Canceling multiple builds You can cancel multiple builds with the following CLI command. Procedure To manually cancel multiple builds, enter the following command: USD oc cancel-build <build1_name> <build2_name> <build3_name> 7.2.2. Canceling all builds You can cancel all builds from the build configuration with the following CLI command. Procedure To cancel all builds, enter the following command: USD oc cancel-build bc/<buildconfig_name> 7.2.3. Canceling all builds in a given state You can cancel all builds in a given state, such as new or pending , while ignoring the builds in other states. 
Procedure To cancel all builds in a given state, enter the following command: USD oc cancel-build bc/<buildconfig_name> --state=<state> Replace <state> with the state of the builds that you want to cancel, for example new or pending. 7.3. Editing a BuildConfig To edit your build configurations, you use the Edit BuildConfig option in the Builds view of the Developer perspective. You can use either of the following views to edit a BuildConfig : The Form view enables you to edit your BuildConfig using the standard form fields and checkboxes. The YAML view enables you to edit your BuildConfig with full control over the operations. You can switch between the Form view and YAML view without losing any data. The data in the Form view is transferred to the YAML view and vice versa. Procedure In the Builds view of the Developer perspective, click the Options menu to see the Edit BuildConfig option. Click Edit BuildConfig to see the Form view option. In the Git section, enter the Git repository URL for the codebase you want to use to create an application. The URL is then validated. Optional: Click Show Advanced Git Options to add details such as: Git Reference to specify a branch, tag, or commit that contains code you want to use to build the application. Context Dir to specify the subdirectory that contains code you want to use to build the application. Source Secret to create a Secret Name with credentials for pulling your source code from a private repository. In the Build from section, select the option that you would like to build from. You can use the following options: Image Stream tag references an image for a given image stream and tag. Enter the project, image stream, and tag of the location you would like to build from and push to. Image Stream image references an image for a given image stream and image name. Enter the image stream image you would like to build from. Also enter the project, image stream, and tag to push to. Docker image : The Docker image is referenced through a Docker image repository. You will also need to enter the project, image stream, and tag to refer to where you would like to push to. Optional: In the Environment Variables section, add the environment variables associated with the project by using the Name and Value fields. To add more environment variables, use Add Value , or Add from ConfigMap and Secret . Optional: To further customize your application, use the following advanced options: Trigger Triggers a new image build when the builder image changes. Add more triggers by clicking Add Trigger and selecting the Type and Secret . Secrets Adds secrets for your application. Add more secrets by clicking Add secret and selecting the Secret and Mount point . Policy Click Run policy to select the build run policy. The selected policy determines the order in which builds created from the build configuration must run. Hooks Select Run build hooks after image is built to run commands at the end of the build and verify the image. Add Hook type , Command , and Arguments to append to the command. Click Save to save the BuildConfig . 7.4. Deleting a BuildConfig You can delete a BuildConfig using the following command. Procedure To delete a BuildConfig , enter the following command: USD oc delete bc <BuildConfigName> This also deletes all builds that were instantiated from this BuildConfig . To delete a BuildConfig and keep the builds instantiated from the BuildConfig , specify the --cascade=false flag when you enter the following command: USD oc delete --cascade=false bc <BuildConfigName> 7.5.
Viewing build details You can view build details with the web console or by using the oc describe CLI command. This displays information including: The build source. The build strategy. The output destination. Digest of the image in the destination registry. How the build was created. If the build uses the Docker or Source strategy, the oc describe output also includes information about the source revision used for the build, including the commit ID, author, committer, and message. Procedure To view build details, enter the following command: USD oc describe build <build_name> 7.6. Accessing build logs You can access build logs using the web console or the CLI. Procedure To stream the logs of a build directly, enter the following command: USD oc logs -f build/<build_name> 7.6.1. Accessing BuildConfig logs You can access BuildConfig logs using the web console or the CLI. Procedure To stream the logs of the latest build for a BuildConfig , enter the following command: USD oc logs -f bc/<buildconfig_name> 7.6.2. Accessing BuildConfig logs for a given version build You can access logs for a given version build for a BuildConfig using the web console or the CLI. Procedure To stream the logs for a given version build for a BuildConfig , enter the following command: USD oc logs --version=<number> bc/<buildconfig_name> 7.6.3. Enabling log verbosity You can enable a more verbose output by passing the BUILD_LOGLEVEL environment variable as part of the sourceStrategy or dockerStrategy in a BuildConfig . Note An administrator can set the default build verbosity for the entire OpenShift Container Platform instance by configuring env/BUILD_LOGLEVEL . This default can be overridden by specifying BUILD_LOGLEVEL in a given BuildConfig . You can specify a higher priority override on the command line for non-binary builds by passing --build-loglevel to oc start-build . Available log levels for source builds are as follows: Level 0 Produces output from containers running the assemble script and all encountered errors. This is the default. Level 1 Produces basic information about the executed process. Level 2 Produces very detailed information about the executed process. Level 3 Produces very detailed information about the executed process, and a listing of the archive contents. Level 4 Currently produces the same information as level 3. Level 5 Produces everything mentioned on the previous levels and additionally provides docker push messages. Procedure To enable more verbose output, pass the BUILD_LOGLEVEL environment variable as part of the sourceStrategy or dockerStrategy in a BuildConfig : sourceStrategy: ... env: - name: "BUILD_LOGLEVEL" value: "2" 1 1 Adjust this value to the desired log level.
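As a quick illustration of how the verbosity and log streaming options described above fit together, the following commands are a minimal sketch; myapp is a hypothetical BuildConfig name, and the log level is only an example.

oc start-build myapp --build-loglevel=3 --follow
oc logs -f bc/myapp

The first command overrides the log level for a single non-binary build and streams its output; the second follows the logs of the latest build for the same BuildConfig.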
[ "oc start-build <buildconfig_name>", "oc start-build --from-build=<build_name>", "oc start-build <buildconfig_name> --follow", "oc start-build <buildconfig_name> --env=<key>=<value>", "oc start-build hello-world --from-repo=../hello-world --commit=v2", "oc cancel-build <build_name>", "oc cancel-build <build1_name> <build2_name> <build3_name>", "oc cancel-build bc/<buildconfig_name>", "oc cancel-build bc/<buildconfig_name>", "oc delete bc <BuildConfigName>", "oc delete --cascade=false bc <BuildConfigName>", "oc describe build <build_name>", "oc describe build <build_name>", "oc logs -f bc/<buildconfig_name>", "oc logs --version=<number> bc/<buildconfig_name>", "sourceStrategy: env: - name: \"BUILD_LOGLEVEL\" value: \"2\" 1" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/builds_using_buildconfig/basic-build-operations
4.122. krb5
4.122. krb5 4.122.1. RHSA-2011:1790 - Moderate: krb5 security update Updated krb5 packages that fix one security issue are now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having moderate security impact. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. Kerberos is a network authentication system which allows clients and servers to authenticate to each other using symmetric encryption and a trusted third-party, the Key Distribution Center (KDC). Security Fix CVE-2011-1530 A NULL pointer dereference flaw was found in the way the MIT Kerberos KDC processed certain TGS (Ticket-granting Server) requests. A remote, authenticated attacker could use this flaw to crash the KDC via a specially-crafted TGS request. Red Hat would like to thank the MIT Kerberos project for reporting this issue. All krb5 users should upgrade to these updated packages, which contain a backported patch to correct this issue. After installing the updated packages, the krb5kdc daemon will be restarted automatically. 4.122.2. RHBA-2011:1707 - krb5 bug fix update Updated krb5 packages that fix multiple bugs are now available for Red Hat Enterprise Linux 6. The Kerberos authentication system allows clients and servers to authenticate to each other using symmetric encryption and the help of a trusted third party, the KDC. This update fixes the following bugs: BZ# 651466 Kerberos version 1.8 and later defaults to disabling support for older encryption types which are no longer believed to be sufficiently strong. When upgrading from older versions of Red Hat Enterprise Linux, a number of services which run at the key distribution center (KDC) need to have their keys reset to include keys for newer encryption types. This update adds a spot-check to the KDC init script which assist in diagnosing this condition. BZ# 701446 , BZ# 746341 Previously, a client could fail to connect to a KDC if a sufficiently large number of descriptors was already in use. This update modifies the Kerberos libraries to switch to using poll() instead of select(), which does not suffer from this limitation. BZ# 713252 , BZ# 729068 Previously, the kadmin client could fail to establish a connection with certain older versions of the kadmin daemon. In these situations, the server often logged a diagnostic noting that the client had supplied it with incorrect channel bindings. This update modifies the client to allow it to once again contact those versions of kadmind. BZ# 713518 Previously, a client failed to obtain credentials for authentication from KDCs that rejected requests specifying unrecognized options and that also did not support the canonicalize option. With this update, obtaining credentials also works with these KDCs. BZ# 714217 Previously, locally-applied patches, which attempt to ensure that any files created by the Kerberos libraries are given and keep the correct SELinux file labels, did not correctly ensure that replay cache files kept their labels. This update corrects the patch to cover this case. BZ# 717378 Previously, the Kerberos client libraries could inadvertently trigger an address-to-name lookup inside of the resolver libraries when attempting to derive a principal name from a combination of a service name and a host name, even if the user disabled them using the "rdns" setting in the krb5.conf file. 
This update modifies the client library to prevent it from triggering these lookups. BZ# 724033 Previously, the kadmind init script could erroneously refuse to start the kadmind server on a KDC, if the realm database was moved to a non-default location, or a non-default kdb backend was in use. This update removes the logic from the init script which caused it to do so. BZ# 729044 Previously, the krb5-debuginfo package excluded several source files used to build the package. This update ensures that the affected files are still included. BZ# 734341 Previously, obtaining the Kerberos credentials for services could fail if the target server was in a different trusted realm than the client. This update modifies krb5-libs so that the client obtains the credentials as expected. All Kerberos users are advised to upgrade to these updated packages, which fix these bugs.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/krb5
Chapter 10. NetworkManager
Chapter 10. NetworkManager NetworkManager is a dynamic network control and configuration system that attempts to keep network devices and connections up and active when they are available. NetworkManager consists of a core daemon, a GNOME Notification Area applet that provides network status information, and graphical configuration tools that can create, edit and remove connections and interfaces. NetworkManager can be used to configure the following types of connections: Ethernet, wireless, mobile broadband (such as cellular 3G), and DSL and PPPoE (Point-to-Point Protocol over Ethernet). In addition, NetworkManager allows for the configuration of network aliases, static routes, DNS information and VPN connections, as well as many connection-specific parameters. Finally, NetworkManager provides a rich API via D-Bus which allows applications to query and control network configuration and state. Previous versions of Red Hat Enterprise Linux included the Network Administration Tool , which was commonly known as system-config-network after its command-line invocation. In Red Hat Enterprise Linux 6, NetworkManager replaces the former Network Administration Tool while providing enhanced functionality, such as user-specific and mobile broadband configuration. It is also possible to configure the network in Red Hat Enterprise Linux 6 by editing interface configuration files; see Chapter 11, Network Interfaces for more information. NetworkManager may be installed by default on your version of Red Hat Enterprise Linux. To ensure that it is, run the following command as root : 10.1. The NetworkManager Daemon The NetworkManager daemon runs with root privileges and is usually configured to start up at boot time. You can determine whether the NetworkManager daemon is running by entering this command as root : The service command will report NetworkManager is stopped if the NetworkManager service is not running. To start it for the current session: Run the chkconfig command to ensure that NetworkManager starts up every time the system boots: For more information on starting, stopping and managing services and runlevels, see Chapter 12, Services and Daemons .
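To confirm that NetworkManager is enabled at boot after running chkconfig, you can list its runlevel configuration. The runlevel values shown below are only an illustration of a typical enabled configuration; your output may differ.

~]# chkconfig --list NetworkManager
NetworkManager  0:off  1:off  2:on  3:on  4:on  5:on  6:off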
[ "~]# yum install NetworkManager", "~]# service NetworkManager status NetworkManager (pid 1527) is running", "~]# service NetworkManager start", "~]# chkconfig NetworkManager on" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/ch-networkmanager
Chapter 3. JBoss EAP installer
Chapter 3. JBoss EAP installer You can use the JBoss EAP installer to install, configure, and uninstall a JBoss EAP instance. You can use the JBoss EAP installer on all platforms supported by JBoss EAP. 3.1. Downloading and installing the JBoss EAP installer You can use the JBoss EAP installer that is available from the Red Hat Customer Portal. The .jar archive can be used to run either the graphical or text-based installers. The installer is the preferred way to install JBoss EAP on all supported platforms. Prerequisites Set up an account on the Red Hat Customer Portal . Review the JBoss EAP 7 supported configurations and ensure that your system is supported. Install a supported Java Development Kit (JDK). Procedure Log in to the Red Hat Customer Portal . From the Product Downloads list, select Red Hat JBoss Enterprise Application Platform . Click Downloads . From the Version drop-down list, select 7.4 . Find Red Hat JBoss Enterprise Application Platform 7.4.0 Installer file in the list and click the Download link. 3.2. Running the JBoss EAP graphical installer The graphical installer offers a convenient way for you to install, configure, and uninstall a JBoss EAP instance. Additionally, you can use the graphical installer to access many optional configuration options. Prerequisites Set up an account on the Red Hat Customer Portal . Review the JBoss EAP 7 supported configurations and ensure that your system is supportable. Download the JBoss EAP installation package. Install a supported Java Development Kit (JDK). Procedure Open a terminal and navigate to the directory containing the downloaded JBoss EAP installer file. Run the graphical installer using the following command: Choose the desired language for the installer and click OK . Agree with the prompt for The EULA for RED HAT JBOSS MIDDLEWARE by selecting "I accept the terms of this license agreement", and then click . Select the installation path for JBoss EAP, and click . Select the components to install. Required components are disabled for deselection. Figure 3.1. JBoss EAP installer - Component selection screen Create an administrative user and assign a password. Then click . Review your installation options, and then click Yes . When the installation progress completes, click . Choose a default configuration for your JBoss EAP installation, or choose to perform an advanced configuration with the installer. Note Even if you choose a default configuration, you can still alter your configuration using the JBoss EAP management interfaces at a later time. Select Perform default configuration , or select Perform advanced configuration and select the items to configure, and then click . Figure 3.2. JBoss EAP installer - Configure runtime environment screen The following configuration steps are optional: Configure password vault You can use the Configure Password Vault option to install a password vault in the advanced configuration of the runtime environment. Configure a password vault to store your sensitive passwords in an encrypted keystore, and then click . For more information, see the password vault documentation in the How To Configure Server Security guide. Figure 3.3. JBoss EAP installer - Configure password vault screen SSL Security You can enable SSL Security in the advanced configuration of the runtime environment by specifying the location of the keystore and the password for securing the JBoss EAP management interfaces. a. 
Specify the location of the keystore and the password for securing the JBoss EAP management interfaces. b. When you have specified these values, click . For more information, see the documentation on securing the management interfaces in the How To Configure Server Security guide. Warning Red Hat recommends that SSLv2, SSLv3, and TLSv1.0 be explicitly disabled in favor of TLSv1.1 or TLSv1.2 in all affected packages. LDAP configuration You can enable the LDAP server to be the authentication and authorization authority as follows: a. Select Configure Runtime . b. Select Enable LDAP authentication . c. On the LDAP Configuration screen, complete the required configurations and click . For more information, see the LDAP documentation in How to Configure Identity Management . Figure 3.4. JBoss EAP installer - LDAP configuration screen LDAP security realm configuration You can enable LDAP authentication in the advanced configuration of the runtime environment by using the LDAP connection, which is defined in the step. Enabling LDAP authentication creates a new security realm and this realm becomes associated with the management interfaces. Specify the values for your LDAP Security Realm , and then click . For more information, see the LDAP documentation in How to Configure Identity Management . Figure 3.5. JBoss EAP installer - LDAP security realm configuration screen Security Domain Configuration You can add a security domain in the advanced configuration of the runtime environment by configuring a security domain for the JBoss EAP server instance. Most of the fields are already populated with default values and do not need modification. a. Configure the security domain for your JBoss EAP server instance. b. Click . For more information, see Security Domains in the Security Architecture guide. Figure 3.6. JBoss EAP installer - Security domain configuration screen Java Secure Socket Extension configuration You can add a security domain in the advanced configuration of the runtime environment by configuring the Jave Secure Socket Extension (JSSE) for the security domain defined in the step. a. For the JSSE element, set either a keystore or a truststore. b. Click . Figure 3.7. JBoss EAP installer - Java Secure Socket Extension configuration screen Quickstarts You can choose to install quickstarts in the advanced configuration of the runtime environment by selecting the quickstart installation path, and then clicking . Maven repository setup You can install quickstarts in the advanced configuration of the runtime environment by selecting your Maven repository and its settings.xml file. Figure 3.8. JBoss EAP installer - Maven repository setup screen Socket bindings Choose one of the following options to configure your socket bindings: Configure server port bindings in the advanced configuration settings of the runtime environment by configuring port offsets for all default bindings, or configuring custom port bindings. You might need to determine whether the installation uses the default port bindings. Configure port offsets by choosing the offset type. Configure custom bindings by selecting whether to configure the ports for standalone mode, domain mode, or both. If the host is configured for IPv6 only, select the Enable pure IPv6 configuration check box and the installer makes the required configuration changes. When you have configured your socket binding, click . Figure 3.9. 
JBoss EAP Installer - Socket bindings screen Custom socket bindings for Standalone configurations Configure custom port bindings for standalone mode by setting the ports and system properties for each of the standalone configurations ( standalone , standalone ha , standalone full , standalone full-ha ), and then click . Figure 3.10. JBoss EAP installer - Custom socket bindings for standalone configurations screen Custom socket bindings for domain configurations Configure custom port bindings for domain mode by setting the ports and system properties for the host configuration ( domain host ) and each of the domain profiles ( domain default , domain ha , domain full , domain full-ha ), and then click . Figure 3.11. JBoss EAP installer - Custom socket bindings for domain configurations screen Logging options You can configure logging levels in the advanced configuration settings of the runtime environment as follows: a. Select the desired logging levels for the root logger and the console logger. b. Click . Jakarta server faces setup You can install a Jakarta Server Faces implementation in the advanced configuration settings of the runtime environment, as follows: a. Configure the Jakarta Server Faces options and paths to your Jakarta Server Faces JARs. b. Click . For more information, see Installing a Jakarta Server Faces Implementation in the Configuration Guide . Figure 3.12. JBoss EAPinstaller - Jakarta server faces setup screen JDBC driver setup You can install a JDBC driver in the advanced configuration settings of the runtime environment by installing and setting up a JDBC driver. a. Choose the appropriate driver vendor from the drop-down list. b. Specify the driver's JAR location(s). c. Click . For more information, see the datasource JDBC driver section in the Configuration Guide . Figure 3.13. JBoss EAP installer - JDBC driver setup screen Datasource setup You can install a JDBC driver and install a datasource in the advanced configuration settings of the runtime environment by configuring a datasource. a. Provide a datasource name and configure the other options. b. Click . For more information, see the details of datasource management in the Configuration Guide . Figure 3.14. JBoss EAP installer - Datasource setup screen When the configuration progress completes, click . Select the Create shortcuts in the Start-Menu check box to create shortcuts, and then click . Note Only alphanumeric characters, dash (-), and underscore (_) characters are allowed. On Microsoft Windows, the slash (/) and backslash (\) characters are also allowed. Click Generate installation script and properties file if you want to capture the selected installation options for a future automated installer installation, and then click Done . Installation is now complete. The directory created by the installer is the top-level directory for the server. This is referred to as EAP_HOME . 3.3. Running the JBoss EAP text-based installer You can use the text-based installer to install, configure, and uninstall a JBoss EAP instance. This installer method offers an uncluttered and straightforward experience. Prerequisites Set up an account on the Red Hat Customer Portal . Review the JBoss EAP 7 supported configurations and ensure that your system is supported. Install a supported Java Development Kit (JDK). Download the text-based installer. If you are using Windows, set the JAVA_HOME and PATH environment variables. If you do not have this set up, shortcuts do not work. 
Procedure Open a terminal and navigate to the directory containing the downloaded JBoss EAP installer. Run the text-based installer using the following command: Follow the prompts to install JBoss EAP. The directory created by the installer is the top-level directory for the server. This is referred to as EAP_HOME . Additional resources See Setting up the EAP_HOME variable, in the JBoss EAP Installation Guide . 3.4. Configuring JBoss EAP installer installation as a service on RHEL You can configure the installer installation to run as a service in Red Hat Enterprise Linux (RHEL). Prerequisites Install the installer. Ensure that you have administrator privileges on the server. Procedure Configure the start-up options in the jboss-eap.conf file by opening the jboss-eap.conf in a text editor and set the options for your JBoss EAP installation. Copy the service initialization and configuration files into the system directories: Copy the modified service configuration file to the /etc/default directory. Copy the service startup script to the /etc/init.d directory and give it execute permissions: Add the new jboss-eap-rhel.sh service to the list of automatically started services using the chkconfig service management command: Verify that the service has been installed correctly by using one of the following commands. For Red Hat Enterprise Linux 6: For Red Hat Enterprise Linux 7 and later: The service starts. If the service does not start and you get an error message, check the error logs and make sure that the options in the configuration file are set correctly. Optional: To make the service start automatically when the Red Hat Enterprise Linux server starts, run the following command: Verification To check the permissions of a file, enter the ls -l command in the directory containing the file. To check that the automatic service start is enabled, enter the following command: For more information about controlling the state of services, see Management system services in the JBoss EAP Configuring basic system settings guide . For more information about viewing error logs, see Bootup logging in the JBoss EAP Configuration Guide . 3.5. Configuring JBoss EAP installer installation as a service on Microsoft Windows Server You can install JBoss EAP on Microsoft Windows Server using the installer installation method. This method provides a basic default installation of a server, with configuration files and libraries placed in standard folders. The default installation of the server contains a service.bat script that you can use with Jsvc to stop and start JBoss EAP. Note If you use the set command to set system environment variables in a Windows Server command prompt it does not permanently set the environment variables. You must use either the setx command, or the System interface in the Control Panel . Prerequisites Install the JBoss EAP installer. Ensure that you administrator privileges on the server. Set the JAVA_HOME system environment variable. Ensure that you have an instance of the JBoss EAP server that is not running. Procedure The procedure for configuring JBoss EAP installer installation as a service in Microsoft Windows Server is similar to that of the archive installation method. See Configuring JBoss EAP archive installation as a service on Microsoft Windows Server . 3.6. 
Installing and running the JBoss EAP installer installation by using Jsvc You can use the Apache Java Service (Jsvc) component of the JBoss Core Services collection to run JBoss EAP as a detached service, a daemon, on Red Hat Enterprise Linux (RHEL). Warning Although Jsvc works on RHEL, we strongly recommend that you use the native methods for running JBoss EAP as a service on RHEL. Jsvc is a set of libraries and applications that provides Java applications with the ability to run as a background service. Applications run using Jsvc can perform operations as a privileged user, then switch identity to a non-privileged user. Prerequisites Install the JBoss EAP installer. Ensure that you have administrator privileges on the server. Set the JAVA_HOME system environment variable. Ensure that you have an instance of the JBoss EAP server that is not running. Procedure The procedure for configuring the JBoss EAP installer installation by using Jsvc is similar to that of the archive installation method. For more information, refer to the following sections in the JBoss EAP Installation Guide : Installing and starting JBoss EAP installer installation by using Jsvc Jsvc commands to start or stop JBoss EAP as a standalone server Jsvc commands to start or stop JBoss EAP on a managed domain Optional: Configuring JBoss EAP installer installation as a service on Microsoft Windows Server Additional resources For information about controlling JBoss Core Services, see Configuring the Apache HTTP Server Installation in the Apache HTTP Server Installation Guide . For information about configuring a JBoss EAP archive installation as a service using Jsvc, see Archive installation of JBoss EAP . For information about configuring a JBoss EAP installer installation on a Microsoft Windows server, see Configuring JBoss EAP installer installation as a service on Microsoft Windows Server . 3.7. Using the automated installer installation If you used the installer installation to install JBoss EAP, you can use an installation script generated from a previous installation to automate future installations with the same configuration. Warning The automated installer is not backward compatible. You cannot use an installation script generated from a different version of JBoss EAP with the automated installer. Only use installation scripts generated by the same minor version of JBoss EAP. For example, JBoss EAP 7.4. Prerequisites Use the installer installation to generate an automatic installation script. The automatic installation script is an XML file. Procedure Open a terminal and navigate to the directory containing the downloaded JBoss EAP installer file. Run the following command to install JBoss EAP using the automatic installation script XML file: By default, the installer prompts you to enter any passwords required for the JBoss EAP configuration. You can do an unattended install by pre-setting the passwords for the installation. Note You can store the automatic installation script XML file on a network host, and use HTTP or FTP to point the installer to use it for an installation. For example: Additional resources See Unattended automated installer installation in the JBoss EAP Installation Guide . 3.8. Unattended automated installer installation To do an unattended automated installer installation, you must preset the passwords required for the JBoss EAP installation. When the installation script XML file is generated from an installer installation, an incomplete installation script variables file is also generated.
It has the same file name as the installation script file, but with a .variables suffix. The variables file contains a list of key and password parameters needed for an unattended automated installation. You can provide the required passwords as a completed variables file, or as an argument when running the installer command. 3.9. Providing the password as an argument in the installer command You can edit the .variables file in a text editor and provide a password value for each key. You can then run the installer by using the automatic installation script. The installer detects the variables file automatically if the completed variables file is in the same directory as the installation script XML file. Additionally, you must not have modified variables file name. Prerequisites Use the JBoss EAP installer to generate an automatic installation script. The automatic installation script is an XML file. Procedure Open the .variables file in a text editor and provide a password value for each key. The following example demonstrates setting a password value for a key: Run the installer using the automatic installation script XML file: 3.10. Providing the password as a completed variables file You can use the -variablefile option in the management CLI to specify a path to the variables file. You can then run the installer using the automatic installation script to specify passwords as key or value pairs using the -variables argument. Prerequisites Use the JBoss EAP installer to generate an automatic installation script. The automatic installation script is an XML file. Procedure Specify the path to the variables file using -variablefile : Run the installer using the automatic installation script XML file and specify the required passwords as key/value pairs using the -variables argument, as demonstrated in the following example: Note Check that you have not entered any spaces when specifying the -variables key or value pairs. 3.11. Uninstalling a JBoss EAP installer with the graphical uninstaller If you installed JBoss EAP using the installer, you can uninstall JBoss EAP using the graphical uninstaller. The graphical uninstaller offers a convenient way to uninstall the JBoss EAP installer in a few simple steps. Prerequisites Install the JBoss EAP installer. Ensure that you administrator privileges on the server. Set the JAVA_HOME system environment variable. Ensure that you have an instance of the JBoss EAP server that is not running. Procedure Open a terminal and navigate to EAP_HOME /Uninstaller . Run the graphical uninstaller using the following command: The graphical uninstaller is similar to the following figure. Select the check box if you want to delete the JBoss EAP installation directory. Figure 3.15. JBoss EAP graphical uninstaller Click Uninstall to start the uninstall process. When the uninstall process is finished, click Quit to exit the uninstaller. 3.12. Uninstalling JBoss EAP installer installation with the text uninstaller If you installed JBoss EAP using the installer, you can uninstall JBoss EAP using the text uninstaller. The text uninstaller offers a simpler way to manually uninstall the JBoss EAP installer. Prerequisites Install the JBoss EAP installer. Ensure that you have administrator privileges on the server. Set the JAVA_HOME system environment variable. Ensure that you have an instance of the JBoss EAP server that is not running. Procedure Open a terminal and navigate to EAP_HOME /Uninstaller . 
Run the text-based uninstaller using the following command: Follow the prompts to uninstall JBoss EAP.
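As a quick smoke test after any of the installation methods in this chapter, and before you configure JBoss EAP as a service or uninstall it, you can start the default standalone configuration directly from EAP_HOME and confirm that the server reports a successful start. This is a minimal sketch assuming a default installer-based installation; stop the server with Ctrl+C when you are done.

EAP_HOME/bin/standalone.sh

Watch the console output for a message stating that the server started, then browse to the management console on the management port you selected during installation (9990 by default) and log in with the administrative user created by the installer.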
[ "java -jar jboss-eap-7.4.0-installer.jar", "java -jar jboss-eap-7.4.0-installer.jar -console", "sudo cp EAP_HOME /bin/init.d/jboss-eap.conf /etc/default", "sudo cp EAP_HOME /bin/init.d/jboss-eap-rhel.sh /etc/init.d sudo chmod +x /etc/init.d/jboss-eap-rhel.sh", "sudo chkconfig --add jboss-eap-rhel.sh", "sudo service jboss-eap-rhel.sh start", "sudo service jboss-eap-rhel start", "sudo chkconfig jboss-eap-rhel.sh on", "sudo chkconfig --list jboss-eap-rhel.sh", "java -jar jboss-eap-7.4.0-installer.jar auto.xml", "java -jar jboss-eap-7.4.0-installer.jar http:// network-host.local/auto.xml", "java -jar jboss-eap-7.4.0-installer.jar ftp:// network-host.local/auto.xml", "adminPassword = password#2 vault.keystorepwd = vaultkeystorepw ssl.password = user12345", "java -jar jboss-eap-7.4.0-installer.jar auto.xml Checking for corresponding .variables file Variables file detected: auto.xml.variables [ Starting automated installation ]", "java -jar jboss-eap-7.4.0-installer.jar auto.xml -variablefile auto.xml.variables", "java -jar jboss-eap-7.4.0-installer.jar auto.xml -variables adminPassword= password#2 ,vault.keystorepwd= vaultkeystorepw ,ssl.password= user12345", "java -jar uninstaller.jar", "java -jar uninstaller.jar -console" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/installation_guide/assembly-jboss-eap-installer_default
Cluster Administration
Cluster Administration Red Hat Enterprise Linux 4 Configuring and Managing a Red Hat Cluster Edition 1.0
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_administration/index
Chapter 9. Configuring email notifications
Chapter 9. Configuring email notifications Email notifications are created by Satellite Server periodically or after completion of certain events. The periodic notifications can be sent daily, weekly or monthly. For an overview of available notification types, see Section 9.1, "Email notification types" . Users do not receive any email notifications by default. An administrator can configure users to receive notifications based on criteria such as the type of notification, and frequency. Important Satellite Server does not enable outgoing emails by default, therefore you must review your email configuration. For more information, see Configuring Satellite Server for Outgoing Emails in Installing Satellite Server in a connected network environment . 9.1. Email notification types Satellite can create the following email notifications: Audit summary A summary of all activity audited by Satellite Server. Capsule sync failure A notification sent after Capsule synchronization fails. Compliance policy summary A summary of OpenSCAP policy reports and their results. Content view promote failure A notification sent after content view promotion fails. Content view publish failure A notification sent after content view publication fails. Host built A notification sent after a host is built. Host errata advisory A summary of applicable and installable errata for hosts managed by the user. Promote errata A notification sent only after a content view promotion. It contains a summary of errata applicable and installable to hosts registered to the promoted content view. This allows a user to monitor what updates have been applied to which hosts. Repository sync failure A notification sent after repository synchronization fails. Sync errata A notification sent only after synchronizing a repository. It contains a summary of new errata introduced by the synchronization. For a complete list of email notification types, navigate to Administer > Users in the Satellite web UI, click the Username of the required user, and select the Email Preferences tab. 9.2. Configuring email notification preferences You can configure Satellite to send email messages to individual users registered to Satellite. Satellite sends the email to the email address that has been added to the account, if present. Users can edit the email address by clicking on their name in the top-right of the Satellite web UI and selecting My account . Configure email notifications for a user from the Satellite web UI. Note If you want to send email notifications to a group email address instead of an individual email address, create a user account with the group email address and minimal Satellite permissions, then subscribe the user account to the desired notification types. Prerequisites The user you are configuring to receive email notifications has a role with this permission: view_mail_notifications . Procedure In the Satellite web UI, navigate to Administer > Users . Click the Username of the user you want to edit. On the User tab, verify the value of the Mail field. Email notifications will be sent to the address in this field. On the Email Preferences tab, select Mail Enabled . Select the notifications you want the user to receive using the drop-down menus to the notification types. Note The Audit Summary notification can be filtered by entering the required query in the Mail Query text box. Click Submit . The user will start receiving the notification emails. 9.3. 
Testing email delivery To verify the delivery of emails, send a test email to a user. If the email gets delivered, the settings are correct. Procedure In the Satellite web UI, navigate to Administer > Users . Click on the username. On the Email Preferences tab, click Test email . A test email message is sent immediately to the user's email address. If the email is delivered, the verification is complete. Otherwise, you must perform the following diagnostic steps: Verify the user's email address. Verify Satellite Server's email configuration. Examine firewall and mail server logs. If your Satellite Server uses the Postfix service for email delivery, the test email might be held in the queue. To verify, enter the mailq command to list the current mail queue. If the test email is held in the queue, mailq displays the following message: To fix the problem, start the Postfix service on your Satellite Server: 9.4. Testing email notifications To verify that users are correctly subscribed to notifications, trigger the notifications manually. Procedure To trigger the notifications, execute the following command: Replace My_Frequency with one of the following: daily weekly monthly This triggers all notifications scheduled for the specified frequency for all the subscribed users. If every subscribed user receives the notifications, the verification succeeds. Note Sending manually triggered notifications to individual users is currently not supported. 9.5. Changing email notification settings for a host Satellite can send event notifications for a host to the host's registered owner. You can configure Satellite to send email notifications either to an individual user or a user group. When set to a user group, all group members who are subscribed to the email type receive a message. Receiving email notifications for a host can be useful, but also overwhelming if you are expecting to receive frequent errors, for example, because of a known issue or error you are working around. Procedure In the Satellite web UI, navigate to Hosts > All Hosts , locate the host that you want to view, and click Edit in the Actions column. Go to the Additional Information tab. If the checkbox Include this host within Satellite reporting is checked, then the email notifications are enabled on that host. Optional: Toggle the checkbox to enable or disable the email notifications. Note If you want to receive email notifications, ensure that you have an email address set in your user settings.
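Putting the verification steps above together, the following commands are a minimal sketch run directly on Satellite Server: trigger the daily notifications manually, then confirm that nothing is stuck in the local Postfix queue. The daily frequency is only an example; substitute weekly or monthly as needed.

foreman-rake reports:daily
mailq
systemctl status postfix

If mailq lists held messages or reports that the mail system is down, start the Postfix service as described above and check the mail server logs.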
[ "postqueue: warning: Mail system is down -- accessing queue directly -Queue ID- --Size-- ----Arrival Time---- -Sender/Recipient------- BE68482A783 1922 Thu Oct 3 05:13:36 [email protected]", "systemctl start postfix", "foreman-rake reports:_My_Frequency_" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/administering_red_hat_satellite/configuring_email_notifications_admin
Multicluster global hub
Multicluster global hub Red Hat Advanced Cluster Management for Kubernetes 2.12 Multicluster global hub
[ "kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 storageConfig: registry: imageURL: myregistry.example.com:5000/mirror/oc-mirror-metadata mirror: platform: channels: - name: stable-4.x type: ocp operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.x packages: - name: multicluster-global-hub-operator-rh - name: amq-streams additionalImages: [] helm: {}", "mirror --config=./imageset-config.yaml docker://myregistry.example.com:5000", "patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'", "apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: my-mirror-catalog-source namespace: openshift-marketplace spec: image: myregistry.example.com:5000/mirror/my-operator-index:v4.x sourceType: grpc secrets: - <global-hub-secret>", "-n openshift-marketplace get packagemanifests", "apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: labels: operators.openshift.org/catalog: \"true\" name: global-hub-operator-icsp spec: repositoryDigestMirrors: - mirrors: - myregistry.example.com:5000/multicluster-globalhub source: registry.redhat.io/multicluster-globalhub - mirrors: - myregistry.example.com:5000/openshift4 source: registry.redhat.io/openshift4 - mirrors: - myregistry.example.com:5000/redhat source: registry.redhat.io/redhat", "export USER=<the-registry-user>", "export PASSWORD=<the-registry-password>", "get secret/pull-secret -n openshift-config --template='{{index .data \".dockerconfigjson\" | base64decode}}' > pull_secret.yaml", "registry login --registry=USD{REGISTRY} --auth-basic=\"USDUSER:USDPASSWORD\" --to=pull_secret.yaml", "set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=pull_secret.yaml", "rm pull_secret.yaml", "create secret generic <secret_name> -n <tenant_namespace> --from-file=.dockerconfigjson=<path/to/registry/credentials> --type=kubernetes.io/dockerconfigjson", "secrets link <operator_sa> -n <tenant_namespace> <secret_name> --for=pull", "get pods -n multicluster-global-hub NAME READY STATUS RESTARTS AGE multicluster-global-hub-operator-687584cb7c-fnftj 1/1 Running 0 2m12s", "create secret generic multicluster-global-hub-transport -n multicluster-global-hub --from-literal=bootstrap_server=<kafka-bootstrap-server-address> --from-file=ca.crt=<CA-cert-for-kafka-server> --from-file=client.crt=<Client-cert-for-kafka-server> --from-file=client.key=<Client-key-for-kafka-server>", "create secret generic multicluster-global-hub-storage -n multicluster-global-hub --from-literal=database_uri=<postgresql-uri> --from-literal=database_uri_with_readonlyuser=<postgresql-uri-with-readonlyuser> --from-file=ca.crt=<CA-for-postgres-server>", "get secret multicluster-global-hub-grafana-datasources -n multicluster-global-hub -ojsonpath='{.data.datasources\\.yaml}' | base64 -d", "apiVersion: 1 datasources: - access: proxy isDefault: true name: Global-Hub-DataSource type: postgres url: postgres-primary.multicluster-global-hub.svc:5432 database: hoh user: guest jsonData: sslmode: verify-ca tlsAuth: true tlsAuthWithCACert: true tlsConfigurationMethod: file-content tlsSkipVerify: true queryTimeout: 300s timeInterval: 30s secureJsonData: password: xxxxx tlsCACert: xxxxx", "service: type: LoadBalancer", "get svc postgres-ha -n multicluster-global-hub NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE postgres-ha LoadBalancer 172.30.227.58 xxxx.us-east-1.elb.amazonaws.com 5432:31442/TCP 128m", "get managedclusteraddon 
multicluster-global-hub-controller -n USD<managed_hub_cluster_name>", "create secret generic auto-import-secret --from-file=kubeconfig=./managedClusterKubeconfig -n <Managedhub Namespace>", "get route multicluster-global-hub-grafana -n <the-namespace-of-multicluster-global-hub-instance>", "deleteRules: - orgId: 1 uid: globalhub_suspicious_policy_change - orgId: 1 uid: globalhub_cluster_compliance_status_change_frequently - orgId: 1 uid: globalhub_high_number_of_policy_events - orgId: 1 uid: globalhub_data_retention_job - orgId: 1 uid: globalhub_local_compliance_job", "apiVersion: v1 kind: Secret metadata: name: multicluster-global-hub-custom-grafana-config namespace: multicluster-global-hub type: Opaque stringData: grafana.ini: | [smtp] enabled = true host = smtp.google.com:465 user = <[email protected]> password = <password> ;cert_file = ;key_file = skip_verify = true from_address = <[email protected]> from_name = Grafana ;ehlo_identity = dashboard.example.com 1", "apiVersion: v1 data: alerting.yaml: | contactPoints: - orgId: 1 name: globalhub_policy receivers: - uid: globalhub_policy_alert_email type: email settings: addresses: <[email protected]> singleEmail: false - uid: globalhub_policy_alert_slack type: slack settings: url: <Slack-webhook-URL> title: | {{ template \"globalhub.policy.title\" . }} text: | {{ template \"globalhub.policy.message\" . }} policies: - orgId: 1 receiver: globalhub_policy group_by: ['grafana_folder', 'alertname'] matchers: - grafana_folder = Policy repeat_interval: 1d deleteRules: - orgId: 1 uid: [Alert Rule Uid] muteTimes: - orgId: 1 name: mti_1 time_intervals: - times: - start_time: '06:00' end_time: '23:59' location: 'UTC' weekdays: ['monday:wednesday', 'saturday', 'sunday'] months: ['1:3', 'may:august', 'december'] years: ['2020:2022', '2030'] days_of_month: ['1:5', '-3:-1'] kind: ConfigMap metadata: name: multicluster-global-hub-custom-alerting namespace: multicluster-global-hub", "exec -it multicluster-global-hub-postgres-0 -n multicluster-global-hub -- psql -d hoh", "-- call the func to generate the initial data of '2023-07-06' by inheriting '2023-07-05' CALL history.generate_local_compliance('2024-07-06');", "annotate search search-v2-operator -n open-cluster-management global-search-preview=true", "status: conditions: - lastTransitionTime: '2024-05-31T19:49:37Z' message: None reason: None status: 'True' type: GlobalSearchReady" ]
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.12/html-single/multicluster_global_hub/index
Chapter 2. Instance boot source
Chapter 2. Instance boot source The boot source for an instance can be an image or a bootable volume. The instance disk of an instance that you boot from an image is controlled by the Compute service and deleted when the instance is deleted. The instance disk of an instance that you boot from a volume is controlled by the Block Storage service and is stored remotely. An image contains a bootable operating system. The Image Service (glance) controls image storage and management. You can launch any number of instances from the same base image. Each instance runs from a copy of the base image. Any changes that you make to the instance do not affect the base image. A bootable volume is a block storage volume created from an image that contains a bootable operating system. The instance can use the bootable volume to persist instance data when the instance is deleted. You can use an existing persistent root volume when you launch an instance. You can also create persistent storage when you launch an instance from an image, so that you can save the instance data when the instance is deleted. A new persistent storage volume is created automatically when you create an instance from a volume snapshot. The following diagram shows the instance disks and storage that you can create when you launch an instance. The actual instance disks and storage created depend on the boot source and flavor used.
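To make the difference between the two boot sources concrete, the following openstack CLI commands are a minimal sketch. The flavor, image, volume, and network names are placeholders and assume that these resources already exist in your project.

# Boot from an image: the instance disk is controlled by the Compute service and is deleted with the instance.
openstack server create --flavor m1.small --image rhel9 --network private image-booted-instance

# Boot from a bootable volume: the root disk is a Block Storage volume that can persist after the instance is deleted.
openstack volume create --image rhel9 --size 20 --bootable rhel9-root
openstack server create --flavor m1.small --volume rhel9-root --network private volume-booted-instance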
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/creating_and_managing_instances/con_instance-boot-source_osp
Chapter 28. Knowledge Store REST API for Business Central spaces and projects
Chapter 28. Knowledge Store REST API for Business Central spaces and projects Red Hat Process Automation Manager provides a Knowledge Store REST API that you can use to interact with your projects and spaces in Red Hat Process Automation Manager without using the Business Central user interface. The Knowledge Store is the artifact repository for assets in Red Hat Process Automation Manager. This API support enables you to facilitate and automate maintenance of Business Central projects and spaces. With the Knowledge Store REST API, you can perform the following actions: Retrieve information about all projects and spaces Create, update, or delete projects and spaces Build, deploy, and test projects Retrieve information about Knowledge Store REST API requests, or jobs Knowledge Store REST API requests require the following components: Authentication The Knowledge Store REST API requires HTTP Basic authentication or token-based authentication for the user role rest-all . To view configured user roles for your Red Hat Process Automation Manager distribution, navigate to ~/USDSERVER_HOME/standalone/configuration/application-roles.properties and ~/application-users.properties . To add a user with the rest-all role, navigate to ~/USDSERVER_HOME/bin and run the following command: USD ./bin/jboss-cli.sh --commands="embed-server --std-out=echo,/subsystem=elytron/filesystem-realm=ApplicationRealm:add-identity(identity=<USERNAME>),/subsystem=elytron/filesystem-realm=ApplicationRealm:set-password(identity=<USERNAME>, clear={password='<PASSWORD>'}),/subsystem=elytron/filesystem-realm=ApplicationRealm:add-identity-attribute(identity=<USERNAME>, name=role, value=['rest-all'])" For more information about user roles and Red Hat Process Automation Manager installation options, see Planning a Red Hat Process Automation Manager installation . HTTP headers The Knowledge Store REST API requires the following HTTP headers for API requests: Accept : Data format accepted by your requesting client: application/json (JSON) Content-Type : Data format of your POST or PUT API request data: application/json (JSON) HTTP methods The Knowledge Store REST API supports the following HTTP methods for API requests: GET : Retrieves specified information from a specified resource endpoint POST : Creates or updates a resource PUT : Updates a resource DELETE : Deletes a resource Base URL The base URL for Knowledge Store REST API requests is http://SERVER:PORT/business-central/rest/ , such as http://localhost:8080/business-central/rest/ . Note The REST API base URL for the Knowledge Store and for the Process Automation Manager controller built in to Business Central are the same because both are considered part of Business Central REST services. Endpoints Knowledge Store REST API endpoints, such as /spaces/{spaceName} for a specified space, are the URIs that you append to the Knowledge Store REST API base URL to access the corresponding resource or type of resource in Red Hat Process Automation Manager. Example request URL for /spaces/{spaceName} endpoint http://localhost:8080/business-central/rest/spaces/MySpace Request data HTTP POST requests in the Knowledge Store REST API may require a JSON request body with data to accompany the request. Example POST request URL and JSON request body data http://localhost:8080/business-central/rest/spaces/MySpace/projects { "name": "Employee_Rostering", "groupId": "employeerostering", "version": "1.0.0-SNAPSHOT", "description": "Employee rostering problem optimisation using Planner. 
Assigns employees to shifts based on their skill." } 28.1. Sending requests with the Knowledge Store REST API using a REST client or curl utility The Knowledge Store REST API enables you to interact with your projects and spaces in Red Hat Process Automation Manager without using the Business Central user interface. You can send Knowledge Store REST API requests using any REST client or curl utility. Prerequisites Business Central is installed and running. You have rest-all user role access to Business Central. Procedure Identify the relevant API endpoint to which you want to send a request, such as [GET] /spaces to retrieve spaces in Business Central. In a REST client or curl utility, enter the following components for a GET request to /spaces . Adjust any request details according to your use case. For REST client: Authentication : Enter the user name and password of the Business Central user with the rest-all role. HTTP Headers : Set the following header: Accept : application/json HTTP method : Set to GET . URL : Enter the Knowledge Store REST API base URL and endpoint, such as http://localhost:8080/business-central/rest/spaces . For curl utility: -u : Enter the user name and password of the Business Central user with the rest-all role. -H : Set the following header: Accept : application/json -X : Set to GET . URL : Enter the Knowledge Store REST API base URL and endpoint, such as http://localhost:8080/business-central/rest/spaces . Execute the request and review the KIE Server response. Example server response (JSON): [ { "name": "MySpace", "description": null, "projects": [ { "name": "Employee_Rostering", "spaceName": "MySpace", "groupId": "employeerostering", "version": "1.0.0-SNAPSHOT", "description": "Employee rostering problem optimisation using Planner. Assigns employees to shifts based on their skill.", "publicURIs": [ { "protocol": "git", "uri": "git://localhost:9418/MySpace/example-Employee_Rostering" }, { "protocol": "ssh", "uri": "ssh://localhost:8001/MySpace/example-Employee_Rostering" } ] }, { "name": "Mortgage_Process", "spaceName": "MySpace", "groupId": "mortgage-process", "version": "1.0.0-SNAPSHOT", "description": "Getting started loan approval process in BPMN2, decision table, business rules, and forms.", "publicURIs": [ { "protocol": "git", "uri": "git://localhost:9418/MySpace/example-Mortgage_Process" }, { "protocol": "ssh", "uri": "ssh://localhost:8001/MySpace/example-Mortgage_Process" } ] } ], "owner": "admin", "defaultGroupId": "com.myspace" }, { "name": "MySpace2", "description": null, "projects": [ { "name": "IT_Orders", "spaceName": "MySpace", "groupId": "itorders", "version": "1.0.0-SNAPSHOT", "description": "Case Management IT Orders project", "publicURIs": [ { "protocol": "git", "uri": "git://localhost:9418/MySpace/example-IT_Orders-1" }, { "protocol": "ssh", "uri": "ssh://localhost:8001/MySpace/example-IT_Orders-1" } ] } ], "owner": "admin", "defaultGroupId": "com.myspace" } ] In your REST client or curl utility, send another API request with the following components for a POST request to /spaces/{spaceName}/projects to create a project within a space. Adjust any request details according to your use case. For REST client: Authentication : Enter the user name and password of the Business Central user with the rest-all role. HTTP Headers : Set the following header: Accept : application/json Accept-Language : en-US Content-Type : application/json HTTP method : Set to POST . 
URL : Enter the Knowledge Store REST API base URL and endpoint, such as http://localhost:8080/business-central/rest/spaces/MySpace/projects . Request body : Add a JSON request body with the identification data for the new project: { "name": "Employee_Rostering", "groupId": "employeerostering", "version": "1.0.0-SNAPSHOT", "description": "Employee rostering problem optimisation using Planner. Assigns employees to shifts based on their skill." } For curl utility: -u : Enter the user name and password of the Business Central user with the rest-all role. -H : Set the following headers: Accept : application/json Accept-Language : en-US (If not defined, the default locale from the JVM is reflected) Content-Type : application/json -X : Set to POST . URL : Enter the Knowledge Store REST API base URL and endpoint, such as http://localhost:8080/business-central/rest/spaces/MySpace/projects . -d : Add a JSON request body or file ( @file.json ) with the identification data for the new project: Execute the request and review the KIE Server response. Example server response (JSON): { "jobId": "1541017411591-6", "status": "APPROVED", "spaceName": "MySpace", "projectName": "Employee_Rostering", "projectGroupId": "employeerostering", "projectVersion": "1.0.0-SNAPSHOT", "description": "Employee rostering problem optimisation using Planner. Assigns employees to shifts based on their skill." } If you encounter request errors, review the returned error code messages and adjust your request accordingly. 28.2. Supported Knowledge Store REST API endpoints The Knowledge Store REST API provides endpoints for managing spaces and projects in Red Hat Process Automation Manager and for retrieving information about Knowledge Store REST API requests, or jobs . 28.2.1. Spaces The Knowledge Store REST API supports the following endpoints for managing spaces in Business Central. The Knowledge Store REST API base URL is http://SERVER:PORT/business-central/rest/ . All requests require HTTP Basic authentication or token-based authentication for the rest-all user role. [GET] /spaces Returns all spaces in Business Central. Example server response (JSON) [ { "name": "MySpace", "description": null, "projects": [ { "name": "Employee_Rostering", "spaceName": "MySpace", "groupId": "employeerostering", "version": "1.0.0-SNAPSHOT", "description": "Employee rostering problem optimisation using Planner. 
Assigns employees to shifts based on their skill.", "publicURIs": [ { "protocol": "git", "uri": "git://localhost:9418/MySpace/example-Employee_Rostering" }, { "protocol": "ssh", "uri": "ssh://localhost:8001/MySpace/example-Employee_Rostering" } ] }, { "name": "Mortgage_Process", "spaceName": "MySpace", "groupId": "mortgage-process", "version": "1.0.0-SNAPSHOT", "description": "Getting started loan approval process in BPMN2, decision table, business rules, and forms.", "publicURIs": [ { "protocol": "git", "uri": "git://localhost:9418/MySpace/example-Mortgage_Process" }, { "protocol": "ssh", "uri": "ssh://localhost:8001/MySpace/example-Mortgage_Process" } ] } ], "owner": "admin", "defaultGroupId": "com.myspace" }, { "name": "MySpace2", "description": null, "projects": [ { "name": "IT_Orders", "spaceName": "MySpace", "groupId": "itorders", "version": "1.0.0-SNAPSHOT", "description": "Case Management IT Orders project", "publicURIs": [ { "protocol": "git", "uri": "git://localhost:9418/MySpace/example-IT_Orders-1" }, { "protocol": "ssh", "uri": "ssh://localhost:8001/MySpace/example-IT_Orders-1" } ] } ], "owner": "admin", "defaultGroupId": "com.myspace" } ] [GET] /spaces/{spaceName} Returns information about a specified space. Table 28.1. Request parameters Name Description Type Requirement spaceName Name of the space to be retrieved String Required Example server response (JSON) { "name": "MySpace", "description": null, "projects": [ { "name": "Mortgage_Process", "spaceName": "MySpace", "groupId": "mortgage-process", "version": "1.0.0-SNAPSHOT", "description": "Getting started loan approval process in BPMN2, decision table, business rules, and forms.", "publicURIs": [ { "protocol": "git", "uri": "git://localhost:9418/MySpace/example-Mortgage_Process" }, { "protocol": "ssh", "uri": "ssh://localhost:8001/MySpace/example-Mortgage_Process" } ] }, { "name": "Employee_Rostering", "spaceName": "MySpace", "groupId": "employeerostering", "version": "1.0.0-SNAPSHOT", "description": "Employee rostering problem optimisation using Planner. Assigns employees to shifts based on their skill.", "publicURIs": [ { "protocol": "git", "uri": "git://localhost:9418/MySpace/example-Employee_Rostering" }, { "protocol": "ssh", "uri": "ssh://localhost:8001/MySpace/example-Employee_Rostering" } ] }, { "name": "Evaluation_Process", "spaceName": "MySpace", "groupId": "evaluation", "version": "1.0.0-SNAPSHOT", "description": "Getting started Business Process for evaluating employees", "publicURIs": [ { "protocol": "git", "uri": "git://localhost:9418/MySpace/example-Evaluation_Process" }, { "protocol": "ssh", "uri": "ssh://localhost:8001/MySpace/example-Evaluation_Process" } ] }, { "name": "IT_Orders", "spaceName": "MySpace", "groupId": "itorders", "version": "1.0.0-SNAPSHOT", "description": "Case Management IT Orders project", "publicURIs": [ { "protocol": "git", "uri": "git://localhost:9418/MySpace/example-IT_Orders" }, { "protocol": "ssh", "uri": "ssh://localhost:8001/MySpace/example-IT_Orders" } ] } ], "owner": "admin", "defaultGroupId": "com.myspace" } [POST] /spaces Creates a space in Business Central. Table 28.2. 
Request parameters Name Description Type Requirement body The name , description , owner , defaultGroupId , and any other components of the new space Request body Required Example request body (JSON) { "name": "NewSpace", "description": "My new space.", "owner": "admin", "defaultGroupId": "com.newspace" } Example server response (JSON) { "jobId": "1541016978154-3", "status": "APPROVED", "spaceName": "NewSpace", "owner": "admin", "defaultGroupId": "com.newspace", "description": "My new space." } [PUT] /spaces Updates description , owner , and defaultGroupId of a space in Business Central. Example request body (JSON) { "name": "MySpace", "description": "This is updated description", "owner": "admin", "defaultGroupId": "com.updatedGroupId" } Example server response (JSON) { "jobId": "1592214574454-1", "status": "APPROVED", "spaceName": "MySpace", "owner": "admin", "defaultGroupId": "com.updatedGroupId", "description": "This is updated description" } [DELETE] /spaces/{spaceName} Deletes a specified space from Business Central. Table 28.3. Request parameters Name Description Type Requirement spaceName Name of the space to be deleted String Required Example server response (JSON) { "jobId": "1541127032997-8", "status": "APPROVED", "spaceName": "MySpace", "owner": "admin", "description": "My deleted space.", "repositories": null } 28.2.2. Projects The Knowledge Store REST API supports the following endpoints for managing, building, and deploying projects in Business Central. The Knowledge Store REST API base URL is http://SERVER:PORT/business-central/rest/ . All requests require HTTP Basic authentication or token-based authentication for the rest-all user role. [GET] /spaces/{spaceName}/projects Returns projects in a specified space. Table 28.4. Request parameters Name Description Type Requirement spaceName Name of the space for which you are retrieving projects String Required Example server response (JSON) [ { "name": "Mortgage_Process", "spaceName": "MySpace", "groupId": "mortgage-process", "version": "1.0.0-SNAPSHOT", "description": "Getting started loan approval process in BPMN2, decision table, business rules, and forms.", "publicURIs": [ { "protocol": "git", "uri": "git://localhost:9418/MySpace/example-Mortgage_Process" }, { "protocol": "ssh", "uri": "ssh://localhost:8001/MySpace/example-Mortgage_Process" } ] }, { "name": "Employee_Rostering", "spaceName": "MySpace", "groupId": "employeerostering", "version": "1.0.0-SNAPSHOT", "description": "Employee rostering problem optimisation using Planner. 
Assigns employees to shifts based on their skill.", "publicURIs": [ { "protocol": "git", "uri": "git://localhost:9418/MySpace/example-Employee_Rostering" }, { "protocol": "ssh", "uri": "ssh://localhost:8001/MySpace/example-Employee_Rostering" } ] }, { "name": "Evaluation_Process", "spaceName": "MySpace", "groupId": "evaluation", "version": "1.0.0-SNAPSHOT", "description": "Getting started Business Process for evaluating employees", "publicURIs": [ { "protocol": "git", "uri": "git://localhost:9418/MySpace/example-Evaluation_Process" }, { "protocol": "ssh", "uri": "ssh://localhost:8001/MySpace/example-Evaluation_Process" } ] }, { "name": "IT_Orders", "spaceName": "MySpace", "groupId": "itorders", "version": "1.0.0-SNAPSHOT", "description": "Case Management IT Orders project", "publicURIs": [ { "protocol": "git", "uri": "git://localhost:9418/MySpace/example-IT_Orders" }, { "protocol": "ssh", "uri": "ssh://localhost:8001/MySpace/example-IT_Orders" } ] } ] [GET] /spaces/{spaceName}/projects/{projectName} Returns information about a specified project in a specified space. Table 28.5. Request parameters Name Description Type Requirement spaceName Name of the space where the project is located String Required projectName Name of the project to be retrieved String Required Example server response (JSON) { "name": "Employee_Rostering", "spaceName": "MySpace", "groupId": "employeerostering", "version": "1.0.0-SNAPSHOT", "description": "Employee rostering problem optimisation using Planner. Assigns employees to shifts based on their skill.", "publicURIs": [ { "protocol": "git", "uri": "git://localhost:9418/MySpace/example-Employee_Rostering" }, { "protocol": "ssh", "uri": "ssh://localhost:8001/MySpace/example-Employee_Rostering" } ] } [POST] /spaces/{spaceName}/projects Creates a project in a specified space. Table 28.6. Request parameters Name Description Type Requirement spaceName Name of the space in which the new project will be created String Required body The name , groupId , version , description , and any other components of the new project Request body Required Example request body (JSON) { "name": "Employee_Rostering", "groupId": "employeerostering", "version": "1.0.0-SNAPSHOT", "description": "Employee rostering problem optimisation using Planner. Assigns employees to shifts based on their skill." } Example server response (JSON) { "jobId": "1541017411591-6", "status": "APPROVED", "spaceName": "MySpace", "projectName": "Employee_Rostering", "projectGroupId": "employeerostering", "projectVersion": "1.0.0-SNAPSHOT", "description": "Employee rostering problem optimisation using Planner. Assigns employees to shifts based on their skill." } [DELETE] /spaces/{spaceName}/projects/{projectName} Deletes a specified project from a specified space. Table 28.7. Request parameters Name Description Type Requirement spaceName Name of the space where the project is located String Required projectName Name of the project to be deleted String Required Example server response (JSON) { "jobId": "1541128617727-10", "status": "APPROVED", "projectName": "Employee_Rostering", "spaceName": "MySpace" } [POST] /spaces/{spaceName}/git/clone Clones a project into a specified space from a specified Git address. Table 28.8. 
Request parameters Name Description Type Requirement spaceName Name of the space to which you are cloning a project String Required body The name , description , and Git repository userName , password , and gitURL for the project to be cloned Request body Required Example request body (JSON) { "name": "Employee_Rostering", "description": "Employee rostering problem optimisation using Planner. Assigns employees to shifts based on their skill.", "userName": "baAdmin", "password": "password@1", "gitURL": "git://localhost:9418/MySpace/example-Employee_Rostering" } Example server response (JSON) { "jobId": "1541129488547-13", "status": "APPROVED", "cloneProjectRequest": { "name": "Employee_Rostering", "description": "Employee rostering problem optimisation using Planner. Assigns employees to shifts based on their skill.", "userName": "baAdmin", "password": "password@1", "gitURL": "git://localhost:9418/MySpace/example-Employee_Rostering" }, "spaceName": "MySpace2" } [POST] /spaces/{spaceName}/projects/{projectName}/maven/compile Compiles a specified project in a specified space (equivalent to mvn compile ). Table 28.9. Request parameters Name Description Type Requirement spaceName Name of the space where the project is located String Required projectName Name of the project to be compiled String Required Example server response (JSON) { "jobId": "1541128617727-10", "status": "APPROVED", "projectName": "Employee_Rostering", "spaceName": "MySpace" } [POST] /spaces/{spaceName}/projects/{projectName}/maven/test Tests a specified project in a specified space (equivalent to mvn test ). Table 28.10. Request parameters Name Description Type Requirement spaceName Name of the space where the project is located String Required projectName Name of the project to be tested String Required Example server response (JSON) { "jobId": "1541132591595-19", "status": "APPROVED", "projectName": "Employee_Rostering", "spaceName": "MySpace" } [POST] /spaces/{spaceName}/projects/{projectName}/maven/install Installs a specified project in a specified space (equivalent to mvn install ). Table 28.11. Request parameters Name Description Type Requirement spaceName Name of the space where the project is located String Required projectName Name of the project to be installed String Required Example server response (JSON) { "jobId": "1541132668987-20", "status": "APPROVED", "projectName": "Employee_Rostering", "spaceName": "MySpace" } [POST] /spaces/{spaceName}/projects/{projectName}/maven/deploy Deploys a specified project in a specified space (equivalent to mvn deploy ). Table 28.12. Request parameters Name Description Type Requirement spaceName Name of the space where the project is located String Required projectName Name of the project to be deployed String Required Example server response (JSON) { "jobId": "1541132816435-21", "status": "APPROVED", "projectName": "Employee_Rostering", "spaceName": "MySpace" } 28.2.3. Jobs (API requests) All POST and DELETE requests in the Knowledge Store REST API return a job ID associated with each request, in addition to the returned request details. You can use a job ID to view the request status or delete a sent request. Knowledge Store REST API requests, or jobs , can have the following statuses: Table 28.13. Job statuses (API request statuses) Status Description ACCEPTED The request was accepted and is being processed. BAD_REQUEST The request contained incorrect content and was not accepted. RESOURCE_NOT_EXIST The requested resource (path) does not exist. 
DUPLICATE_RESOURCE The resource already exists. SERVER_ERROR An error occurred in KIE Server. SUCCESS The request finished successfully. FAIL The request failed. APPROVED The request was approved. DENIED The request was denied. GONE The job ID for the request could not be found due to one of the following reasons: The request was explicitly removed. The request finished and has been deleted from a status cache. A request is removed from a status cache after the cache has reached its maximum capacity. The request never existed. The Knowledge Store REST API supports the following endpoints for retrieving or deleting sent API requests. The Knowledge Store REST API base URL is http://SERVER:PORT/business-central/rest/ . All requests require HTTP Basic authentication or token-based authentication for the rest-all user role. [GET] /jobs/{jobId} Returns the status of a specified job (a previously sent API request). Table 28.14. Request parameters Name Description Type Requirement jobId ID of the job to be retrieved (example: 1541010216919-1 ) String Required Example server response (JSON) { "status": "SUCCESS", "jobId": "1541010216919-1", "result": null, "lastModified": 1541010218352, "detailedResult": [ "level:INFO, path:null, text:Build of module 'Mortgage_Process' (requested by system) completed.\n Build: SUCCESSFUL" ] } [DELETE] /jobs/{jobId} Deletes a specified job (a previously sent API request). If the job is not being processed yet, this request removes the job from the job queue. This request does not cancel or stop an ongoing job. Table 28.15. Request parameters Name Description Type Requirement jobId ID of the job to be deleted (example: 1541010216919-1 ) String Required Example server response (JSON) { "status": "GONE", "jobId": "1541010216919-1", "result": null, "lastModified": 1541132054916, "detailedResult": [ "level:INFO, path:null, text:Build of module 'Mortgage_Process' (requested by system) completed.\n Build: SUCCESSFUL" ] } 28.2.4. Branches The Knowledge Store REST API supports the following endpoints for managing branches in Business Central. The Knowledge Store REST API base URL is http://SERVER:PORT/business-central/rest/ . All requests require HTTP Basic authentication or token-based authentication for the rest-all user role. [GET] /spaces/{spaceName}/projects/{projectName}/branches Returns all branches in a specified project and space. Table 28.16. Request parameters Name Description Type Requirement spaceName Name of the space for which you are retrieving projects String Required projectName Name of the project for which you are retrieving branches String Required Example server response (JSON) [ { "name":"master" } ] [POST] /spaces/{spaceName}/projects/{projectName}/branches Adds a specified branch in a specified project and space. Table 28.17. Request parameters Name Description Type Requirement spaceName Name of the space where the project is located String Required projectName Name of the project in which the new branch needs to be created String Required body The newBranchName and baseBranchName of a project Request body Required Example request body (JSON) { "newBranchName": "branch01", "baseBranchName": "master" } Example server response (JSON) { "jobId": "1576175811141-3", "status": "APPROVED", "spaceName": "Space123", "projectName": "ProjABC", "newBranchName": "b1", "baseBranchName": "master", "userIdentifier": "bc" } [DELETE] /spaces/{spaceName}/projects/{projectName}/branches/{branchName} Deletes a specified branch in a specified project and space. 
Table 28.18. Request parameters Name Description Type Requirement spaceName Name of the space where the project is located String Required projectName Name of the project where the branch is located String Required branchName Name of the branch to be deleted String Required Example server response (JSON) { "jobId": "1576175811421-5", "status": "APPROVED", "spaceName": "Space123", "projectName": "ProjABC", "branchName": "b1", "userIdentifier": "bc" } [POST] /spaces/{spaceName}/projects/{projectName}/branches/{branchName}/maven/compile Compiles a specified branch in a specified project and space. If branchName is not specified, then the request applies to the master branch. Table 28.19. Request parameters Name Description Type Requirement spaceName Name of the space where the project is located String Required projectName Name of the project where the branch is located String Required branchName Name of the branch to be compiled String Required Example server response (JSON) { "jobId": "1576175811233-4", "status": "APPROVED", "spaceName": "Space123", "projectName": "ProjABC", "branchName": "b1" } [POST] /spaces/{spaceName}/projects/{projectName}/branches/{branchName}/maven/install Installs a specified branch in a specified project and space. If branchName is not specified, then the request applies to the master branch. Table 28.20. Request parameters Name Description Type Requirement spaceName Name of the space where the project is located String Required projectName Name of the project where the branch is located String Required branchName Name of the branch to be installed String Required Example server response (JSON) { "jobId": "1576175811233-4", "status": "APPROVED", "spaceName": "Space123", "projectName": "ProjABC", "branchName": "b1" } [POST] /spaces/{spaceName}/projects/{projectName}/branches/{branchName}/maven/test Tests a specified branch in a specified project and space. If branchName is not specified, then the request applies to the master branch. Table 28.21. Request parameters Name Description Type Requirement spaceName Name of the space where the project is located String Required projectName Name of the project where the branch is located String Required branchName Name of the branch to be tested String Required Example server response (JSON) { "jobId": "1576175811233-4", "status": "APPROVED", "spaceName": "Space123", "projectName": "ProjABC", "branchName": "b1" } [POST] /spaces/{spaceName}/projects/{projectName}/branches/{branchName}/maven/deploy Deploys a specified branch in a specified project and space. If branchName is not specified, then the request applies to the master branch. Table 28.22. Request parameters Name Description Type Requirement spaceName Name of the space where the project is located String Required projectName Name of the project where the branch is located String Required branchName Name of the branch to be deployed String Required Example server response (JSON) { "jobId": "1576175811233-4", "status": "APPROVED", "spaceName": "Space123", "projectName": "ProjABC", "branchName": "b1" }
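As an end-to-end illustration of the endpoints described in this chapter, the following is a minimal sketch that creates a project in an existing space and then polls the returned job until it reaches a terminal state. It assumes Business Central is running at http://localhost:8080, that the baAdmin user has the rest-all role, and that the jq utility is available for parsing JSON responses; adjust the space name, project data, and terminal statuses to your environment.
#!/bin/bash
# Minimal sketch: create a project through the Knowledge Store REST API and poll the job status.
# Assumptions: Business Central at localhost:8080, rest-all user baAdmin, jq installed.
BASE_URL="http://localhost:8080/business-central/rest"
AUTH="baAdmin:password@1"

# Send the POST request and capture the job ID from the response.
JOB_ID=$(curl -s -u "$AUTH" \
  -H "Accept: application/json" -H "Content-Type: application/json" \
  -X POST "$BASE_URL/spaces/MySpace/projects" \
  -d '{"name": "Employee_Rostering", "groupId": "employeerostering", "version": "1.0.0-SNAPSHOT"}' \
  | jq -r '.jobId')

# Poll [GET] /jobs/{jobId} until the job leaves the queued or in-progress states.
while true; do
  STATUS=$(curl -s -u "$AUTH" -H "Accept: application/json" "$BASE_URL/jobs/$JOB_ID" | jq -r '.status')
  echo "Job $JOB_ID status: $STATUS"
  case "$STATUS" in
    SUCCESS|FAIL|BAD_REQUEST|RESOURCE_NOT_EXIST|DUPLICATE_RESOURCE|SERVER_ERROR|DENIED|GONE) break ;;
  esac
  sleep 2
done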
[ "./bin/jboss-cli.sh --commands=\"embed-server --std-out=echo,/subsystem=elytron/filesystem-realm=ApplicationRealm:add-identity(identity=<USERNAME>),/subsystem=elytron/filesystem-realm=ApplicationRealm:set-password(identity=<USERNAME>, clear={password='<PASSWORD>'}),/subsystem=elytron/filesystem-realm=ApplicationRealm:add-identity-attribute(identity=<USERNAME>, name=role, value=['rest-all'])\"", "{ \"name\": \"Employee_Rostering\", \"groupId\": \"employeerostering\", \"version\": \"1.0.0-SNAPSHOT\", \"description\": \"Employee rostering problem optimisation using Planner. Assigns employees to shifts based on their skill.\" }", "curl -u 'baAdmin:password@1' -H \"Accept: application/json\" -X GET \"http://localhost:8080/business-central/rest/spaces\"", "[ { \"name\": \"MySpace\", \"description\": null, \"projects\": [ { \"name\": \"Employee_Rostering\", \"spaceName\": \"MySpace\", \"groupId\": \"employeerostering\", \"version\": \"1.0.0-SNAPSHOT\", \"description\": \"Employee rostering problem optimisation using Planner. Assigns employees to shifts based on their skill.\", \"publicURIs\": [ { \"protocol\": \"git\", \"uri\": \"git://localhost:9418/MySpace/example-Employee_Rostering\" }, { \"protocol\": \"ssh\", \"uri\": \"ssh://localhost:8001/MySpace/example-Employee_Rostering\" } ] }, { \"name\": \"Mortgage_Process\", \"spaceName\": \"MySpace\", \"groupId\": \"mortgage-process\", \"version\": \"1.0.0-SNAPSHOT\", \"description\": \"Getting started loan approval process in BPMN2, decision table, business rules, and forms.\", \"publicURIs\": [ { \"protocol\": \"git\", \"uri\": \"git://localhost:9418/MySpace/example-Mortgage_Process\" }, { \"protocol\": \"ssh\", \"uri\": \"ssh://localhost:8001/MySpace/example-Mortgage_Process\" } ] } ], \"owner\": \"admin\", \"defaultGroupId\": \"com.myspace\" }, { \"name\": \"MySpace2\", \"description\": null, \"projects\": [ { \"name\": \"IT_Orders\", \"spaceName\": \"MySpace\", \"groupId\": \"itorders\", \"version\": \"1.0.0-SNAPSHOT\", \"description\": \"Case Management IT Orders project\", \"publicURIs\": [ { \"protocol\": \"git\", \"uri\": \"git://localhost:9418/MySpace/example-IT_Orders-1\" }, { \"protocol\": \"ssh\", \"uri\": \"ssh://localhost:8001/MySpace/example-IT_Orders-1\" } ] } ], \"owner\": \"admin\", \"defaultGroupId\": \"com.myspace\" } ]", "{ \"name\": \"Employee_Rostering\", \"groupId\": \"employeerostering\", \"version\": \"1.0.0-SNAPSHOT\", \"description\": \"Employee rostering problem optimisation using Planner. Assigns employees to shifts based on their skill.\" }", "curl -u 'baAdmin:password@1' -H \"Accept: application/json\" -H \"Accept-Language: en-US\" -H \"Content-Type: application/json\" -X POST \"http://localhost:8080/business-central/rest/spaces/MySpace/projects\" -d \"{ \\\"name\\\": \\\"Employee_Rostering\\\", \\\"groupId\\\": \\\"employeerostering\\\", \\\"version\\\": \\\"1.0.0-SNAPSHOT\\\", \\\"description\\\": \\\"Employee rostering problem optimisation using Planner. 
Assigns employees to shifts based on their skill.\\\"}\"", "curl -u 'baAdmin:password@1' -H \"Accept: application/json\" -H \"Accept-Language: en-US\" -H \"Content-Type: application/json\" -X POST \"http://localhost:8080/business-central/rest/spaces/MySpace/projects\" -d @my-project.json", "{ \"jobId\": \"1541017411591-6\", \"status\": \"APPROVED\", \"spaceName\": \"MySpace\", \"projectName\": \"Employee_Rostering\", \"projectGroupId\": \"employeerostering\", \"projectVersion\": \"1.0.0-SNAPSHOT\", \"description\": \"Employee rostering problem optimisation using Planner. Assigns employees to shifts based on their skill.\" }", "[ { \"name\": \"MySpace\", \"description\": null, \"projects\": [ { \"name\": \"Employee_Rostering\", \"spaceName\": \"MySpace\", \"groupId\": \"employeerostering\", \"version\": \"1.0.0-SNAPSHOT\", \"description\": \"Employee rostering problem optimisation using Planner. Assigns employees to shifts based on their skill.\", \"publicURIs\": [ { \"protocol\": \"git\", \"uri\": \"git://localhost:9418/MySpace/example-Employee_Rostering\" }, { \"protocol\": \"ssh\", \"uri\": \"ssh://localhost:8001/MySpace/example-Employee_Rostering\" } ] }, { \"name\": \"Mortgage_Process\", \"spaceName\": \"MySpace\", \"groupId\": \"mortgage-process\", \"version\": \"1.0.0-SNAPSHOT\", \"description\": \"Getting started loan approval process in BPMN2, decision table, business rules, and forms.\", \"publicURIs\": [ { \"protocol\": \"git\", \"uri\": \"git://localhost:9418/MySpace/example-Mortgage_Process\" }, { \"protocol\": \"ssh\", \"uri\": \"ssh://localhost:8001/MySpace/example-Mortgage_Process\" } ] } ], \"owner\": \"admin\", \"defaultGroupId\": \"com.myspace\" }, { \"name\": \"MySpace2\", \"description\": null, \"projects\": [ { \"name\": \"IT_Orders\", \"spaceName\": \"MySpace\", \"groupId\": \"itorders\", \"version\": \"1.0.0-SNAPSHOT\", \"description\": \"Case Management IT Orders project\", \"publicURIs\": [ { \"protocol\": \"git\", \"uri\": \"git://localhost:9418/MySpace/example-IT_Orders-1\" }, { \"protocol\": \"ssh\", \"uri\": \"ssh://localhost:8001/MySpace/example-IT_Orders-1\" } ] } ], \"owner\": \"admin\", \"defaultGroupId\": \"com.myspace\" } ]", "{ \"name\": \"MySpace\", \"description\": null, \"projects\": [ { \"name\": \"Mortgage_Process\", \"spaceName\": \"MySpace\", \"groupId\": \"mortgage-process\", \"version\": \"1.0.0-SNAPSHOT\", \"description\": \"Getting started loan approval process in BPMN2, decision table, business rules, and forms.\", \"publicURIs\": [ { \"protocol\": \"git\", \"uri\": \"git://localhost:9418/MySpace/example-Mortgage_Process\" }, { \"protocol\": \"ssh\", \"uri\": \"ssh://localhost:8001/MySpace/example-Mortgage_Process\" } ] }, { \"name\": \"Employee_Rostering\", \"spaceName\": \"MySpace\", \"groupId\": \"employeerostering\", \"version\": \"1.0.0-SNAPSHOT\", \"description\": \"Employee rostering problem optimisation using Planner. 
Assigns employees to shifts based on their skill.\", \"publicURIs\": [ { \"protocol\": \"git\", \"uri\": \"git://localhost:9418/MySpace/example-Employee_Rostering\" }, { \"protocol\": \"ssh\", \"uri\": \"ssh://localhost:8001/MySpace/example-Employee_Rostering\" } ] }, { \"name\": \"Evaluation_Process\", \"spaceName\": \"MySpace\", \"groupId\": \"evaluation\", \"version\": \"1.0.0-SNAPSHOT\", \"description\": \"Getting started Business Process for evaluating employees\", \"publicURIs\": [ { \"protocol\": \"git\", \"uri\": \"git://localhost:9418/MySpace/example-Evaluation_Process\" }, { \"protocol\": \"ssh\", \"uri\": \"ssh://localhost:8001/MySpace/example-Evaluation_Process\" } ] }, { \"name\": \"IT_Orders\", \"spaceName\": \"MySpace\", \"groupId\": \"itorders\", \"version\": \"1.0.0-SNAPSHOT\", \"description\": \"Case Management IT Orders project\", \"publicURIs\": [ { \"protocol\": \"git\", \"uri\": \"git://localhost:9418/MySpace/example-IT_Orders\" }, { \"protocol\": \"ssh\", \"uri\": \"ssh://localhost:8001/MySpace/example-IT_Orders\" } ] } ], \"owner\": \"admin\", \"defaultGroupId\": \"com.myspace\" }", "{ \"name\": \"NewSpace\", \"description\": \"My new space.\", \"owner\": \"admin\", \"defaultGroupId\": \"com.newspace\" }", "{ \"jobId\": \"1541016978154-3\", \"status\": \"APPROVED\", \"spaceName\": \"NewSpace\", \"owner\": \"admin\", \"defaultGroupId\": \"com.newspace\", \"description\": \"My new space.\" }", "{ \"name\": \"MySpace\", \"description\": \"This is updated description\", \"owner\": \"admin\", \"defaultGroupId\": \"com.updatedGroupId\" }", "{ \"jobId\": \"1592214574454-1\", \"status\": \"APPROVED\", \"spaceName\": \"MySpace\", \"owner\": \"admin\", \"defaultGroupId\": \"com.updatedGroupId\", \"description\": \"This is updated description\" }", "{ \"jobId\": \"1541127032997-8\", \"status\": \"APPROVED\", \"spaceName\": \"MySpace\", \"owner\": \"admin\", \"description\": \"My deleted space.\", \"repositories\": null }", "[ { \"name\": \"Mortgage_Process\", \"spaceName\": \"MySpace\", \"groupId\": \"mortgage-process\", \"version\": \"1.0.0-SNAPSHOT\", \"description\": \"Getting started loan approval process in BPMN2, decision table, business rules, and forms.\", \"publicURIs\": [ { \"protocol\": \"git\", \"uri\": \"git://localhost:9418/MySpace/example-Mortgage_Process\" }, { \"protocol\": \"ssh\", \"uri\": \"ssh://localhost:8001/MySpace/example-Mortgage_Process\" } ] }, { \"name\": \"Employee_Rostering\", \"spaceName\": \"MySpace\", \"groupId\": \"employeerostering\", \"version\": \"1.0.0-SNAPSHOT\", \"description\": \"Employee rostering problem optimisation using Planner. 
Assigns employees to shifts based on their skill.\", \"publicURIs\": [ { \"protocol\": \"git\", \"uri\": \"git://localhost:9418/MySpace/example-Employee_Rostering\" }, { \"protocol\": \"ssh\", \"uri\": \"ssh://localhost:8001/MySpace/example-Employee_Rostering\" } ] }, { \"name\": \"Evaluation_Process\", \"spaceName\": \"MySpace\", \"groupId\": \"evaluation\", \"version\": \"1.0.0-SNAPSHOT\", \"description\": \"Getting started Business Process for evaluating employees\", \"publicURIs\": [ { \"protocol\": \"git\", \"uri\": \"git://localhost:9418/MySpace/example-Evaluation_Process\" }, { \"protocol\": \"ssh\", \"uri\": \"ssh://localhost:8001/MySpace/example-Evaluation_Process\" } ] }, { \"name\": \"IT_Orders\", \"spaceName\": \"MySpace\", \"groupId\": \"itorders\", \"version\": \"1.0.0-SNAPSHOT\", \"description\": \"Case Management IT Orders project\", \"publicURIs\": [ { \"protocol\": \"git\", \"uri\": \"git://localhost:9418/MySpace/example-IT_Orders\" }, { \"protocol\": \"ssh\", \"uri\": \"ssh://localhost:8001/MySpace/example-IT_Orders\" } ] } ]", "{ \"name\": \"Employee_Rostering\", \"spaceName\": \"MySpace\", \"groupId\": \"employeerostering\", \"version\": \"1.0.0-SNAPSHOT\", \"description\": \"Employee rostering problem optimisation using Planner. Assigns employees to shifts based on their skill.\", \"publicURIs\": [ { \"protocol\": \"git\", \"uri\": \"git://localhost:9418/MySpace/example-Employee_Rostering\" }, { \"protocol\": \"ssh\", \"uri\": \"ssh://localhost:8001/MySpace/example-Employee_Rostering\" } ] }", "{ \"name\": \"Employee_Rostering\", \"groupId\": \"employeerostering\", \"version\": \"1.0.0-SNAPSHOT\", \"description\": \"Employee rostering problem optimisation using Planner. Assigns employees to shifts based on their skill.\" }", "{ \"jobId\": \"1541017411591-6\", \"status\": \"APPROVED\", \"spaceName\": \"MySpace\", \"projectName\": \"Employee_Rostering\", \"projectGroupId\": \"employeerostering\", \"projectVersion\": \"1.0.0-SNAPSHOT\", \"description\": \"Employee rostering problem optimisation using Planner. Assigns employees to shifts based on their skill.\" }", "{ \"jobId\": \"1541128617727-10\", \"status\": \"APPROVED\", \"projectName\": \"Employee_Rostering\", \"spaceName\": \"MySpace\" }", "{ \"name\": \"Employee_Rostering\", \"description\": \"Employee rostering problem optimisation using Planner. Assigns employees to shifts based on their skill.\", \"userName\": \"baAdmin\", \"password\": \"password@1\", \"gitURL\": \"git://localhost:9418/MySpace/example-Employee_Rostering\" }", "{ \"jobId\": \"1541129488547-13\", \"status\": \"APPROVED\", \"cloneProjectRequest\": { \"name\": \"Employee_Rostering\", \"description\": \"Employee rostering problem optimisation using Planner. 
Assigns employees to shifts based on their skill.\", \"userName\": \"baAdmin\", \"password\": \"password@1\", \"gitURL\": \"git://localhost:9418/MySpace/example-Employee_Rostering\" }, \"spaceName\": \"MySpace2\" }", "{ \"jobId\": \"1541128617727-10\", \"status\": \"APPROVED\", \"projectName\": \"Employee_Rostering\", \"spaceName\": \"MySpace\" }", "{ \"jobId\": \"1541132591595-19\", \"status\": \"APPROVED\", \"projectName\": \"Employee_Rostering\", \"spaceName\": \"MySpace\" }", "{ \"jobId\": \"1541132668987-20\", \"status\": \"APPROVED\", \"projectName\": \"Employee_Rostering\", \"spaceName\": \"MySpace\" }", "{ \"jobId\": \"1541132816435-21\", \"status\": \"APPROVED\", \"projectName\": \"Employee_Rostering\", \"spaceName\": \"MySpace\" }", "{ \"status\": \"SUCCESS\", \"jobId\": \"1541010216919-1\", \"result\": null, \"lastModified\": 1541010218352, \"detailedResult\": [ \"level:INFO, path:null, text:Build of module 'Mortgage_Process' (requested by system) completed.\\n Build: SUCCESSFUL\" ] }", "{ \"status\": \"GONE\", \"jobId\": \"1541010216919-1\", \"result\": null, \"lastModified\": 1541132054916, \"detailedResult\": [ \"level:INFO, path:null, text:Build of module 'Mortgage_Process' (requested by system) completed.\\n Build: SUCCESSFUL\" ] }", "[ { \"name\":\"master\" } ]", "{ \"newBranchName\": \"branch01\", \"baseBranchName\": \"master\" }", "{ \"jobId\": \"1576175811141-3\", \"status\": \"APPROVED\", \"spaceName\": \"Space123\", \"projectName\": \"ProjABC\", \"newBranchName\": \"b1\", \"baseBranchName\": \"master\", \"userIdentifier\": \"bc\" }", "{ \"jobId\": \"1576175811421-5\", \"status\": \"APPROVED\", \"spaceName\": \"Space123\", \"projectName\": \"ProjABC\", \"branchName\": \"b1\", \"userIdentifier\": \"bc\" }", "{ \"jobId\": \"1576175811233-4\", \"status\": \"APPROVED\", \"spaceName\": \"Space123\", \"projectName\": \"ProjABC\", \"branchName\": \"b1\", }", "{ \"jobId\": \"1576175811233-4\", \"status\": \"APPROVED\", \"spaceName\": \"Space123\", \"projectName\": \"ProjABC\", \"branchName\": \"b1\", }", "{ \"jobId\": \"1576175811233-4\", \"status\": \"APPROVED\", \"spaceName\": \"Space123\", \"projectName\": \"ProjABC\", \"branchName\": \"b1\", }", "{ \"jobId\": \"1576175811233-4\", \"status\": \"APPROVED\", \"spaceName\": \"Space123\", \"projectName\": \"ProjABC\", \"branchName\": \"b1\", }" ]
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/deploying_and_managing_red_hat_process_automation_manager_services/knowledge-store-rest-api-con_kie-apis
Chapter 2. Configuring provisioning resources
Chapter 2. Configuring provisioning resources 2.1. Provisioning contexts A provisioning context is the combination of an organization and location that you specify for Satellite components. The organization and location that a component belongs to sets the ownership and access for that component. Organizations divide Red Hat Satellite components into logical groups based on ownership, purpose, content, security level, and other divisions. You can create and manage multiple organizations through Red Hat Satellite and assign components to each individual organization. This ensures Satellite Server provisions hosts within a certain organization and only uses components that are assigned to that organization. For more information about organizations, see Managing Organizations in Administering Red Hat Satellite . Locations function similar to organizations. The difference is that locations are based on physical or geographical setting. Users can nest locations in a hierarchy. For more information about locations, see Managing Locations in Administering Red Hat Satellite . 2.2. Setting the provisioning context When you set a provisioning context, you define which organization and location to use for provisioning hosts. The organization and location menus are located in the menu bar, on the upper left of the Satellite web UI. If you have not selected an organization and location to use, the menu displays: Any Organization and Any Location . Procedure Click Any Organization and select the organization. Click Any Location and select the location to use. Each user can set their default provisioning context in their account settings. Click the user name in the upper right of the Satellite web UI and select My account to edit your user account settings. CLI procedure When using the CLI, include either --organization or --organization-label and --location or --location-id as an option. For example: This command outputs hosts allocated to My_Organization and My_Location . 2.3. Creating operating systems An operating system is a collection of resources that define how Satellite Server installs a base operating system on a host. Operating system entries combine previously defined resources, such as installation media, partition tables, provisioning templates, and others. Importing operating systems from Red Hat's CDN creates new entries on the Hosts > Provisioning Setup > Operating Systems page. To import operating systems from Red Hat's CDN, enable the Red Hat repositories of the operating systems and synchronize the repositories to Satellite. For more information, see Enabling Red Hat Repositories and Synchronizing Repositories in Managing content . You can also add custom operating systems using the following procedure. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Hosts > Operating systems and click New Operating system. In the Name field, enter a name to represent the operating system entry. In the Major field, enter the number that corresponds to the major version of the operating system. In the Minor field, enter the number that corresponds to the minor version of the operating system. In the Description field, enter a description of the operating system. From the Family list, select the operating system's family. From the Root Password Hash list, select the encoding method for the root password. From the Architectures list, select the architectures that the operating system uses. 
Click the Partition table tab and select the possible partition tables that apply to this operating system. Optional: If you use non-Red Hat content, click the Installation Media tab and select the installation media that apply to this operating system. For more information, see Adding Installation Media to Satellite . Click the Templates tab and select a PXELinux template , a Provisioning template , and a Finish template for your operating system to use. You can select other templates, for example an iPXE template , if you plan to use iPXE for provisioning. Click Submit to save your operating system entry. CLI procedure Create the operating system using the hammer os create command: 2.4. Updating the details of multiple operating systems Use this procedure to update the details of multiple operating systems. This example shows you how to assign each operating system a partition table called Kickstart default , a configuration template called Kickstart default PXELinux , and a provisioning template called Kickstart Default . Procedure On Satellite Server, run the following Bash script:
PARTID=$(hammer --csv partition-table list | grep "Kickstart default," | cut -d, -f1)
PXEID=$(hammer --csv template list --per-page=1000 | grep "Kickstart default PXELinux" | cut -d, -f1)
SATELLITE_ID=$(hammer --csv template list --per-page=1000 | grep "provision" | grep ",Kickstart default" | cut -d, -f1)
for i in $(hammer --no-headers --csv os list | awk -F, {'print $1'})
do
hammer partition-table add-operatingsystem --id="${PARTID}" --operatingsystem-id="${i}"
hammer template add-operatingsystem --id="${PXEID}" --operatingsystem-id="${i}"
hammer os set-default-template --id="${i}" --config-template-id=${PXEID}
hammer os add-config-template --id="${i}" --config-template-id=${SATELLITE_ID}
hammer os set-default-template --id="${i}" --config-template-id=${SATELLITE_ID}
done
Display information about the updated operating system to verify that the operating system is updated correctly: 2.5. Creating architectures An architecture in Satellite represents a logical grouping of hosts and operating systems. Architectures are created by Satellite automatically when hosts check in with Puppet. The x86_64 architecture is already preset in Satellite. Use this procedure to create an architecture in Satellite. Supported architectures Only Intel x86_64 architecture is supported for provisioning using PXE, Discovery, and boot disk. For more information, see the Red Hat Knowledgebase solution Supported architectures and provisioning scenarios in Satellite 6 . Procedure In the Satellite web UI, navigate to Hosts > Provisioning Setup > Architectures . Click Create Architecture . In the Name field, enter a name for the architecture. From the Operating Systems list, select an operating system. If none are available, you can create and assign them under Hosts > Provisioning Setup > Operating Systems . Click Submit . CLI procedure Enter the hammer architecture create command to create an architecture. Specify its name and operating systems that include this architecture: 2.6. Creating hardware models Use this procedure to create a hardware model in Satellite so that you can specify which hardware model a host uses. Procedure In the Satellite web UI, navigate to Hosts > Provisioning Setup > Hardware Models . Click Create Model . In the Name field, enter a name for the hardware model. Optionally, in the Hardware Model and Vendor Class fields, you can enter corresponding information for your system. 
In the Info field, enter a description of the hardware model. Click Submit to save your hardware model. CLI procedure Create a hardware model using the hammer model create command. The only required parameter is --name . Optionally, enter the hardware model with the --hardware-model option, a vendor class with the --vendor-class option, and a description with the --info option: 2.7. Using a synchronized Kickstart repository for a host's operating system Satellite contains a set of synchronized Kickstart repositories that you use to install the provisioned host's operating system. For more information about adding repositories, see Syncing Repositories in Managing content . Use this procedure to set up a Kickstart repository. Prerequisites You must enable both BaseOS and Appstream Kickstart before provisioning. Procedure Add the synchronized Kickstart repository that you want to use to the existing content view, or create a new content view and add the Kickstart repository. For Red Hat Enterprise Linux 8, ensure that you add both Red Hat Enterprise Linux 8 for x86_64 - AppStream Kickstart x86_64 8 and Red Hat Enterprise Linux 8 for x86_64 - BaseOS Kickstart x86_64 8 repositories. If you use a disconnected environment, you must import the Kickstart repositories from a Red Hat Enterprise Linux binary DVD. For more information, see Importing Kickstart Repositories in Managing content . Publish a new version of the content view where the Kickstart repository is added and promote it to a required lifecycle environment. For more information, see Managing content views in Managing content . When you create a host, in the Operating System tab, for Media Selection , select the Synced Content checkbox. To view the Kickstart tree, enter the following command: 2.8. Adding installation media to Satellite Installation media are sources of packages that Satellite Server uses to install a base operating system on a machine from an external repository. You can use this parameter to install third-party content. Red Hat content is delivered through repository syncing instead. You can view installation media by navigating to Hosts > Provisioning Setup > Installation Media . Installation media must be in the format of an operating system installation tree and must be accessible from the machine hosting the installer through an HTTP URL. By default, Satellite includes installation media for some official Linux distributions. Note that some of those installation media are targeted for a specific version of an operating system. For example, CentOS mirror (7.x) must be used for CentOS 7 or earlier, and CentOS mirror (8.x) must be used for CentOS 8 or later. If you want to improve download performance when using installation media to install operating systems on multiple hosts, you must modify the Path of the installation medium to point to the closest mirror or a local copy. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Hosts > Provisioning Setup > Installation Media . Click Create Medium . In the Name field, enter a name to represent the installation media entry. In the Path field, enter the URL that contains the installation tree. You can use the following variables in the path to represent multiple different system architectures and versions: $arch - The system architecture. $version - The operating system version. $major - The operating system major version. $minor - The operating system minor version. 
Example HTTP path: From the Operating system family list, select the distribution or family of the installation medium. For example, CentOS and Fedora are in the Red Hat family. Click the Organizations and Locations tabs to change the provisioning context. Satellite Server adds the installation medium to the set provisioning context. Click Submit to save your installation medium. CLI procedure Create the installation medium using the hammer medium create command: 2.9. Creating partition tables A partition table is a type of template that defines the way Satellite Server configures the disks available on a new host. A partition table uses the same ERB syntax as provisioning templates. Red Hat Satellite contains a set of default partition tables to use, including a Kickstart default . You can also edit partition table entries to configure the preferred partitioning scheme, or create a partition table entry and add it to the operating system entry. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Hosts > Templates > Partition Tables . Click Create Partition Table . In the Name field, enter a name for the partition table. Select the Default checkbox if you want to set the template to automatically associate with new organizations or locations. Select the Snippet checkbox if you want to identify the template as a reusable snippet for other partition tables. From the Operating System Family list, select the distribution or family of the partitioning layout. For example, Red Hat Enterprise Linux, CentOS, and Fedora are in the Red Hat family. In the Template editor field, enter the layout for the disk partition. The format of the layout must match that for the intended operating system. For example, Red Hat Enterprise Linux requires a layout that matches a Kickstart file, such as: For more information, see Section 2.11, "Dynamic partition example" . You can also use the file browser in the template editor to import the layout from a file. In the Audit Comment field, add a summary of changes to the partition layout. Click the Organizations and Locations tabs to add any other provisioning contexts that you want to associate with the partition table. Satellite adds the partition table to the current provisioning context. Click Submit to save your partition table. CLI procedure Create a plain text file, such as ~/My_Partition_Table , that contains the partition layout. The format of the layout must match that for the intended operating system. For example, Red Hat Enterprise Linux requires a layout that matches a Kickstart file, such as: For more information, see Section 2.11, "Dynamic partition example" . Create the partition table using the hammer partition-table create command: 2.10. Associating partition tables with disk encryption Satellite contains partition tables that encrypt the disk of your host by using Linux Unified Key Setup (LUKS) during host provisioning. Encrypted disks on hosts protect data at rest. Optionally, you can also bind the disk to a Tang server through Clevis for decryption during boot. Associate the partition table with your operating system entry. Then, you assign the partition table to your host group or select it manually during provisioning. Prerequisites Your host has access to the AppStream repository to install clevis during provisioning. Procedure In the Satellite web UI, navigate to Hosts > Provisioning Setup > Operating Systems . Select your Red Hat Enterprise Linux entry. 
On the Partition Table tab, associate Kickstart default encrypted with your operating system entry. Create a host group that uses the Kickstart default encrypted partition table. For more information, see Creating a host group in Managing hosts . Decrypt the disk of your host during boot time by using one of the following options: LUKS encryption: Add the host parameter disk_enc_passphrase as type string and your cleartext passphrase of the LUKS container as the value. Clevis and Tang: Add the host parameter disk_enc_tang_servers as type array and your list of Tang servers (example: ["1.2.3.4"] or ["server.example.com", "5.6.7.8"] ). If you set disk_enc_tang_servers , do not set disk_enc_passphrase because the passphrase slot is removed from the LUKS container after provisioning. 2.11. Dynamic partition example Using an Anaconda Kickstart template, the following section instructs Anaconda to erase the whole disk, automatically partition, enlarge one partition to maximum size, and then proceed to the sequence of events in the provisioning process:
zerombr
clearpart --all --initlabel
autopart <%= host_param('autopart_options') %>
Dynamic partitioning is executed by the installation program. Therefore, you can write your own rules to specify how you want to partition disks according to runtime information from the node, for example, disk sizes, number of drives, vendor, or manufacturer. If you want to provision servers and use dynamic partitioning, add the following example as a template. When the #Dynamic entry is included, the content of the template loads into a %pre shell scriptlet and creates a /tmp/diskpart.cfg that is then included into the Kickstart partitioning section.
#Dynamic (do not remove this line)
MEMORY=$((`grep MemTotal: /proc/meminfo | sed 's/^MemTotal: *//'|sed 's/ .*//'` / 1024))
if [ "$MEMORY" -lt 2048 ]; then
SWAP_MEMORY=$(($MEMORY * 2))
elif [ "$MEMORY" -lt 8192 ]; then
SWAP_MEMORY=$MEMORY
elif [ "$MEMORY" -lt 65536 ]; then
SWAP_MEMORY=$(($MEMORY / 2))
else
SWAP_MEMORY=32768
fi
cat <<EOF > /tmp/diskpart.cfg
zerombr
clearpart --all --initlabel
part /boot --fstype ext4 --size 200 --asprimary
part swap --size "$SWAP_MEMORY"
part / --fstype ext4 --size 1024 --grow
EOF
2.12. Provisioning templates A provisioning template defines the way Satellite Server installs an operating system on a host. Red Hat Satellite includes many template examples. In the Satellite web UI, navigate to Hosts > Templates > Provisioning Templates to view them. You can create a template or clone a template and edit the clone. For help with templates, navigate to Hosts > Templates > Provisioning Templates > Create Template > Help . Templates supported by Red Hat are indicated by a Red Hat icon. To hide unsupported templates, in the Satellite web UI navigate to Administer > Settings . On the Provisioning tab, set the value of Show unsupported provisioning templates to false and click Submit . You can also filter for supported templates by using the search query "supported = true". If you clone a supported template, the cloned template will be unsupported. Templates accept the Embedded Ruby (ERB) syntax. For more information, see Template Writing Reference in Managing hosts . You can download provisioning templates. Before you can download the template, you must create a debug certificate. For more information, see Creating an Organization Debug Certificate in Administering Red Hat Satellite . 
You can synchronize templates between Satellite Server and a Git repository or a local directory. For more information, see Synchronizing Templates Repositories in Managing hosts . To view the history of changes applied to a template, navigate to Hosts > Templates > Provisioning Templates , select one of the templates, and click History . Click Revert to override the content with that version. You can also revert to an earlier change. Click Show Diff to see information about a specific change: The Template Diff tab displays changes in the body of a provisioning template. The Details tab displays changes in the template description. The History tab displays the user who made a change to the template and the date of the change. 2.13. Kinds of provisioning templates There are various kinds of provisioning templates: Provision The main template for the provisioning process. For example, a Kickstart template. For more information about Kickstart syntax and commands, see the following resources: Automated installation workflow in Automatically installing RHEL 9 Automated installation workflow in Automatically installing RHEL 8 Kickstart Syntax Reference in the Red Hat Enterprise Linux 7 Installation Guide PXELinux, PXEGrub, PXEGrub2 PXE-based templates that deploy to the template Capsule associated with a subnet to ensure that the host uses the installer with the correct kernel options. For BIOS provisioning, select PXELinux template. For UEFI provisioning, select PXEGrub2 . Finish Post-configuration scripts to execute using an SSH connection when the main provisioning process completes. You can use Finish templates only for image-based provisioning in virtual or cloud environments that do not support user_data. Do not confuse an image with a foreman discovery ISO, which is sometimes called a Foreman discovery image. An image in this context is an install image in a virtualized environment for easy deployment. When a finish script successfully exits with the return code 0 , Red Hat Satellite treats the code as a success and the host exits the build mode. Note that there are a few finish scripts with a build mode that uses a callback HTTP call. These scripts are not used for image-based provisioning, but for post configuration of operating-system installations such as Debian, Ubuntu, and BSD. Red Hat does not support provisioning of operating systems other than Red Hat Enterprise Linux. user_data Post-configuration scripts for providers that accept custom data, also known as seed data. You can use the user_data template to provision virtual machines in cloud or virtualized environments only. This template does not require Satellite to be able to reach the host; the cloud or virtualization platform is responsible for delivering the data to the image. Ensure that the image that you want to provision has the software to read the data installed and set to start during boot. For example, cloud-init , which expects YAML input, or ignition , which expects JSON input. cloud_init Some environments, such as VMware, either do not support custom data or have their own data format that limits what can be done during customization. In this case, you can configure a cloud-init client with the foreman plugin, which attempts to download the template directly from Satellite over HTTP or HTTPS. This technique can be used in any environment, preferably virtualized. 
Ensure that you meet the following requirements to use the cloud_init template: Ensure that the image that you want to provision has the software to read the data installed and set to start during boot. The provisioned host must be able to reach Satellite from the IP address that matches the host's provisioning interface IP. Note that cloud-init does not work behind NAT. Bootdisk Templates for PXE-less boot methods. Kernel Execution (kexec) Kernel execution templates for PXE-less boot methods. Note Kernel Execution is a Technology Preview feature. Technology Preview features are not fully supported under Red Hat Subscription Service Level Agreements (SLAs), may not be functionally complete, and are not intended for production use. However, these features provide early access to upcoming product innovations, enabling customers to test functionality and provide feedback during the development process. Script An arbitrary script not used by default but useful for custom tasks. ZTP Zero Touch Provisioning templates. POAP PowerOn Auto Provisioning templates. iPXE Templates for iPXE or gPXE environments to use instead of PXELinux. 2.14. Creating provisioning templates A provisioning template defines the way Satellite Server installs an operating system on a host. Use this procedure to create a new provisioning template. Procedure In the Satellite web UI, navigate to Hosts > Templates > Provisioning Templates and click Create Template . In the Name field, enter a name for the provisioning template. Fill in the rest of the fields as required. The Help tab provides information about the template syntax and details the available functions, variables, and methods that can be called on different types of objects within the template. CLI procedure Before you create a template with the CLI, create a plain text file that contains the template. This example uses the ~/my-template file. Create the template using the hammer template create command and specify the type with the --type option: 2.15. Cloning provisioning templates A provisioning template defines the way Satellite Server installs an operating system on a host. Use this procedure to clone a template and add your updates to the clone. Procedure In the Satellite web UI, navigate to Hosts > Templates > Provisioning Templates . Find the template that you want to use. Click Clone to duplicate the template. In the Name field, enter a name for the provisioning template. Select the Default checkbox to set the template to associate automatically with new organizations or locations. In the Template editor field, enter the body of the provisioning template. You can also use the Template file browser to upload a template file. In the Audit Comment field, enter a summary of changes to the provisioning template for auditing purposes. Click the Type tab and if your template is a snippet, select the Snippet checkbox. A snippet is not a standalone provisioning template, but a part of a provisioning template that can be inserted into other provisioning templates. From the Type list, select the type of the template. For example, Provisioning template . Click the Association tab and from the Applicable Operating Systems list, select the names of the operating systems that you want to associate with the provisioning template. Optionally, click Add combination and select a host group from the Host Group list or an environment from the Environment list to associate the provisioning template with host groups and environments.
Click the Organizations and Locations tabs to add any additional contexts to the template. Click Submit to save your provisioning template. 2.16. Creating custom provisioning snippets You can execute custom code before and/or after the host provisioning process. Prerequisites Check your provisioning template to ensure that it supports the custom snippets you want to use. You can view all provisioning templates under Hosts > Templates > Provisioning Templates . Procedure In the Satellite web UI, navigate to Hosts > Templates > Provisioning Templates and click Create Template . In the Name field, enter a name for your custom provisioning snippet. The name must start with the name of a provisioning template that supports including custom provisioning snippets: Append custom pre to the name of a provisioning template to run code before provisioning a host. Append custom post to the name of a provisioning template to run code after provisioning a host. On the Type tab, select Snippet . Click Submit to create your custom provisioning snippet. CLI procedure Create a plain text file that contains your custom snippet. Create the template using hammer : 2.17. Custom provisioning snippet example for Red Hat Enterprise Linux You can use Custom Post snippets to call external APIs from within the provisioning template directly after provisioning a host. Kickstart default finish custom post Example for Red Hat Enterprise Linux 2.18. Associating templates with operating systems You can associate templates with operating systems in Satellite. The following example adds a provisioning template to an operating system entry. Procedure In the Satellite web UI, navigate to Hosts > Templates > Provisioning Templates . Select a provisioning template. On the Association tab, select all applicable operating systems. Click Submit to save your changes. CLI procedure Optional: View all templates: Optional: View all operating systems: Associate a template with an operating system: 2.19. Creating compute profiles You can use compute profiles to predefine virtual machine hardware details such as CPUs, memory, and storage. To use the CLI instead of the Satellite web UI, see the CLI procedure . A default installation of Red Hat Satellite contains three predefined profiles: 1-Small 2-Medium 3-Large You can apply compute profiles to all supported compute resources: Section 1.3, "Supported cloud providers" Section 1.4, "Supported virtualization infrastructures" Procedure In the Satellite web UI, navigate to Infrastructure > Compute Profiles and click Create Compute Profile . In the Name field, enter a name for the profile. Click Submit . A new window opens with the name of the compute profile. In the new window, click the name of each compute resource and edit the attributes you want to set for this compute profile. CLI procedure Create a new compute profile: Set attributes for the compute profile: Optional: To update the attributes of a compute profile, specify the attributes you want to change. For example, to change the number of CPUs and memory size: Optional: To change the name of the compute profile, use the --new-name attribute: Additional resources For more information about creating compute profiles by using Hammer, enter hammer compute-profile --help . 2.20. Setting a default encrypted root password for hosts If you do not want to set a plain text default root password for the hosts that you provision, you can use a default encrypted password. 
The default root password can be inherited by a host group and consequently by hosts in that group. If you change the password and reprovision the hosts in the group that inherits the password, the password will be overwritten on the hosts. Procedure Generate an encrypted password: Copy the password for later use. In the Satellite web UI, navigate to Administer > Settings . On the Settings page, select the Provisioning tab. In the Name column, navigate to Root password , and click Click to edit . Paste the encrypted password, and click Save . 2.21. Using noVNC to access virtual machines You can use your browser to access the VNC console of VMs created by Satellite. Satellite supports using noVNC on the following virtualization platforms: VMware Libvirt Red Hat Virtualization Prerequisites You must have a virtual machine created by Satellite. For existing virtual machines, ensure that the Display type in the Compute Resource settings is VNC . You must import the Katello root CA certificate into your browser. Adding a security exception in the browser is not enough for using noVNC. For more information, see Installing the Katello root CA certificate in Configuring authentication for Red Hat Satellite users . Procedure On your Satellite Server, configure the firewall to allow VNC service on ports 5900 to 5930. In the Satellite web UI, navigate to Infrastructure > Compute Resources and select the name of a compute resource. In the Virtual Machines tab, select the name of your virtual machine. Ensure the machine is powered on and then select Console . 2.22. Removing a virtual machine upon host deletion By default, when you delete a host provisioned by Satellite, Satellite does not remove the actual VM on the compute resource. You can configure Satellite to remove the VM when deleting the host entry on Satellite. Note If you do not remove the associated VM and attempt to create a new VM with the same FQDN later, it will fail because that VM already exists in the compute resource. You can still re-register the existing VM to Satellite. To use the CLI instead of the Satellite web UI, see the CLI procedure . Prerequisites Your Satellite account has a role that grants the view_settings and edit_settings permissions. Procedure In the Satellite web UI, navigate to Administer > Settings > Provisioning . Change the value of the Destroy associated VM on host delete setting to Yes . CLI procedure Configure Satellite to remove a VM upon host deletion by using Hammer: Next steps You can delete a host and Satellite removes its associated VM in the compute resource.
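For the procedure in Section 2.20, the documented way to generate the hash is the python3 crypt call shown in the command listing below. As a hedged alternative sketch, an OpenSSL build that supports the -6 option (OpenSSL 1.1.1 or later) can also produce a SHA-512 crypt hash suitable for the Root password setting; treat the availability of this option on your system as an assumption, not part of the official procedure.
# Prompts for a password on the terminal and prints a SHA-512 crypt hash
openssl passwd -6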
[ "hammer host list --organization \" My_Organization \" --location \" My_Location \"", "hammer os create --architectures \"x86_64\" --description \" My_Operating_System \" --family \"Redhat\" --major 8 --media \"Red Hat\" --minor 8 --name \"Red Hat Enterprise Linux\" --partition-tables \" My_Partition_Table \" --provisioning-templates \" My_Provisioning_Template \"", "PARTID=USD(hammer --csv partition-table list | grep \"Kickstart default,\" | cut -d, -f1) PXEID=USD(hammer --csv template list --per-page=1000 | grep \"Kickstart default PXELinux\" | cut -d, -f1) SATELLITE_ID=USD(hammer --csv template list --per-page=1000 | grep \"provision\" | grep \",Kickstart default\" | cut -d, -f1) for i in USD(hammer --no-headers --csv os list | awk -F, {'print USD1'}) do hammer partition-table add-operatingsystem --id=\"USD{PARTID}\" --operatingsystem-id=\"USD{i}\" hammer template add-operatingsystem --id=\"USD{PXEID}\" --operatingsystem-id=\"USD{i}\" hammer os set-default-template --id=\"USD{i}\" --config-template-id=USD{PXEID} hammer os add-config-template --id=\"USD{i}\" --config-template-id=USD{SATELLITE_ID} hammer os set-default-template --id=\"USD{i}\" --config-template-id=USD{SATELLITE_ID} done", "hammer os info --id 1", "hammer architecture create --name \" My_Architecture \" --operatingsystems \" My_Operating_System \"", "hammer model create --hardware-model \" My_Hardware_Model \" --info \" My_Description \" --name \" My_Hardware_Model_Name \" --vendor-class \" My_Vendor_Class \"", "hammer medium list --organization \" My_Organization \"", "http://download.example.com/centos/USDversion/Server/USDarch/os/", "hammer medium create --locations \" My_Location \" --name \" My_Operating_System \" --organizations \" My_Organization \" --os-family \"Redhat\" --path \"http://download.example.com/centos/USDversion/Server/USDarch/os/\"", "zerombr clearpart --all --initlabel autopart", "zerombr clearpart --all --initlabel autopart", "hammer partition-table create --file \" ~/My_Partition_Table \" --locations \" My_Location \" --name \" My_Partition_Table \" --organizations \" My_Organization \" --os-family \"Redhat\" --snippet false", "zerombr clearpart --all --initlabel autopart <%= host_param('autopart_options') %>", "#Dynamic (do not remove this line) MEMORY=USD((`grep MemTotal: /proc/meminfo | sed 's/^MemTotal: *//'|sed 's/ .*//'` / 1024)) if [ \"USDMEMORY\" -lt 2048 ]; then SWAP_MEMORY=USD((USDMEMORY * 2)) elif [ \"USDMEMORY\" -lt 8192 ]; then SWAP_MEMORY=USDMEMORY elif [ \"USDMEMORY\" -lt 65536 ]; then SWAP_MEMORY=USD((USDMEMORY / 2)) else SWAP_MEMORY=32768 fi cat <<EOF > /tmp/diskpart.cfg zerombr clearpart --all --initlabel part /boot --fstype ext4 --size 200 --asprimary part swap --size \"USDSWAP_MEMORY\" part / --fstype ext4 --size 1024 --grow EOF", "hammer template create --file ~/my-template --locations \" My_Location \" --name \" My_Provisioning_Template \" --organizations \" My_Organization \" --type provision", "hammer template create --file \" /path/to/My_Snippet \" --locations \" My_Location \" --name \" My_Template_Name_custom_pre\" \\ --organizations \"_My_Organization \" --type snippet", "echo \"Calling API to report successful host deployment\" install -y curl ca-certificates curl -X POST -H \"Content-Type: application/json\" -d '{\"name\": \"<%= @host.name %>\", \"operating_system\": \"<%= @host.operatingsystem.name %>\", \"status\": \"provisioned\",}' \"https://api.example.com/\"", "hammer template list", "hammer os list", "hammer template add-operatingsystem --id My_Template_ID 
--operatingsystem-id My_Operating_System_ID", "hammer compute-profile create --name \" My_Compute_Profile \"", "hammer compute-profile values create --compute-attributes \" flavor=m1.small,cpus=2,memory=4GB,cpu_mode=default --compute-resource \" My_Compute_Resource \" --compute-profile \" My_Compute_Profile \" --volume size= 40GB", "hammer compute-profile values update --compute-resource \" My_Compute_Resource \" --compute-profile \" My_Compute_Profile \" --attributes \" cpus=2,memory=4GB \" --interface \" type=network,bridge=br1,index=1 \" --volume \"size= 40GB \"", "hammer compute-profile update --name \" My_Compute_Profile \" --new-name \" My_New_Compute_Profile \"", "python3 -c 'import crypt,getpass;pw=getpass.getpass(); print(crypt.crypt(pw)) if (pw==getpass.getpass(\"Confirm: \")) else exit()'", "firewall-cmd --add-port=5900-5930/tcp firewall-cmd --add-port=5900-5930/tcp --permanent", "hammer settings set --name destroy_vm_on_host_delete --value true --location \" My_Location \" --organization \" My_Organization \"" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/provisioning_hosts/configuring_provisioning_resources_provisioning
Chapter 8. Using JSON for complex parameters
Chapter 8. Using JSON for complex parameters JSON is the preferred way to describe complex parameters. An example of JSON formatted content appears below:
[ "hammer compute-profile values create --compute-profile-id 22 --compute-resource-id 1 --compute-attributes= '{ \"cpus\": 2, \"corespersocket\": 2, \"memory_mb\": 4096, \"firmware\": \"efi\", \"resource_pool\": \"Resources\", \"cluster\": \"Example_Cluster\", \"guest_id\": \"rhel8\", \"path\": \"/Datacenters/EXAMPLE/vm/\", \"hardware_version\": \"Default\", \"memoryHotAddEnabled\": 0, \"cpuHotAddEnabled\": 0, \"add_cdrom\": 0, \"boot_order\": [ \"disk\", \"network\" ], \"scsi_controllers\":[ { \"type\": \"ParaVirtualSCSIController\", \"key\":1000 }, { \"type\": \"ParaVirtualSCSIController\", \"key\":1001 } ] }'" ]
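When the attribute JSON grows long, it can be easier to keep it in a shell variable and pass the variable to the same option. The following sketch reuses only the options shown in the example above; the attribute values are illustrative assumptions, not recommended settings.
# Keep the JSON readable in a variable, then pass it to --compute-attributes
ATTRS='{ "cpus": 2, "memory_mb": 4096, "firmware": "efi" }'
hammer compute-profile values create --compute-profile-id 22 --compute-resource-id 1 --compute-attributes="$ATTRS"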
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/using_the_hammer_cli_tool/using-json-for-complex-parameters
Chapter 59. Defining data tables for guided rule templates
Chapter 59. Defining data tables for guided rule templates After you create a guided rule template and add template keys for field values, a data table is displayed in the Data table of the guided rule templates designer. Each column in the data table corresponds to a template key that you added in the guided rule template. Use this table to define values for each template key row by row. Each row of values that you define in the data table for that template results in a rule. Procedure In the guided rule templates designer, click the Data tab to view the data table. Each column in the data table corresponds to a template key that you added in the guided rule template. Note If you did not add any template keys to the rule template, then this data table does not appear and the template does not function as a genuine template but essentially as an individual guided rule. For this reason, template keys are fundamental in creating guided rule templates. Click Add row and define the data values for each template key column to generate that rule (row). Continue adding rows and defining data values for each rule that will be generated. You can click Add row for each new row, or click the plus icon ( ) or minus icon to add or remove rows. Figure 59.1. Sample data table for a guided rule template To view the DRL code, click the Source tab in the guided rule templates designer. Example: As a visual aid, click the grid icon in the upper-left corner of the data table to toggle cell merging on and off, if needed. Cells in the same column with identical values are merged into a single cell. Figure 59.2. Merge cells in a data table You can then use the expand/collapse icon [+/-] in the upper-left corner of each newly merged cell to collapse the rows corresponding to the merged cell, or to re-expand the collapsed rows. Figure 59.3. Collapse merged cells After you define the template key data for all rules and adjust the table display as needed, click Validate in the upper-right toolbar of the guided rule templates designer to validate the guided rule template. If the rule template validation fails, address any problems described in the error message, review all components in the rule template and data defined in the data table, and try again to validate the rule template until the rule template passes. Click Save in the guided rule templates designer to save your work.
[ "rule \"PaymentRules_6\" when Customer( internetService == false , phoneService == false , TVService == true ) then RecurringPayment fact0 = new RecurringPayment(); fact0.setAmount( 5 ); insertLogical( fact0 ); end rule \"PaymentRules_5\" when Customer( internetService == false , phoneService == true , TVService == false ) then RecurringPayment fact0 = new RecurringPayment(); fact0.setAmount( 5 ); insertLogical( fact0 ); end //Other rules omitted for brevity." ]
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/developing_decision_services_in_red_hat_process_automation_manager/guided-rule-templates-tables-proc
Chapter 37. Producer and Consumer Templates
Chapter 37. Producer and Consumer Templates Abstract The producer and consumer templates in Apache Camel are modelled after a feature of the Spring container API, whereby access to a resource is provided through a simplified, easy-to-use API known as a template . In the case of Apache Camel, the producer template and consumer template provide simplified interfaces for sending messages to and receiving messages from producer endpoints and consumer endpoints. 37.1. Using the Producer Template 37.1.1. Introduction to the Producer Template Overview The producer template supports a variety of different approaches to invoking producer endpoints. There are methods that support different formats for the request message (as an Exchange object, as a message body, as a message body with a single header setting, and so on) and there are methods to support both the synchronous and the asynchronous style of invocation. Overall, producer template methods can be grouped into the following categories: Synchronous invocation Synchronous invocation with a processor Asynchronous invocation Asynchronous invocation with a callback Alternatively, see Section 37.2, "Using Fluent Producer Templates" . Synchronous invocation The methods for invoking endpoints synchronously have names of the form send Suffix () and request Suffix () . For example, the methods for invoking an endpoint using either the default message exchange pattern (MEP) or an explicitly specified MEP are named send() , sendBody() , and sendBodyAndHeader() (where these methods respectively send an Exchange object, a message body, or a message body and header value). If you want to force the MEP to be InOut (request/reply semantics), you can call the request() , requestBody() , and requestBodyAndHeader() methods instead. The following example shows how to create a ProducerTemplate instance and use it to send a message body to the activemq:MyQueue endpoint. The example also shows how to send a message body and header value using sendBodyAndHeader() . Synchronous invocation with a processor A special case of synchronous invocation is where you provide the send() method with a Processor argument instead of an Exchange argument. In this case, the producer template implicitly asks the specified endpoint to create an Exchange instance (typically, but not always having the InOnly MEP by default). This default exchange is then passed to the processor, which initializes the contents of the exchange object. The following example shows how to send an exchange initialized by the MyProcessor processor to the activemq:MyQueue endpoint. The MyProcessor class is implemented as shown in the following example. In addition to setting the In message body (as shown here), you could also initialize message header and exchange properties. Asynchronous invocation The methods for invoking endpoints asynchronously have names of the form asyncSend Suffix () and asyncRequest Suffix () . For example, the methods for invoking an endpoint using either the default message exchange pattern (MEP) or an explicitly specified MEP are named asyncSend() and asyncSendBody() (where these methods respectively send an Exchange object or a message body). If you want to force the MEP to be InOut (request/reply semantics), you can call the asyncRequestBody() , asyncRequestBodyAndHeader() , and asyncRequestBodyAndHeaders() methods instead. The following example shows how to send an exchange asynchronously to the direct:start endpoint. 
The asyncSend() method returns a java.util.concurrent.Future object, which is used to retrieve the invocation result at a later time. The producer template also provides methods to send a message body asynchronously (for example, using asyncSendBody() or asyncRequestBody() ). In this case, you can use one of the following helper methods to extract the returned message body from the Future object: The first version of the extractFutureBody() method blocks until the invocation completes and the reply message is available. The second version of the extractFutureBody() method allows you to specify a timeout. Both methods have a type argument, type , which casts the returned message body to the specified type using a built-in type converter. The following example shows how to use the asyncRequestBody() method to send a message body to the direct:start endpoint. The blocking extractFutureBody() method is then used to retrieve the reply message body from the Future object. Asynchronous invocation with a callback In the preceding asynchronous examples, the request message is dispatched in a sub-thread, while the reply is retrieved and processed by the main thread. The producer template also gives you the option, however, of processing replies in the sub-thread, using one of the asyncCallback() , asyncCallbackSendBody() , or asyncCallbackRequestBody() methods. In this case, you supply a callback object (of org.apache.camel.impl.SynchronizationAdapter type), which automatically gets invoked in the sub-thread as soon as a reply message arrives. The Synchronization callback interface is defined as follows: Where the onComplete() method is called on receipt of a normal reply and the onFailure() method is called on receipt of a fault message reply. Only one of these methods gets called back, so you must override both of them to ensure that all types of reply are processed. The following example shows how to send an exchange to the direct:start endpoint, where the reply message is processed in the sub-thread by the SynchronizationAdapter callback object. Where the SynchronizationAdapter class is a default implementation of the Synchronization interface, which you can override to provide your own definitions of the onComplete() and onFailure() callback methods. You still have the option of accessing the reply from the main thread, because the asyncCallback() method also returns a Future object - for example: 37.1.2. Synchronous Send Overview The synchronous send methods are a collection of methods that you can use to invoke a producer endpoint, where the current thread blocks until the method invocation is complete and the reply (if any) has been received. These methods are compatible with any kind of message exchange protocol. Send an exchange The basic send() method is a general-purpose method that sends the contents of an Exchange object to an endpoint, using the message exchange pattern (MEP) of the exchange. The return value is the exchange that you get after it has been processed by the producer endpoint (possibly containing an Out message, depending on the MEP). There are three varieties of send() method for sending an exchange that let you specify the target endpoint in one of the following ways: as the default endpoint, as an endpoint URI, or as an Endpoint object. 
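As a minimal sketch of the endpoint URI variety, the following fragment builds an InOut exchange and sends it to an explicit URI. It assumes, as in the earlier examples, that context is an existing CamelContext and that a route consuming direct:start exists; the payload is illustrative.
import org.apache.camel.Exchange;
import org.apache.camel.ExchangePattern;
import org.apache.camel.ProducerTemplate;
import org.apache.camel.impl.DefaultExchange;

ProducerTemplate template = context.createProducerTemplate();

// Build an InOut exchange explicitly and send it to an endpoint URI
Exchange exchange = new DefaultExchange(context, ExchangePattern.InOut);
exchange.getIn().setBody("<order id=\"123\"/>");

Exchange result = template.send("direct:start", exchange);

// With an InOut MEP, the reply (if the route produced one) is available as the Out message
String reply = result.getOut().getBody(String.class);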
Send an exchange populated by a processor A simple variation of the general send() method is to use a processor to populate a default exchange, instead of supplying the exchange object explicitly (see the section called "Synchronous invocation with a processor" for details). The send() methods for sending an exchange populated by a processor let you specify the target endpoint in one of the following ways: as the default endpoint, as an endpoint URI, or as an Endpoint object. In addition, you can optionally specify the exchange's MEP by supplying the pattern argument, instead of accepting the default. Send a message body If you are only concerned with the contents of the message body that you want to send, you can use the sendBody() methods to provide the message body as an argument and let the producer template take care of inserting the body into a default exchange object. The sendBody() methods let you specify the target endpoint in one of the following ways: as the default endpoint, as an endpoint URI, or as an Endpoint object. In addition, you can optionally specify the exchange's MEP by supplying the pattern argument, instead of accepting the default. The methods without a pattern argument return void (even though the invocation might give rise to a reply in some cases); and the methods with a pattern argument return either the body of the Out message (if there is one) or the body of the In message (otherwise). Send a message body and header(s) For testing purposes, it is often interesting to try out the effect of a single header setting and the sendBodyAndHeader() methods are useful for this kind of header testing. You supply the message body and header setting as arguments to sendBodyAndHeader() and let the producer template take care of inserting the body and header setting into a default exchange object. The sendBodyAndHeader() methods let you specify the target endpoint in one of the following ways: as the default endpoint, as an endpoint URI, or as an Endpoint object. In addition, you can optionally specify the exchange's MEP by supplying the pattern argument, instead of accepting the default. The methods without a pattern argument return void (even though the invocation might give rise to a reply in some cases); and the methods with a pattern argument return either the body of the Out message (if there is one) or the body of the In message (otherwise). The sendBodyAndHeaders() methods are similar to the sendBodyAndHeader() methods, except that instead of supplying just a single header setting, these methods allow you to specify a complete hash map of header settings. Send a message body and exchange property You can try out the effect of setting a single exchange property using the sendBodyAndProperty() methods. You supply the message body and property setting as arguments to sendBodyAndProperty() and let the producer template take care of inserting the body and exchange property into a default exchange object. The sendBodyAndProperty() methods let you specify the target endpoint in one of the following ways: as the default endpoint, as an endpoint URI, or as an Endpoint object. In addition, you can optionally specify the exchange's MEP by supplying the pattern argument, instead of accepting the default. The methods without a pattern argument return void (even though the invocation might give rise to a reply in some cases); and the methods with a pattern argument return either the body of the Out message (if there is one) or the body of the In message (otherwise). 37.1.3. 
Synchronous Request with InOut Pattern Overview The synchronous request methods are similar to the synchronous send methods, except that the request methods force the message exchange pattern to be InOut (conforming to request/reply semantics). Hence, it is generally convenient to use a synchronous request method, if you expect to receive a reply from the producer endpoint. Request an exchange populated by a processor The basic request() method is a general-purpose method that uses a processor to populate a default exchange and forces the message exchange pattern to be InOut (so that the invocation obeys request/reply semantics). The return value is the exchange that you get after it has been processed by the producer endpoint, where the Out message contains the reply message. The request() methods for sending an exchange populated by a processor let you specify the target endpoint in one of the following ways: as an endpoint URI, or as an Endpoint object. Request a message body If you are only concerned with the contents of the message body in the request and in the reply, you can use the requestBody() methods to provide the request message body as an argument and let the producer template take care of inserting the body into a default exchange object. The requestBody() methods let you specify the target endpoint in one of the following ways: as the default endpoint, as an endpoint URI, or as an Endpoint object. The return value is the body of the reply message ( Out message body), which can either be returned as plain Object or converted to a specific type, T , using the built-in type converters (see Section 34.3, "Built-In Type Converters" ). Request a message body and header(s) You can try out the effect of setting a single header value using the requestBodyAndHeader() methods. You supply the message body and header setting as arguments to requestBodyAndHeader() and let the producer template take care of inserting the body and exchange property into a default exchange object. The requestBodyAndHeader() methods let you specify the target endpoint in one of the following ways: as an endpoint URI, or as an Endpoint object. The return value is the body of the reply message ( Out message body), which can either be returned as plain Object or converted to a specific type, T , using the built-in type converters (see Section 34.3, "Built-In Type Converters" ). The requestBodyAndHeaders() methods are similar to the requestBodyAndHeader() methods, except that instead of supplying just a single header setting, these methods allow you to specify a complete hash map of header settings. 37.1.4. Asynchronous Send Overview The producer template provides a variety of methods for invoking a producer endpoint asynchronously, so that the main thread does not block while waiting for the invocation to complete and the reply message can be retrieved at a later time. The asynchronous send methods described in this section are compatible with any kind of message exchange protocol. Send an exchange The basic asyncSend() method takes an Exchange argument and invokes an endpoint asynchronously, using the message exchange pattern (MEP) of the specified exchange. The return value is a java.util.concurrent.Future object, which is a ticket you can use to collect the reply message at a later time - for details of how to obtain the return value from the Future object, see the section called "Asynchronous invocation" . 
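As a small sketch of collecting that reply with a bounded wait, the following fragment uses the timeout form of extractFutureBody() described earlier. The endpoint URI and payload are illustrative assumptions, and template is a ProducerTemplate created from an existing CamelContext.
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

Future<Object> future = template.asyncRequestBody("direct:start", "Hello");

try {
    // Wait at most five seconds for the reply body, converting it to a String
    String reply = template.extractFutureBody(future, 5, TimeUnit.SECONDS, String.class);
    System.out.println("Reply: " + reply);
} catch (TimeoutException e) {
    // No reply arrived within the timeout; the invocation may still complete later
    System.err.println("Timed out waiting for the reply");
}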
The following asyncSend() methods let you specify the target endpoint in one of the following ways: as an endpoint URI, or as an Endpoint object. Send an exchange populated by a processor A simple variation of the general asyncSend() method is to use a processor to populate a default exchange, instead of supplying the exchange object explicitly. The following asyncSend() methods let you specify the target endpoint in one of the following ways: as an endpoint URI, or as an Endpoint object. Send a message body If you are only concerned with the contents of the message body that you want to send, you can use the asyncSendBody() methods to send a message body asynchronously and let the producer template take care of inserting the body into a default exchange object. The asyncSendBody() methods let you specify the target endpoint in one of the following ways: as an endpoint URI, or as an Endpoint object. 37.1.5. Asynchronous Request with InOut Pattern Overview The asynchronous request methods are similar to the asynchronous send methods, except that the request methods force the message exchange pattern to be InOut (conforming to request/reply semantics). Hence, it is generally convenient to use an asynchronous request method, if you expect to receive a reply from the producer endpoint. Request a message body If you are only concerned with the contents of the message body in the request and in the reply, you can use the requestBody() methods to provide the request message body as an argument and let the producer template take care of inserting the body into a default exchange object. The asyncRequestBody() methods let you specify the target endpoint in one of the following ways: as an endpoint URI, or as an Endpoint object. The return value that is retrievable from the Future object is the body of the reply message ( Out message body), which can be returned either as a plain Object or converted to a specific type, T , using a built-in type converter (see the section called "Asynchronous invocation" ). Request a message body and header(s) You can try out the effect of setting a single header value using the asyncRequestBodyAndHeader() methods. You supply the message body and header setting as arguments to asyncRequestBodyAndHeader() and let the producer template take care of inserting the body and exchange property into a default exchange object. The asyncRequestBodyAndHeader() methods let you specify the target endpoint in one of the following ways: as an endpoint URI, or as an Endpoint object. The return value that is retrievable from the Future object is the body of the reply message ( Out message body), which can be returned either as a plain Object or converted to a specific type, T , using a built-in type converter (see the section called "Asynchronous invocation" ). The asyncRequestBodyAndHeaders() methods are similar to the asyncRequestBodyAndHeader() methods, except that instead of supplying just a single header setting, these methods allow you to specify a complete hash map of header settings. 37.1.6. Asynchronous Send with Callback Overview The producer template also provides the option of processing the reply message in the same sub-thread that is used to invoke the producer endpoint. In this case, you provide a callback object, which automatically gets invoked in the sub-thread as soon as the reply message is received. 
In other words, the asynchronous send with callback methods enable you to initiate an invocation in your main thread and then have all of the associated processing - invocation of the producer endpoint, waiting for a reply and processing the reply - occur asynchronously in a sub-thread. Send an exchange The basic asyncCallback() method takes an Exchange argument and invokes an endpoint asynchronously, using the message exchange pattern (MEP) of the specified exchange. This method is similar to the asyncSend() method for exchanges, except that it takes an additional org.apache.camel.spi.Synchronization argument, which is a callback interface with two methods: onComplete() and onFailure() . For details of how to use the Synchronization callback, see the section called "Asynchronous invocation with a callback" . The following asyncCallback() methods let you specify the target endpoint in one of the following ways: as an endpoint URI, or as an Endpoint object. Send an exchange populated by a processor The asyncCallback() method for processors calls a processor to populate a default exchange and forces the message exchange pattern to be InOut (so that the invocation obeys request/reply semantics). The following asyncCallback() methods let you specify the target endpoint in one of the following ways: as an endpoint URI, or as an Endpoint object. Send a message body If you are only concerned with the contents of the message body that you want to send, you can use the asyncCallbackSendBody() methods to send a message body asynchronously and let the producer template take care of inserting the body into a default exchange object. The asyncCallbackSendBody() methods let you specify the target endpoint in one of the following ways: as an endpoint URI, or as an Endpoint object. Request a message body If you are only concerned with the contents of the message body in the request and in the reply, you can use the asyncCallbackRequestBody() methods to provide the request message body as an argument and let the producer template take care of inserting the body into a default exchange object. The asyncCallbackRequestBody() methods let you specify the target endpoint in one of the following ways: as an endpoint URI, or as an Endpoint object. 37.2. Using Fluent Producer Templates Available as of Camel 2.18 The FluentProducerTemplate interface provides a fluent syntax for building a producer. The DefaultFluentProducerTemplate class implements FluentProducerTemplate . The following example uses a DefaultFluentProducerTemplate object to set headers and a body: The following example shows how to specify a processor in a DefaultFluentProducerTemplate object: The example shows how to customize the default fluent producer template: To create a FluentProducerTemplate instance, call the createFluentProducerTemplate() method on the Camel context. For example: 37.3. Using the Consumer Template Overview The consumer template provides methods for polling a consumer endpoint in order to receive incoming messages. You can choose to receive the incoming message either in the form of an exchange object or in the form of a message body (where the message body can be cast to a particular type using a built-in type converter). Example of polling exchanges You can use a consumer template to poll a consumer endpoint for exchanges using one of the following polling methods: blocking receive() ; receive() with a timeout; or receiveNoWait() , which returns immediately. 
Because a consumer endpoint represents a service, it is also essential to start the service thread by calling start() before you attempt to poll for exchanges. The following example shows how to poll an exchange from the seda:foo consumer endpoint using the blocking receive() method: Where the consumer template instance, consumer , is instantiated using the CamelContext.createConsumerTemplate() method and the consumer service thread is started by calling ConsumerTemplate.start() . Example of polling message bodies You can also poll a consumer endpoint for incoming message bodies using one of the following methods: blocking receiveBody() ; receiveBody() with a timeout; or receiveBodyNoWait() , which returns immediately. As in the example, it is also essential to start the service thread by calling start() before you attempt to poll for exchanges. The following example shows how to poll an incoming message body from the seda:foo consumer endpoint using the blocking receiveBody() method: Methods for polling exchanges There are three basic methods for polling exchanges from a consumer endpoint: receive() without a timeout blocks indefinitely; receive() with a timeout blocks for the specified period of milliseconds; and receiveNoWait() is non-blocking. You can specify the consumer endpoint either as an endpoint URI or as an Endpoint instance. Methods for polling message bodies There are three basic methods for polling message bodies from a consumer endpoint: receiveBody() without a timeout blocks indefinitely; receiveBody() with a timeout blocks for the specified period of milliseconds; and receiveBodyNoWait() is non-blocking. You can specify the consumer endpoint either as an endpoint URI or as an Endpoint instance. Moreover, by calling the templating forms of these methods, you can convert the returned body to a particular type, T , using a built-in type converter.
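The following fragment is a brief sketch of a timed, typed poll that combines the producer and consumer templates. The seda:foo endpoint and the payload are illustrative assumptions taken from the style of the earlier examples, and context is an existing CamelContext; remember that the consumer service must be started before polling.
import org.apache.camel.ConsumerTemplate;
import org.apache.camel.ProducerTemplate;

ProducerTemplate producer = context.createProducerTemplate();
ConsumerTemplate consumer = context.createConsumerTemplate();
consumer.start();

producer.sendBody("seda:foo", "Hello");

// Block for at most two seconds and convert the returned body to a String;
// null is returned if nothing arrives within the timeout
String body = consumer.receiveBody("seda:foo", 2000, String.class);

consumer.stop();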
[ "import org.apache.camel.ProducerTemplate import org.apache.camel.impl.DefaultProducerTemplate ProducerTemplate template = context.createProducerTemplate(); // Send to a specific queue template.sendBody(\"activemq:MyQueue\", \"<hello>world!</hello>\"); // Send with a body and header template.sendBodyAndHeader( \"activemq:MyQueue\", \"<hello>world!</hello>\", \"CustomerRating\", \"Gold\" );", "import org.apache.camel.ProducerTemplate import org.apache.camel.impl.DefaultProducerTemplate ProducerTemplate template = context.createProducerTemplate(); // Send to a specific queue, using a processor to initialize template.send(\"activemq:MyQueue\", new MyProcessor());", "import org.apache.camel.Processor; import org.apache.camel.Exchange; public class MyProcessor implements Processor { public MyProcessor() { } public void process(Exchange ex) { ex.getIn().setBody(\"<hello>world!</hello>\"); } }", "import java.util.concurrent.Future; import org.apache.camel.Exchange; import org.apache.camel.impl.DefaultExchange; Exchange exchange = new DefaultExchange(context); exchange.getIn().setBody(\"Hello\"); Future<Exchange> future = template.asyncSend(\"direct:start\", exchange); // You can do other things, whilst waiting for the invocation to complete // Now, retrieve the resulting exchange from the Future Exchange result = future.get();", "<T> T extractFutureBody(Future future, Class<T> type); <T> T extractFutureBody(Future future, long timeout, TimeUnit unit, Class<T> type) throws TimeoutException;", "Future<Object> future = template.asyncRequestBody(\"direct:start\", \"Hello\"); // You can do other things, whilst waiting for the invocation to complete // Now, retrieve the reply message body as a String type String result = template.extractFutureBody(future, String.class);", "package org.apache.camel.spi; import org.apache.camel.Exchange; public interface Synchronization { void onComplete(Exchange exchange); void onFailure(Exchange exchange); }", "import java.util.concurrent.Future; import java.util.concurrent.TimeUnit; import org.apache.camel.Exchange; import org.apache.camel.impl.DefaultExchange; import org.apache.camel.impl.SynchronizationAdapter; Exchange exchange = context.getEndpoint(\"direct:start\").createExchange(); exchange.getIn().setBody(\"Hello\"); Future<Exchange> future = template.asyncCallback(\"direct:start\", exchange, new SynchronizationAdapter() { @Override public void onComplete(Exchange exchange) { assertEquals(\"Hello World\", exchange.getIn().getBody()); } });", "// Retrieve the reply from the main thread, specifying a timeout Exchange reply = future.get(10, TimeUnit.SECONDS);", "Exchange send(Exchange exchange); Exchange send(String endpointUri, Exchange exchange); Exchange send(Endpoint endpoint, Exchange exchange);", "Exchange send(Processor processor); Exchange send(String endpointUri, Processor processor); Exchange send(Endpoint endpoint, Processor processor); Exchange send( String endpointUri, ExchangePattern pattern, Processor processor ); Exchange send( Endpoint endpoint, ExchangePattern pattern, Processor processor );", "void sendBody(Object body); void sendBody(String endpointUri, Object body); void sendBody(Endpoint endpoint, Object body); Object sendBody( String endpointUri, ExchangePattern pattern, Object body ); Object sendBody( Endpoint endpoint, ExchangePattern pattern, Object body );", "void sendBodyAndHeader( Object body, String header, Object headerValue ); void sendBodyAndHeader( String endpointUri, Object body, String header, Object headerValue ); void 
sendBodyAndHeader( Endpoint endpoint, Object body, String header, Object headerValue ); Object sendBodyAndHeader( String endpointUri, ExchangePattern pattern, Object body, String header, Object headerValue ); Object sendBodyAndHeader( Endpoint endpoint, ExchangePattern pattern, Object body, String header, Object headerValue );", "void sendBodyAndHeaders( Object body, Map<String, Object> headers ); void sendBodyAndHeaders( String endpointUri, Object body, Map<String, Object> headers ); void sendBodyAndHeaders( Endpoint endpoint, Object body, Map<String, Object> headers ); Object sendBodyAndHeaders( String endpointUri, ExchangePattern pattern, Object body, Map<String, Object> headers ); Object sendBodyAndHeaders( Endpoint endpoint, ExchangePattern pattern, Object body, Map<String, Object> headers );", "void sendBodyAndProperty( Object body, String property, Object propertyValue ); void sendBodyAndProperty( String endpointUri, Object body, String property, Object propertyValue ); void sendBodyAndProperty( Endpoint endpoint, Object body, String property, Object propertyValue ); Object sendBodyAndProperty( String endpoint, ExchangePattern pattern, Object body, String property, Object propertyValue ); Object sendBodyAndProperty( Endpoint endpoint, ExchangePattern pattern, Object body, String property, Object propertyValue );", "Exchange request(String endpointUri, Processor processor); Exchange request(Endpoint endpoint, Processor processor);", "Object requestBody(Object body); <T> T requestBody(Object body, Class<T> type); Object requestBody( String endpointUri, Object body ); <T> T requestBody( String endpointUri, Object body, Class<T> type ); Object requestBody( Endpoint endpoint, Object body ); <T> T requestBody( Endpoint endpoint, Object body, Class<T> type );", "Object requestBodyAndHeader( String endpointUri, Object body, String header, Object headerValue ); <T> T requestBodyAndHeader( String endpointUri, Object body, String header, Object headerValue, Class<T> type ); Object requestBodyAndHeader( Endpoint endpoint, Object body, String header, Object headerValue ); <T> T requestBodyAndHeader( Endpoint endpoint, Object body, String header, Object headerValue, Class<T> type );", "Object requestBodyAndHeaders( String endpointUri, Object body, Map<String, Object> headers ); <T> T requestBodyAndHeaders( String endpointUri, Object body, Map<String, Object> headers, Class<T> type ); Object requestBodyAndHeaders( Endpoint endpoint, Object body, Map<String, Object> headers ); <T> T requestBodyAndHeaders( Endpoint endpoint, Object body, Map<String, Object> headers, Class<T> type );", "Future<Exchange> asyncSend(String endpointUri, Exchange exchange); Future<Exchange> asyncSend(Endpoint endpoint, Exchange exchange);", "Future<Exchange> asyncSend(String endpointUri, Processor processor); Future<Exchange> asyncSend(Endpoint endpoint, Processor processor);", "Future<Object> asyncSendBody(String endpointUri, Object body); Future<Object> asyncSendBody(Endpoint endpoint, Object body);", "Future<Object> asyncRequestBody( String endpointUri, Object body ); <T> Future<T> asyncRequestBody( String endpointUri, Object body, Class<T> type ); Future<Object> asyncRequestBody( Endpoint endpoint, Object body ); <T> Future<T> asyncRequestBody( Endpoint endpoint, Object body, Class<T> type );", "Future<Object> asyncRequestBodyAndHeader( String endpointUri, Object body, String header, Object headerValue ); <T> Future<T> asyncRequestBodyAndHeader( String endpointUri, Object body, String header, Object headerValue, 
Class<T> type ); Future<Object> asyncRequestBodyAndHeader( Endpoint endpoint, Object body, String header, Object headerValue ); <T> Future<T> asyncRequestBodyAndHeader( Endpoint endpoint, Object body, String header, Object headerValue, Class<T> type );", "Future<Object> asyncRequestBodyAndHeaders( String endpointUri, Object body, Map<String, Object> headers ); <T> Future<T> asyncRequestBodyAndHeaders( String endpointUri, Object body, Map<String, Object> headers, Class<T> type ); Future<Object> asyncRequestBodyAndHeaders( Endpoint endpoint, Object body, Map<String, Object> headers ); <T> Future<T> asyncRequestBodyAndHeaders( Endpoint endpoint, Object body, Map<String, Object> headers, Class<T> type );", "Future<Exchange> asyncCallback( String endpointUri, Exchange exchange, Synchronization onCompletion ); Future<Exchange> asyncCallback( Endpoint endpoint, Exchange exchange, Synchronization onCompletion );", "Future<Exchange> asyncCallback( String endpointUri, Processor processor, Synchronization onCompletion ); Future<Exchange> asyncCallback( Endpoint endpoint, Processor processor, Synchronization onCompletion );", "Future<Object> asyncCallbackSendBody( String endpointUri, Object body, Synchronization onCompletion ); Future<Object> asyncCallbackSendBody( Endpoint endpoint, Object body, Synchronization onCompletion );", "Future<Object> asyncCallbackRequestBody( String endpointUri, Object body, Synchronization onCompletion ); Future<Object> asyncCallbackRequestBody( Endpoint endpoint, Object body, Synchronization onCompletion );", "Integer result = DefaultFluentProducerTemplate.on(context) .withHeader(\"key-1\", \"value-1\") .withHeader(\"key-2\", \"value-2\") .withBody(\"Hello\") .to(\"direct:inout\") .request(Integer.class);", "Integer result = DefaultFluentProducerTemplate.on(context) .withProcessor(exchange -> exchange.getIn().setBody(\"Hello World\")) .to(\"direct:exception\") .request(Integer.class);", "Object result = DefaultFluentProducerTemplate.on(context) .withTemplateCustomizer( template -> { template.setExecutorService(myExecutor); template.setMaximumCacheSize(10); } ) .withBody(\"the body\") .to(\"direct:start\") .request();", "FluentProducerTemplate fluentProducerTemplate = context.createFluentProducerTemplate();", "import org.apache.camel.ProducerTemplate; import org.apache.camel.ConsumerTemplate; import org.apache.camel.Exchange; ProducerTemplate template = context.createProducerTemplate(); ConsumerTemplate consumer = context.createConsumerTemplate(); // Start the consumer service consumer. start (); template.sendBody(\"seda:foo\", \"Hello\"); Exchange out = consumer. receive (\"seda:foo\"); // Stop the consumer service consumer. stop ();", "import org.apache.camel.ProducerTemplate; import org.apache.camel.ConsumerTemplate; ProducerTemplate template = context.createProducerTemplate(); ConsumerTemplate consumer = context.createConsumerTemplate(); // Start the consumer service consumer. start (); template.sendBody(\"seda:foo\", \"Hello\"); Object body = consumer. receiveBody (\"seda:foo\"); // Stop the consumer service consumer. 
stop ();", "Exchange receive(String endpointUri); Exchange receive(String endpointUri, long timeout); Exchange receiveNoWait(String endpointUri); Exchange receive(Endpoint endpoint); Exchange receive(Endpoint endpoint, long timeout); Exchange receiveNoWait(Endpoint endpoint);", "Object receiveBody(String endpointUri); Object receiveBody(String endpointUri, long timeout); Object receiveBodyNoWait(String endpointUri); Object receiveBody(Endpoint endpoint); Object receiveBody(Endpoint endpoint, long timeout); Object receiveBodyNoWait(Endpoint endpoint); <T> T receiveBody(String endpointUri, Class<T> type); <T> T receiveBody(String endpointUri, long timeout, Class<T> type); <T> T receiveBodyNoWait(String endpointUri, Class<T> type); <T> T receiveBody(Endpoint endpoint, Class<T> type); <T> T receiveBody(Endpoint endpoint, long timeout, Class<T> type); <T> T receiveBodyNoWait(Endpoint endpoint, Class<T> type);" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_development_guide/Templates
Chapter 2. ClusterAutoscaler [autoscaling.openshift.io/v1]
Chapter 2. ClusterAutoscaler [autoscaling.openshift.io/v1] Description ClusterAutoscaler is the Schema for the clusterautoscalers API Type object 2.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object Desired state of ClusterAutoscaler resource status object Most recently observed status of ClusterAutoscaler resource 2.1.1. .spec Description Desired state of ClusterAutoscaler resource Type object Property Type Description balanceSimilarNodeGroups boolean BalanceSimilarNodeGroups enables/disables the --balance-similar-node-groups cluster-autoscaler feature. This feature will automatically identify node groups with the same instance type and the same set of labels and try to keep the respective sizes of those node groups balanced. balancingIgnoredLabels array (string) BalancingIgnoredLabels sets "--balancing-ignore-label <label name>" flag on cluster-autoscaler for each listed label. This option specifies labels that cluster autoscaler should ignore when considering node group similarity. For example, if you have nodes with "topology.ebs.csi.aws.com/zone" label, you can add name of this label here to prevent cluster autoscaler from spliting nodes into different node groups based on its value. ignoreDaemonsetsUtilization boolean Enables/Disables --ignore-daemonsets-utilization CA feature flag. Should CA ignore DaemonSet pods when calculating resource utilization for scaling down. false by default logVerbosity integer Sets the autoscaler log level. Default value is 1, level 4 is recommended for DEBUGGING and level 6 will enable almost everything. This option has priority over log level set by the CLUSTER_AUTOSCALER_VERBOSITY environment variable. maxNodeProvisionTime string Maximum time CA waits for node to be provisioned maxPodGracePeriod integer Gives pods graceful termination time before scaling down podPriorityThreshold integer To allow users to schedule "best-effort" pods, which shouldn't trigger Cluster Autoscaler actions, but only run when there are spare resources available, More info: https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#how-does-cluster-autoscaler-work-with-pod-priority-and-preemption resourceLimits object Constraints of autoscaling resources scaleDown object Configuration of scale down operation skipNodesWithLocalStorage boolean Enables/Disables --skip-nodes-with-local-storage CA feature flag. If true cluster autoscaler will never delete nodes with pods with local storage, e.g. EmptyDir or HostPath. true by default at autoscaler 2.1.2. .spec.resourceLimits Description Constraints of autoscaling resources Type object Property Type Description cores object Minimum and maximum number of cores in cluster, in the format <min>:<max>. 
Cluster autoscaler will not scale the cluster beyond these numbers. gpus array Minimum and maximum number of different GPUs in cluster, in the format <gpu_type>:<min>:<max>. Cluster autoscaler will not scale the cluster beyond these numbers. Can be passed multiple times. gpus[] object maxNodesTotal integer Maximum number of nodes in all node groups. Cluster autoscaler will not grow the cluster beyond this number. memory object Minimum and maximum number of gigabytes of memory in cluster, in the format <min>:<max>. Cluster autoscaler will not scale the cluster beyond these numbers. 2.1.3. .spec.resourceLimits.cores Description Minimum and maximum number of cores in cluster, in the format <min>:<max>. Cluster autoscaler will not scale the cluster beyond these numbers. Type object Required max min Property Type Description max integer min integer 2.1.4. .spec.resourceLimits.gpus Description Minimum and maximum number of different GPUs in cluster, in the format <gpu_type>:<min>:<max>. Cluster autoscaler will not scale the cluster beyond these numbers. Can be passed multiple times. Type array 2.1.5. .spec.resourceLimits.gpus[] Description Type object Required max min type Property Type Description max integer min integer type string The type of GPU to associate with the minimum and maximum limits. This value is used by the Cluster Autoscaler to identify Nodes that will have GPU capacity by searching for it as a label value on the Node objects. For example, Nodes that carry the label key cluster-api/accelerator with the label value being the same as the Type field will be counted towards the resource limits by the Cluster Autoscaler. 2.1.6. .spec.resourceLimits.memory Description Minimum and maximum number of gigabytes of memory in cluster, in the format <min>:<max>. Cluster autoscaler will not scale the cluster beyond these numbers. Type object Required max min Property Type Description max integer min integer 2.1.7. .spec.scaleDown Description Configuration of scale down operation Type object Required enabled Property Type Description delayAfterAdd string How long after scale up that scale down evaluation resumes delayAfterDelete string How long after node deletion that scale down evaluation resumes, defaults to scan-interval delayAfterFailure string How long after scale down failure that scale down evaluation resumes enabled boolean Should CA scale down the cluster unneededTime string How long a node should be unneeded before it is eligible for scale down utilizationThreshold string Node utilization level, defined as sum of requested resources divided by capacity, below which a node can be considered for scale down 2.1.8. .status Description Most recently observed status of ClusterAutoscaler resource Type object 2.2. API endpoints The following API endpoints are available: /apis/autoscaling.openshift.io/v1/clusterautoscalers DELETE : delete collection of ClusterAutoscaler GET : list objects of kind ClusterAutoscaler POST : create a ClusterAutoscaler /apis/autoscaling.openshift.io/v1/clusterautoscalers/{name} DELETE : delete a ClusterAutoscaler GET : read the specified ClusterAutoscaler PATCH : partially update the specified ClusterAutoscaler PUT : replace the specified ClusterAutoscaler /apis/autoscaling.openshift.io/v1/clusterautoscalers/{name}/status GET : read status of the specified ClusterAutoscaler PATCH : partially update status of the specified ClusterAutoscaler PUT : replace status of the specified ClusterAutoscaler 2.2.1. 
/apis/autoscaling.openshift.io/v1/clusterautoscalers Table 2.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of ClusterAutoscaler Table 2.2. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid, whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart its list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error; the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent with the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the "next key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested number of items (up to zero items) in the event all requested objects are filtered out, and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list, the version of the object that was present at the time the first list result was calculated is returned. 
resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset. resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset. sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with the "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When the sendInitialEvents option is set, the resourceVersionMatch option must also be set. The semantics of the watch request are as follows: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion" and the bookmark event is sent when the state is synced to a resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is sent when the state is synced at least to the moment when the request started being processed. - resourceVersionMatch set to any other value or unset: an Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 2.3. HTTP responses HTTP code Response body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind ClusterAutoscaler Table 2.4. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid, whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart its list without the continue field. 
Otherwise, the client may send another list request with the token received with the 410 error; the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent with the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the "next key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested number of items (up to zero items) in the event all requested objects are filtered out, and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list, the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset. resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset. sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with the "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When the sendInitialEvents option is set, the resourceVersionMatch option must also be set. 
The semantics of the watch request are as follows: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion" and the bookmark event is sent when the state is synced to a resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is sent when the state is synced at least to the moment when the request started being processed. - resourceVersionMatch set to any other value or unset: an Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 2.5. HTTP responses HTTP code Response body 200 - OK ClusterAutoscalerList schema 401 - Unauthorized Empty HTTP method POST Description create a ClusterAutoscaler Table 2.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.7. Body parameters Parameter Type Description body ClusterAutoscaler schema Table 2.8. HTTP responses HTTP code Response body 200 - OK ClusterAutoscaler schema 201 - Created ClusterAutoscaler schema 202 - Accepted ClusterAutoscaler schema 401 - Unauthorized Empty 2.2.2. /apis/autoscaling.openshift.io/v1/clusterautoscalers/{name} Table 2.9. Global path parameters Parameter Type Description name string name of the ClusterAutoscaler Table 2.10. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a ClusterAutoscaler Table 2.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. The value must be a non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per-object value if not specified. Zero means delete immediately. orphanDependents boolean Deprecated: please use PropagationPolicy; this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 2.12. Body parameters Parameter Type Description body DeleteOptions schema Table 2.13. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ClusterAutoscaler Table 2.14. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset. Table 2.15. HTTP responses HTTP code Response body 200 - OK ClusterAutoscaler schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ClusterAutoscaler Table 2.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default in v1.23+. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means the user will re-acquire conflicting fields owned by other people. The force flag must be unset for non-apply patch requests. Table 2.17. Body parameters Parameter Type Description body Patch schema Table 2.18. HTTP responses HTTP code Response body 200 - OK ClusterAutoscaler schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ClusterAutoscaler Table 2.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.20. Body parameters Parameter Type Description body ClusterAutoscaler schema Table 2.21. HTTP responses HTTP code Response body 200 - OK ClusterAutoscaler schema 201 - Created ClusterAutoscaler schema 401 - Unauthorized Empty 2.2.3. /apis/autoscaling.openshift.io/v1/clusterautoscalers/{name}/status Table 2.22. Global path parameters Parameter Type Description name string name of the ClusterAutoscaler Table 2.23. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified ClusterAutoscaler Table 2.24. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset. Table 2.25. HTTP responses HTTP code Response body 200 - OK ClusterAutoscaler schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified ClusterAutoscaler Table 2.26. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means the user will re-acquire conflicting fields owned by other people. The force flag must be unset for non-apply patch requests. Table 2.27. Body parameters Parameter Type Description body Patch schema Table 2.28. HTTP responses HTTP code Response body 200 - OK ClusterAutoscaler schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified ClusterAutoscaler Table 2.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.30. 
Body parameters Parameter Type Description body ClusterAutoscaler schema Table 2.31. HTTP responses HTTP code Response body 200 - OK ClusterAutoscaler schema 201 - Created ClusterAutoscaler schema 401 - Unauthorized Empty
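The tables above describe the ClusterAutoscaler schema and endpoints field by field, but the reference itself contains no end-to-end example. The following is a minimal sketch of one way to create and read back such a resource with the Kubernetes Python client; the resource name "default", the limits, and the scale-down timings are illustrative assumptions, not values mandated by this reference.

```python
# Minimal sketch: create a ClusterAutoscaler custom resource via the
# Kubernetes Python client. Field values below are illustrative only.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod
api = client.CustomObjectsApi()

cluster_autoscaler = {
    "apiVersion": "autoscaling.openshift.io/v1",
    "kind": "ClusterAutoscaler",
    # "default" is a conventional name; adjust for your cluster (assumption).
    "metadata": {"name": "default"},
    "spec": {
        "balanceSimilarNodeGroups": True,
        "podPriorityThreshold": -10,
        "resourceLimits": {
            "maxNodesTotal": 24,
            "cores": {"min": 8, "max": 128},
            "memory": {"min": 4, "max": 256},
        },
        "scaleDown": {
            "enabled": True,
            "delayAfterAdd": "10m",
            "delayAfterDelete": "5m",
            "delayAfterFailure": "30s",
            "unneededTime": "5m",
        },
    },
}

# POST /apis/autoscaling.openshift.io/v1/clusterautoscalers
api.create_cluster_custom_object(
    group="autoscaling.openshift.io",
    version="v1",
    plural="clusterautoscalers",
    body=cluster_autoscaler,
)

# GET /apis/autoscaling.openshift.io/v1/clusterautoscalers/default
print(api.get_cluster_custom_object(
    group="autoscaling.openshift.io",
    version="v1",
    plural="clusterautoscalers",
    name="default",
))
```

The same spec could equally be applied as a manifest with oc or kubectl; the client calls above map one-to-one onto the POST and GET endpoints listed in the tables.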
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/autoscale_apis/clusterautoscaler-autoscaling-openshift-io-v1
10.5.57. Action
10.5.57. Action Action specifies a MIME content type and CGI script pair, so that when a file of that media type is requested, a particular CGI script is executed.
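As a concrete illustration of the directive described above, the following is a hypothetical handler script; the configuration lines, the script path /cgi-bin/report-handler.py, and the application/x-example media type are all invented for this sketch and do not come from this guide. When Apache invokes a script through Action, mod_actions passes the URL of the originally requested document in PATH_INFO and its filesystem path in PATH_TRANSLATED.

```python
#!/usr/bin/env python3
# Hypothetical CGI handler for an Action directive such as (illustrative only):
#   AddType application/x-example .exm
#   Action application/x-example /cgi-bin/report-handler.py
#
# mod_actions passes the URL of the originally requested document in
# PATH_INFO and its filesystem path in PATH_TRANSLATED.
import os
import sys

requested_url = os.environ.get("PATH_INFO", "")
requested_file = os.environ.get("PATH_TRANSLATED", "")

# Emit a minimal CGI response describing what was requested.
sys.stdout.write("Content-Type: text/plain\r\n\r\n")
sys.stdout.write(f"Handled request for URL: {requested_url}\n")
sys.stdout.write(f"File on disk: {requested_file}\n")
```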
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-apache-action
4.152. libvirt-qmf
4.152. libvirt-qmf 4.152.1. RHBA-2012:0525 - libvirt-qmf bug fix update Updated libvirt-qmf packages that fix one bug are now available for Red Hat Enterprise Linux 6. The libvirt-qmf packages provide an interface to libvirt using the Qpid Management Framework (QMF), which utilizes the Advanced Message Queuing Protocol (AMQP). AMQP is an open standard application layer protocol providing reliable transport of messages. Bug Fix BZ# 807931 Qpid APIs using the libqpidclient and libqpidcommon libraries are not application binary interface (ABI) stable. These dependencies have been removed so that Qpid rebuilds do not affect the libvirt-qmf packages. All users of libvirt-qmf are advised to upgrade to these updated packages, which fix this bug.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/libvirt-qmf
Chapter 11. Volume Snapshots
Chapter 11. Volume Snapshots A volume snapshot is the state of the storage volume in a cluster at a particular point in time. These snapshots help you use storage more efficiently because you do not have to make a full copy each time, and they can be used as building blocks for developing an application. You can create multiple snapshots of the same persistent volume claim (PVC). For CephFS, you can create up to 100 snapshots per PVC. For RADOS Block Device (RBD), you can create up to 512 snapshots per PVC. Note You cannot schedule periodic creation of snapshots. 11.1. Creating volume snapshots You can create a volume snapshot either from the Persistent Volume Claim (PVC) page or the Volume Snapshots page. Prerequisites For a consistent snapshot, the PVC should be in the Bound state and not be in use. Ensure that all I/O is stopped before taking the snapshot. Note OpenShift Container Storage only provides crash consistency for a volume snapshot of a PVC if a pod is using it. For application consistency, be sure to first tear down a running pod to ensure consistent snapshots, or use a quiesce mechanism provided by the application to ensure it. Procedure From the Persistent Volume Claims page Click Storage → Persistent Volume Claims from the OpenShift Web Console. To create a volume snapshot, do one of the following: Beside the desired PVC, click the Action menu (...) → Create Snapshot . Click on the PVC for which you want to create the snapshot and click Actions → Create Snapshot . Enter a Name for the volume snapshot. Choose the Snapshot Class from the drop-down list. Click Create . You will be redirected to the Details page of the volume snapshot that is created. From the Volume Snapshots page Click Storage → Volume Snapshots from the OpenShift Web Console. In the Volume Snapshots page, click Create Volume Snapshot . Choose the required Project from the drop-down list. Choose the Persistent Volume Claim from the drop-down list. Enter a Name for the snapshot. Choose the Snapshot Class from the drop-down list. Click Create . You will be redirected to the Details page of the volume snapshot that is created. Verification steps Go to the Details page of the PVC and click the Volume Snapshots tab to see the list of volume snapshots. Verify that the new volume snapshot is listed. Click Storage → Volume Snapshots from the OpenShift Web Console. Verify that the new volume snapshot is listed. Wait for the volume snapshot to be in the Ready state. 11.2. Restoring volume snapshots When you restore a volume snapshot, a new Persistent Volume Claim (PVC) gets created. The restored PVC is independent of the volume snapshot and the parent PVC. You can restore a volume snapshot from either the Persistent Volume Claim page or the Volume Snapshots page. Procedure From the Persistent Volume Claims page You can restore a volume snapshot from the Persistent Volume Claims page only if the parent PVC is present. Click Storage → Persistent Volume Claims from the OpenShift Web Console. Click on the PVC name with the volume snapshot that you want to restore as a new PVC. In the Volume Snapshots tab, click the Action menu (...) next to the volume snapshot you want to restore. Click Restore as new PVC . Enter a name for the new PVC. Select the Storage Class name. Note For RADOS Block Device (RBD), you must select a storage class with the same pool as that of the parent PVC. Restoring the snapshot of an encrypted PVC using a storage class where encryption is not enabled, and vice versa, is not supported. Select the Access Mode of your choice. 
Important The ReadOnlyMany (ROX) access mode is a Developer Preview feature and is subject to Developer Preview support limitations. Developer Preview releases are not intended to be run in production environments and are not supported through the Red Hat Customer Portal case management system. If you need assistance with the ReadOnlyMany feature, reach out to the [email protected] mailing list and a member of the Red Hat Development Team will assist you as quickly as possible based on availability and work schedules. See Creating a clone or restoring a snapshot with the new readonly access mode to use the ROX access mode. (Optional) For RBD, select Volume mode . Click Restore . You are redirected to the new PVC details page. From the Volume Snapshots page Click Storage → Volume Snapshots from the OpenShift Web Console. In the Volume Snapshots tab, click the Action menu (...) next to the volume snapshot you want to restore. Click Restore as new PVC . Enter a name for the new PVC. Select the Storage Class name. Note For RADOS Block Device (RBD), you must select a storage class with the same pool as that of the parent PVC. Restoring the snapshot of an encrypted PVC using a storage class where encryption is not enabled, and vice versa, is not supported. Select the Access Mode of your choice. Important The ReadOnlyMany (ROX) access mode is a Developer Preview feature and is subject to Developer Preview support limitations. Developer Preview releases are not intended to be run in production environments and are not supported through the Red Hat Customer Portal case management system. If you need assistance with the ReadOnlyMany feature, reach out to the [email protected] mailing list and a member of the Red Hat Development Team will assist you as quickly as possible based on availability and work schedules. See Creating a clone or restoring a snapshot with the new readonly access mode to use the ROX access mode. (Optional) For RBD, select Volume mode . Click Restore . You are redirected to the new PVC details page. Verification steps Click Storage → Persistent Volume Claims from the OpenShift Web Console and confirm that the new PVC is listed in the Persistent Volume Claims page. Wait for the new PVC to reach the Bound state. 11.3. Deleting volume snapshots Prerequisites For deleting a volume snapshot, the volume snapshot class that is used by that particular volume snapshot must be present. Procedure From the Persistent Volume Claims page Click Storage → Persistent Volume Claims from the OpenShift Web Console. Click on the PVC name that has the volume snapshot that needs to be deleted. In the Volume Snapshots tab, beside the desired volume snapshot, click the Action menu (...) → Delete Volume Snapshot . From the Volume Snapshots page Click Storage → Volume Snapshots from the OpenShift Web Console. In the Volume Snapshots page, beside the desired volume snapshot, click the Action menu (...) → Delete Volume Snapshot . Verification steps Ensure that the deleted volume snapshot is not present in the Volume Snapshots tab of the PVC details page. Click Storage → Volume Snapshots and ensure that the deleted volume snapshot is not listed.
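The procedures above use the OpenShift Web Console. For completeness, the following is a minimal sketch of the equivalent create and restore operations performed with the Kubernetes Python client, assuming the snapshot.storage.k8s.io/v1 API is available; the namespace, PVC name, snapshot class, and storage class names are placeholders chosen for illustration and must be replaced with values valid for your cluster.

```python
# Minimal sketch: create a VolumeSnapshot of an existing PVC and then restore
# it into a new PVC. All names below (namespace, PVC, snapshot class, storage
# class) are placeholders, not values taken from this guide.
from kubernetes import client, config

config.load_kube_config()
custom = client.CustomObjectsApi()
core = client.CoreV1Api()

namespace = "my-project"  # placeholder

snapshot = {
    "apiVersion": "snapshot.storage.k8s.io/v1",
    "kind": "VolumeSnapshot",
    "metadata": {"name": "my-pvc-snapshot"},
    "spec": {
        "volumeSnapshotClassName": "my-rbd-snapclass",   # placeholder
        "source": {"persistentVolumeClaimName": "my-pvc"},
    },
}
custom.create_namespaced_custom_object(
    group="snapshot.storage.k8s.io",
    version="v1",
    namespace=namespace,
    plural="volumesnapshots",
    body=snapshot,
)

# Restore: a new PVC whose dataSource points at the snapshot. For RBD, the
# storage class should use the same pool as the parent PVC (see note above).
restored_pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "my-pvc-restore"},
    "spec": {
        "storageClassName": "my-rbd-storageclass",       # placeholder
        "accessModes": ["ReadWriteOnce"],
        "resources": {"requests": {"storage": "10Gi"}},
        "dataSource": {
            "name": "my-pvc-snapshot",
            "kind": "VolumeSnapshot",
            "apiGroup": "snapshot.storage.k8s.io",
        },
    },
}
core.create_namespaced_persistent_volume_claim(namespace=namespace, body=restored_pvc)
```

As with the console procedure, the restored PVC is independent of both the snapshot and the parent PVC once it reaches the Bound state.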
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/deploying_and_managing_openshift_container_storage_using_red_hat_openstack_platform/volume-snapshots_osp