Dataset schema: title (string, 4 to 168 characters), content (string, 7 to 1.74M characters), commands (sequence of strings, 1 to 5.62k items, may be null), url (string, 79 to 342 characters).
B.5. Add a Certificate to a Truststore Using Keytool
B.5. Add a Certificate to a Truststore Using Keytool Procedure B.3. Add a Certificate to a Truststore Using Keytool Run the keytool -import -alias ALIAS -file public.cert -storetype TYPE -keystore server.truststore command. If the specified truststore already exists, enter its existing password; otherwise, enter a new password. Enter yes when prompted to trust the certificate. Result The certificate in public.cert has been added to the truststore named server.truststore.
[ "keytool -import -alias teiid -file public.cert -storetype JKS -keystore server.truststore", "Enter keystore password: <password>", "Owner: CN=<user's name>, OU=<dept name>, O=<company name>, L=<city>, ST=<state>, C=<country> Issuer: CN=<user's name>, OU=<dept name>, O=<company name>, L=<city>, ST=<state>, C=<country> Serial number: 416d8636 Valid from: Fri Jul 31 14:47:02 CDT 2009 until: Sat Jul 31 14:47:02 CDT 2010 Certificate fingerprints: MD5: 22:4C:A4:9D:2E:C8:CA:E8:81:5D:81:35:A1:84:78:2F SHA1: 05:FE:43:CC:EA:39:DC:1C:1E:40:26:45:B7:12:1C:B9:22:1E:64:63 Trust this certificate? [no]: yes" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_1_client_development/add_a_certificate_to_a_truststore_using_keytool1
Chapter 23. Diagnosing virtual machine problems
Chapter 23. Diagnosing virtual machine problems When working with virtual machines (VMs), you may encounter problems with varying levels of severity. Some problems may have a quick and easy fix, while for others, you may have to capture VM-related data and logs to report or diagnose the problems. The following sections provide detailed information about generating logs and diagnosing some common VM problems, as well as about reporting these problems. 23.1. Generating libvirt debug logs To diagnose virtual machine (VM) problems, it is helpful to generate and review libvirt debug logs. Attaching debug logs is also useful when asking for support to resolve VM-related problems. The following sections explain what debug logs are , how you can set them to be persistent , enable them during runtime , and attach them when reporting problems. 23.1.1. Understanding libvirt debug logs Debug logs are text files that contain data about events that occur during virtual machine (VM) runtime. The logs provide information about fundamental server-side functionalities, such as host libraries and the libvirt daemon. The log files also contain the standard error output ( stderr ) of all running VMs. Debug logging is not enabled by default and has to be enabled when libvirt starts. You can enable logging for a single session or persistently . You can also enable logging when a libvirt daemon session is already running by modifying the daemon run-time settings . Attaching the libvirt debug logs is also useful when requesting support with a VM problem. 23.1.2. Enabling persistent settings for libvirt debug logs You can configure libvirt debug logging to be automatically enabled whenever libvirt starts. By default, virtqemud is the main libvirt daemon in RHEL 9. To make persistent changes in the libvirt configuration, you must edit the virtqemud.conf file, located in the /etc/libvirt directory. Note In some cases, for example when you upgrade from RHEL 8, libvirtd might still be the enabled libvirt daemon. In that case, you must edit the libvirtd.conf file instead. Procedure Open the virtqemud.conf file in an editor. Replace or set the filters according to your requirements. Table 23.1. Debugging filter values 1 logs all messages generated by libvirt. 2 logs all non-debugging information. 3 logs all warning and error messages. This is the default value. 4 logs only error messages. Example 23.1. Sample daemon settings for logging filters The following settings: Log all error and warning messages from the remote , util.json , and rpc layers Log only error messages from the event layer. Save the filtered logs to /var/log/libvirt/libvirt.log Save and exit. Restart the libvirt daemon. 23.1.3. Enabling libvirt debug logs during runtime You can modify the libvirt daemon's runtime settings to enable debug logs and save them to an output file. This is useful when restarting the libvirt daemon is not possible because restarting fixes the problem, or because there is another process, such as migration or backup, running at the same time. Modifying runtime settings is also useful if you want to try a command without editing the configuration files or restarting the daemon. Prerequisites Make sure the libvirt-admin package is installed. Procedure Optional: Back up the active set of log filters. Note It is recommended that you back up the active set of filters so that you can restore them after generating the logs. If you do not restore the filters, the messages will continue to be logged which may affect system performance. 
Use the virt-admin utility to enable debugging and set the filters according to your requirements. Table 23.2. Debugging filter values 1 logs all messages generated by libvirt. 2 logs all non-debugging information. 3 logs all warning and error messages. This is the default value. 4 logs only error messages. Example 23.2. Sample virt-admin setting for logging filters The following command: Logs all error and warning messages from the remote , util.json , and rpc layers Logs only error messages from the event layer. Use the virt-admin utility to save the logs to a specific file or directory. For example, the following command saves the log output to the libvirt.log file in the /var/log/libvirt/ directory. Optional: You can also remove the filters to generate a log file that contains all VM-related information. However, it is not recommended since this file may contain a large amount of redundant information produced by libvirt's modules. Use the virt-admin utility to specify an empty set of filters. Optional: Restore the filters to their original state using the backup file. Perform the second step with the saved values to restore the filters. 23.1.4. Attaching libvirt debug logs to support requests You may have to request additional support to diagnose and resolve virtual machine (VM) problems. Attaching the debug logs to the support request is highly recommended to ensure that the support team has access to all the information they need to provide a quick resolution of the VM-related problem. Procedure To report a problem and request support, open a support case . Based on the encountered problems, attach the following logs along with your report: For problems with the libvirt service, attach the /var/log/libvirt/libvirt.log file from the host. For problems with a specific VM, attach its respective log file. For example, for the testguest1 VM, attach the testguest1.log file, which can be found at /var/log/libvirt/qemu/testguest1.log . Additional resources How to provide log files to Red Hat Support? (Red Hat Knowledgebase) 23.2. Dumping a virtual machine core To analyze why a virtual machine (VM) crashed or malfunctioned, you can dump the VM core to a file on disk for later analysis and diagnostics. This section provides a brief introduction to core dumping and explains how you can dump a VM core to a specific file. 23.2.1. How virtual machine core dumping works A virtual machine (VM) requires numerous running processes to work accurately and efficiently. In some cases, a running VM may terminate unexpectedly or malfunction while you are using it. Restarting the VM may cause the data to be reset or lost, which makes it difficult to diagnose the exact problem that caused the VM to crash. In such cases, you can use the virsh dump utility to save (or dump ) the core of a VM to a file before you reboot the VM. The core dump file contains a raw physical memory image of the VM which contains detailed information about the VM. This information can be used to diagnose VM problems, either manually, or by using a tool such as the crash utility. Additional resources crash man page on your system The crash Github repository 23.2.2. Creating a virtual machine core dump file A virtual machine (VM) core dump contains detailed information about the state of a VM at any given time. This information, which is similar to a snapshot of the VM, can help you detect problems if a VM malfunctions or shuts down suddenly. Prerequisites Make sure you have sufficient disk space to save the file. 
Note that the space required for the core dump file depends on the amount of RAM allocated to the VM. Procedure Use the virsh dump utility. For example, the following command dumps the lander1 VM's core, its memory, and the CPU common register file to gargantua.file in the /core/file directory. Important The crash utility no longer supports the default file format of the virsh dump command. To analyze a core dump file by using crash, you must create the file with the --memory-only option. Additionally, you must use the --memory-only option when creating a core dump file to attach to a Red Hat Support Case. Troubleshooting If the virsh dump command fails with a System is deadlocked on memory error, ensure you are assigning sufficient memory for the core dump file. To do so, use the following crashkernel option value. Alternatively, do not use crashkernel at all, which assigns core dump memory automatically. Additional resources virsh dump --help command virsh man page on your system Opening a Support Case 23.3. Backtracing virtual machine processes When a process related to a virtual machine (VM) malfunctions, you can use the gstack command along with the process identifier (PID) to generate an execution stack trace of the malfunctioning process. If the process is part of a thread group, all of its threads are traced as well. Prerequisites Ensure that the GDB package is installed. For details about installing GDB and the available components, see Installing the GNU Debugger. Make sure you know the PID of the process that you want to backtrace. You can find the PID by using the pgrep command followed by the name of the process. For example: Procedure Use the gstack utility followed by the PID of the process you wish to backtrace. For example, the following command backtraces the libvirt process with the PID 22014. Additional resources gstack man page on your system GNU Debugger (GDB) Additional resources for reporting virtual machine problems and providing logs To request additional help and support, you can: Raise a service request by using the redhat-support-tool command-line utility, the Red Hat Portal UI, or FTP. To report problems and request support, see Open a Support Case. Upload the SOS Report and the log files when you submit a service request. This ensures that the Red Hat support engineer has all the necessary diagnostic information for reference. For more information about SOS reports, see the Red Hat Knowledgebase solution What is an SOS Report and how to create one in Red Hat Enterprise Linux? For information about attaching log files, see the Red Hat Knowledgebase solution How to provide files to Red Hat Support?
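The runtime-logging procedure above says to restore the saved filters by re-running the filter command with the backed-up values, but does not show that invocation. A minimal sketch, assuming the backup file was produced with the daemon-log-filters redirection shown in the commands that follow and that its contents begin with a "Logging filters:" label that must be stripped before reuse:

# Re-apply the previously backed-up filter set (the prefix handling may need adjusting to your backup file)
saved_filters=$(sed 's/^[[:space:]]*Logging filters:[[:space:]]*//' virt-filters-backup)
virt-admin -c virtqemud:///system daemon-log-filters "$saved_filters"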
[ "log_filters=\"3:remote 4:event 3:util.json 3:rpc\" log_outputs=\"1:file:/var/log/libvirt/libvirt.log\"", "systemctl restart virtqemud.service", "virt-admin -c virtqemud:///system daemon-log-filters >> virt-filters-backup", "virt-admin -c virtqemud:///system daemon-log-filters \"3:remote 4:event 3:util.json 3:rpc\"", "virt-admin -c virtqemud:///system daemon-log-outputs \"1:file:/var/log/libvirt/libvirt.log\"", "virt-admin -c virtqemud:///system daemon-log-filters Logging filters:", "virsh dump lander1 /core/file/gargantua.file --memory-only Domain 'lander1' dumped to /core/file/gargantua.file", "crashkernel=1G-4G:192M,4G-64G:256M,64G-:512M", "pgrep libvirt 22014 22025", "gstack 22014 Thread 3 (Thread 0x7f33edaf7700 (LWP 22017)): #0 0x00007f33f81aef21 in poll () from /lib64/libc.so.6 #1 0x00007f33f89059b6 in g_main_context_iterate.isra () from /lib64/libglib-2.0.so.0 #2 0x00007f33f8905d72 in g_main_loop_run () from /lib64/libglib-2.0.so.0" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_and_managing_virtualization/diagnosing-virtual-machine-problems_configuring-and-managing-virtualization
Chapter 8. Deploying on OpenStack with rootVolume and etcd on local disk
Chapter 8. Deploying on OpenStack with rootVolume and etcd on local disk Important Deploying on Red Hat OpenStack Platform (RHOSP) with rootVolume and etcd on local disk is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . As a day 2 operation, you can resolve and prevent performance issues of your Red Hat OpenStack Platform (RHOSP) installation by moving etcd from a root volume (provided by OpenStack Cinder) to a dedicated ephemeral local disk. 8.1. Deploying RHOSP on local disk If you have an existing RHOSP cloud, you can move etcd from that cloud to a dedicated ephemeral local disk. Warning This procedure is for testing etcd on a local disk only and should not be used on production clusters. In certain cases, complete loss of the control plane can occur. For more information, see "Overview of backup and restore operation" under "Backup and restore". Prerequisites You have an OpenStack cloud with a working Cinder. Your OpenStack cloud has at least 75 GB of available storage to accommodate 3 root volumes for the OpenShift control plane. The OpenStack cloud is deployed with Nova ephemeral storage that uses a local storage backend and not rbd . Procedure Create a Nova flavor for the control plane with at least 10 GB of ephemeral disk by running the following command, replacing the values for --ram , --disk , and <flavor_name> based on your environment: USD openstack flavor create --<ram 16384> --<disk 0> --ephemeral 10 --vcpus 4 <flavor_name> Deploy a cluster with root volumes for the control plane; for example: Example YAML file # ... controlPlane: name: master platform: openstack: type: USD{CONTROL_PLANE_FLAVOR} rootVolume: size: 25 types: - USD{CINDER_TYPE} replicas: 3 # ... Deploy the cluster you created by running the following command: USD openshift-install create cluster --dir <installation_directory> 1 1 For <installation_directory> , specify the location of the customized ./install-config.yaml file that you previously created. Verify that the cluster you deployed is healthy before proceeding to the step by running the following command: USD oc wait clusteroperators --all --for=condition=Progressing=false 1 1 Ensures that the cluster operators are finished progressing and that the cluster is not deploying or updating. Edit the ControlPlaneMachineSet (CPMS) to add the additional block ephemeral device that is used by etcd by running the following command: USD oc patch ControlPlaneMachineSet/cluster -n openshift-machine-api --type json -p ' 1 [ { "op": "add", "path": "/spec/template/machines_v1beta1_machine_openshift_io/spec/providerSpec/value/additionalBlockDevices", 2 "value": [ { "name": "etcd", "sizeGiB": 10, "storage": { "type": "Local" 3 } } ] } ] ' 1 Applies the JSON patch to the ControlPlaneMachineSet custom resource (CR). 2 Specifies the path where the additionalBlockDevices are added. 3 Adds the etcd devices with at least local storage of 10 GB to the cluster. You can specify values greater than 10 GB as long as the etcd device fits the Nova flavor. 
For example, if the Nova flavor has 15 GB, you can create the etcd device with 12 GB. Verify that the control plane machines are healthy by using the following steps: Wait for the control plane machine set update to finish by running the following command: USD oc wait --timeout=90m --for=condition=Progressing=false controlplanemachineset.machine.openshift.io -n openshift-machine-api cluster Verify that the 3 control plane machine sets are updated by running the following command: USD oc wait --timeout=90m --for=jsonpath='{.status.updatedReplicas}'=3 controlplanemachineset.machine.openshift.io -n openshift-machine-api cluster Verify that the 3 control plane machine sets are healthy by running the following command: USD oc wait --timeout=90m --for=jsonpath='{.status.replicas}'=3 controlplanemachineset.machine.openshift.io -n openshift-machine-api cluster Verify that the ClusterOperators are not progressing in the cluster by running the following command: USD oc wait clusteroperators --timeout=30m --all --for=condition=Progressing=false Verify that each of the 3 control plane machines has the additional block device you previously created by running the following script: USD cp_machines=USD(oc get machines -n openshift-machine-api --selector='machine.openshift.io/cluster-api-machine-role=master' --no-headers -o custom-columns=NAME:.metadata.name) 1 if [[ USD(echo "USD{cp_machines}" | wc -l) -ne 3 ]]; then exit 1 fi 2 for machine in USD{cp_machines}; do if ! oc get machine -n openshift-machine-api "USD{machine}" -o jsonpath='{.spec.providerSpec.value.additionalBlockDevices}' | grep -q 'etcd'; then exit 1 fi 3 done 1 Retrieves the control plane machines running in the cluster. 2 Iterates over machines which have an additionalBlockDevices entry with the name etcd . 3 Outputs the name of every control plane machine which has an additionalBlockDevice named etcd . Create a file named 98-var-lib-etcd.yaml by using the following YAML file: Warning This procedure is for testing etcd on a local disk and should not be used on a production cluster. In certain cases, complete loss of the control plane can occur. For more information, see "Overview of backup and restore operation" under "Backup and restore". 
apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 98-var-lib-etcd spec: config: ignition: version: 3.4.0 systemd: units: - contents: | [Unit] Description=Mount local-etcd to /var/lib/etcd [Mount] What=/dev/disk/by-label/local-etcd 1 Where=/var/lib/etcd Type=xfs Options=defaults,prjquota [Install] WantedBy=local-fs.target enabled: true name: var-lib-etcd.mount - contents: | [Unit] Description=Create local-etcd filesystem DefaultDependencies=no After=local-fs-pre.target ConditionPathIsSymbolicLink=!/dev/disk/by-label/local-etcd 2 [Service] Type=oneshot RemainAfterExit=yes ExecStart=/bin/bash -c "[ -L /dev/disk/by-label/ephemeral0 ] || ( >&2 echo Ephemeral disk does not exist; /usr/bin/false )" ExecStart=/usr/sbin/mkfs.xfs -f -L local-etcd /dev/disk/by-label/ephemeral0 3 [Install] RequiredBy=dev-disk-by\x2dlabel-local\x2detcd.device enabled: true name: create-local-etcd.service - contents: | [Unit] Description=Migrate existing data to local etcd After=var-lib-etcd.mount Before=crio.service 4 Requisite=var-lib-etcd.mount ConditionPathExists=!/var/lib/etcd/member ConditionPathIsDirectory=/sysroot/ostree/deploy/rhcos/var/lib/etcd/member 5 [Service] Type=oneshot RemainAfterExit=yes ExecStart=/bin/bash -c "if [ -d /var/lib/etcd/member.migrate ]; then rm -rf /var/lib/etcd/member.migrate; fi" 6 ExecStart=/usr/bin/cp -aZ /sysroot/ostree/deploy/rhcos/var/lib/etcd/member/ /var/lib/etcd/member.migrate ExecStart=/usr/bin/mv /var/lib/etcd/member.migrate /var/lib/etcd/member 7 [Install] RequiredBy=var-lib-etcd.mount enabled: true name: migrate-to-local-etcd.service - contents: | [Unit] Description=Relabel /var/lib/etcd After=migrate-to-local-etcd.service Before=crio.service [Service] Type=oneshot RemainAfterExit=yes ExecCondition=/bin/bash -c "[ -n \"USD(restorecon -nv /var/lib/etcd)\" ]" 8 ExecStart=/usr/sbin/restorecon -R /var/lib/etcd [Install] RequiredBy=var-lib-etcd.mount enabled: true name: relabel-var-lib-etcd.service 1 The etcd database must be mounted by the device, not a label, to ensure that systemd generates the device dependency used in this config to trigger filesystem creation. 2 Do not run if the file system dev/disk/by-label/local-etcd already exists. 3 Fails with an alert message if /dev/disk/by-label/ephemeral0 doesn't exist. 4 Migrates existing data to local etcd database. This config does so after /var/lib/etcd is mounted, but before CRI-O starts so etcd is not running yet. 5 Requires that etcd is mounted and does not contain a member directory, but the ostree does. 6 Cleans up any migration state. 7 Copies and moves in separate steps to ensure atomic creation of a complete member directory. 8 Performs a quick check of the mount point directory before performing a full recursive relabel. If restorecon in the file path /var/lib/etcd cannot rename the directory, the recursive rename is not performed. Create the new MachineConfig object by running the following command: USD oc create -f 98-var-lib-etcd.yaml Note Moving the etcd database onto the local disk of each control plane machine takes time. 
Verify that the etcd database has been transferred to the local disk of each control plane machine by running the following commands: Verify that the master machine config pool has finished updating by running the following command: $ oc wait --timeout=45m --for=condition=Updating=false machineconfigpool/master Verify that the cluster is ready by running the following command: $ oc wait node --selector='node-role.kubernetes.io/master' --for condition=Ready --timeout=30s Verify that the cluster Operators are not progressing in the cluster by running the following command: $ oc wait clusteroperators --timeout=30m --all --for=condition=Progressing=false 8.2. Additional resources Recommended etcd practices Overview of backup and restore options
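To additionally confirm at the node level that etcd now resides on the ephemeral local disk rather than the root volume, the mount can be inspected directly. A minimal sketch, assuming cluster-admin access; <control_plane_node> is a placeholder for one of your control plane node names:

$ oc debug node/<control_plane_node> -- chroot /host findmnt /var/lib/etcd
# The SOURCE column is expected to show the device labeled local-etcd instead of the Cinder root volume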
[ "openstack flavor create --<ram 16384> --<disk 0> --ephemeral 10 --vcpus 4 <flavor_name>", "controlPlane: name: master platform: openstack: type: USD{CONTROL_PLANE_FLAVOR} rootVolume: size: 25 types: - USD{CINDER_TYPE} replicas: 3", "openshift-install create cluster --dir <installation_directory> 1", "oc wait clusteroperators --all --for=condition=Progressing=false 1", "oc patch ControlPlaneMachineSet/cluster -n openshift-machine-api --type json -p ' 1 [ { \"op\": \"add\", \"path\": \"/spec/template/machines_v1beta1_machine_openshift_io/spec/providerSpec/value/additionalBlockDevices\", 2 \"value\": [ { \"name\": \"etcd\", \"sizeGiB\": 10, \"storage\": { \"type\": \"Local\" 3 } } ] } ] '", "oc wait --timeout=90m --for=condition=Progressing=false controlplanemachineset.machine.openshift.io -n openshift-machine-api cluster", "oc wait --timeout=90m --for=jsonpath='{.status.updatedReplicas}'=3 controlplanemachineset.machine.openshift.io -n openshift-machine-api cluster", "oc wait --timeout=90m --for=jsonpath='{.status.replicas}'=3 controlplanemachineset.machine.openshift.io -n openshift-machine-api cluster", "oc wait clusteroperators --timeout=30m --all --for=condition=Progressing=false", "cp_machines=USD(oc get machines -n openshift-machine-api --selector='machine.openshift.io/cluster-api-machine-role=master' --no-headers -o custom-columns=NAME:.metadata.name) 1 if [[ USD(echo \"USD{cp_machines}\" | wc -l) -ne 3 ]]; then exit 1 fi 2 for machine in USD{cp_machines}; do if ! oc get machine -n openshift-machine-api \"USD{machine}\" -o jsonpath='{.spec.providerSpec.value.additionalBlockDevices}' | grep -q 'etcd'; then exit 1 fi 3 done", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 98-var-lib-etcd spec: config: ignition: version: 3.4.0 systemd: units: - contents: | [Unit] Description=Mount local-etcd to /var/lib/etcd [Mount] What=/dev/disk/by-label/local-etcd 1 Where=/var/lib/etcd Type=xfs Options=defaults,prjquota [Install] WantedBy=local-fs.target enabled: true name: var-lib-etcd.mount - contents: | [Unit] Description=Create local-etcd filesystem DefaultDependencies=no After=local-fs-pre.target ConditionPathIsSymbolicLink=!/dev/disk/by-label/local-etcd 2 [Service] Type=oneshot RemainAfterExit=yes ExecStart=/bin/bash -c \"[ -L /dev/disk/by-label/ephemeral0 ] || ( >&2 echo Ephemeral disk does not exist; /usr/bin/false )\" ExecStart=/usr/sbin/mkfs.xfs -f -L local-etcd /dev/disk/by-label/ephemeral0 3 [Install] RequiredBy=dev-disk-by\\x2dlabel-local\\x2detcd.device enabled: true name: create-local-etcd.service - contents: | [Unit] Description=Migrate existing data to local etcd After=var-lib-etcd.mount Before=crio.service 4 Requisite=var-lib-etcd.mount ConditionPathExists=!/var/lib/etcd/member ConditionPathIsDirectory=/sysroot/ostree/deploy/rhcos/var/lib/etcd/member 5 [Service] Type=oneshot RemainAfterExit=yes ExecStart=/bin/bash -c \"if [ -d /var/lib/etcd/member.migrate ]; then rm -rf /var/lib/etcd/member.migrate; fi\" 6 ExecStart=/usr/bin/cp -aZ /sysroot/ostree/deploy/rhcos/var/lib/etcd/member/ /var/lib/etcd/member.migrate ExecStart=/usr/bin/mv /var/lib/etcd/member.migrate /var/lib/etcd/member 7 [Install] RequiredBy=var-lib-etcd.mount enabled: true name: migrate-to-local-etcd.service - contents: | [Unit] Description=Relabel /var/lib/etcd After=migrate-to-local-etcd.service Before=crio.service [Service] Type=oneshot RemainAfterExit=yes ExecCondition=/bin/bash -c \"[ -n \\\"USD(restorecon -nv 
/var/lib/etcd)\\\" ]\" 8 ExecStart=/usr/sbin/restorecon -R /var/lib/etcd [Install] RequiredBy=var-lib-etcd.mount enabled: true name: relabel-var-lib-etcd.service", "oc create -f 98-var-lib-etcd.yaml", "oc wait --timeout=45m --for=condition=Updating=false machineconfigpool/master", "oc wait node --selector='node-role.kubernetes.io/master' --for condition=Ready --timeout=30s", "oc wait clusteroperators --timeout=30m --all --for=condition=Progressing=false" ]
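Because the etcd block device has to fit within the flavor's ephemeral disk, it can be worth re-checking the flavor before patching the ControlPlaneMachineSet. A minimal sketch, assuming an OpenStack CLI session against the target cloud; <flavor_name> is the flavor created for the control plane:

$ openstack flavor show <flavor_name> -c ram -c disk -c OS-FLV-EXT-DATA:ephemeral
# The ephemeral value must be at least as large as the sizeGiB requested for the etcd device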
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/installing_on_openstack/deploying-openstack-on-local-disk
Chapter 8. Network requirements
Chapter 8. Network requirements OpenShift Data Foundation requires that at least one network interface used for the cluster network be capable of at least 10 gigabit network speeds. This section further covers different network considerations for planning deployments. 8.1. IPv6 support Red Hat OpenShift Data Foundation version 4.12 introduced support for IPv6. IPv6 is supported in single stack only, and cannot be used simultaneously with IPv4. IPv6 is the default behavior in OpenShift Data Foundation when IPv6 is turned on in OpenShift Container Platform. Red Hat OpenShift Data Foundation version 4.14 introduced IPv6 auto-detection and configuration. Clusters using IPv6 are automatically configured accordingly. OpenShift Container Platform dual stack with Red Hat OpenShift Data Foundation IPv4 is supported in version 4.13 and later. Dual stack with Red Hat OpenShift Data Foundation IPv6 is not supported. 8.2. Multi network plug-in (Multus) support OpenShift Data Foundation supports the ability to use the Multus multi-network plug-in on bare-metal infrastructures to improve security and performance by isolating the different types of network traffic. By using Multus, one or more network interfaces on hosts can be reserved for the exclusive use of OpenShift Data Foundation. To use Multus, first run the Multus prerequisite validation tool. For instructions to use the tool, see OpenShift Data Foundation - Multus prerequisite validation tool. For more information about Multus networks, see Multiple networks. 8.2.1. Segregating storage traffic using Multus By default, Red Hat OpenShift Data Foundation is configured to use the Red Hat OpenShift Software Defined Network (SDN). The default SDN carries the following types of traffic: Pod-to-pod traffic Pod-to-storage traffic, known as public network traffic when the storage is OpenShift Data Foundation OpenShift Data Foundation internal replication and rebalancing traffic, known as cluster network traffic There are three ways to segregate OpenShift Data Foundation from the OpenShift default network: Reserve a network interface on the host for the public network of OpenShift Data Foundation Pod-to-storage and internal storage replication traffic coexist on a network that is isolated from pod-to-pod network traffic. Application pods have access to the maximum public network storage bandwidth when the OpenShift Data Foundation cluster is healthy. When the OpenShift Data Foundation cluster is recovering from failure, the application pods have reduced bandwidth due to ongoing replication and rebalancing traffic. Reserve a network interface on the host for OpenShift Data Foundation's cluster network Pod-to-pod and pod-to-storage traffic both continue to use OpenShift's default network. Pod-to-storage bandwidth is less affected by the health of the OpenShift Data Foundation cluster. Pod-to-pod and pod-to-storage OpenShift Data Foundation traffic might contend for network bandwidth in busy OpenShift clusters. The internal storage network often has an overabundance of unused bandwidth, which is reserved for use during failures. Reserve two network interfaces on the host for OpenShift Data Foundation: one for the public network and one for the cluster network Pod-to-pod, pod-to-storage, and storage internal traffic are all isolated, and none of the traffic types contend for resources. Service level agreements for all traffic types are easier to ensure. During healthy runtime, more network bandwidth is reserved but unused across all three networks. Figure: Dual network interface segregated configuration schematic example. Figure: Triple network interface fully segregated configuration schematic example. 8.2.2. When to use Multus Use Multus for OpenShift Data Foundation when you need the following: Improved latency - Multus with ODF always improves latency. Use host interfaces at near-host network speeds and bypass OpenShift's software-defined Pod network. You can also perform Linux tuning at the level of each individual interface. Improved bandwidth - Dedicated interfaces for OpenShift Data Foundation client data traffic and internal data traffic. These dedicated interfaces reserve full bandwidth. Improved security - Multus isolates storage network traffic from application network traffic for added security. Bandwidth or performance might not be isolated when networks share an interface; however, you can use QoS or traffic shaping to prioritize bandwidth on shared interfaces. 8.2.3. Multus configuration To use Multus, you must create network attachment definitions (NADs) before deploying the OpenShift Data Foundation cluster; the NADs are later attached to the cluster. For more information, see Creating network attachment definitions. To attach additional network interfaces to a pod, you must create configurations that define how the interfaces are attached. You specify each interface by using a NetworkAttachmentDefinition custom resource (CR). A Container Network Interface (CNI) configuration inside each of these CRs defines how that interface is created. OpenShift Data Foundation supports two types of drivers. The following describes the drivers and their features: macvlan (recommended): Each connection gets a sub-interface of the parent interface with its own MAC address and is isolated from the host network. Uses less CPU and provides better throughput than Linux bridge or ipvlan. Almost always requires bridge mode. Gives near-host performance when the network interface card (NIC) supports virtual ports/virtual local area networks (VLANs) in hardware. ipvlan: Each connection gets its own IP address and shares the same MAC address. L2 mode is analogous to macvlan bridge mode; L3 mode is analogous to a router existing on the parent interface. L3 is useful for Border Gateway Protocol (BGP); otherwise use macvlan for reduced CPU and better throughput. If the NIC does not support VLANs in hardware, performance might be better than macvlan. OpenShift Data Foundation supports the following two types of IP address management: whereabouts: Uses OpenShift/Kubernetes leases to select unique IP addresses per Pod. Does not require a DHCP server to provide IPs for Pods. DHCP: Does not require a range field. A network DHCP server can give out the same range to Multus Pods as well as to any other hosts on the same network. Caution If there is a DHCP server, ensure that the Multus-configured IPAM does not give out the same range, so that multiple MAC addresses on the network cannot have the same IP. 8.2.4. Requirements for Multus configuration Prerequisites The interface used for the public network must have the same interface name on each OpenShift storage and worker node, and the interfaces must all be connected to the same underlying network. The interface used for the cluster network must have the same interface name on each OpenShift storage node, and the interfaces must all be connected to the same underlying network. Cluster network interfaces do not have to be present on the OpenShift worker nodes. Each network interface used for the public or cluster network must be capable of at least 10 gigabit network speeds. Each network requires a separate virtual local area network (VLAN) or subnet. See Creating Multus networks for the necessary steps to configure a Multus-based configuration on bare metal.
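As a concrete illustration of the configuration described above, the following is a minimal sketch of a macvlan NetworkAttachmentDefinition that uses whereabouts IPAM. The interface name eth1, the NAD name, the namespace, and the address range are hypothetical placeholders to adapt to your environment; the Creating Multus networks procedure remains the authoritative reference:

$ oc apply -f - <<EOF
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: ocs-public
  namespace: openshift-storage
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "eth1",
      "mode": "bridge",
      "ipam": {
        "type": "whereabouts",
        "range": "192.168.20.0/24"
      }
    }
EOF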
null
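Since the prerequisites call for identically named interfaces with at least 10 gigabit speeds on every participating node, a quick per-node check can catch mismatches early. A minimal sketch, assuming oc debug access to the nodes and the hypothetical interface name eth1:

$ oc debug node/<node_name> -- chroot /host ethtool eth1 | grep -E 'Speed|Link detected'
# Expect Speed: 10000Mb/s (or higher) and Link detected: yes on every storage and worker node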
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/planning_your_deployment/network-requirements_rhodf
Builds using BuildConfig
Builds using BuildConfig OpenShift Container Platform 4.16 Builds Red Hat OpenShift Documentation Team
[ "kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: \"ruby-sample-build\" 1 spec: runPolicy: \"Serial\" 2 triggers: 3 - type: \"GitHub\" github: secret: \"secret101\" - type: \"Generic\" generic: secret: \"secret101\" - type: \"ImageChange\" source: 4 git: uri: \"https://github.com/openshift/ruby-hello-world\" strategy: 5 sourceStrategy: from: kind: \"ImageStreamTag\" name: \"ruby-20-centos7:latest\" output: 6 to: kind: \"ImageStreamTag\" name: \"origin-ruby-sample:latest\" postCommit: 7 script: \"bundle exec rake test\"", "source: git: uri: https://github.com/openshift/ruby-hello-world.git 1 ref: \"master\" images: - from: kind: ImageStreamTag name: myinputimage:latest namespace: mynamespace paths: - destinationDir: app/dir/injected/dir 2 sourcePath: /usr/lib/somefile.jar contextDir: \"app/dir\" 3 dockerfile: \"FROM centos:7\\nRUN yum install -y httpd\" 4", "source: dockerfile: \"FROM centos:7\\nRUN yum install -y httpd\" 1", "source: git: uri: https://github.com/openshift/ruby-hello-world.git ref: \"master\" images: 1 - from: 2 kind: ImageStreamTag name: myinputimage:latest namespace: mynamespace paths: 3 - destinationDir: injected/dir 4 sourcePath: /usr/lib/somefile.jar 5 - from: kind: ImageStreamTag name: myotherinputimage:latest namespace: myothernamespace pullSecret: mysecret 6 paths: - destinationDir: injected/dir sourcePath: /usr/lib/somefile.jar", "oc secrets link builder dockerhub", "source: git: 1 uri: \"https://github.com/openshift/ruby-hello-world\" ref: \"master\" contextDir: \"app/dir\" 2 dockerfile: \"FROM openshift/ruby-22-centos7\\nUSER example\" 3", "source: git: uri: \"https://github.com/openshift/ruby-hello-world\" ref: \"master\" httpProxy: http://proxy.example.com httpsProxy: https://proxy.example.com noProxy: somedomain.com, otherdomain.com", "oc annotate secret mysecret 'build.openshift.io/source-secret-match-uri-1=ssh://bitbucket.atlassian.com:7999/*'", "kind: Secret apiVersion: v1 metadata: name: matches-all-corporate-servers-https-only annotations: build.openshift.io/source-secret-match-uri-1: https://*.mycorp.com/* data: --- kind: Secret apiVersion: v1 metadata: name: override-for-my-dev-servers-https-only annotations: build.openshift.io/source-secret-match-uri-1: https://mydev1.mycorp.com/* build.openshift.io/source-secret-match-uri-2: https://mydev2.mycorp.com/* data:", "oc annotate secret mysecret 'build.openshift.io/source-secret-match-uri-1=https://*.mycorp.com/*'", "apiVersion: \"build.openshift.io/v1\" kind: \"BuildConfig\" metadata: name: \"sample-build\" spec: output: to: kind: \"ImageStreamTag\" name: \"sample-image:latest\" source: git: uri: \"https://github.com/user/app.git\" sourceSecret: name: \"basicsecret\" strategy: sourceStrategy: from: kind: \"ImageStreamTag\" name: \"python-33-centos7:latest\"", "oc set build-secret --source bc/sample-build basicsecret", "oc create secret generic <secret_name> --from-file=<path/to/.gitconfig>", "[http] sslVerify=false", "cat .gitconfig", "[user] name = <name> email = <email> [http] sslVerify = false sslCert = /var/run/secrets/openshift.io/source/client.crt sslKey = /var/run/secrets/openshift.io/source/client.key sslCaInfo = /var/run/secrets/openshift.io/source/cacert.crt", "oc create secret generic <secret_name> --from-literal=username=<user_name> \\ 1 --from-literal=password=<password> \\ 2 --from-file=.gitconfig=.gitconfig --from-file=client.crt=/var/run/secrets/openshift.io/source/client.crt --from-file=cacert.crt=/var/run/secrets/openshift.io/source/cacert.crt 
--from-file=client.key=/var/run/secrets/openshift.io/source/client.key", "oc create secret generic <secret_name> --from-literal=username=<user_name> --from-literal=password=<password> --type=kubernetes.io/basic-auth", "oc create secret generic <secret_name> --from-literal=password=<token> --type=kubernetes.io/basic-auth", "ssh-keygen -t ed25519 -C \"[email protected]\"", "oc create secret generic <secret_name> --from-file=ssh-privatekey=<path/to/ssh/private/key> --from-file=<path/to/known_hosts> \\ 1 --type=kubernetes.io/ssh-auth", "cat intermediateCA.crt intermediateCA.crt rootCA.crt > ca.crt", "oc create secret generic mycert --from-file=ca.crt=</path/to/file> 1", "oc create secret generic <secret_name> --from-file=ssh-privatekey=<path/to/ssh/private/key> --from-file=<path/to/.gitconfig> --type=kubernetes.io/ssh-auth", "oc create secret generic <secret_name> --from-file=ca.crt=<path/to/certificate> --from-file=<path/to/.gitconfig>", "oc create secret generic <secret_name> --from-literal=username=<user_name> --from-literal=password=<password> --from-file=ca-cert=</path/to/file> --type=kubernetes.io/basic-auth", "oc create secret generic <secret_name> --from-literal=username=<user_name> --from-literal=password=<password> --from-file=</path/to/.gitconfig> --type=kubernetes.io/basic-auth", "oc create secret generic <secret_name> --from-literal=username=<user_name> --from-literal=password=<password> --from-file=</path/to/.gitconfig> --from-file=ca-cert=</path/to/file> --type=kubernetes.io/basic-auth", "apiVersion: v1 kind: Secret metadata: name: test-secret namespace: my-namespace type: Opaque 1 data: 2 username: <username> 3 password: <password> stringData: 4 hostname: myapp.mydomain.com 5", "oc create -f <filename>", "oc create secret generic dockerhub --from-file=.dockerconfigjson=<path/to/.docker/config.json> --type=kubernetes.io/dockerconfigjson", "apiVersion: v1 kind: Secret metadata: name: mysecret type: Opaque 1 data: username: <username> password: <password>", "apiVersion: v1 kind: Secret metadata: name: aregistrykey namespace: myapps type: kubernetes.io/dockerconfigjson 1 data: .dockerconfigjson:bm5ubm5ubm5ubm5ubm5ubm5ubm5ubmdnZ2dnZ2dnZ2dnZ2dnZ2dnZ2cgYXV0aCBrZXlzCg== 2", "oc create -f <your_yaml_file>.yaml", "oc logs secret-example-pod", "oc delete pod secret-example-pod", "apiVersion: v1 kind: Secret metadata: name: test-secret data: username: <username> 1 password: <password> 2 stringData: hostname: myapp.mydomain.com 3 secret.properties: |- 4 property1=valueA property2=valueB", "apiVersion: v1 kind: Pod metadata: name: secret-example-pod spec: containers: - name: secret-test-container image: busybox command: [ \"/bin/sh\", \"-c\", \"cat /etc/secret-volume/*\" ] volumeMounts: # name must match the volume name below - name: secret-volume mountPath: /etc/secret-volume readOnly: true volumes: - name: secret-volume secret: secretName: test-secret restartPolicy: Never", "apiVersion: v1 kind: Pod metadata: name: secret-example-pod spec: containers: - name: secret-test-container image: busybox command: [ \"/bin/sh\", \"-c\", \"export\" ] env: - name: TEST_SECRET_USERNAME_ENV_VAR valueFrom: secretKeyRef: name: test-secret key: username restartPolicy: Never", "apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: secret-example-bc spec: strategy: sourceStrategy: env: - name: TEST_SECRET_USERNAME_ENV_VAR valueFrom: secretKeyRef: name: test-secret key: username", "oc create configmap settings-mvn --from-file=settings.xml=<path/to/settings.xml>", "apiVersion: core/v1 kind: 
ConfigMap metadata: name: settings-mvn data: settings.xml: | <settings> ... # Insert maven settings here </settings>", "oc create secret generic secret-mvn --from-file=ssh-privatekey=<path/to/.ssh/id_rsa> --type=kubernetes.io/ssh-auth", "apiVersion: core/v1 kind: Secret metadata: name: secret-mvn type: kubernetes.io/ssh-auth data: ssh-privatekey: | # Insert ssh private key, base64 encoded", "source: git: uri: https://github.com/wildfly/quickstart.git contextDir: helloworld configMaps: - configMap: name: settings-mvn secrets: - secret: name: secret-mvn", "oc new-build openshift/wildfly-101-centos7~https://github.com/wildfly/quickstart.git --context-dir helloworld --build-secret \"secret-mvn\" --build-config-map \"settings-mvn\"", "source: git: uri: https://github.com/wildfly/quickstart.git contextDir: helloworld configMaps: - configMap: name: settings-mvn destinationDir: \".m2\" secrets: - secret: name: secret-mvn destinationDir: \".ssh\"", "oc new-build openshift/wildfly-101-centos7~https://github.com/wildfly/quickstart.git --context-dir helloworld --build-secret \"secret-mvn:.ssh\" --build-config-map \"settings-mvn:.m2\"", "FROM centos/ruby-22-centos7 USER root COPY ./secret-dir /secrets COPY ./config / Create a shell script that will output secrets and ConfigMaps when the image is run RUN echo '#!/bin/sh' > /input_report.sh RUN echo '(test -f /secrets/secret1 && echo -n \"secret1=\" && cat /secrets/secret1)' >> /input_report.sh RUN echo '(test -f /config && echo -n \"relative-configMap=\" && cat /config)' >> /input_report.sh RUN chmod 755 /input_report.sh CMD [\"/bin/sh\", \"-c\", \"/input_report.sh\"]", "#!/bin/sh APP_VERSION=1.0 wget http://repository.example.com/app/app-USDAPP_VERSION.jar -O app.jar", "#!/bin/sh exec java -jar app.jar", "FROM jboss/base-jdk:8 ENV APP_VERSION 1.0 RUN wget http://repository.example.com/app/app-USDAPP_VERSION.jar -O app.jar EXPOSE 8080 CMD [ \"java\", \"-jar\", \"app.jar\" ]", "auths: index.docker.io/v1/: 1 auth: \"YWRfbGzhcGU6R2labnRib21ifTE=\" 2 email: \"[email protected]\" 3 docker.io/my-namespace/my-user/my-image: 4 auth: \"GzhYWRGU6R2fbclabnRgbkSp=\"\" email: \"[email protected]\" docker.io/my-namespace: 5 auth: \"GzhYWRGU6R2deesfrRgbkSp=\"\" email: \"[email protected]\"", "oc create secret generic dockerhub --from-file=.dockerconfigjson=<path/to/.docker/config.json> --type=kubernetes.io/dockerconfigjson", "spec: output: to: kind: \"DockerImage\" name: \"private.registry.com/org/private-image:latest\" pushSecret: name: \"dockerhub\"", "oc set build-secret --push bc/sample-build dockerhub", "oc secrets link builder dockerhub", "strategy: sourceStrategy: from: kind: \"DockerImage\" name: \"docker.io/user/private_repository\" pullSecret: name: \"dockerhub\"", "oc set build-secret --pull bc/sample-build dockerhub", "oc secrets link builder dockerhub", "env: - name: FIELDREF_ENV valueFrom: fieldRef: fieldPath: metadata.name", "apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: secret-example-bc spec: strategy: sourceStrategy: env: - name: MYVAL valueFrom: secretKeyRef: key: myval name: mysecret", "spec: output: to: kind: \"ImageStreamTag\" name: \"sample-image:latest\"", "spec: output: to: kind: \"DockerImage\" name: \"my-registry.mycompany.com:5000/myimages/myimage:tag\"", "spec: output: to: kind: \"ImageStreamTag\" name: \"my-image:latest\" imageLabels: - name: \"vendor\" value: \"MyCompany\" - name: \"authoritative-source-url\" value: \"registry.mycompany.com\"", "strategy: dockerStrategy: from: kind: \"ImageStreamTag\" name: 
\"debian:latest\"", "strategy: dockerStrategy: dockerfilePath: dockerfiles/app1/Dockerfile", "dockerStrategy: env: - name: \"HTTP_PROXY\" value: \"http://myproxy.net:5187/\"", "dockerStrategy: buildArgs: - name: \"version\" value: \"latest\"", "strategy: dockerStrategy: imageOptimizationPolicy: SkipLayers", "spec: dockerStrategy: volumes: - name: secret-mvn 1 mounts: - destinationPath: /opt/app-root/src/.ssh 2 source: type: Secret 3 secret: secretName: my-secret 4 - name: settings-mvn 5 mounts: - destinationPath: /opt/app-root/src/.m2 6 source: type: ConfigMap 7 configMap: name: my-config 8 - name: my-csi-volume 9 mounts: - destinationPath: /opt/app-root/src/some_path 10 source: type: CSI 11 csi: driver: csi.sharedresource.openshift.io 12 readOnly: true 13 volumeAttributes: 14 attribute: value", "strategy: sourceStrategy: from: kind: \"ImageStreamTag\" name: \"incremental-image:latest\" 1 incremental: true 2", "strategy: sourceStrategy: from: kind: \"ImageStreamTag\" name: \"builder-image:latest\" scripts: \"http://somehost.com/scripts_directory\" 1", "sourceStrategy: env: - name: \"DISABLE_ASSET_COMPILATION\" value: \"true\"", "#!/bin/bash restore build artifacts if [ \"USD(ls /tmp/s2i/artifacts/ 2>/dev/null)\" ]; then mv /tmp/s2i/artifacts/* USDHOME/. fi move the application source mv /tmp/s2i/src USDHOME/src build application artifacts pushd USD{HOME} make all install the artifacts make install popd", "#!/bin/bash run the application /opt/application/run.sh", "#!/bin/bash pushd USD{HOME} if [ -d deps ]; then # all deps contents to tar stream tar cf - deps fi popd", "#!/bin/bash inform the user how to use the image cat <<EOF This is a S2I sample builder image, to use it, install https://github.com/openshift/source-to-image EOF", "spec: sourceStrategy: volumes: - name: secret-mvn 1 mounts: - destinationPath: /opt/app-root/src/.ssh 2 source: type: Secret 3 secret: secretName: my-secret 4 - name: settings-mvn 5 mounts: - destinationPath: /opt/app-root/src/.m2 6 source: type: ConfigMap 7 configMap: name: my-config 8 - name: my-csi-volume 9 mounts: - destinationPath: /opt/app-root/src/some_path 10 source: type: CSI 11 csi: driver: csi.sharedresource.openshift.io 12 readOnly: true 13 volumeAttributes: 14 attribute: value", "strategy: customStrategy: from: kind: \"DockerImage\" name: \"openshift/sti-image-builder\"", "strategy: customStrategy: secrets: - secretSource: 1 name: \"secret1\" mountPath: \"/tmp/secret1\" 2 - secretSource: name: \"secret2\" mountPath: \"/tmp/secret2\"", "customStrategy: env: - name: \"HTTP_PROXY\" value: \"http://myproxy.net:5187/\"", "oc set env <enter_variables>", "kind: \"BuildConfig\" apiVersion: \"v1\" metadata: name: \"sample-pipeline\" spec: strategy: jenkinsPipelineStrategy: jenkinsfile: |- node('agent') { stage 'build' openshiftBuild(buildConfig: 'ruby-sample-build', showBuildLogs: 'true') stage 'deploy' openshiftDeploy(deploymentConfig: 'frontend') }", "kind: \"BuildConfig\" apiVersion: \"v1\" metadata: name: \"sample-pipeline\" spec: source: git: uri: \"https://github.com/openshift/ruby-hello-world\" strategy: jenkinsPipelineStrategy: jenkinsfilePath: some/repo/dir/filename 1", "jenkinsPipelineStrategy: env: - name: \"FOO\" value: \"BAR\"", "oc project <project_name>", "oc new-app jenkins-ephemeral 1", "kind: \"BuildConfig\" apiVersion: \"v1\" metadata: name: \"nodejs-sample-pipeline\" spec: strategy: jenkinsPipelineStrategy: jenkinsfile: <pipeline content from below> type: JenkinsPipeline", "def templatePath = 
'https://raw.githubusercontent.com/openshift/nodejs-ex/master/openshift/templates/nodejs-mongodb.json' 1 def templateName = 'nodejs-mongodb-example' 2 pipeline { agent { node { label 'nodejs' 3 } } options { timeout(time: 20, unit: 'MINUTES') 4 } stages { stage('preamble') { steps { script { openshift.withCluster() { openshift.withProject() { echo \"Using project: USD{openshift.project()}\" } } } } } stage('cleanup') { steps { script { openshift.withCluster() { openshift.withProject() { openshift.selector(\"all\", [ template : templateName ]).delete() 5 if (openshift.selector(\"secrets\", templateName).exists()) { 6 openshift.selector(\"secrets\", templateName).delete() } } } } } } stage('create') { steps { script { openshift.withCluster() { openshift.withProject() { openshift.newApp(templatePath) 7 } } } } } stage('build') { steps { script { openshift.withCluster() { openshift.withProject() { def builds = openshift.selector(\"bc\", templateName).related('builds') timeout(5) { 8 builds.untilEach(1) { return (it.object().status.phase == \"Complete\") } } } } } } } stage('deploy') { steps { script { openshift.withCluster() { openshift.withProject() { def rm = openshift.selector(\"dc\", templateName).rollout() timeout(5) { 9 openshift.selector(\"dc\", templateName).related('pods').untilEach(1) { return (it.object().status.phase == \"Running\") } } } } } } } stage('tag') { steps { script { openshift.withCluster() { openshift.withProject() { openshift.tag(\"USD{templateName}:latest\", \"USD{templateName}-staging:latest\") 10 } } } } } } }", "oc create -f nodejs-sample-pipeline.yaml", "oc create -f https://raw.githubusercontent.com/openshift/origin/master/examples/jenkins/pipeline/nodejs-sample-pipeline.yaml", "oc start-build nodejs-sample-pipeline", "FROM registry.redhat.io/rhel8/buildah In this example, `/tmp/build` contains the inputs that build when this custom builder image is run. Normally the custom builder image fetches this content from some location at build time, by using git clone as an example. ADD dockerfile.sample /tmp/input/Dockerfile ADD build.sh /usr/bin RUN chmod a+x /usr/bin/build.sh /usr/bin/build.sh contains the actual custom build logic that will be run when this custom builder image is run. ENTRYPOINT [\"/usr/bin/build.sh\"]", "FROM registry.access.redhat.com/ubi9/ubi RUN touch /tmp/build", "#!/bin/sh Note that in this case the build inputs are part of the custom builder image, but normally this is retrieved from an external source. cd /tmp/input OUTPUT_REGISTRY and OUTPUT_IMAGE are env variables provided by the custom build framework TAG=\"USD{OUTPUT_REGISTRY}/USD{OUTPUT_IMAGE}\" performs the build of the new image defined by dockerfile.sample buildah --storage-driver vfs bud --isolation chroot -t USD{TAG} . buildah requires a slight modification to the push secret provided by the service account to use it for pushing the image cp /var/run/secrets/openshift.io/push/.dockercfg /tmp (echo \"{ \\\"auths\\\": \" ; cat /var/run/secrets/openshift.io/push/.dockercfg ; echo \"}\") > /tmp/.dockercfg push the new image to the target for the build buildah --storage-driver vfs push --tls-verify=false --authfile /tmp/.dockercfg USD{TAG}", "oc new-build --binary --strategy=docker --name custom-builder-image", "oc start-build custom-builder-image --from-dir . 
-F", "kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: sample-custom-build labels: name: sample-custom-build annotations: template.alpha.openshift.io/wait-for-ready: 'true' spec: strategy: type: Custom customStrategy: forcePull: true from: kind: ImageStreamTag name: custom-builder-image:latest namespace: <yourproject> 1 output: to: kind: ImageStreamTag name: sample-custom:latest", "oc create -f buildconfig.yaml", "kind: ImageStream apiVersion: image.openshift.io/v1 metadata: name: sample-custom spec: {}", "oc create -f imagestream.yaml", "oc start-build sample-custom-build -F", "oc start-build <buildconfig_name>", "oc start-build --from-build=<build_name>", "oc start-build <buildconfig_name> --follow", "oc start-build <buildconfig_name> --env=<key>=<value>", "oc start-build hello-world --from-repo=../hello-world --commit=v2", "oc cancel-build <build_name>", "oc cancel-build <build1_name> <build2_name> <build3_name>", "oc cancel-build bc/<buildconfig_name>", "oc cancel-build bc/<buildconfig_name>", "oc delete bc <BuildConfigName>", "oc delete --cascade=false bc <BuildConfigName>", "oc describe build <build_name>", "oc describe build <build_name>", "oc logs -f bc/<buildconfig_name>", "oc logs --version=<number> bc/<buildconfig_name>", "sourceStrategy: env: - name: \"BUILD_LOGLEVEL\" value: \"2\" 1", "type: \"GitHub\" github: secretReference: name: \"mysecret\"", "- kind: Secret apiVersion: v1 metadata: name: mysecret creationTimestamp: data: WebHookSecretKey: c2VjcmV0dmFsdWUx", "apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: annotations: rbac.authorization.kubernetes.io/autoupdate: \"true\" name: webhook-access-unauthenticated namespace: <namespace> 1 roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: \"system:webhook\" subjects: - apiGroup: rbac.authorization.k8s.io kind: Group name: \"system:unauthenticated\"", "oc apply -f add-webhooks-unauth.yaml", "type: \"GitHub\" github: secretReference: name: \"mysecret\"", "https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/github", "oc describe bc/<name_of_your_BuildConfig>", "https://api.starter-us-east-1.openshift.com:443/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/github", "curl -H \"X-GitHub-Event: push\" -H \"Content-Type: application/json\" -k -X POST --data-binary @payload.json https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/github", "type: \"GitLab\" gitlab: secretReference: name: \"mysecret\"", "https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/gitlab", "oc describe bc <name>", "curl -H \"X-GitLab-Event: Push Hook\" -H \"Content-Type: application/json\" -k -X POST --data-binary @payload.json https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/gitlab", "type: \"Bitbucket\" bitbucket: secretReference: name: \"mysecret\"", "https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/bitbucket", "oc describe bc <name>", "curl -H \"X-Event-Key: repo:push\" -H \"Content-Type: application/json\" -k -X POST --data-binary @payload.json https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/bitbucket", "type: \"Generic\" generic: secretReference: 
name: \"mysecret\" allowEnv: true 1", "https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/generic", "curl -X POST -k https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/generic", "git: uri: \"<url to git repository>\" ref: \"<optional git reference>\" commit: \"<commit hash identifying a specific git commit>\" author: name: \"<author name>\" email: \"<author e-mail>\" committer: name: \"<committer name>\" email: \"<committer e-mail>\" message: \"<commit message>\" env: 1 - name: \"<variable name>\" value: \"<variable value>\"", "curl -H \"Content-Type: application/yaml\" --data-binary @payload_file.yaml -X POST -k https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/generic", "oc describe bc <name>", "kind: \"ImageStream\" apiVersion: \"v1\" metadata: name: \"ruby-20-centos7\"", "strategy: sourceStrategy: from: kind: \"ImageStreamTag\" name: \"ruby-20-centos7:latest\"", "type: \"ImageChange\" 1 imageChange: {} type: \"ImageChange\" 2 imageChange: from: kind: \"ImageStreamTag\" name: \"custom-image:latest\"", "strategy: sourceStrategy: from: kind: \"DockerImage\" name: \"172.30.17.3:5001/mynamespace/ruby-20-centos7:<immutableid>\"", "type: \"ImageChange\" imageChange: from: kind: \"ImageStreamTag\" name: \"custom-image:latest\" paused: true", "apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: bc-ict-example namespace: bc-ict-example-namespace spec: triggers: - imageChange: from: kind: ImageStreamTag name: input:latest namespace: bc-ict-example-namespace - imageChange: from: kind: ImageStreamTag name: input2:latest namespace: bc-ict-example-namespace type: ImageChange status: imageChangeTriggers: - from: name: input:latest namespace: bc-ict-example-namespace lastTriggerTime: \"2021-06-30T13:47:53Z\" lastTriggeredImageID: image-registry.openshift-image-registry.svc:5000/bc-ict-example-namespace/input@sha256:0f88ffbeb9d25525720bfa3524cb1bf0908b7f791057cf1acfae917b11266a69 - from: name: input2:latest namespace: bc-ict-example-namespace lastTriggeredImageID: image-registry.openshift-image-registry.svc:5000/bc-ict-example-namespace/input2@sha256:0f88ffbeb9d25525720bfa3524cb2ce0908b7f791057cf1acfae917b11266a69 lastVersion: 1", "Then you use the `name` and `namespace` from that build to find the corresponding image change trigger in `buildConfig.spec.triggers`.", "type: \"ConfigChange\"", "oc set triggers bc <name> --from-github", "oc set triggers bc <name> --from-image='<image>'", "oc set triggers bc <name> --from-bitbucket --remove", "oc set triggers --help", "postCommit: script: \"bundle exec rake test --verbose\"", "postCommit: command: [\"/bin/bash\", \"-c\", \"bundle exec rake test --verbose\"]", "postCommit: command: [\"bundle\", \"exec\", \"rake\", \"test\"] args: [\"--verbose\"]", "oc set build-hook bc/mybc --post-commit --command -- bundle exec rake test --verbose", "oc set build-hook bc/mybc --post-commit --script=\"bundle exec rake test --verbose\"", "apiVersion: \"v1\" kind: \"BuildConfig\" metadata: name: \"sample-build\" spec: resources: limits: cpu: \"100m\" 1 memory: \"256Mi\" 2", "resources: requests: 1 cpu: \"100m\" memory: \"256Mi\"", "spec: completionDeadlineSeconds: 1800", "apiVersion: \"v1\" kind: \"BuildConfig\" metadata: name: \"sample-build\" spec: nodeSelector: 1 key1: value1 key2: value2", "apiVersion: build.openshift.io/v1 kind: BuildConfig 
metadata: name: artifact-build spec: output: to: kind: ImageStreamTag name: artifact-image:latest source: git: uri: https://github.com/openshift/openshift-jee-sample.git ref: \"master\" strategy: sourceStrategy: from: kind: ImageStreamTag name: wildfly:10.1 namespace: openshift", "apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: image-build spec: output: to: kind: ImageStreamTag name: image-build:latest source: dockerfile: |- FROM jee-runtime:latest COPY ROOT.war /deployments/ROOT.war images: - from: 1 kind: ImageStreamTag name: artifact-image:latest paths: 2 - sourcePath: /wildfly/standalone/deployments/ROOT.war destinationDir: \".\" strategy: dockerStrategy: from: 3 kind: ImageStreamTag name: jee-runtime:latest triggers: - imageChange: {} type: ImageChange", "apiVersion: \"v1\" kind: \"BuildConfig\" metadata: name: \"sample-build\" spec: successfulBuildsHistoryLimit: 2 1 failedBuildsHistoryLimit: 2 2", "oc tag --source=docker registry.redhat.io/ubi9/ubi:latest ubi9:latest -n openshift", "apiVersion: image.openshift.io/v1 kind: ImageStream metadata: name: ubi9 namespace: openshift spec: tags: - from: kind: DockerImage name: registry.redhat.io/ubi9/ubi:latest name: latest referencePolicy: type: Source", "oc tag --source=docker registry.redhat.io/ubi9/ubi:latest ubi:latest", "apiVersion: image.openshift.io/v1 kind: ImageStream metadata: name: ubi9 spec: tags: - from: kind: DockerImage name: registry.redhat.io/ubi9/ubi:latest name: latest referencePolicy: type: Source", "cat << EOF > secret-template.txt kind: Secret apiVersion: v1 metadata: name: etc-pki-entitlement type: Opaque data: {{ range \\USDkey, \\USDvalue := .data }} {{ \\USDkey }}: {{ \\USDvalue }} {{ end }} EOF oc get secret etc-pki-entitlement -n openshift-config-managed -o=go-template-file --template=secret-template.txt | oc apply -f -", "strategy: dockerStrategy: from: kind: ImageStreamTag name: ubi9:latest volumes: - name: etc-pki-entitlement mounts: - destinationPath: /etc/pki/entitlement source: type: Secret secret: secretName: etc-pki-entitlement", "FROM registry.redhat.io/ubi9/ubi:latest RUN rm -rf /etc/rhsm-host 1 RUN yum --enablerepo=codeready-builder-for-rhel-9-x86_64-rpms install \\ 2 nss_wrapper uid_wrapper -y && yum clean all -y RUN ln -s /run/secrets/rhsm /etc/rhsm-host 3", "[test-<name>] name=test-<number> baseurl = https://satellite.../content/dist/rhel/server/7/7Server/x86_64/os enabled=1 gpgcheck=0 sslverify=0 sslclientkey = /etc/pki/entitlement/...-key.pem sslclientcert = /etc/pki/entitlement/....pem", "oc create configmap yum-repos-d --from-file /path/to/satellite.repo", "strategy: dockerStrategy: from: kind: ImageStreamTag name: ubi9:latest volumes: - name: yum-repos-d mounts: - destinationPath: /etc/yum.repos.d source: type: ConfigMap configMap: name: yum-repos-d - name: etc-pki-entitlement mounts: - destinationPath: /etc/pki/entitlement source: type: Secret secret: secretName: etc-pki-entitlement", "FROM registry.redhat.io/ubi9/ubi:latest RUN rm -rf /etc/rhsm-host 1 RUN yum --enablerepo=codeready-builder-for-rhel-9-x86_64-rpms install \\ 2 nss_wrapper uid_wrapper -y && yum clean all -y RUN ln -s /run/secrets/rhsm /etc/rhsm-host 3", "oc apply -f - <<EOF kind: SharedSecret apiVersion: sharedresource.openshift.io/v1alpha1 metadata: name: etc-pki-entitlement spec: secretRef: name: etc-pki-entitlement namespace: openshift-config-managed EOF", "oc apply -f - <<EOF apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: builder-etc-pki-entitlement namespace: build-namespace rules: - 
apiGroups: - sharedresource.openshift.io resources: - sharedsecrets resourceNames: - etc-pki-entitlement verbs: - use EOF", "oc create rolebinding builder-etc-pki-entitlement --role=builder-etc-pki-entitlement --serviceaccount=build-namespace:builder", "apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: uid-wrapper-rhel9 namespace: build-namespace spec: runPolicy: Serial source: dockerfile: | FROM registry.redhat.io/ubi9/ubi:latest RUN rm -rf /etc/rhsm-host 1 RUN yum --enablerepo=codeready-builder-for-rhel-9-x86_64-rpms install \\ 2 nss_wrapper uid_wrapper -y && yum clean all -y RUN ln -s /run/secrets/rhsm /etc/rhsm-host 3 strategy: type: Docker dockerStrategy: volumes: - mounts: - destinationPath: \"/etc/pki/entitlement\" name: etc-pki-entitlement source: csi: driver: csi.sharedresource.openshift.io readOnly: true 4 volumeAttributes: sharedSecret: etc-pki-entitlement 5 type: CSI", "oc start-build uid-wrapper-rhel9 -n build-namespace -F", "oc annotate clusterrolebinding.rbac system:build-strategy-docker-binding 'rbac.authorization.kubernetes.io/autoupdate=false' --overwrite", "oc adm policy remove-cluster-role-from-group system:build-strategy-docker system:authenticated", "oc get clusterrole admin -o yaml | grep \"builds/docker\"", "oc get clusterrole edit -o yaml | grep \"builds/docker\"", "oc adm policy add-cluster-role-to-user system:build-strategy-docker devuser", "oc adm policy add-role-to-user system:build-strategy-docker devuser -n devproject", "oc edit build.config.openshift.io/cluster", "apiVersion: config.openshift.io/v1 kind: Build 1 metadata: annotations: release.openshift.io/create-only: \"true\" creationTimestamp: \"2019-05-17T13:44:26Z\" generation: 2 name: cluster resourceVersion: \"107233\" selfLink: /apis/config.openshift.io/v1/builds/cluster uid: e2e9cc14-78a9-11e9-b92b-06d6c7da38dc spec: buildDefaults: 2 defaultProxy: 3 httpProxy: http://proxy.com httpsProxy: https://proxy.com noProxy: internal.com env: 4 - name: envkey value: envvalue gitProxy: 5 httpProxy: http://gitproxy.com httpsProxy: https://gitproxy.com noProxy: internalgit.com imageLabels: 6 - name: labelkey value: labelvalue resources: 7 limits: cpu: 100m memory: 50Mi requests: cpu: 10m memory: 10Mi buildOverrides: 8 imageLabels: 9 - name: labelkey value: labelvalue nodeSelector: 10 selectorkey: selectorvalue tolerations: 11 - effect: NoSchedule key: node-role.kubernetes.io/builds operator: Exists", "requested access to the resource is denied", "oc describe quota", "secret/ssl-key references serviceUID 62ad25ca-d703-11e6-9d6f-0e9c0057b608, which does not match 77b6dd80-d716-11e6-9d6f-0e9c0057b60", "oc delete secret <secret_name>", "oc annotate service <service_name> service.beta.openshift.io/serving-cert-generation-error-", "oc annotate service <service_name> service.beta.openshift.io/serving-cert-generation-error-num-", "oc create configmap registry-cas -n openshift-config --from-file=myregistry.corp.com..5000=/etc/docker/certs.d/myregistry.corp.com:5000/ca.crt --from-file=otherregistry.com=/etc/docker/certs.d/otherregistry.com/ca.crt", "oc patch image.config.openshift.io/cluster --patch '{\"spec\":{\"additionalTrustedCA\":{\"name\":\"registry-cas\"}}}' --type=merge" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html-single/builds_using_buildconfig/index
Chapter 1. The Linux kernel
Chapter 1. The Linux kernel Learn about the Linux kernel and the Linux kernel RPM package provided and maintained by Red Hat (Red Hat kernel). Keep the Red Hat kernel updated, which ensures the operating system has all the latest bug fixes, performance enhancements, and patches, and is compatible with new hardware. 1.1. What the kernel is The kernel is a core part of a Linux operating system that manages the system resources and provides interface between hardware and software applications. The Red Hat kernel is a custom-built kernel based on the upstream Linux mainline kernel that Red Hat engineers further develop and harden with a focus on stability and compatibility with the latest technologies and hardware. Before Red Hat releases a new kernel version, the kernel needs to pass a set of rigorous quality assurance tests. The Red Hat kernels are packaged in the RPM format so that they are easily upgraded and verified by the YUM package manager. Warning Kernels that are not compiled by Red Hat are not supported by Red Hat. 1.2. RPM packages An RPM package consists of an archive of files and metadata used to install and erase these files. Specifically, the RPM package contains the following parts: GPG signature The GPG signature is used to verify the integrity of the package. Header (package metadata) The RPM package manager uses this metadata to determine package dependencies, where to install files, and other information. Payload The payload is a cpio archive that contains files to install to the system. There are two types of RPM packages. Both types share the file format and tooling, but have different contents and serve different purposes: Source RPM (SRPM) An SRPM contains source code and a spec file, which describes how to build the source code into a binary RPM. Optionally, the SRPM can contain patches to source code. Binary RPM A binary RPM contains the binaries built from the sources and patches. 1.3. The Linux kernel RPM package overview The kernel RPM is a meta package that does not contain any files, but rather ensures that the following required sub-packages are properly installed: kernel-core Provides the binary image of the kernel, all initramfs -related objects to bootstrap the system, and a minimal number of kernel modules to ensure core functionality. This sub-package alone could be used in virtualized and cloud environments to provide a Red Hat Enterprise Linux 8 kernel with a quick boot time and a small disk size footprint. kernel-modules Provides the remaining kernel modules that are not present in kernel-core . The small set of kernel sub-packages above aims to provide a reduced maintenance surface to system administrators especially in virtualized and cloud environments. Optional kernel packages are for example: kernel-modules-extra Provides kernel modules for rare hardware. Loading of the module is disabled by default. kernel-debug Provides a kernel with many debugging options enabled for kernel diagnosis, at the expense of reduced performance. kernel-tools Provides tools for manipulating the Linux kernel and supporting documentation. kernel-devel Provides the kernel headers and makefiles that are enough to build modules against the kernel package. kernel-abi-stablelists Provides information pertaining to the RHEL kernel ABI, including a list of kernel symbols required by external Linux kernel modules and a yum plug-in to aid enforcement. 
kernel-headers Includes the C header files that specify the interface between the Linux kernel and user-space libraries and programs. The header files define structures and constants required for building most standard programs. Additional resources What are the kernel-core, kernel-modules, and kernel-modules-extras packages? 1.4. Displaying contents of a kernel package By querying the repository, you can see if a kernel package provides a specific file, such as a module. It is not necessary to download or install the package to display the file list. Use the dnf utility to query the file list, for example, of the kernel-core , kernel-modules-core , or kernel-modules package. Note that the kernel package is a meta package that does not contain any files. Procedure List the available versions of a package: Display the list of files in a package: Additional resources Packaging and distributing software 1.5. Installing specific kernel versions Install new kernels using the yum package manager. Procedure To install a specific kernel version, enter the following command: Additional resources Red Hat Code Browser Red Hat Enterprise Linux Release Dates 1.6. Updating the kernel Update the kernel using the yum package manager. Procedure To update the kernel, enter the following command: This command updates the kernel along with all dependencies to the latest available version. Reboot your system for the changes to take effect. Note When upgrading from RHEL 7 to RHEL 8, follow relevant sections of the Upgrading from RHEL 7 to RHEL 8 document. Additional resources Managing software packages 1.7. Setting a kernel as default Set a specific kernel as default by using the grubby command-line tool and GRUB. Procedure Setting the kernel as default by using the grubby tool. Enter the following command to set the kernel as default using the grubby tool: The command uses a machine ID without the .conf suffix as an argument. Note The machine ID is located in the /boot/loader/entries/ directory. Setting the kernel as default by using the id argument. List the boot entries using the id argument and then set an intended kernel as default: Note To list the boot entries using the title argument, execute the # grubby --info=ALL | grep title command. Setting the default kernel for only the next boot. Execute the following command to set the default kernel for only the next reboot using the grub2-reboot command: Warning Set the default kernel for only the next boot with care. Installing new kernel RPMs, self-built kernels, and manually adding the entries to the /boot/loader/entries/ directory might change the index values.
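As a minimal sketch of the grubby workflow described in section 1.7 (the kernel path below is a placeholder, not a value taken from this document):
# List all boot entries with their index, title, and kernel path
grubby --info=ALL | grep -E '^(index|title|kernel)'
# Set a specific kernel as the persistent default
grubby --set-default /boot/vmlinuz-4.18.0-80.el8.x86_64
# Confirm which kernel boots by default from now on
grubby --default-kernel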
[ "yum repoquery <package_name>", "yum repoquery -l <package_name>", "yum install kernel- {version}", "yum update kernel", "grubby --set-default USDkernel_path", "grubby --info ALL | grep id grubby --set-default /boot/vmlinuz-<version>.<architecture>", "grub2-reboot <index|title|id>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/managing_monitoring_and_updating_the_kernel/assembly_the-linux-kernel_managing-monitoring-and-updating-the-kernel
11.6. Preparing and Adding Red Hat Gluster Storage
11.6. Preparing and Adding Red Hat Gluster Storage 11.6.1. Preparing Red Hat Gluster Storage For information on setting up and configuring Red Hat Gluster Storage, see the Red Hat Gluster Storage Installation Guide . For the Red Hat Gluster Storage versions that are supported with Red Hat Virtualization, see https://access.redhat.com/articles/2356261 . 11.6.2. Adding Red Hat Gluster Storage To use Red Hat Gluster Storage with Red Hat Virtualization, see Configuring Red Hat Virtualization with Red Hat Gluster Storage . For the Red Hat Gluster Storage versions that are supported with Red Hat Virtualization, see https://access.redhat.com/articles/2356261 .
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/administration_guide/adding_red_hat_gluster_storage
29.2.3. Separating Kernel and User-space Profiles
29.2.3. Separating Kernel and User-space Profiles By default, kernel mode and user mode information is gathered for each event. To configure OProfile to ignore events in kernel mode for a specific counter, execute the following command: Execute the following command to start profiling kernel mode for the counter again: To configure OProfile to ignore events in user mode for a specific counter, execute the following command: Execute the following command to start profiling user mode for the counter again: When the OProfile daemon writes the profile data to sample files, it can separate the kernel and library profile data into separate sample files. To configure how the daemon writes to sample files, execute the following command as root: choice can be one of the following: none - Do not separate the profiles (default). library - Generate per-application profiles for libraries. kernel - Generate per-application profiles for the kernel and kernel modules. all - Generate per-application profiles for libraries and per-application profiles for the kernel and kernel modules. If --separate=library is used, the sample file name includes the name of the executable as well as the name of the library. Note These configuration changes will take effect when the OProfile profiler is restarted.
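A minimal sketch of how these options combine in practice, assuming the legacy opcontrol interface and a hypothetical CPU_CLK_UNHALTED event specification (run as root; the event name and sample rate are examples, not values taken from this document):
# Count the event in kernel mode only (fields: event:sample-rate:unit-mask:kernel:user)
opcontrol --event=CPU_CLK_UNHALTED:100000:0:1:0
# Write per-application profiles for both libraries and the kernel
opcontrol --separate=all
# Restart the profiler so the new settings take effect
opcontrol --shutdown
opcontrol --start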
[ "~]# opcontrol --event= event-name : sample-rate : unit-mask :0", "~]# opcontrol --event= event-name : sample-rate : unit-mask :1", "~]# opcontrol --event= event-name : sample-rate : unit-mask : kernel :0", "~]# opcontrol --event= event-name : sample-rate : unit-mask : kernel :1", "~]# opcontrol --separate= choice" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s2-oprofile-starting-separate
4.8. VSS Transaction Flow
4.8. VSS Transaction Flow In processing a backup, the requester and the writers coordinate to do several things: to provide a stable system image from which to back up data (the shadow copied volume), to group files together on the basis of their usage, and to store information on the saved data. This must all be done with minimal interruption of the writer's normal work flow. A requester (in our case the Backup Vendor) queries writers for their metadata, processes this data, notifies the writers prior to the beginning of the shadow copy and of the backup operations, and then notifies the writers again after the shadow copy and backup operations end. Here is how the QEMU VSS provider is registered in Windows OS after the Guest Tools installation:
[ "C:\\Users\\Administrator>vssadmin list providers vssadmin 1.1 - Volume Shadow Copy Service administrative command-line tool (C) Copyright 2001-2005 Microsoft Corp. Provider name: 'QEMU Guest Agent VSS Provider' Provider type: Software Provider Id: {3629d4ed-ee09-4e0e-9a5c-6d8ba2872aef} Version: 0.12.1" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/version_3_rest_api_guide/vss_transaction_flow
Chapter 38. help.adoc
Chapter 38. help.adoc This chapter describes the commands under the help.adoc command. 38.1. help print detailed help for another command Usage: Table 38.1. Positional arguments Value Summary cmd Name of the command Table 38.2. Command arguments Value Summary -h, --help Show this help message and exit
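For example, to print the detailed help for a specific command (the command name here is only an illustration):
openstack help server create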
[ "openstack help [-h] [cmd ...]" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/command_line_interface_reference/help_adoc
Chapter 1. Overview of AMQ Streams
Chapter 1. Overview of AMQ Streams Red Hat AMQ Streams is a massively-scalable, distributed, and high-performance data streaming platform based on the Apache ZooKeeper and Apache Kafka projects. The main components comprise: Kafka Broker Messaging broker responsible for delivering records from producing clients to consuming clients. Apache ZooKeeper is a core dependency for Kafka, providing a cluster coordination service for highly reliable distributed coordination. Kafka Streams API API for writing stream processor applications. Producer and Consumer APIs Java-based APIs for producing and consuming messages to and from Kafka brokers. Kafka Bridge AMQ Streams Kafka Bridge provides a RESTful interface that allows HTTP-based clients to interact with a Kafka cluster. Kafka Connect A toolkit for streaming data between Kafka brokers and other systems using Connector plugins. Kafka MirrorMaker Replicates data between two Kafka clusters, within or across data centers. Kafka Exporter An exporter used in the extraction of Kafka metrics data for monitoring. A cluster of Kafka brokers is the hub connecting all these components. The broker uses Apache ZooKeeper for storing configuration data and for cluster coordination. Before running Apache Kafka, an Apache ZooKeeper cluster has to be ready. Figure 1.1. AMQ Streams architecture 1.1. Using the Kafka Bridge to connect with a Kafka cluster You can use the AMQ Streams Kafka Bridge API to create and manage consumers and send and receive records over HTTP rather than the native Kafka protocol. When you set up the Kafka Bridge you configure HTTP access to the Kafka cluster. You can then use the Kafka Bridge to produce and consume messages from the cluster, as well as performing other operations through its REST interface. Additional resources For information on installing and using the Kafka Bridge, see Using the AMQ Streams Kafka Bridge . 1.2. Kafka capabilities The underlying data stream-processing capabilities and component architecture of Kafka can deliver: Microservices and other applications to share data with extremely high throughput and low latency Message ordering guarantees Message rewind/replay from data storage to reconstruct an application state Message compaction to remove old records when using a key-value log Horizontal scalability in a cluster configuration Replication of data to control fault tolerance Retention of high volumes of data for immediate access 1.3. Kafka use cases Kafka's capabilities make it suitable for: Event-driven architectures Event sourcing to capture changes to the state of an application as a log of events Message brokering Website activity tracking Operational monitoring through metrics Log collection and aggregation Commit logs for distributed systems Stream processing so that applications can respond to data in real time 1.4. Supported Configurations To run in a supported configuration, AMQ Streams must be running in a supported JVM version on a supported operating system. For more information, see https://access.redhat.com/articles/6644711 . 1.5. Document conventions User-replaced values User-replaced values, also known as replaceables , are shown in italics with angle brackets (< >). Underscores ( _ ) are used for multi-word values. If the value refers to code or commands, monospace is also used. 
For example, in the following code, you will want to replace <bootstrap_address> and <topic_name> with your own address and topic name: bin/kafka-console-consumer.sh --bootstrap-server <bootstrap_address> --topic <topic_name> --from-beginning
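As a rough sketch of the HTTP access described in Section 1.1, the following request produces a record through a Kafka Bridge instance; the bridge address, port 8080, and the JSON embedded data format content type are assumptions, not values taken from this document:
curl -X POST http://<bridge_address>:8080/topics/<topic_name> -H 'Content-Type: application/vnd.kafka.json.v2+json' -d '{"records":[{"value":"sample message"}]}'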
[ "bin/kafka-console-consumer.sh --bootstrap-server <bootstrap_address> --topic <topic_name> --from-beginning" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.1/html/using_amq_streams_on_rhel/overview-str
Chapter 13. Node maintenance
Chapter 13. Node maintenance 13.1. About node maintenance 13.1.1. About node maintenance mode Nodes can be placed into maintenance mode using the oc adm utility, or using NodeMaintenance custom resources (CRs). Note The node-maintenance-operator (NMO) is no longer shipped with OpenShift Virtualization. It is now available to deploy as a standalone Operator from the OperatorHub in the OpenShift Container Platform web console, or by using the OpenShift CLI ( oc ). Placing a node into maintenance marks the node as unschedulable and drains all the virtual machines and pods from it. Virtual machine instances that have a LiveMigrate eviction strategy are live migrated to another node without loss of service. This eviction strategy is configured by default in virtual machine created from common templates but must be configured manually for custom virtual machines. Virtual machine instances without an eviction strategy are shut down. Virtual machines with a RunStrategy of Running or RerunOnFailure are recreated on another node. Virtual machines with a RunStrategy of Manual are not automatically restarted. Important Virtual machines must have a persistent volume claim (PVC) with a shared ReadWriteMany (RWX) access mode to be live migrated. The Node Maintenance Operator watches for new or deleted NodeMaintenance CRs. When a new NodeMaintenance CR is detected, no new workloads are scheduled and the node is cordoned off from the rest of the cluster. All pods that can be evicted are evicted from the node. When a NodeMaintenance CR is deleted, the node that is referenced in the CR is made available for new workloads. Note Using a NodeMaintenance CR for node maintenance tasks achieves the same results as the oc adm cordon and oc adm drain commands using standard OpenShift Container Platform custom resource processing. 13.1.2. Maintaining bare metal nodes When you deploy OpenShift Container Platform on bare metal infrastructure, there are additional considerations that must be taken into account compared to deploying on cloud infrastructure. Unlike in cloud environments where the cluster nodes are considered ephemeral, re-provisioning a bare metal node requires significantly more time and effort for maintenance tasks. When a bare metal node fails, for example, if a fatal kernel error happens or a NIC card hardware failure occurs, workloads on the failed node need to be restarted elsewhere else on the cluster while the problem node is repaired or replaced. Node maintenance mode allows cluster administrators to gracefully power down nodes, moving workloads to other parts of the cluster and ensuring workloads do not get interrupted. Detailed progress and node status details are provided during maintenance. 13.1.3. Additional resources Installing the Node Maintenance Operator by using the CLI Setting a node to maintenance mode Resuming a node from maintenance mode About RunStrategies for virtual machines Virtual machine live migration Configuring virtual machine eviction strategy 13.2. Automatic renewal of TLS certificates All TLS certificates for OpenShift Virtualization components are renewed and rotated automatically. You are not required to refresh them manually. 13.2.1. TLS certificates automatic renewal schedules TLS certificates are automatically deleted and replaced according to the following schedule: KubeVirt certificates are renewed daily. Containerized Data Importer controller (CDI) certificates are renewed every 15 days. MAC pool certificates are renewed every year. 
Automatic TLS certificate rotation does not disrupt any operations. For example, the following operations continue to function without any disruption: Migrations Image uploads VNC and console connections 13.3. Managing node labeling for obsolete CPU models You can schedule a virtual machine (VM) on a node as long as the VM CPU model and policy are supported by the node. 13.3.1. About node labeling for obsolete CPU models The OpenShift Virtualization Operator uses a predefined list of obsolete CPU models to ensure that a node supports only valid CPU models for scheduled VMs. By default, the following CPU models are eliminated from the list of labels generated for the node: Example 13.1. Obsolete CPU models This predefined list is not visible in the HyperConverged CR. You cannot remove CPU models from this list, but you can add to the list by editing the spec.obsoleteCPUs.cpuModels field of the HyperConverged CR. 13.3.2. About node labeling for CPU features Through the process of iteration, the base CPU features in the minimum CPU model are eliminated from the list of labels generated for the node. For example: An environment might have two supported CPU models: Penryn and Haswell . If Penryn is specified as the CPU model for minCPU , each base CPU feature for Penryn is compared to the list of CPU features supported by Haswell . Example 13.2. CPU features supported by Penryn Example 13.3. CPU features supported by Haswell If both Penryn and Haswell support a specific CPU feature, a label is not created for that feature. Labels are generated for CPU features that are supported only by Haswell and not by Penryn . Example 13.4. Node labels created for CPU features after iteration 13.3.3. Configuring obsolete CPU models You can configure a list of obsolete CPU models by editing the HyperConverged custom resource (CR). Procedure Edit the HyperConverged custom resource, specifying the obsolete CPU models in the obsoleteCPUs array. For example: apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: obsoleteCPUs: cpuModels: 1 - "<obsolete_cpu_1>" - "<obsolete_cpu_2>" minCPUModel: "<minimum_cpu_model>" 2 1 Replace the example values in the cpuModels array with obsolete CPU models. Any value that you specify is added to a predefined list of obsolete CPU models. The predefined list is not visible in the CR. 2 Replace this value with the minimum CPU model that you want to use for basic CPU features. If you do not specify a value, Penryn is used by default. 13.4. Preventing node reconciliation Use skip-node annotation to prevent the node-labeller from reconciling a node. 13.4.1. Using skip-node annotation If you want the node-labeller to skip a node, annotate that node by using the oc CLI. Prerequisites You have installed the OpenShift CLI ( oc ). Procedure Annotate the node that you want to skip by running the following command: USD oc annotate node <node_name> node-labeller.kubevirt.io/skip-node=true 1 1 Replace <node_name> with the name of the relevant node to skip. Reconciliation resumes on the cycle after the node annotation is removed or set to false. 13.4.2. Additional resources Managing node labeling for obsolete CPU models
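A short sketch of reversing the skip-node annotation so that reconciliation resumes (the node name is a placeholder):
# Remove the annotation entirely; the trailing dash deletes it
oc annotate node <node_name> node-labeller.kubevirt.io/skip-node-
# Or set it to false explicitly
oc annotate node <node_name> node-labeller.kubevirt.io/skip-node=false --overwrite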
[ "\"486\" Conroe athlon core2duo coreduo kvm32 kvm64 n270 pentium pentium2 pentium3 pentiumpro phenom qemu32 qemu64", "apic clflush cmov cx16 cx8 de fpu fxsr lahf_lm lm mca mce mmx msr mtrr nx pae pat pge pni pse pse36 sep sse sse2 sse4.1 ssse3 syscall tsc", "aes apic avx avx2 bmi1 bmi2 clflush cmov cx16 cx8 de erms fma fpu fsgsbase fxsr hle invpcid lahf_lm lm mca mce mmx movbe msr mtrr nx pae pat pcid pclmuldq pge pni popcnt pse pse36 rdtscp rtm sep smep sse sse2 sse4.1 sse4.2 ssse3 syscall tsc tsc-deadline x2apic xsave", "aes avx avx2 bmi1 bmi2 erms fma fsgsbase hle invpcid movbe pcid pclmuldq popcnt rdtscp rtm sse4.2 tsc-deadline x2apic xsave", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: obsoleteCPUs: cpuModels: 1 - \"<obsolete_cpu_1>\" - \"<obsolete_cpu_2>\" minCPUModel: \"<minimum_cpu_model>\" 2", "oc annotate node <node_name> node-labeller.kubevirt.io/skip-node=true 1" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/virtualization/node-maintenance
Chapter 13. Understanding and managing pod security admission
Chapter 13. Understanding and managing pod security admission Pod security admission is an implementation of the Kubernetes pod security standards . Use pod security admission to restrict the behavior of pods. 13.1. About pod security admission Red Hat OpenShift Service on AWS includes Kubernetes pod security admission . Pods that do not comply with the pod security admission defined globally or at the namespace level are not admitted to the cluster and cannot run. Globally, the privileged profile is enforced, and the restricted profile is used for warnings and audits. You can also configure the pod security admission settings at the namespace level. Important Do not run workloads in or share access to default projects. Default projects are reserved for running core cluster components. The following default projects are considered highly privileged: default , kube-public , kube-system , openshift , openshift-infra , openshift-node , and other system-created projects that have the openshift.io/run-level label set to 0 or 1 . Functionality that relies on admission plugins, such as pod security admission, security context constraints, cluster resource quotas, and image reference resolution, does not work in highly privileged projects. 13.1.1. Pod security admission modes You can configure the following pod security admission modes for a namespace: Table 13.1. Pod security admission modes Mode Label Description enforce pod-security.kubernetes.io/enforce Rejects a pod from admission if it does not comply with the set profile audit pod-security.kubernetes.io/audit Logs audit events if a pod does not comply with the set profile warn pod-security.kubernetes.io/warn Displays warnings if a pod does not comply with the set profile 13.1.2. Pod security admission profiles You can set each of the pod security admission modes to one of the following profiles: Table 13.2. Pod security admission profiles Profile Description privileged Least restrictive policy; allows for known privilege escalation baseline Minimally restrictive policy; prevents known privilege escalations restricted Most restrictive policy; follows current pod hardening best practices 13.1.3. Privileged namespaces The following system namespaces are always set to the privileged pod security admission profile: default kube-public kube-system You cannot change the pod security profile for these privileged namespaces. 13.1.4. Pod security admission and security context constraints Pod security admission standards and security context constraints are reconciled and enforced by two independent controllers. The two controllers work independently using the following processes to enforce security policies: The security context constraint controller may mutate some security context fields per the pod's assigned SCC. For example, if the seccomp profile is empty or not set and if the pod's assigned SCC enforces seccompProfiles field to be runtime/default , the controller sets the default type to RuntimeDefault . The security context constraint controller validates the pod's security context against the matching SCC. The pod security admission controller validates the pod's security context against the pod security standard assigned to the namespace. 13.2. About pod security admission synchronization In addition to the global pod security admission control configuration, a controller applies pod security admission control warn and audit labels to namespaces according to the SCC permissions of the service accounts that are in a given namespace. 
The controller examines ServiceAccount object permissions to use security context constraints in each namespace. Security context constraints (SCCs) are mapped to pod security profiles based on their field values; the controller uses these translated profiles. Pod security admission warn and audit labels are set to the most privileged pod security profile in the namespace to prevent displaying warnings and logging audit events when pods are created. Namespace labeling is based on consideration of namespace-local service account privileges. Applying pods directly might use the SCC privileges of the user who runs the pod. However, user privileges are not considered during automatic labeling. 13.2.1. Pod security admission synchronization namespace exclusions Pod security admission synchronization is permanently disabled on system-created namespaces and openshift-* prefixed namespaces. Namespaces that are defined as part of the cluster payload have pod security admission synchronization disabled permanently. The following namespaces are permanently disabled: default kube-node-lease kube-system kube-public openshift All system-created namespaces that are prefixed with openshift- 13.3. Controlling pod security admission synchronization You can enable or disable automatic pod security admission synchronization for most namespaces. Important You cannot enable pod security admission synchronization on system-created namespaces. For more information, see Pod security admission synchronization namespace exclusions . Procedure For each namespace that you want to configure, set a value for the security.openshift.io/scc.podSecurityLabelSync label: To disable pod security admission label synchronization in a namespace, set the value of the security.openshift.io/scc.podSecurityLabelSync label to false . Run the following command: USD oc label namespace <namespace> security.openshift.io/scc.podSecurityLabelSync=false To enable pod security admission label synchronization in a namespace, set the value of the security.openshift.io/scc.podSecurityLabelSync label to true . Run the following command: USD oc label namespace <namespace> security.openshift.io/scc.podSecurityLabelSync=true Note Use the --overwrite flag to overwrite the value if this label is already set on the namespace. Additional resources Pod security admission synchronization namespace exclusions 13.4. Configuring pod security admission for a namespace You can configure the pod security admission settings at the namespace level. For each of the pod security admission modes on the namespace, you can set which pod security admission profile to use. Procedure For each pod security admission mode that you want to set on a namespace, run the following command: USD oc label namespace <namespace> \ 1 pod-security.kubernetes.io/<mode>=<profile> \ 2 --overwrite 1 Set <namespace> to the namespace to configure. 2 Set <mode> to enforce , warn , or audit . Set <profile> to restricted , baseline , or privileged . 13.5. About pod security admission alerts A PodSecurityViolation alert is triggered when the Kubernetes API server reports that there is a pod denial on the audit level of the pod security admission controller. This alert persists for one day. View the Kubernetes API server audit logs to investigate alerts that were triggered. As an example, a workload is likely to fail admission if global enforcement is set to the restricted pod security level. 
For assistance in identifying pod security admission violation audit events, see Audit annotations in the Kubernetes documentation. 13.6. Additional resources Viewing audit logs Managing security context constraints
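As a small sketch of verifying the behavior described in sections 13.3 and 13.4, you can inspect the pod security labels currently applied to a namespace (the namespace name is a placeholder):
# Show all labels on the namespace, including the pod-security.kubernetes.io/* labels and the sync label
oc get namespace <namespace> --show-labels
# Or filter the YAML output for the relevant labels
oc get namespace <namespace> -o yaml | grep -E 'pod-security|podSecurityLabelSync'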
[ "oc label namespace <namespace> security.openshift.io/scc.podSecurityLabelSync=false", "oc label namespace <namespace> security.openshift.io/scc.podSecurityLabelSync=true", "oc label namespace <namespace> \\ 1 pod-security.kubernetes.io/<mode>=<profile> \\ 2 --overwrite" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/authentication_and_authorization/understanding-and-managing-pod-security-admission
Chapter 5. Troubleshooting Ceph OSDs
Chapter 5. Troubleshooting Ceph OSDs This chapter contains information on how to fix the most common errors related to Ceph OSDs. Prerequisites Verify your network connection. See Troubleshooting networking issues for details. Verify that Monitors have a quorum by using the ceph health command. If the command returns a health status ( HEALTH_OK , HEALTH_WARN , or HEALTH_ERR ), the Monitors are able to form a quorum. If not, address any Monitor problems first. See Troubleshooting Ceph Monitors for details. For details about ceph health see Understanding Ceph health . Optionally, stop the rebalancing process to save time and resources. See Stopping and starting rebalancing for details. 5.1. Most common Ceph OSD errors The following tables list the most common error messages that are returned by the ceph health detail command, or included in the Ceph logs. The tables provide links to corresponding sections that explain the errors and point to specific procedures to fix the problems. Prerequisites Root-level access to the Ceph OSD nodes. 5.1.1. Ceph OSD error messages A table of common Ceph OSD error messages, and a potential fix. Error message See HEALTH_ERR full osds Full OSDs HEALTH_WARN backfillfull osds Backfillfull OSDs nearfull osds Nearfull OSDs osds are down Down OSDs Flapping OSDs requests are blocked Slow request or requests are blocked slow requests Slow request or requests are blocked 5.1.2. Common Ceph OSD error messages in the Ceph logs A table of common Ceph OSD error messages found in the Ceph logs, and a link to a potential fix. Error message Log file See heartbeat_check: no reply from osd.X Main cluster log Flapping OSDs wrongly marked me down Main cluster log Flapping OSDs osds have slow requests Main cluster log Slow request or requests are blocked FAILED assert(0 == "hit suicide timeout") OSD log Down OSDs 5.1.3. Full OSDs The ceph health detail command returns an error message similar to the following one: What This Means Ceph prevents clients from performing I/O operations on full OSD nodes to avoid losing data. It returns the HEALTH_ERR full osds message when the cluster reaches the capacity set by the mon_osd_full_ratio parameter. By default, this parameter is set to 0.95 which means 95% of the cluster capacity. To Troubleshoot This Problem Determine what percentage of raw storage ( %RAW USED ) is used: If %RAW USED is above 70-75%, you can: Delete unnecessary data. This is a short-term solution to avoid production downtime. Scale the cluster by adding a new OSD node. This is a long-term solution recommended by Red Hat. Additional Resources Nearfull OSDs in the Red Hat Ceph Storage Troubleshooting Guide . See Deleting data from a full storage cluster for details. 5.1.4. Backfillfull OSDs The ceph health detail command returns an error message similar to the following one: What this means When one or more OSDs has exceeded the backfillfull threshold, Ceph prevents data from rebalancing to this device. This is an early warning that rebalancing might not complete and that the cluster is approaching full. The default for the backfillfull threshold is 90%. To troubleshoot this problem Check utilization by pool: If %RAW USED is above 70-75%, you can carry out one of the following actions: Delete unnecessary data. This is a short-term solution to avoid production downtime. Scale the cluster by adding a new OSD node. This is a long-term solution recommended by Red Hat.
Increase the backfillfull ratio for the OSDs that contain the PGs stuck in backfill_toofull to allow the recovery process to continue. Add new storage to the cluster as soon as possible or remove data to prevent filling more OSDs. Syntax The range for VALUE is 0.0 to 1.0. Example Additional Resources Nearfull OSDs in the Red Hat Ceph Storage Troubleshooting Guide . See Deleting data from a full storage cluster for details. 5.1.5. Nearfull OSDs The ceph health detail command returns an error message similar to the following one: What This Means Ceph returns the nearfull osds message when the cluster reaches the capacity set by the mon osd nearfull ratio defaults parameter. By default, this parameter is set to 0.85 which means 85% of the cluster capacity. Ceph distributes data based on the CRUSH hierarchy in the best possible way but it cannot guarantee equal distribution. The main causes of the uneven data distribution and the nearfull osds messages are: The OSDs are not balanced among the OSD nodes in the cluster. That is, some OSD nodes host significantly more OSDs than others, or the weight of some OSDs in the CRUSH map is not adequate to their capacity. The Placement Group (PG) count is not proper as per the number of the OSDs, use case, target PGs per OSD, and OSD utilization. The cluster uses inappropriate CRUSH tunables. The back-end storage for OSDs is almost full. To Troubleshoot This Problem: Verify that the PG count is sufficient and increase it if needed. Verify that you use CRUSH tunables optimal to the cluster version and adjust them if not. Change the weight of OSDs by utilization. Determine how much space is left on the disks used by OSDs. To view how much space OSDs use in general: To view how much space OSDs use on particular nodes. Use the following command from the node containing nearfull OSDs: If needed, add a new OSD node. Additional Resources Full OSDs See the Set an OSD's Weight by Utilization section in the Storage Strategies guide for Red Hat Ceph Storage 8. For details, see the CRUSH Tunables section in the Storage Strategies guide for Red Hat Ceph Storage 8 and the How can I test the impact CRUSH map tunable modifications will have on my PG distribution across OSDs in Red Hat Ceph Storage? solution on the Red Hat Customer Portal. See Increasing the placement group for details. 5.1.6. Down OSDs The ceph health detail command returns an error similar to the following one: What This Means One of the ceph-osd processes is unavailable due to a possible service failure or problems with communication with other OSDs. As a consequence, the surviving ceph-osd daemons reported this failure to the Monitors. If the ceph-osd daemon is not running, the underlying OSD drive or file system is either corrupted, or some other error, such as a missing keyring, is preventing the daemon from starting. In most cases, networking issues cause the situation when the ceph-osd daemon is running but still marked as down . To Troubleshoot This Problem Determine which OSD is down : Try to restart the ceph-osd daemon. Replace the OSD_ID with the ID of the OSD that is down: Syntax Example If you are not able to start ceph-osd , follow the steps in The ceph-osd daemon cannot start . If you are able to start the ceph-osd daemon but it is marked as down , follow the steps in The ceph-osd daemon is running but still marked as `down` .
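Before following either subsection below, it can help to check the systemd unit and its recent log output on the host of the failed daemon; this is a sketch and the FSID and OSD ID are placeholders:
# Check the state of the OSD's systemd unit
systemctl status ceph-<FSID>@osd.<OSD_ID>.service
# Review the most recent journal entries for that unit
journalctl -u ceph-<FSID>@osd.<OSD_ID>.service --since "1 hour ago"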
The ceph-osd daemon cannot start If you have a node containing a number of OSDs (generally, more than twelve), verify that the default maximum number of threads (PID count) is sufficient. See Increasing the PID count for details. Verify that the OSD data and journal partitions are mounted properly. You can use the ceph-volume lvm list command to list all devices and volumes associated with the Ceph Storage Cluster and then manually inspect if they are mounted properly. See the mount(8) manual page for details. If you got the ERROR: missing keyring, cannot use cephx for authentication error message, the OSD is a missing keyring. If you got the ERROR: unable to open OSD superblock on /var/lib/ceph/osd/ceph-1 error message, the ceph-osd daemon cannot read the underlying file system. See the following steps for instructions on how to troubleshoot and fix this error. Check the corresponding log file to determine the cause of the failure. By default, Ceph stores log files in the /var/log/ceph/ CLUSTER_FSID / directory after the logging to files is enabled. An EIO error message indicates a failure of the underlying disk. To fix this problem replace the underlying OSD disk. See Replacing an OSD drive for details. If the log includes any other FAILED assert errors, such as the following one, open a support ticket. See Contacting Red Hat Support for service for details. Check the dmesg output for the errors with the underlying file system or disk: The error -5 error message similar to the following one indicates corruption of the underlying XFS file system. For details on how to fix this problem, see the What is the meaning of "xfs_log_force: error -5 returned"? solution on the Red Hat Customer Portal. If the dmesg output includes any SCSI error error messages, see the SCSI Error Codes Solution Finder solution on the Red Hat Customer Portal to determine the best way to fix the problem. Alternatively, if you are unable to fix the underlying file system, replace the OSD drive. See Replacing an OSD drive for details. If the OSD failed with a segmentation fault, such as the following one, gather the required information and open a support ticket. See Contacting Red Hat Support for service for details. The ceph-osd is running but still marked as down Check the corresponding log file to determine the cause of the failure. By default, Ceph stores log files in the /var/log/ceph/ CLUSTER_FSID / directory after the logging to files is enabled. If the log includes error messages similar to the following ones, see Flapping OSDs . If you see any other errors, open a support ticket. See Contacting Red Hat Support for service for details. Additional Resources Flapping OSDs Stale placement groups See the Ceph daemon logs to enable logging to files. 5.1.7. Flapping OSDs The ceph -w | grep osds command shows OSDs repeatedly as down and then up again within a short period of time: In addition the Ceph log contains error messages similar to the following ones: What This Means The main causes of flapping OSDs are: Certain storage cluster operations, such as scrubbing or recovery, take an abnormal amount of time, for example, if you perform these operations on objects with a large index or large placement groups. Usually, after these operations finish, the flapping OSDs problem is solved. Problems with the underlying physical hardware. In this case, the ceph health detail command also returns the slow requests error message. Problems with the network. 
Ceph OSDs cannot manage situations where the private network for the storage cluster fails, or significant latency is on the public client-facing network. Ceph OSDs use the private network for sending heartbeat packets to each other to indicate that they are up and in . If the private storage cluster network does not work properly, OSDs are unable to send and receive the heartbeat packets. As a consequence, they report each other as being down to the Ceph Monitors, while marking themselves as up . The following parameters in the Ceph configuration file influence this behavior: Parameter Description Default value osd_heartbeat_grace_time How long OSDs wait for the heartbeat packets to return before reporting an OSD as down to the Ceph Monitors. 20 seconds mon_osd_min_down_reporters How many OSDs must report another OSD as down before the Ceph Monitors mark the OSD as down 2 This table shows that in the default configuration, the Ceph Monitors mark an OSD as down if only one OSD made three distinct reports about the first OSD being down . In some cases, if one single host encounters network issues, the entire cluster can experience flapping OSDs. This is because the OSDs that reside on the host will report other OSDs in the cluster as down . Note The flapping OSDs scenario does not include the situation when the OSD processes are started and then immediately killed. To Troubleshoot This Problem Check the output of the ceph health detail command again. If it includes the slow requests error message, see for details on how to troubleshoot this issue. Determine which OSDs are marked as down and on what nodes they reside: On the nodes containing the flapping OSDs, troubleshoot and fix any networking problems. Alternatively, you can temporarily force Monitors to stop marking the OSDs as down and up by setting the noup and nodown flags: Important Using the noup and nodown flags does not fix the root cause of the problem but only prevents OSDs from flapping. To open a support ticket, see the Contacting Red Hat Support for service section for details. Important Flapping OSDs can be caused by MTU misconfiguration on Ceph OSD nodes, at the network switch level, or both. To resolve the issue, set MTU to a uniform size on all storage cluster nodes, including on the core and access network switches with a planned downtime. Do not tune osd heartbeat min size because changing this setting can hide issues within the network, and it will not solve actual network inconsistency. Additional Resources See the Ceph heartbeat section in the Red Hat Ceph Storage Architecture Guide for details. See the Slow requests or requests are blocked section in the Red Hat Ceph Storage Troubleshooting Guide . 5.1.8. Slow requests or requests are blocked The ceph-osd daemon is slow to respond to a request and the ceph health detail command returns an error message similar to the following one: In addition, the Ceph logs include an error message similar to the following ones: What This Means An OSD with slow requests is every OSD that is not able to service the I/O operations per second (IOPS) in the queue within the time defined by the osd_op_complaint_time parameter. By default, this parameter is set to 30 seconds. The main causes of OSDs having slow requests are: Problems with the underlying hardware, such as disk drives, hosts, racks, or network switches Problems with the network. These problems are usually connected with flapping OSDs. See Flapping OSDs for details. 
System load The following table shows the types of slow requests. Use the dump_historic_ops administration socket command to determine the type of a slow request. For details about the administration socket, see the Using the Ceph Administration Socket section in the Administration Guide for Red Hat Ceph Storage 8. Slow request type Description waiting for rw locks The OSD is waiting to acquire a lock on a placement group for the operation. waiting for subops The OSD is waiting for replica OSDs to apply the operation to the journal. no flag points reached The OSD did not reach any major operation milestone. waiting for degraded object The OSDs have not replicated an object the specified number of times yet. To Troubleshoot This Problem Determine if the OSDs with slow or blocked requests share a common piece of hardware, for example, a disk drive, host, rack, or network switch. If the OSDs share a disk: Use the smartmontools utility to check the health of the disk or the logs to determine any errors on the disk. Note The smartmontools utility is included in the smartmontools package. Use the iostat utility to get the I/O wait report ( %iowait ) on the OSD disk to determine if the disk is under heavy load. Note The iostat utility is included in the sysstat package. If the OSDs share the node with another service: Check the RAM and CPU utilization. Use the netstat utility to see the network statistics on the Network Interface Controllers (NICs) and troubleshoot any networking issues. If the OSDs share a rack, check the network switch for the rack. For example, if you use jumbo frames, verify that the NIC in the path has jumbo frames set. If you are unable to determine a common piece of hardware shared by OSDs with slow requests, or to troubleshoot and fix hardware and networking problems, open a support ticket. See Contacting Red Hat support for service for details. Additional Resources See the Using the Ceph Administration Socket section in the Red Hat Ceph Storage Administration Guide for details. 5.2. Stopping and starting rebalancing When an OSD fails or you stop it, the CRUSH algorithm automatically starts the rebalancing process to redistribute data across the remaining OSDs. Rebalancing can take time and resources; therefore, consider stopping rebalancing during troubleshooting or maintaining OSDs. Note Placement groups within the stopped OSDs become degraded during troubleshooting and maintenance. Prerequisites Root-level access to the Ceph Monitor node. Procedure Log in to the Cephadm shell: Example Set the noout flag before stopping the OSD: Example When you finish troubleshooting or maintenance, unset the noout flag to start rebalancing: Example Additional Resources The Rebalancing and Recovery section in the Red Hat Ceph Storage Architecture Guide . 5.3. Replacing an OSD drive Ceph is designed for fault tolerance, which means that it can operate in a degraded state without losing data. Consequently, Ceph can operate even if a data storage drive fails. In the context of a failed drive, the degraded state means that the extra copies of the data stored on other OSDs will backfill automatically to other OSDs in the cluster. However, if this occurs, replace the failed OSD drive and recreate the OSD manually. When a drive fails, Ceph reports the OSD as down : Note Ceph can also mark an OSD as down as a consequence of networking or permissions problems. See Down OSDs for details.
Modern servers typically deploy with hot-swappable drives so you can pull a failed drive and replace it with a new one without bringing down the node. The whole procedure includes these steps: Remove the OSD from the Ceph cluster. For details, see the Removing an OSD from the Ceph Cluster procedure. Replace the drive. For details, see Replacing the physical drive section. Add the OSD to the cluster. For details, see Adding an OSD to the Ceph Cluster procedure. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the Ceph Monitor node. At least one OSD is down . Removing an OSD from the Ceph Cluster Log into the Cephadm shell: Example Determine which OSD is down . Example Mark the OSD as out for the cluster to rebalance and copy its data to other OSDs. Syntax Example Note If the OSD is down , Ceph marks it as out automatically after 600 seconds when it does not receive any heartbeat packet from the OSD based on the mon_osd_down_out_interval parameter. When this happens, other OSDs with copies of the failed OSD data begin backfilling to ensure that the required number of copies exists within the cluster. While the cluster is backfilling, the cluster will be in a degraded state. Ensure that the failed OSD is backfilling. Example You should see the placement group states change from active+clean to active , some degraded objects, and finally active+clean when migration completes. Stop the OSD: Syntax Example Remove the OSD from the storage cluster: Syntax Example The OSD_ID is preserved. Replacing the physical drive See the documentation for the hardware node for details on replacing the physical drive. If the drive is hot-swappable, replace the failed drive with a new one. If the drive is not hot-swappable and the node contains multiple OSDs, you might have to shut down the whole node and replace the physical drive. Consider preventing the cluster from backfilling. See the Stopping and Starting Rebalancing chapter in the Red Hat Ceph Storage Troubleshooting Guide for details. When the drive appears under the /dev/ directory, make a note of the drive path. If you want to add the OSD manually, find the OSD drive and format the disk. Adding an OSD to the Ceph Cluster Once the new drive is inserted, you can use the following options to deploy the OSDs: The OSDs are deployed automatically by the Ceph Orchestrator if the --unmanaged parameter is not set. Example Deploy the OSDs on all the available devices with the unmanaged parameter set to true . Example Deploy the OSDs on specific devices and hosts. Example Ensure that the CRUSH hierarchy is accurate: Example Additional Resources See the Deploying Ceph OSDs on all available devices section in the Red Hat Ceph Storage Operations Guide . See the Deploying Ceph OSDs on specific devices and hosts section in the Red Hat Ceph Storage Operations Guide . See the Down OSDs section in the Red Hat Ceph Storage Troubleshooting Guide . See the Red Hat Ceph Storage Installation Guide . 5.4. Increasing the PID count If you have a node containing more than 12 Ceph OSDs, the default maximum number of threads (PID count) can be insufficient, especially during recovery. As a consequence, some ceph-osd daemons can terminate and fail to start again. If this happens, increase the maximum possible number of threads allowed. Procedure To temporary increase the number: To permanently increase the number, update the /etc/sysctl.conf file as follows: 5.5. 
5.5. Deleting data from a full storage cluster Ceph automatically prevents any I/O operations on OSDs that have reached the capacity specified by the mon_osd_full_ratio parameter and returns the full osds error message. This procedure shows how to delete unnecessary data to fix this error; a condensed command sketch follows the additional resources at the end of this section. Note The mon_osd_full_ratio parameter sets the value of the full_ratio parameter when creating a cluster. You cannot change the value of mon_osd_full_ratio afterward. To temporarily increase the full_ratio value, use the ceph osd set-full-ratio command instead. Prerequisites Root-level access to the Ceph Monitor node. Procedure Log in to the Cephadm shell: Example Determine the current value of full_ratio ; by default, it is set to 0.95 : Temporarily increase the value of set-full-ratio to 0.97 : Important Red Hat strongly recommends not setting set-full-ratio to a value higher than 0.97. Setting this parameter to a higher value makes the recovery process harder. As a consequence, you might not be able to recover full OSDs at all. Verify that you successfully set the parameter to 0.97 : Monitor the cluster state: As soon as the cluster changes its state from full to nearfull , delete any unnecessary data. Set the value of full_ratio back to 0.95 : Verify that you successfully set the parameter to 0.95 : Additional Resources Full OSDs section in the Red Hat Ceph Storage Troubleshooting Guide . Nearfull OSDs section in the Red Hat Ceph Storage Troubleshooting Guide .
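The following is a condensed sketch of the commands used in this procedure, with the same threshold values; run them from inside the Cephadm shell and delete data appropriate to your environment where indicated:

# Raise the full ratio just enough to restore I/O (do not exceed 0.97)
ceph osd set-full-ratio 0.97
ceph osd dump | grep -i full

# ... delete unnecessary data until the cluster state changes from full to nearfull ...

# Restore the default ratio and verify it
ceph osd set-full-ratio 0.95
ceph osd dump | grep -i full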
[ "HEALTH_ERR 1 full osds osd.3 is full at 95%", "ceph df", "health: HEALTH_WARN 3 backfillfull osd(s) Low space hindering backfill (add storage if this doesn't resolve itself): 32 pgs backfill_toofull", "ceph df", "ceph osd set-backfillfull-ratio VALUE", "ceph osd set-backfillfull-ratio 0.92", "HEALTH_WARN 1 nearfull osds osd.2 is near full at 85%", "ceph osd df", "df", "HEALTH_WARN 1/3 in osds are down", "ceph health detail HEALTH_WARN 1/3 in osds are down osd.0 is down since epoch 23, last address 192.168.106.220:6800/11080", "systemctl restart ceph- FSID @osd. OSD_ID", "systemctl restart [email protected]", "FAILED assert(0 == \"hit suicide timeout\")", "dmesg", "xfs_log_force: error -5 returned", "Caught signal (Segmentation fault)", "wrongly marked me down heartbeat_check: no reply from osd.2 since back", "ceph -w | grep osds 2022-05-05 06:27:20.810535 mon.0 [INF] osdmap e609: 9 osds: 8 up, 9 in 2022-05-05 06:27:24.120611 mon.0 [INF] osdmap e611: 9 osds: 7 up, 9 in 2022-05-05 06:27:25.975622 mon.0 [INF] HEALTH_WARN; 118 pgs stale; 2/9 in osds are down 2022-05-05 06:27:27.489790 mon.0 [INF] osdmap e614: 9 osds: 6 up, 9 in 2022-05-05 06:27:36.540000 mon.0 [INF] osdmap e616: 9 osds: 7 up, 9 in 2022-05-05 06:27:39.681913 mon.0 [INF] osdmap e618: 9 osds: 8 up, 9 in 2022-05-05 06:27:43.269401 mon.0 [INF] osdmap e620: 9 osds: 9 up, 9 in 2022-05-05 06:27:54.884426 mon.0 [INF] osdmap e622: 9 osds: 8 up, 9 in 2022-05-05 06:27:57.398706 mon.0 [INF] osdmap e624: 9 osds: 7 up, 9 in 2022-05-05 06:27:59.669841 mon.0 [INF] osdmap e625: 9 osds: 6 up, 9 in 2022-05-05 06:28:07.043677 mon.0 [INF] osdmap e628: 9 osds: 7 up, 9 in 2022-05-05 06:28:10.512331 mon.0 [INF] osdmap e630: 9 osds: 8 up, 9 in 2022-05-05 06:28:12.670923 mon.0 [INF] osdmap e631: 9 osds: 9 up, 9 in", "2022-05-25 03:44:06.510583 osd.50 127.0.0.1:6801/149046 18992 : cluster [WRN] map e600547 wrongly marked me down", "2022-05-25 19:00:08.906864 7fa2a0033700 -1 osd.254 609110 heartbeat_check: no reply from osd.2 since back 2021-07-25 19:00:07.444113 front 2021-07-25 18:59:48.311935 (cutoff 2021-07-25 18:59:48.906862)", "ceph health detail HEALTH_WARN 30 requests are blocked > 32 sec; 3 osds have slow requests 30 ops are blocked > 268435 sec 1 ops are blocked > 268435 sec on osd.11 1 ops are blocked > 268435 sec on osd.18 28 ops are blocked > 268435 sec on osd.39 3 osds have slow requests", "ceph osd tree | grep down", "ceph osd set noup ceph osd set nodown", "HEALTH_WARN 30 requests are blocked > 32 sec; 3 osds have slow requests 30 ops are blocked > 268435 sec 1 ops are blocked > 268435 sec on osd.11 1 ops are blocked > 268435 sec on osd.18 28 ops are blocked > 268435 sec on osd.39 3 osds have slow requests", "2022-05-24 13:18:10.024659 osd.1 127.0.0.1:6812/3032 9 : cluster [WRN] 6 slow requests, 6 included below; oldest blocked for > 61.758455 secs", "2022-05-25 03:44:06.510583 osd.50 [WRN] slow request 30.005692 seconds old, received at {date-time}: osd_op(client.4240.0:8 benchmark_data_ceph-1_39426_object7 [write 0~4194304] 0.69848840) v4 currently waiting for subops from [610]", "cephadm shell", "ceph osd set noout", "ceph osd unset noout", "HEALTH_WARN 1/3 in osds are down osd.0 is down since epoch 23, last address 192.168.106.220:6800/11080", "cephadm shell", "ceph osd tree | grep -i down ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF 0 hdd 0.00999 osd.0 down 1.00000 1.00000", "ceph osd out OSD_ID .", "ceph osd out osd.0 marked out osd.0.", "ceph -w | grep backfill 2022-05-02 04:48:03.403872 mon.0 [INF] pgmap v10293282: 431 pgs: 
1 active+undersized+degraded+remapped+backfilling, 28 active+undersized+degraded, 49 active+undersized+degraded+remapped+wait_backfill, 59 stale+active+clean, 294 active+clean; 72347 MB data, 101302 MB used, 1624 GB / 1722 GB avail; 227 kB/s rd, 1358 B/s wr, 12 op/s; 10626/35917 objects degraded (29.585%); 6757/35917 objects misplaced (18.813%); 63500 kB/s, 15 objects/s recovering 2022-05-02 04:48:04.414397 mon.0 [INF] pgmap v10293283: 431 pgs: 2 active+undersized+degraded+remapped+backfilling, 75 active+undersized+degraded+remapped+wait_backfill, 59 stale+active+clean, 295 active+clean; 72347 MB data, 101398 MB used, 1623 GB / 1722 GB avail; 969 kB/s rd, 6778 B/s wr, 32 op/s; 10626/35917 objects degraded (29.585%); 10580/35917 objects misplaced (29.457%); 125 MB/s, 31 objects/s recovering 2022-05-02 04:48:00.380063 osd.1 [INF] 0.6f starting backfill to osd.0 from (0'0,0'0] MAX to 2521'166639 2022-05-02 04:48:00.380139 osd.1 [INF] 0.48 starting backfill to osd.0 from (0'0,0'0] MAX to 2513'43079 2022-05-02 04:48:00.380260 osd.1 [INF] 0.d starting backfill to osd.0 from (0'0,0'0] MAX to 2513'136847 2022-05-02 04:48:00.380849 osd.1 [INF] 0.71 starting backfill to osd.0 from (0'0,0'0] MAX to 2331'28496 2022-05-02 04:48:00.381027 osd.1 [INF] 0.51 starting backfill to osd.0 from (0'0,0'0] MAX to 2513'87544", "ceph orch daemon stop OSD_ID", "ceph orch daemon stop osd.0", "ceph orch osd rm OSD_ID --replace", "ceph orch osd rm 0 --replace", "ceph orch apply osd --all-available-devices", "ceph orch apply osd --all-available-devices --unmanaged=true", "ceph orch daemon add osd host02:/dev/sdb", "ceph osd tree", "sysctl -w kernel.pid.max=4194303", "kernel.pid.max = 4194303", "cephadm shell", "ceph osd dump | grep -i full full_ratio 0.95", "ceph osd set-full-ratio 0.97", "ceph osd dump | grep -i full full_ratio 0.97", "ceph -w", "ceph osd set-full-ratio 0.95", "ceph osd dump | grep -i full full_ratio 0.95" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html/troubleshooting_guide/troubleshooting-ceph-osds
4.2.5. Off-Line Backup Storage
4.2.5. Off-Line Backup Storage Off-line backup storage takes a step beyond hard drive storage in terms of capacity (higher) and speed (slower). Here, capacities are effectively limited only by your ability to procure and store the removable media. The actual technologies used in these devices vary widely. Here are the more popular types: Magnetic tape Optical disk Of course, having removable media means that access times become even longer, particularly when the desired data is on media not currently loaded in the storage device. This situation is alleviated somewhat by the use of robotic devices capable of automatically loading and unloading media, but the media storage capacities of such devices are still finite. Even in the best of cases, access times are measured in seconds, which is much longer than the relatively slow multi-millisecond access times typical of a high-performance hard drive. Now that we have briefly studied the various storage technologies in use today, let us explore basic virtual memory concepts.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/introduction_to_system_administration/s2-memory-backup
Chapter 4. Revoking two-factor authentication when your authenticator device is lost
Chapter 4. Revoking two-factor authentication when your authenticator device is lost You can revoke the two-factor authentication protection on your Red Hat account when your authenticator device is lost and you have no recovery codes available, or when you have no other way to log in to your account with two-factor authentication enabled. Red Hat Customer Service can do this immediately with a phone call or with a seven-day email response. All requests to revoke two-factor authentication must be made by phone. You cannot revoke two-factor authentication with an email request or other online request. See Section 2.2, "Verifying your account information" for information about setting up your contact phone number. Important Account verification through a phone call from Red Hat Customer Service to your account phone number is the only method approved by the Red Hat security teams for quickly allowing two-factor authentication settings to be revoked. There are no exceptions to this process. Note Password resets are done through email, using the email address for your account. You cannot revoke two-factor authentication through email, and you cannot reset your password through a phone call. 4.1. Revoking two-factor authentication immediately To immediately revoke two-factor authentication on your account, you must be reachable by phone. Red Hat Customer Service places a call to the phone number of record for your account. This two-step process with outgoing call confirmation protects the security of your account. It is the only method approved by the Red Hat security team that allows two-factor authentication settings to be revoked by phone. If you can't accept a return call from Red Hat Customer Service , the two-factor authentication on your account can't be quickly revoked. After two-factor authentication is revoked, you can log in using your valid password. Depending on your organization policy, you might be required to immediately enable two-factor authentication after you log in. 4.2. Revoking two-factor authentication with a 7-day waiting period When you cannot accept a call to the phone number of record for your account, the Red Hat Customer Service team sends an email notification to the email address associated with your account. The email notifies the account holder that two-factor authentication will be revoked in 7 days. You can reply to the notification email if you decide you do not want two-factor authentication revoked.
null
https://docs.redhat.com/en/documentation/red_hat_customer_portal/1/html/using_two-factor_authentication/proc-ciam-2fa-revoke_two-factor-authentication
Chapter 2. Instance boot source
Chapter 2. Instance boot source The boot source for an instance can be an image or a bootable volume. The instance disk of an instance that you boot from an image is controlled by the Compute service and deleted when the instance is deleted. The instance disk of an instance that you boot from a volume is controlled by the Block Storage service and is stored remotely. An image contains a bootable operating system. The Image Service (glance) controls image storage and management. You can launch any number of instances from the same base image. Each instance runs from a copy of the base image. Any changes that you make to the instance do not affect the base image. A bootable volume is a block storage volume created from an image that contains a bootable operating system. The instance can use the bootable volume to persist instance data after the instance is deleted. You can use an existing persistent root volume when you launch an instance. You can also create persistent storage when you launch an instance from an image, so that you can save the instance data when the instance is deleted. A new persistent storage volume is created automatically when you create an instance from a volume snapshot. The following diagram shows the instance disks and storage that you can create when you launch an instance. The actual instance disks and storage created depend on the boot source and flavor used.
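For example, the following is a minimal sketch of launching an instance from each boot source with the OpenStack CLI; the image, volume, and flavor names are assumptions:

# Boot from an image: the instance disk is controlled by the Compute service
openstack server create --image rhel-9 --flavor m1.small instance-from-image

# Boot from a bootable volume: the root disk is a Block Storage volume that can persist
openstack server create --volume my-bootable-volume --flavor m1.small instance-from-volume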
null
https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/creating_and_managing_instances/con_instance-boot-source_osp
Preface
Preface Note You cannot apply a role-based access control (RBAC)-shared security group directly to an instance during instance creation. To apply an RBAC-shared security group to an instance, you must first create the port, apply the shared security group to that port, and then assign that port to the instance. See Adding a security group to a port .
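The following is a minimal sketch of that sequence with the OpenStack CLI; the network, security group, port, image, and flavor names are assumptions:

# Create a port and attach the shared security group to it
openstack port create --network private --security-group shared-secgroup port-with-shared-sg

# Launch the instance with the pre-created port (use the port ID if your client does not resolve names)
openstack server create --image rhel-9 --flavor m1.small --nic port-id=port-with-shared-sg my-instance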
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/users_and_identity_management_guide/pr01
Chapter 13. Azure Storage Blob Service
Chapter 13. Azure Storage Blob Service Both producer and consumer are supported The Azure Storage Blob component is used for storing and retrieving blobs from Azure Storage Blob Service using Azure APIs v12 . However in case of versions above v12, we will see if this component can adopt these changes depending on how much breaking changes can result. Prerequisites You must have a valid Windows Azure Storage account. More information is available at Azure Documentation Portal . 13.1. Dependencies When using azure-storage-blob with Red Hat build of Camel Spring Boot, add the following Maven dependency to your pom.xml to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-azure-storage-blob-starter</artifactId> </dependency> 13.2. URI Format azure-storage-blob://accountName[/containerName][?options] In case of consumer, accountName , containerName are required. In case of producer, it depends on the operation that being requested, for example if operation is on a container level, for example, createContainer , accountName and containerName are only required, but in case of operation being requested in blob level, for example, getBlob , accountName , containerName and blobName are required. The blob will be created if it does not already exist. You can append query options to the URI in the following format, ?options=value&option2=value&... 13.3. Configuring Options Camel components are configured on two levels: Component level Endpoint level 13.3.1. Component Level Options The component level is the highest level. The configurations you define at this level are inherited by all the endpoints. For example, a component can have security settings, credentials for authentication, urls for network connection, and so on. Since components typically have pre-configured defaults for the most common cases, you may need to only configure a few component options, or maybe none at all. You can configure components with Component DSL in a configuration file (application.properties|yaml), or directly with Java code. 13.3.2. Endpoint Level Options At the Endpoint level you have many options, which you can use to configure what you want the endpoint to do. The options are categorized according to whether the endpoint is used as a consumer (from) or as a producer (to) or used for both. You can configure endpoints directly in the endpoint URI as path and query parameters. You can also use Endpoint DSL and DataFormat DSL as type safe ways of configuring endpoints and data formats in Java. When configuring options, use Property Placeholders for urls, port numbers, sensitive information, and other settings. Placeholders allows you to externalize the configuration from your code, giving you more flexible and reusable code. 13.4. Component Options The Azure Storage Blob Service component supports 35 options, which are listed below. Name Description Default Type blobName (common) The blob name, to consume specific blob from a container. However on producer, is only required for the operations on the blob level. String blobOffset (common) Set the blob offset for the upload or download operations, default is 0. 0 long blobType (common) The blob type in order to initiate the appropriate settings for each blob type. Enum values: blockblob appendblob pageblob blockblob BlobType closeStreamAfterRead (common) Close the stream after read or keep it open, default is true. true boolean configuration (common) The component configurations. 
BlobConfiguration credentials (common) StorageSharedKeyCredential can be injected to create the azure client, this holds the important authentication information. StorageSharedKeyCredential credentialType (common) Determines the credential strategy to adopt. Enum values: SHARED_ACCOUNT_KEY SHARED_KEY_CREDENTIAL AZURE_IDENTITY AZURE_SAS AZURE_IDENTITY CredentialType dataCount (common) How many bytes to include in the range. Must be greater than or equal to 0 if specified. Long fileDir (common) The file directory where the downloaded blobs will be saved to, this can be used in both, producer and consumer. String maxResultsPerPage (common) Specifies the maximum number of blobs to return, including all BlobPrefix elements. If the request does not specify maxResultsPerPage or specifies a value greater than 5,000, the server will return up to 5,000 items. Integer maxRetryRequests (common) Specifies the maximum number of additional HTTP Get requests that will be made while reading the data from a response body. 0 int prefix (common) Filters the results to return only blobs whose names begin with the specified prefix. May be null to return all blobs. String regex (common) Filters the results to return only blobs whose names match the specified regular expression. May be null to return all if both prefix and regex are set, regex takes the priority and prefix is ignored. String sasToken (common) Set a SAS Token in case of usage of Shared Access Signature String serviceClient (common) Autowired Client to a storage account. This client does not hold any state about a particular storage account but is instead a convenient way of sending off appropriate requests to the resource on the service. It may also be used to construct URLs to blobs and containers. This client contains operations on a service account. Operations on a container are available on BlobContainerClient through BlobServiceClient#getBlobContainerClient(String), and operations on a blob are available on BlobClient through BlobContainerClient#getBlobClient(String). BlobServiceClient timeout (common) An optional timeout value beyond which a RuntimeException will be raised. Duration bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean blobSequenceNumber (producer) A user-controlled value that you can use to track requests. The value of the sequence number must be between 0 and 263 - 1.The default value is 0. 0 Long blockListType (producer) Specifies which type of blocks to return. Enum values: committed uncommitted all COMMITTED BlockListType changeFeedContext (producer) When using getChangeFeed producer operation, this gives additional context that is passed through the Http pipeline during the service call. Context changeFeedEndTime (producer) When using getChangeFeed producer operation, this filters the results to return events approximately before the end time. Note: A few events belonging to the hour can also be returned. A few events belonging to this hour can be missing; to ensure all events from the hour are returned, round the end time up by an hour. 
OffsetDateTime changeFeedStartTime (producer) When using getChangeFeed producer operation, this filters the results to return events approximately after the start time. Note: A few events belonging to the hour can also be returned. A few events belonging to this hour can be missing; to ensure all events from the hour are returned, round the start time down by an hour. OffsetDateTime closeStreamAfterWrite (producer) Close the stream after write or keep it open, default is true. true boolean commitBlockListLater (producer) When is set to true, the staged blocks will not be committed directly. true boolean createAppendBlob (producer) When is set to true, the append blocks will be created when committing append blocks. true boolean createPageBlob (producer) When is set to true, the page blob will be created when uploading page blob. true boolean downloadLinkExpiration (producer) Override the default expiration (millis) of URL download link. Long lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean operation (producer) The blob operation that can be used with this component on the producer. Enum values: listBlobContainers createBlobContainer deleteBlobContainer listBlobs getBlob deleteBlob downloadBlobToFile downloadLink uploadBlockBlob stageBlockBlobList commitBlobBlockList getBlobBlockList createAppendBlob commitAppendBlob createPageBlob uploadPageBlob resizePageBlob clearPageBlob getPageBlobRanges listBlobContainers BlobOperationsDefinition pageBlobSize (producer) Specifies the maximum size for the page blob, up to 8 TB. The page blob size must be aligned to a 512-byte boundary. 512 Long autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean healthCheckConsumerEnabled (health) Used for enabling or disabling all consumer based health checks from this component. true boolean healthCheckProducerEnabled (health) Used for enabling or disabling all producer based health checks from this component. Notice: Camel has by default disabled all producer based health-checks. You can turn on producer checks globally by setting camel.health.producersEnabled=true. true boolean accessKey (security) Access key for the associated azure account name to be used for authentication with azure blob services. String sourceBlobAccessKey (security) Source Blob Access Key: for copyblob operation, sadly, we need to have an accessKey for the source blob we want to copy Passing an accessKey as header, it's unsafe so we could set as key. String 13.5. Endpoint Options The Azure Storage Blob Service endpoint is configured using URI syntax: with the following path and query parameters: 13.5.1. 
Path Parameters (2 parameters) Name Description Default Type accountName (common) Azure account name to be used for authentication with azure blob services. String containerName (common) The blob container name. String 13.5.2. Query Parameters (50 parameters) Name Description Default Type blobName (common) The blob name, to consume specific blob from a container. However on producer, is only required for the operations on the blob level. String blobOffset (common) Set the blob offset for the upload or download operations, default is 0. 0 long blobServiceClient (common) Client to a storage account. This client does not hold any state about a particular storage account but is instead a convenient way of sending off appropriate requests to the resource on the service. It may also be used to construct URLs to blobs and containers. This client contains operations on a service account. Operations on a container are available on BlobContainerClient through getBlobContainerClient(String), and operations on a blob are available on BlobClient through getBlobContainerClient(String).getBlobClient(String). BlobServiceClient blobType (common) The blob type in order to initiate the appropriate settings for each blob type. Enum values: blockblob appendblob pageblob blockblob BlobType closeStreamAfterRead (common) Close the stream after read or keep it open, default is true. true boolean credentials (common) StorageSharedKeyCredential can be injected to create the azure client, this holds the important authentication information. StorageSharedKeyCredential credentialType (common) Determines the credential strategy to adopt. Enum values: SHARED_ACCOUNT_KEY SHARED_KEY_CREDENTIAL AZURE_IDENTITY AZURE_SAS AZURE_IDENTITY CredentialType dataCount (common) How many bytes to include in the range. Must be greater than or equal to 0 if specified. Long fileDir (common) The file directory where the downloaded blobs will be saved to, this can be used in both, producer and consumer. String maxResultsPerPage (common) Specifies the maximum number of blobs to return, including all BlobPrefix elements. If the request does not specify maxResultsPerPage or specifies a value greater than 5,000, the server will return up to 5,000 items. Integer maxRetryRequests (common) Specifies the maximum number of additional HTTP Get requests that will be made while reading the data from a response body. 0 int prefix (common) Filters the results to return only blobs whose names begin with the specified prefix. May be null to return all blobs. String regex (common) Filters the results to return only blobs whose names match the specified regular expression. May be null to return all if both prefix and regex are set, regex takes the priority and prefix is ignored. String sasToken (common) Set a SAS Token in case of usage of Shared Access Signature. String serviceClient (common) Autowired Client to a storage account. This client does not hold any state about a particular storage account but is instead a convenient way of sending off appropriate requests to the resource on the service. It may also be used to construct URLs to blobs and containers. This client contains operations on a service account. Operations on a container are available on BlobContainerClient through BlobServiceClient#getBlobContainerClient(String), and operations on a blob are available on BlobClient through BlobContainerClient#getBlobClient(String). BlobServiceClient timeout (common) An optional timeout value beyond which a RuntimeException will be raised. 
Duration bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean sendEmptyMessageWhenIdle (consumer) If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead. false boolean exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut InOptionalOut ExchangePattern pollStrategy (consumer (advanced)) A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange have been created and being routed in Camel. PollingConsumerPollStrategy blobSequenceNumber (producer) A user-controlled value that you can use to track requests. The value of the sequence number must be between 0 and 263 - 1.The default value is 0. 0 Long blockListType (producer) Specifies which type of blocks to return. Enum values: committed uncommitted all COMMITTED BlockListType changeFeedContext (producer) When using getChangeFeed producer operation, this gives additional context that is passed through the Http pipeline during the service call. Context changeFeedEndTime (producer) When using getChangeFeed producer operation, this filters the results to return events approximately before the end time. Note: A few events belonging to the hour can also be returned. A few events belonging to this hour can be missing; to ensure all events from the hour are returned, round the end time up by an hour. OffsetDateTime changeFeedStartTime (producer) When using getChangeFeed producer operation, this filters the results to return events approximately after the start time. Note: A few events belonging to the hour can also be returned. A few events belonging to this hour can be missing; to ensure all events from the hour are returned, round the start time down by an hour. OffsetDateTime closeStreamAfterWrite (producer) Close the stream after write or keep it open, default is true. true boolean commitBlockListLater (producer) When is set to true, the staged blocks will not be committed directly. true boolean createAppendBlob (producer) When is set to true, the append blocks will be created when committing append blocks. true boolean createPageBlob (producer) When is set to true, the page blob will be created when uploading page blob. true boolean downloadLinkExpiration (producer) Override the default expiration (millis) of URL download link. Long lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. 
By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean operation (producer) The blob operation that can be used with this component on the producer. Enum values: listBlobContainers createBlobContainer deleteBlobContainer listBlobs getBlob deleteBlob downloadBlobToFile downloadLink uploadBlockBlob stageBlockBlobList commitBlobBlockList getBlobBlockList createAppendBlob commitAppendBlob createPageBlob uploadPageBlob resizePageBlob clearPageBlob getPageBlobRanges listBlobContainers BlobOperationsDefinition pageBlobSize (producer) Specifies the maximum size for the page blob, up to 8 TB. The page blob size must be aligned to a 512-byte boundary. 512 Long backoffErrorThreshold (scheduler) The number of subsequent error polls (failed due some error) that should happen before the backoffMultipler should kick-in. int backoffIdleThreshold (scheduler) The number of subsequent idle polls that should happen before the backoffMultipler should kick-in. int backoffMultiplier (scheduler) To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the actual attempt is happening again. When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured. int delay (scheduler) Milliseconds before the poll. 500 long greedy (scheduler) If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the run polled 1 or more messages. false boolean initialDelay (scheduler) Milliseconds before the first poll starts. 1000 long repeatCount (scheduler) Specifies a maximum limit of number of fires. So if you set it to 1, the scheduler will only fire once. If you set it to 5, it will only fire five times. A value of zero or negative means fire forever. 0 long runLoggingLevel (scheduler) The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that. Enum values: TRACE DEBUG INFO WARN ERROR OFF TRACE LoggingLevel scheduledExecutorService (scheduler) Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool. ScheduledExecutorService scheduler (scheduler) To use a cron scheduler from either camel-spring or camel-quartz component. Use value spring or quartz for built in scheduler. none Object schedulerProperties (scheduler) To configure additional properties when using a custom scheduler or any of the Quartz, Spring based scheduler. Map startScheduler (scheduler) Whether the scheduler should be auto started. true boolean timeUnit (scheduler) Time unit for initialDelay and delay options. Enum values: NANOSECONDS MICROSECONDS MILLISECONDS SECONDS MINUTES HOURS DAYS MILLISECONDS TimeUnit useFixedDelay (scheduler) Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details. true boolean accessKey (security) Access key for the associated azure account name to be used for authentication with azure blob services. 
String sourceBlobAccessKey (security) Source Blob Access Key: for copyblob operation, sadly, we need to have an accessKey for the source blob we want to copy Passing an accessKey as header, it's unsafe so we could set as key. String Required information options To use this component, you have 3 options in order to provide the required Azure authentication information: Provide a link: BlobServiceClient instance which can be injected into blobServiceClient. Note: No need to create a specific client, for Examples: BlockBlobClient , the BlobServiceClient represents the upper level which can be used to retrieve lower level clients. Provide an Azure Identity, when specifying credentialType=AZURE_IDENTITY and providing required environment variables . This enables service principal (for example, app registration) authentication with secret/certificate as well as username password. Note that this is the default authentication strategy. Provide a shared storage account key, when specifying credentialType=SHARED_ACCOUNT_KEY and providing accountName and accessKey for your Azure account, this is the simplest way to get started. The accessKey can be generated through your Azure portal. Provide a shared storage account key, when specifying credentialType=SHARED_KEY_CREDENTIAL and providing a StorageSharedKeyCredential instance which can be injected into credentials option. Via Azure SAS, when specifying credentialType=AZURE_SAS and providing a SAS Token parameter through the sasToken parameter. 13.6. Usage For example, in order to download a blob content from the block blob hello.txt located on the container1 in the camelazure storage account, use the following snippet: from("azure-storage-blob://camelazure/container1?blobName=hello.txt&credentialType=SHARED_ACCOUNT_KEY&accessKey=RAW(yourAccessKey)").to("file://blobdirectory"); 13.6.1. Message headers The Azure Storage Blob Service component supports 63 message header(s), which is/are listed below: Header Variable Name Type Operations Description CamelAzureStorageBlobTimeout BlobConstants.TIMEOUT Duration All An optional timeout value beyond which a {@link RuntimeException} will be raised. CamelAzureStorageBlobMetadata BlobConstants.METADATA Map<String,String> Operations related to container and blob Metadata to associate with the container or blob. CamelAzureStorageBlobPublicAccessType BlobConstants.PUBLIC_ACCESS_TYPE PublicAccessType createContainer Specifies how the data in this container is available to the public. Pass null for no public access. CamelAzureStorageBlobRequestCondition BlobConstants.BLOB_REQUEST_CONDITION BlobRequestConditions Operations related to container and blob This contains values which will restrict the successful operation of a variety of requests to the conditions present. These conditions are entirely optional. CamelAzureStorageBlobListDetails BlobConstants.BLOB_LIST_DETAILS BlobListDetails listBlobs The details for listing specific blobs CamelAzureStorageBlobPrefix BlobConstants.PREFIX String listBlobs , getBlob Filters the results to return only blobs whose names begin with the specified prefix. May be null to return all blobs. CamelAzureStorageBlobMaxResultsPerPage BlobConstants.MAX_RESULTS_PER_PAGE Integer listBlobs Specifies the maximum number of blobs to return, including all BlobPrefix elements. If the request does not specify maxResultsPerPage or specifies a value greater than 5,000, the server will return up to 5,000 items. 
CamelAzureStorageBlobListBlobOptions BlobConstants.LIST_BLOB_OPTIONS ListBlobsOptions listBlobs Defines options available to configure the behavior of a call to listBlobsFlatSegment on a {@link BlobContainerClient} object. CamelAzureStorageBlobHttpHeaders BlobConstants.BLOB_HTTP_HEADERS BlobHttpHeaders uploadBlockBlob , commitBlobBlockList , createAppendBlob , createPageBlob Additional parameters for a set of operations. CamelAzureStorageBlobAccessTier BlobConstants.ACCESS_TIER AccessTier uploadBlockBlob , commitBlobBlockList Defines values for AccessTier. CamelAzureStorageBlobContentMD5 BlobConstants.CONTENT_MD5 byte[] Most operations related to upload blob An MD5 hash of the block content. This hash is used to verify the integrity of the block during transport. When this header is specified, the storage service compares the hash of the content that has arrived with this header value. Note that this MD5 hash is not stored with the blob. If the two hashes do not match, the operation will fail. CamelAzureStorageBlobPageBlobRange BlobConstants.PAGE_BLOB_RANGE PageRange Operations related to page blob A {@link PageRange} object. Given that pages must be aligned with 512-byte boundaries, the start offset must be a modulus of 512 and the end offset must be a modulus of 512 - 1. Examples of valid byte ranges are 0-511, 512-1023, etc. CamelAzureStorageBlobCommitBlobBlockListLater BlobConstants.COMMIT_BLOCK_LIST_LATER boolean stageBlockBlobList When is set to true , the staged blocks will not be committed directly. CamelAzureStorageBlobCreateAppendBlob BlobConstants.CREATE_APPEND_BLOB boolean commitAppendBlob When is set to true , the append blocks will be created when committing append blocks. CamelAzureStorageBlobCreatePageBlob BlobConstants.CREATE_PAGE_BLOB boolean uploadPageBlob When is set to true , the page blob will be created when uploading page blob. CamelAzureStorageBlobBlockListType BlobConstants.BLOCK_LIST_TYPE BlockListType getBlobBlockList Specifies which type of blocks to return. CamelAzureStorageBlobPageBlobSize BlobConstants.PAGE_BLOB_SIZE Long createPageBlob , resizePageBlob Specifies the maximum size for the page blob, up to 8 TB. The page blob size must be aligned to a 512-byte boundary. CamelAzureStorageBlobSequenceNumber BlobConstants.BLOB_SEQUENCE_NUMBER Long createPageBlob A user-controlled value that you can use to track requests. The value of the sequence number must be between 0 and 2^63 - 1.The default value is 0. CamelAzureStorageBlobDeleteSnapshotsOptionType BlobConstants.DELETE_SNAPSHOT_OPTION_TYPE DeleteSnapshotsOptionType deleteBlob Specifies the behavior for deleting the snapshots on this blob. \{@code Include} will delete the base blob and all snapshots. \{@code Only} will delete only the snapshots. If a snapshot is being deleted, you must pass null. CamelAzureStorageBlobListBlobContainersOptions BlobConstants.LIST_BLOB_CONTAINERS_OPTIONS ListBlobContainersOptions listBlobContainers A {@link ListBlobContainersOptions} which specifies what data should be returned by the service. CamelAzureStorageBlobParallelTransferOptions BlobConstants.PARALLEL_TRANSFER_OPTIONS ParallelTransferOptions downloadBlobToFile {@link ParallelTransferOptions} to use to download to file. Number of parallel transfers parameter is ignored. CamelAzureStorageBlobFileDir BlobConstants.FILE_DIR String downloadBlobToFile The file directory where the downloaded blobs will be saved to. 
CamelAzureStorageBlobDownloadLinkExpiration BlobConstants.DOWNLOAD_LINK_EXPIRATION Long downloadLink Override the default expiration (millis) of URL download link. CamelAzureStorageBlobBlobName BlobConstants.BLOB_NAME String Operations related to blob Override/set the blob name on the exchange headers. CamelAzureStorageBlobContainerName BlobConstants.BLOB_CONTAINER_NAME String Operations related to container and blob Override/set the container name on the exchange headers. CamelAzureStorageBlobOperation BlobConstants.BLOB_OPERATION BlobOperationsDefinition All Specify the producer operation to execute, please see the doc on this page related to producer operation. CamelAzureStorageBlobRegex BlobConstants.REGEX String listBlobs , getBlob Filters the results to return only blobs whose names match the specified regular expression. May be null to return all. If both prefix and regex are set, regex takes the priority and prefix is ignored. CamelAzureStorageBlobChangeFeedStartTime BlobConstants.CHANGE_FEED_START_TIME OffsetDateTime getChangeFeed It filters the results to return events approximately after the start time. Note: A few events belonging to the hour can also be returned. A few events belonging to this hour can be missing; to ensure all events from the hour are returned, round the start time down by an hour. CamelAzureStorageBlobChangeFeedEndTime BlobConstants.CHANGE_FEED_END_TIME OffsetDateTime getChangeFeed It filters the results to return events approximately before the end time. Note: A few events belonging to the hour can also be returned. A few events belonging to this hour can be missing; to ensure all events from the hour are returned, round the end time up by an hour. CamelAzureStorageBlobChangeFeedContext BlobConstants.CHANGE_FEED_CONTEXT Context getChangeFeed This gives additional context that is passed through the Http pipeline during the service call. CamelAzureStorageBlobSourceBlobAccountName BlobConstants.SOURCE_BLOB_ACCOUNT_NAME String copyBlob The source blob account name to be used as source account name in a copy blob operation CamelAzureStorageBlobSourceBlobContainerName BlobConstants.SOURCE_BLOB_CONTAINER_NAME String copyBlob The source blob container name to be used as source container name in a copy blob operation 13.6.2. Message headers set by either component producer or consumer Header Variable Name Type Description CamelAzureStorageBlobAccessTier BlobConstants.ACCESS_TIER AccessTier Access tier of the blob. CamelAzureStorageBlobAccessTierChangeTime BlobConstants.ACCESS_TIER_CHANGE_TIME OffsetDateTime Datetime when the access tier of the blob last changed. CamelAzureStorageBlobArchiveStatus BlobConstants.ARCHIVE_STATUS ArchiveStatus Archive status of the blob. CamelAzureStorageBlobCreationTime BlobConstants.CREATION_TIME OffsetDateTime Creation time of the blob. CamelAzureStorageBlobSequenceNumber BlobConstants.BLOB_SEQUENCE_NUMBER Long The current sequence number for a page blob. CamelAzureStorageBlobBlobSize BlobConstants.BLOB_SIZE long The size of the blob. CamelAzureStorageBlobBlobType BlobConstants.BLOB_TYPE BlobType The type of the blob. CamelAzureStorageBlobCacheControl BlobConstants.CACHE_CONTROL String Cache control specified for the blob. CamelAzureStorageBlobCommittedBlockCount BlobConstants.COMMITTED_BLOCK_COUNT Integer Number of blocks committed to an append blob CamelAzureStorageBlobContentDisposition BlobConstants.CONTENT_DISPOSITION String Content disposition specified for the blob. 
CamelAzureStorageBlobContentEncoding BlobConstants.CONTENT_ENCODING String Content encoding specified for the blob. CamelAzureStorageBlobContentLanguage BlobConstants.CONTENT_LANGUAGE String Content language specified for the blob. CamelAzureStorageBlobContentMd5 BlobConstants.CONTENT_MD5 byte[] Content MD5 specified for the blob. CamelAzureStorageBlobContentType BlobConstants.CONTENT_TYPE String Content type specified for the blob. CamelAzureStorageBlobCopyCompletionTime BlobConstants.COPY_COMPILATION_TIME OffsetDateTime Datetime when the last copy operation on the blob completed. CamelAzureStorageBlobCopyDestinationSnapshot BlobConstants.COPY_DESTINATION_SNAPSHOT String Snapshot identifier of the last incremental copy snapshot for the blob. CamelAzureStorageBlobCopyId BlobConstants.COPY_ID String Identifier of the last copy operation performed on the blob. CamelAzureStorageBlobCopyProgress BlobConstants.COPY_PROGRESS String Progress of the last copy operation performed on the blob. CamelAzureStorageBlobCopySource BlobConstants.COPY_SOURCE String Source of the last copy operation performed on the blob. CamelAzureStorageBlobCopyStatus BlobConstants.COPY_STATUS CopyStatusType Status of the last copy operation performed on the blob. CamelAzureStorageBlobCopyStatusDescription BlobConstants.COPY_STATUS_DESCRIPTION String Description of the last copy operation on the blob. CamelAzureStorageBlobETag BlobConstants.E_TAG String The E Tag of the blob CamelAzureStorageBlobIsAccessTierInferred BlobConstants.IS_ACCESS_TIER_INFRRRED boolean Flag indicating if the access tier of the blob was inferred from properties of the blob. CamelAzureStorageBlobIsIncrementalCopy BlobConstants.IS_INCREMENTAL_COPY boolean Flag indicating if the blob was incrementally copied. CamelAzureStorageBlobIsServerEncrypted BlobConstants.IS_SERVER_ENCRYPTED boolean Flag indicating if the blob's content is encrypted on the server. CamelAzureStorageBlobLastModified BlobConstants.LAST_MODIFIED OffsetDateTime Datetime when the blob was last modified. CamelAzureStorageBlobLeaseDuration BlobConstants.LEASE_DURATION LeaseDurationType Type of lease on the blob. CamelAzureStorageBlobLeaseState BlobConstants.LEASE_STATE LeaseStateType State of the lease on the blob. CamelAzureStorageBlobLeaseStatus BlobConstants.LEASE_STATUS LeaseStatusType Status of the lease on the blob. CamelAzureStorageBlobMetadata BlobConstants.METADATA Map<String, String> Additional metadata associated with the blob. CamelAzureStorageBlobAppendOffset BlobConstants.APPEND_OFFSET String The offset at which the block was committed to the block blob. CamelAzureStorageBlobFileName BlobConstants.FILE_NAME String The downloaded filename from the operation downloadBlobToFile . CamelAzureStorageBlobDownloadLink BlobConstants.DOWNLOAD_LINK String The download link generated by downloadLink operation. CamelAzureStorageBlobRawHttpHeaders BlobConstants.RAW_HTTP_HEADERS HttpHeaders Returns non-parsed httpHeaders that can be used by the user. 13.6.3. 
Advanced Azure Storage Blob configuration If your Camel Application is running behind a firewall or if you need to have more control over the BlobServiceClient instance configuration, you can create your own instance: StorageSharedKeyCredential credential = new StorageSharedKeyCredential("yourAccountName", "yourAccessKey"); String uri = String.format("https://%s.blob.core.windows.net", "yourAccountName"); BlobServiceClient client = new BlobServiceClientBuilder() .endpoint(uri) .credential(credential) .buildClient(); // This is camel context context.getRegistry().bind("client", client); Then refer to this instance in your Camel azure-storage-blob component configuration: from("azure-storage-blob://cameldev/container1?blobName=myblob&serviceClient=#client") .to("mock:result"); 13.6.4. Automatic detection of BlobServiceClient client in registry The component is capable of detecting the presence of an BlobServiceClient bean into the registry. If it's the only instance of that type it will be used as client and you won't have to define it as uri parameter, like the example above. This may be really useful for smarter configuration of the endpoint. 13.6.5. Azure Storage Blob Producer operations Camel Azure Storage Blob component provides wide range of operations on the producer side: Operations on the service level For these operations, accountName is required . Operation Description listBlobContainers Get the content of the blob. You can restrict the output of this operation to a blob range. getChangeFeed Returns transaction logs of all the changes that occur to the blobs and the blob metadata in your storage account. The change feed provides ordered, guaranteed, durable, immutable, read-only log of these changes. Operations on the container level For these operations, accountName and containerName are required . Operation Description createBlobContainer Creates a new container within a storage account. If a container with the same name already exists, the producer will ignore it. deleteBlobContainer Deletes the specified container in the storage account. If the container doesn't exist the operation fails. listBlobs Returns a list of blobs in this container, with folder structures flattened. Operations on the blob level For these operations, accountName , containerName and blobName are required . Operation Blob Type Description getBlob Common Get the content of the blob. You can restrict the output of this operation to a blob range. deleteBlob Common Delete a blob. downloadBlobToFile Common Downloads the entire blob into a file specified by the path.The file will be created and must not exist, if the file already exists a {@link FileAlreadyExistsException} will be thrown. downloadLink Common Generates the download link for the specified blob using shared access signatures (SAS). This by default only limit to 1hour of allowed access. However, you can override the default expiration duration through the headers. uploadBlockBlob BlockBlob Creates a new block blob, or updates the content of an existing block blob. Updating an existing block blob overwrites any existing metadata on the blob. Partial updates are not supported with PutBlob; the content of the existing blob is overwritten with the new content. stageBlockBlobList BlockBlob Uploads the specified block to the block blob's "staging area" to be later committed by a call to commitBlobBlockList. 
However, if the header CamelAzureStorageBlobCommitBlobBlockListLater or the commitBlockListLater option is set to false, the blocks are committed immediately after staging. commitBlobBlockList BlockBlob Writes a blob by specifying the list of block IDs that are to make up the blob. In order to be written as part of a blob, a block must have been successfully written to the server in a prior stageBlockBlobList operation. You can call commitBlobBlockList to update a blob by uploading only those blocks that have changed, then committing the new and existing blocks together. Any blocks not specified in the block list are permanently deleted. getBlobBlockList BlockBlob Returns the list of blocks that have been uploaded as part of a block blob using the specified block list filter. createAppendBlob AppendBlob Creates a 0-length append blob. Call the commitAppendBlob operation to append data to an append blob. commitAppendBlob AppendBlob Commits a new block of data to the end of the existing append blob. If the header CamelAzureStorageBlobCreateAppendBlob or the createAppendBlob option is set to true, it will attempt to create the append blob through an internal call to the createAppendBlob operation before committing. createPageBlob PageBlob Creates a page blob of the specified length. Call the uploadPageBlob operation to upload data to a page blob. uploadPageBlob PageBlob Writes one or more pages to the page blob. The write size must be a multiple of 512. If the header CamelAzureStorageBlobCreatePageBlob or the createPageBlob option is set to true, it will attempt to create the page blob through an internal call to the createPageBlob operation before uploading. resizePageBlob PageBlob Resizes the page blob to the specified size (which must be a multiple of 512). clearPageBlob PageBlob Frees the specified pages from the page blob. The size of the range must be a multiple of 512. getPageBlobRanges PageBlob Returns the list of valid page ranges for a page blob or snapshot of a page blob. copyBlob Common Copies a blob from one container to another, even across different accounts. Refer to the example section on this page to learn how to use these operations in your Camel application. 13.6.6. Consumer Examples To consume a blob into a file using the file component: from("azure-storage-blob://camelazure/container1?blobName=hello.txt&accountName=yourAccountName&accessKey=yourAccessKey"). to("file://blobdirectory"); However, you can also write to a file directly without using the file component; specify the fileDir folder path in order to save the blob on your machine. from("azure-storage-blob://camelazure/container1?blobName=hello.txt&accountName=yourAccountName&accessKey=yourAccessKey&fileDir=/var/to/awesome/dir"). to("mock:results"); The component also supports the batch consumer, so you can consume multiple blobs by specifying only the container name; the consumer returns multiple exchanges depending on the number of blobs in the container. Example from("azure-storage-blob://camelazure/container1?accountName=yourAccountName&accessKey=yourAccessKey&fileDir=/var/to/awesome/dir"). to("mock:results"); 13.6.7.
Producer Operations Examples listBlobContainers from("direct:start") .process(exchange -> { // set the header you want the producer to evaluate, refer to the // section to learn about the headers that can be set // e.g: exchange.getIn().setHeader(BlobConstants.LIST_BLOB_CONTAINERS_OPTIONS, new ListBlobContainersOptions().setMaxResultsPerPage(10)); }) .to("azure-storage-blob://camelazure?operation=listBlobContainers&client&serviceClient=#client") .to("mock:result"); createBlobContainer from("direct:start") .process(exchange -> { // set the header you want the producer to evaluate, refer to the // section to learn about the headers that can be set // e.g: exchange.getIn().setHeader(BlobConstants.BLOB_CONTAINER_NAME, "newContainerName"); }) .to("azure-storage-blob://camelazure/container1?operation=createBlobContainer&serviceClient=#client") .to("mock:result"); deleteBlobContainer : from("direct:start") .process(exchange -> { // set the header you want the producer to evaluate, refer to the // section to learn about the headers that can be set // e.g: exchange.getIn().setHeader(BlobConstants.BLOB_CONTAINER_NAME, "overridenName"); }) .to("azure-storage-blob://camelazure/container1?operation=deleteBlobContainer&serviceClient=#client") .to("mock:result"); listBlobs : from("direct:start") .process(exchange -> { // set the header you want the producer to evaluate, refer to the // section to learn about the headers that can be set // e.g: exchange.getIn().setHeader(BlobConstants.BLOB_CONTAINER_NAME, "overridenName"); }) .to("azure-storage-blob://camelazure/container1?operation=listBlobs&serviceClient=#client") .to("mock:result"); getBlob : We can either set an outputStream in the exchange body and write the data to it. E.g: from("direct:start") .process(exchange -> { // set the header you want the producer to evaluate, refer to the // section to learn about the headers that can be set // e.g: exchange.getIn().setHeader(BlobConstants.BLOB_CONTAINER_NAME, "overridenName"); // set our body exchange.getIn().setBody(outputStream); }) .to("azure-storage-blob://camelazure/container1?blobName=blob&operation=getBlob&serviceClient=#client") .to("mock:result"); If we don't set a body, then this operation will give us an InputStream instance which can proceeded further downstream: from("direct:start") .to("azure-storage-blob://camelazure/container1?blobName=blob&operation=getBlob&serviceClient=#client") .process(exchange -> { InputStream inputStream = exchange.getMessage().getBody(InputStream.class); // We use Apache common IO for simplicity, but you are free to do whatever dealing // with inputStream System.out.println(IOUtils.toString(inputStream, StandardCharsets.UTF_8.name())); }) .to("mock:result"); deleteBlob : from("direct:start") .process(exchange -> { // set the header you want the producer to evaluate, refer to the // section to learn about the headers that can be set // e.g: exchange.getIn().setHeader(BlobConstants.BLOB_NAME, "overridenName"); }) .to("azure-storage-blob://camelazure/container1?blobName=blob&operation=deleteBlob&serviceClient=#client") .to("mock:result"); downloadBlobToFile : from("direct:start") .process(exchange -> { // set the header you want the producer to evaluate, refer to the // section to learn about the headers that can be set // e.g: exchange.getIn().setHeader(BlobConstants.BLOB_NAME, "overridenName"); }) .to("azure-storage-blob://camelazure/container1?blobName=blob&operation=downloadBlobToFile&fileDir=/var/mydir&serviceClient=#client") .to("mock:result"); downloadLink 
from("direct:start") .to("azure-storage-blob://camelazure/container1?blobName=blob&operation=downloadLink&serviceClient=#client") .process(exchange -> { String link = exchange.getMessage().getHeader(BlobConstants.DOWNLOAD_LINK, String.class); System.out.println("My link " + link); }) .to("mock:result"); uploadBlockBlob from("direct:start") .process(exchange -> { // set the header you want the producer to evaluate, refer to the // section to learn about the headers that can be set // e.g: exchange.getIn().setHeader(BlobConstants.BLOB_NAME, "overridenName"); exchange.getIn().setBody("Block Blob"); }) .to("azure-storage-blob://camelazure/container1?blobName=blob&operation=uploadBlockBlob&serviceClient=#client") .to("mock:result"); stageBlockBlobList from("direct:start") .process(exchange -> { final List<BlobBlock> blocks = new LinkedList<>(); blocks.add(BlobBlock.createBlobBlock(new ByteArrayInputStream("Hello".getBytes()))); blocks.add(BlobBlock.createBlobBlock(new ByteArrayInputStream("From".getBytes()))); blocks.add(BlobBlock.createBlobBlock(new ByteArrayInputStream("Camel".getBytes()))); exchange.getIn().setBody(blocks); }) .to("azure-storage-blob://camelazure/container1?blobName=blob&operation=stageBlockBlobList&serviceClient=#client") .to("mock:result"); commitBlockBlobList from("direct:start") .process(exchange -> { // We assume here you have the knowledge of these blocks you want to commit final List<Block> blocksIds = new LinkedList<>(); blocksIds.add(new Block().setName("id-1")); blocksIds.add(new Block().setName("id-2")); blocksIds.add(new Block().setName("id-3")); exchange.getIn().setBody(blocksIds); }) .to("azure-storage-blob://camelazure/container1?blobName=blob&operation=commitBlockBlobList&serviceClient=#client") .to("mock:result"); getBlobBlockList from("direct:start") .to("azure-storage-blob://camelazure/container1?blobName=blob&operation=getBlobBlockList&serviceClient=#client") .log("USD{body}") .to("mock:result"); createAppendBlob from("direct:start") .to("azure-storage-blob://camelazure/container1?blobName=blob&operation=createAppendBlob&serviceClient=#client") .to("mock:result"); commitAppendBlob from("direct:start") .process(exchange -> { final String data = "Hello world from my awesome tests!"; final InputStream dataStream = new ByteArrayInputStream(data.getBytes(StandardCharsets.UTF_8)); exchange.getIn().setBody(dataStream); // of course you can set whatever headers you like, refer to the headers section to learn more }) .to("azure-storage-blob://camelazure/container1?blobName=blob&operation=commitAppendBlob&serviceClient=#client") .to("mock:result"); createPageBlob from("direct:start") .to("azure-storage-blob://camelazure/container1?blobName=blob&operation=createPageBlob&serviceClient=#client") .to("mock:result"); uploadPageBlob from("direct:start") .process(exchange -> { byte[] dataBytes = new byte[512]; // we set range for the page from 0-511 new Random().nextBytes(dataBytes); final InputStream dataStream = new ByteArrayInputStream(dataBytes); final PageRange pageRange = new PageRange().setStart(0).setEnd(511); exchange.getIn().setHeader(BlobConstants.PAGE_BLOB_RANGE, pageRange); exchange.getIn().setBody(dataStream); }) .to("azure-storage-blob://camelazure/container1?blobName=blob&operation=uploadPageBlob&serviceClient=#client") .to("mock:result"); resizePageBlob from("direct:start") .process(exchange -> { final PageRange pageRange = new PageRange().setStart(0).setEnd(511); exchange.getIn().setHeader(BlobConstants.PAGE_BLOB_RANGE, pageRange); }) 
.to("azure-storage-blob://camelazure/container1?blobName=blob&operation=resizePageBlob&serviceClient=#client") .to("mock:result"); clearPageBlob from("direct:start") .process(exchange -> { final PageRange pageRange = new PageRange().setStart(0).setEnd(511); exchange.getIn().setHeader(BlobConstants.PAGE_BLOB_RANGE, pageRange); }) .to("azure-storage-blob://camelazure/container1?blobName=blob&operation=clearPageBlob&serviceClient=#client") .to("mock:result"); getPageBlobRanges from("direct:start") .process(exchange -> { final PageRange pageRange = new PageRange().setStart(0).setEnd(511); exchange.getIn().setHeader(BlobConstants.PAGE_BLOB_RANGE, pageRange); }) .to("azure-storage-blob://camelazure/container1?blobName=blob&operation=getPageBlobRanges&serviceClient=#client") .log("USD{body}") .to("mock:result"); copyBlob from("direct:copyBlob") .process(exchange -> { exchange.getIn().setHeader(BlobConstants.BLOB_NAME, "file.txt"); exchange.getMessage().setHeader(BlobConstants.SOURCE_BLOB_CONTAINER_NAME, "containerblob1"); exchange.getMessage().setHeader(BlobConstants.SOURCE_BLOB_ACCOUNT_NAME, "account"); }) .to("azure-storage-blob://account/containerblob2?operation=copyBlob&sourceBlobAccessKey=RAW(accessKey)") .to("mock:result"); In this way the file.txt in the container containerblob1 of the account 'account', will be copied to the container containerblob2 of the same account. 13.6.8. SAS Token generation example SAS Blob Container tokens can be generated programmatically or via Azure UI. To generate the token with java code, the following can be done: BlobContainerClient blobClient = new BlobContainerClientBuilder() .endpoint(String.format("https://%s.blob.core.windows.net/%s", accountName, accessKey)) .containerName(containerName) .credential(new StorageSharedKeyCredential(accountName, accessKey)) .buildClient(); // Create a SAS token that's valid for 1 day, as an example OffsetDateTime expiryTime = OffsetDateTime.now().plusDays(1); // Assign permissions to the SAS token BlobContainerSasPermission blobContainerSasPermission = new BlobContainerSasPermission() .setWritePermission(true) .setListPermission(true) .setCreatePermission(true) .setDeletePermission(true) .setAddPermission(true) .setReadPermission(true); BlobServiceSasSignatureValues sasSignatureValues = new BlobServiceSasSignatureValues(expiryTime, blobContainerSasPermission); return blobClient.generateSas(sasSignatureValues); The generated SAS token can be then stored to an application.properties file so that it can be loaded by the camel route, for example: camel.component.azure-storage-blob.sas-token=MY_TOKEN_HERE from("direct:copyBlob") .to("azure-storage-blob://account/containerblob2?operation=uploadBlockBlob&credentialType=AZURE_SAS") 13.7. Spring Boot Auto-Configuration The component supports 36 options, which are listed below. Name Description Default Type camel.component.azure-storage-blob.access-key Access key for the associated azure account name to be used for authentication with azure blob services. String camel.component.azure-storage-blob.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.azure-storage-blob.blob-name The blob name, to consume specific blob from a container. 
However, on the producer it is only required for operations at the blob level. String camel.component.azure-storage-blob.blob-offset Set the blob offset for the upload or download operations, default is 0. 0 Long camel.component.azure-storage-blob.blob-sequence-number A user-controlled value that you can use to track requests. The value of the sequence number must be between 0 and 2^63 - 1. The default value is 0. 0 Long camel.component.azure-storage-blob.blob-type The blob type in order to initiate the appropriate settings for each blob type. BlobType camel.component.azure-storage-blob.block-list-type Specifies which type of blocks to return. BlockListType camel.component.azure-storage-blob.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions that occur while the consumer is trying to pick up incoming messages, or the like, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which will be logged at WARN or ERROR level and ignored. false Boolean camel.component.azure-storage-blob.change-feed-context When using the getChangeFeed producer operation, this gives additional context that is passed through the Http pipeline during the service call. The option is a com.azure.core.util.Context type. Context camel.component.azure-storage-blob.change-feed-end-time When using the getChangeFeed producer operation, this filters the results to return events approximately before the end time. Note: A few events belonging to the next hour can also be returned. A few events belonging to this hour can be missing; to ensure all events from the hour are returned, round the end time up by an hour. The option is a java.time.OffsetDateTime type. OffsetDateTime camel.component.azure-storage-blob.change-feed-start-time When using the getChangeFeed producer operation, this filters the results to return events approximately after the start time. Note: A few events belonging to the previous hour can also be returned. A few events belonging to this hour can be missing; to ensure all events from the hour are returned, round the start time down by an hour. The option is a java.time.OffsetDateTime type. OffsetDateTime camel.component.azure-storage-blob.close-stream-after-read Close the stream after read or keep it open, default is true. true Boolean camel.component.azure-storage-blob.close-stream-after-write Close the stream after write or keep it open, default is true. true Boolean camel.component.azure-storage-blob.commit-block-list-later When set to true, the staged blocks will not be committed directly. true Boolean camel.component.azure-storage-blob.configuration The component configurations. The option is a org.apache.camel.component.azure.storage.blob.BlobConfiguration type. BlobConfiguration camel.component.azure-storage-blob.create-append-blob When set to true, the append blob will be created when committing append blocks. true Boolean camel.component.azure-storage-blob.create-page-blob When set to true, the page blob will be created when uploading a page blob. true Boolean camel.component.azure-storage-blob.credential-type Determines the credential strategy to adopt. CredentialType camel.component.azure-storage-blob.credentials StorageSharedKeyCredential can be injected to create the Azure client; this holds the important authentication information. The option is a com.azure.storage.common.StorageSharedKeyCredential type.
StorageSharedKeyCredential camel.component.azure-storage-blob.data-count How many bytes to include in the range. Must be greater than or equal to 0 if specified. Long camel.component.azure-storage-blob.download-link-expiration Override the default expiration (millis) of URL download link. Long camel.component.azure-storage-blob.enabled Whether to enable auto configuration of the azure-storage-blob component. This is enabled by default. Boolean camel.component.azure-storage-blob.file-dir The file directory where the downloaded blobs will be saved to, this can be used in both, producer and consumer. String camel.component.azure-storage-blob.health-check-consumer-enabled Used for enabling or disabling all consumer based health checks from this component. true Boolean camel.component.azure-storage-blob.health-check-producer-enabled Used for enabling or disabling all producer based health checks from this component. Notice: Camel has by default disabled all producer based health-checks. You can turn on producer checks globally by setting camel.health.producersEnabled=true. true Boolean camel.component.azure-storage-blob.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.azure-storage-blob.max-results-per-page Specifies the maximum number of blobs to return, including all BlobPrefix elements. If the request does not specify maxResultsPerPage or specifies a value greater than 5,000, the server will return up to 5,000 items. Integer camel.component.azure-storage-blob.max-retry-requests Specifies the maximum number of additional HTTP Get requests that will be made while reading the data from a response body. 0 Integer camel.component.azure-storage-blob.operation The blob operation that can be used with this component on the producer. BlobOperationsDefinition camel.component.azure-storage-blob.page-blob-size Specifies the maximum size for the page blob, up to 8 TB. The page blob size must be aligned to a 512-byte boundary. 512 Long camel.component.azure-storage-blob.prefix Filters the results to return only blobs whose names begin with the specified prefix. May be null to return all blobs. String camel.component.azure-storage-blob.regex Filters the results to return only blobs whose names match the specified regular expression. May be null to return all if both prefix and regex are set, regex takes the priority and prefix is ignored. String camel.component.azure-storage-blob.sas-token Set a SAS Token in case of usage of Shared Access Signature String camel.component.azure-storage-blob.service-client Client to a storage account. This client does not hold any state about a particular storage account but is instead a convenient way of sending off appropriate requests to the resource on the service. It may also be used to construct URLs to blobs and containers. This client contains operations on a service account. 
Operations on a container are available on BlobContainerClient through BlobServiceClient#getBlobContainerClient(String), and operations on a blob are available on BlobClient through BlobContainerClient#getBlobClient(String). The option is a com.azure.storage.blob.BlobServiceClient type. BlobServiceClient camel.component.azure-storage-blob.source-blob-access-key Source blob access key: for the copyBlob operation, an access key for the source blob to be copied is required. Passing an access key as a header is unsafe, so it can be set with this option instead. String camel.component.azure-storage-blob.timeout An optional timeout value beyond which a RuntimeException will be raised. The option is a java.time.Duration type. Duration
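To see how these Spring Boot options come together at runtime, the following is a minimal, hypothetical sketch (the route class name, container name, and blob name are illustrative, not taken from this guide). It assumes the SAS token has been supplied through the camel.component.azure-storage-blob.sas-token property shown earlier, so the endpoint URI only needs to select the operation and the credential type.

import org.apache.camel.builder.RouteBuilder;
import org.springframework.stereotype.Component;

// Hypothetical Spring Boot route: credentials come from the component-level
// sas-token property, so the endpoint carries only the account/container path,
// the operation, and the credential type.
@Component
public class BlobUploadRoute extends RouteBuilder {

    @Override
    public void configure() {
        from("timer:uploadOnce?repeatCount=1")
            // the message body becomes the content of the block blob
            .setBody(constant("Hello from Camel on Spring Boot"))
            .to("azure-storage-blob://yourAccountName/container1"
                    + "?operation=uploadBlockBlob&blobName=hello.txt&credentialType=AZURE_SAS")
            .log("Uploaded hello.txt to container1");
    }
}

Used this way, rotating the token only requires updating application.properties; the route itself stays untouched.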
[ "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-azure-storage-blob-starter</artifactId> </dependency>", "azure-storage-blob://accountName[/containerName][?options]", "azure-storage-blob:accountName/containerName", "from(\"azure-storage-blob://camelazure/container1?blobName=hello.txt&credentialType=SHARED_ACCOUNT_KEY&accessKey=RAW(yourAccessKey)\").to(\"file://blobdirectory\");", "StorageSharedKeyCredential credential = new StorageSharedKeyCredential(\"yourAccountName\", \"yourAccessKey\"); String uri = String.format(\"https://%s.blob.core.windows.net\", \"yourAccountName\"); BlobServiceClient client = new BlobServiceClientBuilder() .endpoint(uri) .credential(credential) .buildClient(); // This is camel context context.getRegistry().bind(\"client\", client);", "from(\"azure-storage-blob://cameldev/container1?blobName=myblob&serviceClient=#client\") .to(\"mock:result\");", "from(\"azure-storage-blob://camelazure/container1?blobName=hello.txt&accountName=yourAccountName&accessKey=yourAccessKey\"). to(\"file://blobdirectory\");", "from(\"azure-storage-blob://camelazure/container1?blobName=hello.txt&accountName=yourAccountName&accessKey=yourAccessKey&fileDir=/var/to/awesome/dir\"). to(\"mock:results\");", "from(\"azure-storage-blob://camelazure/container1?accountName=yourAccountName&accessKey=yourAccessKey&fileDir=/var/to/awesome/dir\"). to(\"mock:results\");", "from(\"direct:start\") .process(exchange -> { // set the header you want the producer to evaluate, refer to the previous // section to learn about the headers that can be set // e.g: exchange.getIn().setHeader(BlobConstants.LIST_BLOB_CONTAINERS_OPTIONS, new ListBlobContainersOptions().setMaxResultsPerPage(10)); }) .to(\"azure-storage-blob://camelazure?operation=listBlobContainers&client&serviceClient=#client\") .to(\"mock:result\");", "from(\"direct:start\") .process(exchange -> { // set the header you want the producer to evaluate, refer to the previous // section to learn about the headers that can be set // e.g: exchange.getIn().setHeader(BlobConstants.BLOB_CONTAINER_NAME, \"newContainerName\"); }) .to(\"azure-storage-blob://camelazure/container1?operation=createBlobContainer&serviceClient=#client\") .to(\"mock:result\");", "from(\"direct:start\") .process(exchange -> { // set the header you want the producer to evaluate, refer to the previous // section to learn about the headers that can be set // e.g: exchange.getIn().setHeader(BlobConstants.BLOB_CONTAINER_NAME, \"overridenName\"); }) .to(\"azure-storage-blob://camelazure/container1?operation=deleteBlobContainer&serviceClient=#client\") .to(\"mock:result\");", "from(\"direct:start\") .process(exchange -> { // set the header you want the producer to evaluate, refer to the previous // section to learn about the headers that can be set // e.g: exchange.getIn().setHeader(BlobConstants.BLOB_CONTAINER_NAME, \"overridenName\"); }) .to(\"azure-storage-blob://camelazure/container1?operation=listBlobs&serviceClient=#client\") .to(\"mock:result\");", "from(\"direct:start\") .process(exchange -> { // set the header you want the producer to evaluate, refer to the previous // section to learn about the headers that can be set // e.g: exchange.getIn().setHeader(BlobConstants.BLOB_CONTAINER_NAME, \"overridenName\"); // set our body exchange.getIn().setBody(outputStream); }) .to(\"azure-storage-blob://camelazure/container1?blobName=blob&operation=getBlob&serviceClient=#client\") .to(\"mock:result\");", "from(\"direct:start\") 
.to(\"azure-storage-blob://camelazure/container1?blobName=blob&operation=getBlob&serviceClient=#client\") .process(exchange -> { InputStream inputStream = exchange.getMessage().getBody(InputStream.class); // We use Apache common IO for simplicity, but you are free to do whatever dealing // with inputStream System.out.println(IOUtils.toString(inputStream, StandardCharsets.UTF_8.name())); }) .to(\"mock:result\");", "from(\"direct:start\") .process(exchange -> { // set the header you want the producer to evaluate, refer to the previous // section to learn about the headers that can be set // e.g: exchange.getIn().setHeader(BlobConstants.BLOB_NAME, \"overridenName\"); }) .to(\"azure-storage-blob://camelazure/container1?blobName=blob&operation=deleteBlob&serviceClient=#client\") .to(\"mock:result\");", "from(\"direct:start\") .process(exchange -> { // set the header you want the producer to evaluate, refer to the previous // section to learn about the headers that can be set // e.g: exchange.getIn().setHeader(BlobConstants.BLOB_NAME, \"overridenName\"); }) .to(\"azure-storage-blob://camelazure/container1?blobName=blob&operation=downloadBlobToFile&fileDir=/var/mydir&serviceClient=#client\") .to(\"mock:result\");", "from(\"direct:start\") .to(\"azure-storage-blob://camelazure/container1?blobName=blob&operation=downloadLink&serviceClient=#client\") .process(exchange -> { String link = exchange.getMessage().getHeader(BlobConstants.DOWNLOAD_LINK, String.class); System.out.println(\"My link \" + link); }) .to(\"mock:result\");", "from(\"direct:start\") .process(exchange -> { // set the header you want the producer to evaluate, refer to the previous // section to learn about the headers that can be set // e.g: exchange.getIn().setHeader(BlobConstants.BLOB_NAME, \"overridenName\"); exchange.getIn().setBody(\"Block Blob\"); }) .to(\"azure-storage-blob://camelazure/container1?blobName=blob&operation=uploadBlockBlob&serviceClient=#client\") .to(\"mock:result\");", "from(\"direct:start\") .process(exchange -> { final List<BlobBlock> blocks = new LinkedList<>(); blocks.add(BlobBlock.createBlobBlock(new ByteArrayInputStream(\"Hello\".getBytes()))); blocks.add(BlobBlock.createBlobBlock(new ByteArrayInputStream(\"From\".getBytes()))); blocks.add(BlobBlock.createBlobBlock(new ByteArrayInputStream(\"Camel\".getBytes()))); exchange.getIn().setBody(blocks); }) .to(\"azure-storage-blob://camelazure/container1?blobName=blob&operation=stageBlockBlobList&serviceClient=#client\") .to(\"mock:result\");", "from(\"direct:start\") .process(exchange -> { // We assume here you have the knowledge of these blocks you want to commit final List<Block> blocksIds = new LinkedList<>(); blocksIds.add(new Block().setName(\"id-1\")); blocksIds.add(new Block().setName(\"id-2\")); blocksIds.add(new Block().setName(\"id-3\")); exchange.getIn().setBody(blocksIds); }) .to(\"azure-storage-blob://camelazure/container1?blobName=blob&operation=commitBlockBlobList&serviceClient=#client\") .to(\"mock:result\");", "from(\"direct:start\") .to(\"azure-storage-blob://camelazure/container1?blobName=blob&operation=getBlobBlockList&serviceClient=#client\") .log(\"USD{body}\") .to(\"mock:result\");", "from(\"direct:start\") .to(\"azure-storage-blob://camelazure/container1?blobName=blob&operation=createAppendBlob&serviceClient=#client\") .to(\"mock:result\");", "from(\"direct:start\") .process(exchange -> { final String data = \"Hello world from my awesome tests!\"; final InputStream dataStream = new 
ByteArrayInputStream(data.getBytes(StandardCharsets.UTF_8)); exchange.getIn().setBody(dataStream); // of course you can set whatever headers you like, refer to the headers section to learn more }) .to(\"azure-storage-blob://camelazure/container1?blobName=blob&operation=commitAppendBlob&serviceClient=#client\") .to(\"mock:result\");", "from(\"direct:start\") .to(\"azure-storage-blob://camelazure/container1?blobName=blob&operation=createPageBlob&serviceClient=#client\") .to(\"mock:result\");", "from(\"direct:start\") .process(exchange -> { byte[] dataBytes = new byte[512]; // we set range for the page from 0-511 new Random().nextBytes(dataBytes); final InputStream dataStream = new ByteArrayInputStream(dataBytes); final PageRange pageRange = new PageRange().setStart(0).setEnd(511); exchange.getIn().setHeader(BlobConstants.PAGE_BLOB_RANGE, pageRange); exchange.getIn().setBody(dataStream); }) .to(\"azure-storage-blob://camelazure/container1?blobName=blob&operation=uploadPageBlob&serviceClient=#client\") .to(\"mock:result\");", "from(\"direct:start\") .process(exchange -> { final PageRange pageRange = new PageRange().setStart(0).setEnd(511); exchange.getIn().setHeader(BlobConstants.PAGE_BLOB_RANGE, pageRange); }) .to(\"azure-storage-blob://camelazure/container1?blobName=blob&operation=resizePageBlob&serviceClient=#client\") .to(\"mock:result\");", "from(\"direct:start\") .process(exchange -> { final PageRange pageRange = new PageRange().setStart(0).setEnd(511); exchange.getIn().setHeader(BlobConstants.PAGE_BLOB_RANGE, pageRange); }) .to(\"azure-storage-blob://camelazure/container1?blobName=blob&operation=clearPageBlob&serviceClient=#client\") .to(\"mock:result\");", "from(\"direct:start\") .process(exchange -> { final PageRange pageRange = new PageRange().setStart(0).setEnd(511); exchange.getIn().setHeader(BlobConstants.PAGE_BLOB_RANGE, pageRange); }) .to(\"azure-storage-blob://camelazure/container1?blobName=blob&operation=getPageBlobRanges&serviceClient=#client\") .log(\"USD{body}\") .to(\"mock:result\");", "from(\"direct:copyBlob\") .process(exchange -> { exchange.getIn().setHeader(BlobConstants.BLOB_NAME, \"file.txt\"); exchange.getMessage().setHeader(BlobConstants.SOURCE_BLOB_CONTAINER_NAME, \"containerblob1\"); exchange.getMessage().setHeader(BlobConstants.SOURCE_BLOB_ACCOUNT_NAME, \"account\"); }) .to(\"azure-storage-blob://account/containerblob2?operation=copyBlob&sourceBlobAccessKey=RAW(accessKey)\") .to(\"mock:result\");", "BlobContainerClient blobClient = new BlobContainerClientBuilder() .endpoint(String.format(\"https://%s.blob.core.windows.net/%s\", accountName, accessKey)) .containerName(containerName) .credential(new StorageSharedKeyCredential(accountName, accessKey)) .buildClient(); // Create a SAS token that's valid for 1 day, as an example OffsetDateTime expiryTime = OffsetDateTime.now().plusDays(1); // Assign permissions to the SAS token BlobContainerSasPermission blobContainerSasPermission = new BlobContainerSasPermission() .setWritePermission(true) .setListPermission(true) .setCreatePermission(true) .setDeletePermission(true) .setAddPermission(true) .setReadPermission(true); BlobServiceSasSignatureValues sasSignatureValues = new BlobServiceSasSignatureValues(expiryTime, blobContainerSasPermission); return blobClient.generateSas(sasSignatureValues);", "camel.component.azure-storage-blob.sas-token=MY_TOKEN_HERE from(\"direct:copyBlob\") .to(\"azure-storage-blob://account/containerblob2?operation=uploadBlockBlob&credentialType=AZURE_SAS\")" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-azure-storage-blob-component-starter
Chapter 37. MetadataTemplate schema reference
Chapter 37. MetadataTemplate schema reference Used in: BuildConfigTemplate , DeploymentTemplate , InternalServiceTemplate , PodDisruptionBudgetTemplate , PodTemplate , ResourceTemplate , StatefulSetTemplate Full list of MetadataTemplate schema properties Labels and Annotations are used to identify and organize resources, and are configured in the metadata property. For example: # ... template: pod: metadata: labels: label1: value1 label2: value2 annotations: annotation1: value1 annotation2: value2 # ... The labels and annotations fields can contain any labels or annotations that do not contain the reserved string strimzi.io . Labels and annotations containing strimzi.io are used internally by AMQ Streams and cannot be configured. 37.1. MetadataTemplate schema properties Property Description labels Labels added to the OpenShift resource. map annotations Annotations added to the OpenShift resource. map
[ "template: pod: metadata: labels: label1: value1 label2: value2 annotations: annotation1: value1 annotation2: value2" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-MetadataTemplate-reference
Chapter 23. Configuring the cluster-wide proxy
Chapter 23. Configuring the cluster-wide proxy Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure OpenShift Container Platform to use a proxy by modifying the Proxy object for existing clusters or by configuring the proxy settings in the install-config.yaml file for new clusters. 23.1. Prerequisites Review the sites that your cluster requires access to and determine whether any of them must bypass the proxy. By default, all cluster system egress traffic is proxied, including calls to the cloud provider API for the cloud that hosts your cluster. System-wide proxy affects system components only, not user workloads. Add sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). 23.2. Enabling the cluster-wide proxy The Proxy object is used to manage the cluster-wide egress proxy. When a cluster is installed or upgraded without the proxy configured, a Proxy object is still generated but it will have a nil spec . For example: apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: trustedCA: name: "" status: A cluster administrator can configure the proxy for OpenShift Container Platform by modifying this cluster Proxy object. Note Only the Proxy object named cluster is supported, and no additional proxies can be created. Prerequisites Cluster administrator permissions OpenShift Container Platform oc CLI tool installed Procedure Create a config map that contains any additional CA certificates required for proxying HTTPS connections. Note You can skip this step if the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Create a file called user-ca-bundle.yaml with the following contents, and provide the values of your PEM-encoded certificates: apiVersion: v1 data: ca-bundle.crt: | 1 <MY_PEM_ENCODED_CERTS> 2 kind: ConfigMap metadata: name: user-ca-bundle 3 namespace: openshift-config 4 1 This data key must be named ca-bundle.crt . 2 One or more PEM-encoded X.509 certificates used to sign the proxy's identity certificate. 3 The config map name that will be referenced from the Proxy object. 4 The config map must be in the openshift-config namespace. Create the config map from this file: USD oc create -f user-ca-bundle.yaml Use the oc edit command to modify the Proxy object: USD oc edit proxy/cluster Configure the necessary fields for the proxy: apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 readinessEndpoints: - http://www.google.com 4 - https://www.google.com trustedCA: name: user-ca-bundle 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. The URL scheme must be either http or https . Specify a URL for the proxy that supports the URL scheme. 
For example, most proxies will report an error if they are configured to use https but they only support http . This failure message may not propagate to the logs and can appear to be a network connection failure instead. If using a proxy that listens for https connections from the cluster, you may need to configure the cluster to accept the CAs and certificates that the proxy uses. 3 A comma-separated list of destination domain names, domains, IP addresses or other network CIDRs to exclude proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass proxy for all destinations. If you scale up workers that are not included in the network defined by the networking.machineNetwork[].cidr field from the installation configuration, you must add them to this list to prevent connection issues. This field is ignored if neither the httpProxy or httpsProxy fields are set. 4 One or more URLs external to the cluster to use to perform a readiness check before writing the httpProxy and httpsProxy values to status. 5 A reference to the config map in the openshift-config namespace that contains additional CA certificates required for proxying HTTPS connections. Note that the config map must already exist before referencing it here. This field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Save the file to apply the changes. 23.3. Removing the cluster-wide proxy The cluster Proxy object cannot be deleted. To remove the proxy from a cluster, remove all spec fields from the Proxy object. Prerequisites Cluster administrator permissions OpenShift Container Platform oc CLI tool installed Procedure Use the oc edit command to modify the proxy: USD oc edit proxy/cluster Remove all spec fields from the Proxy object. For example: apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: {} Save the file to apply the changes. Additional resources Replacing the CA Bundle certificate Proxy certificate customization
[ "apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: trustedCA: name: \"\" status:", "apiVersion: v1 data: ca-bundle.crt: | 1 <MY_PEM_ENCODED_CERTS> 2 kind: ConfigMap metadata: name: user-ca-bundle 3 namespace: openshift-config 4", "oc create -f user-ca-bundle.yaml", "oc edit proxy/cluster", "apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 readinessEndpoints: - http://www.google.com 4 - https://www.google.com trustedCA: name: user-ca-bundle 5", "oc edit proxy/cluster", "apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: {}" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/networking/enable-cluster-wide-proxy
8.127. libvpd
8.127. libvpd 8.127.1. RHEA-2014:1429 - libvpd enhancement update Updated libvpd packages that add various enhancements are now available for Red Hat Enterprise Linux 6. The libvpd packages contain the classes that are used to access Vital Product Data (VPD) created by vpdupdate in the lsvpd package. Note The libvpd packages have been upgraded to upstream version 2.2.3, which provides a number of enhancements over the previous version. Specifically, the updated libvpd packages prevent segmentation faults from occurring when fetching a corrupted Vital Product Data (VPD) database. In addition, support for vpdupdate command automation has been added for cases where the device configuration changes at runtime. During hot plugs, the changes made to the device configuration are now caught by a udev rule and are handled correctly by libvpd. (BZ# 739122 ) Users of libvpd are advised to upgrade to these updated packages, which add these enhancements.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/libvpd
12.4. Disabling and Re-enabling Host Entries
12.4. Disabling and Re-enabling Host Entries Active hosts can be accessed by other services, hosts, and users within the domain. There can be situations when it is necessary to remove a host from activity. However, deleting a host permanently removes the entry and all the associated configuration. 12.4.1. Disabling Host Entries Disabling a host prevents domain users from accessing it without permanently removing it from the domain. This can be done by using the host-disable command. For example: Important Disabling a host entry not only disables that host. It disables every configured service on that host as well. 12.4.2. Re-enabling Hosts This section describes how to re-enable a disabled IdM host. Disabling a host removes its active keytabs, which removes the host from the IdM domain without otherwise touching its configuration entry. To re-enable a host, use the ipa-getkeytab command, adding: the -s option to specify which IdM server to request the keytab from the -p option to specify the principal name the -k option to specify the file to which to save the keytab. For example, to request a new host keytab from server.example.com for client.example.com , and store the keytab in the /etc/krb5.keytab file: Note You can also use the administrator's credentials, specifying -D "uid=admin,cn=users,cn=accounts,dc=example,dc=com" . It is important that the credentials correspond to a user allowed to create the keytab for the host. If you run the ipa-getkeytab command on an active IdM client or server, then you can run it without any LDAP credentials ( -D and -w ) if the user has a TGT obtained using, for example, kinit admin . To run the command directly on the disabled host, supply LDAP credentials to authenticate to the IdM server.
[ "[jsmith@ipaserver ~]USD kinit admin [jsmith@ipaserver ~]USD ipa host-disable server.example.com", "ipa-getkeytab -s server.example.com -p host/client.example.com -k /etc/krb5.keytab -D \"cn=directory manager\" -w password" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/host-disable
Chapter 29. Clusters Overview
Chapter 29. Clusters Overview JBoss EAP messaging clusters allow groups of JBoss EAP messaging servers to be grouped together in order to share message processing load. Each active node in the cluster is an active JBoss EAP messaging server which manages its own messages and handles its own connections. Warning A mixed cluster consisting of different versions of JBoss EAP is not supported by the messaging-activemq subsystem. The servers that form the messaging cluster must all use the same version of JBoss EAP. The cluster is formed by each node declaring cluster connections to other nodes in the JBoss EAP configuration file. When a node forms a cluster connection to another node, it internally creates a core bridge connection between itself and the other node. This is done transparently behind the scenes; you do not have to declare an explicit bridge for each node. These cluster connections allow messages to flow between the nodes of the cluster to balance the load. An important part of clustering is server discovery where servers can broadcast their connection details so clients or other servers can connect to them with minimum configuration. This section also discusses client-side load balancing , to balance client connections across the nodes of the cluster, and message redistribution , where JBoss EAP messaging will redistribute messages between nodes to avoid starvation. Warning Once a cluster node has been configured, it is common to simply copy that configuration to other nodes to produce a symmetric cluster. In fact, each node in the cluster must share the same configuration for the following elements in order to avoid unexpected errors: cluster-connection broadcast-group discovery-group address-settings, including queues and topics However, care must be taken when copying the JBoss EAP messaging files. Do not copy the messaging data, the bindings, journal, and large-messages directories from one node to another. When a cluster node is started for the first time and initializes its journal files, it persists a special identifier to the journal directory. The identifier must be unique among nodes for the cluster to form properly. 29.1. Server Discovery Server discovery is a mechanism by which servers can propagate their connection details to: Messaging clients A messaging client wants to be able to connect to the servers of the cluster without having specific knowledge of which servers in the cluster are up at any one time. Other servers Servers in a cluster want to be able to create cluster connections to each other without having prior knowledge of all the other servers in the cluster. This information, or cluster topology, is sent around normal JBoss EAP messaging connections to clients and to other servers over cluster connections. However, you need a way to establish the initial first connection. This can be done using dynamic discovery techniques like UDP and JGroups, or by providing a static list of initial connectors. 29.1.1. Broadcast Groups A broadcast group is the means by which a server broadcasts connectors over the network. A connector defines a way in which a client, or other server, can make connections to the server. The broadcast group takes a set of connectors and broadcasts them on the network. Depending on which broadcasting technique you configure the cluster, it uses either UDP or JGroups to broadcast connector pairs information. Broadcast groups are defined in the messaging-activemq subsystem of the server configuration. 
There can be many broadcast groups per JBoss EAP messaging server. Configure a Broadcast Group Using UDP Below is an example configuration of a messaging server that defines a UDP broadcast group. Note that this configuration relies on a messaging-group socket binding. <subsystem xmlns="urn:jboss:domain:messaging-activemq:4.0"> <server name="default"> ... <broadcast-group name="my-broadcast-group" connectors="http-connector" socket-binding="messaging-group"/> ... </server> </subsystem> ... <socket-binding-group name="standard-sockets" default-interface="public" port-offset="USD{jboss.socket.binding.port-offset:0}"> ... <socket-binding name="messaging-group" interface="private" port="5432" multicast-address="231.7.7.7" multicast-port="9876"/> ... </socket-binding-group> This configuration can be achieved using the following management CLI commands: Add the messaging-group socket binding. Add the broadcast group. Configure a Broadcast Group Using JGroups Below is an example configuration of a messaging server that defines broadcast group that uses the default JGroups broadcast group, which uses UDP. Note that to be able to use JGroups to broadcast, you must set a jgroups-channel . <subsystem xmlns="urn:jboss:domain:messaging-activemq:4.0"> <server name="default"> ... <broadcast-group name="my-broadcast-group" connectors="http-connector" jgroups-cluster="activemq-cluster"/> ... </server> </subsystem> This can be configured using the following management CLI command: Broadcast Group Attributes The below table lists the configurable attributes for a broadcast group. Attribute Description broadcast-period The period in milliseconds between consecutive broadcasts. connectors The names of connectors that will be broadcast. jgroups-channel The name of a channel defined in the jgroups subsystem that is used in combination with the jgroups-cluster attribute to form a cluster. If undefined, the default channel will be used. Note that a jgroups-channel multiplexes group communication between distinct logical groups, which are identified by the jgroups-cluster attribute. jgroups-cluster The logical name used for the communication between a broadcast group and a discovery group. A discovery group intending to receive messages from a particular broadcast group must use the same cluster name used by the broadcast group. jgroups-stack The name of a stack defined in the jgroups subsystem that is used to form a cluster. This attribute is deprecated. Use jgroups-channel to form a cluster instead. Since each jgroups-stack is already associated with a jgroups-channel , you can use that channel or you can create a new jgroups-channel and associate it with the jgroups-stack . Important If a jgroups-stack and a jgroups-channel are both specified, a new jgroups-channel is generated and is registered in the same namespace as the jgroups-stack and jgroups-channel . For this reason, the jgroups-stack and jgroup-channel names must be unique. socket-binding The broadcast group socket binding. 29.1.2. Discovery Groups While the broadcast group defines how connector information is broadcasted from a server, a discovery group defines how connector information is received from a broadcast endpoint, for example, a UDP multicast address or JGroup channel. A discovery group maintains a list of connectors, one for each broadcast by a different server. As it receives broadcasts on the broadcast endpoint from a particular server, it updates its entry in the list for that server. 
If it has not received a broadcast from a particular server for a length of time it will remove that server's entry from its list. Discovery groups are used in two places in JBoss EAP messaging: By cluster connections so they know how to obtain an initial connection to download the topology. By messaging clients so they know how to obtain an initial connection to download the topology. Although a discovery group will always accept broadcasts, its current list of available live and backup servers is only ever used when an initial connection is made. From then on, server discovery is done over the normal JBoss EAP messaging connections. Note Each discovery group must be configured with a broadcast endpoint (UDP or JGroups) that matches its broadcast group counterpart. For example, if the broadcast group is configured using UDP, the discovery group must also use UDP and the same multicast address. 29.1.2.1. Configure Discovery Groups on the Server Discovery groups are defined in the messaging-activemq subsystem of the server configuration. There can be many discovery groups per JBoss EAP messaging server. Configure a Discovery Group Using UDP Below is an example configuration of a messaging server that defines a UDP discovery group. Note that this configuration relies on a messaging-group socket binding. <subsystem xmlns="urn:jboss:domain:messaging-activemq:4.0"> <server name="default"> ... <discovery-group name="my-discovery-group" refresh-timeout="10000" socket-binding="messaging-group"/> ... </server> </subsystem> ... <socket-binding-group name="standard-sockets" default-interface="public" port-offset="USD{jboss.socket.binding.port-offset:0}"> ... <socket-binding name="messaging-group" interface="private" port="5432" multicast-address="231.7.7.7" multicast-port="9876"/> ... </socket-binding-group> This configuration can be achieved using the following management CLI commands: Add the messaging-group socket binding. Add the discovery group. Configure a Discovery Group Using JGroups Below is an example configuration of a messaging server that defines a JGroups discovery group. <subsystem xmlns="urn:jboss:domain:messaging-activemq:4.0"> <server name="default"> ... <discovery-group name="my-discovery-group" refresh-timeout="10000" jgroups-cluster="activemq-cluster"/> ... </server> </subsystem> This can be configured using the following management CLI command: Discovery Group Attributes The below table lists the configurable attributes for a discovery group. Attribute Description initial-wait-timeout Period, in milliseconds, to wait for an initial broadcast to give us at least one node in the cluster. jgroups-channel The name of a channel defined in the jgroups subsystem that is used in combination with the jgroups-cluster attribute to form a cluster. If undefined, the default channel will be used. Note that a jgroups-channel multiplexes group communication between distinct logical groups, which are identified by the jgroups-cluster attribute. jgroups-cluster The logical name used for the communication between a broadcast group and a discovery group. A discovery group intending to receive messages from a particular broadcast group must use the same cluster name used by the broadcast group. jgroups-stack The name of a stack defined in the jgroups subsystem that is used to form a cluster. This attribute is deprecated. Use jgroups-channel to form a cluster instead. 
Since each jgroups-stack is already associated with a jgroups-channel , you can use that channel or you can create a new jgroups-channel and associate it with the jgroups-stack . Important If a jgroups-stack and a jgroups-channel are both specified, a new jgroups-channel is generated and is registered in the same namespace as the jgroups-stack and jgroups-channel . For this reason, the jgroups-stack and jgroup-channel names must be unique. refresh-timeout Period the discovery group waits after receiving the last broadcast from a particular server before removing that server's connector pair entry from its list. socket-binding The discovery group socket binding. Warning The JGroups attributes and UDP-specific attributes described above are exclusive of each other. Only one set can be specified in a discovery group configuration. 29.1.2.2. Configure Discovery Groups on the Client Side You can use Jakarta Messaging or the core API to configure a JBoss EAP messaging client to discover a list of servers to which it can connect. Configure Client Discovery using Jakarta Messaging Clients using Jakarta Messaging can look up the relevant ConnectionFactory with JNDI. The entries attribute of a connection-factory or a pooled-connection-factory specifies the JNDI name under which the factory will be exposed. Below is an example of a ConnectionFactory configured for a remote client to lookup with JNDI: <subsystem xmlns="urn:jboss:domain:messaging-activemq:4.0"> <server name="default"> ... <connection-factory name="RemoteConnectionFactory" entries="java:jboss/exported/jms/RemoteConnectionFactory" connectors="http-connector"/> ... </server> </subsystem> Note It is important to remember that only JNDI names bound in the java:jboss/exported namespace are available to remote clients. If a connection-factory has an entry bound in the java:jboss/exported namespace a remote client would look up the connection-factory using the text after java:jboss/exported . For example, the RemoteConnectionFactory is bound by default to java:jboss/exported/jms/RemoteConnectionFactory which means a remote client would look-up this connection-factory using jms/RemoteConnectionFactory . A pooled-connection-factory should not have any entry bound in the java:jboss/exported namespace because a pooled-connection-factory is not suitable for remote clients. Since Jakarta Messaging 2.0, a default Jakarta Messaging connection factory is accessible to Jakarta EE applications under the JNDI name java:comp/DefaultJMSConnectionFactory . The JBoss EAP messaging-activemq subsystem defines a pooled-connection-factory that is used to provide this default connection factory. Any parameter change on this pooled-connection-factory will be taken into account by any Jakarta EE application looking the default Jakarta Messaging provider under the JNDI name java:comp/DefaultJMSConnectionFactory . Below is the default pooled connection factory as defined in the *-full and *-full-ha profiles. <subsystem xmlns="urn:jboss:domain:messaging-activemq:4.0"> <server name="default"> ... <pooled-connection-factory name="activemq-ra" transaction="xa" entries="java:/JmsXA java:jboss/DefaultJMSConnectionFactory" connectors="in-vm"/> ... </server> </subsystem> Configure Client Discovery using the Core API If you are using the core API to directly instantiate ClientSessionFactory instances, then you can specify the discovery group parameters directly when creating the session factory. 
For example: final String groupAddress = "231.7.7.7"; final int groupPort = 9876; ServerLocator locator = ActiveMQClient.createServerLocatorWithHA(new DiscoveryGroupConfiguration( groupAddress, groupPort, new UDPBroadcastGroupConfiguration(groupAddress, groupPort, null, -1))); ClientSessionFactory factory = locator.createSessionFactory(); ClientSession session1 = factory.createSession(); ClientSession session2 = factory.createSession(); You can use the setDiscoveryRefreshTimeout() setter method on the DiscoveryGroupConfiguration to set the refresh-timeout value, which defaults to 10000 milliseconds. You can also use the setDiscoveryInitialWaitTimeout() setter method on the DiscoveryGroupConfiguration to set the initial-wait-timeout value, which determines how long the session factory will wait before creating the first session. The default value is 10000 milliseconds. 29.1.3. Static Discovery In situations where you cannot or do not want to use UDP on your network, you can configure a connection with an initial list of one or more servers. This does not mean that you have to know where all your servers are going to be hosted. You can configure these servers to connect to a reliable server, and have their connection details propagated by way of that server. Configuring a Cluster Connection For cluster connections, there is no additional configuration needed; you just need to make sure that any connectors are defined in the usual manner. These are then referenced by the cluster connection configuration. Configuring a Client Connection A static list of possible servers can also be used by a client. Configuring Client Discovery Using Jakarta Messaging The recommended way to use static discovery with Jakarta Messaging is to configure a connection-factory with multiple connectors (each pointing to a unique node in the cluster) and have the client look up the ConnectionFactory using JNDI. Below is a snippet of configuration showing just such a connection-factory : <subsystem xmlns="urn:jboss:domain:messaging-activemq:4.0"> <server name="default"> ... <connection-factory name="MyConnectionFactory" entries="..." connectors="http-connector http-node1 http-node2"/> ... </server> </subsystem> In the above example, http-connector is an HTTP connector ( <http-connector> ) pointing to the local server, http-node1 is an HTTP connector pointing to server node1 , and so on. See the Connectors and Acceptors section for configuring connectors in the server configuration. Configuring Client Discovery Using the Core API If you are using the core API, create a unique TransportConfiguration for each server in the cluster and pass them into the method responsible for creating the ServerLocator , as in the below example code. HashMap<String, Object> map = new HashMap<String, Object>(); map.put("host", "myhost"); map.put("port", "8080"); HashMap<String, Object> map2 = new HashMap<String, Object>(); map2.put("host", "myhost2"); map2.put("port", "8080"); TransportConfiguration server1 = new TransportConfiguration(NettyConnectorFactory.class.getName(), map); TransportConfiguration server2 = new TransportConfiguration(NettyConnectorFactory.class.getName(), map2); ServerLocator locator = ActiveMQClient.createServerLocatorWithHA(server1, server2); ClientSessionFactory factory = locator.createSessionFactory(); ClientSession session = factory.createSession(); 29.1.4. Default JGroups values Previously, you had to review the jgroups-defaults.xml file to find the default JGroups values, which was time-consuming.
For your review convenience, Red Hat listed the following default JGroups values: <?xml version="1.0" encoding="UTF-8"?> <config xmlns="urn:org:jgroups"> <UDP ip_ttl="2" mcast_recv_buf_size="25m" mcast_send_buf_size="1m" ucast_recv_buf_size="20m" ucast_send_buf_size="1m" port_range="0" /> <TCP send_buf_size="640k" sock_conn_timeout="300" port_range="0" /> <TCP_NIO2 send_buf_size="640k" sock_conn_timeout="300" port_range="0" /> <TCPPING port_range="0" num_discovery_runs="4"/> <MPING ip_ttl="2"/> <kubernetes.KUBE_PING port_range="0"/> <MERGE3 min_interval="10000" max_interval="30000" /> <FD max_tries="5" msg_counts_as_heartbeat="false" timeout="3000" /> <FD_ALL interval="15000" timeout="60000" timeout_check_interval="5000"/> <FD_SOCK/> <VERIFY_SUSPECT timeout="1000"/> <pbcast.NAKACK2 xmit_interval="100" xmit_table_num_rows="50" /> <UNICAST3 xmit_interval="100" xmit_table_num_rows="50" /> <pbcast.STABLE stability_delay="500" desired_avg_gossip="5000" max_bytes="1m" /> <pbcast.GMS print_local_addr="false"/> <UFC max_credits="2m"/> <MFC max_credits="2m"/> <FRAG2 frag_size="30k"/> </config> 29.2. Server-side Message Load Balancing If a cluster connection is defined between nodes of a cluster, then JBoss EAP messaging will load balance messages arriving at a particular node from a client. A messaging cluster connection can be configured to load balance messages in a round robin fashion, irrespective of whether there are any matching consumers on other nodes. It can also be configured to distribute to other nodes only if matching consumers exist. See the Message Redistribution section for more information. Configuring the Cluster Connection A cluster connection group servers into clusters so that messages can be load balanced between the nodes of the cluster. A cluster connection is defined in the JBoss EAP server configuration using the cluster-connection element. Warning Red Hat supports using only one cluster-connection within the messaging-activemq subsystem. Below is the default cluster-connection as defined in the *-full and *-full-ha profiles. See Cluster Connection Attributes for the complete list of attributes. <subsystem xmlns="urn:jboss:domain:messaging-activemq:4.0"> <server name="default"> ... <cluster-connection name="my-cluster" discovery-group="dg-group1" connector-name="http-connector" address="jms"/> ... </server> </subsystem> In the case shown above the cluster connection will load balance messages sent to addresses that start with "jms". This cluster connection will, in effect, apply to all Jakarta Messaging queues and topics since they map to core queues that start with the substring "jms". Note When a packet is sent using a cluster connection and is at a blocked state and waiting for acknowledgements, the call-timeout attribute specifies how long it will wait for the reply before throwing an exception. The default value is 30000 . In certain cases, for example, if the remote Jakarta Messaging broker is disconnected from network and the transaction is incomplete, the thread could remain stuck until connection is re-established. To avoid this situation, it is recommended to use the call-failover-timeout attribute along with the call-timeout attribute. The call-failover-timeout attribute is used when a call is made during a failover attempt. The default value is -1 , which means no timeout. For more information on Client Failover, see Automatic Client Failover . 
Note Alternatively, if you would like the cluster connection to use a static list of servers for discovery then you can use the static-connectors attribute. For example: <subsystem xmlns="urn:jboss:domain:messaging-activemq:4.0"> <server name="default"> ... <cluster-connection name="my-cluster" static-connectors="server0-connector server1-connector" .../> ... </server> </subsystem> In this example, there are two servers defined where we know that at least one will be available. There may be many more servers in the cluster, but these will be discovered using one of these connectors once an initial connection has been made. Configuring a Cluster Connection for Duplicate Detection The cluster connection internally uses a core bridge to move messages between nodes of the cluster. To configure a cluster connection for duplicate message detection, set the use-duplicate-detection attribute to true , which is the default value. Cluster User Credentials When creating connections between nodes of a cluster to form a cluster connection, JBoss EAP messaging uses a cluster user and password. You can set the cluster user and password by using the following management CLI commands. This adds the following XML content to the messaging-activemq subsystem in the JBoss EAP configuration file. <subsystem xmlns="urn:jboss:domain:messaging-activemq:4.0"> <server name="default"> ... <cluster user="NewClusterUser" password="NewClusterPassword123"/> ... </server> </subsystem> Warning The default value for cluster-user is ACTIVEMQ.CLUSTER.ADMIN.USER and the default value for cluster-password is CHANGE ME!! . It is imperative that these values are changed from their default, or remote clients will be able to make connections to the server using the default values. If they are not changed from the default, JBoss EAP messaging will detect this and display a warning upon every startup. Note You can also use the cluster-credential-reference attribute to reference a credential store instead of setting a cluster password. 29.3. Client-side Load Balancing With JBoss EAP messaging client-side load balancing, subsequent sessions created using a single session factory can be connected to different nodes of the cluster. This allows sessions to spread smoothly across the nodes of a cluster and not be clumped on any particular node. The recommended way to declare a load balancing policy to be used by the client factory is to set the connection-load-balancing-policy-class-name attribute of the <connection-factory> resource. JBoss EAP messaging provides the following out-of-the-box load balancing policies, and you can also implement your own. Round robin With this policy, the first node is chosen randomly then each subsequent node is chosen sequentially in the same order. For example, nodes might be chosen in the order B , C , D , A , B , C , D , A , B or D , A , B , C , D , A , B , C . Use org.apache.activemq.artemis.api.core.client.loadbalance.RoundRobinConnectionLoadBalancingPolicy as the connection-load-balancing-policy-class-name . Random With this policy, each node is chosen randomly. Use org.apache.activemq.artemis.api.core.client.loadbalance.RandomConnectionLoadBalancingPolicy as the connection-load-balancing-policy-class-name . Random Sticky With this policy, the first node is chosen randomly and then reused for subsequent connections. Use org.apache.activemq.artemis.api.core.client.loadbalance.RandomStickyConnectionLoadBalancingPolicy as the connection-load-balancing-policy-class-name . 
First Element With this policy, the first, or 0th, node is always returned. Use org.apache.activemq.artemis.api.core.client.loadbalance.FirstElementConnectionLoadBalancingPolicy as the connection-load-balancing-policy-class-name . You can also implement your own policy by implementing the interface org.apache.activemq.artemis.api.core.client.loadbalance.ConnectionLoadBalancingPolicy 29.4. Message Redistribution With message redistribution, JBoss EAP messaging can be configured to automatically redistribute messages from queues which have no consumers back to other nodes in the cluster which do have matching consumers. To enable this functionality, cluster connection's message-load-balancing-type must be set to ON_DEMAND , which is the default value. You can set this using the following management CLI command. Message redistribution can be configured to kick in immediately after the last consumer on a queue is closed, or to wait a configurable delay after the last consumer on a queue is closed before redistributing. This is configured using the redistribution-delay attribute. You use the redistribution-delay attribute to set how many milliseconds to wait after the last consumer is closed on a queue before redistributing messages from that queue to other nodes of the cluster that have matching consumers. A value of -1 , which is the default value, means that messages will never be redistributed. A value of 0 means that messages will be immediately redistributed. The address-setting in the default JBoss EAP configuration sets a redistribution-delay value of 1000 , meaning that it will wait 1000 milliseconds before redistributing messages. <subsystem xmlns="urn:jboss:domain:messaging-activemq:4.0"> <server name="default"> ... <address-setting name="#" redistribution-delay="1000" message-counter-history-day-limit="10" page-size-bytes="2097152" max-size-bytes="10485760" expiry-address="jms.queue.ExpiryQueue" dead-letter-address="jms.queue.DLQ"/> ... </server> </subsystem> It often makes sense to introduce a delay before redistributing as it is a common case that a consumer closes but another one quickly is created on the same queue. In this case, you may not want to redistribute immediately since the new consumer will arrive shortly. Below is an example of an address-setting that sets a redistribution-delay of 0 for any queue or topic that is bound to an address that starts with "jms.". In this case, messages will be redistributed immediately. <subsystem xmlns="urn:jboss:domain:messaging-activemq:4.0"> <server name="default"> ... <address-setting name="jms.#" redistribution-delay="0"/> ... </server> </subsystem> This address setting can be added using the following management CLI command. 29.5. Clustered Message Grouping Important This feature is not supported. Clustered grouping follows a different approach relative to normal message grouping . In a cluster, message groups with specific group ids can arrive on any of the nodes. It is important for a node to determine which group ids are bound to which consumer on which node. Each node is responsible for routing message groups correctly to the node which has the consumer processing those group ids, irrespective of where the message groups arrive by default. Once messages with a given group id are sent to a specific consumer connected to the given node in the cluster, then those messages are never sent to another node even if the consumer is disconnected. This situation is addressed by a grouping handler. 
Each node has a grouping handler, and this grouping handler (along with other handlers) is responsible for routing the message groups to the correct node. There are two types of grouping handlers: LOCAL and REMOTE . The local handler is responsible for deciding the route that a message group should take. The remote handlers communicate with the local handler and work accordingly. Each cluster should choose a specific node to have a local grouping handler and all the other nodes should have remote handlers. Warning If message grouping is used in a cluster, it will break if a node configured as a remote grouping handler fails. Setting up a backup for the remote grouping handler will not correct this. The node that initially receives a message group makes the routing decision based on regular cluster routing conditions (round-robin queue availability). The node proposes this decision to the respective grouping handler, which then routes the messages to the proposed queue if it accepts the proposal. If the grouping handler rejects the proposal, it proposes some other route and the routing takes place accordingly. The other nodes follow suit and forward the message groups to the chosen queue. After a message arrives on a queue, it is pinned to a consumer on that queue. You can configure grouping handlers using the management CLI. The following command adds a LOCAL grouping handler with the address news.europe.# . This will require a server reload. The table below lists the configurable attributes for a grouping-handler . Attribute Description group-timeout With a REMOTE handler, this value specifies how often the REMOTE will notify the LOCAL that the route was used. With a LOCAL handler, if a route is not used for the time specified, it is removed, and a new path would need to be established. The value is in milliseconds. grouping-handler-address A reference to a cluster connection and the address it uses. reaper-period How often the reaper will be run to check for timed out group bindings (only valid for LOCAL handlers). timeout How long to wait for a handling decision to be made; an exception will be thrown during the send if this timeout is reached, ensuring that strict ordering is kept. type Whether the handler is the single local handler for the cluster, which makes handling decisions, or a remote handler which converses with the local handler. Possible values are LOCAL or REMOTE . 29.5.1. Best Practices for Clustered Message Grouping Some best practices for clustered grouping are as follows: If you create and close consumers regularly, make sure that your consumers are distributed evenly across the different nodes. Once a queue is pinned, messages are automatically transferred to that queue even if consumers are removed from it. If you wish to remove a queue that has a message group bound to it, make sure the queue is deleted by the session that is sending the messages. Doing this ensures that other nodes will not try to route messages to this queue after it is removed. As a failover mechanism, always replicate the node that has the local grouping handler. 29.6. Starting and Stopping Messaging Clusters When you configure JBoss EAP 7.4 servers to form an ActiveMQ Artemis cluster, there can be other servers and clients that are connected to the running clustered servers. It is recommended to shut down the connected clients and servers first, before shutting down the JBoss EAP 7.4 servers that are running in the cluster.
This must be done in sequence, not in parallel, to give the servers enough time to close all connections and to avoid failures during shutdown that might lead to inconsistent states. ActiveMQ Artemis does not support automatic scale down of cluster nodes and expects that all cluster nodes will be restarted. The same sequencing applies, in reverse order, when starting the cluster: you must first start the JBoss EAP 7.4 servers in the ActiveMQ Artemis cluster. When startup is complete, you can then start the other servers and clients that connect to the cluster.
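As a hedged illustration only, shutting down two standalone cluster members one after the other from a management host might look like the following; the host names, the default management port 9990, and the EAP_HOME path are assumptions for this sketch rather than values taken from your topology.
# Stop the connected clients and any dependent servers first, then stop each EAP node in turn.
$ EAP_HOME/bin/jboss-cli.sh --connect --controller=eap-node1.example.com:9990 --command=shutdown
$ EAP_HOME/bin/jboss-cli.sh --connect --controller=eap-node2.example.com:9990 --command=shutdown
Reverse the order when bringing the cluster back: start every JBoss EAP node first, then the other servers and clients.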
[ "<subsystem xmlns=\"urn:jboss:domain:messaging-activemq:4.0\"> <server name=\"default\"> <broadcast-group name=\"my-broadcast-group\" connectors=\"http-connector\" socket-binding=\"messaging-group\"/> </server> </subsystem> <socket-binding-group name=\"standard-sockets\" default-interface=\"public\" port-offset=\"USD{jboss.socket.binding.port-offset:0}\"> <socket-binding name=\"messaging-group\" interface=\"private\" port=\"5432\" multicast-address=\"231.7.7.7\" multicast-port=\"9876\"/> </socket-binding-group>", "/socket-binding-group=standard-sockets/socket-binding=messaging-group:add(interface=private,port=5432,multicast-address=231.7.7.7,multicast-port=9876)", "/subsystem=messaging-activemq/server=default/broadcast-group=my-broadcast-group:add(socket-binding=messaging-group,broadcast-period=2000,connectors=[http-connector])", "<subsystem xmlns=\"urn:jboss:domain:messaging-activemq:4.0\"> <server name=\"default\"> <broadcast-group name=\"my-broadcast-group\" connectors=\"http-connector\" jgroups-cluster=\"activemq-cluster\"/> </server> </subsystem>", "/subsystem=messaging-activemq/server=default/broadcast-group=my-broadcast-group:add(connectors=[http-connector],jgroups-cluster=activemq-cluster)", "<subsystem xmlns=\"urn:jboss:domain:messaging-activemq:4.0\"> <server name=\"default\"> <discovery-group name=\"my-discovery-group\" refresh-timeout=\"10000\" socket-binding=\"messaging-group\"/> </server> </subsystem> <socket-binding-group name=\"standard-sockets\" default-interface=\"public\" port-offset=\"USD{jboss.socket.binding.port-offset:0}\"> <socket-binding name=\"messaging-group\" interface=\"private\" port=\"5432\" multicast-address=\"231.7.7.7\" multicast-port=\"9876\"/> </socket-binding-group>", "/socket-binding-group=standard-sockets/socket-binding=messaging-group:add(interface=private,port=5432,multicast-address=231.7.7.7,multicast-port=9876)", "/subsystem=messaging-activemq/server=default/discovery-group=my-discovery-group:add(socket-binding=messaging-group,refresh-timeout=10000)", "<subsystem xmlns=\"urn:jboss:domain:messaging-activemq:4.0\"> <server name=\"default\"> <discovery-group name=\"my-discovery-group\" refresh-timeout=\"10000\" jgroups-cluster=\"activemq-cluster\"/> </server> </subsystem>", "/subsystem=messaging-activemq/server=default/discovery-group=my-discovery-group:add(refresh-timeout=10000,jgroups-cluster=activemq-cluster)", "<subsystem xmlns=\"urn:jboss:domain:messaging-activemq:4.0\"> <server name=\"default\"> <connection-factory name=\"RemoteConnectionFactory\" entries=\"java:jboss/exported/jms/RemoteConnectionFactory\" connectors=\"http-connector\"/> </server> </subsystem>", "<subsystem xmlns=\"urn:jboss:domain:messaging-activemq:4.0\"> <server name=\"default\"> <pooled-connection-factory name=\"activemq-ra\" transaction=\"xa\" entries=\"java:/JmsXA java:jboss/DefaultJMSConnectionFactory\" connectors=\"in-vm\"/> </server> </subsystem>", "final String groupAddress = \"231.7.7.7\"; final int groupPort = 9876; ServerLocator factory = ActiveMQClient.createServerLocatorWithHA(new DiscoveryGroupConfiguration( groupAddress, groupPort, new UDPBroadcastGroupConfiguration(groupAddress, groupPort, null, -1))); ClientSessionFactory factory = locator.createSessionFactory(); ClientSession session1 = factory.createSession(); ClientSession session2 = factory.createSession();", "<subsystem xmlns=\"urn:jboss:domain:messaging-activemq:4.0\"> <server name=\"default\"> <connection-factory name=\"MyConnectionFactory\" entries=\"...\" connectors=\"http-connector http-node1 
http-node2\"/> </server> </subsystem>", "HashMap<String, Object> map = new HashMap<String, Object>(); map.put(\"host\", \"myhost\"); map.put(\"port\", \"8080\"); HashMap<String, Object> map2 = new HashMap<String, Object>(); map2.put(\"host\", \"myhost2\"); map2.put(\"port\", \"8080\"); TransportConfiguration server1 = new TransportConfiguration(NettyConnectorFactory.class.getName(), map); TransportConfiguration server2 = new TransportConfiguration(NettyConnectorFactory.class.getName(), map2); ServerLocator locator = ActiveMQClient.createServerLocatorWithHA(server1, server2); ClientSessionFactory factory = locator.createSessionFactory(); ClientSession session = factory.createSession();", "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <config xmlns=\"urn:org:jgroups\"> <UDP ip_ttl=\"2\" mcast_recv_buf_size=\"25m\" mcast_send_buf_size=\"1m\" ucast_recv_buf_size=\"20m\" ucast_send_buf_size=\"1m\" port_range=\"0\" /> <TCP send_buf_size=\"640k\" sock_conn_timeout=\"300\" port_range=\"0\" /> <TCP_NIO2 send_buf_size=\"640k\" sock_conn_timeout=\"300\" port_range=\"0\" /> <TCPPING port_range=\"0\" num_discovery_runs=\"4\"/> <MPING ip_ttl=\"2\"/> <kubernetes.KUBE_PING port_range=\"0\"/> <MERGE3 min_interval=\"10000\" max_interval=\"30000\" /> <FD max_tries=\"5\" msg_counts_as_heartbeat=\"false\" timeout=\"3000\" /> <FD_ALL interval=\"15000\" timeout=\"60000\" timeout_check_interval=\"5000\"/> <FD_SOCK/> <VERIFY_SUSPECT timeout=\"1000\"/> <pbcast.NAKACK2 xmit_interval=\"100\" xmit_table_num_rows=\"50\" /> <UNICAST3 xmit_interval=\"100\" xmit_table_num_rows=\"50\" /> <pbcast.STABLE stability_delay=\"500\" desired_avg_gossip=\"5000\" max_bytes=\"1m\" /> <pbcast.GMS print_local_addr=\"false\"/> <UFC max_credits=\"2m\"/> <MFC max_credits=\"2m\"/> <FRAG2 frag_size=\"30k\"/> </config>", "<subsystem xmlns=\"urn:jboss:domain:messaging-activemq:4.0\"> <server name=\"default\"> <cluster-connection name=\"my-cluster\" discovery-group=\"dg-group1\" connector-name=\"http-connector\" address=\"jms\"/> </server> </subsystem>", "<subsystem xmlns=\"urn:jboss:domain:messaging-activemq:4.0\"> <server name=\"default\"> <cluster-connection name=\"my-cluster\" static-connectors=\"server0-connector server1-connector\" .../> </server> </subsystem>", "/subsystem=messaging-activemq/server=default/cluster-connection=my-cluster:write-attribute(name=use-duplicate-detection,value=true)", "/subsystem=messaging-activemq/server=default:write-attribute(name=cluster-user,value=\"NewClusterUser\") /subsystem=messaging-activemq/server=default:write-attribute(name=cluster-password,value=\"NewClusterPassword123\")", "<subsystem xmlns=\"urn:jboss:domain:messaging-activemq:4.0\"> <server name=\"default\"> <cluster user=\"NewClusterUser\" password=\"NewClusterPassword123\"/> </server> </subsystem>", "/subsystem=messaging-activemq/server=default:write-attribute(name=cluster-credential-reference,value={clear-text=SecretStorePassword})", "/subsystem=messaging-activemq/server=default/cluster-connection=my-cluster:write-attribute(name=message-load-balancing-type,value=ON_DEMAND)", "<subsystem xmlns=\"urn:jboss:domain:messaging-activemq:4.0\"> <server name=\"default\"> <address-setting name=\"#\" redistribution-delay=\"1000\" message-counter-history-day-limit=\"10\" page-size-bytes=\"2097152\" max-size-bytes=\"10485760\" expiry-address=\"jms.queue.ExpiryQueue\" dead-letter-address=\"jms.queue.DLQ\"/> </server> </subsystem>", "<subsystem xmlns=\"urn:jboss:domain:messaging-activemq:4.0\"> <server name=\"default\"> <address-setting name=\"jms.#\" 
redistribution-delay=\"0\"/> </server> </subsystem>", "/subsystem=messaging-activemq/server=default/address-setting=jms.#:add(redistribution-delay=1000)", "/subsystem=messaging-activemq/server=default/grouping-handler=my-group-handler:add(grouping-handler-address=\"news.europe.#\",type=LOCAL)", "reload" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/configuring_messaging/clusters_overview
Chapter 6. Updating a cluster in a disconnected environment
Chapter 6. Updating a cluster in a disconnected environment 6.1. About cluster updates in a disconnected environment A disconnected environment is one in which your cluster nodes cannot access the internet or where you want to manage update recommendations and release images locally for policy or performance purposes. This section covers mirroring OpenShift Container Platform images, managing an OpenShift Update Service, and performing cluster updates in a disconnected environment. 6.1.1. Mirroring OpenShift Container Platform images To update your cluster in a disconnected environment, your cluster environment must have access to a mirror registry that has the necessary images and resources for your targeted update. A single container image registry is sufficient to host mirrored images for several clusters in the disconnected network. The following page has instructions for mirroring images onto a repository in your disconnected cluster: Mirroring OpenShift Container Platform images 6.1.2. Performing a cluster update in a disconnected environment You can use one of the following procedures to update a disconnected OpenShift Container Platform cluster: Updating a cluster in a disconnected environment using the OpenShift Update Service Updating a cluster in a disconnected environment without the OpenShift Update Service 6.1.3. Uninstalling the OpenShift Update Service from a cluster You can use the following procedure to uninstall a local copy of the OpenShift Update Service (OSUS) from your cluster: Uninstalling the OpenShift Update Service from a cluster 6.2. Mirroring OpenShift Container Platform images You must mirror container images onto a mirror registry before you can update a cluster in a disconnected environment. You can also use this procedure in connected environments to ensure your clusters run only approved container images that have satisfied your organizational controls for external content. Note Your mirror registry must be running at all times while the cluster is running. The following steps outline the high-level workflow on how to mirror images to a mirror registry: Install the OpenShift CLI ( oc ) on all devices being used to retrieve and push release images. Download the registry pull secret and add it to your cluster. If you use the oc-mirror OpenShift CLI ( oc ) plugin : Install the oc-mirror plugin on all devices being used to retrieve and push release images. Create an image set configuration file for the plugin to use when determining which release images to mirror. You can edit this configuration file later to change which release images that the plugin mirrors. Mirror your targeted release images directly to a mirror registry, or to removable media and then to a mirror registry. Configure your cluster to use the resources generated by the oc-mirror plugin. Repeat these steps as needed to update your mirror registry. If you use the oc adm release mirror command : Set environment variables that correspond to your environment and the release images you want to mirror. Mirror your targeted release images directly to a mirror registry, or to removable media and then to a mirror registry. Repeat these steps as needed to update your mirror registry. Compared to using the oc adm release mirror command, the oc-mirror plugin has the following advantages: It can mirror content other than container images. After mirroring images for the first time, it is easier to update images in the registry. 
The oc-mirror plugin provides an automated way to mirror the release payload from Quay, and also builds the latest graph data image for the OpenShift Update Service running in the disconnected environment. 6.2.1. Mirroring resources using the oc-mirror plugin You can use the oc-mirror OpenShift CLI ( oc ) plugin to mirror images to a mirror registry in your fully or partially disconnected environments. You must run oc-mirror from a system with internet connectivity to download the required images from the official Red Hat registries. See Mirroring images for a disconnected installation using the oc-mirror plugin for additional details. 6.2.2. Mirroring images using the oc adm release mirror command You can use the oc adm release mirror command to mirror images to your mirror registry. 6.2.2.1. Prerequisites You must have a container image registry that supports Docker v2-2 in the location that will host the OpenShift Container Platform cluster, such as Red Hat Quay. Note If you use Red Hat Quay, you must use version 3.6 or later with the oc-mirror plugin. If you have an entitlement to Red Hat Quay, see the documentation on deploying Red Hat Quay for proof-of-concept purposes or by using the Quay Operator . If you need additional assistance selecting and installing a registry, contact your sales representative or Red Hat Support. If you do not have an existing solution for a container image registry, the mirror registry for Red Hat OpenShift is included in OpenShift Container Platform subscriptions. The mirror registry for Red Hat OpenShift is a small-scale container registry that you can use to mirror OpenShift Container Platform container images in disconnected installations and updates. 6.2.2.2. Preparing your mirror host Before you perform the mirror procedure, you must prepare the host to retrieve content and push it to the remote location. 6.2.2.2.1. Installing the OpenShift CLI You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.17. Download and install the new version of oc . If you are updating a cluster in a disconnected environment, install the oc version that you plan to update to. Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.17 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.17 Windows Client entry and save the file. Unzip the archive with a ZIP program. 
Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.17 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.17 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> Additional resources Installing and using CLI plugins 6.2.2.2.2. Configuring credentials that allow images to be mirrored Create a container image registry credentials file that enables you to mirror images from Red Hat to your mirror. Warning Do not use this image registry credentials file as the pull secret when you install a cluster. If you provide this file when you install cluster, all of the machines in the cluster will have write access to your mirror registry. Prerequisites You configured a mirror registry to use in your disconnected environment. You identified an image repository location on your mirror registry to mirror images into. You provisioned a mirror registry account that allows images to be uploaded to that image repository. You have write access to the mirror registry. Procedure Complete the following steps on the installation host: Download your registry.redhat.io pull secret from Red Hat OpenShift Cluster Manager . Make a copy of your pull secret in JSON format by running the following command: USD cat ./pull-secret | jq . > <path>/<pull_secret_file_in_json> 1 1 Specify the path to the folder to store the pull secret in and a name for the JSON file that you create. Example pull secret { "auths": { "cloud.openshift.com": { "auth": "b3BlbnNo...", "email": "[email protected]" }, "quay.io": { "auth": "b3BlbnNo...", "email": "[email protected]" }, "registry.connect.redhat.com": { "auth": "NTE3Njg5Nj...", "email": "[email protected]" }, "registry.redhat.io": { "auth": "NTE3Njg5Nj...", "email": "[email protected]" } } } Optional: If using the oc-mirror plugin, save the file as either ~/.docker/config.json or USDXDG_RUNTIME_DIR/containers/auth.json : If the .docker or USDXDG_RUNTIME_DIR/containers directories do not exist, create one by entering the following command: USD mkdir -p <directory_name> Where <directory_name> is either ~/.docker or USDXDG_RUNTIME_DIR/containers . Copy the pull secret to the appropriate directory by entering the following command: USD cp <path>/<pull_secret_file_in_json> <directory_name>/<auth_file> Where <directory_name> is either ~/.docker or USDXDG_RUNTIME_DIR/containers , and <auth_file> is either config.json or auth.json . Generate the base64-encoded user name and password or token for your mirror registry by running the following command: USD echo -n '<user_name>:<password>' | base64 -w0 1 1 For <user_name> and <password> , specify the user name and password that you configured for your registry. 
Example output BGVtbYk3ZHAtqXs= Edit the JSON file and add a section that describes your registry to it: "auths": { "<mirror_registry>": { 1 "auth": "<credentials>", 2 "email": "[email protected]" } }, 1 Specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example, registry.example.com or registry.example.com:8443 2 Specify the base64-encoded user name and password for the mirror registry. Example modified pull secret { "auths": { "registry.example.com": { "auth": "BGVtbYk3ZHAtqXs=", "email": "[email protected]" }, "cloud.openshift.com": { "auth": "b3BlbnNo...", "email": "[email protected]" }, "quay.io": { "auth": "b3BlbnNo...", "email": "[email protected]" }, "registry.connect.redhat.com": { "auth": "NTE3Njg5Nj...", "email": "[email protected]" }, "registry.redhat.io": { "auth": "NTE3Njg5Nj...", "email": "[email protected]" } } } 6.2.2.3. Mirroring images to a mirror registry Important To avoid excessive memory usage by the OpenShift Update Service application, you must mirror release images to a separate repository as described in the following procedure. Prerequisites You configured a mirror registry to use in your disconnected environment and can access the certificate and credentials that you configured. You downloaded the pull secret from Red Hat OpenShift Cluster Manager and modified it to include authentication to your mirror repository. If you use self-signed certificates, you have specified a Subject Alternative Name in the certificates. Procedure Use the Red Hat OpenShift Container Platform Update Graph visualizer and update planner to plan an update from one version to another. The OpenShift Update Graph provides channel graphs and a way to confirm that there is an update path between your current and intended cluster versions. Set the required environment variables: Export the release version: USD export OCP_RELEASE=<release_version> For <release_version> , specify the tag that corresponds to the version of OpenShift Container Platform to which you want to update, such as 4.5.4 . Export the local registry name and host port: USD LOCAL_REGISTRY='<local_registry_host_name>:<local_registry_host_port>' For <local_registry_host_name> , specify the registry domain name for your mirror repository, and for <local_registry_host_port> , specify the port that it serves content on. Export the local repository name: USD LOCAL_REPOSITORY='<local_repository_name>' For <local_repository_name> , specify the name of the repository to create in your registry, such as ocp4/openshift4 . If you are using the OpenShift Update Service, export an additional local repository name to contain the release images: USD LOCAL_RELEASE_IMAGES_REPOSITORY='<local_release_images_repository_name>' For <local_release_images_repository_name> , specify the name of the repository to create in your registry, such as ocp4/openshift4-release-images . Export the name of the repository to mirror: USD PRODUCT_REPO='openshift-release-dev' For a production release, you must specify openshift-release-dev . Export the path to your registry pull secret: USD LOCAL_SECRET_JSON='<path_to_pull_secret>' For <path_to_pull_secret> , specify the absolute path to and file name of the pull secret for your mirror registry that you created. Note If your cluster uses an ImageContentSourcePolicy object to configure repository mirroring, you can use only global pull secrets for mirrored registries. You cannot add a pull secret to a project. 
Export the release mirror: USD RELEASE_NAME="ocp-release" For a production release, you must specify ocp-release . Export the type of architecture for your cluster: USD ARCHITECTURE=<cluster_architecture> 1 1 Specify the architecture of the cluster, such as x86_64 , aarch64 , s390x , or ppc64le . Export the path to the directory to host the mirrored images: USD REMOVABLE_MEDIA_PATH=<path> 1 1 Specify the full path, including the initial forward slash (/) character. Review the images and configuration manifests to mirror: USD oc adm release mirror -a USD{LOCAL_SECRET_JSON} --to-dir=USD{REMOVABLE_MEDIA_PATH}/mirror quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --dry-run Mirror the version images to the mirror registry. If your mirror host does not have internet access, take the following actions: Connect the removable media to a system that is connected to the internet. Mirror the images and configuration manifests to a directory on the removable media: USD oc adm release mirror -a USD{LOCAL_SECRET_JSON} --to-dir=USD{REMOVABLE_MEDIA_PATH}/mirror quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} Note This command also generates and saves the mirrored release image signature config map onto the removable media. Take the media to the disconnected environment and upload the images to the local container registry. USD oc image mirror -a USD{LOCAL_SECRET_JSON} --from-dir=USD{REMOVABLE_MEDIA_PATH}/mirror "file://openshift/release:USD{OCP_RELEASE}*" USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} 1 1 For REMOVABLE_MEDIA_PATH , you must use the same path that you specified when you mirrored the images. Use oc command-line interface (CLI) to log in to the cluster that you are updating. Apply the mirrored release image signature config map to the connected cluster: USD oc apply -f USD{REMOVABLE_MEDIA_PATH}/mirror/config/<image_signature_file> 1 1 For <image_signature_file> , specify the path and name of the file, for example, signature-sha256-81154f5c03294534.yaml . If you are using the OpenShift Update Service, mirror the release image to a separate repository: USD oc image mirror -a USD{LOCAL_SECRET_JSON} USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE} USD{LOCAL_REGISTRY}/USD{LOCAL_RELEASE_IMAGES_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE} If the local container registry and the cluster are connected to the mirror host, take the following actions: Directly push the release images to the local registry and apply the config map to the cluster by using following command: USD oc adm release mirror -a USD{LOCAL_SECRET_JSON} --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} \ --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} --apply-release-image-signature Note If you include the --apply-release-image-signature option, do not create the config map for image signature verification. If you are using the OpenShift Update Service, mirror the release image to a separate repository: USD oc image mirror -a USD{LOCAL_SECRET_JSON} USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE} USD{LOCAL_REGISTRY}/USD{LOCAL_RELEASE_IMAGES_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE} 6.3. Updating a cluster in a disconnected environment using the OpenShift Update Service To get an update experience similar to connected clusters, you can use the following procedures to install and configure the OpenShift Update Service (OSUS) in a disconnected environment. 
The following steps outline the high-level workflow on how to update a cluster in a disconnected environment using OSUS: Configure access to a secured registry. Update the global cluster pull secret to access your mirror registry. Install the OSUS Operator. Create a graph data container image for the OpenShift Update Service. Install the OSUS application and configure your clusters to use the OpenShift Update Service in your environment. Perform a supported update procedure from the documentation as you would with a connected cluster. 6.3.1. Using the OpenShift Update Service in a disconnected environment The OpenShift Update Service (OSUS) provides update recommendations to OpenShift Container Platform clusters. Red Hat publicly hosts the OpenShift Update Service, and clusters in a connected environment can connect to the service through public APIs to retrieve update recommendations. However, clusters in a disconnected environment cannot access these public APIs to retrieve update information. To have a similar update experience in a disconnected environment, you can install and configure the OpenShift Update Service so that it is available within the disconnected environment. A single OSUS instance is capable of serving recommendations to thousands of clusters. OSUS can be scaled horizontally to cater to more clusters by changing the replica value. So for most disconnected use cases, one OSUS instance is enough. For example, Red Hat hosts just one OSUS instance for the entire fleet of connected clusters. If you want to keep update recommendations separate in different environments, you can run one OSUS instance for each environment. For example, in a case where you have separate test and stage environments, you might not want a cluster in a stage environment to receive update recommendations to version A if that version has not been tested in the test environment yet. The following sections describe how to install an OSUS instance and configure it to provide update recommendations to a cluster. Additional resources About the OpenShift Update Service Understanding update channels and releases 6.3.2. Prerequisites You must have the oc command-line interface (CLI) tool installed. You must provision a container image registry in your environment with the container images for your update, as described in Mirroring OpenShift Container Platform images . 6.3.3. Configuring access to a secured registry for the OpenShift Update Service If the release images are contained in a registry whose HTTPS X.509 certificate is signed by a custom certificate authority, complete the steps in Configuring additional trust stores for image registry access along with following changes for the update service. The OpenShift Update Service Operator needs the config map key name updateservice-registry in the registry CA cert. Image registry CA config map example for the update service apiVersion: v1 kind: ConfigMap metadata: name: my-registry-ca data: updateservice-registry: | 1 -----BEGIN CERTIFICATE----- ... -----END CERTIFICATE----- registry-with-port.example.com..5000: | 2 -----BEGIN CERTIFICATE----- ... -----END CERTIFICATE----- 1 The OpenShift Update Service Operator requires the config map key name updateservice-registry in the registry CA cert. 2 If the registry has the port, such as registry-with-port.example.com:5000 , : should be replaced with .. . 6.3.4. 
Updating the global cluster pull secret You can update the global pull secret for your cluster by either replacing the current pull secret or appending a new pull secret. The procedure is required when users use a separate registry to store images than the registry used during installation. Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure Optional: To append a new pull secret to the existing pull secret, complete the following steps: Enter the following command to download the pull secret: USD oc get secret/pull-secret -n openshift-config \ --template='{{index .data ".dockerconfigjson" | base64decode}}' \ ><pull_secret_location> 1 1 Provide the path to the pull secret file. Enter the following command to add the new pull secret: USD oc registry login --registry="<registry>" \ 1 --auth-basic="<username>:<password>" \ 2 --to=<pull_secret_location> 3 1 Provide the new registry. You can include multiple repositories within the same registry, for example: --registry="<registry/my-namespace/my-repository>" . 2 Provide the credentials of the new registry. 3 Provide the path to the pull secret file. Alternatively, you can perform a manual update to the pull secret file. Enter the following command to update the global pull secret for your cluster: USD oc set data secret/pull-secret -n openshift-config \ --from-file=.dockerconfigjson=<pull_secret_location> 1 1 Provide the path to the new pull secret file. This update is rolled out to all nodes, which can take some time depending on the size of your cluster. Note As of OpenShift Container Platform 4.7.4, changes to the global pull secret no longer trigger a node drain or reboot. 6.3.5. Installing the OpenShift Update Service Operator To install the OpenShift Update Service, you must first install the OpenShift Update Service Operator by using the OpenShift Container Platform web console or CLI. Note For clusters that are installed in disconnected environments, also known as disconnected clusters, Operator Lifecycle Manager by default cannot access the Red Hat-provided OperatorHub sources hosted on remote registries because those remote sources require full internet connectivity. For more information, see Using Operator Lifecycle Manager in disconnected environments . 6.3.5.1. Installing the OpenShift Update Service Operator by using the web console You can use the web console to install the OpenShift Update Service Operator. Procedure In the web console, click Operators OperatorHub . Note Enter Update Service into the Filter by keyword... field to find the Operator faster. Choose OpenShift Update Service from the list of available Operators, and click Install . Select an Update channel . Select a Version . Select A specific namespace on the cluster under Installation Mode . Select a namespace for Installed Namespace or accept the recommended namespace openshift-update-service . Select an Update approval strategy: The Automatic strategy allows Operator Lifecycle Manager (OLM) to automatically update the Operator when a new version is available. The Manual strategy requires a cluster administrator to approve the Operator update. Click Install . Go to Operators Installed Operators and verify that the OpenShift Update Service Operator is installed. Ensure that OpenShift Update Service is listed in the correct namespace with a Status of Succeeded . 6.3.5.2. Installing the OpenShift Update Service Operator by using the CLI You can use the OpenShift CLI ( oc ) to install the OpenShift Update Service Operator. 
Procedure Create a namespace for the OpenShift Update Service Operator: Create a Namespace object YAML file, for example, update-service-namespace.yaml , for the OpenShift Update Service Operator: apiVersion: v1 kind: Namespace metadata: name: openshift-update-service annotations: openshift.io/node-selector: "" labels: openshift.io/cluster-monitoring: "true" 1 1 Set the openshift.io/cluster-monitoring label to enable Operator-recommended cluster monitoring on this namespace. Create the namespace: USD oc create -f <filename>.yaml For example: USD oc create -f update-service-namespace.yaml Install the OpenShift Update Service Operator by creating the following objects: Create an OperatorGroup object YAML file, for example, update-service-operator-group.yaml : apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: update-service-operator-group namespace: openshift-update-service spec: targetNamespaces: - openshift-update-service Create an OperatorGroup object: USD oc -n openshift-update-service create -f <filename>.yaml For example: USD oc -n openshift-update-service create -f update-service-operator-group.yaml Create a Subscription object YAML file, for example, update-service-subscription.yaml : Example Subscription apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: update-service-subscription namespace: openshift-update-service spec: channel: v1 installPlanApproval: "Automatic" source: "redhat-operators" 1 sourceNamespace: "openshift-marketplace" name: "cincinnati-operator" 1 Specify the name of the catalog source that provides the Operator. For clusters that do not use a custom Operator Lifecycle Manager (OLM), specify redhat-operators . If your OpenShift Container Platform cluster is installed in a disconnected environment, specify the name of the CatalogSource object created when you configured Operator Lifecycle Manager (OLM). Create the Subscription object: USD oc create -f <filename>.yaml For example: USD oc -n openshift-update-service create -f update-service-subscription.yaml The OpenShift Update Service Operator is installed to the openshift-update-service namespace and targets the openshift-update-service namespace. Verify the Operator installation: USD oc -n openshift-update-service get clusterserviceversions Example output NAME DISPLAY VERSION REPLACES PHASE update-service-operator.v4.6.0 OpenShift Update Service 4.6.0 Succeeded ... If the OpenShift Update Service Operator is listed, the installation was successful. The version number might be different than shown. Additional resources Installing Operators in your namespace . 6.3.6. Creating the OpenShift Update Service graph data container image The OpenShift Update Service requires a graph data container image, from which the OpenShift Update Service retrieves information about channel membership and blocked update edges. Graph data is typically fetched directly from the update graph data repository. In environments where an internet connection is unavailable, loading this information from an init container is another way to make the graph data available to the OpenShift Update Service. The role of the init container is to provide a local copy of the graph data, and during pod initialization, the init container copies the data to a volume that is accessible by the service. Note The oc-mirror OpenShift CLI ( oc ) plugin creates this graph data container image in addition to mirroring release images. 
If you used the oc-mirror plugin to mirror your release images, you can skip this procedure. Procedure Create a Dockerfile, for example, ./Dockerfile , containing the following: FROM registry.access.redhat.com/ubi9/ubi:latest RUN curl -L -o cincinnati-graph-data.tar.gz https://api.openshift.com/api/upgrades_info/graph-data RUN mkdir -p /var/lib/cincinnati-graph-data && tar xvzf cincinnati-graph-data.tar.gz -C /var/lib/cincinnati-graph-data/ --no-overwrite-dir --no-same-owner CMD ["/bin/bash", "-c" ,"exec cp -rp /var/lib/cincinnati-graph-data/* /var/lib/cincinnati/graph-data"] Use the docker file created in the above step to build a graph data container image, for example, registry.example.com/openshift/graph-data:latest : USD podman build -f ./Dockerfile -t registry.example.com/openshift/graph-data:latest Push the graph data container image created in the step to a repository that is accessible to the OpenShift Update Service, for example, registry.example.com/openshift/graph-data:latest : USD podman push registry.example.com/openshift/graph-data:latest Note To push a graph data image to a registry in a disconnected environment, copy the graph data container image created in the step to a repository that is accessible to the OpenShift Update Service. Run oc image mirror --help for available options. 6.3.7. Creating an OpenShift Update Service application You can create an OpenShift Update Service application by using the OpenShift Container Platform web console or CLI. 6.3.7.1. Creating an OpenShift Update Service application by using the web console You can use the OpenShift Container Platform web console to create an OpenShift Update Service application by using the OpenShift Update Service Operator. Prerequisites The OpenShift Update Service Operator has been installed. The OpenShift Update Service graph data container image has been created and pushed to a repository that is accessible to the OpenShift Update Service. The current release and update target releases have been mirrored to a registry in the disconnected environment. Procedure In the web console, click Operators Installed Operators . Choose OpenShift Update Service from the list of installed Operators. Click the Update Service tab. Click Create UpdateService . Enter a name in the Name field, for example, service . Enter the local pullspec in the Graph Data Image field to the graph data container image created in "Creating the OpenShift Update Service graph data container image", for example, registry.example.com/openshift/graph-data:latest . In the Releases field, enter the registry and repository created to contain the release images in "Mirroring the OpenShift Container Platform image repository", for example, registry.example.com/ocp4/openshift4-release-images . Enter 2 in the Replicas field. Click Create to create the OpenShift Update Service application. Verify the OpenShift Update Service application: From the UpdateServices list in the Update Service tab, click the Update Service application just created. Click the Resources tab. Verify each application resource has a status of Created . 6.3.7.2. Creating an OpenShift Update Service application by using the CLI You can use the OpenShift CLI ( oc ) to create an OpenShift Update Service application. Prerequisites The OpenShift Update Service Operator has been installed. The OpenShift Update Service graph data container image has been created and pushed to a repository that is accessible to the OpenShift Update Service. 
The current release and update target releases have been mirrored to a registry in the disconnected environment. Procedure Configure the OpenShift Update Service target namespace, for example, openshift-update-service : USD NAMESPACE=openshift-update-service The namespace must match the targetNamespaces value from the operator group. Configure the name of the OpenShift Update Service application, for example, service : USD NAME=service Configure the registry and repository for the release images as configured in "Mirroring the OpenShift Container Platform image repository", for example, registry.example.com/ocp4/openshift4-release-images : USD RELEASE_IMAGES=registry.example.com/ocp4/openshift4-release-images Set the local pullspec for the graph data image to the graph data container image created in "Creating the OpenShift Update Service graph data container image", for example, registry.example.com/openshift/graph-data:latest : USD GRAPH_DATA_IMAGE=registry.example.com/openshift/graph-data:latest Create an OpenShift Update Service application object: USD oc -n "USD{NAMESPACE}" create -f - <<EOF apiVersion: updateservice.operator.openshift.io/v1 kind: UpdateService metadata: name: USD{NAME} spec: replicas: 2 releases: USD{RELEASE_IMAGES} graphDataImage: USD{GRAPH_DATA_IMAGE} EOF Verify the OpenShift Update Service application: Use the following command to obtain a policy engine route: USD while sleep 1; do POLICY_ENGINE_GRAPH_URI="USD(oc -n "USD{NAMESPACE}" get -o jsonpath='{.status.policyEngineURI}/api/upgrades_info/v1/graph{"\n"}' updateservice "USD{NAME}")"; SCHEME="USD{POLICY_ENGINE_GRAPH_URI%%:*}"; if test "USD{SCHEME}" = http -o "USD{SCHEME}" = https; then break; fi; done You might need to poll until the command succeeds. Retrieve a graph from the policy engine. Be sure to specify a valid version for channel . For example, if running in OpenShift Container Platform 4.17, use stable-4.17 : USD while sleep 10; do HTTP_CODE="USD(curl --header Accept:application/json --output /dev/stderr --write-out "%{http_code}" "USD{POLICY_ENGINE_GRAPH_URI}?channel=stable-4.6")"; if test "USD{HTTP_CODE}" -eq 200; then break; fi; echo "USD{HTTP_CODE}"; done This polls until the graph request succeeds; however, the resulting graph might be empty depending on which release images you have mirrored. Note The policy engine route name must not be more than 63 characters based on RFC-1123. If you see ReconcileCompleted status as false with the reason CreateRouteFailed caused by host must conform to DNS 1123 naming convention and must be no more than 63 characters , try creating the Update Service with a shorter name. 6.3.8. Configuring the Cluster Version Operator (CVO) After the OpenShift Update Service Operator has been installed and the OpenShift Update Service application has been created, the Cluster Version Operator (CVO) can be updated to pull graph data from the OpenShift Update Service installed in your environment. Prerequisites The OpenShift Update Service Operator has been installed. The OpenShift Update Service graph data container image has been created and pushed to a repository that is accessible to the OpenShift Update Service. The current release and update target releases have been mirrored to a registry in the disconnected environment. The OpenShift Update Service application has been created. 
Procedure Set the OpenShift Update Service target namespace, for example, openshift-update-service : USD NAMESPACE=openshift-update-service Set the name of the OpenShift Update Service application, for example, service : USD NAME=service Obtain the policy engine route: USD POLICY_ENGINE_GRAPH_URI="USD(oc -n "USD{NAMESPACE}" get -o jsonpath='{.status.policyEngineURI}/api/upgrades_info/v1/graph{"\n"}' updateservice "USD{NAME}")" Set the patch for the pull graph data: USD PATCH="{\"spec\":{\"upstream\":\"USD{POLICY_ENGINE_GRAPH_URI}\"}}" Patch the CVO to use the OpenShift Update Service in your environment: USD oc patch clusterversion version -p USDPATCH --type merge Note See Configuring the cluster-wide proxy to configure the CA to trust the update server. 6.3.9. steps Before updating your cluster, confirm that the following conditions are met: The Cluster Version Operator (CVO) is configured to use your installed OpenShift Update Service application. The release image signature config map for the new release is applied to your cluster. Note The Cluster Version Operator (CVO) uses release image signatures to ensure that release images have not been modified, by verifying that the release image signatures match the expected result. The current release and update target release images are mirrored to a registry in the disconnected environment. A recent graph data container image has been mirrored to your registry. A recent version of the OpenShift Update Service Operator is installed. Note If you have not recently installed or updated the OpenShift Update Service Operator, there might be a more recent version available. See Using Operator Lifecycle Manager in disconnected environments for more information about how to update your OLM catalog in a disconnected environment. After you configure your cluster to use the installed OpenShift Update Service and local mirror registry, you can use any of the following update methods: Updating a cluster using the web console Updating a cluster using the CLI Performing a Control Plane Only update Performing a canary rollout update Updating a cluster that includes RHEL compute machines 6.4. Updating a cluster in a disconnected environment without the OpenShift Update Service Use the following procedures to update a cluster in a disconnected environment without access to the OpenShift Update Service. 6.4.1. Prerequisites You must have the oc command-line interface (CLI) tool installed. You must provision a local container image registry with the container images for your update, as described in Mirroring OpenShift Container Platform images . You must have access to the cluster as a user with admin privileges. See Using RBAC to define and apply permissions . You must have a recent etcd backup in case your update fails and you must restore your cluster to a state . You have updated all Operators previously installed through Operator Lifecycle Manager (OLM) to a version that is compatible with your target release. Updating the Operators ensures they have a valid update path when the default OperatorHub catalogs switch from the current minor version to the during a cluster update. See Updating installed Operators for more information on how to check compatibility and, if necessary, update the installed Operators. You must ensure that all machine config pools (MCPs) are running and not paused. Nodes associated with a paused MCP are skipped during the update process. You can pause the MCPs if you are performing a canary rollout update strategy. 
If your cluster uses manually maintained credentials, update the cloud provider resources for the new release. For more information, including how to determine if this is a requirement for your cluster, see Preparing to update a cluster with manually maintained credentials . If you run an Operator or you have configured any application with the pod disruption budget, you might experience an interruption during the update process. If minAvailable is set to 1 in PodDisruptionBudget , the nodes are drained to apply pending machine configs which might block the eviction process. If several nodes are rebooted, all the pods might run on only one node, and the PodDisruptionBudget field can prevent the node drain. Note If you run an Operator or you have configured any application with the pod disruption budget, you might experience an interruption during the update process. If minAvailable is set to 1 in PodDisruptionBudget , the nodes are drained to apply pending machine configs which might block the eviction process. If several nodes are rebooted, all the pods might run on only one node, and the PodDisruptionBudget field can prevent the node drain. 6.4.2. Pausing a MachineHealthCheck resource During the update process, nodes in the cluster might become temporarily unavailable. In the case of worker nodes, the machine health check might identify such nodes as unhealthy and reboot them. To avoid rebooting such nodes, pause all the MachineHealthCheck resources before updating the cluster. Prerequisites Install the OpenShift CLI ( oc ). Procedure To list all the available MachineHealthCheck resources that you want to pause, run the following command: USD oc get machinehealthcheck -n openshift-machine-api To pause the machine health checks, add the cluster.x-k8s.io/paused="" annotation to the MachineHealthCheck resource. Run the following command: USD oc -n openshift-machine-api annotate mhc <mhc-name> cluster.x-k8s.io/paused="" The annotated MachineHealthCheck resource resembles the following YAML file: apiVersion: machine.openshift.io/v1beta1 kind: MachineHealthCheck metadata: name: example namespace: openshift-machine-api annotations: cluster.x-k8s.io/paused: "" spec: selector: matchLabels: role: worker unhealthyConditions: - type: "Ready" status: "Unknown" timeout: "300s" - type: "Ready" status: "False" timeout: "300s" maxUnhealthy: "40%" status: currentHealthy: 5 expectedMachines: 5 Important Resume the machine health checks after updating the cluster. To resume the check, remove the pause annotation from the MachineHealthCheck resource by running the following command: USD oc -n openshift-machine-api annotate mhc <mhc-name> cluster.x-k8s.io/paused- 6.4.3. Retrieving a release image digest In order to update a cluster in a disconnected environment using the oc adm upgrade command with the --to-image option, you must reference the sha256 digest that corresponds to your targeted release image. Procedure Run the following command on a device that is connected to the internet: USD oc adm release info -o 'jsonpath={.digest}{"\n"}' quay.io/openshift-release-dev/ocp-release:USD{OCP_RELEASE_VERSION}-USD{ARCHITECTURE} For {OCP_RELEASE_VERSION} , specify the version of OpenShift Container Platform to which you want to update, such as 4.10.16 . For {ARCHITECTURE} , specify the architecture of the cluster, such as x86_64 , aarch64 , s390x , or ppc64le . Example output sha256:a8bfba3b6dddd1a2fbbead7dac65fe4fb8335089e4e7cae327f3bad334add31d Copy the sha256 digest for use when updating your cluster. 6.4.4. 
Updating the disconnected cluster Update the disconnected cluster to the OpenShift Container Platform version that you downloaded the release images for. Note If you have a local OpenShift Update Service, you can update by using the connected web console or CLI instructions instead of this procedure. Prerequisites You mirrored the images for the new release to your registry. You applied the release image signature ConfigMap for the new release to your cluster. Note The release image signature config map allows the Cluster Version Operator (CVO) to ensure the integrity of release images by verifying that the actual image signatures match the expected signatures. You obtained the sha256 digest for your targeted release image. You installed the OpenShift CLI ( oc ). You paused all MachineHealthCheck resources. Procedure Update the cluster: USD oc adm upgrade --allow-explicit-upgrade --to-image <defined_registry>/<defined_repository>@<digest> Where: <defined_registry> Specifies the name of the mirror registry you mirrored your images to. <defined_repository> Specifies the name of the image repository you want to use on the mirror registry. <digest> Specifies the sha256 digest for the targeted release image, for example, sha256:81154f5c03294534e1eaf0319bef7a601134f891689ccede5d705ef659aa8c92 . Note See "Mirroring OpenShift Container Platform images" to review how your mirror registry and repository names are defined. If you used an ImageContentSourcePolicy or ImageDigestMirrorSet , you can use the canonical registry and repository names instead of the names you defined. The canonical registry name is quay.io and the canonical repository name is openshift-release-dev/ocp-release . You can only configure global pull secrets for clusters that have an ImageContentSourcePolicy , ImageDigestMirrorSet , or ImageTagMirrorSet object. You cannot add a pull secret to a project. Additional resources Mirroring OpenShift Container Platform images 6.4.5. Understanding image registry repository mirroring Setting up container registry repository mirroring enables you to perform the following tasks: Configure your OpenShift Container Platform cluster to redirect requests to pull images from a repository on a source image registry and have it resolved by a repository on a mirrored image registry. Identify multiple mirrored repositories for each target repository, to make sure that if one mirror is down, another can be used. Repository mirroring in OpenShift Container Platform includes the following attributes: Image pulls are resilient to registry downtimes. Clusters in disconnected environments can pull images from critical locations, such as quay.io, and have registries behind a company firewall provide the requested images. A particular order of registries is tried when an image pull request is made, with the permanent registry typically being the last one tried. The mirror information you enter is added to the /etc/containers/registries.conf file on every node in the OpenShift Container Platform cluster. When a node makes a request for an image from the source repository, it tries each mirrored repository in turn until it finds the requested content. If all mirrors fail, the cluster tries the source repository. If successful, the image is pulled to the node. 
Setting up repository mirroring can be done in the following ways: At OpenShift Container Platform installation: By pulling container images needed by OpenShift Container Platform and then bringing those images behind your company's firewall, you can install OpenShift Container Platform into a data center that is in a disconnected environment. After OpenShift Container Platform installation: If you did not configure mirroring during OpenShift Container Platform installation, you can do so postinstallation by using any of the following custom resource (CR) objects: ImageDigestMirrorSet (IDMS). This object allows you to pull images from a mirrored registry by using digest specifications. The IDMS CR enables you to set a fall back policy that allows or stops continued attempts to pull from the source registry if the image pull fails. ImageTagMirrorSet (ITMS). This object allows you to pull images from a mirrored registry by using image tags. The ITMS CR enables you to set a fall back policy that allows or stops continued attempts to pull from the source registry if the image pull fails. ImageContentSourcePolicy (ICSP). This object allows you to pull images from a mirrored registry by using digest specifications. The ICSP CR always falls back to the source registry if the mirrors do not work. Important Using an ImageContentSourcePolicy (ICSP) object to configure repository mirroring is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. If you have existing YAML files that you used to create ImageContentSourcePolicy objects, you can use the oc adm migrate icsp command to convert those files to an ImageDigestMirrorSet YAML file. For more information, see "Converting ImageContentSourcePolicy (ICSP) files for image registry repository mirroring" in the following section. Each of these custom resource objects identify the following information: The source of the container image repository you want to mirror. A separate entry for each mirror repository you want to offer the content requested from the source repository. For new clusters, you can use IDMS, ITMS, and ICSP CRs objects as desired. However, using IDMS and ITMS is recommended. If you upgraded a cluster, any existing ICSP objects remain stable, and both IDMS and ICSP objects are supported. Workloads using ICSP objects continue to function as expected. However, if you want to take advantage of the fallback policies introduced in the IDMS CRs, you can migrate current workloads to IDMS objects by using the oc adm migrate icsp command as shown in the Converting ImageContentSourcePolicy (ICSP) files for image registry repository mirroring section that follows. Migrating to IDMS objects does not require a cluster reboot. Note If your cluster uses an ImageDigestMirrorSet , ImageTagMirrorSet , or ImageContentSourcePolicy object to configure repository mirroring, you can use only global pull secrets for mirrored registries. You cannot add a pull secret to a project. 6.4.5.1. Configuring image registry repository mirroring You can create postinstallation mirror configuration custom resources (CR) to redirect image pull requests from a source image registry to a mirrored image registry. Prerequisites Access to the cluster as a user with the cluster-admin role. 
Procedure Configure mirrored repositories, by either: Setting up a mirrored repository with Red Hat Quay, as described in Red Hat Quay Repository Mirroring . Using Red Hat Quay allows you to copy images from one repository to another and also automatically sync those repositories repeatedly over time. Using a tool such as skopeo to copy images manually from the source repository to the mirrored repository. For example, after installing the skopeo RPM package on a Red Hat Enterprise Linux (RHEL) 7 or RHEL 8 system, use the skopeo command as shown in this example: USD skopeo copy --all \ docker://registry.access.redhat.com/ubi9/ubi-minimal:latest@sha256:5cf... \ docker://example.io/example/ubi-minimal In this example, you have a container image registry that is named example.io with an image repository named example to which you want to copy the ubi9/ubi-minimal image from registry.access.redhat.com . After you create the mirrored registry, you can configure your OpenShift Container Platform cluster to redirect requests made of the source repository to the mirrored repository. Create a postinstallation mirror configuration CR, by using one of the following examples: Create an ImageDigestMirrorSet or ImageTagMirrorSet CR, as needed, replacing the source and mirrors with your own registry and repository pairs and images: apiVersion: config.openshift.io/v1 1 kind: ImageDigestMirrorSet 2 metadata: name: ubi9repo spec: imageDigestMirrors: 3 - mirrors: - example.io/example/ubi-minimal 4 - example.com/example/ubi-minimal 5 source: registry.access.redhat.com/ubi9/ubi-minimal 6 mirrorSourcePolicy: AllowContactingSource 7 - mirrors: - mirror.example.com/redhat source: registry.example.com/redhat 8 mirrorSourcePolicy: AllowContactingSource - mirrors: - mirror.example.com source: registry.example.com 9 mirrorSourcePolicy: AllowContactingSource - mirrors: - mirror.example.net/image source: registry.example.com/example/myimage 10 mirrorSourcePolicy: AllowContactingSource - mirrors: - mirror.example.net source: registry.example.com/example 11 mirrorSourcePolicy: AllowContactingSource - mirrors: - mirror.example.net/registry-example-com source: registry.example.com 12 mirrorSourcePolicy: AllowContactingSource 1 Indicates the API to use with this CR. This must be config.openshift.io/v1 . 2 Indicates the kind of object according to the pull type: ImageDigestMirrorSet : Pulls a digest reference image. ImageTagMirrorSet : Pulls a tag reference image. 3 Indicates the type of image pull method, either: imageDigestMirrors : Use for an ImageDigestMirrorSet CR. imageTagMirrors : Use for an ImageTagMirrorSet CR. 4 Indicates the name of the mirrored image registry and repository. 5 Optional: Indicates a secondary mirror repository for each target repository. If one mirror is down, the target repository can use the secondary mirror. 6 Indicates the registry and repository source, which is the repository that is referred to in an image pull specification. 7 Optional: Indicates the fallback policy if the image pull fails: AllowContactingSource : Allows continued attempts to pull the image from the source repository. This is the default. NeverContactSource : Prevents continued attempts to pull the image from the source repository. 8 Optional: Indicates a namespace inside a registry, which allows you to use any image in that namespace. If you use a registry domain as a source, the object is applied to all repositories from the registry. 9 Optional: Indicates a registry, which allows you to use any image in that registry. 
If you specify a registry name, the object is applied to all repositories from a source registry to a mirror registry. 10 Pulls the image registry.example.com/example/myimage@sha256:... from the mirror mirror.example.net/image@sha256:.. . 11 Pulls the image registry.example.com/example/image@sha256:... in the source registry namespace from the mirror mirror.example.net/image@sha256:... . 12 Pulls the image registry.example.com/myimage@sha256 from the mirror registry example.net/registry-example-com/myimage@sha256:... . Create an ImageContentSourcePolicy custom resource, replacing the source and mirrors with your own registry and repository pairs and images: apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: name: mirror-ocp spec: repositoryDigestMirrors: - mirrors: - mirror.registry.com:443/ocp/release 1 source: quay.io/openshift-release-dev/ocp-release 2 - mirrors: - mirror.registry.com:443/ocp/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev 1 Specifies the name of the mirror image registry and repository. 2 Specifies the online registry and repository containing the content that is mirrored. Create the new object: USD oc create -f registryrepomirror.yaml After the object is created, the Machine Config Operator (MCO) drains the nodes for ImageTagMirrorSet objects only. The MCO does not drain the nodes for ImageDigestMirrorSet and ImageContentSourcePolicy objects. To check that the mirrored configuration settings are applied, do the following on one of the nodes. List your nodes: USD oc get node Example output NAME STATUS ROLES AGE VERSION ip-10-0-137-44.ec2.internal Ready worker 7m v1.30.3 ip-10-0-138-148.ec2.internal Ready master 11m v1.30.3 ip-10-0-139-122.ec2.internal Ready master 11m v1.30.3 ip-10-0-147-35.ec2.internal Ready worker 7m v1.30.3 ip-10-0-153-12.ec2.internal Ready worker 7m v1.30.3 ip-10-0-154-10.ec2.internal Ready master 11m v1.30.3 Start the debugging process to access the node: USD oc debug node/ip-10-0-147-35.ec2.internal Example output Starting pod/ip-10-0-147-35ec2internal-debug ... To use host binaries, run `chroot /host` Change your root directory to /host : sh-4.2# chroot /host Check the /etc/containers/registries.conf file to make sure the changes were made: sh-4.2# cat /etc/containers/registries.conf The following output represents a registries.conf file where postinstallation mirror configuration CRs were applied. The final two entries are marked digest-only and tag-only respectively. 
Example output unqualified-search-registries = ["registry.access.redhat.com", "docker.io"] short-name-mode = "" [[registry]] prefix = "" location = "registry.access.redhat.com/ubi9/ubi-minimal" 1 [[registry.mirror]] location = "example.io/example/ubi-minimal" 2 pull-from-mirror = "digest-only" 3 [[registry.mirror]] location = "example.com/example/ubi-minimal" pull-from-mirror = "digest-only" [[registry]] prefix = "" location = "registry.example.com" [[registry.mirror]] location = "mirror.example.net/registry-example-com" pull-from-mirror = "digest-only" [[registry]] prefix = "" location = "registry.example.com/example" [[registry.mirror]] location = "mirror.example.net" pull-from-mirror = "digest-only" [[registry]] prefix = "" location = "registry.example.com/example/myimage" [[registry.mirror]] location = "mirror.example.net/image" pull-from-mirror = "digest-only" [[registry]] prefix = "" location = "registry.example.com" [[registry.mirror]] location = "mirror.example.com" pull-from-mirror = "digest-only" [[registry]] prefix = "" location = "registry.example.com/redhat" [[registry.mirror]] location = "mirror.example.com/redhat" pull-from-mirror = "digest-only" [[registry]] prefix = "" location = "registry.access.redhat.com/ubi9/ubi-minimal" blocked = true 4 [[registry.mirror]] location = "example.io/example/ubi-minimal-tag" pull-from-mirror = "tag-only" 5 1 Indicates the repository that is referred to in a pull spec. 2 Indicates the mirror for that repository. 3 Indicates that the image pull from the mirror is a digest reference image. 4 Indicates that the NeverContactSource parameter is set for this repository. 5 Indicates that the image pull from the mirror is a tag reference image. Pull an image to the node from the source and check if it is resolved by the mirror. sh-4.2# podman pull --log-level=debug registry.access.redhat.com/ubi9/ubi-minimal@sha256:5cf... Troubleshooting repository mirroring If the repository mirroring procedure does not work as described, use the following information about how repository mirroring works to help troubleshoot the problem. The first working mirror is used to supply the pulled image. The main registry is only used if no other mirror works. From the system context, the Insecure flags are used as fallback. The format of the /etc/containers/registries.conf file has changed recently. It is now version 2 and in TOML format. 6.4.5.2. Converting ImageContentSourcePolicy (ICSP) files for image registry repository mirroring Using an ImageContentSourcePolicy (ICSP) object to configure repository mirroring is a deprecated feature. This functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. ICSP objects are being replaced by ImageDigestMirrorSet and ImageTagMirrorSet objects to configure repository mirroring. If you have existing YAML files that you used to create ImageContentSourcePolicy objects, you can use the oc adm migrate icsp command to convert those files to an ImageDigestMirrorSet YAML file. The command updates the API to the current version, changes the kind value to ImageDigestMirrorSet , and changes spec.repositoryDigestMirrors to spec.imageDigestMirrors . The rest of the file is not changed. Because the migration does not change the registries.conf file, the cluster does not need to reboot. 
For more information about ImageDigestMirrorSet or ImageTagMirrorSet objects, see "Configuring image registry repository mirroring" in the section. Prerequisites Access to the cluster as a user with the cluster-admin role. Ensure that you have ImageContentSourcePolicy objects on your cluster. Procedure Use the following command to convert one or more ImageContentSourcePolicy YAML files to an ImageDigestMirrorSet YAML file: USD oc adm migrate icsp <file_name>.yaml <file_name>.yaml <file_name>.yaml --dest-dir <path_to_the_directory> where: <file_name> Specifies the name of the source ImageContentSourcePolicy YAML. You can list multiple file names. --dest-dir Optional: Specifies a directory for the output ImageDigestMirrorSet YAML. If unset, the file is written to the current directory. For example, the following command converts the icsp.yaml and icsp-2.yaml file and saves the new YAML files to the idms-files directory. USD oc adm migrate icsp icsp.yaml icsp-2.yaml --dest-dir idms-files Example output wrote ImageDigestMirrorSet to idms-files/imagedigestmirrorset_ubi8repo.5911620242173376087.yaml wrote ImageDigestMirrorSet to idms-files/imagedigestmirrorset_ubi9repo.6456931852378115011.yaml Create the CR object by running the following command: USD oc create -f <path_to_the_directory>/<file-name>.yaml where: <path_to_the_directory> Specifies the path to the directory, if you used the --dest-dir flag. <file_name> Specifies the name of the ImageDigestMirrorSet YAML. Remove the ICSP objects after the IDMS objects are rolled out. 6.4.6. Widening the scope of the mirror image catalog to reduce the frequency of cluster node reboots You can scope the mirrored image catalog at the repository level or the wider registry level. A widely scoped ImageContentSourcePolicy resource reduces the number of times the nodes need to reboot in response to changes to the resource. To widen the scope of the mirror image catalog in the ImageContentSourcePolicy resource, perform the following procedure. Prerequisites Install the OpenShift Container Platform CLI oc . Log in as a user with cluster-admin privileges. Configure a mirrored image catalog for use in your disconnected cluster. Procedure Run the following command, specifying values for <local_registry> , <pull_spec> , and <pull_secret_file> : USD oc adm catalog mirror <local_registry>/<pull_spec> <local_registry> -a <pull_secret_file> --icsp-scope=registry where: <local_registry> is the local registry you have configured for your disconnected cluster, for example, local.registry:5000 . <pull_spec> is the pull specification as configured in your disconnected registry, for example, redhat/redhat-operator-index:v4.17 <pull_secret_file> is the registry.redhat.io pull secret in .json file format. You can download the pull secret from Red Hat OpenShift Cluster Manager . The oc adm catalog mirror command creates a /redhat-operator-index-manifests directory and generates imageContentSourcePolicy.yaml , catalogSource.yaml , and mapping.txt files. 
Apply the new ImageContentSourcePolicy resource to the cluster: USD oc apply -f imageContentSourcePolicy.yaml Verification Verify that oc apply successfully applied the change to ImageContentSourcePolicy : USD oc get ImageContentSourcePolicy -o yaml Example output apiVersion: v1 items: - apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {"apiVersion":"operator.openshift.io/v1alpha1","kind":"ImageContentSourcePolicy","metadata":{"annotations":{},"name":"redhat-operator-index"},"spec":{"repositoryDigestMirrors":[{"mirrors":["local.registry:5000"],"source":"registry.redhat.io"}]}} ... After you update the ImageContentSourcePolicy resource, OpenShift Container Platform deploys the new settings to each node and the cluster starts using the mirrored repository for requests to the source repository. 6.4.7. Additional resources Using Operator Lifecycle Manager in disconnected environments Machine Config Overview 6.5. Uninstalling the OpenShift Update Service from a cluster To remove a local copy of the OpenShift Update Service (OSUS) from your cluster, you must first delete the OSUS application and then uninstall the OSUS Operator. 6.5.1. Deleting an OpenShift Update Service application You can delete an OpenShift Update Service application by using the OpenShift Container Platform web console or CLI. 6.5.1.1. Deleting an OpenShift Update Service application by using the web console You can use the OpenShift Container Platform web console to delete an OpenShift Update Service application by using the OpenShift Update Service Operator. Prerequisites The OpenShift Update Service Operator has been installed. Procedure In the web console, click Operators Installed Operators . Choose OpenShift Update Service from the list of installed Operators. Click the Update Service tab. From the list of installed OpenShift Update Service applications, select the application to be deleted and then click Delete UpdateService . From the Delete UpdateService? confirmation dialog, click Delete to confirm the deletion. 6.5.1.2. Deleting an OpenShift Update Service application by using the CLI You can use the OpenShift CLI ( oc ) to delete an OpenShift Update Service application. Procedure Get the OpenShift Update Service application name using the namespace the OpenShift Update Service application was created in, for example, openshift-update-service : USD oc get updateservice -n openshift-update-service Example output NAME AGE service 6s Delete the OpenShift Update Service application using the NAME value from the step and the namespace the OpenShift Update Service application was created in, for example, openshift-update-service : USD oc delete updateservice service -n openshift-update-service Example output updateservice.updateservice.operator.openshift.io "service" deleted 6.5.2. Uninstalling the OpenShift Update Service Operator You can uninstall the OpenShift Update Service Operator by using the OpenShift Container Platform web console or CLI. 6.5.2.1. Uninstalling the OpenShift Update Service Operator by using the web console You can use the OpenShift Container Platform web console to uninstall the OpenShift Update Service Operator. Prerequisites All OpenShift Update Service applications have been deleted. Procedure In the web console, click Operators Installed Operators . Select OpenShift Update Service from the list of installed Operators and click Uninstall Operator . From the Uninstall Operator? 
confirmation dialog, click Uninstall to confirm the uninstallation. 6.5.2.2. Uninstalling the OpenShift Update Service Operator by using the CLI You can use the OpenShift CLI ( oc ) to uninstall the OpenShift Update Service Operator. Prerequisites All OpenShift Update Service applications have been deleted. Procedure Change to the project containing the OpenShift Update Service Operator, for example, openshift-update-service : USD oc project openshift-update-service Example output Now using project "openshift-update-service" on server "https://example.com:6443". Get the name of the OpenShift Update Service Operator operator group: USD oc get operatorgroup Example output NAME AGE openshift-update-service-fprx2 4m41s Delete the operator group, for example, openshift-update-service-fprx2 : USD oc delete operatorgroup openshift-update-service-fprx2 Example output operatorgroup.operators.coreos.com "openshift-update-service-fprx2" deleted Get the name of the OpenShift Update Service Operator subscription: USD oc get subscription Example output NAME PACKAGE SOURCE CHANNEL update-service-operator update-service-operator updateservice-index-catalog v1 Using the Name value from the step, check the current version of the subscribed OpenShift Update Service Operator in the currentCSV field: USD oc get subscription update-service-operator -o yaml | grep " currentCSV" Example output currentCSV: update-service-operator.v0.0.1 Delete the subscription, for example, update-service-operator : USD oc delete subscription update-service-operator Example output subscription.operators.coreos.com "update-service-operator" deleted Delete the CSV for the OpenShift Update Service Operator using the currentCSV value from the step: USD oc delete clusterserviceversion update-service-operator.v0.0.1 Example output clusterserviceversion.operators.coreos.com "update-service-operator.v0.0.1" deleted
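As an optional follow-up, you can confirm that no OpenShift Update Service resources remain. The commands below are a sketch that assumes the same openshift-update-service namespace used throughout this section; both queries should return no OSUS-related entries once the uninstallation is complete:
# Verify that the ClusterServiceVersion and Subscription have been removed
oc get clusterserviceversions -n openshift-update-service
oc get subscription -n openshift-update-service
# Optionally, delete the now-unused namespace if nothing else depends on it
oc delete namespace openshift-update-service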
[ "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "cat ./pull-secret | jq . > <path>/<pull_secret_file_in_json> 1", "{ \"auths\": { \"cloud.openshift.com\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"quay.io\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"registry.connect.redhat.com\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" }, \"registry.redhat.io\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" } } }", "mkdir -p <directory_name>", "cp <path>/<pull_secret_file_in_json> <directory_name>/<auth_file>", "echo -n '<user_name>:<password>' | base64 -w0 1", "BGVtbYk3ZHAtqXs=", "\"auths\": { \"<mirror_registry>\": { 1 \"auth\": \"<credentials>\", 2 \"email\": \"[email protected]\" } },", "{ \"auths\": { \"registry.example.com\": { \"auth\": \"BGVtbYk3ZHAtqXs=\", \"email\": \"[email protected]\" }, \"cloud.openshift.com\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"quay.io\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"registry.connect.redhat.com\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" }, \"registry.redhat.io\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" } } }", "export OCP_RELEASE=<release_version>", "LOCAL_REGISTRY='<local_registry_host_name>:<local_registry_host_port>'", "LOCAL_REPOSITORY='<local_repository_name>'", "LOCAL_RELEASE_IMAGES_REPOSITORY='<local_release_images_repository_name>'", "PRODUCT_REPO='openshift-release-dev'", "LOCAL_SECRET_JSON='<path_to_pull_secret>'", "RELEASE_NAME=\"ocp-release\"", "ARCHITECTURE=<cluster_architecture> 1", "REMOVABLE_MEDIA_PATH=<path> 1", "oc adm release mirror -a USD{LOCAL_SECRET_JSON} --to-dir=USD{REMOVABLE_MEDIA_PATH}/mirror quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --dry-run", "oc adm release mirror -a USD{LOCAL_SECRET_JSON} --to-dir=USD{REMOVABLE_MEDIA_PATH}/mirror quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE}", "oc image mirror -a USD{LOCAL_SECRET_JSON} --from-dir=USD{REMOVABLE_MEDIA_PATH}/mirror \"file://openshift/release:USD{OCP_RELEASE}*\" USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} 1", "oc apply -f USD{REMOVABLE_MEDIA_PATH}/mirror/config/<image_signature_file> 1", "oc image mirror -a USD{LOCAL_SECRET_JSON} USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE} USD{LOCAL_REGISTRY}/USD{LOCAL_RELEASE_IMAGES_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}", "oc adm release mirror -a USD{LOCAL_SECRET_JSON} --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} --apply-release-image-signature", "oc image mirror -a USD{LOCAL_SECRET_JSON} USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE} USD{LOCAL_REGISTRY}/USD{LOCAL_RELEASE_IMAGES_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}", "apiVersion: v1 kind: ConfigMap metadata: name: my-registry-ca data: updateservice-registry: | 1 -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- registry-with-port.example.com..5000: | 2 -----BEGIN CERTIFICATE----- -----END CERTIFICATE-----", "oc get secret/pull-secret -n openshift-config --template='{{index .data \".dockerconfigjson\" | base64decode}}' ><pull_secret_location> 1", "oc registry login --registry=\"<registry>\" \\ 1 --auth-basic=\"<username>:<password>\" \\ 2 --to=<pull_secret_location> 3", "oc set data 
secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=<pull_secret_location> 1", "apiVersion: v1 kind: Namespace metadata: name: openshift-update-service annotations: openshift.io/node-selector: \"\" labels: openshift.io/cluster-monitoring: \"true\" 1", "oc create -f <filename>.yaml", "oc create -f update-service-namespace.yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: update-service-operator-group namespace: openshift-update-service spec: targetNamespaces: - openshift-update-service", "oc -n openshift-update-service create -f <filename>.yaml", "oc -n openshift-update-service create -f update-service-operator-group.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: update-service-subscription namespace: openshift-update-service spec: channel: v1 installPlanApproval: \"Automatic\" source: \"redhat-operators\" 1 sourceNamespace: \"openshift-marketplace\" name: \"cincinnati-operator\"", "oc create -f <filename>.yaml", "oc -n openshift-update-service create -f update-service-subscription.yaml", "oc -n openshift-update-service get clusterserviceversions", "NAME DISPLAY VERSION REPLACES PHASE update-service-operator.v4.6.0 OpenShift Update Service 4.6.0 Succeeded", "FROM registry.access.redhat.com/ubi9/ubi:latest RUN curl -L -o cincinnati-graph-data.tar.gz https://api.openshift.com/api/upgrades_info/graph-data RUN mkdir -p /var/lib/cincinnati-graph-data && tar xvzf cincinnati-graph-data.tar.gz -C /var/lib/cincinnati-graph-data/ --no-overwrite-dir --no-same-owner CMD [\"/bin/bash\", \"-c\" ,\"exec cp -rp /var/lib/cincinnati-graph-data/* /var/lib/cincinnati/graph-data\"]", "podman build -f ./Dockerfile -t registry.example.com/openshift/graph-data:latest", "podman push registry.example.com/openshift/graph-data:latest", "NAMESPACE=openshift-update-service", "NAME=service", "RELEASE_IMAGES=registry.example.com/ocp4/openshift4-release-images", "GRAPH_DATA_IMAGE=registry.example.com/openshift/graph-data:latest", "oc -n \"USD{NAMESPACE}\" create -f - <<EOF apiVersion: updateservice.operator.openshift.io/v1 kind: UpdateService metadata: name: USD{NAME} spec: replicas: 2 releases: USD{RELEASE_IMAGES} graphDataImage: USD{GRAPH_DATA_IMAGE} EOF", "while sleep 1; do POLICY_ENGINE_GRAPH_URI=\"USD(oc -n \"USD{NAMESPACE}\" get -o jsonpath='{.status.policyEngineURI}/api/upgrades_info/v1/graph{\"\\n\"}' updateservice \"USD{NAME}\")\"; SCHEME=\"USD{POLICY_ENGINE_GRAPH_URI%%:*}\"; if test \"USD{SCHEME}\" = http -o \"USD{SCHEME}\" = https; then break; fi; done", "while sleep 10; do HTTP_CODE=\"USD(curl --header Accept:application/json --output /dev/stderr --write-out \"%{http_code}\" \"USD{POLICY_ENGINE_GRAPH_URI}?channel=stable-4.6\")\"; if test \"USD{HTTP_CODE}\" -eq 200; then break; fi; echo \"USD{HTTP_CODE}\"; done", "NAMESPACE=openshift-update-service", "NAME=service", "POLICY_ENGINE_GRAPH_URI=\"USD(oc -n \"USD{NAMESPACE}\" get -o jsonpath='{.status.policyEngineURI}/api/upgrades_info/v1/graph{\"\\n\"}' updateservice \"USD{NAME}\")\"", "PATCH=\"{\\\"spec\\\":{\\\"upstream\\\":\\\"USD{POLICY_ENGINE_GRAPH_URI}\\\"}}\"", "oc patch clusterversion version -p USDPATCH --type merge", "oc get machinehealthcheck -n openshift-machine-api", "oc -n openshift-machine-api annotate mhc <mhc-name> cluster.x-k8s.io/paused=\"\"", "apiVersion: machine.openshift.io/v1beta1 kind: MachineHealthCheck metadata: name: example namespace: openshift-machine-api annotations: cluster.x-k8s.io/paused: \"\" spec: selector: matchLabels: role: worker 
unhealthyConditions: - type: \"Ready\" status: \"Unknown\" timeout: \"300s\" - type: \"Ready\" status: \"False\" timeout: \"300s\" maxUnhealthy: \"40%\" status: currentHealthy: 5 expectedMachines: 5", "oc -n openshift-machine-api annotate mhc <mhc-name> cluster.x-k8s.io/paused-", "oc adm release info -o 'jsonpath={.digest}{\"\\n\"}' quay.io/openshift-release-dev/ocp-release:USD{OCP_RELEASE_VERSION}-USD{ARCHITECTURE}", "sha256:a8bfba3b6dddd1a2fbbead7dac65fe4fb8335089e4e7cae327f3bad334add31d", "oc adm upgrade --allow-explicit-upgrade --to-image <defined_registry>/<defined_repository>@<digest>", "skopeo copy --all docker://registry.access.redhat.com/ubi9/ubi-minimal:latest@sha256:5cf... docker://example.io/example/ubi-minimal", "apiVersion: config.openshift.io/v1 1 kind: ImageDigestMirrorSet 2 metadata: name: ubi9repo spec: imageDigestMirrors: 3 - mirrors: - example.io/example/ubi-minimal 4 - example.com/example/ubi-minimal 5 source: registry.access.redhat.com/ubi9/ubi-minimal 6 mirrorSourcePolicy: AllowContactingSource 7 - mirrors: - mirror.example.com/redhat source: registry.example.com/redhat 8 mirrorSourcePolicy: AllowContactingSource - mirrors: - mirror.example.com source: registry.example.com 9 mirrorSourcePolicy: AllowContactingSource - mirrors: - mirror.example.net/image source: registry.example.com/example/myimage 10 mirrorSourcePolicy: AllowContactingSource - mirrors: - mirror.example.net source: registry.example.com/example 11 mirrorSourcePolicy: AllowContactingSource - mirrors: - mirror.example.net/registry-example-com source: registry.example.com 12 mirrorSourcePolicy: AllowContactingSource", "apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: name: mirror-ocp spec: repositoryDigestMirrors: - mirrors: - mirror.registry.com:443/ocp/release 1 source: quay.io/openshift-release-dev/ocp-release 2 - mirrors: - mirror.registry.com:443/ocp/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev", "oc create -f registryrepomirror.yaml", "oc get node", "NAME STATUS ROLES AGE VERSION ip-10-0-137-44.ec2.internal Ready worker 7m v1.30.3 ip-10-0-138-148.ec2.internal Ready master 11m v1.30.3 ip-10-0-139-122.ec2.internal Ready master 11m v1.30.3 ip-10-0-147-35.ec2.internal Ready worker 7m v1.30.3 ip-10-0-153-12.ec2.internal Ready worker 7m v1.30.3 ip-10-0-154-10.ec2.internal Ready master 11m v1.30.3", "oc debug node/ip-10-0-147-35.ec2.internal", "Starting pod/ip-10-0-147-35ec2internal-debug To use host binaries, run `chroot /host`", "sh-4.2# chroot /host", "sh-4.2# cat /etc/containers/registries.conf", "unqualified-search-registries = [\"registry.access.redhat.com\", \"docker.io\"] short-name-mode = \"\" [[registry]] prefix = \"\" location = \"registry.access.redhat.com/ubi9/ubi-minimal\" 1 [[registry.mirror]] location = \"example.io/example/ubi-minimal\" 2 pull-from-mirror = \"digest-only\" 3 [[registry.mirror]] location = \"example.com/example/ubi-minimal\" pull-from-mirror = \"digest-only\" [[registry]] prefix = \"\" location = \"registry.example.com\" [[registry.mirror]] location = \"mirror.example.net/registry-example-com\" pull-from-mirror = \"digest-only\" [[registry]] prefix = \"\" location = \"registry.example.com/example\" [[registry.mirror]] location = \"mirror.example.net\" pull-from-mirror = \"digest-only\" [[registry]] prefix = \"\" location = \"registry.example.com/example/myimage\" [[registry.mirror]] location = \"mirror.example.net/image\" pull-from-mirror = \"digest-only\" [[registry]] prefix = \"\" location = \"registry.example.com\" 
[[registry.mirror]] location = \"mirror.example.com\" pull-from-mirror = \"digest-only\" [[registry]] prefix = \"\" location = \"registry.example.com/redhat\" [[registry.mirror]] location = \"mirror.example.com/redhat\" pull-from-mirror = \"digest-only\" [[registry]] prefix = \"\" location = \"registry.access.redhat.com/ubi9/ubi-minimal\" blocked = true 4 [[registry.mirror]] location = \"example.io/example/ubi-minimal-tag\" pull-from-mirror = \"tag-only\" 5", "sh-4.2# podman pull --log-level=debug registry.access.redhat.com/ubi9/ubi-minimal@sha256:5cf", "oc adm migrate icsp <file_name>.yaml <file_name>.yaml <file_name>.yaml --dest-dir <path_to_the_directory>", "oc adm migrate icsp icsp.yaml icsp-2.yaml --dest-dir idms-files", "wrote ImageDigestMirrorSet to idms-files/imagedigestmirrorset_ubi8repo.5911620242173376087.yaml wrote ImageDigestMirrorSet to idms-files/imagedigestmirrorset_ubi9repo.6456931852378115011.yaml", "oc create -f <path_to_the_directory>/<file-name>.yaml", "oc adm catalog mirror <local_registry>/<pull_spec> <local_registry> -a <pull_secret_file> --icsp-scope=registry", "oc apply -f imageContentSourcePolicy.yaml", "oc get ImageContentSourcePolicy -o yaml", "apiVersion: v1 items: - apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {\"apiVersion\":\"operator.openshift.io/v1alpha1\",\"kind\":\"ImageContentSourcePolicy\",\"metadata\":{\"annotations\":{},\"name\":\"redhat-operator-index\"},\"spec\":{\"repositoryDigestMirrors\":[{\"mirrors\":[\"local.registry:5000\"],\"source\":\"registry.redhat.io\"}]}}", "oc get updateservice -n openshift-update-service", "NAME AGE service 6s", "oc delete updateservice service -n openshift-update-service", "updateservice.updateservice.operator.openshift.io \"service\" deleted", "oc project openshift-update-service", "Now using project \"openshift-update-service\" on server \"https://example.com:6443\".", "oc get operatorgroup", "NAME AGE openshift-update-service-fprx2 4m41s", "oc delete operatorgroup openshift-update-service-fprx2", "operatorgroup.operators.coreos.com \"openshift-update-service-fprx2\" deleted", "oc get subscription", "NAME PACKAGE SOURCE CHANNEL update-service-operator update-service-operator updateservice-index-catalog v1", "oc get subscription update-service-operator -o yaml | grep \" currentCSV\"", "currentCSV: update-service-operator.v0.0.1", "oc delete subscription update-service-operator", "subscription.operators.coreos.com \"update-service-operator\" deleted", "oc delete clusterserviceversion update-service-operator.v0.0.1", "clusterserviceversion.operators.coreos.com \"update-service-operator.v0.0.1\" deleted" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/disconnected_environments/updating-a-cluster-in-a-disconnected-environment
14.2.2. Adding Unallocated Volumes to a Volume Group
14.2.2. Adding Unallocated Volumes to a Volume Group Once initialized, a volume will be listed in the 'Unallocated Volumes' list. The figure below illustrates an unallocated partition (Partition 3). The respective buttons at the bottom of the window allow you to: create a new volume group, add the unallocated volume to an existing volume group, remove the volume from LVM. To add the volume to an existing volume group, click on the Add to Existing Volume Group button. Figure 14.7. Unallocated Volumes Clicking on the Add to Existing Volume Group button will display a pop-up window listing the existing volume groups to which you can add the physical volume you are about to add. A volume group may span one or more hard disks. Example 14.3. Add a physical volume to volume group In this example only one volume group exists, as illustrated below. Once added to an existing volume group, the new physical volume is automatically added to the unused space of the selected volume group. You can use the unused space to: create a new logical volume (click on the Create New Logical Volume(s) button), select one of the existing logical volumes and increase the extents (see Section 14.2.6, "Extending a Volume Group" ), select an existing logical volume and remove it from the volume group by clicking on the Remove Selected Logical Volume(s) button. You cannot select unused space to perform this operation. The figure below illustrates the logical view of 'VolGroup00' after adding the new physical volume. Figure 14.8. Logical view of volume group In the figure below, the previously uninitialized entities (partitions 3, 5, 6 and 7) were added to 'VolGroup00'. Figure 14.9. Logical view of volume group
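The same operation can also be performed from the command line with the LVM tools, which is useful on systems without a graphical environment. This is a minimal sketch only; run it as root, and note that the partition name /dev/sda3 and the volume group name VolGroup00 are assumptions taken from the example above and must be replaced with your own values:
# Initialize the partition as a physical volume (if not already initialized)
pvcreate /dev/sda3
# Add the unallocated physical volume to the existing volume group
vgextend VolGroup00 /dev/sda3
# Confirm that the volume group now contains the new physical volume and additional free space
vgdisplay VolGroup00
pvs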
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/storage_administration_guide/s1-system-config-lvm-add-unallocated
8. Security
8. Security 8.1. Technology Previews OpenSCAP OpenSCAP is a set of open source libraries that support the Security Content Automation Protocol (SCAP) standards from the National Institute of Standards and Technology (NIST). OpenSCAP supports the following SCAP components: Common Vulnerabilities and Exposures (CVE) Common Platform Enumeration (CPE) Common Configuration Enumeration (CCE) Common Vulnerability Scoring System (CVSS) Open Vulnerability and Assessment Language (OVAL) Extensible Configuration Checklist Description Format (XCCDF) Additionally, the openscap package includes an application to generate SCAP reports about system configuration. This package is considered a Technology Preview in Red Hat Enterprise Linux 6. TPM TPM hardware can create, store, and use RSA keys securely (without the keys ever being exposed in memory), verify a platform's software state using cryptographic hashes, and more. The user space packages, trousers and tpm-tools , are considered a Technology Preview in Red Hat Enterprise Linux 6.
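As an illustration of the report-generation application mentioned above, the oscap command-line utility (typically provided by the openscap-utils package) can evaluate an XCCDF checklist and write an HTML report. This is a hedged sketch: the profile ID and the path to the SCAP content file are placeholders that depend on the content installed on your system:
# Evaluate an XCCDF profile and produce machine-readable results plus an HTML report
oscap xccdf eval --profile <profile_id> --results results.xml --report report.html <scap_content>.xml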
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.0_technical_notes/ar01s08
Appendix J. BlueStore configuration options
Appendix J. BlueStore configuration options The following are Ceph BlueStore configuration options that can be configured during deployment. Note This list is not complete. rocksdb_cache_size Description The size of the RocksDB cache in MB. Type 32-bit Integer Default 512
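A hedged sketch of how this option might be applied follows. The [osd] section shown would live in the Ceph configuration file (for example, /etc/ceph/ceph.conf), the OSD ID osd.0 is a placeholder, and the value simply repeats the default from the table above, interpreted in MB as described there:
[osd]
rocksdb_cache_size = 512

# Check the value in effect for a running OSD daemon through its admin socket (run on the OSD node)
ceph daemon osd.0 config get rocksdb_cache_size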
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/4/html/configuration_guide/bluestore-configuration-options_conf
6.11. Affinity Groups
6.11. Affinity Groups Affinity groups help you determine where selected virtual machines run in relation to each other and specified hosts. This capability helps manage workload scenarios such as licensing requirements, high-availability workloads, and disaster recovery. The VM Affinity Rule When you create an affinity group, you select the virtual machines that belong to the group. To define where these virtual machines can run in relation to each other , you enable a VM Affinity Rule : A positive rule tries to run the virtual machines together on a single host; a negative affinity rule tries to run the virtual machines apart on separate hosts. If the rule cannot be fulfilled, the outcome depends on whether the weight or filter module is enabled. The Host Affinity Rule Optionally, you can add hosts to the affinity group. To define where virtual machines in the group can run in relation to hosts in the group , you enable a Host Affinity Rule : A positive rule tries to run the virtual machines on hosts in the affinity group; a negative affinity rule tries to run the virtual machines on hosts that are not in the affinity group. If the rule cannot be fulfilled, the outcome depends on whether the weight or filter module is enabled. The Default Weight Module By default, both rules apply the weight module in the cluster's scheduling policy. With the weight module, the scheduler attempts to fulfill a rule, but allows the virtual machines in the affinity group to run anyway if the rule cannot be fulfilled. For example, with a positive VM Affinity Rule and the weight module enabled, the scheduler tries to run all of the affinity group's virtual machines on a single host. However, if a single host does not have sufficient resources for this, the scheduler runs the virtual machines on multiple hosts. For this module to work, the weight module section of the scheduling policies must contain the VmAffinityGroups and VmToHostsAffinityGroups keywords. The Enforcing Option and Filter Module Both rules have an Enforcing option which applies the filter module in the cluster's scheduling policy. The filter module overrides the weight module. With the filter module enabled, the scheduler requires that a rule be fulfilled. If a rule cannot be fulfilled, the filter module prevents the virtual machines in the affinity group from running. For example, with a positive Host Affinity Rule and Enforcing enabled (the filter module enabled), the scheduler requires the affinity group's virtual machines to run on hosts that are part of the affinity group. However, if those hosts are down, the scheduler does not run the virtual machines at all. For this module to work, the filter module section of the scheduling policies must contain the VmAffinityGroups and VmToHostsAffinityGroups keywords. Examples To see how these rules and options can be used with one another, see Section 6.11.4, "Affinity Groups Examples" . Warning An affinity label is functionally the same as an affinity group with a positive Host Affinity Rule and Enforcing enabled. For affinity labels to work, the filter module section of the scheduling policies must contain Label . If an affinity group and affinity label conflict with each other, the affected virtual machines do not run. To help prevent, troubleshoot, and resolve conflicts, see Section 6.11.5, "Affinity Groups Troubleshooting" . Important Each rule is affected by the weight and filter modules in the cluster's scheduling policy. 
For the VM Affinity Rule rule to work, the scheduling policy must have the VmAffinityGroups keyword in its Weight module and Filter module sections. For the Host Affinity Rule to work, the scheduling policy must have the VmToHostsAffinityGroups keyword in its Weight module and Filter module sections. For more information, see Scheduling Policies in the Administration Guide . Note Affinity groups apply to virtual machines on the cluster level. Moving a virtual machine from one cluster to another removes it from the affinity groups in the original cluster. Virtual machines do not have to restart for the affinity group rules to take effect. 6.11.1. Creating an Affinity Group You can create new affinity groups in the Administration Portal. Creating Affinity Groups Click Compute Virtual Machines and select a virtual machine. Click the virtual machine's name to go to the details view. Click the Affinity Groups tab. Click New . Enter a Name and Description for the affinity group. From the VM Affinity Rule drop-down, select Positive to apply positive affinity or Negative to apply negative affinity. Select Disable to disable the affinity rule. Select the Enforcing check box to apply hard enforcement, or ensure this check box is cleared to apply soft enforcement. Use the drop-down list to select the virtual machines to be added to the affinity group. Use the + and - buttons to add or remove additional virtual machines. Click OK . 6.11.2. Editing an Affinity Group Editing Affinity Groups Click Compute Virtual Machines and select a virtual machine. Click the virtual machine's name to go to the details view. Click the Affinity Groups tab. Click Edit . Change the VM Affinity Rule drop-down and Enforcing check box to the preferred values and use the + and - buttons to add or remove virtual machines to or from the affinity group. Click OK . 6.11.3. Removing an Affinity Group Removing Affinity Groups Click Compute Virtual Machines and select a virtual machine. Click the virtual machine's name to go to the details view. Click the Affinity Groups tab. Click Remove . Click OK . The affinity policy that applied to the virtual machines that were members of that affinity group no longer applies. 6.11.4. Affinity Groups Examples The following examples illustrate how to apply affinity rules for various scenarios, using the different features of the affinity group capability described in this chapter. Example 6.1. High Availability Dalia is the DevOps engineer for a startup. For high availability, a particular system's two virtual machines should run on separate hosts anywhere in the cluster. Dalia creates an affinity group named "high availability" and does the following: Adds the two virtual machines, VM01 and VM02 , to the affinity group. Sets VM Affinity to Negative so the virtual machines try to run on separate hosts. Leaves Enforcing unchecked (disabled) so both virtual machines can continue running in case only one host is available during an outage. Leaves the Hosts list empty so the virtual machines run on any host in the cluster. Example 6.2. Performance Sohni is a software developer who uses two virtual machines to build and test his software many times each day. There is heavy network traffic between these two virtual machines. Running the machines on the same host reduces both network traffic and the effects of network latency on the build and test process. Using high-specification hosts (faster CPUs, SSDs, and more memory) further accelerates this process. 
Sohni creates an affinity group called "build and test" and does the following: Adds VM01 and VM02 , the build and test virtual machines, to the affinity group. Adds the high-specification hosts, host03 , host04 , and host05 , to the affinity group. Sets VM affinity to Positive so the virtual machines try to run on the same host, reducing network traffic and latency effects. Sets Host affinity to Positive so the virtual machines try to run on the high-specification hosts, accelerating the process. Leaves Enforcing unchecked (disabled) for both rules so the virtual machines can run if the high-specification hosts are not available. Example 6.3. Licensing Bandile, a software asset manager, helps his organization comply with the restrictive licensing requirements of a 3D imaging software vendor. These terms require the virtual machines for its licensing server, VM-LS , and imaging workstations, VM-WS # , to run on the same host. Additionally, the physical CPU-based licensing model requires that the workstations run on either of two GPU-equipped hosts, host-gpu-primary or host-gpu-backup . To meet these requirements, Bandile creates an affinity group called "3D seismic imaging" and does the following: Adds the previously mentioned virtual machines and hosts to the affinity group. Sets VM affinity to Positive and selects Enforcing so the licensing server and workstations must run together on one of the hosts, not on multiple hosts. Sets Host affinity to Positive and selects Enforcing so the virtual machines must run on either of the GPU-equipped hosts, not on other hosts in the cluster. 6.11.5. Affinity Groups Troubleshooting To help prevent problems with affinity groups Plan and document the scenarios and outcomes you expect when using affinity groups. Verify and test the outcomes under a range of conditions. Follow change management best practices. Only use the Enforcing option if it is required. If you observe problems with virtual machines not running Verify that the cluster has a scheduling policy whose weight module and filter module sections contain VmAffinityGroups and VmToHostsAffinityGroups . For more information, see Explanation of Settings in the New Scheduling Policy and Edit Scheduling Policy Window in the Administration Guide . Check for conflicts between affinity labels and affinity groups. For possible conflicts between affinity labels and affinity groups Understand that an affinity label is the equivalent of an affinity group with a Host affinity rule that is Positive and has Enforcing enabled. Understand that if an affinity label and affinity group conflict with each other, the virtual machines in the intersecting set do not run. Determine whether a conflict is possible: Inspect the filter module section of the cluster's scheduling policies. These must contain both a Label keyword and a VmAffinityGroups OR VmToHostsAffinityGroups keyword. Otherwise, a conflict is not possible . (The presence of VmAffinityGroups and VmToHostsAffinityGroups in the weight module section does not matter because Label in a filter module section would override them.) Inspect the affinity groups. They must contain a rule that has Enforcing enabled. Otherwise, a conflict is not possible . If a conflict is possible, identify the set of virtual machines that might be involved: Inspect the affinity labels and groups. Make a list of virtual machines that are members of both an affinity label and an affinity group with an Enforcing option enabled.
For each host and virtual machine in this intersecting set, analyze the conditions under which a potential conflict occurs. Determine whether the actual non-running virtual machines match the ones in the analysis. Finally, restructure the affinity groups and affinity labels to help avoid unintended conflicts. Verify that any changes produce the expected results under a range of conditions. If you have overlapping affinity groups and affinity labels, it can be easier to view them in one place as affinity groups. Consider converting an affinity label into an equivalent affinity group, which has a Host affinity rule with Positive selected and Enforcing enabled.
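When troubleshooting, it can also help to inspect the relevant objects outside the Administration Portal. The following curl sketch uses the Red Hat Virtualization REST API; the engine FQDN, credentials, and cluster ID are placeholders, and the endpoint paths should be confirmed against the REST API guide for your version:
# List the scheduling policies so you can review their filter and weight modules
curl -s -k -u admin@internal:<password> -H 'Accept: application/xml' https://<engine_fqdn>/ovirt-engine/api/schedulingpolicies
# List the affinity groups defined for a specific cluster
curl -s -k -u admin@internal:<password> -H 'Accept: application/xml' https://<engine_fqdn>/ovirt-engine/api/clusters/<cluster_id>/affinitygroups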
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/virtual_machine_management_guide/sect-Affinity_Groups
probe::ioscheduler_trace.elv_completed_request
probe::ioscheduler_trace.elv_completed_request Name probe::ioscheduler_trace.elv_completed_request - Fires when a request is completed. Synopsis ioscheduler_trace.elv_completed_request Values elevator_name The type of I/O elevator currently enabled. rq Address of the request. name Name of the probe point. rq_flags Request flags. disk_minor Disk minor number of the request. disk_major Disk major number of the request. Description Fires when a request is completed.
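A brief usage sketch follows; run it as root with the kernel debuginfo packages required by SystemTap installed. The format string is only an assumption about how you might want to display the documented variables:
# Print the elevator name, device numbers, and request flags each time a request completes
stap -e 'probe ioscheduler_trace.elv_completed_request { printf("%s %d:%d flags=0x%x\n", elevator_name, disk_major, disk_minor, rq_flags) }'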
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-ioscheduler-trace-elv-completed-request
Updating OpenShift Data Foundation
Updating OpenShift Data Foundation Red Hat OpenShift Data Foundation 4.14 Instructions for cluster and storage administrators regarding upgrading Red Hat Storage Documentation Team Abstract This document explains how to update versions of Red Hat OpenShift Data Foundation.
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/updating_openshift_data_foundation/index
22.4. Understanding the Drift File
22.4. Understanding the Drift File The drift file is used to store the frequency offset between the system clock running at its nominal frequency and the frequency required to remain in synchronization with UTC. If present, the value contained in the drift file is read at system start and used to correct the clock source. Use of the drift file reduces the time required to achieve a stable and accurate time. The value is calculated, and the drift file replaced, once per hour by ntpd . The drift file is replaced, rather than just updated, and for this reason the drift file must be in a directory for which ntpd has write permissions.
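As an illustration, on a default Red Hat Enterprise Linux 6 installation the drift file location is set in /etc/ntp.conf and the file is stored under /var/lib/ntp . The following commands are a quick sketch for checking the configured location, the current frequency offset (in parts per million), and the directory permissions; adjust the paths if your configuration differs.

# Show the configured drift file location (the default is /var/lib/ntp/drift).
grep driftfile /etc/ntp.conf
# Display the current frequency offset recorded by ntpd, in PPM.
cat /var/lib/ntp/drift
# Confirm that the directory is writable by the user that runs ntpd.
ls -ld /var/lib/ntp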
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s1-Understanding_the_Drift_File
Chapter 21. Supportability and Maintenance
Chapter 21. Supportability and Maintenance ABRT 2.1 Red Hat Enterprise Linux 7 includes the Automatic Bug Reporting Tool ( ABRT ) 2.1, which features an improved user interface and the ability to send mReports , lightweight anonymous problem reports suitable for machine processing, such as gathering crash statistics. The set of supported languages, for which ABRT is capable of detecting problems, has been extended with the addition of Java and Ruby in ABRT 2.1. In order to use ABRT , ensure that the abrt-desktop or the abrt-cli package is installed on your system. The abrt-desktop package provides a graphical user interface for ABRT , and the abrt-cli package contains a tool for using ABRT on the command line. You can also install both. To install the package containing the graphical user interface for ABRT , run the following command as the root user: To install the package that provides the command line ABRT tool, use the following command: Note that while both of the above commands cause the main ABRT system to be installed, you may need to install additional packages to obtain support for detecting crashes in software programmed using various languages. See the Automatic Bug Reporting Tool (ABRT) chapter of the Red Hat Enterprise Linux 7 System Administrator's Guide for information on additional packages available with the ABRT system. Upon installation, the abrtd daemon, which is the core of the ABRT crash-detecting service, is configured to start at boot time. You can use the following command to verify its current status: In order to discover as many software bugs as possible, administrators should configure ABRT to automatically send reports of application crashes to Red Hat. To enable the autoreporting feature, issue the following command as root : Additional Information on ABRT Red Hat Enterprise Linux 7 System Administrator's Guide - The Automatic Bug Reporting Tool (ABRT) chapter of the Administrator's Guide for Red Hat Enterprise Linux 7 contains detailed information on installing, configuring, and using the ABRT service.
[ "~]# yum install abrt-desktop", "~]# yum install abrt-cli", "~]USD systemctl is-active abrtd.service active", "~]# abrt-auto-reporting enabled" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.0_release_notes/chap-red_hat_enterprise_linux-7.0_release_notes-supportability_and_maintenance
Chapter 3. ProjectRequest [project.openshift.io/v1]
Chapter 3. ProjectRequest [project.openshift.io/v1] Description ProjectRequest is the set of options necessary to fully qualify a project request. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 3.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources description string Description is the description to apply to a project displayName string DisplayName is the display name to apply to a project kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta_v2 metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 3.2. API endpoints The following API endpoints are available: /apis/project.openshift.io/v1/projectrequests GET : list objects of kind ProjectRequest POST : create a ProjectRequest 3.2.1. /apis/project.openshift.io/v1/projectrequests HTTP method GET Description list objects of kind ProjectRequest Table 3.1. HTTP responses HTTP code Response body 200 - OK Status_v6 schema 401 - Unauthorized Empty HTTP method POST Description create a ProjectRequest Table 3.2. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.3. Body parameters Parameter Type Description body ProjectRequest schema Table 3.4. HTTP responses HTTP code Response body 200 - OK ProjectRequest schema 201 - Created ProjectRequest schema 202 - Accepted ProjectRequest schema 401 - Unauthorized Empty
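As an illustration of the POST endpoint above, a ProjectRequest can be submitted from the command line. This is a sketch only; the project name, display name, and description are example values, and in practice the oc new-project command performs the same request.

# Submit a ProjectRequest to /apis/project.openshift.io/v1/projectrequests.
oc create -f - <<'EOF'
apiVersion: project.openshift.io/v1
kind: ProjectRequest
metadata:
  name: example-project
displayName: Example Project
description: Project created through the ProjectRequest endpoint
EOF

# Equivalent convenience command:
oc new-project example-project --display-name="Example Project" \
  --description="Project created through the ProjectRequest endpoint"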
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/project_apis/projectrequest-project-openshift-io-v1
Chapter 17. Storage
Chapter 17. Storage Support added in LVM for RAID level takeover LVM now provides full support for RAID takeover, previously available as a Technology Preview, which allows users to convert a RAID logical volume from one RAID level to another. This release expands the number of RAID takeover combinations. Support for some transitions may require intermediate steps. New RAID types that are added by means of RAID takeover are not supported in older released kernel versions; these RAID types are raid0, raid0_meta, raid5_n, and raid6_{ls,rs,la,ra,n}_6. Users creating those RAID types or converting to those RAID types on Red Hat Enterprise Linux 7.4 cannot activate the logical volumes on systems running releases. RAID takeover is available only on top-level logical volumes in single machine mode (that is, takeover is not available for cluster volume groups or while the RAID is under a snapshot or part of a thin pool). (BZ# 1366296 ) LVM now supports RAID reshaping LVM now provides support for RAID reshaping. While takeover allows users to change from one RAID type to another, reshaping allows users to change properties such as the RAID algorithm, stripe size, region size, or number of images. For example, a user can change a 3-way stripe to a 5-way stripe by adding two additional devices. Reshaping is available only on top-level logical volumes in single machine mode, and only while the logical volume is not in-use (for example, when it is mounted by a file system). (BZ# 1191935 , BZ#834579, BZ# 1191978 , BZ# 1392947 ) Device Mapper linear devices now support DAX Direct Access (DAX) support has been added to the dm-linear and dm-stripe targets. Multiple Non-Volatile Dual In-line Memory Module (NVDIMM) devices can now be combined to provide larger persistent memory (PMEM) block devices. (BZ#1384648) libstoragemgmt rebased to version 1.4.0 The libstoragemgmt packages have been upgraded to upstream version 1.4.0, which provides a number of bug fixes and enhancements over the version. Notably, the following libraries have been added: Query serial number of local disk: lsm_local_disk_serial_num_get()/lsm.LocalDisk.serial_num_get() Query LED status of local disk: lsm_local_disk_led_status_get()/lsm.LocalDisk.led_status_get() Query link speed of local disk: lsm_local_disk_link_speed_get()/lsm.LocalDisk.link_speed_get() Notable bug fixes include: The megaraid plug-in for the Dell PowerEdge RAID Controller (PERC) has been fixed. The local disk rotation speed query on the NVM Express (NVMe) disk has been fixed. lsmcli incorrect error handling on a local disk query has been fixed. All gcc compile warnings have been fixed. The obsolete usage of the autoconf AC_OUTPUT macro has been fixed. (BZ# 1403142 ) mpt3sas updated to version 15.100.00.00 The mpt3sas storage driver has been updated to version 15.100.00.00, which adds support for new devices. Contact your vendor for more details. (BZ#1306453) The lpfc_no_hba_reset module parameter for the lpfc driver is now available With this update, the lpfc driver for certain models of Emulex Fibre Channel Host Bus Adapters (HBAs) has been enhanced by adding the lpfc_no_hba_reset module parameter. This parameter accepts a list of one or more hexadecimal world-wide port numbers (WWPNs) of HBAs that are not reset during SCSI error handling. Now, lpfc allows you to control which ports on the HBA may be reset during SCSI error handling time. Also, lpfc now allows you to set the eh_deadline parameter, which represents an upper limit of the SCSI error handling time. 
(BZ#1366564) LVM now detects Veritas Dynamic Multi-Pathing systems and no longer accesses the underlying device paths directly For LVM to work correctly with Veritas Dynamic Multi-Pathing, you must set obtain_device_list_from_udev to 0 in the devices section of the configuration file /etc/lvm/lvm.conf . These multi-pathed devices are not exposed through the standard udev interfaces and so without this setting LVM will be unaware of their existence. (BZ#1346280) The libnvdimm kernel subsystem now supports PMEM subdivision Intel's Non-Volatile Dual In-line Memory Module (NVDIMM) label specification has been extended to allow more than one Persistent Memory (PMEM) namespace to be configured per region (interleave set). The kernel shipped with Red Hat Enterprise Linux 7.4 has been modified to support these new configurations. Without subdivision support, a single region could previously be used in only one mode: pmem , device dax , or sector . With this update, a single region can be subdivided, and each subdivision can be configured independently of the others. (BZ#1383827) Warning messages when multipathd is not running Users now get warning messages if they run a multipath command that creates or lists multipath devices while multipathd is not running. If multipathd is not running, then the devices are not able to restore paths that have failed or react to changes in the device setup. The multipathd daemon now prints a warning message if there are multipath devices and multipathd is not running. (BZ# 1359510 ) c library interface added to multipathd to give structured output Users can now use the libdmmp library to get structured information from multipathd. Other programs that want to get information from multipathd can now get this information without running a command and parsing the results. (BZ# 1430097 ) New remove retries multipath configuration value If a multipath device is temporarily in use when multipath tries to remove it, the remove will fail. It is now possible to control the number of times that the multipath command will retry removing a multipath device that is busy by setting the remove_retries configuration value. The default value is 0, in which case multipath will not retry failed removes. (BZ# 1368211 ) New multipathd reset multipaths stats commands Multipath now supports two new multipathd commands: multipathd reset multipaths stats and multipathd reset multipath dev stats . These commands reset the device stats that multipathd tracks for all the devices, or the specified device, respectively. This allows users to reset their device stats after they make changes to them. (BZ# 1416569 ) New disable_changed_wwids mulitpath configuration parameter Multipath now supports a new multipath.conf defaults section parameter, disable_changed_wwids . Setting this will make multipathd notice when a path device changes its wwid while in use, and will disable access to the path device until its wwid returns to its value. When the wwid of a scsi device changes, it is often a sign that the device has been remapped to a different LUN. If this happens while the scsi device is in use, it can lead to data corruption. Setting the disable_changed_wwids parameter will warn users when the scsi device changes its wwid. In many cases multipathd will disable access to the path device as soon as it gets unmapped from its original LUN, removing the possibility of corruption. 
However multipathd is not always able to catch the change before the scsi device has been remapped, meaning there may still be a window for corruption. Remapping in-use scsi devices is not currently supported. (BZ#1169168) Updated built-in configuration for HPE 3PAR array The built-in configuration for the 3PAR array now sets no_path_retry to 12. (BZ#1279355) Added built-in configuration for NFINIDAT InfiniBox.* devices Multipath now autoconfigures NFINIDAT InfiniBox.* devices (BZ#1362409) device-mapper-multipath now supports the max_sectors_kb configuration parameter With this update, device-mapper-multipath provides a new max_sectors_kb parameter in the defaults, devices, and multipaths sections of the multipath.conf file. The max_sectors_kb parameter allows you to set the max_sectors_kb device queue parameter to the specified value on all underlying paths of a multipath device before the multipath device is first activated. When a multipath device is created, the device inherits the max_sectors_kb value from the path devices. Manually raising this value for the multipath device or lowering this value for the path devices can cause multipath to create I/O operations larger than the path devices allow. Using the max_sectors_kb multipath.conf parameter is an easy way to set these values before a multipath device is created on top of the path devices, and prevent invalid-sized I/O operations from being passed down. (BZ#1394059) New detect_checker multipath configuration parameter Some devices, such as the VNX2, can be optionally configured in ALUA mode. In this mode, they need to use a different path_checker and prioritizer than in their non-ALUA mode. Multipath now supports the detect_checker parameter in the multipath.conf defaults and devices sections. If this is set, multipath will detect if a device supports ALUA, and if so, it will override the configured path_checker and use the TUR checker instead. The detect_checker option allows devices with an optional ALUA mode to be correctly autoconfigured, regardless of what mode they are in. (BZ#1372032) Multipath now has a built-in default configuration for Nimble Storage devices The multipath default hardware table now includes an entry for Nimble Storage arrays. (BZ# 1406226 ) LVM supports reducing the size of a RAID logical volume As of Red Hat Enterprise Linux 7,4, you can use the lvreduce or lvresize command to reduce the size of a RAID logical volume. (BZ# 1394048 ) iprutils rebased to version 2.4.14 The iprutils packages have been upgraded to upstream version 2.4.14, which provides a number of bug fixes and enhancements over the version. Notably: Endian swapped device_id is now compatible with earlier versions. VSET write cache in bare metal mode is now allowed. Creating RAIDS on dual adapter setups has been fixed. Verifying rebuilds for single adapter configurations is now disabled by default. (BZ#1384382) mdadm rebased to version 4.0 The mdadm packages have been upgraded to upstream version 4.0, which provides a number of bug fixes and enhancements over the version. Notably, this update adds bad block management support for Intel Matrix Storage Manager (IMSM) metadata. The features included in this update are supported on external metadata formats, and Red Hat continues supporting the Intel Rapid Storage Technology enterprise (Intel RSTe) software stack. 
(BZ#1380017) LVM extends the size of a thin pool logical volume when a thin pool fills over 50 percent When a thin pool logical volume fills by more than 50 percent, by default the dmeventd thin plugin now calls the dmeventd thin_command command with every 5 percent increase. This resizes the thin pool when it has been filled above the configured thin_pool_autoextend_threshold in the activation section of the configuration file. A user may override this default by configuring an external command and specifying this command as the value of thin_command in the dmeventd section of the lvm.conf file. For information on the thin plugin and on configuring external commands to maintain a thin pool, see the dmeventd(8) man page. In releases, when a thin pool resize failed, the dmeventd plugin would try to unmount unconditionally all thin volumes associated with the thin pool when a compile-time defined threshold of more than 95 percent was reached. The dmeventd plugin, by default, no longer unmounts any volumes. Reproducing the logic requires configuring an external script. (BZ# 1442992 ) LVM now supports dm-cache metadata version 2 LVM/DM cache has been significantly improved. It provides support for larger cache sizes, better adaptation to changing workloads, greatly improved startup and shutdown times, and higher performance overall. Version 2 of the dm-cache metadata format is now the default when creating cache logical volumes with LVM. Version 1 will continue to be supported for previously created LVM cache logical volumes. Upgrading to version 2 will require the removal of the old cache layer and the creation of a new cache layer. (BZ# 1436748 ) Support for DIF/DIX (T10 PI) on specified hardware SCSI T10 DIF/DIX is fully supported in Red Hat Enterprise Linux 7.4, provided that the hardware vendor has qualified it and provides full support for the particular HBA and storage array configuration. DIF/DIX is not supported on other configurations, it is not supported for use on the boot device, and it is not supported on virtualized guests. At the current time, the following vendors are known to provide this support. FUJITSU supports DIF and DIX on: EMULEX 16G FC HBA: EMULEX LPe16000/LPe16002, 10.2.254.0 BIOS, 10.4.255.23 FW, with: FUJITSU ETERNUS DX100 S3, DX200 S3, DX500 S3, DX600 S3, DX8100 S3, DX8700 S3, DX8900 S3, DX200F, DX60 S3, AF250, AF650 QLOGIC 16G FC HBA: QLOGIC QLE2670/QLE2672, 3.28 BIOS, 8.00.00 FW, with: FUJITSU ETERNUS DX100 S3, DX200 S3, DX500 S3, DX600 S3, DX8100 S3, DX8700 S3, DX8900 S3, DX200F, DX60 S3 Note that T10 DIX requires database or some other software that provides generation and verification of checksums on disk blocks. No currently supported Linux file systems have this capability. EMC supports DIF on: EMULEX 8G FC HBA: LPe12000-E and LPe12002-E with firmware 2.01a10 or later, with: EMC VMAX3 Series with Enginuity 5977; EMC Symmetrix VMAX Series with Enginuity 5876.82.57 and later EMULEX 16G FC HBA: LPe16000B-E and LPe16002B-E with firmware 10.0.803.25 or later, with: EMC VMAX3 Series with Enginuity 5977; EMC Symmetrix VMAX Series with Enginuity 5876.82.57 and later QLOGIC 16G FC HBA: QLE2670-E-SP and QLE2672-E-SP, with: EMC VMAX3 Series with Enginuity 5977; EMC Symmetrix VMAX Series with Enginuity 5876.82.57 and later Please refer to the hardware vendor's support information for the latest status. Support for DIF/DIX remains in Technology Preview for other HBAs and storage arrays. 
(BZ#1457907) The dmstats facility can now track the statistics for files that change Previously, the dmstats facility was able to report statistics for files that did not change in size. It now has the ability to watch files for changes and update its mappings to track file I/O even as the file changes in size (or fills holes that may be in the file). (BZ# 1378956 ) Support for thin snapshots of cached logical volumes LVM in Red Hat Enterprise Linux 7.4 allows you to create thin snapshots of cached logical volumes. This feature was not available in earlier releases. These external origin cached logical volumes are converted to a read-only state and thus can be used by different thin pools. (BZ# 1189108 ) New package: nvmetcli The nvmetcli utility enables you to configure Red Hat Enterprise Linux as an NVMEoF target, using the NVME-over-RDMA fabric type. With nvmetcli , you can configure nvmet interactively, or use a JSON file to save and restore the configuration. (BZ#1383837) Device DAX is now available for NVDIMM devices Device DAX enables users like hypervisors and databases to have raw access to persistent memory without an intervening file system. In particular, Device DAX allows applications to have predictable fault granularities and the ability to flush data to the persistence domain from user space. Starting with Red Hat Enterprise Linux 7.4, Device Dax is available for Non-Volatile Dual In-line Memory Module (NVDIMM) devices. (BZ#1383489)
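As an illustration of the Device DAX support described above, the ndctl utility can create a devdax namespace on an NVDIMM region. This is a sketch only; the region name and the resulting device node depend on the hardware, so check the output of ndctl list first.

# List the available NVDIMM regions.
ndctl list --regions
# Create a Device DAX namespace on an example region (region0 is a placeholder).
ndctl create-namespace --mode=devdax --region=region0
# The resulting character device appears as /dev/daxX.Y.
ls /dev/dax*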
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.4_release_notes/new_features_storage
Chapter 3. Known Issues
Chapter 3. Known Issues This section documents unexpected behavior known to affect Red Hat Hyperconverged Infrastructure for Virtualization (RHHI for Virtualization). BZ#1851114 - Error message device path is not a valid name for this device is shown When a logical volume (LV) name exceeds 55 characters, which is a limitation of python-blivet, an error message like ValueError: gluster_thinpool_gluster_vg_<WWID> is not a valid name for this device is seen in the vdsm.log and supervdsm.log files. To work around this issue, follow these steps: Rename the volume group (VG): Rename the thinpool: BZ#1853995 - Updating storage domain gives error dialog box While replacing the primary volfile during host replacement to update the storage domain via the Administrator Portal, the portal gives an operation canceled dialog box. However, in the backend the values get updated. BZ#1855945 - RHHI for Virtualization deployment fails using multipath configuration and lvm cache During the deployment of RHHI for Virtualization with multipath device names, volume groups (VG) and logical volumes (LV) are created with the suffix of the WWID, leading to LV names longer than 128 characters. This results in failure of LV cache creation. To work around this issue, follow these steps: When initiating RHHI for Virtualization deployment with multipath device names such as /dev/mapper/<WWID> , replace the VG and thinpool suffix with the last 4 digits of the WWID as follows: During deployment from the web console, provide a multipath device name as /dev/mapper/<WWID> for bricks. Click to generate an inventory file. Log in to the deployment node via SSH. Find the <WWID> with LVM components: For all WWIDs, replace the WWID with the last 4 digits of the WWID. Continue deployment from the web console. BZ#1856577 - Shared storage volume fails to mount in IPv6 environment When gluster volumes are created with the gluster_shared_storage option during the deployment of RHHI for Virtualization using IPv6 addresses, the mount option is not added in the fstab file. As a result, the shared storage fails to mount. To work around this issue, add the mount option xlator-option=transport.address-family=inet6 in the fstab file. BZ#1856594 - Fails to create VDO enabled gluster volume with day2 operation from web console Virtual Disk Optimization (VDO) enabled gluster volume creation with a day2 operation fails from the web console. To work around this issue, modify the playbook vdo_create.yml at /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/vdo_create.yml and change the ansible tasks as follows: BZ#1858197 - Pending self-heal on the volume, post the bricks are online In dual network configurations (one for gluster and the other for ovirt management), Automatic File Replication (AFR) healer threads are not spawned with the restart of the self-heal daemon, resulting in pending self-heal entries in the volume. To work around this issue, follow these steps: Change the hostname to the other network FQDN using the command: Start the heal using the command: BZ#1554241 - Cache volumes must be manually attached to asymmetric brick configurations When bricks are configured asymmetrically, and a logical cache volume is configured, the cache volume is attached to only one brick. This is because the current implementation of asymmetric brick configuration creates a separate volume group and thin pool for each device, so asymmetric brick configurations would require a cache volume per device. However, this would use a large number of cache devices, and is not currently possible to configure using the Web Console. 
To work around this issue, first remove any cache volumes that have been applied to an asymmetric brick set. Then, follow the instructions in Configuring a logical cache volume to create a logical cache volume manually. BZ#1690820 - Create volume populates host field with IP address not FQDN When you create a new volume using the Web Console using the Create Volume button, the value for hosts is populated from gluster peer list, and the first host is an IP address instead of an FQDN. As part of volume creation, this value is passed to an FQDN validation process, which fails with an IP address. To work around this issue, edit the generated variable file and manually insert the FQDN instead of the IP address. BZ#1506680 - Disk properties not cleaned correctly on reinstall The installer cannot clean some kinds of metadata from existing logical volumes. This means that reinstalling a hyperconverged host fails unless the disks have been manually cleared beforehand. To work around this issue, run the following commands to remove all data and metadata from disks attached to the physical machine. Warning Back up any data that you want to keep before running these commands, as these commands completely remove all data and metadata on all disks.
[ "vgrename gluster_vg_<WWID> gluster_vg_<last-4-digit-WWID>", "lvrename gluster_vg_<last-4-digit-WWID> gluster_thinpool_gluster_vg_<WWID> gluster_thinpool_gluster_vg_<last-4-digit-WWID>", "grep vg /etc/ansible/hc_wizard_inventory.yml", "sed -i 's/<WWID>/<last-4-digit-WWID>/g' /etc/ansible/hc_wizard_inventory.yml", "- name: Restart all VDO volumes shell: \"vdo stop -n {{item.name}} && vdo start -n {{item.name}}\" with_items: \"{{ gluster_infra_vdo }}\"", "hostnamectl set-hostname <other-network-FQDN>", "gluster volume heal <volname>", "lvconvert --uncache volume_group/logical_cache_volume", "pvremove /dev/* --force -y for disk in USD(ls /dev/{sd*,nv*}); do wipefs -a -f USDdisk; done" ]
https://docs.redhat.com/en/documentation/red_hat_hyperconverged_infrastructure_for_virtualization/1.8/html/1.8_release_notes/known-issues-180
Chapter 1. Release notes for the Red Hat build of OpenTelemetry
Chapter 1. Release notes for the Red Hat build of OpenTelemetry 1.1. Red Hat build of OpenTelemetry overview Red Hat build of OpenTelemetry is based on the open source OpenTelemetry project , which aims to provide unified, standardized, and vendor-neutral telemetry data collection for cloud-native software. Red Hat build of OpenTelemetry product provides support for deploying and managing the OpenTelemetry Collector and simplifying the workload instrumentation. The OpenTelemetry Collector can receive, process, and forward telemetry data in multiple formats, making it the ideal component for telemetry processing and interoperability between telemetry systems. The Collector provides a unified solution for collecting and processing metrics, traces, and logs. The OpenTelemetry Collector has a number of features including the following: Data Collection and Processing Hub It acts as a central component that gathers telemetry data like metrics and traces from various sources. This data can be created from instrumented applications and infrastructure. Customizable telemetry data pipeline The OpenTelemetry Collector is designed to be customizable. It supports various processors, exporters, and receivers. Auto-instrumentation features Automatic instrumentation simplifies the process of adding observability to applications. Developers don't need to manually instrument their code for basic telemetry data. Here are some of the use cases for the OpenTelemetry Collector: Centralized data collection In a microservices architecture, the Collector can be deployed to aggregate data from multiple services. Data enrichment and processing Before forwarding data to analysis tools, the Collector can enrich, filter, and process this data. Multi-backend receiving and exporting The Collector can receive and send data to multiple monitoring and analysis platforms simultaneously. You can use the Red Hat build of OpenTelemetry in combination with the Red Hat OpenShift distributed tracing platform (Tempo) . Note Only supported features are documented. Undocumented features are currently unsupported. If you need assistance with a feature, contact Red Hat's support. 1.2. Release notes for Red Hat build of OpenTelemetry 3.5 The Red Hat build of OpenTelemetry 3.5 is provided through the Red Hat build of OpenTelemetry Operator 0.119.0 . Note The Red Hat build of OpenTelemetry 3.5 is based on the open source OpenTelemetry release 0.119.0. 1.2.1. New features and enhancements This update introduces the following enhancements: The following Technology Preview features reach General Availability: Host Metrics Receiver Kubelet Stats Receiver With this update, the OpenTelemetry Collector uses the OTLP HTTP Exporter to push logs to a LokiStack instance. With this update, the Operator automatically creates RBAC rules for the Kubernetes Events Receiver ( k8sevents ), Kubernetes Cluster Receiver ( k8scluster ), and Kubernetes Objects Receiver ( k8sobjects ) if the Operator has sufficient permissions. For more information, see "Creating the required RBAC resources automatically" in Configuring the Collector . 1.2.2. Deprecated functionality In the Red Hat build of OpenTelemetry 3.5, the Loki Exporter, which is a temporary Technology Preview feature, is deprecated. The Loki Exporter is planned to be removed in the Red Hat build of OpenTelemetry 3.6. If you currently use the Loki Exporter for the OpenShift Logging 6.1 or later, replace the Loki Exporter with the OTLP HTTP Exporter. 
Important The Loki Exporter is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 1.2.3. Bug fixes This update introduces the following bug fix: Before this update, manually created routes for the Collector services were unintentionally removed when the Operator pod was restarted. With this update, restarting the Operator pod does not result in the removal of the manually created routes. 1.3. Release notes for Red Hat build of OpenTelemetry 3.4 The Red Hat build of OpenTelemetry 3.4 is provided through the Red Hat build of OpenTelemetry Operator 0.113.0 . The Red Hat build of OpenTelemetry 3.4 is based on the open source OpenTelemetry release 0.113.0. 1.3.1. Technology Preview features This update introduces the following Technology Preview features: OpenTelemetry Protocol (OTLP) JSON File Receiver Count Connector Important Each of these features is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 1.3.2. New features and enhancements This update introduces the following enhancements: The following Technology Preview features reach General Availability: BearerTokenAuth Extension Kubernetes Attributes Processor Spanmetrics Connector You can use the instrumentation.opentelemetry.io/inject-sdk annotation with the Instrumentation custom resource to enable injection of the OpenTelemetry SDK environment variables into multi-container pods. 1.3.3. Removal notice In the Red Hat build of OpenTelemetry 3.4, the Logging Exporter has been removed from the Collector. As an alternative, you must use the Debug Exporter instead. Warning If you have the Logging Exporter configured, upgrading to the Red Hat build of OpenTelemetry 3.4 will cause crash loops. To avoid such issues, you must configure the Red Hat build of OpenTelemetry to use the Debug Exporter instead of the Logging Exporter before upgrading to the Red Hat build of OpenTelemetry 3.4. In the Red Hat build of OpenTelemetry 3.4, the Technology Preview Memory Ballast Extension has been removed. As an alternative, you can use the GOMEMLIMIT environment variable instead. 1.4. Release notes for Red Hat build of OpenTelemetry 3.3.1 The Red Hat build of OpenTelemetry is provided through the Red Hat build of OpenTelemetry Operator. The Red Hat build of OpenTelemetry 3.3.1 is based on the open source OpenTelemetry release 0.107.0. 1.4.1. Bug fixes This update introduces the following bug fix: Before this update, injection of the NGINX auto-instrumentation failed when copying the instrumentation libraries into the application container. With this update, the copy command is configured correctly, which fixes the issue. 
( TRACING-4673 ) 1.5. Release notes for Red Hat build of OpenTelemetry 3.3 The Red Hat build of OpenTelemetry is provided through the Red Hat build of OpenTelemetry Operator. The Red Hat build of OpenTelemetry 3.3 is based on the open source OpenTelemetry release 0.107.0. 1.5.1. CVEs This release fixes the following CVEs: CVE-2024-6104 CVE-2024-42368 1.5.2. Technology Preview features This update introduces the following Technology Preview features: Group-by-Attributes Processor Transform Processor Routing Connector Prometheus Remote Write Exporter Exporting logs to the LokiStack log store Important Each of these features is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 1.5.3. New features and enhancements This update introduces the following enhancements: Collector dashboard for the internal Collector metrics and analyzing Collector health and performance. ( TRACING-3768 ) Support for automatically reloading certificates in both the OpenTelemetry Collector and instrumentation. ( TRACING-4186 ) 1.5.4. Bug fixes This update introduces the following bug fixes: Before this update, the ServiceMonitor object was failing to scrape operator metrics due to missing permissions for accessing the metrics endpoint. With this update, this issue is fixed by creating the ServiceMonitor custom resource when operator monitoring is enabled. ( TRACING-4288 ) Before this update, the Collector service and the headless service were both monitoring the same endpoints, which caused duplication of metrics collection and ServiceMonitor objects. With this update, this issue is fixed by not creating the headless service. ( OBSDA-773 ) 1.6. Release notes for Red Hat build of OpenTelemetry 3.2.2 The Red Hat build of OpenTelemetry is provided through the Red Hat build of OpenTelemetry Operator. 1.6.1. CVEs This release fixes the following CVEs: CVE-2023-2953 CVE-2024-28182 1.6.2. Bug fixes This update introduces the following bug fix: Before this update, secrets were perpetually generated on OpenShift Container Platform 4.16 because the operator tried to reconcile a new openshift.io/internal-registry-pull-secret-ref annotation for service accounts, causing a loop. With this update, the operator ignores this new annotation. ( TRACING-4435 ) 1.7. Release notes for Red Hat build of OpenTelemetry 3.2.1 The Red Hat build of OpenTelemetry is provided through the Red Hat build of OpenTelemetry Operator. 1.7.1. CVEs This release fixes the following CVEs: CVE-2024-25062 Upstream CVE-2024-36129 1.7.2. New features and enhancements This update introduces the following enhancement: Red Hat build of OpenTelemetry 3.2.1 is based on the open source OpenTelemetry release 0.102.1. 1.8. Release notes for Red Hat build of OpenTelemetry 3.2 The Red Hat build of OpenTelemetry is provided through the Red Hat build of OpenTelemetry Operator. 1.8.1. 
Technology Preview features This update introduces the following Technology Preview features: Host Metrics Receiver OIDC Auth Extension Kubernetes Cluster Receiver Kubernetes Events Receiver Kubernetes Objects Receiver Load-Balancing Exporter Kubelet Stats Receiver Cumulative to Delta Processor Forward Connector Journald Receiver Filelog Receiver File Storage Extension Important Each of these features is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 1.8.2. New features and enhancements This update introduces the following enhancement: Red Hat build of OpenTelemetry 3.2 is based on the open source OpenTelemetry release 0.100.0. 1.8.3. Deprecated functionality In Red Hat build of OpenTelemetry 3.2, use of empty values and null keywords in the OpenTelemetry Collector custom resource is deprecated and planned to be unsupported in a future release. Red Hat will provide bug fixes and support for this syntax during the current release lifecycle, but this syntax will become unsupported. As an alternative to empty values and null keywords, you can update the OpenTelemetry Collector custom resource to contain empty JSON objects as open-closed braces {} instead. 1.8.4. Bug fixes This update introduces the following bug fix: Before this update, the checkbox to enable Operator monitoring was not available in the web console when installing the Red Hat build of OpenTelemetry Operator. As a result, a ServiceMonitor resource was not created in the openshift-opentelemetry-operator namespace. With this update, the checkbox appears for the Red Hat build of OpenTelemetry Operator in the web console so that Operator monitoring can be enabled during installation. ( TRACING-3761 ) 1.9. Release notes for Red Hat build of OpenTelemetry 3.1.1 The Red Hat build of OpenTelemetry is provided through the Red Hat build of OpenTelemetry Operator. 1.9.1. CVEs This release fixes CVE-2023-39326 . 1.10. Release notes for Red Hat build of OpenTelemetry 3.1 The Red Hat build of OpenTelemetry is provided through the Red Hat build of OpenTelemetry Operator. 1.10.1. Technology Preview features This update introduces the following Technology Preview feature: The target allocator is an optional component of the OpenTelemetry Operator that shards Prometheus receiver scrape targets across the deployed fleet of OpenTelemetry Collector instances. The target allocator provides integration with the Prometheus PodMonitor and ServiceMonitor custom resources. Important The target allocator is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 1.10.2. 
New features and enhancements This update introduces the following enhancement: Red Hat build of OpenTelemetry 3.1 is based on the open source OpenTelemetry release 0.93.0. 1.11. Release notes for Red Hat build of OpenTelemetry 3.0 1.11.1. New features and enhancements This update introduces the following enhancements: Red Hat build of OpenTelemetry 3.0 is based on the open source OpenTelemetry release 0.89.0. The OpenShift distributed tracing data collection Operator is renamed as the Red Hat build of OpenTelemetry Operator . Support for the ARM architecture. Support for the Prometheus receiver for metrics collection. Support for the Kafka receiver and exporter for sending traces and metrics to Kafka. Support for cluster-wide proxy environments. The Red Hat build of OpenTelemetry Operator creates the Prometheus ServiceMonitor custom resource if the Prometheus exporter is enabled. The Operator enables the Instrumentation custom resource that allows injecting upstream OpenTelemetry auto-instrumentation libraries. 1.11.2. Removal notice In Red Hat build of OpenTelemetry 3.0, the Jaeger exporter has been removed. Bug fixes and support are provided only through the end of the 2.9 lifecycle. As an alternative to the Jaeger exporter for sending data to the Jaeger collector, you can use the OTLP exporter instead. 1.11.3. Bug fixes This update introduces the following bug fixes: Fixed support for disconnected environments when using the oc adm catalog mirror CLI command. 1.11.4. Known issues There is currently a known issue: Currently, the cluster monitoring of the Red Hat build of OpenTelemetry Operator is disabled due to a bug ( TRACING-3761 ). The bug is preventing the cluster monitoring from scraping metrics from the Red Hat build of OpenTelemetry Operator due to a missing label openshift.io/cluster-monitoring=true that is required for the cluster monitoring and service monitor object. Workaround You can enable the cluster monitoring as follows: Add the following label in the Operator namespace: oc label namespace openshift-opentelemetry-operator openshift.io/cluster-monitoring=true Create a service monitor, role, and role binding: apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: opentelemetry-operator-controller-manager-metrics-service namespace: openshift-opentelemetry-operator spec: endpoints: - bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token path: /metrics port: https scheme: https tlsConfig: insecureSkipVerify: true selector: matchLabels: app.kubernetes.io/name: opentelemetry-operator control-plane: controller-manager --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: otel-operator-prometheus namespace: openshift-opentelemetry-operator annotations: include.release.openshift.io/self-managed-high-availability: "true" include.release.openshift.io/single-node-developer: "true" rules: - apiGroups: - "" resources: - services - endpoints - pods verbs: - get - list - watch --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: otel-operator-prometheus namespace: openshift-opentelemetry-operator annotations: include.release.openshift.io/self-managed-high-availability: "true" include.release.openshift.io/single-node-developer: "true" roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: otel-operator-prometheus subjects: - kind: ServiceAccount name: prometheus-k8s namespace: openshift-monitoring 1.12. 
Release notes for Red Hat build of OpenTelemetry 2.9.2 Important The Red Hat build of OpenTelemetry is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Red Hat build of OpenTelemetry 2.9.2 is based on the open source OpenTelemetry release 0.81.0. 1.12.1. CVEs This release fixes CVE-2023-46234 . 1.12.2. Known issues There is currently a known issue: Currently, you must manually set Operator maturity to Level IV, Deep Insights. ( TRACING-3431 ) 1.13. Release notes for Red Hat build of OpenTelemetry 2.9.1 Important The Red Hat build of OpenTelemetry is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Red Hat build of OpenTelemetry 2.9.1 is based on the open source OpenTelemetry release 0.81.0. 1.13.1. CVEs This release fixes CVE-2023-44487 . 1.13.2. Known issues There is currently a known issue: Currently, you must manually set Operator maturity to Level IV, Deep Insights. ( TRACING-3431 ) 1.14. Release notes for Red Hat build of OpenTelemetry 2.9 Important The Red Hat build of OpenTelemetry is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Red Hat build of OpenTelemetry 2.9 is based on the open source OpenTelemetry release 0.81.0. 1.14.1. New features and enhancements This release introduces the following enhancements for the Red Hat build of OpenTelemetry: Support OTLP metrics ingestion. The metrics can be forwarded and stored in the user-workload-monitoring via the Prometheus exporter. Support the Operator maturity Level IV, Deep Insights, which enables upgrading and monitoring of OpenTelemetry Collector instances and the Red Hat build of OpenTelemetry Operator. Report traces and metrics from remote clusters using OTLP or HTTP and HTTPS. Collect OpenShift Container Platform resource attributes via the resourcedetection processor. Support the managed and unmanaged states in the OpenTelemetryCollector custom resouce. 1.14.2. Known issues There is currently a known issue: Currently, you must manually set Operator maturity to Level IV, Deep Insights. ( TRACING-3431 ) 1.15. 
Release notes for Red Hat build of OpenTelemetry 2.8 Important The Red Hat build of OpenTelemetry is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Red Hat build of OpenTelemetry 2.8 is based on the open source OpenTelemetry release 0.74.0. 1.15.1. Bug fixes This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 1.16. Release notes for Red Hat build of OpenTelemetry 2.7 Important The Red Hat build of OpenTelemetry is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Red Hat build of OpenTelemetry 2.7 is based on the open source OpenTelemetry release 0.63.1. 1.16.1. Bug fixes This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 1.17. Release notes for Red Hat build of OpenTelemetry 2.6 Important The Red Hat build of OpenTelemetry is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Red Hat build of OpenTelemetry 2.6 is based on the open source OpenTelemetry release 0.60. 1.17.1. Bug fixes This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 1.18. Release notes for Red Hat build of OpenTelemetry 2.5 Important The Red Hat build of OpenTelemetry is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Red Hat build of OpenTelemetry 2.5 is based on the open source OpenTelemetry release 0.56. 1.18.1. New features and enhancements This update introduces the following enhancement: Support for collecting Kubernetes resource attributes to the Red Hat build of OpenTelemetry Operator. 1.18.2. Bug fixes This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 1.19. 
Release notes for Red Hat build of OpenTelemetry 2.4 Important The Red Hat build of OpenTelemetry is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Red Hat build of OpenTelemetry 2.4 is based on the open source OpenTelemetry release 0.49. 1.19.1. Bug fixes This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 1.20. Release notes for Red Hat build of OpenTelemetry 2.3 Important The Red Hat build of OpenTelemetry is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Red Hat build of OpenTelemetry 2.3.1 is based on the open source OpenTelemetry release 0.44.1. Red Hat build of OpenTelemetry 2.3.0 is based on the open source OpenTelemetry release 0.44.0. 1.20.1. Bug fixes This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 1.21. Release notes for Red Hat build of OpenTelemetry 2.2 Important The Red Hat build of OpenTelemetry is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Red Hat build of OpenTelemetry 2.2 is based on the open source OpenTelemetry release 0.42.0. 1.21.1. Technology Preview features The unsupported OpenTelemetry Collector components included in the 2.1 release are removed. 1.21.2. Bug fixes This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 1.22. Release notes for Red Hat build of OpenTelemetry 2.1 Important The Red Hat build of OpenTelemetry is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Red Hat build of OpenTelemetry 2.1 is based on the open source OpenTelemetry release 0.41.1. 1.22.1. 
Technology Preview features This release introduces a breaking change to how to configure certificates in the OpenTelemetry custom resource file. With this update, the ca_file moves under tls in the custom resource, as shown in the following examples. CA file configuration for OpenTelemetry version 0.33 spec: mode: deployment config: | exporters: jaeger: endpoint: jaeger-production-collector-headless.tracing-system.svc:14250 ca_file: "/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt" CA file configuration for OpenTelemetry version 0.41.1 spec: mode: deployment config: | exporters: jaeger: endpoint: jaeger-production-collector-headless.tracing-system.svc:14250 tls: ca_file: "/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt" 1.22.2. Bug fixes This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 1.23. Release notes for Red Hat build of OpenTelemetry 2.0 Important The Red Hat build of OpenTelemetry is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Red Hat build of OpenTelemetry 2.0 is based on the open source OpenTelemetry release 0.33.0. This release adds the Red Hat build of OpenTelemetry as a Technology Preview , which you install using the Red Hat build of OpenTelemetry Operator. Red Hat build of OpenTelemetry is based on the OpenTelemetry APIs and instrumentation. The Red Hat build of OpenTelemetry includes the OpenTelemetry Operator and Collector. You can use the Collector to receive traces in the OpenTelemetry or Jaeger protocol and send the trace data to the Red Hat build of OpenTelemetry. Other capabilities of the Collector are not supported at this time. The OpenTelemetry Collector allows developers to instrument their code with vendor agnostic APIs, avoiding vendor lock-in and enabling a growing ecosystem of observability tooling. 1.24. Getting support If you experience difficulty with a procedure described in this documentation, or with OpenShift Container Platform in general, visit the Red Hat Customer Portal . From the Customer Portal, you can: Search or browse through the Red Hat Knowledgebase of articles and solutions relating to Red Hat products. Submit a support case to Red Hat Support. Access other product documentation. To identify issues with your cluster, you can use Insights in OpenShift Cluster Manager Hybrid Cloud Console . Insights provides details about issues and, if available, information on how to solve a problem. If you have a suggestion for improving this documentation or have found an error, submit a Jira issue for the most relevant documentation component. Please provide specific details, such as the section name and OpenShift Container Platform version. 1.25. Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. 
For more details, see our CTO Chris Wright's message .
[ "apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: opentelemetry-operator-controller-manager-metrics-service namespace: openshift-opentelemetry-operator spec: endpoints: - bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token path: /metrics port: https scheme: https tlsConfig: insecureSkipVerify: true selector: matchLabels: app.kubernetes.io/name: opentelemetry-operator control-plane: controller-manager --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: otel-operator-prometheus namespace: openshift-opentelemetry-operator annotations: include.release.openshift.io/self-managed-high-availability: \"true\" include.release.openshift.io/single-node-developer: \"true\" rules: - apiGroups: - \"\" resources: - services - endpoints - pods verbs: - get - list - watch --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: otel-operator-prometheus namespace: openshift-opentelemetry-operator annotations: include.release.openshift.io/self-managed-high-availability: \"true\" include.release.openshift.io/single-node-developer: \"true\" roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: otel-operator-prometheus subjects: - kind: ServiceAccount name: prometheus-k8s namespace: openshift-monitoring", "spec: mode: deployment config: | exporters: jaeger: endpoint: jaeger-production-collector-headless.tracing-system.svc:14250 ca_file: \"/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt\"", "spec: mode: deployment config: | exporters: jaeger: endpoint: jaeger-production-collector-headless.tracing-system.svc:14250 tls: ca_file: \"/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt\"" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/red_hat_build_of_opentelemetry/otel_rn
Chapter 4. Build [config.openshift.io/v1]
Chapter 4. Build [config.openshift.io/v1] Description Build configures the behavior of OpenShift builds for the entire cluster. This includes default settings that can be overridden in BuildConfig objects, and overrides which are applied to all builds. The canonical name is "cluster" Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 4.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object Spec holds user-settable values for the build controller configuration 4.1.1. .spec Description Spec holds user-settable values for the build controller configuration Type object Property Type Description additionalTrustedCA object AdditionalTrustedCA is a reference to a ConfigMap containing additional CAs that should be trusted for image pushes and pulls during builds. The namespace for this config map is openshift-config. DEPRECATED: Additional CAs for image pull and push should be set on image.config.openshift.io/cluster instead. buildDefaults object BuildDefaults controls the default information for Builds buildOverrides object BuildOverrides controls override settings for builds 4.1.2. .spec.additionalTrustedCA Description AdditionalTrustedCA is a reference to a ConfigMap containing additional CAs that should be trusted for image pushes and pulls during builds. The namespace for this config map is openshift-config. DEPRECATED: Additional CAs for image pull and push should be set on image.config.openshift.io/cluster instead. Type object Required name Property Type Description name string name is the metadata.name of the referenced config map 4.1.3. .spec.buildDefaults Description BuildDefaults controls the default information for Builds Type object Property Type Description defaultProxy object DefaultProxy contains the default proxy settings for all build operations, including image pull/push and source download. Values can be overrode by setting the HTTP_PROXY , HTTPS_PROXY , and NO_PROXY environment variables in the build config's strategy. env array Env is a set of default environment variables that will be applied to the build if the specified variables do not exist on the build env[] object EnvVar represents an environment variable present in a Container. gitProxy object GitProxy contains the proxy settings for git operations only. If set, this will override any Proxy settings for all git commands, such as git clone. Values that are not set here will be inherited from DefaultProxy. imageLabels array ImageLabels is a list of docker labels that are applied to the resulting image. User can override a default label by providing a label with the same name in their Build/BuildConfig. 
imageLabels[] object resources object Resources defines resource requirements to execute the build. 4.1.4. .spec.buildDefaults.defaultProxy Description DefaultProxy contains the default proxy settings for all build operations, including image pull/push and source download. Values can be overrode by setting the HTTP_PROXY , HTTPS_PROXY , and NO_PROXY environment variables in the build config's strategy. Type object Property Type Description httpProxy string httpProxy is the URL of the proxy for HTTP requests. Empty means unset and will not result in an env var. httpsProxy string httpsProxy is the URL of the proxy for HTTPS requests. Empty means unset and will not result in an env var. noProxy string noProxy is a comma-separated list of hostnames and/or CIDRs and/or IPs for which the proxy should not be used. Empty means unset and will not result in an env var. readinessEndpoints array (string) readinessEndpoints is a list of endpoints used to verify readiness of the proxy. trustedCA object trustedCA is a reference to a ConfigMap containing a CA certificate bundle. The trustedCA field should only be consumed by a proxy validator. The validator is responsible for reading the certificate bundle from the required key "ca-bundle.crt", merging it with the system default trust bundle, and writing the merged trust bundle to a ConfigMap named "trusted-ca-bundle" in the "openshift-config-managed" namespace. Clients that expect to make proxy connections must use the trusted-ca-bundle for all HTTPS requests to the proxy, and may use the trusted-ca-bundle for non-proxy HTTPS requests as well. The namespace for the ConfigMap referenced by trustedCA is "openshift-config". Here is an example ConfigMap (in yaml): apiVersion: v1 kind: ConfigMap metadata: name: user-ca-bundle namespace: openshift-config data: ca-bundle.crt: | -----BEGIN CERTIFICATE----- Custom CA certificate bundle. -----END CERTIFICATE----- 4.1.5. .spec.buildDefaults.defaultProxy.trustedCA Description trustedCA is a reference to a ConfigMap containing a CA certificate bundle. The trustedCA field should only be consumed by a proxy validator. The validator is responsible for reading the certificate bundle from the required key "ca-bundle.crt", merging it with the system default trust bundle, and writing the merged trust bundle to a ConfigMap named "trusted-ca-bundle" in the "openshift-config-managed" namespace. Clients that expect to make proxy connections must use the trusted-ca-bundle for all HTTPS requests to the proxy, and may use the trusted-ca-bundle for non-proxy HTTPS requests as well. The namespace for the ConfigMap referenced by trustedCA is "openshift-config". Here is an example ConfigMap (in yaml): apiVersion: v1 kind: ConfigMap metadata: name: user-ca-bundle namespace: openshift-config data: ca-bundle.crt: \| -----BEGIN CERTIFICATE----- Custom CA certificate bundle. -----END CERTIFICATE----- Type object Required name Property Type Description name string name is the metadata.name of the referenced config map 4.1.6. .spec.buildDefaults.env Description Env is a set of default environment variables that will be applied to the build if the specified variables do not exist on the build Type array 4.1.7. .spec.buildDefaults.env[] Description EnvVar represents an environment variable present in a Container. Type object Required name Property Type Description name string Name of the environment variable. Must be a C_IDENTIFIER. 
value string Variable references USD(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. Double USDUSD are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "USDUSD(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Defaults to "". valueFrom object Source for the environment variable's value. Cannot be used if value is not empty. 4.1.8. .spec.buildDefaults.env[].valueFrom Description Source for the environment variable's value. Cannot be used if value is not empty. Type object Property Type Description configMapKeyRef object Selects a key of a ConfigMap. fieldRef object Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>'] , metadata.annotations['<KEY>'] , spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs. resourceFieldRef object Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. secretKeyRef object Selects a key of a secret in the pod's namespace 4.1.9. .spec.buildDefaults.env[].valueFrom.configMapKeyRef Description Selects a key of a ConfigMap. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 4.1.10. .spec.buildDefaults.env[].valueFrom.fieldRef Description Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>'] , metadata.annotations['<KEY>'] , spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 4.1.11. .spec.buildDefaults.env[].valueFrom.resourceFieldRef Description Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor integer-or-string Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 4.1.12. .spec.buildDefaults.env[].valueFrom.secretKeyRef Description Selects a key of a secret in the pod's namespace Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 4.1.13. .spec.buildDefaults.gitProxy Description GitProxy contains the proxy settings for git operations only.
If set, this will override any Proxy settings for all git commands, such as git clone. Values that are not set here will be inherited from DefaultProxy. Type object Property Type Description httpProxy string httpProxy is the URL of the proxy for HTTP requests. Empty means unset and will not result in an env var. httpsProxy string httpsProxy is the URL of the proxy for HTTPS requests. Empty means unset and will not result in an env var. noProxy string noProxy is a comma-separated list of hostnames and/or CIDRs and/or IPs for which the proxy should not be used. Empty means unset and will not result in an env var. readinessEndpoints array (string) readinessEndpoints is a list of endpoints used to verify readiness of the proxy. trustedCA object trustedCA is a reference to a ConfigMap containing a CA certificate bundle. The trustedCA field should only be consumed by a proxy validator. The validator is responsible for reading the certificate bundle from the required key "ca-bundle.crt", merging it with the system default trust bundle, and writing the merged trust bundle to a ConfigMap named "trusted-ca-bundle" in the "openshift-config-managed" namespace. Clients that expect to make proxy connections must use the trusted-ca-bundle for all HTTPS requests to the proxy, and may use the trusted-ca-bundle for non-proxy HTTPS requests as well. The namespace for the ConfigMap referenced by trustedCA is "openshift-config". Here is an example ConfigMap (in yaml): apiVersion: v1 kind: ConfigMap metadata: name: user-ca-bundle namespace: openshift-config data: ca-bundle.crt: | -----BEGIN CERTIFICATE----- Custom CA certificate bundle. -----END CERTIFICATE----- 4.1.14. .spec.buildDefaults.gitProxy.trustedCA Description trustedCA is a reference to a ConfigMap containing a CA certificate bundle. The trustedCA field should only be consumed by a proxy validator. The validator is responsible for reading the certificate bundle from the required key "ca-bundle.crt", merging it with the system default trust bundle, and writing the merged trust bundle to a ConfigMap named "trusted-ca-bundle" in the "openshift-config-managed" namespace. Clients that expect to make proxy connections must use the trusted-ca-bundle for all HTTPS requests to the proxy, and may use the trusted-ca-bundle for non-proxy HTTPS requests as well. The namespace for the ConfigMap referenced by trustedCA is "openshift-config". Here is an example ConfigMap (in yaml): apiVersion: v1 kind: ConfigMap metadata: name: user-ca-bundle namespace: openshift-config data: ca-bundle.crt: \| -----BEGIN CERTIFICATE----- Custom CA certificate bundle. -----END CERTIFICATE----- Type object Required name Property Type Description name string name is the metadata.name of the referenced config map 4.1.15. .spec.buildDefaults.imageLabels Description ImageLabels is a list of docker labels that are applied to the resulting image. User can override a default label by providing a label with the same name in their Build/BuildConfig. Type array 4.1.16. .spec.buildDefaults.imageLabels[] Description Type object Property Type Description name string Name defines the name of the label. It must have non-zero length. value string Value defines the literal value of the label. 4.1.17. .spec.buildDefaults.resources Description Resources defines resource requirements to execute the build. Type object Property Type Description limits integer-or-string Limits describes the maximum amount of compute resources allowed. 
More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 4.1.18. .spec.buildOverrides Description BuildOverrides controls override settings for builds Type object Property Type Description forcePull boolean ForcePull overrides, if set, the equivalent value in the builds, i.e. false disables force pull for all builds, true enables force pull for all builds, independently of what each build specifies itself imageLabels array ImageLabels is a list of docker labels that are applied to the resulting image. If user provided a label in their Build/BuildConfig with the same name as one in this list, the user's label will be overwritten. imageLabels[] object nodeSelector object (string) NodeSelector is a selector which must be true for the build pod to fit on a node tolerations array Tolerations is a list of Tolerations that will override any existing tolerations set on a build pod. tolerations[] object The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>. 4.1.19. .spec.buildOverrides.imageLabels Description ImageLabels is a list of docker labels that are applied to the resulting image. If user provided a label in their Build/BuildConfig with the same name as one in this list, the user's label will be overwritten. Type array 4.1.20. .spec.buildOverrides.imageLabels[] Description Type object Property Type Description name string Name defines the name of the label. It must have non-zero length. value string Value defines the literal value of the label. 4.1.21. .spec.buildOverrides.tolerations Description Tolerations is a list of Tolerations that will override any existing tolerations set on a build pod. Type array 4.1.22. .spec.buildOverrides.tolerations[] Description The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>. Type object Property Type Description effect string Effect indicates the taint effect to match. Empty means match all taint effects. When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute. key string Key is the taint key that the toleration applies to. Empty means match all taint keys. If the key is empty, operator must be Exists; this combination means to match all values and all keys. operator string Operator represents a key's relationship to the value. Valid operators are Exists and Equal. Defaults to Equal. Exists is equivalent to wildcard for value, so that a pod can tolerate all taints of a particular category. tolerationSeconds integer TolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate the taint forever (do not evict). Zero and negative values will be treated as 0 (evict immediately) by the system. value string Value is the taint value the toleration matches to. If the operator is Exists, the value should be empty, otherwise just a regular string. 4.2. 
API endpoints The following API endpoints are available: /apis/config.openshift.io/v1/builds DELETE : delete collection of Build GET : list objects of kind Build POST : create a Build /apis/config.openshift.io/v1/builds/{name} DELETE : delete a Build GET : read the specified Build PATCH : partially update the specified Build PUT : replace the specified Build /apis/config.openshift.io/v1/builds/{name}/status GET : read status of the specified Build PATCH : partially update status of the specified Build PUT : replace status of the specified Build 4.2.1. /apis/config.openshift.io/v1/builds Table 4.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of Build Table 4.2. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. 
The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 4.3. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind Build Table 4.4. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. 
limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 4.5. HTTP responses HTTP code Reponse body 200 - OK BuildList schema 401 - Unauthorized Empty HTTP method POST Description create a Build Table 4.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.7. Body parameters Parameter Type Description body Build schema Table 4.8. HTTP responses HTTP code Reponse body 200 - OK Build schema 201 - Created Build schema 202 - Accepted Build schema 401 - Unauthorized Empty 4.2.2. /apis/config.openshift.io/v1/builds/{name} Table 4.9. Global path parameters Parameter Type Description name string name of the Build Table 4.10. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a Build Table 4.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 4.12. Body parameters Parameter Type Description body DeleteOptions schema Table 4.13. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Build Table 4.14. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 4.15. HTTP responses HTTP code Reponse body 200 - OK Build schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Build Table 4.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.17. Body parameters Parameter Type Description body Patch schema Table 4.18. HTTP responses HTTP code Reponse body 200 - OK Build schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Build Table 4.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. 
The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.20. Body parameters Parameter Type Description body Build schema Table 4.21. HTTP responses HTTP code Reponse body 200 - OK Build schema 201 - Created Build schema 401 - Unauthorized Empty 4.2.3. /apis/config.openshift.io/v1/builds/{name}/status Table 4.22. Global path parameters Parameter Type Description name string name of the Build Table 4.23. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified Build Table 4.24. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 4.25. HTTP responses HTTP code Reponse body 200 - OK Build schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Build Table 4.26. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.27. Body parameters Parameter Type Description body Patch schema Table 4.28. HTTP responses HTTP code Reponse body 200 - OK Build schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Build Table 4.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. 
The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.30. Body parameters Parameter Type Description body Build schema Table 4.31. HTTP responses HTTP code Reponse body 200 - OK Build schema 201 - Created Build schema 401 - Unauthorized Empty
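As a practical illustration of the specification above, the following sketch shows how the cluster-scoped Build resource (which is edited rather than created, because its canonical name is "cluster") might combine buildDefaults and buildOverrides. It is an assumed example, not taken from this reference: the proxy URLs, environment variable, label names, node selector, and toleration values are placeholders.
apiVersion: config.openshift.io/v1
kind: Build
metadata:
  name: cluster
spec:
  buildDefaults:
    defaultProxy:                       # assumed proxy endpoints
      httpProxy: http://proxy.example.com:3128
      httpsProxy: http://proxy.example.com:3128
      noProxy: .cluster.local,.svc
    env:
    - name: BUILD_LOGLEVEL              # applied only if the build does not set it itself
      value: "2"
    imageLabels:
    - name: organization                # hypothetical default label
      value: example-org
    resources:
      requests:
        cpu: 100m
        memory: 256Mi
  buildOverrides:
    forcePull: true
    nodeSelector:
      node-role.kubernetes.io/worker: ""
    tolerations:
    - key: build-workload               # hypothetical taint key
      operator: Exists
      effect: NoSchedule
A change like this could be applied with, for example, oc apply -f build-cluster.yaml (an assumed file name) or interactively with oc edit build.config.openshift.io cluster , both of which go through the PUT and PATCH endpoints listed in section 4.2.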
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/config_apis/build-config-openshift-io-v1
Chapter 28. Process instance management
Chapter 28. Process instance management To view process instances, in Business Central click Menu Manage Process Instances . Note: Each row in the Manage Process Instances list represents a process instance from a particular process definition. Each instance has its own internal state of the information that the process is manipulating. Click a process instance to view the corresponding tabs with runtime information related to the process. Figure 28.1. Process instance tab view Instance Details : Provides an overview of what is going on inside the process. It displays the current state of the instance and the current activity that is being executed. Process Variables : Displays all of the process variables that are being manipulated by the instance, with the exception of the variables that contain documents. You can edit the process variable value and view its history. Documents : Displays process documents if the process contains a variable of the type org.jbpm.Document . This enables access, download, and manipulation of the attached documents. Logs : Displays process instance logs for the end users. For more information, see Interacting with processes and tasks . Diagram : Tracks the progress of the process instance through the BPMN2 diagram. The node or nodes of the process flow that are in progress are highlighted in red. Reusable sub-processes appear collapsed within the parent process. Double-click on the reusable sub-process node to open its diagram from the parent process diagram. For information on user credentials and conditions to be met to access KIE Server runtime data, see Planning a Red Hat Process Automation Manager installation . 28.1. Process instance filtering For process instances in Menu Manage Process Instances , you can use the Filters and Advanced Filters panels to sort process instances as needed. Procedure In Business Central, go to Menu Manage Process Instances . On the Manage Process Instances page, click the Filters icon on the left of the page to select the filters that you want to use: State : Filter process instances based on their state ( Active , Aborted , Completed , Pending , and Suspended ). Errors : Filter process instances that contain at least one or no errors. Filter By : Filter process instances based on the following attributes: Id : Filter by process instance ID. Input: Numeric Initiator : Filter by the user ID of the process instance initiator. The user ID is a unique value, and depends on the ID management system. Input: String Correlation key : Filter by correlation key. Input: String Description : Filter by process instance description. Input: String Name : Filter process instances based on process definition name. Definition ID : The ID of the instance definition. Deployment ID : The ID of the instance deployment. SLA Compliance : SLA compliance status ( Aborted , Met , N/A , Pending , and Violated ). Parent Process ID : The ID of the parent process. Start Date : Filter process instances based on their creation date. Last update : Filter process instances based on their last modified date. You can also use the Advanced Filters option to create custom filters in Business Central. 28.2. Creating a custom process instance list You can view the list of all the running process instances in Menu Manage Process Instances in Business Central. From this page, you can manage the instances during run time and monitor their execution. You can customize which columns are displayed, the number of rows displayed per page, and filter the results.
You can also create a custom process instance list. Prerequisites A project with a process definition has been deployed in Business Central. Procedure In Business Central, go to Menu Manage Process Instances . In the Manage Process Instances page, click the advanced filters icon on the left to open the list of process instance Advanced Filters options. In the Advanced Filters panel, enter the name and description of the filter that you want to use for your custom process instance list, and click Add New . From the list of filter values, select the parameters and values to configure the custom process instance list, and click Save . A new filter is created and immediately applied to the process instances list. The filter is also saved in the Saved Filters list. You can access saved filters by clicking the star icon on the left side of the Manage Process Instances page. 28.3. Managing process instances using a default filter You can set a process instance filter as a default filter using the Saved Filter option in Business Central. A default filter will be executed every time the page is opened by the user. Procedure In Business Central, go to Menu Manage Process Instances . On the Manage Process Instances page, click the star icon on the left of the page to expand the Saved Filters panel. In the Saved Filters panel, you can view the saved advanced filters. Default filter selection for Process Instances In the Saved Filters panel, set a saved process instance filter as the default filter. 28.4. Viewing process instance variables using basic filters Business Central provides basic filters to view process instance variables. You can view the process instance variables of the process as columns using Show/hide columns . Procedure In Business Central, go to Menu Manage Process Instances . On the Manage Process Instances page, click the filter icon on the left of the page to expand the Filters panel. In the Filters panel, select the Definition Id and select a definition ID from the list. The filter is applied to the current process instance list. Click the columns icon (to the right of Bulk Actions) in the upper-right of the screen to display or hide columns in the process instances table. Click the star icon to open the Saved Filters panel. In the Saved Filters panel, you can view all the saved advanced filters. 28.5. Viewing process instance variables using advanced filters You can use the Advanced Filters option in Business Central to view process instance variables. When you create a filter over the column processId , you can view the process instance variables of the process as columns using Show/hide columns . Procedure In Business Central, go to Menu Manage Process Instances . On the Manage Process Instances page, click the advanced filters icon to expand the Advanced Filters panel. In the Advanced Filters panel, enter the name and description of the filter, and click Add New . From the Select column list, select the processId attribute. The value will change to processId != value1 . From the Select column list, select equals to for the query. In the text field, enter the name of the process ID. Click Save and the filter is applied to the current process instance list. Click the columns icon (to the right of Bulk Actions ) in the upper-right of the process instances list and the process instance variables of the specified process ID will be displayed. Click the star icon to open the Saved Filters panel. In the Saved Filters panel, you can view all the saved advanced filters. 28.6.
Aborting a process instance using Business Central If a process instance becomes obsolete, you can abort the process instance in Business Central. Procedure In Business Central, go to Menu Manage Process Instances to view the list of available process instances. Select the process instance you want to abort from the list. In the process details page, click the Abort button in the upper-right corner. 28.7. Signaling process instances from Business Central You can signal a process instance from Business Central. Prerequisites A project with a process definition has been deployed in Business Central. Procedure In Business Central, go to Menu Manage Process Instances . Locate the required process instance, click the button and select Signal from the drop-down menu. Fill the following fields: Signal Name : Corresponds to the SignalRef or MessageRef attributes of the signal. This field is required. Note You can also send a Message event to the process by adding the Message- prefix in front of the MessageRef value. Signal Data : Corresponds to data accompanying the signal. This field is optional. Note When using the Business Central user interface, you can only signal Signal intermediate catch events. 28.8. Asynchronous signal events When several process instances from different process definitions are waiting for the same signal, they are executed sequentially in the same thread. But, if one of those process instances throws a runtime exception, all the other process instances are affected and usually result in a rolled back transaction. To avoid this situation, Red Hat Process Automation Manager supports using asynchronous signals events for: Throwing intermediate signal events End events 28.8.1. Configuring asynchronous signals for intermediate events Intermediate events drive the flow of a business process. Intermediate events are used to either catch or throw an event during the execution of the business process. An intermediate event handles a particular situation that occurs during process execution. A throwing signal intermediate event produces a signal object based on the defined properties. You can configure an asynchronous signal for intermediate events in Business Central. Prerequisites You have created a project in Business Central and it contains at least one business process asset. A project with a process definition has been deployed in Business Central. Procedure Open a business process asset. In the process designer canvas, drag and drop the Intermediate Signal from the left toolbar. In the upper-right corner, click to open the Properties panel. Expand Data Assignments . Click the box under the Assignments sub-section. The Task Data I/O dialog box opens. Click Add to Data Inputs and Assignments . Enter a name of the throw event as async in the Name field. Leave the Data Type and Source fields blank. Click OK . It will automatically set the executor service on each session. This ensures that each process instance is signaled in a different transaction. 28.8.2. Configuring asynchronous signals for end events End events indicate the completion of a business process. All end events, with the exception of the none and terminate end events, are throw events. A throwing signal end event is used to finish a process or sub-process flow. When the execution flow enters the element, the execution flow finishes and produces a signal identified by its SignalRef property. You can configure an asynchronous signal for end events in Business Central. 
Prerequisites You have created a project in Business Central and it contains at least one business process asset. A project with a process definition has been deployed in Business Central. Procedure Open a business process asset. In the process designer canvas, drag and drop the End Signal from the left toolbar. In the upper-right corner, click to open the Properties panel. Expand Data Assignments . Click the box under the Assignments sub-section. The Task Data I/O dialog box opens. Click Add to Data Inputs and Assignments . Enter a name of the throw event as async in the Name field. Leave the Data Type and Source fields blank. Click OK . It will automatically set the executor service on each session. This ensures that each process instance is signaled in a different transaction. 28.9. Process instance operations Process instance administration API exposes the following operations for the process engine and the individual process instance. get process nodes - by process instance id : Returns all nodes, including all embedded sub-processes that exist in the process instance. You must retrieve the nodes from the specified process instance to ensure that the node exists and includes a valid ID so that it can be used by other administration operations. cancel node instance - by process instance id and node instance id : Cancels a node instance within a process instance using the process and node instance IDs. retrigger node instance - by process instance id and node instance id : Re-triggers a node instance by canceling the active node instance and creates a new node instance of the same type using the process and node instance IDs. update timer - by process instance id and timer id : Updates the timer expiration of an active timer based on the time elapsed since the timer was scheduled. For example, if a timer was initially created with delay of one hour and after thirty minutes you set it to update in two hours, it expires in one and a half hours from the time it was updated. delay : The duration after the timer expires. period : The interval between the timer expiration for cycle timers. repeat limit : Limits the expiration for a specified number for cycle timers. update timer relative to current time - by process instance id and timer id : Updates the timer expiration of an active timer based on the current time. For example, if a timer was initially created with delay of one hour and after thirty minutes you set it to update in two hours, it expires in two hours from the time it was updated. list timer instances - by process instance id : Returns all active timers for a specified process instance. trigger node - by process instance id and node id : Triggers any node in a process instance at any time.
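The runtime actions described in this chapter, such as signaling and aborting a process instance, can also be invoked remotely against KIE Server instead of through the Business Central user interface. The following curl sketch is an illustration only: the server URL, the kieserver user and password, the container ID evaluation_1.0 , the process instance ID 42 , and the signal name Signal-CancelOrder are all assumptions, and the exact REST paths can vary between KIE Server versions, so verify them against the KIE Server REST API documentation for your release.
# Signal process instance 42 in container "evaluation_1.0" (all identifiers are assumed values).
curl -u kieserver:password1! -X POST \
  -H "Content-Type: application/json" \
  -d '"optional signal data"' \
  "http://localhost:8080/kie-server/services/rest/server/containers/evaluation_1.0/processes/instances/42/signal/Signal-CancelOrder"
# Abort the same process instance if it becomes obsolete.
curl -u kieserver:password1! -X DELETE \
  "http://localhost:8080/kie-server/services/rest/server/containers/evaluation_1.0/processes/instances/42"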
null
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/developing_process_services_in_red_hat_process_automation_manager/process-instance-details-con-managing-and-monitoring-processes
B.58. nss_db
B.58. nss_db B.58.1. RHBA-2011:0941 - nss_db bug fix update An updated nss_db package that fixes a bug is now available for Red Hat Enterprise Linux 6 Extended Update Support. The nss_db package contains a set of C library extensions which allow Berkeley Databases to be used as a primary source of aliases, groups, hosts, networks, protocols, users, services, or shadow passwords instead of, or in addition to, using flat files or NIS (Network Information Service). Bug Fix BZ# 718202 When a module does not provide its own method for retrieving a user's list of supplemental group memberships, the libc library's default method is used instead to get that information by examining all of the groups known to the module. Consequently, applications which attempted to retrieve the information from multiple threads simultaneously, interfered with each other and each received an incomplete result set. This update provides a module-specific method which prevents this interference in the nss_db module. Users of nss_db are advised to upgrade to this updated package, which fixes this bug.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.0_technical_notes/nss_db
Chapter 23. Using availability zones to make network resources highly available
Chapter 23. Using availability zones to make network resources highly available Starting with version 16.2, Red Hat OpenStack Platform (RHOSP) supports RHOSP Networking service (neutron) availability zones (AZs). AZs enable you to make your RHOSP network resources highly available. You can group network nodes that are attached to different power sources on different AZs, and then schedule crucial services to be on separate AZs. Often Networking service AZs are used in conjunction with Compute service (nova) AZs to ensure that customers use specific virtual networks that are local to a physical site that workloads run on. For more information on Distributed Compute Node architecture, see the Deploying a Distributed Compute Node architecture guide. 23.1. About Networking service availability zones The required extensions that provide Red Hat OpenStack Platform (RHOSP) Networking service (neutron) availability zones (AZ) functionality are availability_zone , router_availability_zone , and network_availability_zone . The Modular Layer 2 plug-in with the Open vSwitch (ML2/OVS) mechanism driver supports all of these extensions. Note The Modular Layer 2 plug-in with the Open Virtual Network (ML2/OVN) mechanism driver supports only router availability zones. ML2/OVN has a distributed DHCP server, so supporting network AZs is unnecessary. When you create your network resource, you can specify an AZ by using the OpenStack client command line option, --availability-zone-hint . The AZ that you specify is added to the AZ hint list. However, the AZ attribute is not actually set until the resource is scheduled. The actual AZ that is assigned to your network resource can vary from the AZ that you specified with the hint option. The reasons for this mismatch can be that there is a zone failure, or that the zone specified has no remaining capacity. You can configure the Networking service for a default AZ, in case users fail to specify an AZ when they create a network resource. In addition to setting a default AZ, you can also configure specific drivers to schedule networks and routers on AZs. Additional resources Configuring Network service availability zones with ML2/OVS Configuring Network service availability zones with ML2/OVN Manually Assigning availability zones to networks and routers 23.2. Configuring Network service availability zones for ML2/OVS You can set one or more default availability zones (AZs) that are automatically assigned by the Red Hat OpenStack Platform (RHOSP) Networking service (neutron) when users create networks and routers. In addition, you can also set the network and router drivers that the Networking service uses to schedule these resources for a respective AZ. Note When using Networking service AZs in distributed compute node (DCN) environments, it is recommended to match the Networking service AZ names to the Compute service (nova) AZ names. The information contained in this topic is for deployments that run the RHOSP Networking service that uses the Modular Layer 2 plug-in with the Open vSwitch mechanism driver (ML2/OVS). Prerequisites Deployed RHOSP 16.2 or later. Running the RHOSP Networking service that uses the ML2/OVS mechanism driver. For more information, see the Deploying a Distributed Compute Node architecture guide. Procedure Log in to the undercloud as the stack user, and source the stackrc file to enable the director command line tools. Example Create a custom YAML environment file.
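The Example placeholders in this procedure refer to code blocks that are not reproduced in this excerpt. As a hedged sketch of what the file created in this step might contain once the following steps are applied, an ML2/OVS environment file could look like the following; the file name network_azs.yaml and the AZ names az-central , az-dcn1 , and az-dcn2 are assumptions and must match the AZ names in your own deployment (and, in DCN environments, your Compute service AZ names).
# /home/stack/templates/network_azs.yaml (assumed path)
parameter_defaults:
  # AZs assigned when a user omits --availability-zone-hint.
  NeutronDefaultAvailabilityZones: 'az-central,az-dcn1,az-dcn2'
  # AZs for the DHCP and L3 agents; in DCN environments define a single AZ for the DHCP agent.
  NeutronDhcpAgentAvailabilityZone: 'az-central'
  NeutronL3AgentAvailabilityZone: 'az-central,az-dcn1,az-dcn2'
  # Optional: keep or replace the default schedulers.
  NeutronNetworkSchedulerDriver: 'neutron.scheduler.dhcp_agent_scheduler.AZAwareWeightScheduler'
  NeutronRouterSchedulerDriver: 'neutron.scheduler.l3_agent_scheduler.AZLeastRoutersScheduler'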
Example Tip The Red Hat OpenStack Platform Orchestration service (heat) uses a set of plans called templates to install and configure your environment. You can customize aspects of the overcloud with a custom environment file , which is a special type of template that provides customization for your heat templates. In the YAML environment file, under parameter_defaults , enter the NeutronDefaultAvailabilityZones parameter and one or more AZs. The Networking service assigns these AZs if a user fails to specify an AZ with the --availability-zone-hint option when creating a network or a router. Important In DCN environments, you must match the Networking service AZ names with Compute service AZ names. Example Determine the AZs for the DHCP and the L3 agents, by entering values for the parameters, NeutronDhcpAgentAvailabilityZone and NeutronL3AgentAvailabilityZone , respectively. Example Important In DCN environments, define a single AZ for NeutronDhcpAgentAvailabilityZone so that ports are scheduled in the AZ relevant to the particular edge site. By default, the network and router schedulers are AZAwareWeightScheduler and AZLeastRoutersScheduler , respectively. If you want to change one or both of these, enter the new schedulers with the NeutronNetworkSchedulerDriver and NeutronRouterSchedulerDriver parameters, respectively. Example Run the openstack overcloud deploy command and include the core heat templates, environment files, and this new custom environment file. Important The order of the environment files is important because the parameters and resources defined in subsequent environment files take precedence. Example Verification Confirm that availability zones are properly defined, by running the availability zone list command. Example Sample output Additional resources About Networking service availability zones Configuring Network service availability zones with ML2/OVN Manually Assigning availability zones to networks and routers 23.3. Configuring Network service availability zones with ML2/OVN You can set one or more default availability zones (AZs) that are automatically assigned by the Red Hat OpenStack Platform (RHOSP) Networking service (neutron) when users create routers. In addition, you can also set the router driver that the Networking service uses to schedule these resources for a respective AZ. The information contained in this topic is for deployments that run the RHOSP Networking service that uses the Modular Layer 2 plug-in with the Open Virtual Network (ML2/OVN) mechanism driver. Note The ML2/OVN mechanism driver supports only router availability zones. ML2/OVN has a distributed DHCP server, so supporting network AZs is unnecessary. Prerequisites Deployed RHOSP 16.2 or later. Running the RHOSP Networking service that uses the ML2/OVN mechanism driver. When using Networking service AZs in distributed compute node (DCN) environments, you must match the Networking service AZ names to the Compute service (nova) AZ names. For more information, see the Deploying a Distributed Compute Node architecture guide. Important Ensure that all router gateway ports reside on the OpenStack Controller nodes by setting OVNCMSOptions: 'enable-chassis-as-gw' and by providing one or more AZ values for the OVNAvailabilityZone parameter. Performing these actions prevent the routers from scheduling all chassis as potential hosts for the router gateway ports. Procedure Log in to the undercloud as the stack user, and source the stackrc file to enable the director command line tools. 
Example Create a custom YAML environment file. Example Tip The Red Hat OpenStack Platform Orchestration service (heat) uses a set of plans called templates to install and configure your environment. You can customize aspects of the overcloud with a custom environment file , which is a special type of template that provides customization for your heat templates. In the YAML environment file, under parameter_defaults , enter the NeutronDefaultAvailabilityZones parameter and one or more AZs. Important In DCN environments, you must match the Networking service AZ names with Compute service AZ names. The Networking service assigns these AZs if a user fails to specify an AZ with the --availability-zone-hint option when creating a network or a router. Example Determine the AZs for the gateway nodes (Controllers and Network nodes), by entering values for the parameter, OVNAvailabilityZone . Important The OVNAvailabilityZone parameter replaces the use of AZ values in the OVNCMSOptions parameter. If you use the OVNAvailabilityZone parameter, ensure that there are no AZ values in the OVNCMSOptions parameter. Example In this example, roles have been predefined for Controllers for the az-central AZ, and Networkers for the datacenter1 and datacenter2 AZs: Important In DCN environments, define a single AZ for ControllerCentralParameter so that ports are scheduled in the AZ relevant to the particular edge site. By default, the router scheduler is AZLeastRoutersScheduler . If you want to change this, enter the new scheduler with the NeutronRouterSchedulerDriver parameters. Example Run the openstack overcloud deploy command and include the core heat templates, environment files, and this new custom environment file. Important The order of the environment files is important because the parameters and resources defined in subsequent environment files take precedence. Example Verification Confirm that availability zones are properly defined, by running the availability zone list command. Example Sample output Additional resources About Networking service availability zones Configuring Network service availability zones with ML2/OVS Manually Assigning availability zones to networks and routers 23.4. Manually Assigning availability zones to networks and routers You can manually assign a Red Hat OpenStack Platform (RHOSP) Networking service (neutron) availability zone (AZ) when you create a RHOSP network or a router. AZs enable you to make your RHOSP network resources highly available. You can group network nodes that are attached to different power sources on different AZs, and then schedule nodes running crucial services to be on separate AZs. Note If you fail to assign an AZ when creating a network or a router, the RHOSP Networking service automatically assigns to the resource the value that was specified to the RHOSP Orchestration service (heat) parameter. If no value is defined for NeutronDefaultAvailabilityZones the resources are scheduled without any AZ attributes. For RHOSP Networking service agents that use the Modular Layer 2 plug-in with the Open vSwitch (ML2/OVS) mechanism driver, if no AZ hint is supplied and no value specified for NeutronDefaultAvailabilityZones , then the Compute service (nova) AZ value is used to schedule the agent. Prerequisites Deployed RHOSP 16.2 or later. Running the RHOSP Networking service that uses either the ML2/OVS or ML2/OVN (Open Virtual Network) mechanism drivers. 
Procedure When you create a network on the overcloud using the OpenStack client, use the --availability-zone-hint option. Note The ML2/OVN mechanism driver supports only router availability zones. ML2/OVN has a distributed DHCP server, so supporting network AZs is unnecessary. In the following example, a network ( net1 ) is created and assigned to either AZ zone-1 or zone-2 : Network example Sample output When you create a router on the overcloud using the OpenStack client, use the --ha and --availability-zone-hint options. In the following example, a router ( router1 ) is created and assigned to either AZ zone-1 or zone-2 : Router example Sample output Notice that the actual AZ is not assigned at the time that you create the network resource. The RHOSP Networking service assigns the AZ when it schedules the resource. Verification Enter the appropriate OpenStack client show command to confirm in which zone the resource is hosted. Example Sample output Additional resources About Networking service availability zones Configuring Network service availability zones with ML2/OVS Configuring Network service availability zones with ML2/OVN
[ "source ~/stackrc", "vi /home/stack/templates/my-neutron-environment.yaml", "parameter_defaults: NeutronDefaultAvailabilityZones: 'az-central,az-datacenter2,az-datacenter1'", "parameter_defaults: NeutronDefaultAvailabilityZones: 'az-central,az-datacenter2,az-datacenter1' NeutronL3AgentAvailabilityZone: 'az-central,az-datacenter2,az-datacenter1' NeutronDhcpAgentAvailabilityZone: 'az-central,az-datacenter2,az-datacenter1'", "parameter_defaults: NeutronDefaultAvailabilityZones: 'az-central,az-datacenter2,az-datacenter1' NeutronL3AgentAvailabilityZone: 'az-central,az-datacenter2,az-datacenter1' NeutronDhcpAgentAvailabilityZone: 'az-central,az-datacenter2,az-datacenter1' NeutronNetworkSchedulerDriver: 'neutron.scheduler.dhcp_agent_scheduler.AZAwareWeightScheduler' NeutronRouterSchedulerDriver: 'neutron.scheduler.l3_agent_scheduler.AZLeastRoutersScheduler'", "openstack overcloud deploy --templates -e <your-environment-files> -e /usr/share/openstack-tripleo-heat-templates/environments/services/ my-neutron-environment.yaml", "openstack availability zone list", "+----------------+-------------+ | Zone Name | Zone Status | +----------------+-------------+ | az-central | available | | az-datacenter1 | available | | az-datacenter2 | available | +----------------+-------------+", "source ~/stackrc", "vi /home/stack/templates/my-neutron-environment.yaml", "parameter_defaults: NeutronDefaultAvailabilityZones: 'az-central,az-datacenter2,az-datacenter1'", "parameter_defaults: NeutronDefaultAvailabilityZones: 'az-central,az-datacenter2,az-datacenter1' ControllerCentralParameters: OVNCMSOptions: 'enable-chassis-as-gw' OVNAvailabilityZone: 'az-central,az-datacenter2,az-datacenter1' NetworkerDatacenter1Parameters: OVNCMSOptions: 'enable-chassis-as-gw' OVNAvailabilityZone: 'az-datacenter1' NetworkerDatacenter2Parameters: OVNCMSOptions: 'enable-chassis-as-gw' OVNAvailabilityZone: 'az-datacenter2'", "parameter_defaults: NeutronDefaultAvailabilityZones: 'az-central,az-datacenter2,az-datacenter1' ControllerCentralParameters: OVNCMSOptions: 'enable-chassis-as-gw' OVNAvailabilityZone: 'az-central,az-datacenter2,az-datacenter1' NetworkerDCN1Parameters: OVNCMSOptions: 'enable-chassis-as-gw' OVNAvailabilityZone: 'az-datacenter1' NetworkerDCN2Parameters: OVNCMSOptions: 'enable-chassis-as-gw' OVNAvailabilityZone: 'az-datacenter2' NeutronRouterSchedulerDriver: 'neutron.scheduler.l3_agent_scheduler.AZLeastRoutersScheduler'", "openstack overcloud deploy --templates -e <your-environment-files> -e /usr/share/openstack-tripleo-heat-templates/environments/services/ my-neutron-environment.yaml", "openstack availability zone list", "+----------------+-------------+ | Zone Name | Zone Status | +----------------+-------------+ | az-central | available | | az-datacenter1 | available | | az-datacenter2 | available | +----------------+-------------+", "openstack network create --availability-zone-hint zone-1 --availability-zone-hint zone-2 net1", "+---------------------------+--------------------------------------+ | Field | Value | +---------------------------+--------------------------------------+ | admin_state_up | UP | | availability_zone_hints | zone-1 | | | zone-2 | | availability_zones | | | created_at | 2021-07-31T22:14:12Z | | description | | | headers | | | id | ad88e059-e7fa-4cf7-8857-6731a2a3a554 | | ipv4_address_scope | None | | ipv6_address_scope | None | | mtu | 1450 | | name | net1 | | port_security_enabled | True | | project_id | cfd1889ac7d64ad891d4f20aef9f8d7c | | provider:network_type | vxlan | | 
provider:physical_network | None | | provider:segmentation_id | 77 | | revision_number | 3 | | router:external | Internal | | shared | False | | status | ACTIVE | | subnets | | | tags | [] | | updated_at | 2021-07-31T22:14:13Z | +---------------------------+--------------------------------------+", "openstack router create --ha --availability-zone-hint zone-1 --availability-zone-hint zone-2 router1", "+-------------------------+--------------------------------------+ | Field | Value | +-------------------------+--------------------------------------+ | admin_state_up | UP | | availability_zone_hints | zone-1 | | | zone-2 | | availability_zones | | | created_at | 2021-07-31T22:16:54Z | | description | | | distributed | False | | external_gateway_info | null | | flavor_id | None | | ha | False | | headers | | | id | ced10262-6cfe-47c1-8847-cd64276a868c | | name | router1 | | project_id | cfd1889ac7d64ad891d4f20aef9f8d7c | | revision_number | 3 | | routes | | | status | ACTIVE | | tags | [] | | updated_at | 2021-07-31T22:16:56Z | +-------------------------+--------------------------------------+", "openstack network show net1", "+---------------------------+--------------------------------------+ | Field | Value | +---------------------------+--------------------------------------+ | admin_state_up | UP | | availability_zone_hints | zone-1 | | | zone-2 | | availability_zones | zone-1 | | | zone-2 | | created_at | 2021-07-31T22:14:12Z | | description | | | headers | | | id | ad88e059-e7fa-4cf7-8857-6731a2a3a554 | | ipv4_address_scope | None | | ipv6_address_scope | None | | mtu | 1450 | | name | net1 | | port_security_enabled | True | | project_id | cfd1889ac7d64ad891d4f20aef9f8d7c | | provider:network_type | vxlan | | provider:physical_network | None | | provider:segmentation_id | 77 | | revision_number | 3 | | router:external | Internal | | shared | False | | status | ACTIVE | | subnets | | | tags | [] | | updated_at | 2021-07-31T22:14:13Z | +---------------------------+--------------------------------------+" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/configuring_red_hat_openstack_platform_networking/use-azs-make-network-nodes-ha_rhosp-network
Chapter 375. XMPP Component
Chapter 375. XMPP Component Available as of Camel version 1.0 The xmpp: component implements an XMPP (Jabber) transport. Maven users will need to add the following dependency to their pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-xmpp</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency> 375.1. URI format xmpp://[login@]hostname[:port][/participant][?Options] The component supports both room based and private person-person conversations. The component supports both producer and consumer (you can get messages from XMPP or send messages to XMPP). Consumer mode supports rooms starting. You can append query options to the URI in the following format, ?option=value&option=value&... 375.2. Options The XMPP component has no options. The XMPP endpoint is configured using URI syntax: with the following path and query parameters: 375.2.1. Path Parameters (3 parameters): Name Description Default Type host Required Hostname for the chat server String port Required Port number for the chat server int participant JID (Jabber ID) of person to receive messages. room parameter has precedence over participant. String 375.2.2. Query Parameters (18 parameters): Name Description Default Type login (common) Whether to login the user. true boolean nickname (common) Use nickname when joining room. If room is specified and nickname is not, user will be used for the nickname. String pubsub (common) Accept pubsub packets on input, default is false false boolean room (common) If this option is specified, the component will connect to MUC (Multi User Chat). Usually, the domain name for MUC is different from the login domain. For example, if you are supermanjabber.org and want to join the krypton room, then the room URL is kryptonconference.jabber.org. Note the conference part. It is not a requirement to provide the full room JID. If the room parameter does not contain the symbol, the domain part will be discovered and added by Camel String serviceName (common) The name of the service you are connecting to. For Google Talk, this would be gmail.com. String testConnectionOnStartup (common) Specifies whether to test the connection on startup. This is used to ensure that the XMPP client has a valid connection to the XMPP server when the route starts. Camel throws an exception on startup if a connection cannot be established. When this option is set to false, Camel will attempt to establish a lazy connection when needed by a producer, and will poll for a consumer connection until the connection is established. Default is true. true boolean createAccount (common) If true, an attempt to create an account will be made. Default is false. false boolean resource (common) XMPP resource. The default is Camel. Camel String bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean connectionPollDelay (consumer) The amount of time in seconds between polls (in seconds) to verify the health of the XMPP connection, or between attempts to establish an initial consumer connection. Camel will try to re-establish a connection if it has become inactive. 
Default is 10 seconds. 10 int doc (consumer) Set a doc header on the IN message containing a Document form of the incoming packet; default is true if presence or pubsub are true, otherwise false false boolean exceptionHandler (consumer) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer) Sets the exchange pattern when the consumer creates an exchange. ExchangePattern connectionConfig (advanced) To use an existing connection configuration. Currently org.jivesoftware.smack.tcp.XMPPTCPConnectionConfiguration is only supported (XMPP over TCP). ConnectionConfiguration synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean headerFilterStrategy (filter) To use a custom HeaderFilterStrategy to filter header to and from Camel message. HeaderFilterStrategy password (security) Password for login String user (security) User name (without server name). If not specified, anonymous login will be attempted. String 375.3. Spring Boot Auto-Configuration The component supports 2 options, which are listed below. Name Description Default Type camel.component.xmpp.enabled Enable xmpp component true Boolean camel.component.xmpp.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean 375.4. Headers and setting Subject or Language Camel sets the message IN headers as properties on the XMPP message. You can configure a HeaderFilterStategy if you need custom filtering of headers. The Subject and Language of the XMPP message are also set if they are provided as IN headers. 375.5. Examples User superman to join room krypton at jabber server with password, secret : xmpp://[email protected]/[email protected]&password=secret User superman to send messages to joker : xmpp://[email protected]/[email protected]?password=secret Routing example in Java: from("timer://kickoff?period=10000"). setBody(constant("I will win!\n Your Superman.")). to("xmpp://[email protected]/[email protected]?password=secret"); Consumer configuration, which writes all messages from joker into the queue, evil.talk . from("xmpp://[email protected]/[email protected]?password=secret"). to("activemq:evil.talk"); Consumer configuration, which listens to room messages: from("xmpp://[email protected]/?password=secret&[email protected]"). to("activemq:krypton.talk"); Room in short notation (no domain part): from("xmpp://[email protected]/?password=secret&room=krypton"). to("activemq:krypton.talk"); When connecting to the Google Chat service, you'll need to specify the serviceName as well as your credentials: from("direct:start"). to("xmpp://talk.google.com:5222/[email protected]?serviceName=gmail.com&user=fromuser&password=secret"). to("mock:result"); 375.6. See Also Configuring Camel Component Endpoint Getting Started
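Supplementing Section 375.4 above, the following sketch shows one way a route can supply the Subject and Language headers before producing to an XMPP endpoint. It is illustrative only: the header keys are taken literally from the text above, and the JIDs, password, and timer endpoint are placeholders rather than values from this guide.

import org.apache.camel.builder.RouteBuilder;

// Illustrative route: copies Subject and Language IN headers onto the outgoing XMPP message.
// All endpoint addresses and credentials below are placeholders.
public class XmppSubjectRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("timer://status?period=60000")
            .setBody(constant("Nightly build finished."))
            .setHeader("Subject", constant("Build report"))
            .setHeader("Language", constant("en"))
            .to("xmpp://builds@jabber.example.org/ops@jabber.example.org?password=secret");
    }
}

Headers that should not be propagated to the XMPP message can be filtered with a custom HeaderFilterStrategy, as noted in the options table above.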
[ "<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-xmpp</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>", "xmpp://[login@]hostname[:port][/participant][?Options]", "xmpp:host:port/participant", "xmpp://[email protected]/[email protected]&password=secret", "xmpp://[email protected]/[email protected]?password=secret", "from(\"timer://kickoff?period=10000\"). setBody(constant(\"I will win!\\n Your Superman.\")). to(\"xmpp://[email protected]/[email protected]?password=secret\");", "from(\"xmpp://[email protected]/[email protected]?password=secret\"). to(\"activemq:evil.talk\");", "from(\"xmpp://[email protected]/?password=secret&[email protected]\"). to(\"activemq:krypton.talk\");", "from(\"xmpp://[email protected]/?password=secret&room=krypton\"). to(\"activemq:krypton.talk\");", "from(\"direct:start\"). to(\"xmpp://talk.google.com:5222/[email protected]?serviceName=gmail.com&user=fromuser&password=secret\"). to(\"mock:result\");" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/xmpp-component
2.4. What to Monitor?
2.4. What to Monitor? As stated earlier, the resources present in every system are CPU power, bandwidth, memory, and storage. At first glance, it would seem that monitoring would need only consist of examining these four different things. Unfortunately, it is not that simple. For example, consider a disk drive. What things might you want to know about its performance? How much free space is available? How many I/O operations on average does it perform each second? How long on average does it take each I/O operation to be completed? How many of those I/O operations are reads? How many are writes? What is the average amount of data read/written with each I/O? There are more ways of studying disk drive performance; these points have only scratched the surface. The main concept to keep in mind is that there are many different types of data for each resource. The following sections explore the types of utilization information that would be helpful for each of the major resource types. 2.4.1. Monitoring CPU Power In its most basic form, monitoring CPU power can be no more difficult than determining if CPU utilization ever reaches 100%. If CPU utilization stays below 100%, no matter what the system is doing, there is additional processing power available for more work. However, it is a rare system that does not reach 100% CPU utilization at least some of the time. At that point it is important to examine more detailed CPU utilization data. By doing so, it becomes possible to start determining where the majority of your processing power is being consumed. Here are some of the more popular CPU utilization statistics: User Versus System The percentage of time spent performing user-level processing versus system-level processing can point out whether a system's load is primarily due to running applications or due to operating system overhead. High user-level percentages tend to be good (assuming users are not experiencing unsatisfactory performance), while high system-level percentages tend to point toward problems that will require further investigation. Context Switches A context switch happens when the CPU stops running one process and starts running another. Because each context switch requires the operating system to take control of the CPU, excessive context switches and high levels of system-level CPU consumption tend to go together. Interrupts As the name implies, interrupts are situations where the processing being performed by the CPU is abruptly changed. Interrupts generally occur due to hardware activity (such as an I/O device completing an I/O operation) or due to software (such as software interrupts that control application processing). Because interrupts must be serviced at a system level, high interrupt rates lead to higher system-level CPU consumption. Runnable Processes A process may be in different states. For example, it may be: Waiting for an I/O operation to complete Waiting for the memory management subsystem to handle a page fault In these cases, the process has no need for the CPU. However, eventually the process state changes, and the process becomes runnable. As the name implies, a runnable process is one that is capable of getting work done as soon as it is scheduled to receive CPU time. However, if more than one process is runnable at any given time, all but one [4] of the runnable processes must wait for their turn at the CPU. By monitoring the number of runnable processes, it is possible to determine how CPU-bound your system is. 
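The original chapter is deliberately tool-agnostic, so the following is only an illustration and not part of the guide: a minimal Java sketch that prints several of the counters discussed above (user versus system time, context switches, interrupts, and runnable processes) by reading the Linux /proc/stat file. It assumes a Linux system with /proc mounted; the counters are cumulative since boot, so sampling twice and subtracting gives rates.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;

// Minimal sketch: prints raw counters from /proc/stat that correspond to the
// statistics discussed above. Field layout assumes a Linux kernel /proc/stat.
public class CpuStats {
    public static void main(String[] args) throws IOException {
        List<String> lines = Files.readAllLines(Paths.get("/proc/stat"));
        for (String line : lines) {
            String[] f = line.split("\\s+");
            switch (f[0]) {
                case "cpu":           // aggregate jiffies: user, nice, system, idle, ...
                    System.out.println("user=" + f[1] + " system=" + f[3] + " idle=" + f[4]);
                    break;
                case "ctxt":          // total context switches since boot
                    System.out.println("context switches=" + f[1]);
                    break;
                case "intr":          // first field is the total interrupt count
                    System.out.println("interrupts=" + f[1]);
                    break;
                case "procs_running": // processes currently runnable
                    System.out.println("runnable processes=" + f[1]);
                    break;
            }
        }
    }
}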
Other performance metrics that reflect an impact on CPU utilization tend to include different services the operating system provides to processes. They may include statistics on memory management, I/O processing, and so on. These statistics also reveal that, when system performance is monitored, there are no boundaries between the different statistics. In other words, CPU utilization statistics may end up pointing to a problem in the I/O subsystem, or memory utilization statistics may reveal an application design flaw. Therefore, when monitoring system performance, it is not possible to examine any one statistic in complete isolation; only by examining the overall picture is it possible to extract meaningful information from any performance statistics you gather. [4] Assuming a single-processor computer system.
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/introduction_to_system_administration/s1-resource-what-to-monitor
Chapter 7. User Storage SPI
Chapter 7. User Storage SPI You can use the User Storage SPI to write extensions to Red Hat Single Sign-On to connect to external user databases and credential stores. The built-in LDAP and ActiveDirectory support is an implementation of this SPI in action. Out of the box, Red Hat Single Sign-On uses its local database to create, update, and look up users and validate credentials. Often though, organizations have existing external proprietary user databases that they cannot migrate to Red Hat Single Sign-On's data model. For those situations, application developers can write implementations of the User Storage SPI to bridge the external user store and the internal user object model that Red Hat Single Sign-On uses to log in users and manage them. When the Red Hat Single Sign-On runtime needs to look up a user, such as when a user is logging in, it performs a number of steps to locate the user. It first looks to see if the user is in the user cache; if the user is found it uses that in-memory representation. Then it looks for the user within the Red Hat Single Sign-On local database. If the user is not found, it then loops through User Storage SPI provider implementations to perform the user query until one of them returns the user the runtime is looking for. The provider queries the external user store for the user and maps the external data representation of the user to Red Hat Single Sign-On's user metamodel. User Storage SPI provider implementations can also perform complex criteria queries, perform CRUD operations on users, validate and manage credentials, or perform bulk updates of many users at once. It depends on the capabilities of the external store. User Storage SPI provider implementations are packaged and deployed similarly to (and often are) Java EE components. They are not enabled by default, but instead must be enabled and configured per realm under the User Federation tab in the administration console. 7.1. Provider Interfaces When building an implementation of the User Storage SPI you have to define a provider class and a provider factory. Provider class instances are created per transaction by provider factories. Provider classes do all the heavy lifting of user lookup and other user operations. They must implement the org.keycloak.storage.UserStorageProvider interface. package org.keycloak.storage; public interface UserStorageProvider extends Provider { /** * Callback when a realm is removed. Implement this if, for example, you want to do some * cleanup in your user storage when a realm is removed * * @param realm */ default void preRemove(RealmModel realm) { } /** * Callback when a group is removed. Allows you to do things like remove a user * group mapping in your external store if appropriate * * @param realm * @param group */ default void preRemove(RealmModel realm, GroupModel group) { } /** * Callback when a role is removed. Allows you to do things like remove a user * role mapping in your external store if appropriate * @param realm * @param role */ default void preRemove(RealmModel realm, RoleModel role) { } } You may be thinking that the UserStorageProvider interface is pretty sparse? You'll see later in this chapter that there are other mix-in interfaces your provider class may implement to support the meat of user integration. UserStorageProvider instances are created once per transaction. When the transaction is complete, the UserStorageProvider.close() method is invoked and the instance is then garbage collected. Instances are created by provider factories. 
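To make the lifecycle described above concrete, here is a minimal, hypothetical provider skeleton (the class and field names are not from this chapter). It shows where the per-transaction state lives and where the preRemove() and close() callbacks fit.

import org.keycloak.component.ComponentModel;
import org.keycloak.models.GroupModel;
import org.keycloak.models.KeycloakSession;
import org.keycloak.models.RealmModel;
import org.keycloak.storage.UserStorageProvider;

// Hypothetical skeleton, not part of the SPI itself: shows where per-transaction
// state lives and where the lifecycle callbacks described above fit in.
public class SkeletonUserStorageProvider implements UserStorageProvider {

    private final KeycloakSession session;
    private final ComponentModel model;

    public SkeletonUserStorageProvider(KeycloakSession session, ComponentModel model) {
        this.session = session;   // created once per transaction by the factory
        this.model = model;       // carries the per-realm configuration
    }

    @Override
    public void preRemove(RealmModel realm) {
        // e.g. drop external data kept for this realm
    }

    @Override
    public void preRemove(RealmModel realm, GroupModel group) {
        // e.g. remove user-group mappings kept in the external store
    }

    @Override
    public void close() {
        // called when the transaction completes; release connections here
    }
}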
Provider factories implement the org.keycloak.storage.UserStorageProviderFactory interface. package org.keycloak.storage; /** * @author <a href="mailto:[email protected]">Bill Burke</a> * @version USDRevision: 1 USD */ public interface UserStorageProviderFactory<T extends UserStorageProvider> extends ComponentFactory<T, UserStorageProvider> { /** * This is the name of the provider and will be shown in the admin console as an option. * * @return */ @Override String getId(); /** * called per Keycloak transaction. * * @param session * @param model * @return */ T create(KeycloakSession session, ComponentModel model); ... } Provider factory classes must specify the concrete provider class as a template parameter when implementing the UserStorageProviderFactory . This is a must as the runtime will introspect this class to scan for its capabilities (the other interfaces it implements). So for example, if your provider class is named FileProvider , then the factory class should look like this: public class FileProviderFactory implements UserStorageProviderFactory<FileProvider> { public String getId() { return "file-provider"; } public FileProvider create(KeycloakSession session, ComponentModel model) { ... } The getId() method returns the name of the User Storage provider. This id will be displayed in the admin console's User Federation page when you want to enable the provider for a specific realm. The create() method is responsible for allocating an instance of the provider class. It takes a org.keycloak.models.KeycloakSession parameter. This object can be used to look up other information and metadata as well as provide access to various other components within the runtime. The ComponentModel parameter represents how the provider was enabled and configured within a specific realm. It contains the instance id of the enabled provider as well as any configuration you may have specified for it when you enabled through the admin console. The UserStorageProviderFactory has other capabilities as well which we will go over later in this chapter. 7.2. Provider Capability Interfaces If you have examined the UserStorageProvider interface closely you might notice that it does not define any methods for locating or managing users. These methods are actually defined in other capability interfaces depending on what scope of capabilities your external user store can provide and execute on. For example, some external stores are read-only and can only do simple queries and credential validation. You will only be required to implement the capability interfaces for the features you are able to. You can implement these interfaces: SPI Description org.keycloak.storage.user.UserLookupProvider This interface is required if you want to be able to log in with users from this external store. Most (all?) providers implement this interface. org.keycloak.storage.user.UserQueryProvider Defines complex queries that are used to locate one or more users. You must implement this interface if you want to view and manage users from the administration console. org.keycloak.storage.user.UserRegistrationProvider Implement this interface if your provider supports adding and removing users. org.keycloak.storage.user.UserBulkUpdateProvider Implement this interface if your provider supports bulk update of a set of users. org.keycloak.credential.CredentialInputValidator Implement this interface if your provider can validate one or more different credential types (for example, if your provider can validate a password). 
org.keycloak.credential.CredentialInputUpdater Implement this interface if your provider supports updating one or more different credential types. 7.3. Model Interfaces Most of the methods defined in the capability interfaces either return or are passed in representations of a user. These representations are defined by the org.keycloak.models.UserModel interface. App developers are required to implement this interface. It provides a mapping between the external user store and the user metamodel that Red Hat Single Sign-On uses. package org.keycloak.models; public interface UserModel extends RoleMapperModel { String getId(); String getUsername(); void setUsername(String username); String getFirstName(); void setFirstName(String firstName); String getLastName(); void setLastName(String lastName); String getEmail(); void setEmail(String email); ... } UserModel implementations provide access to read and update metadata about the user including things like username, name, email, role and group mappings, as well as other arbitrary attributes. There are other model classes within the org.keycloak.models package that represent other parts of the Red Hat Single Sign-On metamodel: RealmModel , RoleModel , GroupModel , and ClientModel . 7.3.1. Storage Ids One important method of UserModel is the getId() method. When implementing UserModel developers must be aware of the user id format. The format must be: The Red Hat Single Sign-On runtime often has to look up users by their user id. The user id contains enough information so that the runtime does not have to query every single UserStorageProvider in the system to find the user. The component id is the id returned from ComponentModel.getId() . The ComponentModel is passed in as a parameter when creating the provider class so you can get it from there. The external id is information your provider class needs to find the user in the external store. This is often a username or a uid. For example, it might look something like this: When the runtime does a lookup by id, the id is parsed to obtain the component id. The component id is used to locate the UserStorageProvider that was originally used to load the user. That provider is then passed the id. The provider again parses the id to obtain the external id and it will use to locate the user in external user storage. 7.4. Packaging and Deployment User Storage providers are packaged in a JAR and deployed or undeployed to the Red Hat Single Sign-On runtime in the same way you would deploy something in the JBoss EAP application server. You can either copy the JAR directly to the standalone/deployments/ directory of the server, or use the JBoss CLI to execute the deployment. In order for Red Hat Single Sign-On to recognize the provider, you need to add a file to the JAR: META-INF/services/org.keycloak.storage.UserStorageProviderFactory . This file must contain a line-separated list of fully qualified classnames of the UserStorageProviderFactory implementations: Red Hat Single Sign-On supports hot deployment of these provider JARs. You'll also see later in this chapter that you can package it within and as Java EE components. 7.5. Simple Read-Only, Lookup Example To illustrate the basics of implementing the User Storage SPI let's walk through a simple example. In this chapter you'll see the implementation of a simple UserStorageProvider that looks up users in a simple property file. The property file contains username and password definitions and is hardcoded to a specific location on the classpath. 
The provider will be able to look up the user by ID and username and also be able to validate passwords. Users that originate from this provider will be read-only. 7.5.1. Provider Class The first thing we will walk through is the UserStorageProvider class. public class PropertyFileUserStorageProvider implements UserStorageProvider, UserLookupProvider, CredentialInputValidator, CredentialInputUpdater { ... } Our provider class, PropertyFileUserStorageProvider , implements many interfaces. It implements the UserStorageProvider as that is a base requirement of the SPI. It implements the UserLookupProvider interface because we want to be able to log in with users stored by this provider. It implements the CredentialInputValidator interface because we want to be able to validate passwords entered in using the login screen. Our property file is read-only. We implement the CredentialInputUpdater because we want to post an error condition when the user attempts to update his password. protected KeycloakSession session; protected Properties properties; protected ComponentModel model; // map of loaded users in this transaction protected Map<String, UserModel> loadedUsers = new HashMap<>(); public PropertyFileUserStorageProvider(KeycloakSession session, ComponentModel model, Properties properties) { this.session = session; this.model = model; this.properties = properties; } The constructor for this provider class is going to store the reference to the KeycloakSession , ComponentModel , and property file. We'll use all of these later. Also notice that there is a map of loaded users. Whenever we find a user we will store it in this map so that we avoid re-creating it again within the same transaction. This is a good practice to follow as many providers will need to do this (that is, any provider that integrates with JPA). Remember also that provider class instances are created once per transaction and are closed after the transaction completes. 7.5.1.1. UserLookupProvider Implementation @Override public UserModel getUserByUsername(String username, RealmModel realm) { UserModel adapter = loadedUsers.get(username); if (adapter == null) { String password = properties.getProperty(username); if (password != null) { adapter = createAdapter(realm, username); loadedUsers.put(username, adapter); } } return adapter; } protected UserModel createAdapter(RealmModel realm, String username) { return new AbstractUserAdapter(session, realm, model) { @Override public String getUsername() { return username; } }; } @Override public UserModel getUserById(String id, RealmModel realm) { StorageId storageId = new StorageId(id); String username = storageId.getExternalId(); return getUserByUsername(username, realm); } @Override public UserModel getUserByEmail(String email, RealmModel realm) { return null; } The getUserByUsername() method is invoked by the Red Hat Single Sign-On login page when a user logs in. In our implementation we first check the loadedUsers map to see if the user has already been loaded within this transaction. If it hasn't been loaded we look in the property file for the username. If it exists we create an implementation of UserModel , store it in loadedUsers for future reference, and return this instance. The createAdapter() method uses the helper class org.keycloak.storage.adapter.AbstractUserAdapter . This provides a base implementation for UserModel . It automatically generates a user id based on the required storage id format using the username of the user as the external id. 
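To make the generated id concrete: federated user ids are composed as f:&lt;component id&gt;:&lt;external id&gt;, which is the layout the StorageId helper used in getUserById() parses. A small sketch follows; the component id value is illustrative only.

import org.keycloak.storage.StorageId;

// Illustrative values only. The generated id embeds the provider's component id
// and the external id (here, the username), so a lookup by id can be routed back
// to the right provider without querying every store.
public class StorageIdExample {
    public static void main(String[] args) {
        String componentId = "3a2bf9c2-component-id";  // ComponentModel.getId() of the enabled provider (placeholder)
        String externalId  = "wburke";                 // username in the property file
        String keycloakId  = "f:" + componentId + ":" + externalId;

        StorageId storageId = new StorageId(keycloakId);
        System.out.println(storageId.getExternalId()); // prints "wburke"
    }
}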
Every get method of AbstractUserAdapter either returns null or empty collections. However, methods that return role and group mappings will return the default roles and groups configured for the realm for every user. Every set method of AbstractUserAdapter will throw an org.keycloak.storage.ReadOnlyException. So if you attempt to modify the user in the administration console, you will get an error. The getUserById() method parses the id parameter using the org.keycloak.storage.StorageId helper class. The StorageId.getExternalId() method is invoked to obtain the username embedded in the id parameter. The method then delegates to getUserByUsername(). Emails are not stored, so the getUserByEmail() method returns null. 7.5.1.2. CredentialInputValidator Implementation Next, let's look at the method implementations for CredentialInputValidator. @Override public boolean isConfiguredFor(RealmModel realm, UserModel user, String credentialType) { String password = properties.getProperty(user.getUsername()); return credentialType.equals(CredentialModel.PASSWORD) && password != null; } @Override public boolean supportsCredentialType(String credentialType) { return credentialType.equals(CredentialModel.PASSWORD); } @Override public boolean isValid(RealmModel realm, UserModel user, CredentialInput input) { if (!supportsCredentialType(input.getType())) return false; String password = properties.getProperty(user.getUsername()); if (password == null) return false; return password.equals(input.getChallengeResponse()); } The isConfiguredFor() method is called by the runtime to determine if a specific credential type is configured for the user. This method checks to see that the password is set for the user. The supportsCredentialType() method returns whether validation is supported for a specific credential type. We check to see if the credential type is password. The isValid() method is responsible for validating passwords. The CredentialInput parameter is really just an abstract interface for all credential types. We make sure that we support the credential type and also that it is an instance of UserCredentialModel. When a user logs in through the login page, the plain text of the password input is put into an instance of UserCredentialModel. The isValid() method checks this value against the plain text password stored in the properties file. A return value of true means the password is valid. 7.5.1.3. CredentialInputUpdater Implementation As noted before, the only reason we implement the CredentialInputUpdater interface in this example is to forbid modifications of user passwords. The reason we have to do this is because otherwise the runtime would allow the password to be overridden in Red Hat Single Sign-On local storage. We'll talk more about this later in this chapter. @Override public boolean updateCredential(RealmModel realm, UserModel user, CredentialInput input) { if (input.getType().equals(CredentialModel.PASSWORD)) throw new ReadOnlyException("user is read only for this update"); return false; } @Override public void disableCredentialType(RealmModel realm, UserModel user, String credentialType) { } @Override public Set<String> getDisableableCredentialTypes(RealmModel realm, UserModel user) { return Collections.EMPTY_SET; } The updateCredential() method just checks to see if the credential type is password. If it is, a ReadOnlyException is thrown. 7.5.2. Provider Factory Implementation Now that the provider class is complete, we turn our attention to the provider factory class.
public class PropertyFileUserStorageProviderFactory implements UserStorageProviderFactory<PropertyFileUserStorageProvider> { public static final String PROVIDER_NAME = "readonly-property-file"; @Override public String getId() { return PROVIDER_NAME; } First thing to notice is that when implementing the UserStorageProviderFactory class, you must pass in the concrete provider class implementation as a template parameter. Here we specify the provider class we defined before: PropertyFileUserStorageProvider . Warning If you do not specify the template parameter, your provider will not function. The runtime does class introspection to determine the capability interfaces that the provider implements. The getId() method identifies the factory in the runtime and will also be the string shown in the admin console when you want to enable a user storage provider for the realm. 7.5.2.1. Initialization private static final Logger logger = Logger.getLogger(PropertyFileUserStorageProviderFactory.class); protected Properties properties = new Properties(); @Override public void init(Config.Scope config) { InputStream is = getClass().getClassLoader().getResourceAsStream("/users.properties"); if (is == null) { logger.warn("Could not find users.properties in classpath"); } else { try { properties.load(is); } catch (IOException ex) { logger.error("Failed to load users.properties file", ex); } } } @Override public PropertyFileUserStorageProvider create(KeycloakSession session, ComponentModel model) { return new PropertyFileUserStorageProvider(session, model, properties); } The UserStorageProviderFactory interface has an optional init() method you can implement. When Red Hat Single Sign-On boots up, only one instance of each provider factory is created. Also at boot time, the init() method is called on each of these factory instances. There's also a postInit() method you can implement as well. After each factory's init() method is invoked, their postInit() methods are called. In our init() method implementation, we find the property file containing our user declarations from the classpath. We then load the properties field with the username and password combinations stored there. The Config.Scope parameter is factory configuration that can be set up within standalone.xml , standalone-ha.xml , or domain.xml . For example, by adding the following to standalone.xml : <spi name="storage"> <provider name="readonly-property-file" enabled="true"> <properties> <property name="path" value="/other-users.properties"/> </properties> </provider> </spi> We can specify the classpath of the user property file instead of hardcoding it. Then you can retrieve the configuration in the PropertyFileUserStorageProviderFactory.init() : public void init(Config.Scope config) { String path = config.get("path"); InputStream is = getClass().getClassLoader().getResourceAsStream(path); ... } 7.5.2.2. Create Method Our last step in creating the provider factory is the create() method. @Override public PropertyFileUserStorageProvider create(KeycloakSession session, ComponentModel model) { return new PropertyFileUserStorageProvider(session, model, properties); } We simply allocate the PropertyFileUserStorageProvider class. This create method will be called once per transaction. 7.5.3. Packaging and Deployment The class files for our provider implementation should be placed in a jar. You also have to declare the provider factory class within the META-INF/services/org.keycloak.storage.UserStorageProviderFactory file. 
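For example, if the factory class from this chapter lived in a hypothetical org.example.federation.properties package (the chapter does not name one), the services file would contain the single line:

org.example.federation.properties.PropertyFileUserStorageProviderFactory

Each additional factory packaged in the same JAR gets its own line in this file.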
Once you create the jar you can deploy it using regular JBoss EAP means: copy the jar into the standalone/deployments/ directory or use the JBoss CLI. 7.5.4. Enabling the Provider in the Administration Console You enable user storage providers per realm within the User Federation page in the administration console. Select the provider we just created from the list: readonly-property-file. It brings you to the configuration page for our provider. We do not have anything to configure, so click Save. When you go back to the main User Federation page, you now see your provider listed. You will now be able to log in with a user declared in the users.properties file. This user will only be able to view the account page after logging in. 7.6. Configuration Techniques Our PropertyFileUserStorageProvider example is a bit contrived. It is hardcoded to a property file that is embedded in the jar of the provider, which is not terribly useful. We might want to make the location of this file configurable per instance of the provider. In other words, we might want to reuse this provider multiple times in different realms and point to completely different user property files. We'll also want to perform this configuration within the administration console UI. The UserStorageProviderFactory has additional methods you can implement that handle provider configuration. You describe the variables you want to configure per provider and the administration console automatically renders a generic input page to gather this configuration. When implemented, callback methods also validate the configuration before it is saved, when a provider is created for the first time, and when it is updated. UserStorageProviderFactory inherits these methods from the org.keycloak.component.ComponentFactory interface. List<ProviderConfigProperty> getConfigProperties(); default void validateConfiguration(KeycloakSession session, RealmModel realm, ComponentModel model) throws ComponentValidationException { } default void onCreate(KeycloakSession session, RealmModel realm, ComponentModel model) { } default void onUpdate(KeycloakSession session, RealmModel realm, ComponentModel model) { } The ComponentFactory.getConfigProperties() method returns a list of org.keycloak.provider.ProviderConfigProperty instances. These instances declare metadata that is needed to render and store each configuration variable of the provider. 7.6.1. Configuration Example Let's expand our PropertyFileUserStorageProviderFactory example to allow you to point a provider instance to a specific file on disk. PropertyFileUserStorageProviderFactory public class PropertyFileUserStorageProviderFactory implements UserStorageProviderFactory<PropertyFileUserStorageProvider> { protected static final List<ProviderConfigProperty> configMetadata; static { configMetadata = ProviderConfigurationBuilder.create() .property().name("path") .type(ProviderConfigProperty.STRING_TYPE) .label("Path") .defaultValue("${jboss.server.config.dir}/example-users.properties") .helpText("File path to properties file") .add().build(); } @Override public List<ProviderConfigProperty> getConfigProperties() { return configMetadata; } The ProviderConfigurationBuilder class is a great helper class to create a list of configuration properties. Here we specify a variable named path that is a String type.
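The builder is not limited to a single property. As a sketch only (the boolean readOnly switch below is hypothetical and not part of the example provider), further variables can be chained before build():

import java.util.List;
import org.keycloak.provider.ProviderConfigProperty;
import org.keycloak.provider.ProviderConfigurationBuilder;

// Hypothetical extension of the example: a second, boolean property chained onto
// the same builder. Only "path" exists in the chapter's provider.
public class ConfigMetadataSketch {
    static final List<ProviderConfigProperty> CONFIG = ProviderConfigurationBuilder.create()
            .property().name("path")
                .type(ProviderConfigProperty.STRING_TYPE)
                .label("Path")
                .defaultValue("${jboss.server.config.dir}/example-users.properties")
                .helpText("File path to properties file")
                .add()
            .property().name("readOnly")
                .type(ProviderConfigProperty.BOOLEAN_TYPE)
                .label("Read-only")
                .defaultValue("true")
                .helpText("Reject password updates when enabled")
                .add()
            .build();
}

The rest of this chapter continues to use only the single path property.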
On the administration console configuration page for this provider, this configuration variable is labeled as Path and has a default value of ${jboss.server.config.dir}/example-users.properties. When you hover over the tooltip of this configuration option, it displays the help text, File path to properties file. The next thing we want to do is verify that this file exists on disk. We do not want to enable an instance of this provider in the realm unless it points to a valid user property file. To do this, we implement the validateConfiguration() method. @Override public void validateConfiguration(KeycloakSession session, RealmModel realm, ComponentModel config) throws ComponentValidationException { String fp = config.getConfig().getFirst("path"); if (fp == null) throw new ComponentValidationException("user property file does not exist"); fp = EnvUtil.replace(fp); File file = new File(fp); if (!file.exists()) { throw new ComponentValidationException("user property file does not exist"); } } In the validateConfiguration() method we get the configuration variable from the ComponentModel and we check to see if that file exists on disk. Notice that we use the org.keycloak.common.util.EnvUtil.replace() method. With this method, any ${} expression within the string is replaced with the corresponding system property value. The ${jboss.server.config.dir} string corresponds to the configuration/ directory of our server and is really useful for this example. The next thing we have to do is remove the old init() method. We do this because user property files are going to be unique per provider instance. We move this logic to the create() method. @Override public PropertyFileUserStorageProvider create(KeycloakSession session, ComponentModel model) { String path = model.getConfig().getFirst("path"); Properties props = new Properties(); try { InputStream is = new FileInputStream(path); props.load(is); is.close(); } catch (IOException e) { throw new RuntimeException(e); } return new PropertyFileUserStorageProvider(session, model, props); } This logic is, of course, inefficient as every transaction reads the entire user property file from disk, but hopefully this illustrates, in a simple way, how to hook in configuration variables. 7.6.2. Configuring the Provider in the Administration Console Now that the configuration is enabled, you can set the path variable when you configure the provider in the administration console. 7.7. Add/Remove User and Query Capability interfaces One thing we have not done with our example is allow it to add and remove users or change passwords. Users defined in our example are also not queryable or viewable in the administration console. To add these enhancements, our example provider must implement the UserQueryProvider and UserRegistrationProvider interfaces. 7.7.1. Implementing UserRegistrationProvider To implement adding and removing users from this particular store, we first have to be able to save our properties file to disk. PropertyFileUserStorageProvider public void save() { String path = model.getConfig().getFirst("path"); path = EnvUtil.replace(path); try { FileOutputStream fos = new FileOutputStream(path); properties.store(fos, ""); fos.close(); } catch (IOException e) { throw new RuntimeException(e); } } Then, the implementation of the addUser() and removeUser() methods becomes simple.
PropertyFileUserStorageProvider public static final String UNSET_PASSWORD="#$!-UNSET-PASSWORD"; @Override public UserModel addUser(RealmModel realm, String username) { synchronized (properties) { properties.setProperty(username, UNSET_PASSWORD); save(); } return createAdapter(realm, username); } @Override public boolean removeUser(RealmModel realm, UserModel user) { synchronized (properties) { if (properties.remove(user.getUsername()) == null) return false; save(); return true; } } Notice that when adding a user we set the password value of the property map to be UNSET_PASSWORD. We do this as we can't have null values for a property in the properties file. We also have to modify the CredentialInputValidator methods to reflect this. The addUser() method will be called if the provider implements the UserRegistrationProvider interface. If your provider has a configuration switch to turn off adding a user, returning null from this method will skip the provider and call the next one. PropertyFileUserStorageProvider @Override public boolean isValid(RealmModel realm, UserModel user, CredentialInput input) { if (!supportsCredentialType(input.getType()) || !(input instanceof UserCredentialModel)) return false; UserCredentialModel cred = (UserCredentialModel)input; String password = properties.getProperty(user.getUsername()); if (password == null || UNSET_PASSWORD.equals(password)) return false; return password.equals(cred.getValue()); } Since we can now save our property file, it also makes sense to allow password updates. PropertyFileUserStorageProvider @Override public boolean updateCredential(RealmModel realm, UserModel user, CredentialInput input) { if (!(input instanceof UserCredentialModel)) return false; if (!input.getType().equals(CredentialModel.PASSWORD)) return false; UserCredentialModel cred = (UserCredentialModel)input; synchronized (properties) { properties.setProperty(user.getUsername(), cred.getValue()); save(); } return true; } We can now also implement disabling a password. PropertyFileUserStorageProvider @Override public void disableCredentialType(RealmModel realm, UserModel user, String credentialType) { if (!credentialType.equals(CredentialModel.PASSWORD)) return; synchronized (properties) { properties.setProperty(user.getUsername(), UNSET_PASSWORD); save(); } } private static final Set<String> disableableTypes = new HashSet<>(); static { disableableTypes.add(CredentialModel.PASSWORD); } @Override public Set<String> getDisableableCredentialTypes(RealmModel realm, UserModel user) { return disableableTypes; } With these methods implemented, you'll now be able to change and disable the password for the user in the administration console. 7.7.2. Implementing UserQueryProvider Without implementing UserQueryProvider the administration console would not be able to view and manage users that were loaded by our example provider. Let's look at implementing this interface.
PropertyFileUserStorageProvider @Override public int getUsersCount(RealmModel realm) { return properties.size(); } @Override public List<UserModel> getUsers(RealmModel realm) { return getUsers(realm, 0, Integer.MAX_VALUE); } @Override public List<UserModel> getUsers(RealmModel realm, int firstResult, int maxResults) { List<UserModel> users = new LinkedList<>(); int i = 0; for (Object obj : properties.keySet()) { if (i++ < firstResult) continue; String username = (String)obj; UserModel user = getUserByUsername(username, realm); users.add(user); if (users.size() >= maxResults) break; } return users; } The getUsers() method iterates over the key set of the property file, delegating to getUserByUsername() to load a user. Notice that we are indexing this call based on the firstResult and maxResults parameter. If your external store does not support pagination, you will have to do similar logic. PropertyFileUserStorageProvider @Override public List<UserModel> searchForUser(String search, RealmModel realm) { return searchForUser(search, realm, 0, Integer.MAX_VALUE); } @Override public List<UserModel> searchForUser(String search, RealmModel realm, int firstResult, int maxResults) { List<UserModel> users = new LinkedList<>(); int i = 0; for (Object obj : properties.keySet()) { String username = (String)obj; if (!username.contains(search)) continue; if (i++ < firstResult) continue; UserModel user = getUserByUsername(username, realm); users.add(user); if (users.size() >= maxResults) break; } return users; } The first declaration of searchForUser() takes a String parameter. This is supposed to be a string that you use to search username and email attributes to find the user. This string can be a substring, which is why we use the String.contains() method when doing our search. PropertyFileUserStorageProvider @Override public List<UserModel> searchForUser(Map<String, String> params, RealmModel realm) { return searchForUser(params, realm, 0, Integer.MAX_VALUE); } @Override public List<UserModel> searchForUser(Map<String, String> params, RealmModel realm, int firstResult, int maxResults) { // only support searching by username String usernameSearchString = params.get("username"); if (usernameSearchString == null) return Collections.EMPTY_LIST; return searchForUser(usernameSearchString, realm, firstResult, maxResults); } The searchForUser() method that takes a Map parameter can search for a user based on first, last name, username, and email. We only store usernames, so we only search based on usernames. We delegate to searchForUser() for this. PropertyFileUserStorageProvider @Override public List<UserModel> getGroupMembers(RealmModel realm, GroupModel group, int firstResult, int maxResults) { return Collections.EMPTY_LIST; } @Override public List<UserModel> getGroupMembers(RealmModel realm, GroupModel group) { return Collections.EMPTY_LIST; } @Override public List<UserModel> searchForUserByUserAttribute(String attrName, String attrValue, RealmModel realm) { return Collections.EMPTY_LIST; } We do not store groups or attributes, so the other methods return an empty list. 7.8. Augmenting External Storage The PropertyFileUserStorageProvider example is really limited. While we will be able to login with users stored in a property file, we won't be able to do much else. If users loaded by this provider need special role or group mappings to fully access particular applications there is no way for us to add additional role mappings to these users. 
You also can't modify or add additional important attributes like email, first and last name. For these types of situations, Red Hat Single Sign-On allows you to augment your external store by storing extra information in Red Hat Single Sign-On's database. This is called federated user storage and is encapsulated within the org.keycloak.storage.federated.UserFederatedStorageProvider class. UserFederatedStorageProvider package org.keycloak.storage.federated; public interface UserFederatedStorageProvider extends Provider { Set<GroupModel> getGroups(RealmModel realm, String userId); void joinGroup(RealmModel realm, String userId, GroupModel group); void leaveGroup(RealmModel realm, String userId, GroupModel group); List<String> getMembership(RealmModel realm, GroupModel group, int firstResult, int max); ... The UserFederatedStorageProvider instance is available on the KeycloakSession.userFederatedStorage() method. It has all different kinds of methods for storing attributes, group and role mappings, different credential types, and required actions. If your external store's datamodel cannot support the full Red Hat Single Sign-On feature set, then this service can fill in the gaps. Red Hat Single Sign-On comes with a helper class org.keycloak.storage.adapter.AbstractUserAdapterFederatedStorage that will delegate every single UserModel method except get/set of username to user federated storage. Override the methods you need to override to delegate to your external storage representations. It is strongly suggested you read the javadoc of this class as it has smaller protected methods you may want to override. Specifically surrounding group membership and role mappings. 7.8.1. Augmentation Example In our PropertyFileUserStorageProvider example, we just need a simple change to our provider to use the AbstractUserAdapterFederatedStorage . PropertyFileUserStorageProvider protected UserModel createAdapter(RealmModel realm, String username) { return new AbstractUserAdapterFederatedStorage(session, realm, model) { @Override public String getUsername() { return username; } @Override public void setUsername(String username) { String pw = (String)properties.remove(username); if (pw != null) { properties.put(username, pw); save(); } } }; } We instead define an anonymous class implementation of AbstractUserAdapterFederatedStorage . The setUsername() method makes changes to the properties file and saves it. 7.9. Import Implementation Strategy When implementing a user storage provider, there's another strategy you can take. Instead of using user federated storage, you can create a user locally in the Red Hat Single Sign-On built-in user database and copy attributes from your external store into this local copy. There are many advantages to this approach. Red Hat Single Sign-On basically becomes a persistence user cache for your external store. Once the user is imported you'll no longer hit the external store thus taking load off of it. If you are moving to Red Hat Single Sign-On as your official user store and deprecating the old external store, you can slowly migrate applications to use Red Hat Single Sign-On. When all applications have been migrated, unlink the imported user, and retire the old legacy external store. There are some obvious disadvantages though to using an import strategy: Looking up a user for the first time will require multiple updates to Red Hat Single Sign-On database. This can be a big performance loss under load and put a lot of strain on the Red Hat Single Sign-On database. 
The user federated storage approach will only store extra data as needed and may never be used depending on the capabilities of your external store. With the import approach, you have to keep local Red Hat Single Sign-On storage and external storage in sync. The User Storage SPI has capability interfaces that you can implement to support synchronization, but this can quickly become painful and messy. To implement the import strategy you simply check to see first if the user has been imported locally. If so return the local user, if not create the user locally and import data from the external store. You can also proxy the local user so that most changes are automatically synchronized. This will be a bit contrived, but we can extend our PropertyFileUserStorageProvider to take this approach. We begin first by modifying the createAdapter() method. PropertyFileUserStorageProvider protected UserModel createAdapter(RealmModel realm, String username) { UserModel local = session.userLocalStorage().getUserByUsername(username, realm); if (local == null) { local = session.userLocalStorage().addUser(realm, username); local.setFederationLink(model.getId()); } return new UserModelDelegate(local) { @Override public void setUsername(String username) { String pw = (String)properties.remove(username); if (pw != null) { properties.put(username, pw); save(); } super.setUsername(username); } }; } In this method we call the KeycloakSession.userLocalStorage() method to obtain a reference to local Red Hat Single Sign-On user storage. We see if the user is stored locally, if not, we add it locally. Do not set the id of the local user. Let Red Hat Single Sign-On automatically generate the id . Also note that we call UserModel.setFederationLink() and pass in the ID of the ComponentModel of our provider. This sets a link between the provider and the imported user. Note When a user storage provider is removed, any user imported by it will also be removed. This is one of the purposes of calling UserModel.setFederationLink() . Another thing to note is that if a local user is linked, your storage provider will still be delegated to for methods that it implements from the CredentialInputValidator and CredentialInputUpdater interfaces. Returning false from a validation or update will just result in Red Hat Single Sign-On seeing if it can validate or update using local storage. Also notice that we are proxying the local user using the org.keycloak.models.utils.UserModelDelegate class. This class is an implementation of UserModel . Every method just delegates to the UserModel it was instantiated with. We override the setUsername() method of this delegate class to synchronize automatically with the property file. For your providers, you can use this to intercept other methods on the local UserModel to perform synchronization with your external store. For example, get methods could make sure that the local store is in sync. Set methods keep the external store in sync with the local one. One thing to note is that the getId() method should always return the id that was auto generated when you created the user locally. You should not return a federated id as shown in the other non-import examples. Note If your provider is implementing the UserRegistrationProvider interface, your removeUser() method does not need to remove the user from local storage. The runtime will automatically perform this operation. Also note that removeUser() will be invoked before it is removed from local storage. 7.9.1. 
ImportedUserValidation Interface If you remember earlier in this chapter, we discussed how querying for a user worked. Local storage is queried first, if the user is found there, then the query ends. This is a problem for our above implementation as we want to proxy the local UserModel so that we can keep usernames in sync. The User Storage SPI has a callback for whenever a linked local user is loaded from the local database. package org.keycloak.storage.user; public interface ImportedUserValidation { /** * If this method returns null, then the user in local storage will be removed * * @param realm * @param user * @return null if user no longer valid */ UserModel validate(RealmModel realm, UserModel user); } Whenever a linked local user is loaded, if the user storage provider class implements this interface, then the validate() method is called. Here you can proxy the local user passed in as a parameter and return it. That new UserModel will be used. You can also optionally do a check to see if the user still exists in the external store. If validate() returns null , then the local user will be removed from the database. 7.9.2. ImportSynchronization Interface With the import strategy you can see that it is possible for the local user copy to get out of sync with external storage. For example, maybe a user has been removed from the external store. The User Storage SPI has an additional interface you can implement to deal with this, org.keycloak.storage.user.ImportSynchronization : package org.keycloak.storage.user; public interface ImportSynchronization { SynchronizationResult sync(KeycloakSessionFactory sessionFactory, String realmId, UserStorageProviderModel model); SynchronizationResult syncSince(Date lastSync, KeycloakSessionFactory sessionFactory, String realmId, UserStorageProviderModel model); } This interface is implemented by the provider factory. Once this interface is implemented by the provider factory, the administration console management page for the provider shows additional options. You can manually force a synchronization by clicking a button. This invokes the ImportSynchronization.sync() method. Also, additional configuration options are displayed that allow you to automatically schedule a synchronization. Automatic synchronizations invoke the syncSince() method. 7.10. User Caches When a user object is loaded by ID, username, or email queries it is cached. When a user object is being cached, it iterates through the entire UserModel interface and pulls this information to a local in-memory-only cache. In a cluster, this cache is still local, but it becomes an invalidation cache. When a user object is modified, it is evicted. This eviction event is propagated to the entire cluster so that the other nodes' user cache is also invalidated. 7.10.1. Managing the user cache You can access the user cache by calling KeycloakSession.userCache() . /** * All these methods effect an entire cluster of Keycloak instances. * * @author <a href="mailto:[email protected]">Bill Burke</a> * @version USDRevision: 1 USD */ public interface UserCache extends UserProvider { /** * Evict user from cache. * * @param user */ void evict(RealmModel realm, UserModel user); /** * Evict users of a specific realm * * @param realm */ void evict(RealmModel realm); /** * Clear cache entirely. * */ void clear(); } There are methods for evicting specific users, users contained in a specific realm, or the entire cache. 7.10.2. 
OnUserCache Callback Interface You might want to cache additional information that is specific to your provider implementation. The User Storage SPI has a callback whenever a user is cached: org.keycloak.models.cache.OnUserCache . public interface OnUserCache { void onCache(RealmModel realm, CachedUserModel user, UserModel delegate); } Your provider class should implement this interface if it wants this callback. The UserModel delegate parameter is the UserModel instance returned by your provider. The CachedUserModel is an expanded UserModel interface. This is the instance that is cached locally in local storage. public interface CachedUserModel extends UserModel { /** * Invalidates the cache for this user and returns a delegate that represents the actual data provider * * @return */ UserModel getDelegateForUpdate(); boolean isMarkedForEviction(); /** * Invalidate the cache for this model * */ void invalidate(); /** * When was the model was loaded from database. * * @return */ long getCacheTimestamp(); /** * Returns a map that contains custom things that are cached along with this model. You can write to this map. * * @return */ ConcurrentHashMap getCachedWith(); } This CachedUserModel interface allows you to evict the user from the cache and get the provider UserModel instance. The getCachedWith() method returns a map that allows you to cache additional information pertaining to the user. For example, credentials are not part of the UserModel interface. If you wanted to cache credentials in memory, you would implement OnUserCache and cache your user's credentials using the getCachedWith() method. 7.10.3. Cache Policies On the administration console management page for your user storage provider, you can specify a unique cache policy. 7.11. Leveraging Java EE The user storage providers can be packaged within any Java EE component if you set up the META-INF/services file correctly to point to your providers. For example, if your provider needs to use third-party libraries, you can package up your provider within an EAR and store these third-party libraries in the lib/ directory of the EAR. Also note that provider JARs can make use of the jboss-deployment-structure.xml file that EJBs, WARS, and EARs can use in a JBoss EAP environment. For more details on this file, see the JBoss EAP documentation. It allows you to pull in external dependencies among other fine-grained actions. Provider implementations are required to be plain java objects. But we also currently support implementing UserStorageProvider classes as Stateful EJBs. This is especially useful if you want to use JPA to connect to a relational store. This is how you would do it: @Stateful @Local(EjbExampleUserStorageProvider.class) public class EjbExampleUserStorageProvider implements UserStorageProvider, UserLookupProvider, UserRegistrationProvider, UserQueryProvider, CredentialInputUpdater, CredentialInputValidator, OnUserCache { @PersistenceContext protected EntityManager em; protected ComponentModel model; protected KeycloakSession session; public void setModel(ComponentModel model) { this.model = model; } public void setSession(KeycloakSession session) { this.session = session; } @Remove @Override public void close() { } ... } You have to define the @Local annotation and specify your provider class there. If you do not do this, EJB will not proxy the user correctly and your provider won't work. You must put the @Remove annotation on the close() method of your provider. 
If you do not, the stateful bean will never be cleaned up and you might eventually see error messages. Implementations of UserStorageProvider are required to be plain Java objects. Your factory class would perform a JNDI lookup of the Stateful EJB in its create() method. public class EjbExampleUserStorageProviderFactory implements UserStorageProviderFactory<EjbExampleUserStorageProvider> { @Override public EjbExampleUserStorageProvider create(KeycloakSession session, ComponentModel model) { try { InitialContext ctx = new InitialContext(); EjbExampleUserStorageProvider provider = (EjbExampleUserStorageProvider)ctx.lookup( "java:global/user-storage-jpa-example/" + EjbExampleUserStorageProvider.class.getSimpleName()); provider.setModel(model); provider.setSession(session); return provider; } catch (Exception e) { throw new RuntimeException(e); } } This example also assumes that you have defined a JPA deployment in the same JAR as the provider. This means a persistence.xml file as well as any JPA @Entity classes. Warning When using JPA any additional datasource must be an XA datasource. The Red Hat Single Sign-On datasource is not an XA datasource. If you interact with two or more non-XA datasources in the same transaction, the server returns an error message. Only one non-XA resource is permitted in a single transaction. See the JBoss EAP manual for more details on deploying an XA datasource. CDI is not supported. 7.12. REST Management API You can create, remove, and update your user storage provider deployments through the administrator REST API. The User Storage SPI is built on top of a generic component interface so you will be using that generic API to manage your providers. The REST Component API lives under your realm admin resource. We will only show this REST API interaction with the Java client. Hopefully you can extract how to do this from curl from this API. public interface ComponentsResource { @GET @Produces(MediaType.APPLICATION_JSON) public List<ComponentRepresentation> query(); @GET @Produces(MediaType.APPLICATION_JSON) public List<ComponentRepresentation> query(@QueryParam("parent") String parent); @GET @Produces(MediaType.APPLICATION_JSON) public List<ComponentRepresentation> query(@QueryParam("parent") String parent, @QueryParam("type") String type); @GET @Produces(MediaType.APPLICATION_JSON) public List<ComponentRepresentation> query(@QueryParam("parent") String parent, @QueryParam("type") String type, @QueryParam("name") String name); @POST @Consumes(MediaType.APPLICATION_JSON) Response add(ComponentRepresentation rep); @Path("{id}") ComponentResource component(@PathParam("id") String id); } public interface ComponentResource { @GET public ComponentRepresentation toRepresentation(); @PUT @Consumes(MediaType.APPLICATION_JSON) public void update(ComponentRepresentation rep); @DELETE public void remove(); } To create a user storage provider, you must specify the provider id, a provider type of the string org.keycloak.storage.UserStorageProvider , as well as the configuration. import org.keycloak.admin.client.Keycloak; import org.keycloak.representations.idm.RealmRepresentation; ... 
Keycloak keycloak = Keycloak.getInstance( "http://localhost:8080/auth", "master", "admin", "password", "admin-cli"); RealmResource realmResource = keycloak.realm("master"); RealmRepresentation realm = realmResource.toRepresentation(); ComponentRepresentation component = new ComponentRepresentation(); component.setName("home"); component.setProviderId("readonly-property-file"); component.setProviderType("org.keycloak.storage.UserStorageProvider"); component.setParentId(realm.getId()); component.setConfig(new MultivaluedHashMap()); component.getConfig().putSingle("path", "~/users.properties"); realmResource.components().add(component); // retrieve a component List<ComponentRepresentation> components = realmResource.components().query(realm.getId(), "org.keycloak.storage.UserStorageProvider", "home"); component = components.get(0); // Update a component component.getConfig().putSingle("path", "~/my-users.properties"); realmResource.components().component(component.getId()).update(component); // Remove a component realmREsource.components().component(component.getId()).remove(); 7.13. Migrating from an Earlier User Federation SPI Note This chapter is only applicable if you have implemented a provider using the earlier (and now removed) User Federation SPI. In Keycloak version 2.4.0 and earlier there was a User Federation SPI. Red Hat Single Sign-On version 7.0, although unsupported, had this earlier SPI available as well. This earlier User Federation SPI has been removed from Keycloak version 2.5.0 and Red Hat Single Sign-On version 7.1. However, if you have written a provider with this earlier SPI, this chapter discusses some strategies you can use to port it. 7.13.1. Import vs. Non-Import The earlier User Federation SPI required you to create a local copy of a user in the Red Hat Single Sign-On's database and import information from your external store to the local copy. However, this is no longer a requirement. You can still port your earlier provider as-is, but you should consider whether a non-import strategy might be a better approach. Advantages of the import strategy: Red Hat Single Sign-On basically becomes a persistence user cache for your external store. Once the user is imported you'll no longer hit the external store, thus taking load off of it. If you are moving to Red Hat Single Sign-On as your official user store and deprecating the earlier external store, you can slowly migrate applications to use Red Hat Single Sign-On. When all applications have been migrated, unlink the imported user, and retire the earlier legacy external store. There are some obvious disadvantages though to using an import strategy: Looking up a user for the first time will require multiple updates to Red Hat Single Sign-On database. This can be a big performance loss under load and put a lot of strain on the Red Hat Single Sign-On database. The user federated storage approach will only store extra data as needed and might never be used depending on the capabilities of your external store. With the import approach, you have to keep local Red Hat Single Sign-On storage and external storage in sync. The User Storage SPI has capability interfaces that you can implement to support synchronization, but this can quickly become painful and messy. 7.13.2. UserFederationProvider vs. UserStorageProvider The first thing to notice is that UserFederationProvider was a complete interface. You implemented every method in this interface. 
However, UserStorageProvider has instead broken up this interface into multiple capability interfaces that you implement as needed. UserFederationProvider.getUserByUsername() and getUserByEmail() have exact equivalents in the new SPI. The difference between the two is how you import. If you are going to continue with an import strategy, you no longer call KeycloakSession.userStorage().addUser() to create the user locally. Instead you call KeycloakSession.userLocalStorage().addUser() . The userStorage() method no longer exists. The UserFederationProvider.validateAndProxy() method has been moved to an optional capability interface, ImportedUserValidation . You want to implement this interface if you are porting your earlier provider as-is. Also note that in the earlier SPI, this method was called every time the user was accessed, even if the local user is in the cache. In the later SPI, this method is only called when the local user is loaded from local storage. If the local user is cached, then the ImportedUserValidation.validate() method is not called at all. The UserFederationProvider.isValid() method no longer exists in the later SPI. The UserFederationProvider methods synchronizeRegistrations() , registerUser() , and removeUser() have been moved to the UserRegistrationProvider capability interface. This new interface is optional to implement so if your provider does not support creating and removing users, you don't have to implement it. If your earlier provider had switch to toggle support for registering new users, this is supported in the new SPI, returning null from UserRegistrationProvider.addUser() if the provider doesn't support adding users. The earlier UserFederationProvider methods centered around credentials are now encapsulated in the CredentialInputValidator and CredentialInputUpdater interfaces, which are also optional to implement depending on if you support validating or updating credentials. Credential management used to exist in UserModel methods. These also have been moved to the CredentialInputValidator and CredentialInputUpdater interfaces. One thing to note that if you do not implement the CredentialInputUpdater interface, then any credentials provided by your provider can be overridden locally in Red Hat Single Sign-On storage. So if you want your credentials to be read-only, implement the CredentialInputUpdater.updateCredential() method and return a ReadOnlyException . The UserFederationProvider query methods such as searchByAttributes() and getGroupMembers() are now encapsulated in an optional interface UserQueryProvider . If you do not implement this interface, then users will not be viewable in the admin console. You'll still be able to login though. 7.13.3. UserFederationProviderFactory vs. UserStorageProviderFactory The synchronization methods in the earlier SPI are now encapsulated within an optional ImportSynchronization interface. If you have implemented synchronization logic, then have your new UserStorageProviderFactory implement the ImportSynchronization interface. 7.13.4. Upgrading to a New Model The User Storage SPI instances are stored in a different set of relational tables. Red Hat Single Sign-On automatically runs a migration script. If any earlier User Federation providers are deployed for a realm, they are converted to the later storage model as is, including the id of the data. This migration will only happen if a User Storage provider exists with the same provider ID (i.e., "ldap", "kerberos") as the earlier User Federation provider. 
So, knowing this, there are two approaches you can take. The first option is to remove the earlier provider in your earlier Red Hat Single Sign-On deployment. This will remove the local linked copies of all users you imported. Then, when you upgrade Red Hat Single Sign-On, just deploy and configure your new provider for your realm. The second option is to write your new provider, making sure it has the same provider ID: UserStorageProviderFactory.getId() . Make sure this provider is in the standalone/deployments/ directory of the new Red Hat Single Sign-On installation. Boot the server, and the built-in migration script will convert from the earlier data model to the later data model. In this case, all your earlier linked imported users will continue to work and remain the same. If you have decided to get rid of the import strategy and rewrite your User Storage provider, we suggest that you remove the earlier provider before upgrading Red Hat Single Sign-On. This will remove the linked local imported copies of any user you imported.
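As a closing note on the earlier sections, the ImportSynchronization interface from Section 7.9.2 is only shown as a declaration. The following sketch illustrates one way the property-file example's factory could implement it. It is not part of the official example: it assumes that SynchronizationResult offers a no-argument constructor and an increaseAdded() counter, and that the KeycloakModelUtils.runJobInTransaction() helper is available for running the import inside a transaction. Imports, getId() , create() , and the configuration methods from Section 7.6 are omitted as in the other listings.
PropertyFileUserStorageProviderFactory (sketch)
public class PropertyFileUserStorageProviderFactory
        implements UserStorageProviderFactory<PropertyFileUserStorageProvider>, ImportSynchronization {

    // getId(), create(), and getConfigProperties() stay exactly as shown in the earlier sections

    @Override
    public SynchronizationResult sync(KeycloakSessionFactory sessionFactory, String realmId,
                                      UserStorageProviderModel model) {
        SynchronizationResult result = new SynchronizationResult();

        // Load the property file configured for this provider instance
        Properties props = new Properties();
        String path = EnvUtil.replace(model.getConfig().getFirst("path"));
        try (InputStream is = new FileInputStream(path)) {
            props.load(is);
        } catch (IOException e) {
            throw new RuntimeException(e);
        }

        // Import every user in the file that does not yet have a linked local copy
        KeycloakModelUtils.runJobInTransaction(sessionFactory, session -> {
            RealmModel realm = session.realms().getRealm(realmId);
            for (Object key : props.keySet()) {
                String username = (String) key;
                if (session.userLocalStorage().getUserByUsername(username, realm) == null) {
                    UserModel local = session.userLocalStorage().addUser(realm, username);
                    local.setFederationLink(model.getId());
                    result.increaseAdded(); // counter method name is an assumption
                }
            }
        });
        return result;
    }

    @Override
    public SynchronizationResult syncSince(Date lastSync, KeycloakSessionFactory sessionFactory,
                                           String realmId, UserStorageProviderModel model) {
        // The flat file carries no per-user timestamps, so a scheduled sync simply repeats the full sync
        return sync(sessionFactory, realmId, model);
    }
}
A production-quality implementation would also detect users that have disappeared from the file and unlink or remove the stale local copies, and syncSince() could skip work entirely when the file's last-modified timestamp is older than lastSync .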
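Section 7.10.2 likewise describes caching credentials with OnUserCache without showing the code. The sketch below outlines one possible shape for the property-file provider; the PASSWORD_CACHE_KEY constant and the getPassword() helper are invented for illustration and are not part of the original example.
PropertyFileUserStorageProvider (sketch)
// The provider class would additionally declare "implements OnUserCache"
public static final String PASSWORD_CACHE_KEY = "PROPERTY_FILE_PASSWORD";

@Override
public void onCache(RealmModel realm, CachedUserModel user, UserModel delegate) {
    // Stash the password from the property file next to the cached user entry
    String password = properties.getProperty(delegate.getUsername());
    if (password != null) {
        user.getCachedWith().put(PASSWORD_CACHE_KEY, password);
    }
}

// isValid() can call this helper so that a cache hit is answered from the cached entry
protected String getPassword(UserModel user) {
    if (user instanceof CachedUserModel) {
        Object cached = ((CachedUserModel) user).getCachedWith().get(PASSWORD_CACHE_KEY);
        if (cached != null) {
            return (String) cached;
        }
    }
    return properties.getProperty(user.getUsername());
}
If isValid() reads the password through getPassword() , a cache hit is answered from the cached entry rather than from the property file. Keep in mind that the cached value lives until the user entry is invalidated, for example through KeycloakSession.userCache().evict(realm, user) , so a stale password can be served after the underlying file changes.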
[ "package org.keycloak.storage; public interface UserStorageProvider extends Provider { /** * Callback when a realm is removed. Implement this if, for example, you want to do some * cleanup in your user storage when a realm is removed * * @param realm */ default void preRemove(RealmModel realm) { } /** * Callback when a group is removed. Allows you to do things like remove a user * group mapping in your external store if appropriate * * @param realm * @param group */ default void preRemove(RealmModel realm, GroupModel group) { } /** * Callback when a role is removed. Allows you to do things like remove a user * role mapping in your external store if appropriate * @param realm * @param role */ default void preRemove(RealmModel realm, RoleModel role) { } }", "package org.keycloak.storage; /** * @author <a href=\"mailto:[email protected]\">Bill Burke</a> * @version USDRevision: 1 USD */ public interface UserStorageProviderFactory<T extends UserStorageProvider> extends ComponentFactory<T, UserStorageProvider> { /** * This is the name of the provider and will be shown in the admin console as an option. * * @return */ @Override String getId(); /** * called per Keycloak transaction. * * @param session * @param model * @return */ T create(KeycloakSession session, ComponentModel model); }", "public class FileProviderFactory implements UserStorageProviderFactory<FileProvider> { public String getId() { return \"file-provider\"; } public FileProvider create(KeycloakSession session, ComponentModel model) { }", "package org.keycloak.models; public interface UserModel extends RoleMapperModel { String getId(); String getUsername(); void setUsername(String username); String getFirstName(); void setFirstName(String firstName); String getLastName(); void setLastName(String lastName); String getEmail(); void setEmail(String email); }", "\"f:\" + component id + \":\" + external id", "f:332a234e31234:wburke", "org.keycloak.examples.federation.properties.ClasspathPropertiesStorageFactory org.keycloak.examples.federation.properties.FilePropertiesStorageFactory", "public class PropertyFileUserStorageProvider implements UserStorageProvider, UserLookupProvider, CredentialInputValidator, CredentialInputUpdater { }", "protected KeycloakSession session; protected Properties properties; protected ComponentModel model; // map of loaded users in this transaction protected Map<String, UserModel> loadedUsers = new HashMap<>(); public PropertyFileUserStorageProvider(KeycloakSession session, ComponentModel model, Properties properties) { this.session = session; this.model = model; this.properties = properties; }", "@Override public UserModel getUserByUsername(String username, RealmModel realm) { UserModel adapter = loadedUsers.get(username); if (adapter == null) { String password = properties.getProperty(username); if (password != null) { adapter = createAdapter(realm, username); loadedUsers.put(username, adapter); } } return adapter; } protected UserModel createAdapter(RealmModel realm, String username) { return new AbstractUserAdapter(session, realm, model) { @Override public String getUsername() { return username; } }; } @Override public UserModel getUserById(String id, RealmModel realm) { StorageId storageId = new StorageId(id); String username = storageId.getExternalId(); return getUserByUsername(username, realm); } @Override public UserModel getUserByEmail(String email, RealmModel realm) { return null; }", "\"f:\" + component id + \":\" + username", "@Override public boolean isConfiguredFor(RealmModel realm, UserModel 
user, String credentialType) { String password = properties.getProperty(user.getUsername()); return credentialType.equals(CredentialModel.PASSWORD) && password != null; } @Override public boolean supportsCredentialType(String credentialType) { return credentialType.equals(CredentialModel.PASSWORD); } @Override public boolean isValid(RealmModel realm, UserModel user, CredentialInput input) { if (!supportsCredentialType(input.getType())) return false; String password = properties.getProperty(user.getUsername()); if (password == null) return false; return password.equals(input.getChallengeResponse()); }", "@Override public boolean updateCredential(RealmModel realm, UserModel user, CredentialInput input) { if (input.getType().equals(CredentialModel.PASSWORD)) throw new ReadOnlyException(\"user is read only for this update\"); return false; } @Override public void disableCredentialType(RealmModel realm, UserModel user, String credentialType) { } @Override public Set<String> getDisableableCredentialTypes(RealmModel realm, UserModel user) { return Collections.EMPTY_SET; }", "public class PropertyFileUserStorageProviderFactory implements UserStorageProviderFactory<PropertyFileUserStorageProvider> { public static final String PROVIDER_NAME = \"readonly-property-file\"; @Override public String getId() { return PROVIDER_NAME; }", "private static final Logger logger = Logger.getLogger(PropertyFileUserStorageProviderFactory.class); protected Properties properties = new Properties(); @Override public void init(Config.Scope config) { InputStream is = getClass().getClassLoader().getResourceAsStream(\"/users.properties\"); if (is == null) { logger.warn(\"Could not find users.properties in classpath\"); } else { try { properties.load(is); } catch (IOException ex) { logger.error(\"Failed to load users.properties file\", ex); } } } @Override public PropertyFileUserStorageProvider create(KeycloakSession session, ComponentModel model) { return new PropertyFileUserStorageProvider(session, model, properties); }", "<spi name=\"storage\"> <provider name=\"readonly-property-file\" enabled=\"true\"> <properties> <property name=\"path\" value=\"/other-users.properties\"/> </properties> </provider> </spi>", "public void init(Config.Scope config) { String path = config.get(\"path\"); InputStream is = getClass().getClassLoader().getResourceAsStream(path); }", "@Override public PropertyFileUserStorageProvider create(KeycloakSession session, ComponentModel model) { return new PropertyFileUserStorageProvider(session, model, properties); }", "org.keycloak.examples.federation.properties.FilePropertiesStorageFactory", "List<ProviderConfigProperty> getConfigProperties(); default void validateConfiguration(KeycloakSession session, RealmModel realm, ComponentModel model) throws ComponentValidationException { } default void onCreate(KeycloakSession session, RealmModel realm, ComponentModel model) { } default void onUpdate(KeycloakSession session, RealmModel realm, ComponentModel model) { }", "public class PropertyFileUserStorageProviderFactory implements UserStorageProviderFactory<PropertyFileUserStorageProvider> { protected static final List<ProviderConfigProperty> configMetadata; static { configMetadata = ProviderConfigurationBuilder.create() .property().name(\"path\") .type(ProviderConfigProperty.STRING_TYPE) .label(\"Path\") .defaultValue(\"USD{jboss.server.config.dir}/example-users.properties\") .helpText(\"File path to properties file\") .add().build(); } @Override public List<ProviderConfigProperty> getConfigProperties() { 
return configMetadata; }", "@Override public void validateConfiguration(KeycloakSession session, RealmModel realm, ComponentModel config) throws ComponentValidationException { String fp = config.getConfig().getFirst(\"path\"); if (fp == null) throw new ComponentValidationException(\"user property file does not exist\"); fp = EnvUtil.replace(fp); File file = new File(fp); if (!file.exists()) { throw new ComponentValidationException(\"user property file does not exist\"); } }", "@Override public PropertyFileUserStorageProvider create(KeycloakSession session, ComponentModel model) { String path = model.getConfig().getFirst(\"path\"); Properties props = new Properties(); try { InputStream is = new FileInputStream(path); props.load(is); is.close(); } catch (IOException e) { throw new RuntimeException(e); } return new PropertyFileUserStorageProvider(session, model, props); }", "public void save() { String path = model.getConfig().getFirst(\"path\"); path = EnvUtil.replace(path); try { FileOutputStream fos = new FileOutputStream(path); properties.store(fos, \"\"); fos.close(); } catch (IOException e) { throw new RuntimeException(e); } }", "public static final String UNSET_PASSWORD=\"#USD!-UNSET-PASSWORD\"; @Override public UserModel addUser(RealmModel realm, String username) { synchronized (properties) { properties.setProperty(username, UNSET_PASSWORD); save(); } return createAdapter(realm, username); } @Override public boolean removeUser(RealmModel realm, UserModel user) { synchronized (properties) { if (properties.remove(user.getUsername()) == null) return false; save(); return true; } }", "@Override public boolean isValid(RealmModel realm, UserModel user, CredentialInput input) { if (!supportsCredentialType(input.getType()) || !(input instanceof UserCredentialModel)) return false; UserCredentialModel cred = (UserCredentialModel)input; String password = properties.getProperty(user.getUsername()); if (password == null || UNSET_PASSWORD.equals(password)) return false; return password.equals(cred.getValue()); }", "@Override public boolean updateCredential(RealmModel realm, UserModel user, CredentialInput input) { if (!(input instanceof UserCredentialModel)) return false; if (!input.getType().equals(CredentialModel.PASSWORD)) return false; UserCredentialModel cred = (UserCredentialModel)input; synchronized (properties) { properties.setProperty(user.getUsername(), cred.getValue()); save(); } return true; }", "@Override public void disableCredentialType(RealmModel realm, UserModel user, String credentialType) { if (!credentialType.equals(CredentialModel.PASSWORD)) return; synchronized (properties) { properties.setProperty(user.getUsername(), UNSET_PASSWORD); save(); } } private static final Set<String> disableableTypes = new HashSet<>(); static { disableableTypes.add(CredentialModel.PASSWORD); } @Override public Set<String> getDisableableCredentialTypes(RealmModel realm, UserModel user) { return disableableTypes; }", "@Override public int getUsersCount(RealmModel realm) { return properties.size(); } @Override public List<UserModel> getUsers(RealmModel realm) { return getUsers(realm, 0, Integer.MAX_VALUE); } @Override public List<UserModel> getUsers(RealmModel realm, int firstResult, int maxResults) { List<UserModel> users = new LinkedList<>(); int i = 0; for (Object obj : properties.keySet()) { if (i++ < firstResult) continue; String username = (String)obj; UserModel user = getUserByUsername(username, realm); users.add(user); if (users.size() >= maxResults) break; } return users; }", "@Override 
public List<UserModel> searchForUser(String search, RealmModel realm) { return searchForUser(search, realm, 0, Integer.MAX_VALUE); } @Override public List<UserModel> searchForUser(String search, RealmModel realm, int firstResult, int maxResults) { List<UserModel> users = new LinkedList<>(); int i = 0; for (Object obj : properties.keySet()) { String username = (String)obj; if (!username.contains(search)) continue; if (i++ < firstResult) continue; UserModel user = getUserByUsername(username, realm); users.add(user); if (users.size() >= maxResults) break; } return users; }", "@Override public List<UserModel> searchForUser(Map<String, String> params, RealmModel realm) { return searchForUser(params, realm, 0, Integer.MAX_VALUE); } @Override public List<UserModel> searchForUser(Map<String, String> params, RealmModel realm, int firstResult, int maxResults) { // only support searching by username String usernameSearchString = params.get(\"username\"); if (usernameSearchString == null) return Collections.EMPTY_LIST; return searchForUser(usernameSearchString, realm, firstResult, maxResults); }", "@Override public List<UserModel> getGroupMembers(RealmModel realm, GroupModel group, int firstResult, int maxResults) { return Collections.EMPTY_LIST; } @Override public List<UserModel> getGroupMembers(RealmModel realm, GroupModel group) { return Collections.EMPTY_LIST; } @Override public List<UserModel> searchForUserByUserAttribute(String attrName, String attrValue, RealmModel realm) { return Collections.EMPTY_LIST; }", "package org.keycloak.storage.federated; public interface UserFederatedStorageProvider extends Provider { Set<GroupModel> getGroups(RealmModel realm, String userId); void joinGroup(RealmModel realm, String userId, GroupModel group); void leaveGroup(RealmModel realm, String userId, GroupModel group); List<String> getMembership(RealmModel realm, GroupModel group, int firstResult, int max);", "protected UserModel createAdapter(RealmModel realm, String username) { return new AbstractUserAdapterFederatedStorage(session, realm, model) { @Override public String getUsername() { return username; } @Override public void setUsername(String username) { String pw = (String)properties.remove(username); if (pw != null) { properties.put(username, pw); save(); } } }; }", "protected UserModel createAdapter(RealmModel realm, String username) { UserModel local = session.userLocalStorage().getUserByUsername(username, realm); if (local == null) { local = session.userLocalStorage().addUser(realm, username); local.setFederationLink(model.getId()); } return new UserModelDelegate(local) { @Override public void setUsername(String username) { String pw = (String)properties.remove(username); if (pw != null) { properties.put(username, pw); save(); } super.setUsername(username); } }; }", "package org.keycloak.storage.user; public interface ImportedUserValidation { /** * If this method returns null, then the user in local storage will be removed * * @param realm * @param user * @return null if user no longer valid */ UserModel validate(RealmModel realm, UserModel user); }", "package org.keycloak.storage.user; public interface ImportSynchronization { SynchronizationResult sync(KeycloakSessionFactory sessionFactory, String realmId, UserStorageProviderModel model); SynchronizationResult syncSince(Date lastSync, KeycloakSessionFactory sessionFactory, String realmId, UserStorageProviderModel model); }", "/** * All these methods effect an entire cluster of Keycloak instances. 
* * @author <a href=\"mailto:[email protected]\">Bill Burke</a> * @version USDRevision: 1 USD */ public interface UserCache extends UserProvider { /** * Evict user from cache. * * @param user */ void evict(RealmModel realm, UserModel user); /** * Evict users of a specific realm * * @param realm */ void evict(RealmModel realm); /** * Clear cache entirely. * */ void clear(); }", "public interface OnUserCache { void onCache(RealmModel realm, CachedUserModel user, UserModel delegate); }", "public interface CachedUserModel extends UserModel { /** * Invalidates the cache for this user and returns a delegate that represents the actual data provider * * @return */ UserModel getDelegateForUpdate(); boolean isMarkedForEviction(); /** * Invalidate the cache for this model * */ void invalidate(); /** * When was the model was loaded from database. * * @return */ long getCacheTimestamp(); /** * Returns a map that contains custom things that are cached along with this model. You can write to this map. * * @return */ ConcurrentHashMap getCachedWith(); }", "@Stateful @Local(EjbExampleUserStorageProvider.class) public class EjbExampleUserStorageProvider implements UserStorageProvider, UserLookupProvider, UserRegistrationProvider, UserQueryProvider, CredentialInputUpdater, CredentialInputValidator, OnUserCache { @PersistenceContext protected EntityManager em; protected ComponentModel model; protected KeycloakSession session; public void setModel(ComponentModel model) { this.model = model; } public void setSession(KeycloakSession session) { this.session = session; } @Remove @Override public void close() { } }", "public class EjbExampleUserStorageProviderFactory implements UserStorageProviderFactory<EjbExampleUserStorageProvider> { @Override public EjbExampleUserStorageProvider create(KeycloakSession session, ComponentModel model) { try { InitialContext ctx = new InitialContext(); EjbExampleUserStorageProvider provider = (EjbExampleUserStorageProvider)ctx.lookup( \"java:global/user-storage-jpa-example/\" + EjbExampleUserStorageProvider.class.getSimpleName()); provider.setModel(model); provider.setSession(session); return provider; } catch (Exception e) { throw new RuntimeException(e); } }", "/admin/realms/{realm-name}/components", "public interface ComponentsResource { @GET @Produces(MediaType.APPLICATION_JSON) public List<ComponentRepresentation> query(); @GET @Produces(MediaType.APPLICATION_JSON) public List<ComponentRepresentation> query(@QueryParam(\"parent\") String parent); @GET @Produces(MediaType.APPLICATION_JSON) public List<ComponentRepresentation> query(@QueryParam(\"parent\") String parent, @QueryParam(\"type\") String type); @GET @Produces(MediaType.APPLICATION_JSON) public List<ComponentRepresentation> query(@QueryParam(\"parent\") String parent, @QueryParam(\"type\") String type, @QueryParam(\"name\") String name); @POST @Consumes(MediaType.APPLICATION_JSON) Response add(ComponentRepresentation rep); @Path(\"{id}\") ComponentResource component(@PathParam(\"id\") String id); } public interface ComponentResource { @GET public ComponentRepresentation toRepresentation(); @PUT @Consumes(MediaType.APPLICATION_JSON) public void update(ComponentRepresentation rep); @DELETE public void remove(); }", "import org.keycloak.admin.client.Keycloak; import org.keycloak.representations.idm.RealmRepresentation; Keycloak keycloak = Keycloak.getInstance( \"http://localhost:8080/auth\", \"master\", \"admin\", \"password\", \"admin-cli\"); RealmResource realmResource = keycloak.realm(\"master\"); RealmRepresentation 
realm = realmResource.toRepresentation(); ComponentRepresentation component = new ComponentRepresentation(); component.setName(\"home\"); component.setProviderId(\"readonly-property-file\"); component.setProviderType(\"org.keycloak.storage.UserStorageProvider\"); component.setParentId(realm.getId()); component.setConfig(new MultivaluedHashMap()); component.getConfig().putSingle(\"path\", \"~/users.properties\"); realmResource.components().add(component); // retrieve a component List<ComponentRepresentation> components = realmResource.components().query(realm.getId(), \"org.keycloak.storage.UserStorageProvider\", \"home\"); component = components.get(0); // Update a component component.getConfig().putSingle(\"path\", \"~/my-users.properties\"); realmResource.components().component(component.getId()).update(component); // Remove a component realmREsource.components().component(component.getId()).remove();" ]
https://docs.redhat.com/en/documentation/red_hat_single_sign-on/7.4/html/server_developer_guide/user-storage-spi
Chapter 10. Troubleshooting common installation problems
Chapter 10. Troubleshooting common installation problems If you are experiencing difficulties installing the Red Hat OpenShift AI Add-on, read this section to understand what could be causing the problem and how to resolve it. If the problem is not included here or in the release notes, contact Red Hat Support . When opening a support case, it is helpful to include debugging information about your cluster. You can collect this information by using the must-gather tool as described in Must-Gather for Red Hat OpenShift AI and Gathering data about your cluster . You can also adjust the log level of OpenShift AI Operator components to increase or reduce log verbosity to suit your use case. For more information, see Configuring the OpenShift AI Operator logger . 10.1. The Red Hat OpenShift AI Operator cannot be retrieved from the image registry Problem When attempting to retrieve the Red Hat OpenShift AI Operator from the image registry, an Failure to pull from quay error message appears. The Red Hat OpenShift AI Operator might be unavailable for retrieval in the following circumstances: The image registry is unavailable. There is a problem with your network connection. Your cluster is not operational and is therefore unable to retrieve the image registry. Diagnosis Check the logs in the Events section in OpenShift for further information about the Failure to pull from quay error message. Resolution Contact Red Hat support. 10.2. OpenShift AI cannot be installed due to insufficient cluster resources Problem When attempting to install OpenShift AI, an error message appears stating that installation prerequisites have not been met. Diagnosis Log in to Red Hat OpenShift Cluster Manager ( https://console.redhat.com/openshift/ ). Click Clusters . The Clusters page opens. Click the name of the cluster you want to install OpenShift AI on. The Details page for the cluster opens. Click the Add-ons tab and locate the Red Hat OpenShift AI tile. Click Install . The Configure Red Hat OpenShift AI pane appears. If the installation fails, click the Prerequisites tab. Note down the error message. If the error message states that you require a new machine pool, or that more resources are required, take the appropriate action to resolve the problem. Resolution You might need to add more resources to your cluster, or increase the size of your machine pool. To increase your cluster's resources, contact your infrastructure administrator. For more information about increasing the size of your machine pool, see Nodes and Allocating additional resources to OpenShift AI users . 10.3. OpenShift AI does not install on unsupported infrastructure Problem You are deploying on an environment that is not documented as supported by the Red Hat OpenShift AI Operator. Diagnosis In the OpenShift web console, switch to the Administrator perspective. Click Workloads Pods . Set the Project to All Projects or redhat-ods-operator . Click the rhods-operator-<random string> pod. The Pod details page appears. Click Logs . Select rhods-operator from the drop-down list. Check the log for the ERROR: Deploying on USDinfrastructure, which is not supported. Failing Installation error message. Resolution Before proceeding with a new installation, ensure that you have a fully supported environment on which to install OpenShift AI. For more information, see Red Hat OpenShift AI: Supported Configurations . 10.4. 
The creation of the OpenShift AI Custom Resource (CR) fails Problem During the installation process, the OpenShift AI Custom Resource (CR) does not get created. This issue occurs in unknown circumstances. Diagnosis In the OpenShift web console, switch to the Administrator perspective. Click Workloads Pods . Set the Project to All Projects or redhat-ods-operator . Click the rhods-operator-<random string> pod. The Pod details page appears. Click Logs . Select rhods-operator from the drop-down list. Check the log for the ERROR: Attempt to create the ODH CR failed. error message. Resolution Contact Red Hat support. 10.5. The creation of the OpenShift AI Notebooks Custom Resource (CR) fails Problem During the installation process, the OpenShift AI Notebooks Custom Resource (CR) does not get created. This issue occurs in unknown circumstances. Diagnosis In the OpenShift web console, switch to the Administrator perspective. Click Workloads Pods . Set the Project to All Projects or redhat-ods-operator . Click the rhods-operator-<random string> pod. The Pod details page appears. Click Logs . Select rhods-operator from the drop-down list. Check the log for the ERROR: Attempt to create the RHODS Notebooks CR failed. error message. Resolution Contact Red Hat support. 10.6. The OpenShift AI dashboard is not accessible Problem After installing OpenShift AI, the redhat-ods-applications , redhat-ods-monitoring , and redhat-ods-operator project namespaces are Active but you cannot access the dashboard due to an error in the pod. Diagnosis In the OpenShift web console, switch to the Administrator perspective. Click Workloads Pods . Set the Project to All Projects . Click Filter and select the checkbox for every status except Running and Completed . The page displays the pods that have an error. Resolution To see more information and troubleshooting steps for a pod, on the Pods page, click the link in the Status column for the pod. If the Status column does not display a link, click the pod name to open the pod details page and then click the Logs tab. 10.7. The dedicated-admins Role-based access control (RBAC) policy cannot be created Problem The Role-based access control (RBAC) policy for the dedicated-admins group in the target project cannot be created. This issue occurs in unknown circumstances. Diagnosis In the OpenShift web console, switch to the Administrator perspective. Click Workloads Pods . Set the Project to All Projects or redhat-ods-operator . Click the rhods-operator-<random string> pod. The Pod details page appears. Click Logs . Select rhods-operator from the drop-down list. Check the log for the ERROR: Attempt to create the RBAC policy for dedicated admins group in USDtarget_project failed. error message. Resolution Contact Red Hat support. 10.8. The Dead Man's Snitch operator's secret does not get created Problem An issue with Managed Tenants SRE automation process causes the Dead Man's Snitch operator's secret to not get created. Diagnosis In the OpenShift web console, switch to the Administrator perspective. Click Workloads Pods . Set the Project to All Projects or redhat-ods-operator . Click the rhods-operator-<random string> pod. The Pod details page appears. Click Logs . Select rhods-operator from the drop-down list. Check the log for the ERROR: Dead Man Snitch secret does not exist. error message. Resolution Contact Red Hat support. 10.9. 
The PagerDuty secret does not get created Problem An issue with Managed Tenants SRE automation process causes the PagerDuty's secret to not get created. Diagnosis In the OpenShift web console, switch to the Administrator perspective. Click Workloads Pods . Set the Project to All Projects or redhat-ods-operator . Click the rhods-operator-<random string> pod. The Pod details page appears. Click Logs . Select rhods-operator from the drop-down list. Check the log for the ERROR: Pagerduty secret does not exist error message. Resolution Contact Red Hat support. 10.10. The SMTP secret does not exist Problem An issue with Managed Tenants SRE automation process causes the SMTP secret to not get created. Diagnosis In the OpenShift web console, switch to the Administrator perspective. Click Workloads Pods . Set the Project to All Projects or redhat-ods-operator . Click the rhods-operator-<random string> pod. The Pod details page appears. Click Logs . Select rhods-operator from the drop-down list. Check the log for the ERROR: SMTP secret does not exist error message. Resolution Contact Red Hat support. 10.11. The ODH parameter secret does not get created Problem An issue with the OpenShift AI Add-on's flow could result in failure to create the ODH parameter. Diagnosis In the OpenShift web console, switch to the Administrator perspective. Click Workloads Pods . Set the Project to All Projects or redhat-ods-operator . Click the rhods-operator-<random string> pod. The Pod details page appears. Click Logs . Select rhods-operator from the drop-down list. Check the log for the ERROR: Addon managed odh parameter secret does not exist. error message. Resolution Contact Red Hat support. 10.12. Data science pipelines are not enabled after installing OpenShift AI 2.9 or later due to existing Argo Workflows resources Problem After installing OpenShift AI 2.9 or later with an Argo Workflows installation that is not installed by OpenShift AI on your cluster, data science pipelines are not enabled despite the datasciencepipelines component being enabled in the DataScienceCluster object. Diagnosis After you install OpenShift AI 2.9 or later, the Data Science Pipelines tab is not visible on the OpenShift AI dashboard navigation menu. Resolution Delete the separate installation of Argo workflows on your cluster. After you have removed any Argo Workflows resources that are not created by OpenShift AI from your cluster, data science pipelines are enabled automatically.
null
https://docs.redhat.com/en/documentation/red_hat_openshift_ai_cloud_service/1/html/installing_and_uninstalling_openshift_ai_cloud_service/troubleshooting-common-installation-problems_install
Chapter 2. Onboarding certification partners
Chapter 2. Onboarding certification partners Use the Red Hat Partner Connect Portal to create a new account if you are a new partner, or use your existing Red Hat account if you are a current partner to onboard with Red Hat for certifying your products. 2.1. Onboarding existing certification partners As an existing partner you could be: A member of the one-to-many EPM program who has some degree of representation on the EPM team, but does not have any assistance with the certification process. OR A member fully managed by the EPM team in the traditional manner with a dedicated EPM team member who is assigned to manage the partner, including questions about the certification requests. Note If you think your company has an existing Red Hat account but are not sure who is the Organization Administrator for your company, email [email protected] to add you to your company's existing account. Prerequisites You have an existing Red Hat account. Procedure Access Red Hat Partner Connect and click Log in . Enter your Red Hat login or email address and click . Then, use either of the following options: Log in with company single sign-on Log in with Red Hat account From the menu bar on the header, click your avatar to view the account details. If an account number is associated with your account, then log in to the Red Hat Partner Connect , to proceed with the certification process. If an account number is not associated with your account, then first contact the Red Hat global customer service team to raise a request for creating a new account number. After that, log in to the Red Hat Partner Connect to proceed with the certification process. 2.2. Onboarding new certification partners Creating a new Red Hat account is the first step in onboarding new certification partners. Access Red Hat Partner Connect and click Log in . Click Register for a Red Hat account . Enter the following details to create a new Red Hat account: Choose a Red Hat login and password . Important If your login ID is associated with multiple accounts, then do not use your contact email as the login ID as this can cause issues during login. Also, you cannot change your login ID once created. Enter your Personal information and Company information . Select Corporate for the Account Type field. If you have created a Corporate type account and require an account number, contact the Red Hat global customer service team. Note Ensure that you create a company account and not a personal account. The account created during this step is also used to sign in to the Red Hat Ecosystem Catalog when working with certification requests. Enter your Contact information . Click Create My Account . A new Red Hat account is created. Log in to the Red Hat Partner Connect , to proceed with the certification process. 2.3. Exploring the Partner landing page After logging in to Red Hat Partner Connect , the partner landing page opens. This page serves as a centralized hub, offering access to various partner services and capabilities that enable you to start working on opportunities. The Partner landing page offers the following services: Certified technology portal Deal registrations Red Hat Partner Training Portal Access to our library of marketing, sales & technical content Help and support Email preference center Partner subscriptions User account As part of the Red Hat partnership, partners receive access to various Red Hat systems and services that enable them to create shared value with Red Hat for our joint customers. 
Select the Certified technology portal tile to begin your product certification journey. The personalized Certified Technology partner dashboard opens.
null
https://docs.redhat.com/en/documentation/red_hat_software_certification/2025/html/red_hat_software_certification_workflow_guide/assembly_onboarding-certification-partners_openshift-sw-cert-workflow-introduction-to-redhat-openshift-operator-certification
Chapter 1. Overview
Chapter 1. Overview Red Hat OpenShift Data Foundation is software-defined storage that is optimized for container environments. It runs as an operator on OpenShift Container Platform to provide highly integrated and simplified persistent storage management for containers. Red Hat OpenShift Data Foundation is integrated into the latest Red Hat OpenShift Container Platform to address platform services, application portability, and persistence challenges. It provides a highly scalable backend for the next generation of cloud-native applications, built on a technology stack that includes Red Hat Ceph Storage, the Rook.io Operator, and NooBaa's Multicloud Object Gateway technology. Red Hat OpenShift Data Foundation provides a trusted, enterprise-grade application development environment that simplifies and enhances the user experience across the application lifecycle in a number of ways: Provides block storage for databases. Provides shared file storage for continuous integration, messaging, and data aggregation. Provides object storage for cloud-first development, archival, backup, and media storage. Scales applications and data exponentially. Attaches and detaches persistent data volumes at an accelerated rate. Stretches clusters across multiple data centers or availability zones. Establishes a comprehensive application container registry. Supports the next generation of OpenShift workloads such as Data Analytics, Artificial Intelligence, Machine Learning, Deep Learning, and Internet of Things (IoT). Dynamically provisions not only application containers, but also data service volumes and containers, as well as additional OpenShift Container Platform nodes, Elastic Block Store (EBS) volumes, and other infrastructure services. 1.1. About this release Red Hat OpenShift Data Foundation 4.13 ( RHBA-2023:3734 and RHSA-2023:3742 ) is now available. New enhancements, features, and known issues that pertain to OpenShift Data Foundation 4.13 are included in this topic. Red Hat OpenShift Data Foundation 4.13 is supported on the Red Hat OpenShift Container Platform version 4.13. For more information, see Red Hat OpenShift Data Foundation Supportability and Interoperability Checker . For Red Hat OpenShift Data Foundation life cycle information, refer to the layered and dependent products life cycle section in Red Hat OpenShift Container Platform Life Cycle Policy .
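To see the operator-based deployment described above on a running cluster, a short check like the following can help. This is a hedged sketch: it assumes the oc client is logged in and that OpenShift Data Foundation was installed into the default openshift-storage namespace; the storage class name filter is illustrative.
# Hedged sketch: confirm the OpenShift Data Foundation operator and the storage classes it provides.
oc get csv -n openshift-storage                      # operator version and install status
oc get storagecluster -n openshift-storage           # the StorageCluster custom resource
oc get storageclass | grep -E 'ceph|noobaa'          # block, file, and object storage classes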
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/4.13_release_notes/overview
Chapter 4. Configuring Satellite Server with external services
Chapter 4. Configuring Satellite Server with external services If you do not want to configure the DNS, DHCP, and TFTP services on Satellite Server, use this section to configure your Satellite Server to work with external DNS, DHCP, and TFTP services. 4.1. Configuring Satellite Server with external DNS You can configure Satellite Server with external DNS. Satellite Server uses the nsupdate utility to update DNS records on the remote server. To make any changes persistent, you must enter the satellite-installer command with the options appropriate for your environment. Prerequisites You must have a configured external DNS server. This guide assumes you have an existing installation. Procedure Copy the /etc/rndc.key file from the external DNS server to Satellite Server: Configure the ownership, permissions, and SELinux context: To test the nsupdate utility, add a host remotely: Enter the satellite-installer command to make the following persistent changes to the /etc/foreman-proxy/settings.d/dns.yml file: In the Satellite web UI, navigate to Infrastructure > Capsules . Locate the Satellite Server and select Refresh from the list in the Actions column. Associate the DNS service with the appropriate subnets and domain. 4.2. Configuring Satellite Server with external DHCP To configure Satellite Server with external DHCP, you must complete the following procedures: Section 4.2.1, "Configuring an external DHCP server to use with Satellite Server" Section 4.2.2, "Configuring Satellite Server with an external DHCP server" 4.2.1. Configuring an external DHCP server to use with Satellite Server To configure an external DHCP server running Red Hat Enterprise Linux to use with Satellite Server, you must install the ISC DHCP Service and Berkeley Internet Name Domain (BIND) utilities packages. You must also share the DHCP configuration and lease files with Satellite Server. The example in this procedure uses the distributed Network File System (NFS) protocol to share the DHCP configuration and lease files. Note If you use dnsmasq as an external DHCP server, enable the dhcp-no-override setting. This is required because Satellite creates configuration files on the TFTP server under the grub2/ subdirectory. If the dhcp-no-override setting is disabled, hosts fetch the bootloader and its configuration from the root directory, which might cause an error. Procedure On your Red Hat Enterprise Linux host, install the ISC DHCP Service and Berkeley Internet Name Domain (BIND) utilities packages: Generate a security token: Edit the dhcpd configuration file for all subnets and add the key generated by tsig-keygen . The following is an example: Note that the option routers value is the IP address of your Satellite Server or Capsule Server that you want to use with an external DHCP service. On Satellite Server, define each subnet. Do not set DHCP Capsule for the defined Subnet yet. To prevent conflicts, set up the lease and reservation ranges separately. For example, if the lease range is 192.168.38.10 to 192.168.38.100, in the Satellite web UI define the reservation range as 192.168.38.101 to 192.168.38.250. 
Configure the firewall for external access to the DHCP server: Make the changes persistent: On Satellite Server, determine the UID and GID of the foreman user: On the DHCP server, create the foreman user and group with the same IDs as determined in a step: To ensure that the configuration files are accessible, restore the read and execute flags: Enable and start the DHCP service: Export the DHCP configuration and lease files using NFS: Create directories for the DHCP configuration and lease files that you want to export using NFS: To create mount points for the created directories, add the following line to the /etc/fstab file: Mount the file systems in /etc/fstab : Ensure the following lines are present in /etc/exports : Note that the IP address that you enter is the Satellite or Capsule IP address that you want to use with an external DHCP service. Reload the NFS server: Configure the firewall for DHCP omapi port 7911: Optional: Configure the firewall for external access to NFS. Clients are configured using NFSv3. Make the changes persistent: 4.2.2. Configuring Satellite Server with an external DHCP server You can configure Satellite Server with an external DHCP server. Prerequisites Ensure that you have configured an external DHCP server and that you have shared the DHCP configuration and lease files with Satellite Server. For more information, see Section 4.2.1, "Configuring an external DHCP server to use with Satellite Server" . Procedure Install the nfs-utils package: Create the DHCP directories for NFS: Change the file owner: Verify communication with the NFS server and the Remote Procedure Call (RPC) communication paths: Add the following lines to the /etc/fstab file: Mount the file systems on /etc/fstab : To verify that the foreman-proxy user can access the files that are shared over the network, display the DHCP configuration and lease files: Enter the satellite-installer command to make the following persistent changes to the /etc/foreman-proxy/settings.d/dhcp.yml file: Associate the DHCP service with the appropriate subnets and domain. 4.3. Using Infoblox as DHCP and DNS providers You can use Satellite Server to connect to your Infoblox application to create and manage DHCP and DNS records, and to reserve IP addresses. The supported Infoblox version is NIOS 8.0 or higher. 4.3.1. Infoblox limitations All DHCP and DNS records can be managed only in a single Network or DNS view. After you install the Infoblox modules on Satellite Server and set up the view using the satellite-installer command, you cannot edit the view. Satellite Server communicates with a single Infoblox node by using the standard HTTPS web API. If you want to configure clustering and High Availability, make the configurations in Infoblox. Hosting PXE-related files by using the TFTP functionality of Infoblox is not supported. You must use Satellite Server as a TFTP server for PXE provisioning. For more information, see Configuring networking in Provisioning hosts . Satellite IPAM feature cannot be integrated with Infoblox. 4.3.2. Infoblox prerequisites You must have Infoblox account credentials to manage DHCP and DNS entries in Satellite. Ensure that you have Infoblox administration roles with the names: DHCP Admin and DNS Admin . The administration roles must have permissions or belong to an admin group that permits the accounts to perform tasks through the Infoblox API. 4.3.3. Installing the Infoblox CA certificate You must install Infoblox HTTPS CA certificate on the base system of Satellite Server. 
Procedure Download the certificate from the Infoblox web UI or you use the following OpenSSL commands to download the certificate: The infoblox.example.com entry must match the host name for the Infoblox application in the X509 certificate. Verification Test the CA certificate by using a curl query: Example positive response: 4.3.4. Installing the DHCP Infoblox module Install the DHCP Infoblox module on Satellite Server. Note that you cannot manage records in separate views. You can also install DHCP and DNS Infoblox modules simultaneously by combining this procedure and Section 4.3.5, "Installing the DNS Infoblox module" . DHCP Infoblox record type considerations If you want to use the DHCP and DNS Infoblox modules together, configure the DHCP Infoblox module with the fixedaddress record type only. The host record type causes DNS conflicts and is not supported. If you configure the DHCP Infoblox module with the host record type, you have to unset both DNS Capsule and Reverse DNS Capsule options on your Infoblox-managed subnets, because Infoblox does DNS management by itself. Using the host record type leads to creating conflicts and being unable to rename hosts in Satellite. Procedure On Satellite Server, enter the following command: Optional: In the Satellite web UI, navigate to Infrastructure > Capsules , select the Capsule with the DHCP Infoblox module, and ensure that the dhcp feature is listed. In the Satellite web UI, navigate to Infrastructure > Subnets . For all subnets managed through Infoblox, ensure that the IP address management ( IPAM ) method of the subnet is set to DHCP . 4.3.5. Installing the DNS Infoblox module Install the DNS Infoblox module on Satellite Server. You can also install DHCP and DNS Infoblox modules simultaneously by combining this procedure and Section 4.3.4, "Installing the DHCP Infoblox module" . Procedure On Satellite Server, enter the following command to configure the Infoblox module: Optionally, you can change the value of the --foreman-proxy-plugin-dns-infoblox-dns-view option to specify an Infoblox DNS view other than the default view. Optional: In the Satellite web UI, navigate to Infrastructure > Capsules , select the Capsule with the Infoblox DNS module, and ensure that the dns feature is listed. In the Satellite web UI, navigate to Infrastructure > Domains . For all domains managed through Infoblox, ensure that the DNS Proxy is set for those domains. In the Satellite web UI, navigate to Infrastructure > Subnets . For all subnets managed through Infoblox, ensure that the DNS Capsule and Reverse DNS Capsule are set for those subnets. 4.4. Configuring Satellite Server with external TFTP You can configure Satellite Server with external TFTP services. Procedure Create the TFTP directory for NFS: In the /etc/fstab file, add the following line: Mount the file systems in /etc/fstab : Enter the satellite-installer command to make the following persistent changes to the /etc/foreman-proxy/settings.d/tftp.yml file: If the TFTP service is running on a different server than the DHCP service, update the tftp_servername setting with the FQDN or IP address of the server that the TFTP service is running on: In the Satellite web UI, navigate to Infrastructure > Capsules . Locate the Satellite Server and select Refresh from the list in the Actions column. Associate the TFTP service with the appropriate subnets and domain. 4.5. 
Configuring Satellite Server with external IdM DNS When Satellite Server adds a DNS record for a host, it first determines which Capsule is providing DNS for that domain. It then communicates with the Capsule that is configured to provide DNS service for your deployment and adds the record. The hosts are not involved in this process. Therefore, you must install and configure the IdM client on the Satellite or Capsule that is currently configured to provide a DNS service for the domain you want to manage by using the IdM server. Satellite Server can be configured to use a Red Hat Identity Management (IdM) server to provide DNS service. For more information about Red Hat Identity Management, see the Red Hat Enterprise Linux 7 Linux Domain Identity, Authentication, and Policy Guide . To configure Satellite Server to use a Red Hat Identity Management (IdM) server to provide DNS service, use one of the following procedures: Section 4.5.1, "Configuring dynamic DNS update with GSS-TSIG authentication" Section 4.5.2, "Configuring dynamic DNS update with TSIG authentication" To revert to internal DNS service, use the following procedure: Section 4.5.3, "Reverting to internal DNS service" Note You are not required to use Satellite Server to manage DNS. When you are using the realm enrollment feature of Satellite, where provisioned hosts are enrolled automatically to IdM, the ipa-client-install script creates DNS records for the client. Configuring Satellite Server with external IdM DNS and realm enrollment are mutually exclusive. For more information about configuring realm enrollment, see Configuring Satellite to manage the lifecycle of a host registered to a Identity Management realm in Installing Satellite Server in a connected network environment . 4.5.1. Configuring dynamic DNS update with GSS-TSIG authentication You can configure the IdM server to use the generic security service algorithm for secret key transaction (GSS-TSIG) technology defined in RFC3645 . To configure the IdM server to use the GSS-TSIG technology, you must install the IdM client on the Satellite Server base operating system. Prerequisites You must ensure the IdM server is deployed and the host-based firewall is configured correctly. For more information, see Port requirements for IdM in Red Hat Enterprise Linux 9 Installing Identity Management or Port requirements for IdM in Red Hat Enterprise Linux 8 Installing Identity Management . You must contact the IdM server administrator to ensure that you obtain an account on the IdM server with permissions to create zones on the IdM server. You should create a backup of the answer file. You can use the backup to restore the answer file to its original state if it becomes corrupted. For more information, see Configuring Satellite Server . 
Procedure To configure dynamic DNS update with GSS-TSIG authentication, complete the following steps: Creating a Kerberos principal on the IdM server Obtain a Kerberos ticket for the account obtained from the IdM administrator: Create a new Kerberos principal for Satellite Server to use to authenticate on the IdM server: Installing and configuring the idM client On the base operating system of either the Satellite or Capsule that is managing the DNS service for your deployment, install the ipa-client package: Configure the IdM client by running the installation script and following the on-screen prompts: Obtain a Kerberos ticket: Remove any preexisting keytab : Obtain the keytab for this system: Note When adding a keytab to a standby system with the same host name as the original system in service, add the r option to prevent generating new credentials and rendering the credentials on the original system invalid. For the dns.keytab file, set the group and owner to foreman-proxy : Optional: To verify that the keytab file is valid, enter the following command: Configuring DNS zones in the IdM web UI Create and configure the zone that you want to manage: Navigate to Network Services > DNS > DNS Zones . Select Add and enter the zone name. For example, example.com . Click Add and Edit . Click the Settings tab and in the BIND update policy box, add the following to the semi-colon separated list: Set Dynamic update to True . Enable Allow PTR sync . Click Save to save the changes. Create and configure the reverse zone: Navigate to Network Services > DNS > DNS Zones . Click Add . Select Reverse zone IP network and add the network address in CIDR format to enable reverse lookups. Click Add and Edit . Click the Settings tab and in the BIND update policy box, add the following to the semi-colon separated list: Set Dynamic update to True . Click Save to save the changes. Configuring the Satellite or Capsule Server that manages the DNS service for the domain Configure your Satellite Server or Capsule Server to connect to your DNS service: For each affected Capsule, update the configuration of that Capsule in the Satellite web UI: In the Satellite web UI, navigate to Infrastructure > Capsules , locate the Satellite Server, and from the list in the Actions column, select Refresh . Configure the domain: In the Satellite web UI, navigate to Infrastructure > Domains and select the domain name. In the Domain tab, ensure DNS Capsule is set to the Capsule where the subnet is connected. Configure the subnet: In the Satellite web UI, navigate to Infrastructure > Subnets and select the subnet name. In the Subnet tab, set IPAM to None . In the Domains tab, select the domain that you want to manage using the IdM server. In the Capsules tab, ensure Reverse DNS Capsule is set to the Capsule where the subnet is connected. Click Submit to save the changes. 4.5.2. Configuring dynamic DNS update with TSIG authentication You can configure an IdM server to use the secret key transaction authentication for DNS (TSIG) technology that uses the rndc.key key file for authentication. The TSIG protocol is defined in RFC2845 . Prerequisites You must ensure the IdM server is deployed and the host-based firewall is configured correctly. For more information, see Port Requirements in the Red Hat Enterprise Linux 7 Linux Domain Identity, Authentication, and Policy Guide . You must obtain root user access on the IdM server. 
You must confirm whether Satellite Server or Capsule Server is configured to provide DNS service for your deployment. You must configure DNS, DHCP and TFTP services on the base operating system of either the Satellite or Capsule that is managing the DNS service for your deployment. You must create a backup of the answer file. You can use the backup to restore the answer file to its original state if it becomes corrupted. For more information, see Configuring Satellite Server . Procedure To configure dynamic DNS update with TSIG authentication, complete the following steps: Enabling external updates to the DNS zone in the IdM server On the IdM Server, add the following to the top of the /etc/named.conf file: ######################################################################## include "/etc/rndc.key"; controls { inet _IdM_Server_IP_Address_ port 953 allow { _Satellite_IP_Address_; } keys { "rndc-key"; }; }; ######################################################################## Reload the named service to make the changes take effect: In the IdM web UI, navigate to Network Services > DNS > DNS Zones and click the name of the zone. In the Settings tab, apply the following changes: Add the following in the BIND update policy box: grant "rndc-key" zonesub ANY; Set Dynamic update to True . Click Update to save the changes. Copy the /etc/rndc.key file from the IdM server to the base operating system of your Satellite Server. Enter the following command: To set the correct ownership, permissions, and SELinux context for the rndc.key file, enter the following command: Assign the foreman-proxy user to the named group manually. Normally, satellite-installer ensures that the foreman-proxy user belongs to the named UNIX group, however, in this scenario Satellite does not manage users and groups, therefore you need to assign the foreman-proxy user to the named group manually. On Satellite Server, enter the following satellite-installer command to configure Satellite to use the external DNS server: Testing external updates to the DNS zone in the IdM server Ensure that the key in the /etc/rndc.key file on Satellite Server is the same key file that is used on the IdM server: key "rndc-key" { algorithm hmac-md5; secret " secret-key =="; }; On Satellite Server, create a test DNS entry for a host. For example, host test.example.com with an A record of 192.168.25.20 on the IdM server at 192.168.25.1 . On Satellite Server, test the DNS entry: Example output: Server: 192.168.25.1 Address: 192.168.25.1#53 Name: test.example.com Address: 192.168.25.20 To view the entry in the IdM web UI, navigate to Network Services > DNS > DNS Zones . Click the name of the zone and search for the host by name. If resolved successfully, remove the test DNS entry: Confirm that the DNS entry was removed: The above nslookup command fails and returns the SERVFAIL error message if the record was successfully deleted. 4.5.3. Reverting to internal DNS service You can revert to using Satellite Server and Capsule Server as your DNS providers. You can use a backup of the answer file that was created before configuring external DNS, or you can create a backup of the answer file. For more information about answer files, see Configuring Satellite Server . 
Procedure On the Satellite or Capsule Server that you want to configure to manage DNS service for the domain, complete the following steps: Configuring Satellite or Capsule as a DNS server If you have created a backup of the answer file before configuring external DNS, restore the answer file and then enter the satellite-installer command: If you do not have a suitable backup of the answer file, create a backup of the answer file now. To configure Satellite or Capsule as DNS server without using an answer file, enter the following satellite-installer command on Satellite or Capsule: For more information, see Configuring DNS, DHCP, and TFTP on Capsule Server . After you run the satellite-installer command to make any changes to your Capsule configuration, you must update the configuration of each affected Capsule in the Satellite web UI. Updating the configuration in the Satellite web UI In the Satellite web UI, navigate to Infrastructure > Capsules . For each Capsule that you want to update, from the Actions list, select Refresh . Configure the domain: In the Satellite web UI, navigate to Infrastructure > Domains and click the domain name that you want to configure. In the Domain tab, set DNS Capsule to the Capsule where the subnet is connected. Configure the subnet: In the Satellite web UI, navigate to Infrastructure > Subnets and select the subnet name. In the Subnet tab, set IPAM to DHCP or Internal DB . In the Domains tab, select the domain that you want to manage using Satellite or Capsule. In the Capsules tab, set Reverse DNS Capsule to the Capsule where the subnet is connected. Click Submit to save the changes. 4.6. Configuring Satellite to manage the lifecycle of a host registered to a Identity Management realm As well as providing access to Satellite Server, hosts provisioned with Satellite can also be integrated with Identity Management realms. Red Hat Satellite has a realm feature that automatically manages the lifecycle of any system registered to a realm or domain provider. Use this section to configure Satellite Server or Capsule Server for Identity Management realm support, then add hosts to the Identity Management realm group. Prerequisites Satellite Server that is registered to the Content Delivery Network or an external Capsule Server that is registered to Satellite Server. A deployed realm or domain provider such as Identity Management. To install and configure Identity Management packages on Satellite Server or Capsule Server: To use Identity Management for provisioned hosts, complete the following steps to install and configure Identity Management packages on Satellite Server or Capsule Server: Install the ipa-client package on Satellite Server or Capsule Server: Configure the server as a Identity Management client: Create a realm proxy user, realm-capsule , and the relevant roles in Identity Management: Note the principal name that returns and your Identity Management server configuration details because you require them for the following procedure. To configure Satellite Server or Capsule Server for Identity Management realm support: Complete the following procedure on Satellite and every Capsule that you want to use: Copy the /root/freeipa.keytab file to any Capsule Server that you want to include in the same principal and realm: Move the /root/freeipa.keytab file to the /etc/foreman-proxy directory and set the ownership settings to the foreman-proxy user: Enter the following command on all Capsules that you want to include in the realm. 
If you use the integrated Capsule on Satellite, enter this command on Satellite Server: You can also use these options when you first configure the Satellite Server. Ensure that the most updated versions of the ca-certificates package is installed and trust the Identity Management Certificate Authority: Optional: If you configure Identity Management on an existing Satellite Server or Capsule Server, complete the following steps to ensure that the configuration changes take effect: Restart the foreman-proxy service: In the Satellite web UI, navigate to Infrastructure > Capsules . Locate the Capsule you have configured for Identity Management and from the list in the Actions column, select Refresh . To create a realm for the Identity Management-enabled Capsule After you configure your integrated or external Capsule with Identity Management, you must create a realm and add the Identity Management-configured Capsule to the realm. Procedure In the Satellite web UI, navigate to Infrastructure > Realms and click Create Realm . In the Name field, enter a name for the realm. From the Realm Type list, select the type of realm. From the Realm Capsule list, select Capsule Server where you have configured Identity Management. Click the Locations tab and from the Locations list, select the location where you want to add the new realm. Click the Organizations tab and from the Organizations list, select the organization where you want to add the new realm. Click Submit . Updating host groups with realm information You must update any host groups that you want to use with the new realm information. In the Satellite web UI, navigate to Configure > Host Groups , select the host group that you want to update, and click the Network tab. From the Realm list, select the realm you create as part of this procedure, and then click Submit . Adding hosts to a Identity Management host group Identity Management supports the ability to set up automatic membership rules based on a system's attributes. Red Hat Satellite's realm feature provides administrators with the ability to map the Red Hat Satellite host groups to the Identity Management parameter userclass which allow administrators to configure automembership. When nested host groups are used, they are sent to the Identity Management server as they are displayed in the Red Hat Satellite User Interface. For example, "Parent/Child/Child". Satellite Server or Capsule Server sends updates to the Identity Management server, however automembership rules are only applied at initial registration. To add hosts to a Identity Management host group: On the Identity Management server, create a host group: Create an automembership rule: Where you can use the following options: automember-add flags the group as an automember group. --type=hostgroup identifies that the target group is a host group, not a user group. automember_rule adds the name you want to identify the automember rule by. Define an automembership condition based on the userclass attribute: Where you can use the following options: automember-add-condition adds regular expression conditions to identify group members. --key=userclass specifies the key attribute as userclass . --type=hostgroup identifies that the target group is a host group, not a user group. --inclusive-regex= ^webserver identifies matching values with a regular expression pattern. hostgroup_name - identifies the target host group's name. 
When a system is added to Satellite Server's hostgroup_name host group, it is added automatically to the Identity Management server's " hostgroup_name " host group. Identity Management host groups allow for Host-Based Access Controls (HBAC), sudo policies and other Identity Management functions.
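A few verification commands can save troubleshooting time when wiring up the external services described in this chapter. The following is a hedged sketch rather than part of the official procedure: the keytab path and principal follow the GSS-TSIG example above, named-checkconf is the standard BIND syntax checker for the /etc/named.conf edit in the TSIG procedure, and dhcp.example.com stands in for your external DHCP server.
# GSS-TSIG: list the entries in the Capsule keytab and confirm the principal can authenticate.
klist -kt /etc/foreman-proxy/dns.keytab
kinit -kt /etc/foreman-proxy/dns.keytab capsule/satellite.example.com@EXAMPLE.COM
# TSIG: after editing /etc/named.conf on the IdM server, check the syntax before reloading named.
named-checkconf /etc/named.conf
# External DHCP: confirm the OMAPI port on the DHCP server is reachable from Satellite.
nc -zv dhcp.example.com 7911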
[ "scp root@ dns.example.com :/etc/rndc.key /etc/foreman-proxy/rndc.key", "restorecon -v /etc/foreman-proxy/rndc.key chown -v root:foreman-proxy /etc/foreman-proxy/rndc.key chmod -v 640 /etc/foreman-proxy/rndc.key", "echo -e \"server DNS_IP_Address \\n update add aaa.example.com 3600 IN A Host_IP_Address \\n send\\n\" | nsupdate -k /etc/foreman-proxy/rndc.key nslookup aaa.example.com DNS_IP_Address echo -e \"server DNS_IP_Address \\n update delete aaa.example.com 3600 IN A Host_IP_Address \\n send\\n\" | nsupdate -k /etc/foreman-proxy/rndc.key", "satellite-installer --foreman-proxy-dns=true --foreman-proxy-dns-managed=false --foreman-proxy-dns-provider=nsupdate --foreman-proxy-dns-server=\" DNS_IP_Address \" --foreman-proxy-keyfile=/etc/foreman-proxy/rndc.key", "dnf install dhcp-server bind-utils", "tsig-keygen -a hmac-md5 omapi_key", "cat /etc/dhcp/dhcpd.conf default-lease-time 604800; max-lease-time 2592000; log-facility local7; subnet 192.168.38.0 netmask 255.255.255.0 { range 192.168.38.10 192.168.38.100 ; option routers 192.168.38.1 ; option subnet-mask 255.255.255.0 ; option domain-search \" virtual.lan \"; option domain-name \" virtual.lan \"; option domain-name-servers 8.8.8.8 ; } omapi-port 7911; key omapi_key { algorithm hmac-md5; secret \" My_Secret \"; }; omapi-key omapi_key;", "firewall-cmd --add-service dhcp", "firewall-cmd --runtime-to-permanent", "id -u foreman 993 id -g foreman 990", "groupadd -g 990 foreman useradd -u 993 -g 990 -s /sbin/nologin foreman", "chmod o+rx /etc/dhcp/ chmod o+r /etc/dhcp/dhcpd.conf chattr +i /etc/dhcp/ /etc/dhcp/dhcpd.conf", "systemctl enable --now dhcpd", "dnf install nfs-utils systemctl enable --now nfs-server", "mkdir -p /exports/var/lib/dhcpd /exports/etc/dhcp", "/var/lib/dhcpd /exports/var/lib/dhcpd none bind,auto 0 0 /etc/dhcp /exports/etc/dhcp none bind,auto 0 0", "mount -a", "/exports 192.168.38.1 (rw,async,no_root_squash,fsid=0,no_subtree_check) /exports/etc/dhcp 192.168.38.1 (ro,async,no_root_squash,no_subtree_check,nohide) /exports/var/lib/dhcpd 192.168.38.1 (ro,async,no_root_squash,no_subtree_check,nohide)", "exportfs -rva", "firewall-cmd --add-port=7911/tcp", "firewall-cmd --add-service mountd --add-service nfs --add-service rpc-bind --zone public", "firewall-cmd --runtime-to-permanent", "satellite-maintain packages install nfs-utils", "mkdir -p /mnt/nfs/etc/dhcp /mnt/nfs/var/lib/dhcpd", "chown -R foreman-proxy /mnt/nfs", "showmount -e DHCP_Server_FQDN rpcinfo -p DHCP_Server_FQDN", "DHCP_Server_FQDN :/exports/etc/dhcp /mnt/nfs/etc/dhcp nfs ro,vers=3,auto,nosharecache,context=\"system_u:object_r:dhcp_etc_t:s0\" 0 0 DHCP_Server_FQDN :/exports/var/lib/dhcpd /mnt/nfs/var/lib/dhcpd nfs ro,vers=3,auto,nosharecache,context=\"system_u:object_r:dhcpd_state_t:s0\" 0 0", "mount -a", "su foreman-proxy -s /bin/bash cat /mnt/nfs/etc/dhcp/dhcpd.conf cat /mnt/nfs/var/lib/dhcpd/dhcpd.leases exit", "satellite-installer --enable-foreman-proxy-plugin-dhcp-remote-isc --foreman-proxy-dhcp-provider=remote_isc --foreman-proxy-dhcp-server= My_DHCP_Server_FQDN --foreman-proxy-dhcp=true --foreman-proxy-plugin-dhcp-remote-isc-dhcp-config /mnt/nfs/etc/dhcp/dhcpd.conf --foreman-proxy-plugin-dhcp-remote-isc-dhcp-leases /mnt/nfs/var/lib/dhcpd/dhcpd.leases --foreman-proxy-plugin-dhcp-remote-isc-key-name=omapi_key --foreman-proxy-plugin-dhcp-remote-isc-key-secret= My_Secret --foreman-proxy-plugin-dhcp-remote-isc-omapi-port=7911", "update-ca-trust enable openssl s_client -showcerts -connect infoblox.example.com :443 </dev/null | openssl x509 -text 
>/etc/pki/ca-trust/source/anchors/infoblox.crt update-ca-trust extract", "curl -u admin:password https:// infoblox.example.com /wapi/v2.0/network", "[ { \"_ref\": \"network/ZG5zLm5ldHdvcmskMTkyLjE2OC4yMDIuMC8yNC8w: infoblox.example.com /24/default\", \"network\": \"192.168.202.0/24\", \"network_view\": \"default\" } ]", "satellite-installer --enable-foreman-proxy-plugin-dhcp-infoblox --foreman-proxy-dhcp true --foreman-proxy-dhcp-provider infoblox --foreman-proxy-dhcp-server infoblox.example.com --foreman-proxy-plugin-dhcp-infoblox-username admin --foreman-proxy-plugin-dhcp-infoblox-password infoblox --foreman-proxy-plugin-dhcp-infoblox-record-type fixedaddress --foreman-proxy-plugin-dhcp-infoblox-dns-view default --foreman-proxy-plugin-dhcp-infoblox-network-view default", "satellite-installer --enable-foreman-proxy-plugin-dns-infoblox --foreman-proxy-dns true --foreman-proxy-dns-provider infoblox --foreman-proxy-plugin-dns-infoblox-dns-server infoblox.example.com --foreman-proxy-plugin-dns-infoblox-username admin --foreman-proxy-plugin-dns-infoblox-password infoblox --foreman-proxy-plugin-dns-infoblox-dns-view default", "mkdir -p /mnt/nfs/var/lib/tftpboot", "TFTP_Server_IP_Address :/exports/var/lib/tftpboot /mnt/nfs/var/lib/tftpboot nfs rw,vers=3,auto,nosharecache,context=\"system_u:object_r:tftpdir_rw_t:s0\" 0 0", "mount -a", "satellite-installer --foreman-proxy-tftp-root /mnt/nfs/var/lib/tftpboot --foreman-proxy-tftp=true", "satellite-installer --foreman-proxy-tftp-servername= TFTP_Server_FQDN", "kinit idm_user", "ipa service-add capsule/satellite.example.com", "satellite-maintain packages install ipa-client", "ipa-client-install", "kinit admin", "rm /etc/foreman-proxy/dns.keytab", "ipa-getkeytab -p capsule/ [email protected] -s idm1.example.com -k /etc/foreman-proxy/dns.keytab", "chown foreman-proxy:foreman-proxy /etc/foreman-proxy/dns.keytab", "kinit -kt /etc/foreman-proxy/dns.keytab capsule/ [email protected]", "grant capsule\\047 [email protected] wildcard * ANY;", "grant capsule\\047 [email protected] wildcard * ANY;", "satellite-installer --foreman-proxy-dns-managed=false --foreman-proxy-dns-provider=nsupdate_gss --foreman-proxy-dns-server=\" idm1.example.com \" --foreman-proxy-dns-tsig-keytab=/etc/foreman-proxy/dns.keytab --foreman-proxy-dns-tsig-principal=\"capsule/ [email protected] \" --foreman-proxy-dns=true", "######################################################################## include \"/etc/rndc.key\"; controls { inet _IdM_Server_IP_Address_ port 953 allow { _Satellite_IP_Address_; } keys { \"rndc-key\"; }; }; ########################################################################", "systemctl reload named", "grant \"rndc-key\" zonesub ANY;", "scp /etc/rndc.key root@ satellite.example.com :/etc/rndc.key", "restorecon -v /etc/rndc.key chown -v root:named /etc/rndc.key chmod -v 640 /etc/rndc.key", "usermod -a -G named foreman-proxy", "satellite-installer --foreman-proxy-dns-managed=false --foreman-proxy-dns-provider=nsupdate --foreman-proxy-dns-server=\" IdM_Server_IP_Address \" --foreman-proxy-dns-ttl=86400 --foreman-proxy-dns=true --foreman-proxy-keyfile=/etc/rndc.key", "key \"rndc-key\" { algorithm hmac-md5; secret \" secret-key ==\"; };", "echo -e \"server 192.168.25.1\\n update add test.example.com 3600 IN A 192.168.25.20\\n send\\n\" | nsupdate -k /etc/rndc.key", "nslookup test.example.com 192.168.25.1", "Server: 192.168.25.1 Address: 192.168.25.1#53 Name: test.example.com Address: 192.168.25.20", "echo -e \"server 192.168.25.1\\n update delete test.example.com 
3600 IN A 192.168.25.20\\n send\\n\" | nsupdate -k /etc/rndc.key", "nslookup test.example.com 192.168.25.1", "satellite-installer", "satellite-installer --foreman-proxy-dns-managed=true --foreman-proxy-dns-provider=nsupdate --foreman-proxy-dns-server=\"127.0.0.1\" --foreman-proxy-dns=true", "satellite-maintain packages install ipa-client", "ipa-client-install", "foreman-prepare-realm admin realm-capsule", "scp /root/freeipa.keytab root@ capsule.example.com :/etc/foreman-proxy/freeipa.keytab", "mv /root/freeipa.keytab /etc/foreman-proxy chown foreman-proxy:foreman-proxy /etc/foreman-proxy/freeipa.keytab", "satellite-installer --foreman-proxy-realm true --foreman-proxy-realm-keytab /etc/foreman-proxy/freeipa.keytab --foreman-proxy-realm-principal [email protected] --foreman-proxy-realm-provider freeipa", "cp /etc/ipa/ca.crt /etc/pki/ca-trust/source/anchors/ipa.crt update-ca-trust enable update-ca-trust", "systemctl restart foreman-proxy", "ipa hostgroup-add hostgroup_name --desc= hostgroup_description", "ipa automember-add --type=hostgroup hostgroup_name automember_rule", "ipa automember-add-condition --key=userclass --type=hostgroup --inclusive-regex= ^webserver hostgroup_name ---------------------------------- Added condition(s) to \" hostgroup_name \" ---------------------------------- Automember Rule: automember_rule Inclusive Regex: userclass= ^webserver ---------------------------- Number of conditions added 1 ----------------------------" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/installing_satellite_server_in_a_disconnected_network_environment/configuring-external-services
5.3. Resource-Specific Parameters
5.3. Resource-Specific Parameters For any individual resource, you can use the following command to display the parameters you can set for that resource. For example, the following command displays the parameters you can set for a resource of type LVM .
[ "pcs resource describe standard:provider:type | type", "pcs resource describe LVM Resource options for: LVM volgrpname (required): The name of volume group. exclusive: If set, the volume group will be activated exclusively. partial_activation: If set, the volume group will be activated even only partial of the physical volumes available. It helps to set to true, when you are using mirroring logical volumes." ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/configuring_the_red_hat_high_availability_add-on_with_pacemaker/s1-genresourceparams-haar
Chapter 1. Why use cost management
Chapter 1. Why use cost management Cost management is a free offering as part of your subscription to the Red Hat Insights portfolio of services. With cost management, you can monitor and analyze your costs to improve the management of your business. Cost management helps you simplify the management of your resources and costs across container platforms like OpenShift Container Platform, as well as public clouds like Amazon Web Services (AWS), Google Cloud, Oracle Cloud, and Microsoft Azure. 1.1. What can you accomplish with cost management? With the expanding scale and performance of containerized business applications, you need aggregated and meaningful data so that you can quickly analyze your cluster spending and align with business priorities. To overcome business challenges, cost management gives your organization visibility into your costs down to the project level for on-premise and public cloud environments. This visibility gives IT and financial stakeholders a unique snapshot into the costs associated with applications. With cost management, you can achieve some of the following goals: Visualize, understand, and analyze how you use your resources and costs across hybrid cloud infrastructure Track cost trends Map charges to projects and organizations Use cost models to normalize data and add markups Forecast your future consumption and compare it with your budgets Optimize your resources and usage Identify patterns of usage that you might want to investigate Integrate with third party tools that can use your cost and resourcing data These preceding goals can ultimately help your organization optimize costs, increase efficiency, and save money. 1.2. How does cost management work? It's important to understand some key OpenShift concepts: Cluster a group of servers that are managed together and participate in workload management. Node a worker machine that is either virtual or physical, depending on the cluster. Master node : The master node hosts the control plane and manages the cluster, including scheduling and scaling applications and maintaining the state of the cluster. Worker node : Worker nodes are responsible for running the containers and executing the workloads. Pod a collection of one or more containers. It is the smallest unit possible. Persistent volume claim (PVC) Persistent volume (PV) framework enables cluster administrators to provision persistent storage for a cluster. Developers can use persistent volume claims (PVCs) to request PV resources. At a high level, cost management calculates your costs by processing data from your integrations in the following ways: From your cloud bill, cost management takes the cost of all of your nodes and determines what nodes belong to what cluster and which nodes are worker or master nodes. Cost management then determines what pods are running on what cluster and namespace and calculates how much central processing units (CPU), memory, disk space, and PVCs each one uses. Cost management multiplies the cost from the cloud bill by the established usage metrics to calculate the amount of money that each pod is costing you. If you have a cost model, it distributes the cost of the platform or the cost of unallocated capacity. If you do not create a Red Hat OpenShift Container Platform cost model, we use the implicit cost model. This method distributes the cost from the cloud bill based on CPU effective use. Cost management does not use public prices. 
Rather, it reads your cloud bill to process the savings plans, reserved instances, discounts, or other costs that you have. Cost management also tracks which pods run on which nodes. If you have different instance types, or the same instance types but with different prices, cost management can still attribute the correct cost to each pod.
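As a purely illustrative piece of arithmetic (not the actual algorithm or real prices), the proportional split described above works roughly like this: a node's billed cost is divided among its pods according to their effective CPU use.
# Illustrative only: split one node's billed cost across two pods by effective CPU use.
node_cost=100        # monthly cost of the node taken from the cloud bill (made-up number)
pod_a_cpu=2.0        # effective CPU used by pod A (made-up number)
pod_b_cpu=6.0        # effective CPU used by pod B (made-up number)
# Each pod's share = node cost * (pod CPU / total CPU), i.e. 25 and 75 here.
awk -v c="$node_cost" -v a="$pod_a_cpu" -v b="$pod_b_cpu" \
    'BEGIN { t = a + b; printf "pod A: %.2f\npod B: %.2f\n", c*a/t, c*b/t }'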
null
https://docs.redhat.com/en/documentation/cost_management_service/1-latest/html/getting_started_with_cost_management/about-cost-management
Chapter 11. Atlasmap Component
Chapter 11. Atlasmap Component Note Only producer is supported. You can use the AtlasMap component to process data mapping using an AtlasMap data mapping definition. When you export the AtlasMap mapping from the AtlasMap Data Mapper UI, it is packaged as an ADM archive file. NOTE: Although it is possible to load a mapping definition JSON file that is not packaged into an ADM archive file, some features will not work. We recommend that you always use the ADM archive file for production purposes. To use the component with Maven, add the following dependency to your pom.xml : <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-atlasmap</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency> Optionally, you can include the Apache Daffodil module DFDL module: <dependency> <groupId>io.atlasmap</groupId> <artifactId>atlas-dfdl-module</artifactId> <version>x.x.x</version> <!-- use the same version as atlasmap-core in camel-atlasmap --> </dependency> 11.1. URI format atlas:mappingName[?options] The mappingName is the classpath-local URI of the AtlasMap mapping definition to process, either an ADM archive file (preferably) or a mapping definition JSON file. 11.2. Configuring Options Camel components are configured on two separate levels: component level endpoint level 11.2.1. Configuring Component Level Options The component level is the highest configuration level. It contains general and common configurations for all endpoints. You can configure components with the Component DSL , in a configuration file (application.properties|yaml), or directly with Java code. Some components only have a few options, and others may have many. A component may have security settings, credentials for authentication, URLs for network connection, and so on. Components typically have preconfigured defaults for the most common cases, so you may not need to configure any options, or only configure a few. 11.2.2. Component Options The AtlasMap component supports 4 options: Name Description Comment Default Type lazyStartProducer (producer) Lazy start of the producer. The producer starts on the first message. Allows CamelContext and routes to start in situations where a producer fails to start and causes the route to fail. When lazy start is enabled, you can handle failures during routing messages via Camel's routing error handlers. When the first message is processed then creating and starting the producer may prolong the total processing time. false boolean atlasContextFactory (advanced) To use the AtlasContextFactory, otherwise a new engine is created. AtlasContextFactory autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuration of JDBC data sources, JMS connection factories, AWS Clients, and so on. true boolean propertiesFile (advanced) The URI of the properties file used for AtlasContextFactory initialization. 11.2.3. Configuring Endpoint Level Options At the endpoint level contains configurations for the endpoints themselves. You can configure endpoints directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL and Data Format DSL as type safe ways of configuring endpoints in Java. 
Endpoints often have many options that configure what you need the endpoint to do. Endpoint options are categorized by their use, either as a consumer ( from ) or producer ( to ), or both. A good practice when configuring options is to use Property Placeholders instead of hardcoded settings for urls, port numbers, and sensitive information. Use placeholders to externalize the configuration from your code to make it more flexible and reusable. 11.2.4. Endpoint Options The Apache Camel Component Reference endpoint is configured using URI syntax, with path and query parameters: atlas:resourceUri 11.2.4.1. Path Parameters (1 parameters) Name Description Default Type resourceUri (producer) Required Path to the resource. You can prefix with: classpath , file , http , ref , or bean . The prefix classpath , file , and http loads the resource using these protocols. The ref prefix looks up the resource in the registry. The prefix bean calls a bean method to be used as the resource by name, given after the dot: bean:myBean.myMethod . classpath String 11.2.4.2. Query Parameters (7 parameters) Name Description Comments Default Type allowContextMapAll (producer) Allow access to all context map details. By default, only access to message body and headers is allowed. When enabled, allowContextMapAll allows full access to the current Exchange and CamelContext which imposes a potential security risk as this opens access to the full power of CamelContext API. false boolean contentCache (producer) Use the resource content cache. false boolean forceReload (producer) Use force reload mode. This loads the ADM from a file on every Exchange. By default, the ADM file is loaded from a file only on a first Exchange, and AtlasContext will be reused until the endpoint is recreated. false boolean lazyStartProducer (producer)(advanced) Lazy start of the producer. The producer starts on the first message. Allows CamelContext and routes to start in situations where a producer fails to start and causes the route to fail. When lazy start is enabled, you can handle failures during routing messages via Camel's routing error handlers. When the first message is processed, creating and starting the producer may prolong the total processing time. false boolean sourceMapName (producer) The Exchange property name for a source message map which holds a java.util.Map<String, Message> where the key is AtlasMap Document ID. AtlasMap consumes Message bodies as source documents, as well as message headers as source properties where the scope is equal to the Document ID. String targetMapMode (producer) TargetMapMode enum value to specify how multiple target documents are delivered if they exist. Enum values: * MAP * MESSAGE_HEADER * EXCHANGE_PROPERTY MAP : Stores documents in a java.util.Map . The java.util.Map is set to an exchange property if targetMapName is specified, otherwise it is set to the message body. MESSAGE_HEADER : Stores them into message headers. EXCHANGE_PROPERTY : Stores them in exchange properties. ). MAP TargetMapMode 11.3. Examples 11.3.1. Producer Example The following example shows an export of and ADM archive file from AtlasMap Data Mapper UI: from("activemq:My.Queue"). to("atlas:atlasmap-mapping.adm"); The Apache Camel Component Reference endpoint has no path parameters.
[ "<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-atlasmap</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>", "<dependency> <groupId>io.atlasmap</groupId> <artifactId>atlas-dfdl-module</artifactId> <version>x.x.x</version> <!-- use the same version as atlasmap-core in camel-atlasmap --> </dependency>", "atlas:mappingName[?options]", "atlas:resourceUri", "from(\"activemq:My.Queue\"). to(\"atlas:atlasmap-mapping.adm\");" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/atlasmap-component
11.4.2.6. Spam Filters
11.4.2.6. Spam Filters Because it is called by Sendmail, Postfix, and Fetchmail upon receiving new emails, Procmail can be used as a powerful tool for combating spam. This is particularly true when Procmail is used in conjunction with SpamAssassin. When used together, these two applications can quickly identify spam emails, and sort or destroy them. SpamAssassin uses header analysis, text analysis, blacklists, a spam-tracking database, and self-learning Bayesian spam analysis to quickly and accurately identify and tag spam. The easiest way for a local user to use SpamAssassin is to place the following line near the top of the ~/.procmailrc file: The /etc/mail/spamassassin/spamassassin-default.rc contains a simple Procmail rule that activates SpamAssassin for all incoming email. If an email is determined to be spam, it is tagged in the header as such and the title is prepended with the following pattern: The message body of the email is also prepended with a running tally of what elements caused it to be diagnosed as spam. To file email tagged as spam, a rule similar to the following can be used: This rule files all email tagged in the header as spam into a mailbox called spam . Since SpamAssassin is a Perl script, it may be necessary on busy servers to use the binary SpamAssassin daemon ( spamd ) and client application ( spamc ). Configuring SpamAssassin this way, however, requires root access to the host. To start the spamd daemon, type the following command as root: To start the SpamAssassin daemon when the system is booted, use an initscript utility, such as the Services Configuration Tool ( system-config-services ), to turn on the spamassassin service. Refer to Section 1.4.2, "Runlevel Utilities" for more information about initscript utilities. To configure Procmail to use the SpamAssassin client application instead of the Perl script, place the following line near the top of the ~/.procmailrc file. For a system-wide configuration, place it in /etc/procmailrc :
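Before relying on the Procmail rules above, it can help to confirm that SpamAssassin itself is working. The following is a hedged sketch using standard SpamAssassin utilities; /path/to/sample-message is a placeholder for any saved email.
spamassassin --lint                          # check the SpamAssassin configuration for errors
spamassassin -t < /path/to/sample-message    # run a test scan and show the score report
# If the spamd daemon is in use, score the same message through the client instead.
spamc -c < /path/to/sample-message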
[ "INCLUDERC=/etc/mail/spamassassin/spamassassin-default.rc", "*****SPAM*****", ":0 Hw * ^X-Spam-Status: Yes spam", "service spamassassin start", "INCLUDERC=/etc/mail/spamassassin/spamassassin-spamc.rc" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s3-email-mda-spam
Chapter 18. OpenShift
Chapter 18. OpenShift The namespace for openshift-logging specific metadata Data type group 18.1. openshift.labels Labels added by the Cluster Log Forwarder configuration Data type group
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/logging/openshift
Chapter 2. Differences from upstream OpenJDK 17
Chapter 2. Differences from upstream OpenJDK 17 Red Hat build of OpenJDK in Red Hat Enterprise Linux contains a number of structural changes from the upstream distribution of OpenJDK. The Microsoft Windows version of Red Hat build of OpenJDK attempts to follow Red Hat Enterprise Linux updates as closely as possible. The following list details the most notable Red Hat build of OpenJDK 17 changes: FIPS support. Red Hat build of OpenJDK 17 automatically detects whether RHEL is in FIPS mode and configures Red Hat build of OpenJDK 17 to operate in that mode. This change does not apply to Red Hat build of OpenJDK builds for Microsoft Windows. Cryptographic policy support. Red Hat build of OpenJDK 17 obtains the list of enabled cryptographic algorithms and key size constraints from the RHEL system configuration. These configuration components are used by the Transport Layer Security (TLS) encryption protocol, the certificate path validation, and any signed JARs. You can set different security profiles to balance safety and compatibility. This change does not apply to Red Hat build of OpenJDK builds for Microsoft Windows. Red Hat build of OpenJDK on RHEL dynamically links against native libraries such as zlib for archive format support and libjpeg-turbo , libpng , and giflib for image support. It also dynamically links against HarfBuzz and FreeType for font rendering and management. This change does not apply to Red Hat build of OpenJDK builds for Microsoft Windows. The src.zip file includes the source for all of the JAR libraries shipped with Red Hat build of OpenJDK. Red Hat build of OpenJDK on RHEL uses system-wide timezone data files as a source for timezone information. Red Hat build of OpenJDK on RHEL uses system-wide CA certificates. Red Hat build of OpenJDK on Microsoft Windows includes the latest available timezone data from RHEL. Red Hat build of OpenJDK on Microsoft Windows uses the latest available CA certificate from RHEL. Additional resources See Improve system FIPS detection (RHEL Planning Jira) See Using system-wide cryptographic policies (RHEL documentation)
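Because these behaviors are driven by the host configuration rather than by JVM options, you can inspect the relevant RHEL state with standard system tools. The following commands are RHEL utilities, not part of the JDK, and are shown only as a quick way to confirm what the JDK will detect:
# Check whether the host is running in FIPS mode
fips-mode-setup --check
# Show the active system-wide cryptographic policy that the JDK inherits
update-crypto-policies --show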
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/release_notes_for_red_hat_build_of_openjdk_17.0.13/rn-openjdk-diff-from-upstream
Chapter 11. Storage Considerations During Installation
Chapter 11. Storage Considerations During Installation Many storage device and file system settings can only be configured at install time. Other settings, such as file system type, can only be modified up to a certain point without requiring a reformat. As such, it is prudent that you plan your storage configuration accordingly before installing Red Hat Enterprise Linux 7. This chapter discusses several considerations when planning a storage configuration for your system. For installation instructions (including storage configuration during installation), see the Installation Guide provided by Red Hat. For information on what Red Hat officially supports with regards to size and storage limits, see the article http://www.redhat.com/resourcelibrary/articles/articles-red-hat-enterprise-linux-6-technology-capabilities-and-limits . 11.1. Special Considerations This section enumerates several issues and factors to consider for specific storage configurations. Separate Partitions for /home, /opt, /usr/local If it is likely that you will upgrade your system in the future, place /home , /opt , and /usr/local on a separate device. This allows you to reformat the devices or file systems containing the operating system while preserving your user and application data. DASD and zFCP Devices on IBM System Z On the IBM System Z platform, DASD and zFCP devices are configured via the Channel Command Word (CCW) mechanism. CCW paths must be explicitly added to the system and then brought online. For DASD devices, this means listing the device numbers (or device number ranges) as the DASD= parameter at the boot command line or in a CMS configuration file. For zFCP devices, you must list the device number, logical unit number (LUN), and world wide port name (WWPN). Once the zFCP device is initialized, it is mapped to a CCW path. The FCP_x= lines on the boot command line (or in a CMS configuration file) allow you to specify this information for the installer. Encrypting Block Devices Using LUKS Formatting a block device for encryption using LUKS/ dm-crypt destroys any existing formatting on that device. As such, you should decide which devices to encrypt (if any) before the new system's storage configuration is activated as part of the installation process. Stale BIOS RAID Metadata Moving a disk from a system configured for firmware RAID without removing the RAID metadata from the disk can prevent Anaconda from correctly detecting the disk. Warning Removing/deleting RAID metadata from disk could potentially destroy any stored data. Red Hat recommends that you back up your data before proceeding. Note If you have created the RAID volume using dmraid , which is now deprecated, use the dmraid utility to delete it: For more information about managing RAID devices, see man dmraid and Chapter 18, Redundant Array of Independent Disks (RAID) . iSCSI Detection and Configuration For plug and play detection of iSCSI drives, configure them in the firmware of an iBFT boot-capable network interface card (NIC). CHAP authentication of iSCSI targets is supported during installation. However, iSNS discovery is not supported during installation. FCoE Detection and Configuration For plug and play detection of Fibre Channel over Ethernet (FCoE) drives, configure them in the firmware of an EDD boot-capable NIC. DASD Direct-access storage devices (DASD) cannot be added or configured during installation. Such devices are specified in the CMS configuration file. 
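For illustration, CCW configuration entries of this kind typically look like the following lines in a CMS configuration file; the device numbers, WWPN, and LUN shown here are placeholders and must be replaced with values for your hardware:
DASD="0.0.0200-0.0.0203,0.0.0300"
FCP_1="0.0.5000 0x5005076300c213e9 0x5022000000000000"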
Block Devices with DIF/DIX Enabled DIF/DIX is a hardware checksum feature provided by certain SCSI host bus adapters and block devices. When DIF/DIX is enabled, errors occur if the block device is used as a general-purpose block device. Buffered I/O or mmap(2) -based I/O will not work reliably, as there are no interlocks in the buffered write path to prevent buffered data from being overwritten after the DIF/DIX checksum has been calculated. This causes the I/O to later fail with a checksum error. This problem is common to all block device (or file system-based) buffered I/O or mmap(2) I/O, so it is not possible to work around these errors caused by overwrites. As such, block devices with DIF/DIX enabled should only be used with applications that use O_DIRECT . Such applications should use the raw block device. Alternatively, it is also safe to use the XFS file system on a DIF/DIX-enabled block device, as long as only O_DIRECT I/O is issued through the file system. XFS is the only file system that does not fall back to buffered I/O when doing certain allocation operations. The responsibility for ensuring that the I/O data does not change after the DIF/DIX checksum has been computed always lies with the application, so only applications designed for use with O_DIRECT I/O and DIF/DIX hardware should use DIF/DIX.
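As a quick command-line illustration of direct I/O, the dd utility can bypass the page cache with the oflag=direct option. The device name below is a placeholder and the command overwrites data, so point it only at a scratch device:
# Write 4 KiB of zeros with O_DIRECT, bypassing the buffered write path
dd if=/dev/zero of=/dev/sdX bs=4k count=1 oflag=direct
Applications follow the same principle by passing O_DIRECT to open(2) and using suitably aligned buffers.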
[ "dmraid -r -E / device /" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/storage_administration_guide/ch-install-config
13.3. Removing a Partition
13.3. Removing a Partition Warning Do not attempt to remove a partition on a device that is in use. Procedure 13.3. Remove a Partition Before removing a partition, do one of the following: Boot into rescue mode, or Unmount any partitions on the device and turn off any swap space on the device. Start the parted utility: Replace device with the device on which to remove the partition: for example, /dev/sda . View the current partition table to determine the minor number of the partition to remove: Remove the partition with the command rm . For example, to remove the partition with minor number 3: The changes start taking place as soon as you press Enter , so review the command before committing to it. After removing the partition, use the print command to confirm that it is removed from the partition table: Exit from the parted shell: Examine the content of the /proc/partitions file to make sure the kernel knows the partition is removed: Remove the partition from the /etc/fstab file. Find the line that declares the removed partition, and remove it from the file. Regenerate mount units so that your system registers the new /etc/fstab configuration:
[ "parted device", "(parted) print", "(parted) rm 3", "(parted) print", "(parted) quit", "cat /proc/partitions", "systemctl daemon-reload" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/storage_administration_guide/s2-disk-storage-parted-remove-part
8.7. Date & Time
8.7. Date & Time To configure time zone, date, and optionally settings for network time, select Date & Time at the Installation Summary screen. There are three ways for you to select a time zone: Using your mouse, click on the interactive map to select a specific city. A red pin appears indicating your selection. You can also scroll through the Region and City drop-down menus at the top of the screen to select your time zone. Select Etc at the bottom of the Region drop-down menu, then select your time zone in the menu adjusted to GMT/UTC, for example GMT+1 . If your city is not available on the map or in the drop-down menu, select the nearest major city in the same time zone. Alternatively you can use a Kickstart file, which will allow you to specify some additional time zones which are not available in the graphical interface. See the timezone command in timezone (required) for details. Note The list of available cities and regions comes from the Time Zone Database (tzdata) public domain, which is maintained by the Internet Assigned Numbers Authority (IANA). Red Hat cannot add cities or regions into this database. You can find more information at the official website, available at http://www.iana.org/time-zones . Specify a time zone even if you plan to use NTP (Network Time Protocol) to maintain the accuracy of the system clock. If you are connected to the network, the Network Time switch will be enabled. To set the date and time using NTP, leave the Network Time switch in the ON position and click the configuration icon to select which NTP servers Red Hat Enterprise Linux should use. To set the date and time manually, move the switch to the OFF position. The system clock should use your time zone selection to display the correct date and time at the bottom of the screen. If they are still incorrect, adjust them manually. Note that NTP servers might be unavailable at the time of installation. In such a case, enabling them will not set the time automatically. When the servers become available, the date and time will update. Once you have made your selection, click Done to return to the Installation Summary screen. Note To change your time zone configuration after you have completed the installation, visit the Date & Time section of the Settings dialog window.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/installation_guide/sect-date-time-configuration-x86
Chapter 33. Getting started with OptaPlanner in Business Central: An employee rostering example
Chapter 33. Getting started with OptaPlanner in Business Central: An employee rostering example You can build and deploy the employee-rostering sample project in Business Central. The project demonstrates how to create each of the Business Central assets required to solve the shift rostering planning problem and use Red Hat build of OptaPlanner to find the best possible solution. You can deploy the preconfigured employee-rostering project in Business Central. Alternatively, you can create the project yourself using Business Central. Note The employee-rostering sample project in Business Central does not include a data set. You must supply a data set in XML format using a REST API call. 33.1. Deploying the employee rostering sample project in Business Central Business Central includes a number of sample projects that you can use to get familiar with the product and its features. The employee rostering sample project is designed and created to demonstrate the shift rostering use case for Red Hat build of OptaPlanner. Use the following procedure to deploy and run the employee rostering sample in Business Central. Prerequisites Red Hat Process Automation Manager has been downloaded and installed. For installation options, see Planning a Red Hat Process Automation Manager installation . You have started Red Hat Process Automation Manager, as described in the installation documentation, and you are logged in to Business Central as a user with admin permissions. Procedure In Business Central, click Menu Design Projects . In the preconfigured MySpace space, click Try Samples . Select employee-rostering from the list of sample projects and click Ok in the upper-right corner to import the project. After the asset list has compiled, click Build & Deploy to deploy the employee rostering example. The rest of this document explains each of the project assets and their configuration. 33.2. Re-creating the employee rostering sample project The employee rostering sample project is a preconfigured project available in Business Central. You can learn how to deploy this project in Section 33.1, "Deploying the employee rostering sample project in Business Central" . You can create the employee rostering example "from scratch". You can use the workflow in this example to create a similar project of your own in Business Central. 33.2.1. Setting up the employee rostering project To start developing a solver in Business Central, you must set up the project. Prerequisites Red Hat Process Automation Manager has been downloaded and installed. You have deployed Business Central and logged in with a user that has the admin role. Procedure Create a new project in Business Central by clicking Menu Design Projects Add Project . In the Add Project window, fill out the following fields: Name : employee-rostering Description (optional): Employee rostering problem optimization using OptaPlanner. Assigns employees to shifts based on their skill. Optional: Click Configure Advanced Options to populate the Group ID , Artifact ID , and Version information. Group ID : employeerostering Artifact ID : employeerostering Version : 1.0.0-SNAPSHOT Click Add to add the project to the Business Central project repository. 33.2.2. Problem facts and planning entities Each of the domain classes in the employee rostering planning problem is categorized as one of the following: An unrelated class: not used by any of the score constraints. From a planning standpoint, this data is obsolete.
A problem fact class: used by the score constraints, but does not change during planning (as long as the problem stays the same), for example, Shift and Employee . All the properties of a problem fact class are problem properties. A planning entity class: used by the score constraints and changes during planning, for example, ShiftAssignment . The properties that change during planning are planning variables . The other properties are problem properties. Ask yourself the following questions: What class changes during planning? Which class has variables that I want the Solver to change? That class is a planning entity. A planning entity class needs to be annotated with the @PlanningEntity annotation, or defined in Business Central using the Red Hat build of OptaPlanner dock in the domain designer. Each planning entity class has one or more planning variables , and must also have one or more defining properties. Most use cases have only one planning entity class, and only one planning variable per planning entity class. 33.2.3. Creating the data model for the employee rostering project Use this section to create the data objects required to run the employee rostering sample project in Business Central. Prerequisites You have completed the project setup described in Section 33.2.1, "Setting up the employee rostering project" . Procedure With your new project, either click Data Object in the project perspective, or click Add Asset Data Object to create a new data object. Name the first data object Timeslot , and select employeerostering.employeerostering as the Package . Click Ok . In the Data Objects perspective, click +add field to add fields to the Timeslot data object. In the id field, type endTime . Click the drop-down menu to Type and select LocalDateTime . Click Create and continue to add another field. Add another field with the id startTime and Type LocalDateTime . Click Create . Click Save in the upper-right corner to save the Timeslot data object. Click the x in the upper-right corner to close the Data Objects perspective and return to the Assets menu. Using the steps, create the following data objects and their attributes: Table 33.1. Skill id Type name String Table 33.2. Employee id Type name String skills employeerostering.employeerostering.Skill[List] Table 33.3. Shift id Type requiredSkill employeerostering.employeerostering.Skill timeslot employeerostering.employeerostering.Timeslot Table 33.4. DayOffRequest id Type date LocalDate employee employeerostering.employeerostering.Employee Table 33.5. ShiftAssignment id Type employee employeerostering.employeerostering.Employee shift employeerostering.employeerostering.Shift For more examples of creating data objects, see Getting started with decision services . 33.2.3.1. Creating the employee roster planning entity In order to solve the employee rostering planning problem, you must create a planning entity and a solver. The planning entity is defined in the domain designer using the attributes available in the Red Hat build of OptaPlanner dock. Use the following procedure to define the ShiftAssignment data object as the planning entity for the employee rostering example. Prerequisites You have created the relevant data objects and planning entity required to run the employee rostering example by completing the procedures in Section 33.2.3, "Creating the data model for the employee rostering project" . Procedure From the project Assets menu, open the ShiftAssignment data object. 
In the Data Objects perspective, open the OptaPlanner dock by clicking the on the right. Select Planning Entity . Select employee from the list of fields under the ShiftAssignment data object. In the OptaPlanner dock, select Planning Variable . In the Value Range Id input field, type employeeRange . This adds the @ValueRangeProvider annotation to the planning entity, which you can view by clicking the Source tab in the designer. The value range of a planning variable is defined with the @ValueRangeProvider annotation. A @ValueRangeProvider annotation always has a property id , which is referenced by the @PlanningVariable property valueRangeProviderRefs . Close the dock and click Save to save the data object. 33.2.3.2. Creating the employee roster planning solution The employee roster problem relies on a defined planning solution. The planning solution is defined in the domain designer using the attributes available in the Red Hat build of OptaPlanner dock. Prerequisites You have created the relevant data objects and planning entity required to run the employee rostering example by completing the procedures in Section 33.2.3, "Creating the data model for the employee rostering project" and Section 33.2.3.1, "Creating the employee roster planning entity" . Procedure Create a new data object with the identifier EmployeeRoster . Create the following fields: Table 33.6. EmployeeRoster id Type dayOffRequestList employeerostering.employeerostering.DayOffRequest[List] shiftAssignmentList employeerostering.employeerostering.ShiftAssignment[List] shiftList employeerostering.employeerostering.Shift[List] skillList employeerostering.employeerostering.Skill[List] timeslotList employeerostering.employeerostering.Timeslot[List] In the Data Objects perspective, open the OptaPlanner dock by clicking the on the right. Select Planning Solution . Leave the default Hard soft score as the Solution Score Type . This automatically generates a score field in the EmployeeRoster data object with the solution score as the type. Add a new field with the following attributes: id Type employeeList employeerostering.employeerostering.Employee[List] With the employeeList field selected, open the OptaPlanner dock and select the Planning Value Range Provider box. In the id field, type employeeRange . Close the dock. Click Save in the upper-right corner to save the asset. 33.2.4. Employee rostering constraints Employee rostering is a planning problem. All planning problems include constraints that must be satisfied in order to find an optimal solution. The employee rostering sample project in Business Central includes the following hard and soft constraints: Hard constraint Employees are only assigned one shift per day. All shifts that require a particular employee skill are assigned an employee with that particular skill. Soft constraints All employees are assigned a shift. If an employee requests a day off, their shift is reassigned to another employee. Hard and soft constraints are defined in Business Central using either the free-form DRL designer, or using guided rules. 33.2.4.1. DRL (Drools Rule Language) rules DRL (Drools Rule Language) rules are business rules that you define directly in .drl text files. These DRL files are the source in which all other rule assets in Business Central are ultimately rendered. 
You can create and manage DRL files within the Business Central interface, or create them externally as part of a Maven or Java project using Red Hat CodeReady Studio or another integrated development environment (IDE). A DRL file can contain one or more rules that define at a minimum the rule conditions ( when ) and actions ( then ). The DRL designer in Business Central provides syntax highlighting for Java, DRL, and XML. DRL files consist of the following components: Components in a DRL file The following example DRL rule determines the age limit in a loan application decision service: Example rule for loan application age limit A DRL file can contain single or multiple rules, queries, and functions, and can define resource declarations such as imports, globals, and attributes that are assigned and used by your rules and queries. The DRL package must be listed at the top of a DRL file and the rules are typically listed last. All other DRL components can follow any order. Each rule must have a unique name within the rule package. If you use the same rule name more than once in any DRL file in the package, the rules fail to compile. Always enclose rule names with double quotation marks ( rule "rule name" ) to prevent possible compilation errors, especially if you use spaces in rule names. All data objects related to a DRL rule must be in the same project package as the DRL file in Business Central. Assets in the same package are imported by default. Existing assets in other packages can be imported with the DRL rule. 33.2.4.2. Defining constraints for employee rostering using the DRL designer You can create constraint definitions for the employee rostering example using the free-form DRL designer in Business Central. Use this procedure to create a hard constraint where no employee is assigned a shift that begins less than 10 hours after their shift ended. Procedure In Business Central, go to Menu Design Projects and click the project name. Click Add Asset DRL file . In the DRL file name field, type ComplexScoreRules . Select the employeerostering.employeerostering package. Click +Ok to create the DRL file. In the Model tab of the DRL designer, define the Employee10HourShiftSpace rule as a DRL file: Click Save to save the DRL file. For more information about creating DRL files, see Designing a decision service using DRL rules . 33.2.5. Creating rules for employee rostering using guided rules You can create rules that define hard and soft constraints for employee rostering using the guided rules designer in Business Central. 33.2.5.1. Guided rules Guided rules are business rules that you create in a UI-based guided rules designer in Business Central that leads you through the rule-creation process. The guided rules designer provides fields and options for acceptable input based on the data objects for the rule being defined. The guided rules that you define are compiled into Drools Rule Language (DRL) rules as with all other rule assets. All data objects related to a guided rule must be in the same project package as the guided rule. Assets in the same package are imported by default. After you create the necessary data objects and the guided rule, you can use the Data Objects tab of the guided rules designer to verify that all required data objects are listed or to import other existing data objects by adding a New item . 33.2.5.2. 
Creating a guided rule to balance employee shift numbers The BalanceEmployeesShiftNumber guided rule creates a soft constraint that ensures shifts are assigned to employees in a way that is balanced as evenly as possible. It does this by creating a score penalty that increases when shift distribution is less even. The score formula, implemented by the rule, incentivizes the Solver to distribute shifts in a more balanced way. Procedure In Business Central, go to Menu Design Projects and click the project name. Click Add Asset Guided Rule . Enter BalanceEmployeesShiftNumber as the Guided Rule name and select the employeerostering.employeerostering Package . Click Ok to create the rule asset. Add a WHEN condition by clicking the in the WHEN field. Select Employee in the Add a condition to the rule window. Click +Ok . Click the Employee condition to modify the constraints and add the variable name USDemployee . Add the WHEN condition From Accumulate . Above the From Accumulate condition, click click to add pattern and select Number as the fact type from the drop-down list. Add the variable name USDshiftCount to the Number condition. Below the From Accumulate condition, click click to add pattern and select the ShiftAssignment fact type from the drop-down list. Add the variable name USDshiftAssignment to the ShiftAssignment fact type. Click the ShiftAssignment condition again and from the Add a restriction on a field drop-down list, select employee . Select equal to from the drop-down list to the employee constraint. Click the icon to the drop-down button to add a variable, and click Bound variable in the Field value window. Select USDemployee from the drop-down list. In the Function box type count(USDshiftAssignment) . Add the THEN condition by clicking the in the THEN field. Select Modify Soft Score in the Add a new action window. Click +Ok . Type the following expression into the box: -(USDshiftCount.intValue()*USDshiftCount.intValue()) Click Validate in the upper-right corner to check all rule conditions are valid. If the rule validation fails, address any problems described in the error message, review all components in the rule, and try again to validate the rule until the rule passes. Click Save to save the rule. For more information about creating guided rules, see Designing a decision service using guided rules . 33.2.5.3. Creating a guided rule for no more than one shift per day The OneEmployeeShiftPerDay guided rule creates a hard constraint that employees are not assigned more than one shift per day. In the employee rostering example, this constraint is created using the guided rule designer. Procedure In Business Central, go to Menu Design Projects and click the project name. Click Add Asset Guided Rule . Enter OneEmployeeShiftPerDay as the Guided Rule name and select the employeerostering.employeerostering Package . Click Ok to create the rule asset. Add a WHEN condition by clicking the in the WHEN field. Select Free form DRL from the Add a condition to the rule window. In the free form DRL box, type the following condition: USDshiftAssignment : ShiftAssignment( employee != null ) ShiftAssignment( this != USDshiftAssignment , employee == USDshiftAssignment.employee , shift.timeslot.startTime.toLocalDate() == USDshiftAssignment.shift.timeslot.startTime.toLocalDate() ) This condition states that a shift cannot be assigned to an employee that already has another shift assignment on the same day. Add the THEN condition by clicking the in the THEN field. 
Select Add free form DRL from the Add a new action window. In the free form DRL box, type the following condition: scoreHolder.addHardConstraintMatch(kcontext, -1); Click Validate in the upper-right corner to check all rule conditions are valid. If the rule validation fails, address any problems described in the error message, review all components in the rule, and try again to validate the rule until the rule passes. Click Save to save the rule. For more information about creating guided rules, see Designing a decision service using guided rules . 33.2.5.4. Creating a guided rule to match skills to shift requirements The ShiftReqiredSkillsAreMet guided rule creates a hard constraint that ensures all shifts are assigned an employee with the correct set of skills. In the employee rostering example, this constraint is created using the guided rule designer. Procedure In Business Central, go to Menu Design Projects and click the project name. Click Add Asset Guided Rule . Enter ShiftReqiredSkillsAreMet as the Guided Rule name and select the employeerostering.employeerostering Package . Click Ok to create the rule asset. Add a WHEN condition by clicking the in the WHEN field. Select ShiftAssignment in the Add a condition to the rule window. Click +Ok . Click the ShiftAssignment condition, and select employee from the Add a restriction on a field drop-down list. In the designer, click the drop-down list to employee and select is not null . Click the ShiftAssignment condition, and click Expression editor . In the designer, click [not bound] to open the Expression editor , and bind the expression to the variable USDrequiredSkill . Click Set . In the designer, to USDrequiredSkill , select shift from the first drop-down list, then requiredSkill from the drop-down list. Click the ShiftAssignment condition, and click Expression editor . In the designer, to [not bound] , select employee from the first drop-down list, then skills from the drop-down list. Leave the drop-down list as Choose . In the drop-down box, change please choose to excludes . Click the icon to excludes , and in the Field value window, click the New formula button. Type USDrequiredSkill into the formula box. Add the THEN condition by clicking the in the THEN field. Select Modify Hard Score in the Add a new action window. Click +Ok . Type -1 into the score actions box. Click Validate in the upper-right corner to check all rule conditions are valid. If the rule validation fails, address any problems described in the error message, review all components in the rule, and try again to validate the rule until the rule passes. Click Save to save the rule. For more information about creating guided rules, see Designing a decision service using guided rules . 33.2.5.5. Creating a guided rule to manage day off requests The DayOffRequest guided rule creates a soft constraint. This constraint allows a shift to be reassigned to another employee in the event the employee who was originally assigned the shift is no longer able to work that day. In the employee rostering example, this constraint is created using the guided rule designer. Procedure In Business Central, go to Menu Design Projects and click the project name. Click Add Asset Guided Rule . Enter DayOffRequest as the Guided Rule name and select the employeerostering.employeerostering Package . Click Ok to create the rule asset. Add a WHEN condition by clicking the in the WHEN field. Select Free form DRL from the Add a condition to the rule window. 
In the free form DRL box, type the following condition: USDdayOffRequest : DayOffRequest( ) ShiftAssignment( employee == USDdayOffRequest.employee , shift.timeslot.startTime.toLocalDate() == USDdayOffRequest.date ) This condition states if a shift is assigned to an employee who has made a day off request, the employee can be unassigned the shift on that day. Add the THEN condition by clicking the in the THEN field. Select Add free form DRL from the Add a new action window. In the free form DRL box, type the following condition: scoreHolder.addSoftConstraintMatch(kcontext, -100); Click Validate in the upper-right corner to check all rule conditions are valid. If the rule validation fails, address any problems described in the error message, review all components in the rule, and try again to validate the rule until the rule passes. Click Save to save the rule. For more information about creating guided rules, see Designing a decision service using guided rules . 33.2.6. Creating a solver configuration for employee rostering You can create and edit Solver configurations in Business Central. The Solver configuration designer creates a solver configuration that can be run after the project is deployed. Prerequisites Red Hat Process Automation Manager has been downloaded and installed. You have created and configured all of the relevant assets for the employee rostering example. Procedure In Business Central, click Menu Projects , and click your project to open it. In the Assets perspective, click Add Asset Solver configuration In the Create new Solver configuration window, type the name EmployeeRosteringSolverConfig for your Solver and click Ok . This opens the Solver configuration designer. In the Score Director Factory configuration section, define a KIE base that contains scoring rule definitions. The employee rostering sample project uses defaultKieBase . Select one of the KIE sessions defined within the KIE base. The employee rostering sample project uses defaultKieSession . Click Validate in the upper-right corner to check the Score Director Factory configuration is correct. If validation fails, address any problems described in the error message, and try again to validate until the configuration passes. Click Save to save the Solver configuration. 33.2.7. Configuring Solver termination for the employee rostering project You can configure the Solver to terminate after a specified amount of time. By default, the planning engine is given an unlimited time period to solve a problem instance. The employee rostering sample project is set up to run for 30 seconds. Prerequisites You have created all relevant assets for the employee rostering project and created the EmployeeRosteringSolverConfig solver configuration in Business Central as described in Section 33.2.6, "Creating a solver configuration for employee rostering" . Procedure Open the EmployeeRosteringSolverConfig from the Assets perspective. This will open the Solver configuration designer. In the Termination section, click Add to create new termination element within the selected logical group. Select the Time spent termination type from the drop-down list. This is added as an input field in the termination configuration. Use the arrows to the time elements to adjust the amount of time spent to 30 seconds. Click Validate in the upper-right corner to check the Score Director Factory configuration is correct. 
If validation fails, address any problems described in the error message, and try again to validate until the configuration passes. Click Save to save the Solver configuration. 33.3. Accessing the solver using the REST API After deploying or re-creating the sample solver, you can access it using the REST API. You must register a solver instance using the REST API. Then you can supply data sets and retrieve optimized solutions. Prerequisites The employee rostering project is set up and deployed according to the sections in this document. You can either deploy the sample project, as described in Section 33.1, "Deploying the employee rostering sample project in Business Central" , or re-create the project, as described in Section 33.2, "Re-creating the employee rostering sample project" . 33.3.1. Registering the Solver using the REST API You must register the solver instance using the REST API before you can use the solver. Each solver instance is capable of optimizing one planning problem at a time. Procedure Create a HTTP request using the following header: Register the Solver using the following request: PUT http://localhost:8080/kie-server/services/rest/server/containers/employeerostering_1.0.0-SNAPSHOT/solvers/EmployeeRosteringSolver Request body <solver-instance> <solver-config-file>employeerostering/employeerostering/EmployeeRosteringSolverConfig.solver.xml</solver-config-file> </solver-instance> 33.3.2. Calling the Solver using the REST API After registering the solver instance, you can use the REST API to submit a data set to the solver and to retrieve an optimized solution. Procedure Create a HTTP request using the following header: Submit a request to the Solver with a data set, as in the following example: POST http://localhost:8080/kie-server/services/rest/server/containers/employeerostering_1.0.0-SNAPSHOT/solvers/EmployeeRosteringSolver/state/solving Request body <employeerostering.employeerostering.EmployeeRoster> <employeeList> <employeerostering.employeerostering.Employee> <name>John</name> <skills> <employeerostering.employeerostering.Skill> <name>reading</name> </employeerostering.employeerostering.Skill> </skills> </employeerostering.employeerostering.Employee> <employeerostering.employeerostering.Employee> <name>Mary</name> <skills> <employeerostering.employeerostering.Skill> <name>writing</name> </employeerostering.employeerostering.Skill> </skills> </employeerostering.employeerostering.Employee> <employeerostering.employeerostering.Employee> <name>Petr</name> <skills> <employeerostering.employeerostering.Skill> <name>speaking</name> </employeerostering.employeerostering.Skill> </skills> </employeerostering.employeerostering.Employee> </employeeList> <shiftList> <employeerostering.employeerostering.Shift> <timeslot> <startTime>2017-01-01T00:00:00</startTime> <endTime>2017-01-01T01:00:00</endTime> </timeslot> <requiredSkill reference="../../../employeeList/employeerostering.employeerostering.Employee/skills/employeerostering.employeerostering.Skill"/> </employeerostering.employeerostering.Shift> <employeerostering.employeerostering.Shift> <timeslot reference="../../employeerostering.employeerostering.Shift/timeslot"/> <requiredSkill reference="../../../employeeList/employeerostering.employeerostering.Employee[3]/skills/employeerostering.employeerostering.Skill"/> </employeerostering.employeerostering.Shift> <employeerostering.employeerostering.Shift> <timeslot reference="../../employeerostering.employeerostering.Shift/timeslot"/> <requiredSkill 
reference="../../../employeeList/employeerostering.employeerostering.Employee[2]/skills/employeerostering.employeerostering.Skill"/> </employeerostering.employeerostering.Shift> </shiftList> <skillList> <employeerostering.employeerostering.Skill reference="../../employeeList/employeerostering.employeerostering.Employee/skills/employeerostering.employeerostering.Skill"/> <employeerostering.employeerostering.Skill reference="../../employeeList/employeerostering.employeerostering.Employee[3]/skills/employeerostering.employeerostering.Skill"/> <employeerostering.employeerostering.Skill reference="../../employeeList/employeerostering.employeerostering.Employee[2]/skills/employeerostering.employeerostering.Skill"/> </skillList> <timeslotList> <employeerostering.employeerostering.Timeslot reference="../../shiftList/employeerostering.employeerostering.Shift/timeslot"/> </timeslotList> <dayOffRequestList/> <shiftAssignmentList> <employeerostering.employeerostering.ShiftAssignment> <shift reference="../../../shiftList/employeerostering.employeerostering.Shift"/> </employeerostering.employeerostering.ShiftAssignment> <employeerostering.employeerostering.ShiftAssignment> <shift reference="../../../shiftList/employeerostering.employeerostering.Shift[3]"/> </employeerostering.employeerostering.ShiftAssignment> <employeerostering.employeerostering.ShiftAssignment> <shift reference="../../../shiftList/employeerostering.employeerostering.Shift[2]"/> </employeerostering.employeerostering.ShiftAssignment> </shiftAssignmentList> </employeerostering.employeerostering.EmployeeRoster> Request the best solution to the planning problem: GET http://localhost:8080/kie-server/services/rest/server/containers/employeerostering_1.0.0-SNAPSHOT/solvers/EmployeeRosteringSolver/bestsolution Example response <solver-instance> <container-id>employee-rostering</container-id> <solver-id>solver1</solver-id> <solver-config-file>employeerostering/employeerostering/EmployeeRosteringSolverConfig.solver.xml</solver-config-file> <status>NOT_SOLVING</status> <score scoreClass="org.optaplanner.core.api.score.buildin.hardsoft.HardSoftScore">0hard/0soft</score> <best-solution class="employeerostering.employeerostering.EmployeeRoster"> <employeeList> <employeerostering.employeerostering.Employee> <name>John</name> <skills> <employeerostering.employeerostering.Skill> <name>reading</name> </employeerostering.employeerostering.Skill> </skills> </employeerostering.employeerostering.Employee> <employeerostering.employeerostering.Employee> <name>Mary</name> <skills> <employeerostering.employeerostering.Skill> <name>writing</name> </employeerostering.employeerostering.Skill> </skills> </employeerostering.employeerostering.Employee> <employeerostering.employeerostering.Employee> <name>Petr</name> <skills> <employeerostering.employeerostering.Skill> <name>speaking</name> </employeerostering.employeerostering.Skill> </skills> </employeerostering.employeerostering.Employee> </employeeList> <shiftList> <employeerostering.employeerostering.Shift> <timeslot> <startTime>2017-01-01T00:00:00</startTime> <endTime>2017-01-01T01:00:00</endTime> </timeslot> <requiredSkill reference="../../../employeeList/employeerostering.employeerostering.Employee/skills/employeerostering.employeerostering.Skill"/> </employeerostering.employeerostering.Shift> <employeerostering.employeerostering.Shift> <timeslot reference="../../employeerostering.employeerostering.Shift/timeslot"/> <requiredSkill 
reference="../../../employeeList/employeerostering.employeerostering.Employee[3]/skills/employeerostering.employeerostering.Skill"/> </employeerostering.employeerostering.Shift> <employeerostering.employeerostering.Shift> <timeslot reference="../../employeerostering.employeerostering.Shift/timeslot"/> <requiredSkill reference="../../../employeeList/employeerostering.employeerostering.Employee[2]/skills/employeerostering.employeerostering.Skill"/> </employeerostering.employeerostering.Shift> </shiftList> <skillList> <employeerostering.employeerostering.Skill reference="../../employeeList/employeerostering.employeerostering.Employee/skills/employeerostering.employeerostering.Skill"/> <employeerostering.employeerostering.Skill reference="../../employeeList/employeerostering.employeerostering.Employee[3]/skills/employeerostering.employeerostering.Skill"/> <employeerostering.employeerostering.Skill reference="../../employeeList/employeerostering.employeerostering.Employee[2]/skills/employeerostering.employeerostering.Skill"/> </skillList> <timeslotList> <employeerostering.employeerostering.Timeslot reference="../../shiftList/employeerostering.employeerostering.Shift/timeslot"/> </timeslotList> <dayOffRequestList/> <shiftAssignmentList/> <score>0hard/0soft</score> </best-solution> </solver-instance>
[ "package import function // Optional query // Optional declare // Optional global // Optional rule \"rule name\" // Attributes when // Conditions then // Actions end rule \"rule2 name\"", "rule \"Underage\" salience 15 agenda-group \"applicationGroup\" when USDapplication : LoanApplication() Applicant( age < 21 ) then USDapplication.setApproved( false ); USDapplication.setExplanation( \"Underage\" ); end", "package employeerostering.employeerostering; rule \"Employee10HourShiftSpace\" when USDshiftAssignment : ShiftAssignment( USDemployee : employee != null, USDshiftEndDateTime : shift.timeslot.endTime) ShiftAssignment( this != USDshiftAssignment, USDemployee == employee, USDshiftEndDateTime <= shift.timeslot.endTime, USDshiftEndDateTime.until(shift.timeslot.startTime, java.time.temporal.ChronoUnit.HOURS) <10) then scoreHolder.addHardConstraintMatch(kcontext, -1); end", "USDshiftAssignment : ShiftAssignment( employee != null ) ShiftAssignment( this != USDshiftAssignment , employee == USDshiftAssignment.employee , shift.timeslot.startTime.toLocalDate() == USDshiftAssignment.shift.timeslot.startTime.toLocalDate() )", "scoreHolder.addHardConstraintMatch(kcontext, -1);", "USDdayOffRequest : DayOffRequest( ) ShiftAssignment( employee == USDdayOffRequest.employee , shift.timeslot.startTime.toLocalDate() == USDdayOffRequest.date )", "scoreHolder.addSoftConstraintMatch(kcontext, -100);", "authorization: admin:admin X-KIE-ContentType: xstream content-type: application/xml", "<solver-instance> <solver-config-file>employeerostering/employeerostering/EmployeeRosteringSolverConfig.solver.xml</solver-config-file> </solver-instance>", "authorization: admin:admin X-KIE-ContentType: xstream content-type: application/xml", "<employeerostering.employeerostering.EmployeeRoster> <employeeList> <employeerostering.employeerostering.Employee> <name>John</name> <skills> <employeerostering.employeerostering.Skill> <name>reading</name> </employeerostering.employeerostering.Skill> </skills> </employeerostering.employeerostering.Employee> <employeerostering.employeerostering.Employee> <name>Mary</name> <skills> <employeerostering.employeerostering.Skill> <name>writing</name> </employeerostering.employeerostering.Skill> </skills> </employeerostering.employeerostering.Employee> <employeerostering.employeerostering.Employee> <name>Petr</name> <skills> <employeerostering.employeerostering.Skill> <name>speaking</name> </employeerostering.employeerostering.Skill> </skills> </employeerostering.employeerostering.Employee> </employeeList> <shiftList> <employeerostering.employeerostering.Shift> <timeslot> <startTime>2017-01-01T00:00:00</startTime> <endTime>2017-01-01T01:00:00</endTime> </timeslot> <requiredSkill reference=\"../../../employeeList/employeerostering.employeerostering.Employee/skills/employeerostering.employeerostering.Skill\"/> </employeerostering.employeerostering.Shift> <employeerostering.employeerostering.Shift> <timeslot reference=\"../../employeerostering.employeerostering.Shift/timeslot\"/> <requiredSkill reference=\"../../../employeeList/employeerostering.employeerostering.Employee[3]/skills/employeerostering.employeerostering.Skill\"/> </employeerostering.employeerostering.Shift> <employeerostering.employeerostering.Shift> <timeslot reference=\"../../employeerostering.employeerostering.Shift/timeslot\"/> <requiredSkill reference=\"../../../employeeList/employeerostering.employeerostering.Employee[2]/skills/employeerostering.employeerostering.Skill\"/> </employeerostering.employeerostering.Shift> 
</shiftList> <skillList> <employeerostering.employeerostering.Skill reference=\"../../employeeList/employeerostering.employeerostering.Employee/skills/employeerostering.employeerostering.Skill\"/> <employeerostering.employeerostering.Skill reference=\"../../employeeList/employeerostering.employeerostering.Employee[3]/skills/employeerostering.employeerostering.Skill\"/> <employeerostering.employeerostering.Skill reference=\"../../employeeList/employeerostering.employeerostering.Employee[2]/skills/employeerostering.employeerostering.Skill\"/> </skillList> <timeslotList> <employeerostering.employeerostering.Timeslot reference=\"../../shiftList/employeerostering.employeerostering.Shift/timeslot\"/> </timeslotList> <dayOffRequestList/> <shiftAssignmentList> <employeerostering.employeerostering.ShiftAssignment> <shift reference=\"../../../shiftList/employeerostering.employeerostering.Shift\"/> </employeerostering.employeerostering.ShiftAssignment> <employeerostering.employeerostering.ShiftAssignment> <shift reference=\"../../../shiftList/employeerostering.employeerostering.Shift[3]\"/> </employeerostering.employeerostering.ShiftAssignment> <employeerostering.employeerostering.ShiftAssignment> <shift reference=\"../../../shiftList/employeerostering.employeerostering.Shift[2]\"/> </employeerostering.employeerostering.ShiftAssignment> </shiftAssignmentList> </employeerostering.employeerostering.EmployeeRoster>", "<solver-instance> <container-id>employee-rostering</container-id> <solver-id>solver1</solver-id> <solver-config-file>employeerostering/employeerostering/EmployeeRosteringSolverConfig.solver.xml</solver-config-file> <status>NOT_SOLVING</status> <score scoreClass=\"org.optaplanner.core.api.score.buildin.hardsoft.HardSoftScore\">0hard/0soft</score> <best-solution class=\"employeerostering.employeerostering.EmployeeRoster\"> <employeeList> <employeerostering.employeerostering.Employee> <name>John</name> <skills> <employeerostering.employeerostering.Skill> <name>reading</name> </employeerostering.employeerostering.Skill> </skills> </employeerostering.employeerostering.Employee> <employeerostering.employeerostering.Employee> <name>Mary</name> <skills> <employeerostering.employeerostering.Skill> <name>writing</name> </employeerostering.employeerostering.Skill> </skills> </employeerostering.employeerostering.Employee> <employeerostering.employeerostering.Employee> <name>Petr</name> <skills> <employeerostering.employeerostering.Skill> <name>speaking</name> </employeerostering.employeerostering.Skill> </skills> </employeerostering.employeerostering.Employee> </employeeList> <shiftList> <employeerostering.employeerostering.Shift> <timeslot> <startTime>2017-01-01T00:00:00</startTime> <endTime>2017-01-01T01:00:00</endTime> </timeslot> <requiredSkill reference=\"../../../employeeList/employeerostering.employeerostering.Employee/skills/employeerostering.employeerostering.Skill\"/> </employeerostering.employeerostering.Shift> <employeerostering.employeerostering.Shift> <timeslot reference=\"../../employeerostering.employeerostering.Shift/timeslot\"/> <requiredSkill reference=\"../../../employeeList/employeerostering.employeerostering.Employee[3]/skills/employeerostering.employeerostering.Skill\"/> </employeerostering.employeerostering.Shift> <employeerostering.employeerostering.Shift> <timeslot reference=\"../../employeerostering.employeerostering.Shift/timeslot\"/> <requiredSkill 
reference=\"../../../employeeList/employeerostering.employeerostering.Employee[2]/skills/employeerostering.employeerostering.Skill\"/> </employeerostering.employeerostering.Shift> </shiftList> <skillList> <employeerostering.employeerostering.Skill reference=\"../../employeeList/employeerostering.employeerostering.Employee/skills/employeerostering.employeerostering.Skill\"/> <employeerostering.employeerostering.Skill reference=\"../../employeeList/employeerostering.employeerostering.Employee[3]/skills/employeerostering.employeerostering.Skill\"/> <employeerostering.employeerostering.Skill reference=\"../../employeeList/employeerostering.employeerostering.Employee[2]/skills/employeerostering.employeerostering.Skill\"/> </skillList> <timeslotList> <employeerostering.employeerostering.Timeslot reference=\"../../shiftList/employeerostering.employeerostering.Shift/timeslot\"/> </timeslotList> <dayOffRequestList/> <shiftAssignmentList/> <score>0hard/0soft</score> </best-solution> </solver-instance>" ]
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/getting_started_with_red_hat_process_automation_manager/workbench-er-tutorial-con
Managing cost data using tagging
Managing cost data using tagging Cost Management Service 1-latest Organize resources and allocate costs with tags Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/cost_management_service/1-latest/html/managing_cost_data_using_tagging/index
Chapter 6. Image tags overview
Chapter 6. Image tags overview An image tag refers to a label or identifier assigned to a specific version or variant of a container image. Container images are typically composed of multiple layers that represent different parts of the image. Image tags are used to differentiate between different versions of an image or to provide additional information about the image. Image tags have the following benefits: Versioning and Releases : Image tags allow you to denote different versions or releases of an application or software. For example, you might have an image tagged as v1.0 to represent the initial release and v1.1 for an updated version. This helps in maintaining a clear record of image versions. Rollbacks and Testing : If you encounter issues with a new image version, you can easily revert to a previous version by specifying its tag. This is helpful during debugging and testing phases. Development Environments : Image tags are beneficial when working with different environments. You might use a dev tag for a development version, qa for quality assurance testing, and prod for production, each with their respective features and configurations. Continuous Integration/Continuous Deployment (CI/CD) : CI/CD pipelines often utilize image tags to automate the deployment process. New code changes can trigger the creation of a new image with a specific tag, enabling seamless updates. Feature Branches : When multiple developers are working on different features or bug fixes, they can create distinct image tags for their changes. This helps in isolating and testing individual features. Customization : You can use image tags to customize images with different configurations, dependencies, or optimizations, while keeping track of each variant. Security and Patching : When security vulnerabilities are discovered, you can create patched versions of images with updated tags, ensuring that your systems are using the latest secure versions. Dockerfile Changes : If you modify the Dockerfile or build process, you can use image tags to differentiate between images built from the previous and updated Dockerfiles. Overall, image tags provide a structured way to manage and organize container images, enabling efficient development, deployment, and maintenance workflows. 6.1. Viewing image tag information by using the UI Use the following procedure to view image tag information using the v2 UI. Prerequisites You have pushed an image tag to a repository. Procedure On the v2 UI, click Repositories . Click the name of a repository. Click the name of a tag. You are taken to the Details page of that tag. The page reveals the following information: Name Repository Digest Vulnerabilities Creation Modified Size Labels How to fetch the image tag Click Security Report to view the tag's vulnerabilities. You can expand an advisory column to open up CVE data. Click Packages to view the tag's packages. Click the name of the repository to return to the Tags page. 6.2. Adding a new image tag to an image by using the UI You can add a new tag to an image in Quay.io. Procedure On the Red Hat Quay v2 UI dashboard, click Repositories in the navigation pane. Click the name of a repository that has image tags. Click the menu kebab, then click Add new tag . Enter a name for the tag, then click Create tag . The new tag is now listed on the Repository Tags page. 6.3. Adding and managing labels by using the UI Administrators can add and manage labels for tags by using the following procedure.
Procedure On the v2 UI dashboard, click Repositories in the navigation pane. Click the name of a repository that has image tags. Click the menu kebab for an image and select Edit labels . In the Edit labels window, click Add new label . Enter a label for the image tag using the key=value format, for example, com.example.release-date=2023-11-14 . Note The following error is returned when failing to use the key=value format: Invalid label format, must be key value separated by = . Click the whitespace of the box to add the label. Optional. Add a second label. Click Save labels to save the label to the image tag. The following notification is returned: Created labels successfully . Optional. Click the same image tag's menu kebab Edit labels X on the label to remove it; alternatively, you can edit the text. Click Save labels . The label is now removed or edited. 6.4. Setting tag expirations Image tags can be set to expire from a Quay.io repository at a chosen date and time using the tag expiration feature. This feature includes the following characteristics: When an image tag expires, it is deleted from the repository. If it is the last tag for a specific image, the image is also set to be deleted. Expiration is set on a per-tag basis. It is not set for a repository as a whole. After a tag is expired or deleted, it is not immediately removed from the registry. This is contingent upon the allotted time designated in the time machine feature, which defines when the tag is permanently deleted, or garbage collected. By default, this value is set at 14 days ; however, the administrator can adjust this time to one of multiple options. Up until the point that garbage collection occurs, tag changes can be reverted. Tag expiration can be set up in one of two ways: By setting the quay.expires-after= label in the Dockerfile when the image is created. This sets a time to expire from when the image is built. By selecting an expiration date on the Quay.io UI. For example: Setting tag expirations can help automate the cleanup of older or unused tags, helping to reduce storage space. 6.4.1. Setting tag expiration from a repository Procedure On the Red Hat Quay v2 UI dashboard, click Repositories in the navigation pane. Click the name of a repository that has image tags. Click the menu kebab for an image and select Change expiration . Optional. Alternatively, you can bulk add expiration dates by clicking the box of multiple tags, and then select Actions Set expiration . In the Change Tags Expiration window, set an expiration date, specifying the day of the week, month, day of the month, and year. For example, Wednesday, November 15, 2023 . Alternatively, you can click the calendar button and manually select the date. Set the time, for example, 2:30 PM . Click Change Expiration to confirm the date and time. The following notification is returned: Successfully set expiration for tag test to Nov 15, 2023, 2:26 PM . On the Red Hat Quay v2 UI Tags page, you can see when the tag is set to expire. For example: 6.4.2. Setting tag expiration from a Dockerfile You can add a label, for example, quay.expires-after=20h to an image tag by using the docker label command to cause the tag to automatically expire after the time that is indicated. The following values for hours, days, or weeks are accepted: 1h 2d 3w Expiration begins from the time that the image is pushed to the registry. Procedure Enter the following docker label command to add a label to the desired image tag.
The label should be in the format quay.expires-after=20h to indicate that the tag should expire after 20 hours. Replace 20h with the desired expiration time. For example: USD docker label quay.expires-after=20h quay-server.example.com/quayadmin/<image>:<tag> 6.5. Fetching an image by tag or digest Quay.io offers multiple ways of pulling images using Docker and Podman clients. Procedure Navigate to the Tags page of a repository. Under Manifest , click the Fetch Tag icon. When the popup box appears, users are presented with the following options: Podman Pull (by tag) Docker Pull (by tag) Podman Pull (by digest) Docker Pull (by digest) Selecting any one of the four options returns a command for the respective client that allows users to pull the image. Click Copy Command to copy the command, which can be used on the command-line interface (CLI). For example: USD podman pull quay.io/quayadmin/busybox:test2 6.6. Viewing Red Hat Quay tag history by using the UI Quay.io offers a comprehensive history of images and their respective image tags. Procedure On the Red Hat Quay v2 UI dashboard, click Repositories in the navigation pane. Click the name of a repository that has image tags. Click Tag History . On this page, you can perform the following actions: Search by tag name Select a date range View tag changes View tag modification dates and the time at which they were changed 6.7. Deleting an image tag Deleting an image tag removes that specific version of the image from the registry. To delete an image tag, use the following procedure. Procedure On the Repositories page of the v2 UI, click the name of the image you want to delete, for example, quay/admin/busybox . Click the More Actions drop-down menu. Click Delete . Note If desired, you could click Make Public or Make Private . Type confirm in the box, and then click Delete . After deletion, you are returned to the Repositories page. Note The deletion of an image tag can be reverted based on the amount of time allotted to the time machine feature. For more information, see "Reverting tag changes". 6.8. Reverting tag changes by using the UI Quay.io offers a comprehensive time machine feature that allows older image tags to remain in the repository for set periods of time so that users can revert changes made to tags. This feature allows users to revert tag changes, like tag deletions. Procedure On the Repositories page of the v2 UI, click the name of the image you want to revert. Click the Tag History tab. Find the point in the timeline at which image tags were changed or removed. Then, click the option under Revert to restore a tag to its image.
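As an illustrative end-to-end sketch of the Dockerfile-based expiration approach described in "Setting tag expiration from a Dockerfile" above, combined with a pull by tag: the namespace, image, and tag names below are placeholders rather than values from this guide, and the only assumption carried over from the text is that quay.expires-after is the label the registry evaluates.
# Add the expiration label at build time by placing this line in the Dockerfile:
#   LABEL quay.expires-after=2d
podman build -t quay.io/<namespace>/<image>:nightly .
podman push quay.io/<namespace>/<image>:nightly
podman pull quay.io/<namespace>/<image>:nightly
After the push, the tag shows its expiration on the Tags page and is removed once that time passes, subject to the time machine window described in "Setting tag expirations".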
[ "docker label quay.expires-after=20h quay-server.example.com/quayadmin/<image>:<tag>", "podman pull quay.io/quayadmin/busybox:test2" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3.12/html/about_quay_io/image-tags-overview
12.3. Enabling LDAP-based Enrollment Profiles
12.3. Enabling LDAP-based Enrollment Profiles To install with LDAP-based profiles, set the pki_profile_in_ldap=True option in the [CA] section of the pkispawn configuration file. Note In this case, profile files will still appear in /var/lib/pki/ instance_name /ca/profiles/ca/ , but will be ignored. To enable LDAP-based profiles on an existing instance, change the following in the instance's CS.cfg : Then, import profiles manually into the database using either the pki command line utility or a custom script.
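A minimal sketch of the installation-time approach, assuming a pkispawn deployment driven by a configuration file; the file name ca_ldap_profiles.cfg and everything in it other than the pki_profile_in_ldap line are examples, not requirements of this section.
[CA]
pki_profile_in_ldap=True
The instance is then created as usual, for example:
pkispawn -s CA -f ca_ldap_profiles.cfg
For an existing instance, the CS.cfg change shown below is followed by importing the profiles, for example with the pki utility's CA profile-add operation or a site-specific script.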
[ "subsystem.1.class=com.netscape.cmscore.profile.LDAPProfileSubsystem" ]
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/planning_installation_and_deployment_guide/sect-additional-install-options-ldap-based-enrollment-profiles
Red Hat OpenStack Services on OpenShift Certification Policy Guide
Red Hat OpenStack Services on OpenShift Certification Policy Guide Red Hat Software Certification 2025 For Use with Red Hat OpenStack 18 Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_software_certification/2025/html/red_hat_openstack_services_on_openshift_certification_policy_guide/index
14.2. Setting Access ACLs
14.2. Setting Access ACLs There are two types of ACLs: access ACLs and default ACLs . An access ACL is the access control list for a specific file or directory. A default ACL can only be associated with a directory; if a file within the directory does not have an access ACL, it uses the rules of the default ACL for the directory. Default ACLs are optional. ACLs can be configured: Per user Per group Via the effective rights mask For users not in the user group for the file The setfacl utility sets ACLs for files and directories. Use the -m option to add or modify the ACL of a file or directory: Rules ( <rules> ) must be specified in the following formats. Multiple rules can be specified in the same command if they are separated by commas. u: <uid> : <perms> Sets the access ACL for a user. The user name or UID may be specified. The user may be any valid user on the system. g: <gid> : <perms> Sets the access ACL for a group. The group name or GID may be specified. The group may be any valid group on the system. m: <perms> Sets the effective rights mask. The mask is the union of all permissions of the owning group and all of the user and group entries. o: <perms> Sets the access ACL for users other than the ones in the group for the file. White space is ignored. Permissions ( <perms> ) must be a combination of the characters r , w , and x for read, write, and execute. If a file or directory already has an ACL, and the setfacl command is used, the additional rules are added to the existing ACL or the existing rule is modified. For example, to give read and write permissions to user andrius: To remove all the permissions for a user, group, or others, use the -x option and do not specify any permissions: For example, to remove all permissions from the user with UID 500:
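As a short illustration of combining several rules in one command (the user, group, and file names here are examples), read and write access for one user and read-only access for a group can be set together, and the getfacl utility can then be used to display the resulting ACL:
setfacl -m u:andrius:rw,g:devel:r /project/somefile
getfacl /project/somefile
The getfacl output lists the owner entries, each named user and group entry, the effective rights mask, and the entry for other users, which makes it easy to confirm that the rules were applied as intended.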
[ "setfacl -m <rules> <files>", "setfacl -m u:andrius:rw /project/somefile", "setfacl -x <rules> <files>", "setfacl -x u:500 /project/somefile" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/Access_Control_Lists-Setting_Access_ACLs
function::ns_euid
function::ns_euid Name function::ns_euid - Returns the effective user ID of a target process as seen in a user namespace Synopsis Arguments None Description This function returns the effective user ID of the target process as seen in the target user namespace if provided, or the stap process namespace.
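A minimal, illustrative one-liner follows; the probe point and output format are arbitrary choices, not part of this reference entry. It prints the executable name and the namespace-adjusted effective user ID each time a getuid system call is made:
stap -e 'probe syscall.getuid { printf("%s ns_euid=%d\n", execname(), ns_euid()) }'
Because ns_euid takes no arguments, the value reported is resolved against the target user namespace if one was provided, otherwise against the stap process namespace, as described above.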
[ "ns_euid:long()" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-ns-euid
Chapter 7. Deploying the Shared File Systems service with CephFS-NFS
Chapter 7. Deploying the Shared File Systems service with CephFS-NFS When you use the Shared File Systems service (manila) with Ceph File System (CephFS) through an NFS gateway (NFS-Ganesha), you can use the same Red Hat Ceph Storage cluster that you use for block and object storage to provide file shares through the NFS protocol. CephFS-NFS has been fully supported since Red Hat OpenStack Platform (RHOSP) version 13. The RHOSP Shared File Systems service (manila) with CephFS-NFS for RHOSP 17.0 and later is supported for use with Red Hat Ceph Storage version 5.2 or later. For more information about how to determine the version of Ceph Storage installed on your system, see Red Hat Ceph Storage releases and corresponding Ceph package versions . CephFS is the highly scalable, open-source distributed file system component of Red Hat Ceph Storage, a unified distributed storage platform. Ceph Storage implements object, block, and file storage using Reliable Autonomic Distributed Object Store (RADOS). CephFS, which is POSIX compatible, provides file access to a Ceph Storage cluster. The Shared File Systems service enables users to create shares in CephFS and access them with NFS 4.1 through user-space NFS server software, NFS-Ganesha. NFS-Ganesha controls access to the shares and exports them to clients through the NFS 4.1 protocol. The Shared File Systems service manages the life cycle of these shares in RHOSP. When cloud administrators configure the service to use CephFS-NFS, these file shares come from the CephFS cluster, but they are created and accessed as familiar NFS shares. For more information about the Shared File Systems service, see Configuring the Shared File Systems service (manila) in Configuring persistent storage . 7.1. Prerequisites You install the Shared File Systems service on Controller nodes, as is the default behavior. You must create a StorageNFS network for storage traffic through RHOSP director. You install the NFS-Ganesha gateway service on the Pacemaker cluster of the Controller nodes. You configure only a single instance of a CephFS back end to use the Shared File Systems service. You can use other non-CephFS back ends with the single CephFS back end. 7.2. CephFS-NFS driver The CephFS-NFS back end in the Shared File Systems service (manila) is composed of Ceph metadata servers (MDS), the NFS gateway (NFS-Ganesha), and the Red Hat Ceph Storage cluster service components. The Shared File Systems service CephFS-NFS driver uses NFS-Ganesha to provide NFSv4 protocol access to CephFS shares. The Ceph MDS service maps the directories and file names of the file system to objects that are stored in RADOS clusters. NFS gateways can serve NFS file shares with different storage back ends, such as Ceph. The NFS-Ganesha service runs on the Controller nodes with the Ceph services. Deployment with an isolated network is optional but recommended. In this scenario, instances are booted with at least two NICs: one NIC connects to the project router and the second NIC connects to the StorageNFS network, which connects directly to NFS-Ganesha. The instance mounts shares by using the NFS protocol. CephFS shares that are hosted on Ceph Object Storage Daemon (OSD) nodes are provided through the NFS gateway. NFS-Ganesha improves security by preventing user instances from directly accessing the MDS and other Ceph services. Instances do not have direct access to the Ceph daemons. 7.3. 
Red Hat Ceph Storage services and client access When you use Red Hat Ceph Storage to provide object and block storage, you require the following services for deployment: Ceph monitor (MON) Object Storage Daemon (OSD) Rados Gateway (RGW) Manager For native CephFS, you also require the Ceph Storage Metadata Service (MDS), and for CephFS-NFS, you require the NFS-Ganesha service as a gateway to native CephFS using the NFS protocol. NFS-Ganesha runs in its own container that interfaces both to the Ceph public network and to a new isolated network, StorageNFS. If you use the composable network feature of Red Hat OpenStack Platform (RHOSP) director, you can deploy the isolated network and connect it to the Controller nodes. As the cloud administrator, you can configure the network as a Networking (neutron) provider network. NFS-Ganesha accesses CephFS over the Ceph public network and binds its NFS service using an address on the StorageNFS network. To access NFS shares, you provision Compute (nova) instances with an additional NIC that connects to the Storage NFS network. Export locations for CephFS shares appear as standard NFS IP:<path> tuples that use the NFS-Ganesha server VIP on the StorageNFS network. The network uses the IP address of the instance to perform access control on the NFS shares. Networking (neutron) security groups prevent an instance that belongs to project 1 from accessing an instance that belongs to project 2 over the StorageNFS network. Projects share the same CephFS file system, but project data path separation is enforced because instances can access files only under export trees: /path/to/share1/... , /path/to/share2/... . 7.4. Shared File Systems service with CephFS-NFS fault tolerance When Red Hat OpenStack Platform (RHOSP) director starts the Red Hat Ceph Storage service daemons, they manage their own high availability (HA) state and, in general, there are multiple instances of these daemons running. By contrast, in this release, only one instance of NFS-Ganesha can serve file shares at a time. To avoid a single point of failure in the data path for CephFS-NFS shares, NFS-Ganesha runs on a RHOSP Controller node in an active-passive configuration that is managed by a Pacemaker-Corosync cluster. NFS-Ganesha acts across the Controller nodes as a virtual service with a virtual service IP address. If a Controller node fails or the service on a particular Controller node fails and cannot be recovered on that node, Pacemaker-Corosync starts a new NFS-Ganesha instance on a different Controller node using the same virtual IP address. Existing client mounts are preserved because they use the virtual IP address for the export location of shares. Using default NFS mount-option settings and NFS 4.1 or later, after a failure, TCP connections are reset and clients reconnect. I/O operations temporarily stop responding during failover, but they do not fail. Application I/O also stops responding but resumes after failover completes. New connections, new lock states, and so on are refused until after a grace period of up to 90 seconds during which time the server waits for clients to reclaim their locks. NFS-Ganesha keeps a list of the clients and exits the grace period earlier if all clients reclaim their locks. 7.5. 
CephFS-NFS installation A typical CephFS-NFS installation in a Red Hat OpenStack Platform (RHOSP) environment includes the following configurations: OpenStack Controller nodes that are running the following: Ceph monitor (MON) Containerized Ceph metadata server (MDS) Shared File Systems service (manila) NFS-Ganesha Some of these services can coexist on the same node or can have one or more dedicated nodes. A Red Hat Ceph Storage cluster with containerized object storage daemons (OSDs) running on Ceph Storage nodes An isolated StorageNFS network that provides access from projects to the NFS-Ganesha service for NFS share provisioning Important The Shared File Systems service with CephFS-NFS fully supports serving shares to Red Hat OpenShift Container Platform through Manila CSI. This solution is not intended for large scale deployments. For important recommendations, see https://access.redhat.com/articles/6667651 . The Shared File Systems service provides APIs that allow the projects to request file system shares, which are fulfilled by driver modules. If you use the driver for CephFS, manila.share.drivers.cephfs.driver.CephFSDriver , you can use the Shared File Systems service with a CephFS back end. RHOSP director configures the driver to deploy NFS-Ganesha so that the CephFS shares are presented through the NFS 4.1 protocol. While preparing your CephFS NFS deployment, you will require the isolated StorageNFS network. You can use director to create this isolated StorageNFS network. For more information, see Configuring overcloud networking in Installing and managing Red Hat OpenStack Platform with director . Manual configuration options for Shared File Systems service back ends You can manually configure the Shared File Systems service by editing the node file /etc/manila/manila.conf . However, RHOSP director can override any settings in future overcloud updates. You can add CephFS-NFS to an externally deployed Ceph Storage cluster, which was not configured by director. Currently, you can only define one CephFS back end in director. For more information, see Integrating an overcloud with Ceph Storage in Integrating the overcloud with an existing Red Hat Ceph Storage Cluster . 7.6. File shares The Shared File Systems service (manila), Ceph File System (CephFS), and CephFS-NFS manage shares differently. The Shared File Systems service provides shares, where a share is an individual file system namespace and a unit of storage with a defined size. Shared file system storage allows multiple clients to connect, read, and write data to any given share, but you must give each client access to the share through the Shared File Systems service access control APIs before they can connect. CephFS manages a share like a directory with a defined quota and a layout that points to a particular storage pool or namespace. CephFS quotas limit the size of a directory to the size of the share that the Shared File Systems service creates. You control access to CephFS-NFS shares by specifying the IP address of the client. With CephFS-NFS, file shares are provisioned and accessed through the NFS protocol. The NFS protocol also manages security. 7.7. Network isolation for CephFS-NFS For security, isolate NFS traffic to a separate network when using CephFS-NFS so that the NFS server is accessible only through the isolated network. Deployers can restrict the isolated network to a select group of projects in the cloud. Red Hat OpenStack (RHOSP) director ships with support to deploy a dedicated StorageNFS network. 
Before you deploy the overcloud to enable CephFS-NFS for use with the Shared File Systems service, you must create the following: An isolated network for NFS traffic, called StorageNFS A Virtual IP (VIP) on the isolated network A custom role for the Controller nodes that configures the nodes with the StorageNFS network For more information about creating the isolated network, the VIP, and the custom role, see Configuring overcloud networking in Installing and managing Red Hat OpenStack Platform with director . Important It is possible to omit the creation of an isolated network for NFS traffic. However, if you omit the StorageNFS network in a production deployment that has untrusted clients, director can connect the Ceph NFS server on any shared, non-isolated network, such as an external network. Shared networks are usually routable to all user private networks in the cloud. When the NFS server is accessed through a routed network in this manner, you cannot control access to Shared File Systems service shares by applying client IP access rules. Users must allow access to their shares by using the generic 0.0.0.0/0 IP. Because of the generic IP, anyone who discovers the export path can mount the shares. 7.8. Deploying the CephFS-NFS environment When you are ready to deploy your environment, use the openstack overcloud deploy command with the custom environments and roles required to run CephFS with NFS-Ganesha. The overcloud deploy command has the following options in addition to other required options. Action Option Additional information Reference the deployed networks including the StorageNFS network -e /home/stack/templates/overcloud-networks-deployed.yaml Configuring overcloud networking in Installing and managing Red Hat OpenStack Platform with director . You can omit the StorageNFS network option if you do not want to isolate NFS traffic to a separate network. Reference the Virtual IPs created on the deployed networks, including the VIP for the StorageNFS network -e /home/stack/templates/overcloud-vip-deployed.yaml Configuring overcloud networking in Installing and managing Red Hat OpenStack Platform with director . You can omit this option if you do not want to isolate NFS traffic to a separate network. Add the custom roles defined in the roles_data.yaml file. The deployment command uses the custom roles to assign networks to the Controller nodes -r /home/stack/roles_data.yaml You can omit this option if you do not want to isolate NFS traffic to a separate network. Deploy the Ceph daemons. -e /usr/share/openstack-tripleo-heat-templates/environments/cephadm/cephadm.yaml Initiating overcloud deployment in Deploying Red Hat Ceph Storage and Red Hat OpenStack Platform together with director Deploy the Ceph metadata server with ceph-mds.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/cephadm/ceph-mds.yaml Initiating overcloud deployment in Deploying Red Hat Ceph Storage and Red Hat OpenStack Platform together with director Deploy the Shared File Systems service (manila) with the CephFS-NFS back end. Configure NFS-Ganesha with director. 
-e /usr/share/openstack-tripleo-heat-templates/environments/manila-cephfsganesha-config.yaml The manila-cephfsganesha-config.yaml environment file The following example shows an openstack overcloud deploy command with options to deploy CephFS with NFS-Ganesha, a Ceph Storage cluster, and Ceph MDS: For more information about the openstack overcloud deploy command, see Provisioning and deploying your overcloud in Installing and managing Red Hat OpenStack Platform with director . 7.9. CephFS-NFS back-end environment file The environment file for defining a CephFS-NFS back end, manila-cephfsganesha-config.yaml , is located in the following path of an undercloud node: /usr/share/openstack-tripleo-heat-templates/environments/manila-cephfsganesha-config.yaml . The manila-cephfsganesha-config.yaml environment file contains settings relevant to the deployment of the Shared File Systems service (manila). The back-end default settings work for most environments. The following example shows the default values that director uses during deployment of the Shared File Systems service: The parameter_defaults header signifies the start of the configuration. To override default values set in resource_registry , copy this manila-cephfsganesha-config.yaml environment file to your local environment file directory, /home/stack/templates/ , and edit the parameter settings as required by your environment. This includes values set by OS::Tripleo::Services::ManilaBackendCephFs , which sets defaults for a CephFS back end. 1 ManilaCephFSBackendName sets the name of the manila configuration of your CephFS back end. In this case, the default back-end name is cephfs . 2 ManilaCephFSDriverHandlesShareServers controls the lifecycle of the share server. When set to false , the driver does not handle the lifecycle. This is the only supported option. 3 ManilaCephFSCephFSAuthId defines the Ceph auth ID that director creates for the manila service to access the Ceph cluster. For more information about environment files, see Environment files in Installing and managing Red Hat OpenStack Platform with director .
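After a successful deployment, the share workflow that the preceding sections describe can be sketched as follows; this is a hedged example, not part of the deployment procedure: the share name, size, share type, and client subnet are placeholders, and it assumes the manila client is installed and that the default share type maps to the CephFS-NFS back end.
manila create NFS 10 --name cephfs_nfs_share --share-type default
manila access-allow cephfs_nfs_share ip 198.51.100.0/24
manila share-export-location-list cephfs_nfs_share
The export location returned by the last command is the NFS-Ganesha virtual IP and path that instances attached to the StorageNFS network use to mount the share.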
[ "[stack@undercloud ~]USD openstack overcloud deploy --templates /usr/share/openstack-tripleo-heat-templates -r /home/stack/roles_data.yaml -e /home/stack/templates/overcloud-networks-deployed.yaml -e /home/stack/templates/overcloud-vip-deployed.yaml -e /home/stack/containers-default-parameters.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml -e /home/stack/network-environment.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/cephadm/cephadm.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/cephadm/ceph-mds.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/manila-cephfsganesha-config.yaml", "[stack@undercloud ~]USD cat /usr/share/openstack-tripleo-heat-templates/environments/manila-cephfsganesha-config.yaml A Heat environment file which can be used to enable a a Manila CephFS-NFS driver backend. resource_registry: OS::TripleO::Services::ManilaApi: ../deployment/manila/manila-api-container-puppet.yaml OS::TripleO::Services::ManilaScheduler: ../deployment/manila/manila-scheduler-container-puppet.yaml # Only manila-share is pacemaker managed: OS::TripleO::Services::ManilaShare: ../deployment/manila/manila-share-pacemaker-puppet.yaml OS::TripleO::Services::ManilaBackendCephFs: ../deployment/manila/manila-backend-cephfs.yaml # ceph-nfs (ganesha) service is installed and configured by Director # but it's still managed by pacemaker OS::TripleO::Services::CephNfs: ../deployment/cephadm/ceph-nfs.yaml parameter_defaults: ManilaCephFSBackendName: cephfs 1 ManilaCephFSDriverHandlesShareServers: false 2 ManilaCephFSCephFSAuthId: 'manila' 3 # manila cephfs driver supports either native cephfs backend - 'CEPHFS' # (users mount shares directly from ceph cluster), or nfs-ganesha backend - # 'NFS' (users mount shares through nfs-ganesha server) ManilaCephFSCephFSProtocolHelperType: 'NFS'" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/deploying_red_hat_ceph_storage_and_red_hat_openstack_platform_together_with_director/assembly_deploying-the-Shared-File-Systems-service-with-CephFS-NFS_deployingcontainerizedrhcs
function::usymname
function::usymname Name function::usymname - Return the symbol of an address in the current task. Synopsis Arguments addr The address to translate. Description Returns the (function) symbol name associated with the given address if known. If not known it will return the hex string representation of addr.
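A hedged example of how this function is typically used inside a user-space probe: the target binary /bin/ls and the probed function main are placeholders, resolving names requires the target's symbol or debug information, and uaddr(), the current user-space address, is assumed to be available from the standard tapset.
stap -e 'probe process("/bin/ls").function("main") { printf("entered %s\n", usymname(uaddr())) }' -c /bin/ls
If the symbol cannot be resolved, the hex representation of the address is printed instead, as noted in the description.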
[ "usymname:string(addr:long)" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-usymname
Chapter 5. Fixed issues
Chapter 5. Fixed issues The issues fixed in AMQ Streams 2.1 are shown in the following table. For details of the issues fixed in Kafka 3.1.0, refer to the Kafka 3.1.0 Release Notes . Issue Number Description ENTMQST-3595 Cluster operator missing Java options to be passed to Kafka bridge ENTMQST-3835 Connector gets restarted on every reconciliation when tasks.max is set ENTMQST-3763 Errors when scaling-down ZooKeeper nodes ENTMQST-3422 Failure to run on FIPS-enabled clusters ENTMQST-3417 Fix leaking keystores/truststores in ZooKeeperScaler ENTMQST-3583 JVM options provided by the Cluster Operator are ignored ENTMQST-3345 Kafka upgrade, with inter broker protocol and log message format as M.m.p, fails with misleading error ENTMQST-3411 KafkaExporter, CruiseControl and EntityOperator pods are rolled on clients CA renewal ENTMQST-3325 KafkaMirrorMaker2 conditions do not reflect the state of the MM2 connectors ENTMQST-3504 OptimizationFailureException due to invalid CPU utilization ENTMQST-3585 Pass Java system properties to Cruise Control ENTMQST-3856 Rack Awareness doesn't work for connectors ENTMQST-3354 Set the base image in Kafka Connect Build properly when it is specified in the custom resource ENTMQST-3826 The /tmp volume is not big enough for the compression libraries ENTMQST-3584 The strimzi_resources{kind="Kafka"} metric is not removed when the Kafka related namespace is deleted ENTMQST-3839 The broker is stuck in an inconsistent state after ZooKeeper disconnection ENTMQST-2331 ZooKeeper, Kafka, and EntityOperator certificates are not renewed using your own cluster CA certificate Table 5.1. Fixed common vulnerabilities and exposures (CVEs) Issue Number Description ENTMQST-2851 CVE-2021-3520 lz4: memory corruption due to an integer overflow bug caused by memmove argument ENTMQST-3631 CVE-2021-43797 netty: control chars in header names may lead to HTTP request smuggling
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.1/html/release_notes_for_amq_streams_2.1_on_openshift/fixed-issues-str
B.2. Constraints Reference
B.2. Constraints Reference Constraints are used to define the allowable contents of a certificate and the values associated with that content. This section lists the predefined constraints with complete definitions of each. B.2.1. Basic Constraints Extension Constraint The Basic Constraints extension constraint checks if the basic constraint in the certificate request satisfies the criteria set in this constraint. Table B.25. Basic Constraints Extension Constraint Configuration Parameters Parameter Description basicConstraintsCritical Specifies whether the extension can be marked critical or noncritical. Select true to mark this extension critical; select false to prevent this extension from being marked critical. Selecting a hyphen - , implies no criticality preference. basicConstraintsIsCA Specifies whether the certificate subject is a CA. Select true to require a value of true for this parameter (is a CA); select false to disallow a value of true for this parameter; select a hyphen, - , to indicate no constraints are placed for this parameter. basicConstraintsMinPathLen Specifies the minimum allowable path length, the minimum number of CA certificates that may be chained below (subordinate to) the subordinate CA certificate being issued. The path length affects the number of CA certificates used during certificate validation. The chain starts with the end-entity certificate being validated and moves up. This parameter has no effect if the extension is set in end-entity certificates. The permissible values are 0 or n . The value must be less than the path length specified in the Basic Constraints extension of the CA signing certificate. 0 specifies that no subordinate CA certificates are allowed below the subordinate CA certificate being issued; only an end-entity certificate may follow in the path. n must be an integer greater than zero. This is the minimum number of subordinate CA certificates allowed below the subordinate CA certificate being used. basicConstraintsMaxPathLen Specifies the maximum allowable path length, the maximum number of CA certificates that may be chained below (subordinate to) the subordinate CA certificate being issued. The path length affects the number of CA certificates used during certificate validation. The chain starts with the end-entity certificate being validated and moves up. This parameter has no effect if the extension is set in end-entity certificates. The permissible values are 0 or n . The value must be greater than the path length specified in the Basic Constraints extension of the CA signing certificate. 0 specifies that no subordinate CA certificates are allowed below the subordinate CA certificate being issued; only an end-entity certificate may follow in the path. n must be an integer greater than zero. This is the maximum number of subordinate CA certificates allowed below the subordinate CA certificate being used. If the field is blank, the path length defaults to a value determined by the path length set on the Basic Constraints extension in the issuer's certificate. If the issuer's path length is unlimited, the path length in the subordinate CA certificate is also unlimited. If the issuer's path length is an integer greater than zero, the path length in the subordinate CA certificate is set to a value one less than the issuer's path length; for example, if the issuer's path length is 4, the path length in the subordinate CA certificate is set to 3. B.2.2.
CA Validity Constraint The CA Validity constraint checks if the validity period in the certificate template is within the CA's validity period. If the validity period of the certificate is outside the CA certificate's validity period, the constraint is rejected. B.2.3. Extended Key Usage Extension Constraint The Extended Key Usage extension constraint checks if the Extended Key Usage extension on the certificate satisfies the criteria set in this constraint. Table B.26. Extended Key Usage Extension Constraint Configuration Parameters Parameter Description exKeyUsageCritical When set to true , the extension can be marked as critical. When set to false , the extension can be marked noncritical. exKeyUsageOIDs Specifies the allowable OIDs that identify a key-usage purpose. Multiple OIDs can be added in a comma-separated list. B.2.4. Extension Constraint This constraint implements the general extension constraint. It checks if the extension is present. Table B.27. Extension Constraint Parameter Description extCritical Specifies whether the extension can be marked critical or noncritical. Select true to mark the extension critical; select false to mark it noncritical. Select - to enforce no preference. extOID The OID of an extension that must be present in the cert to pass the constraint. B.2.5. Key Constraint This constraint checks the size of the key for RSA keys, and the name of the elliptic curve for EC keys. When used with RSA keys the KeyParameters parameter contains a comma-separated list of legal key sizes, and with EC Keys the KeyParameters parameter contains a comma-separated list of available ECC curves. Table B.28. Key Constraint Configuration Parameters Parameter Description keyType Gives a key type; this is set to - by default and uses an RSA key system. The choices are rsa and ec. If the key type is specified and not identified by the system, the constraint will be rejected. KeyParameters Defines the specific key parameters. The parameters which are set for the key differ, depending on the value of the keyType parameter (meaning, depending on the key type). With RSA keys, the KeyParameters parameter contains a comma-separated list of legal key sizes. With ECC keys, the KeyParameters parameter contains a comma-separated list of available ECC curves. B.2.6. Key Usage Extension Constraint The Key Usage extension constraint checks if the key usage constraint in the certificate request satisfies the criteria set in this constraint. Table B.29. Key Usage Extension Constraint Configuration Parameters Parameter Description keyUsageCritical Select true to mark this extension critical; select false to mark it noncritical. Select - for no preference. keyUsageDigitalSignature Specifies whether to sign SSL client certificates and S/MIME signing certificates. Select true to mark this as set; select false to keep this from being set; select a hyphen, - , to indicate no constraints are placed for this parameter. keyUsageNonRepudiation Specifies whether to set S/MIME signing certificates. Select true to mark this as set; select false to keep this from being set; select a hyphen, - , to indicate no constraints are placed for this parameter. Warning Using this bit is controversial. Carefully consider the legal consequences of its use before setting it for any certificate. keyEncipherment Specifies whether to set the extension for SSL server certificates and S/MIME encryption certificates.
Select true to mark this as set; select false to keep this from being set; select a hyphen, - , to indicate no constraints are placed for this parameter. keyUsageDataEncipherment Specifies whether to set the extension when the subject's public key is used to encrypt user data, instead of key material. Select true to mark this as set; select false to keep this from being set; select a hyphen, - , to indicate no constraints are placed for this parameter. keyUsageKeyAgreement Specifies whether to set the extension whenever the subject's public key is used for key agreement. Select true to mark this as set; select false to keep this from being set; select a hyphen, - , to indicate no constraints are placed for this parameter. keyUsageCertsign Specifies whether the extension applies for all CA signing certificates. Select true to mark this as set; select false to keep this from being set; select a hyphen, - , to indicate no constraints are placed for this parameter. keyUsageCRLSign Specifies whether to set the extension for CA signing certificates that are used to sign CRLs. Select true to mark this as set; select false to keep this from being set; select a hyphen, - , to indicate no constraints are placed for this parameter. keyUsageEncipherOnly Specifies whether to set the extension if the public key is to be used only for encrypting data. If this bit is set, keyUsageKeyAgreement should also be set. Select true to mark this as set; select false to keep this from being set; select a hyphen, - , to indicate no constraints are placed for this parameter. keyUsageDecipherOnly Specifies whether to set the extension if the public key is to be used only for deciphering data. If this bit is set, keyUsageKeyAgreement should also be set. Select true to mark this as set; select false to keep this from being set; select a hyphen, - , to indicate no constraints are placed for this parameter. B.2.7. Netscape Certificate Type Extension Constraint Warning This constraint is obsolete. Instead of using the Netscape Certificate Type extension constraint, use the Key Usage extension or Extended Key Usage extension. The Netscape Certificate Type extension constraint checks if the Netscape Certificate Type extension in the certificate request satisfies the criteria set in this constraint. B.2.8. No Constraint This constraint implements no constraint. When chosen along with a default, there are no constraints placed on that default. B.2.9. Renewal Grace Period Constraint The Renewal Grace Period Constraint sets rules on when a user can renew a certificate based on its expiration date. For example, users cannot renew a certificate until a certain time before it expires or if it goes past a certain time after its expiration date. One important thing to remember when using this constraint is that this constraint is set on the original enrollment profile , not the renewal profile. The rules for the renewal grace period are part of the original certificate and are carried over and applied for any subsequent renewals. This constraint is only available with the No Default extension. Table B.30. Renewal Grace Period Constraint Configuration Parameters Parameter Description renewal.graceAfter Sets the period, in days, after the certificate expires that it can be submitted for renewal. If the certificate has been expired longer than that time, then the renewal request is rejected. If no value is given, there is no limit.
renewal.graceBefore Sets the period, in days, before the certificate expires that it can be submitted for renewal. If the certificate is not that close to its expiration date, then the renewal request is rejected. If no value is given, there is no limit. B.2.10. Signing Algorithm Constraint The Signing Algorithm constraint checks if the signing algorithm in the certificate request satisfies the criteria set in this constraint. Table B.31. Signing Algorithms Constraint Configuration Parameters Parameter Description signingAlgsAllowed Sets the signing algorithms that can be specified to sign the certificate. The algorithms can be any or all of the following: MD2withRSA MD5withRSA SHA256withRSA SHA512withRSA SHA256withEC SHA384withEC SHA512withEC B.2.11. Subject Name Constraint The Subject Name constraint checks if the subject name in the certificate request satisfies the criteria. Table B.32. Subject Name Constraint Configuration Parameters Parameter Description Pattern Specifies a regular expression or other string to build the subject DN. Subject Names and Regular Expressions The regular expression for the Subject Name Constraint is matched by the Java facility for matching regular expressions. The format for these regular expressions is listed in https://docs.oracle.com/javase/7/docs/api/java/util/regex/Pattern.html . This allows wildcards such as asterisks ( * ) to search for any number of the characters and periods ( . ) to search for any type character. For example, if the pattern of the subject name constraint is set to uid=.* , the certificate profile framework checks if the subject name in the certificate request matches the pattern. A subject name like uid=user, o=Example, c=US satisfies the pattern uid=.* . The subject name cn=user, o=example,c=US does not satisfy the pattern. uid=.* means the subject name must begin with the uid attribute; the period-asterisk ( .* ) wildcards allow any type and number of characters to follow uid . It is possible to require internal patterns, such as .*ou=Engineering.* , which requires the ou=Engineering attribute with any kind of string before and after it. This matches cn=jdoe,ou=internal,ou=west coast,ou=engineering,o="Example Corp",st=NC as well as uid=bjensen,ou=engineering,dc=example,dc=com . Lastly, it is also possible to allow requests that are either one string or another by setting a pipe sign ( | ) between the options. For example, to permit subject names that contain either ou=engineering,ou=people or ou=engineering,o="Example Corp" , the pattern is .*ou=engineering,ou=people.* | .*ou=engineering,o="Example Corp".* . Note For constructing a pattern which uses a special character, such as a period ( . ), escape the character with a back slash ( \ ). For example, to search for the string o="Example Inc." , set the pattern to o="Example Inc\." . Subject Names and the UID or CN in the Certificate Request The pattern that is used to build the subject DN can also be based on the CN or UID of the person requesting the certificate. The Subject Name Constraint sets the pattern of the CN (or UID) to recognize in the DN of the certificate request, and then the Subject Name Default builds on that CN to create the subject DN of the certificate, using a predefined directory tree. For example, to use the CN of the certificate request: B.2.12. Unique Key Constraint This constraint checks that the public key is unique. Table B.33.
Unique Key Constraints Parameters Parameter Description allowSameKeyRenewal A request is considered a renewal and is accepted if this parameter is set to true , if a public key is not unique, and if the subject DN matches an existing certificate. However, if the public key is a duplicate and does not match an existing Subject DN, the request is rejected. When the parameter is set to false , a duplicate public key request will be rejected. B.2.13. Unique Subject Name Constraint The Unique Subject Name constraint restricts the server from issuing multiple certificates with the same subject names. When a certificate request is submitted, the server automatically checks the nickname against other issued certificate nicknames. This constraint can be applied to certificate enrollment and renewal through the end-entities' page. Certificates cannot have the same subject name unless one certificate is expired or revoked (and not on hold). So, active certificates cannot share a subject name, with one exception: if certificates have different key usage bits, then they can share the same subject name, because they have different uses. Table B.34. Unique Subject Name Constraint Configuration Parameters Parameter Description enableKeyUsageExtensionChecking Optional setting which allows certificates to have the same subject name as long as their key usage settings are different. This is either true or false . The default is true , which allows duplicate subject names. B.2.14. Validity Constraint The Validity constraint checks if the validity period in the certificate request satisfies the criteria. The parameters provided must be sensible values. For instance, a notBefore parameter that provides a time which has already passed will not be accepted, and a notAfter parameter that provides a time earlier than the notBefore time will not be accepted. Table B.35. Validity Constraint Configuration Parameters Parameter Description range The range of the validity period. This is an integer which sets the number of days. The difference (in days) between the notBefore time and the notAfter time must be less than the range value, or this constraint will be rejected. notBeforeCheck Verifies that the range is not within the grace period. When the NotBeforeCheck Boolean parameter is set to true, the system will check the notBefore time is not greater than the current time plus the notBeforeGracePeriod value. If the notBeforeTime is not between the current time and the notBeforeGracePeriod value, this constraint will be rejected. notBeforeGracePeriod The grace period (in seconds) after the notBefore time. If the notBeforeTime is not between the current time and the notBeforeGracePeriod value, this constraint will be rejected. This constraint is only checked if the notBeforeCheck parameter has been set to true. notAfterCheck Verifies whether the given time is not after the expiration period. When the notAfterCheck Boolean parameter is set to true, the system will check the notAfter time is not greater than the current time. If the current time exceeds the notAfter time, this constraint will be rejected.
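A hedged illustration of how one of these constraints is expressed in a profile configuration: the policy set name serverCertSet and index 2 are placeholders, the class_id value validityConstraintImpl is assumed to be the implementation name for the Validity Constraint, and the parameter names follow Table B.35.
policyset.serverCertSet.2.constraint.class_id=validityConstraintImpl
policyset.serverCertSet.2.constraint.name=Validity Constraint
policyset.serverCertSet.2.constraint.params.range=365
policyset.serverCertSet.2.constraint.params.notBeforeCheck=false
policyset.serverCertSet.2.constraint.params.notAfterCheck=false
In a real profile, the constraint is paired with a corresponding default in the same policy set, as in the Subject Name Constraint example that accompanies this section.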
[ "policyset.serverCertSet.1.constraint.class_id=subjectNameConstraintImpl policyset.serverCertSet.1.constraint.name=Subject Name Constraint policyset.serverCertSet.1.constraint.params. pattern=CN=[^,]+,.+ policyset.serverCertSet.1.constraint.params.accept=true policyset.serverCertSet.1.default.class_id=subjectNameDefaultImpl policyset.serverCertSet.1.default.name=Subject Name Default policyset.serverCertSet.1.default.params. name=CN=USDrequest.req_subject_name.cnUSD,DC=example, DC=com" ]
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide/constraints_reference
11.3. Editing Users
11.3. Editing Users Editing Users in the Web UI Select the Identity Users tab. Search the Active users , Stage users , or Preserved users category to find the user to edit. Click the name of the user to edit. Figure 11.10. Selecting a User to Edit Edit the user attribute fields as required. Click Save at the top of the page. Figure 11.11. Save Modified User Attributes After you update user details in the web UI, the new values are not synchronized immediately. It might take up to approximately 5 minutes before the new values are reflected at the client system. Editing Users from the Command Line To modify a user in the active or preserved states, use the ipa user-mod command. To modify a user in the stage state, use the ipa stageuser-mod command. The ipa user-mod and ipa stageuser-mod commands accept the following options: the user login, which identifies the user account to be modified options specifying the new attribute values For a complete list of user entry attributes that can be modified from the command line, see the list of options accepted by ipa user-mod and ipa stageuser-mod . To display the list of options, run the commands with the --help option added. Simply adding an attribute option to ipa user-mod or ipa stageuser-mod overwrites the current attribute value. For example, the following changes a user's title or adds a new title if the user did not yet have a title specified: For LDAP attributes that are allowed to have multiple values, IdM also accepts multiple values. For example, a user can have two email addresses saved in their user account. To add an additional attribute value without overwriting the existing value, use the --addattr option together with the option to specify the new attribute value. For example, to add a new email address to a user account that already has an email address specified: To set two attribute values at the same time, use the --addattr option twice: The ipa user-mod command also accepts the --setattr option for setting attribute values and the --delattr option for deleting attribute values. These options are used in a way similar to using --addattr . For details, see the output of the ipa user-mod --help command. Note To overwrite the current email address for a user, use the --email option. However, to add an additional email address, use the mail option with the --addattr option:
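Two further hedged examples that follow the same pattern (the login names and attribute values are placeholders): modifying a stage user, and removing a single value of a multi-valued attribute with --delattr, which the text above mentions alongside --setattr.
ipa stageuser-mod stage_user --title=new_title
ipa user-mod user --delattr=mobile=old_mobile_number
The stage user variant is assumed to accept the same attribute options as ipa user-mod; run ipa stageuser-mod --help to confirm the options available in your IdM version.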
[ "ipa user-mod user_login --title= new_title", "ipa user-mod user --addattr=mobile= new_mobile_number -------------------- Modified user \"user\" -------------------- User login: user Mobile Telephone Number: mobile_number, new_mobile_number", "ipa user-mod user --addattr=mobile= mobile_number_1 --addattr=mobile= mobile_number_2", "ipa user-mod user --email= [email protected] ipa user-mod user --addattr=mail= [email protected]" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/editing-users
Chapter 46. Using standalone custom pages (dashboards)
Chapter 46. Using standalone custom pages (dashboards) Apart from standalone perspectives, you can also embed custom pages, also known as dashboards, in your application. To access the custom pages from your application, provide the name of the custom page as the value of the perspective parameter. Note that the perspective parameter is case-sensitive. Procedure Log in to Business Central. In a web browser, enter the custom page's web address in the address bar, for example, http://localhost:8080/business-central/kie-wb.jsp?standalone=true&perspective=CustomPageName The standalone custom page opens in the browser. Replace the value, CustomPageName , with the name of the custom page you want to use in the standalone mode.
null
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/managing_red_hat_decision_manager_and_kie_server_settings/using-standalone-perspectives-standalone-custom-pages-proc
Glossary
Glossary A access control The process of controlling what particular users are allowed to do. For example, access control to servers is typically based on an identity, established by a password or a certificate, and on rules regarding what that entity can do. See also access control list (ACL) . access control instructions (ACI) An access rule that specifies how subjects requesting access are to be identified or what rights are allowed or denied for a particular subject. See access control list (ACL) . access control list (ACL) A collection of access control entries that define a hierarchy of access rules to be evaluated when a server receives a request for access to a particular resource. See access control instructions (ACI) . administrator The person who installs and configures one or more Certificate System managers and sets up privileged users, or agents, for them. See also agent . Advanced Encryption Standard (AES) The Advanced Encryption Standard (AES), like its predecessor Data Encryption Standard (DES), is a FIPS-approved symmetric-key encryption standard. AES was adopted by the US government in 2002. It defines three block ciphers, AES-128, AES-192 and AES-256. The National Institute of Standards and Technology (NIST) defined the AES standard in U.S. FIPS PUB 197. For more information, see http://csrc.nist.gov/publications/fips/fips197/fips-197.pdf . agent A user who belongs to a group authorized to manage agent services for a Certificate System manager. See also Certificate Manager agent , Key Recovery Authority agent . agent services 1. Services that can be administered by a Certificate System agent through HTML pages served by the Certificate System subsystem for which the agent has been assigned the necessary privileges. 2. The HTML pages for administering such services. agent-approved enrollment An enrollment that requires an agent to approve the request before the certificate is issued. APDU Application protocol data unit. A communication unit (analogous to a byte) that is used in communications between a smart card and a smart card reader. attribute value assertion (AVA) An assertion of the form attribute = value , where attribute is a tag, such as o (organization) or uid (user ID), and value is a value such as "Red Hat, Inc." or a login name. AVAs are used to form the distinguished name (DN) that identifies the subject of a certificate, called the subject name of the certificate. audit log A log that records various system events. This log can be signed, providing proof that it was not tampered with, and can only be read by an auditor user. auditor A privileged user who can view the signed audit logs. authentication Confident identification; assurance that a party to some computerized transaction is not an impostor. Authentication typically involves the use of a password, certificate, PIN, or other information to validate identity over a computer network. See also password-based authentication , certificate-based authentication , client authentication , server authentication . authentication module A set of rules (implemented as a Java TM class) for authenticating an end entity, agent, administrator, or any other entity that needs to interact with a Certificate System subsystem. In the case of typical end-user enrollment, after the user has supplied the information requested by the enrollment form, the enrollment servlet uses an authentication module associated with that form to validate the information and authenticate the user's identity. See servlet . 
authorization Permission to access a resource controlled by a server. Authorization typically takes place after the ACLs associated with a resource have been evaluated by a server. See access control list (ACL) . automated enrollment A way of configuring a Certificate System subsystem that allows automatic authentication for end-entity enrollment, without human intervention. With this form of authentication, a certificate request that completes authentication module processing successfully is automatically approved for profile processing and certificate issuance. B bind DN A user ID, in the form of a distinguished name (DN), used with a password to authenticate to Red Hat Directory Server. C CA certificate A certificate that identifies a certificate authority. See also certificate authority (CA) , subordinate CA , root CA . CA hierarchy A hierarchy of CAs in which a root CA delegates the authority to issue certificates to subordinate CAs. Subordinate CAs can also expand the hierarchy by delegating issuing status to other CAs. See also certificate authority (CA) , subordinate CA , root CA . CA server key The SSL server key of the server providing a CA service. CA signing key The private key that corresponds to the public key in the CA certificate. A CA uses its signing key to sign certificates and CRLs. certificate Digital data, formatted according to the X.509 standard, that specifies the name of an individual, company, or other entity (the subject name of the certificate) and certifies that a public key , which is also included in the certificate, belongs to that entity. A certificate is issued and digitally signed by a certificate authority (CA) . A certificate's validity can be verified by checking the CA's digital signature through public-key cryptography techniques. To be trusted within a public-key infrastructure (PKI) , a certificate must be issued and signed by a CA that is trusted by other entities enrolled in the PKI. certificate authority (CA) A trusted entity that issues a certificate after verifying the identity of the person or entity the certificate is intended to identify. A CA also renews and revokes certificates and generates CRLs. The entity named in the issuer field of a certificate is always a CA. Certificate authorities can be independent third parties or a person or organization using certificate-issuing server software, such as Red Hat Certificate System. certificate chain A hierarchical series of certificates signed by successive certificate authorities. A CA certificate identifies a certificate authority (CA) and is used to sign certificates issued by that authority. A CA certificate can in turn be signed by the CA certificate of a parent CA, and so on up to a root CA . Certificate System allows any end entity to retrieve all the certificates in a certificate chain. certificate extensions An X.509 v3 certificate contains an extensions field that permits any number of additional fields to be added to the certificate. Certificate extensions provide a way of adding information such as alternative subject names and usage restrictions to certificates. A number of standard extensions have been defined by the PKIX working group. certificate fingerprint A one-way hash associated with a certificate. The number is not part of the certificate itself, but is produced by applying a hash function to the contents of the certificate. If the contents of the certificate changes, even by a single character, the same function produces a different number. 
Certificate fingerprints can therefore be used to verify that certificates have not been tampered with. Certificate Management Message Formats (CMMF) Message formats used to convey certificate requests and revocation requests from end entities to a Certificate Manager and to send a variety of information to end entities. A proposed standard from the Internet Engineering Task Force (IETF) PKIX working group. CMMF is subsumed by another proposed standard, Certificate Management Messages over Cryptographic Message Syntax (CMC) . For detailed information, see https://tools.ietf.org/html/draft-ietf-pkix-cmmf-02 . Certificate Management Messages over Cryptographic Message Syntax (CMC) Message format used to convey a request for a certificate to a Certificate Manager. A proposed standard from the Internet Engineering Task Force (IETF) PKIX working group. For detailed information, see https://tools.ietf.org/html/draft-ietf-pkix-cmc-02 . Certificate Manager An independent Certificate System subsystem that acts as a certificate authority. A Certificate Manager instance issues, renews, and revokes certificates, which it can publish along with CRLs to an LDAP directory. It accepts requests from end entities. See certificate authority (CA) . Certificate Manager agent A user who belongs to a group authorized to manage agent services for a Certificate Manager. These services include the ability to access and modify (approve and reject) certificate requests and issue certificates. certificate profile A set of configuration settings that defines a certain type of enrollment. The certificate profile sets policies for a particular type of enrollment along with an authentication method in a certificate profile. Certificate Request Message Format (CRMF) Format used for messages related to management of X.509 certificates. This format is a subset of CMMF. See also Certificate Management Message Formats (CMMF) . For detailed information, see http://www.ietf.org/rfc/rfc2511.txt . certificate revocation list (CRL) As defined by the X.509 standard, a list of revoked certificates by serial number, generated and signed by a certificate authority (CA) . Certificate System See Red Hat Certificate System , Cryptographic Message Syntax (CS) . Certificate System console A console that can be opened for any single Certificate System instance. A Certificate System console allows the Certificate System administrator to control configuration settings for the corresponding Certificate System instance. Certificate System subsystem One of the five Certificate System managers: Certificate Manager , Online Certificate Status Manager, Key Recovery Authority , Token Key Service, or Token Processing System. certificate-based authentication Authentication based on certificates and public-key cryptography. See also password-based authentication . chain of trust See certificate chain . chained CA See linked CA . cipher See cryptographic algorithm . client authentication The process of identifying a client to a server, such as with a name and password or with a certificate and some digitally signed data. See certificate-based authentication , password-based authentication , server authentication . client SSL certificate A certificate used to identify a client to a server using the SSL protocol. See Secure Sockets Layer (SSL) . CMC See Certificate Management Messages over Cryptographic Message Syntax (CMC) . 
CMC Enrollment Features that allow either signed enrollment or signed revocation requests to be sent to a Certificate Manager using an agent's signing certificate. These requests are then automatically processed by the Certificate Manager. CMMF See Certificate Management Message Formats (CMMF) . CRL See certificate revocation list (CRL) . CRMF See Certificate Request Message Format (CRMF) . cross-certification The exchange of certificates by two CAs in different certification hierarchies, or chains. Cross-certification extends the chain of trust so that it encompasses both hierarchies. See also certificate authority (CA) . cross-pair certificate A certificate issued by one CA to another CA which is then stored by both CAs to form a circle of trust. The two CAs issue certificates to each other, and then store both cross-pair certificates as a certificate pair. cryptographic algorithm A set of rules or directions used to perform cryptographic operations such as encryption and decryption . Cryptographic Message Syntax (CS) The syntax used to digitally sign, digest, authenticate, or encrypt arbitrary messages, such as CMMF. cryptographic module See PKCS #11 module . cryptographic service provider (CSP) A cryptographic module that performs cryptographic services, such as key generation, key storage, and encryption, on behalf of software that uses a standard interface such as that defined by PKCS #11 to request such services. CSP See cryptographic service provider (CSP) . D decryption Unscrambling data that has been encrypted. See encryption . delta CRL A CRL containing a list of those certificates that have been revoked since the last full CRL was issued. digital ID See certificate . digital signature To create a digital signature, the signing software first creates a one-way hash from the data to be signed, such as a newly issued certificate. The one-way hash is then encrypted with the private key of the signer. The resulting digital signature is unique for each piece of data signed. Even a single comma added to a message changes the digital signature for that message. Successful decryption of the digital signature with the signer's public key and comparison with another hash of the same data provides tamper detection . Verification of the certificate chain for the certificate containing the public key provides authentication of the signer. See also nonrepudiation , encryption . distinguished name (DN) A series of AVAs that identify the subject of a certificate. See attribute value assertion (AVA) . distribution points Used for CRLs to define a set of certificates. Each distribution point is defined by a set of certificates that are issued. A CRL can be created for a particular distribution point. dual key pair Two public-private key pairs, four keys altogether, corresponding to two separate certificates. The private key of one pair is used for signing operations, and the public and private keys of the other pair are used for encryption and decryption operations. Each pair corresponds to a separate certificate . See also encryption key , public-key cryptography , signing key . Key Recovery Authority An optional, independent Certificate System subsystem that manages the long-term archival and recovery of RSA encryption keys for end entities. A Certificate Manager can be configured to archive end entities' encryption keys with a Key Recovery Authority before issuing new certificates. 
The Key Recovery Authority is useful only if end entities are encrypting data, such as sensitive email, that the organization may need to recover someday. It can be used only with end entities that support dual key pairs: two separate key pairs, one for encryption and one for digital signatures. Key Recovery Authority agent A user who belongs to a group authorized to manage agent services for a Key Recovery Authority, including managing the request queue and authorizing recovery operation using HTML-based administration pages. Key Recovery Authority recovery agent One of the m of n people who own portions of the storage key for the Key Recovery Authority . Key Recovery Authority storage key Special key used by the Key Recovery Authority to encrypt the end entity's encryption key after it has been decrypted with the Key Recovery Authority's private transport key. The storage key never leaves the Key Recovery Authority. Key Recovery Authority transport certificate Certifies the public key used by an end entity to encrypt the entity's encryption key for transport to the Key Recovery Authority. The Key Recovery Authority uses the private key corresponding to the certified public key to decrypt the end entity's key before encrypting it with the storage key. E eavesdropping Surreptitious interception of information sent over a network by an entity for which the information is not intended. Elliptic Curve Cryptography (ECC) A cryptographic algorithm which uses elliptic curves to create additive logarithms for the mathematical problems which are the basis of the cryptographic keys. ECC ciphers are more efficient to use than RSA ciphers and, because of their intrinsic complexity, are stronger at smaller bits than RSA ciphers. encryption Scrambling information in a way that disguises its meaning. See decryption . encryption key A private key used for encryption only. An encryption key and its equivalent public key, plus a signing key and its equivalent public key, constitute a dual key pair . end entity In a public-key infrastructure (PKI) , a person, router, server, or other entity that uses a certificate to identify itself. enrollment The process of requesting and receiving an X.509 certificate for use in a public-key infrastructure (PKI) . Also known as registration . extensions field See certificate extensions . F Federal Bridge Certificate Authority (FBCA) A configuration where two CAs form a circle of trust by issuing cross-pair certificates to each other and storing the two cross-pair certificates as a single certificate pair. fingerprint See certificate fingerprint . FIPS PUBS 140 Federal Information Standards Publications (FIPS PUBS) 140 is a US government standard for implementations of cryptographic modules, hardware or software that encrypts and decrypts data or performs other cryptographic operations, such as creating or verifying digital signatures. Many products sold to the US government must comply with one or more of the FIPS standards. firewall A system or combination of systems that enforces a boundary between two or more networks. H Hypertext Transport Protocol (HTTP) and Hypertext Transport Protocol Secure (HTTPS) Protocols used to communicate with web servers. HTTPS consists of communication over HTTP (Hypertext Transfer Protocol) within a connection encrypted by Transport Layer Security (TLS). The main purpose of HTTPS is authentication of the visited website and protection of privacy and integrity of the exchanged data. 
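As a minimal sketch of the HTTPS behavior described above, assuming the standard OpenSSL client and a placeholder host name, the certificate chain that a server presents during the TLS handshake can be inspected with:
# Show the full certificate chain presented by an HTTPS server (host is a placeholder).
openssl s_client -connect www.example.com:443 -showcerts </dev/null
# Print only the subject and issuer of the server certificate.
openssl s_client -connect www.example.com:443 </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer
The issuer printed here is the link a client follows up the certificate chain toward a trusted root CA.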
I impersonation The act of posing as the intended recipient of information sent over a network. Impersonation can take two forms: spoofing and misrepresentation . input In the context of the certificate profile feature, it defines the enrollment form for a particular certificate profile. Each input is set, which then dynamically creates the enrollment form from all inputs configured for this enrollment. intermediate CA A CA whose certificate is located between the root CA and the issued certificate in a certificate chain . IP spoofing The forgery of client IP addresses. IPv4 and IPv6 Certificate System supports both IPv4 and IPv6 address namespaces for communications and operations with all subsystems and tools, as well as for clients, subsystem creation, and token and certificate enrollment. J JAR file A digital envelope for a compressed collection of files organized according to the Java TM archive (JAR) format . Java TM archive (JAR) format A set of conventions for associating digital signatures, installer scripts, and other information with files in a directory. Java TM Cryptography Architecture (JCA) The API specification and reference developed by Sun Microsystems for cryptographic services. See http://java.sun.com/products/jdk/1.2/docs/guide/security/CryptoSpec.Introduction . Java TM Development Kit (JDK) Software development kit provided by Sun Microsystems for developing applications and applets using the Java TM programming language. Java TM Native Interface (JNI) A standard programming interface that provides binary compatibility across different implementations of the Java TM Virtual Machine (JVM) on a given platform, allowing existing code written in a language such as C or C++ for a single platform to bind to Java TM. See http://java.sun.com/products/jdk/1.2/docs/guide/jni/index.html . Java TM Security Services (JSS) A Java TM interface for controlling security operations performed by Network Security Services (NSS). K KEA See Key Exchange Algorithm (KEA) . key A large number used by a cryptographic algorithm to encrypt or decrypt data. A person's public key , for example, allows other people to encrypt messages intended for that person. The messages must then be decrypted by using the corresponding private key . key exchange A procedure followed by a client and server to determine the symmetric keys they will both use during an SSL session. Key Exchange Algorithm (KEA) An algorithm used for key exchange by the US Government. KEYGEN tag An HTML tag that generates a key pair for use with a certificate. L Lightweight Directory Access Protocol (LDAP) A directory service protocol designed to run over TCP/IP and across multiple platforms. LDAP is a simplified version of Directory Access Protocol (DAP), used to access X.500 directories. LDAP is under IETF change control and has evolved to meet Internet requirements. linked CA An internally deployed certificate authority (CA) whose certificate is signed by a public, third-party CA. The internal CA acts as the root CA for certificates it issues, and the third- party CA acts as the root CA for certificates issued by other CAs that are linked to the same third-party root CA. Also known as "chained CA" and by other terms used by different public CAs. M manual authentication A way of configuring a Certificate System subsystem that requires human approval of each certificate request. With this form of authentication, a servlet forwards a certificate request to a request queue after successful authentication module processing. 
An agent with appropriate privileges must then approve each request individually before profile processing and certificate issuance can proceed. MD5 A message digest algorithm that was developed by Ronald Rivest. See also one-way hash . message digest See one-way hash . misrepresentation The presentation of an entity as a person or organization that it is not. For example, a website might pretend to be a furniture store when it is really a site that takes credit-card payments but never sends any goods. Misrepresentation is one form of impersonation . See also spoofing . N Network Security Services (NSS) A set of libraries designed to support cross-platform development of security-enabled communications applications. Applications built using the NSS libraries support the Secure Sockets Layer (SSL) protocol for authentication, tamper detection, and encryption, and the PKCS #11 protocol for cryptographic token interfaces. NSS is also available separately as a software development kit. non-TMS Non-token management system. Refers to a configuration of subsystems (the CA and, optionally, KRA and OCSP) which do not handle smart cards directly. See also token management system (TMS) . nonrepudiation The inability by the sender of a message to deny having sent the message. A digital signature provides one form of nonrepudiation. O object signing A method of file signing that allows software developers to sign Java code, JavaScript scripts, or any kind of file and allows users to identify the signers and control access by signed code to local system resources. object-signing certificate A certificate whose associated private key is used to sign objects; related to object signing . OCSP Online Certificate Status Protocol. one-way hash 1. A fixed-length number generated from data of arbitrary length with the aid of a hashing algorithm. The number, also called a message digest, is unique to the hashed data. Any change in the data, even deleting or altering a single character, results in a different value. 2. The content of the hashed data cannot be deduced from the hash. operation The specific operation, such as read or write, that is being allowed or denied in an access control instruction. output In the context of the certificate profile feature, it defines the resulting form from a successful certificate enrollment for a particular certificate profile. Each output is set, which then dynamically creates the form from all outputs configured for this enrollment. P password-based authentication Confident identification by means of a name and password. See also authentication , certificate-based authentication . PKCS #10 The public-key cryptography standard that governs certificate requests. PKCS #11 The public-key cryptography standard that governs cryptographic tokens such as smart cards. PKCS #11 module A driver for a cryptographic device that provides cryptographic services, such as encryption and decryption, through the PKCS #11 interface. A PKCS #11 module, also called a cryptographic module or cryptographic service provider , can be implemented in either hardware or software. A PKCS #11 module always has one or more slots, which may be implemented as physical hardware slots in some form of physical reader, such as for smart cards, or as conceptual slots in software. Each slot for a PKCS #11 module can in turn contain a token, which is the hardware or software device that actually provides cryptographic services and optionally stores certificates and keys.
Red Hat provides a built-in PKCS #11 module with Certificate System. PKCS #12 The public-key cryptography standard that governs key portability. PKCS #7 The public-key cryptography standard that governs signing and encryption. PKIX Certificate and CRL Profile A standard developed by the IETF for a public-key infrastructure for the Internet. It specifies profiles for certificates and CRLs. private key One of a pair of keys used in public-key cryptography. The private key is kept secret and is used to decrypt data encrypted with the corresponding public key . proof-of-archival (POA) Data signed with the private Key Recovery Authority transport key that contains information about an archived end-entity key, including key serial number, name of the Key Recovery Authority, subject name of the corresponding certificate, and date of archival. The signed proof-of-archival data are the response returned by the Key Recovery Authority to the Certificate Manager after a successful key archival operation. See also Key Recovery Authority transport certificate . public key One of a pair of keys used in public-key cryptography. The public key is distributed freely and published as part of a certificate . It is typically used to encrypt data sent to the public key's owner, who then decrypts the data with the corresponding private key . public-key cryptography A set of well-established techniques and standards that allow an entity to verify its identity electronically or to sign and encrypt electronic data. Two keys are involved, a public key and a private key. A public key is published as part of a certificate, which associates that key with a particular identity. The corresponding private key is kept secret. Data encrypted with the public key can be decrypted only with the private key. public-key infrastructure (PKI) The standards and services that facilitate the use of public-key cryptography and X.509 v3 certificates in a networked environment. R RC2, RC4 Cryptographic algorithms developed for RSA Data Security by Rivest. See also cryptographic algorithm . Red Hat Certificate System A highly configurable set of software components and tools for creating, deploying, and managing certificates. Certificate System is comprised of five major subsystems that can be installed in different Certificate System instances in different physical locations: Certificate Manager , Online Certificate Status Manager, Key Recovery Authority , Token Key Service, and Token Processing System. registration See enrollment . root CA The certificate authority (CA) with a self-signed certificate at the top of a certificate chain. See also CA certificate , subordinate CA . RSA algorithm Short for Rivest-Shamir-Adleman, a public-key algorithm for both encryption and authentication. It was developed by Ronald Rivest, Adi Shamir, and Leonard Adleman and introduced in 1978. RSA key exchange A key-exchange algorithm for SSL based on the RSA algorithm. S sandbox A Java TM term for the carefully defined limits within which Java TM code must operate. secure channel A security association between the TPS and the smart card which allows encrypted communication based on a shared master key generated by the TKS and the smart card APDUs. Secure Sockets Layer (SSL) A protocol that allows mutual authentication between a client and server and the establishment of an authenticated and encrypted connection. SSL runs above TCP/IP and below HTTP, LDAP, IMAP, NNTP, and other high-level network protocols.
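A minimal sketch of the sign-and-verify flow described under public-key cryptography, assuming the standard OpenSSL command-line tool and placeholder file names:
# Generate an RSA key pair (file names are placeholders).
openssl genpkey -algorithm RSA -out private.pem
openssl pkey -in private.pem -pubout -out public.pem
# Sign data.txt with the private key; the signature is a digest encrypted with that key.
openssl dgst -sha256 -sign private.pem -out data.sig data.txt
# Anyone holding the public key can verify the signature (prints "Verified OK").
openssl dgst -sha256 -verify public.pem -signature data.sig data.txt
In a PKI, the public key would additionally be bound to an identity by a certificate issued by a CA.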
security domain A centralized repository or inventory of PKI subsystems. Its primary purpose is to facilitate the installation and configuration of new PKI services by automatically establishing trusted relationships between subsystems. Security-Enhanced Linux (SELinux) Security-enhanced Linux (SELinux) is a set of security protocols enforcing mandatory access control on Linux system kernels. SELinux was developed by the United States National Security Agency to keep applications from accessing confidential or protected files through lenient or flawed access controls. self tests A feature that tests a Certificate System instance both when the instance starts up and on-demand. server authentication The process of identifying a server to a client. See also client authentication . server SSL certificate A certificate used to identify a server to a client using the Secure Sockets Layer (SSL) protocol. servlet Java TM code that handles a particular kind of interaction with end entities on behalf of a Certificate System subsystem. For example, certificate enrollment, revocation, and key recovery requests are each handled by separate servlets. SHA Secure Hash Algorithm, a hash function used by the US government. signature algorithm A cryptographic algorithm used to create digital signatures. Certificate System supports the MD5 and SHA signing algorithms. See also cryptographic algorithm , digital signature . signed audit log See audit log . signing certificate A certificate whose public key corresponds to a private key used to create digital signatures. For example, a Certificate Manager must have a signing certificate whose public key corresponds to the private key it uses to sign the certificates it issues. signing key A private key used for signing only. A signing key and its equivalent public key, plus an encryption key and its equivalent public key, constitute a dual key pair . Simple Certificate Enrollment Protocol (SCEP) A protocol designed by Cisco to specify a way for a router to communicate with a CA for router certificate enrollment. Certificate System supports SCEP's CA mode of operation, where the request is encrypted with the CA signing certificate. single sign-on 1. In Certificate System, a password that simplifies the way to sign on to Red Hat Certificate System by storing the passwords for the internal database and tokens. Each time a user logs on, they are required to enter this single password. 2. The ability for a user to log in once to a single computer and be authenticated automatically by a variety of servers within a network. Partial single sign-on solutions can take many forms, including mechanisms for automatically tracking passwords used with different servers. Certificates support single sign-on within a public-key infrastructure (PKI) . A user can log in once to a local client's private-key database and, as long as the client software is running, rely on certificate-based authentication to access each server within an organization that the user is allowed to access. slot The portion of a PKCS #11 module , implemented in either hardware or software, that contains a token . smart card A small device that contains a microprocessor and stores cryptographic information, such as keys and certificates, and performs cryptographic operations. Smart cards implement some or all of the PKCS #11 interface. spoofing Pretending to be someone else.
For example, a person can pretend to have the email address [email protected] , or a computer can identify itself as a site called www.redhat.com when it is not. Spoofing is one form of impersonation . See also misrepresentation . SSL See Secure Sockets Layer (SSL) . subject The entity identified by a certificate . In particular, the subject field of a certificate contains a subject name that uniquely describes the certified entity. subject name A distinguished name (DN) that uniquely describes the subject of a certificate . subordinate CA A certificate authority whose certificate is signed by another subordinate CA or by the root CA. See CA certificate , root CA . symmetric encryption An encryption method that uses the same cryptographic key to encrypt and decrypt a given message. T tamper detection A mechanism ensuring that data received in electronic form entirely corresponds with the original version of the same data. token A hardware or software device that is associated with a slot in a PKCS #11 module . It provides cryptographic services and optionally stores certificates and keys. token key service (TKS) A subsystem in the token management system which derives specific, separate keys for every smart card based on the smart card APDUs and other shared information, like the token CUID. token management system (TMS) The interrelated subsystems - CA, TKS, TPS, and, optionally, the KRA - which are used to manage certificates on smart cards (tokens). token processing system (TPS) A subsystem which interacts directly with the Enterprise Security Client and smart cards to manage the keys and certificates on those smart cards. Transport Layer Security (TLS) A set of rules governing server authentication, client authentication, and encrypted communication between servers and clients. tree hierarchy The hierarchical structure of an LDAP directory. trust Confident reliance on a person or other entity. In a public-key infrastructure (PKI) , trust refers to the relationship between the user of a certificate and the certificate authority (CA) that issued the certificate. If a CA is trusted, then valid certificates issued by that CA can be trusted. U UTF-8 The certificate enrollment pages support all UTF-8 characters for specific fields (common name, organizational unit, requester name, and additional notes). The UTF-8 strings are searchable and correctly display in the CA, OCSP, and KRA end user and agents services pages. However, the UTF-8 support does not extend to internationalized domain names, such as those used in email addresses. V virtual private network (VPN) A way of connecting geographically distant divisions of an enterprise. The VPN allows the divisions to communicate over an encrypted channel, allowing authenticated, confidential transactions that would normally be restricted to a private network. X X.509 version 1 and version 3 Digital certificate formats recommended by the International Telecommunications Union (ITU).
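To see the X.509 fields and extensions mentioned in this glossary on an actual certificate, assuming a placeholder file cert.pem and standard tooling, either of the following is a quick check:
# Dump the subject, issuer, validity period, and v3 extensions of a certificate.
openssl x509 -in cert.pem -noout -text
# keytool from a JDK prints similar information.
keytool -printcert -file cert.pem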
null
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/planning_installation_and_deployment_guide/glossary
19.2. The Virtual Machine Manager Main Window
19.2. The Virtual Machine Manager Main Window This main window displays all the running guests and the resources they use. Select a guest by double-clicking the guest's name. Figure 19.2. Virtual Machine Manager main window
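A quick way to reach this window, assuming a local libvirt host and the default system URI, is shown below; the same guest list is also available from the command line:
# Open the Virtual Machine Manager connected to the local system instance.
virt-manager --connect qemu:///system
# List all guests (running and shut off) without the graphical interface.
virsh --connect qemu:///system list --all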
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-managing_guests_with_the_virtual_machine_manager_virt_manager-the_virtual_machine_manager_main_window
Chapter 10. Scheduling resources
Chapter 10. Scheduling resources Taints and tolerations allow nodes to control which pods should (or should not) be scheduled on them. A node selector specifies a map of key/value pairs that are defined using custom labels on nodes and selectors specified in pods. For the pod to be eligible to run on a node, the pod must have the same key/value node selector as the label on the node. 10.1. Network Observability deployment in specific nodes You can configure the FlowCollector to control the deployment of Network Observability components in specific nodes. The spec.agent.ebpf.advanced.scheduling , spec.processor.advanced.scheduling , and spec.consolePlugin.advanced.scheduling specifications have the following configurable settings: NodeSelector Tolerations Affinity PriorityClassName Sample FlowCollector resource for spec.<component>.advanced.scheduling apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: # ... advanced: scheduling: tolerations: - key: "<taint key>" operator: "Equal" value: "<taint value>" effect: "<taint effect>" nodeSelector: <key>: <value> affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: name operator: In values: - app-worker-node priorityClassName: "" # ... Additional resources Understanding taints and tolerations Assign Pods to Nodes (Kubernetes documentation) Pod Priority and Preemption (Kubernetes documentation)
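As a sketch only, assuming the FlowCollector is named cluster as in the sample above and using a placeholder node label, the scheduling settings can also be applied from the command line and then verified:
# Pin the processor component to worker nodes (the label key/value is a placeholder).
oc patch flowcollector cluster --type=merge -p '{"spec":{"processor":{"advanced":{"scheduling":{"nodeSelector":{"node-role.kubernetes.io/worker":""}}}}}}'
# Confirm the resulting scheduling block.
oc get flowcollector cluster -o jsonpath='{.spec.processor.advanced.scheduling}'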
[ "apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: advanced: scheduling: tolerations: - key: \"<taint key>\" operator: \"Equal\" value: \"<taint value>\" effect: \"<taint effect>\" nodeSelector: <key>: <value> affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: name operator: In values: - app-worker-node priorityClassName: \"\"" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/network_observability/network-observability-scheduling-resources
Chapter 8. Summary
Chapter 8. Summary This document has provided only a general introduction to security for Red Hat Ceph Storage. Contact the Red Hat Ceph Storage consulting team for additional help.
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/4/html/data_security_and_hardening_guide/con-sec-summay-sec
Chapter 19. OAuth [config.openshift.io/v1]
Chapter 19. OAuth [config.openshift.io/v1] Description OAuth holds cluster-wide information about OAuth. The canonical name is cluster . It is used to configure the integrated OAuth server. This configuration is only honored when the top level Authentication config has type set to IntegratedOAuth. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 19.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec holds user settable values for configuration status object status holds observed values from the cluster. They may not be overridden. 19.1.1. .spec Description spec holds user settable values for configuration Type object Property Type Description identityProviders array identityProviders is an ordered list of ways for a user to identify themselves. When this list is empty, no identities are provisioned for users. identityProviders[] object IdentityProvider provides identities for users authenticating using credentials templates object templates allow you to customize pages like the login page. tokenConfig object tokenConfig contains options for authorization and access tokens 19.1.2. .spec.identityProviders Description identityProviders is an ordered list of ways for a user to identify themselves. When this list is empty, no identities are provisioned for users. Type array 19.1.3. .spec.identityProviders[] Description IdentityProvider provides identities for users authenticating using credentials Type object Property Type Description basicAuth object basicAuth contains configuration options for the BasicAuth IdP github object github enables user authentication using GitHub credentials gitlab object gitlab enables user authentication using GitLab credentials google object google enables user authentication using Google credentials htpasswd object htpasswd enables user authentication using an HTPasswd file to validate credentials keystone object keystone enables user authentication using keystone password credentials ldap object ldap enables user authentication using LDAP credentials mappingMethod string mappingMethod determines how identities from this provider are mapped to users Defaults to "claim" name string name is used to qualify the identities returned by this provider. - It MUST be unique and not shared by any other identity provider used - It MUST be a valid path segment: name cannot equal "." or ".." 
or contain "/" or "%" or ":" Ref: https://godoc.org/github.com/openshift/origin/pkg/user/apis/user/validation#ValidateIdentityProviderName openID object openID enables user authentication using OpenID credentials requestHeader object requestHeader enables user authentication using request header credentials type string type identifies the identity provider type for this entry. 19.1.4. .spec.identityProviders[].basicAuth Description basicAuth contains configuration options for the BasicAuth IdP Type object Property Type Description ca object ca is an optional reference to a config map by name containing the PEM-encoded CA bundle. It is used as a trust anchor to validate the TLS certificate presented by the remote server. The key "ca.crt" is used to locate the data. If specified and the config map or expected key is not found, the identity provider is not honored. If the specified ca data is not valid, the identity provider is not honored. If empty, the default system roots are used. The namespace for this config map is openshift-config. tlsClientCert object tlsClientCert is an optional reference to a secret by name that contains the PEM-encoded TLS client certificate to present when connecting to the server. The key "tls.crt" is used to locate the data. If specified and the secret or expected key is not found, the identity provider is not honored. If the specified certificate data is not valid, the identity provider is not honored. The namespace for this secret is openshift-config. tlsClientKey object tlsClientKey is an optional reference to a secret by name that contains the PEM-encoded TLS private key for the client certificate referenced in tlsClientCert. The key "tls.key" is used to locate the data. If specified and the secret or expected key is not found, the identity provider is not honored. If the specified certificate data is not valid, the identity provider is not honored. The namespace for this secret is openshift-config. url string url is the remote URL to connect to 19.1.5. .spec.identityProviders[].basicAuth.ca Description ca is an optional reference to a config map by name containing the PEM-encoded CA bundle. It is used as a trust anchor to validate the TLS certificate presented by the remote server. The key "ca.crt" is used to locate the data. If specified and the config map or expected key is not found, the identity provider is not honored. If the specified ca data is not valid, the identity provider is not honored. If empty, the default system roots are used. The namespace for this config map is openshift-config. Type object Required name Property Type Description name string name is the metadata.name of the referenced config map 19.1.6. .spec.identityProviders[].basicAuth.tlsClientCert Description tlsClientCert is an optional reference to a secret by name that contains the PEM-encoded TLS client certificate to present when connecting to the server. The key "tls.crt" is used to locate the data. If specified and the secret or expected key is not found, the identity provider is not honored. If the specified certificate data is not valid, the identity provider is not honored. The namespace for this secret is openshift-config. Type object Required name Property Type Description name string name is the metadata.name of the referenced secret 19.1.7. .spec.identityProviders[].basicAuth.tlsClientKey Description tlsClientKey is an optional reference to a secret by name that contains the PEM-encoded TLS private key for the client certificate referenced in tlsClientCert. 
The key "tls.key" is used to locate the data. If specified and the secret or expected key is not found, the identity provider is not honored. If the specified certificate data is not valid, the identity provider is not honored. The namespace for this secret is openshift-config. Type object Required name Property Type Description name string name is the metadata.name of the referenced secret 19.1.8. .spec.identityProviders[].github Description github enables user authentication using GitHub credentials Type object Property Type Description ca object ca is an optional reference to a config map by name containing the PEM-encoded CA bundle. It is used as a trust anchor to validate the TLS certificate presented by the remote server. The key "ca.crt" is used to locate the data. If specified and the config map or expected key is not found, the identity provider is not honored. If the specified ca data is not valid, the identity provider is not honored. If empty, the default system roots are used. This can only be configured when hostname is set to a non-empty value. The namespace for this config map is openshift-config. clientID string clientID is the oauth client ID clientSecret object clientSecret is a required reference to the secret by name containing the oauth client secret. The key "clientSecret" is used to locate the data. If the secret or expected key is not found, the identity provider is not honored. The namespace for this secret is openshift-config. hostname string hostname is the optional domain (e.g. "mycompany.com") for use with a hosted instance of GitHub Enterprise. It must match the GitHub Enterprise settings value configured at /setup/settings#hostname. organizations array (string) organizations optionally restricts which organizations are allowed to log in teams array (string) teams optionally restricts which teams are allowed to log in. Format is <org>/<team>. 19.1.9. .spec.identityProviders[].github.ca Description ca is an optional reference to a config map by name containing the PEM-encoded CA bundle. It is used as a trust anchor to validate the TLS certificate presented by the remote server. The key "ca.crt" is used to locate the data. If specified and the config map or expected key is not found, the identity provider is not honored. If the specified ca data is not valid, the identity provider is not honored. If empty, the default system roots are used. This can only be configured when hostname is set to a non-empty value. The namespace for this config map is openshift-config. Type object Required name Property Type Description name string name is the metadata.name of the referenced config map 19.1.10. .spec.identityProviders[].github.clientSecret Description clientSecret is a required reference to the secret by name containing the oauth client secret. The key "clientSecret" is used to locate the data. If the secret or expected key is not found, the identity provider is not honored. The namespace for this secret is openshift-config. Type object Required name Property Type Description name string name is the metadata.name of the referenced secret 19.1.11. .spec.identityProviders[].gitlab Description gitlab enables user authentication using GitLab credentials Type object Property Type Description ca object ca is an optional reference to a config map by name containing the PEM-encoded CA bundle. It is used as a trust anchor to validate the TLS certificate presented by the remote server. The key "ca.crt" is used to locate the data. 
If specified and the config map or expected key is not found, the identity provider is not honored. If the specified ca data is not valid, the identity provider is not honored. If empty, the default system roots are used. The namespace for this config map is openshift-config. clientID string clientID is the oauth client ID clientSecret object clientSecret is a required reference to the secret by name containing the oauth client secret. The key "clientSecret" is used to locate the data. If the secret or expected key is not found, the identity provider is not honored. The namespace for this secret is openshift-config. url string url is the oauth server base URL 19.1.12. .spec.identityProviders[].gitlab.ca Description ca is an optional reference to a config map by name containing the PEM-encoded CA bundle. It is used as a trust anchor to validate the TLS certificate presented by the remote server. The key "ca.crt" is used to locate the data. If specified and the config map or expected key is not found, the identity provider is not honored. If the specified ca data is not valid, the identity provider is not honored. If empty, the default system roots are used. The namespace for this config map is openshift-config. Type object Required name Property Type Description name string name is the metadata.name of the referenced config map 19.1.13. .spec.identityProviders[].gitlab.clientSecret Description clientSecret is a required reference to the secret by name containing the oauth client secret. The key "clientSecret" is used to locate the data. If the secret or expected key is not found, the identity provider is not honored. The namespace for this secret is openshift-config. Type object Required name Property Type Description name string name is the metadata.name of the referenced secret 19.1.14. .spec.identityProviders[].google Description google enables user authentication using Google credentials Type object Property Type Description clientID string clientID is the oauth client ID clientSecret object clientSecret is a required reference to the secret by name containing the oauth client secret. The key "clientSecret" is used to locate the data. If the secret or expected key is not found, the identity provider is not honored. The namespace for this secret is openshift-config. hostedDomain string hostedDomain is the optional Google App domain (e.g. "mycompany.com") to restrict logins to 19.1.15. .spec.identityProviders[].google.clientSecret Description clientSecret is a required reference to the secret by name containing the oauth client secret. The key "clientSecret" is used to locate the data. If the secret or expected key is not found, the identity provider is not honored. The namespace for this secret is openshift-config. Type object Required name Property Type Description name string name is the metadata.name of the referenced secret 19.1.16. .spec.identityProviders[].htpasswd Description htpasswd enables user authentication using an HTPasswd file to validate credentials Type object Property Type Description fileData object fileData is a required reference to a secret by name containing the data to use as the htpasswd file. The key "htpasswd" is used to locate the data. If the secret or expected key is not found, the identity provider is not honored. If the specified htpasswd data is not valid, the identity provider is not honored. The namespace for this secret is openshift-config. 19.1.17. 
.spec.identityProviders[].htpasswd.fileData Description fileData is a required reference to a secret by name containing the data to use as the htpasswd file. The key "htpasswd" is used to locate the data. If the secret or expected key is not found, the identity provider is not honored. If the specified htpasswd data is not valid, the identity provider is not honored. The namespace for this secret is openshift-config. Type object Required name Property Type Description name string name is the metadata.name of the referenced secret 19.1.18. .spec.identityProviders[].keystone Description keystone enables user authentication using keystone password credentials Type object Property Type Description ca object ca is an optional reference to a config map by name containing the PEM-encoded CA bundle. It is used as a trust anchor to validate the TLS certificate presented by the remote server. The key "ca.crt" is used to locate the data. If specified and the config map or expected key is not found, the identity provider is not honored. If the specified ca data is not valid, the identity provider is not honored. If empty, the default system roots are used. The namespace for this config map is openshift-config. domainName string domainName is required for keystone v3 tlsClientCert object tlsClientCert is an optional reference to a secret by name that contains the PEM-encoded TLS client certificate to present when connecting to the server. The key "tls.crt" is used to locate the data. If specified and the secret or expected key is not found, the identity provider is not honored. If the specified certificate data is not valid, the identity provider is not honored. The namespace for this secret is openshift-config. tlsClientKey object tlsClientKey is an optional reference to a secret by name that contains the PEM-encoded TLS private key for the client certificate referenced in tlsClientCert. The key "tls.key" is used to locate the data. If specified and the secret or expected key is not found, the identity provider is not honored. If the specified certificate data is not valid, the identity provider is not honored. The namespace for this secret is openshift-config. url string url is the remote URL to connect to 19.1.19. .spec.identityProviders[].keystone.ca Description ca is an optional reference to a config map by name containing the PEM-encoded CA bundle. It is used as a trust anchor to validate the TLS certificate presented by the remote server. The key "ca.crt" is used to locate the data. If specified and the config map or expected key is not found, the identity provider is not honored. If the specified ca data is not valid, the identity provider is not honored. If empty, the default system roots are used. The namespace for this config map is openshift-config. Type object Required name Property Type Description name string name is the metadata.name of the referenced config map 19.1.20. .spec.identityProviders[].keystone.tlsClientCert Description tlsClientCert is an optional reference to a secret by name that contains the PEM-encoded TLS client certificate to present when connecting to the server. The key "tls.crt" is used to locate the data. If specified and the secret or expected key is not found, the identity provider is not honored. If the specified certificate data is not valid, the identity provider is not honored. The namespace for this secret is openshift-config. Type object Required name Property Type Description name string name is the metadata.name of the referenced secret 19.1.21. 
.spec.identityProviders[].keystone.tlsClientKey Description tlsClientKey is an optional reference to a secret by name that contains the PEM-encoded TLS private key for the client certificate referenced in tlsClientCert. The key "tls.key" is used to locate the data. If specified and the secret or expected key is not found, the identity provider is not honored. If the specified certificate data is not valid, the identity provider is not honored. The namespace for this secret is openshift-config. Type object Required name Property Type Description name string name is the metadata.name of the referenced secret 19.1.22. .spec.identityProviders[].ldap Description ldap enables user authentication using LDAP credentials Type object Property Type Description attributes object attributes maps LDAP attributes to identities bindDN string bindDN is an optional DN to bind with during the search phase. bindPassword object bindPassword is an optional reference to a secret by name containing a password to bind with during the search phase. The key "bindPassword" is used to locate the data. If specified and the secret or expected key is not found, the identity provider is not honored. The namespace for this secret is openshift-config. ca object ca is an optional reference to a config map by name containing the PEM-encoded CA bundle. It is used as a trust anchor to validate the TLS certificate presented by the remote server. The key "ca.crt" is used to locate the data. If specified and the config map or expected key is not found, the identity provider is not honored. If the specified ca data is not valid, the identity provider is not honored. If empty, the default system roots are used. The namespace for this config map is openshift-config. insecure boolean insecure, if true, indicates the connection should not use TLS WARNING: Should not be set to true with the URL scheme "ldaps://" as "ldaps://" URLs always attempt to connect using TLS, even when insecure is set to true When true , "ldap://" URLS connect insecurely. When false , "ldap://" URLs are upgraded to a TLS connection using StartTLS as specified in https://tools.ietf.org/html/rfc2830 . url string url is an RFC 2255 URL which specifies the LDAP search parameters to use. The syntax of the URL is: ldap://host:port/basedn?attribute?scope?filter 19.1.23. .spec.identityProviders[].ldap.attributes Description attributes maps LDAP attributes to identities Type object Property Type Description email array (string) email is the list of attributes whose values should be used as the email address. Optional. If unspecified, no email is set for the identity id array (string) id is the list of attributes whose values should be used as the user ID. Required. First non-empty attribute is used. At least one attribute is required. If none of the listed attribute have a value, authentication fails. LDAP standard identity attribute is "dn" name array (string) name is the list of attributes whose values should be used as the display name. Optional. If unspecified, no display name is set for the identity LDAP standard display name attribute is "cn" preferredUsername array (string) preferredUsername is the list of attributes whose values should be used as the preferred username. LDAP standard login attribute is "uid" 19.1.24. .spec.identityProviders[].ldap.bindPassword Description bindPassword is an optional reference to a secret by name containing a password to bind with during the search phase. The key "bindPassword" is used to locate the data. 
If specified and the secret or expected key is not found, the identity provider is not honored. The namespace for this secret is openshift-config. Type object Required name Property Type Description name string name is the metadata.name of the referenced secret 19.1.25. .spec.identityProviders[].ldap.ca Description ca is an optional reference to a config map by name containing the PEM-encoded CA bundle. It is used as a trust anchor to validate the TLS certificate presented by the remote server. The key "ca.crt" is used to locate the data. If specified and the config map or expected key is not found, the identity provider is not honored. If the specified ca data is not valid, the identity provider is not honored. If empty, the default system roots are used. The namespace for this config map is openshift-config. Type object Required name Property Type Description name string name is the metadata.name of the referenced config map 19.1.26. .spec.identityProviders[].openID Description openID enables user authentication using OpenID credentials Type object Property Type Description ca object ca is an optional reference to a config map by name containing the PEM-encoded CA bundle. It is used as a trust anchor to validate the TLS certificate presented by the remote server. The key "ca.crt" is used to locate the data. If specified and the config map or expected key is not found, the identity provider is not honored. If the specified ca data is not valid, the identity provider is not honored. If empty, the default system roots are used. The namespace for this config map is openshift-config. claims object claims mappings clientID string clientID is the oauth client ID clientSecret object clientSecret is a required reference to the secret by name containing the oauth client secret. The key "clientSecret" is used to locate the data. If the secret or expected key is not found, the identity provider is not honored. The namespace for this secret is openshift-config. extraAuthorizeParameters object (string) extraAuthorizeParameters are any custom parameters to add to the authorize request. extraScopes array (string) extraScopes are any scopes to request in addition to the standard "openid" scope. issuer string issuer is the URL that the OpenID Provider asserts as its Issuer Identifier. It must use the https scheme with no query or fragment component. 19.1.27. .spec.identityProviders[].openID.ca Description ca is an optional reference to a config map by name containing the PEM-encoded CA bundle. It is used as a trust anchor to validate the TLS certificate presented by the remote server. The key "ca.crt" is used to locate the data. If specified and the config map or expected key is not found, the identity provider is not honored. If the specified ca data is not valid, the identity provider is not honored. If empty, the default system roots are used. The namespace for this config map is openshift-config. Type object Required name Property Type Description name string name is the metadata.name of the referenced config map 19.1.28. .spec.identityProviders[].openID.claims Description claims mappings Type object Property Type Description email array (string) email is the list of claims whose values should be used as the email address. Optional. If unspecified, no email is set for the identity groups array (string) groups is the list of claims value of which should be used to synchronize groups from the OIDC provider to OpenShift for the user. 
If multiple claims are specified, the first one with a non-empty value is used. name array (string) name is the list of claims whose values should be used as the display name. Optional. If unspecified, no display name is set for the identity preferredUsername array (string) preferredUsername is the list of claims whose values should be used as the preferred username. If unspecified, the preferred username is determined from the value of the sub claim 19.1.29. .spec.identityProviders[].openID.clientSecret Description clientSecret is a required reference to the secret by name containing the oauth client secret. The key "clientSecret" is used to locate the data. If the secret or expected key is not found, the identity provider is not honored. The namespace for this secret is openshift-config. Type object Required name Property Type Description name string name is the metadata.name of the referenced secret 19.1.30. .spec.identityProviders[].requestHeader Description requestHeader enables user authentication using request header credentials Type object Property Type Description ca object ca is a required reference to a config map by name containing the PEM-encoded CA bundle. It is used as a trust anchor to validate the TLS certificate presented by the remote server. Specifically, it allows verification of incoming requests to prevent header spoofing. The key "ca.crt" is used to locate the data. If the config map or expected key is not found, the identity provider is not honored. If the specified ca data is not valid, the identity provider is not honored. The namespace for this config map is openshift-config. challengeURL string challengeURL is a URL to redirect unauthenticated /authorize requests to. Unauthenticated requests from OAuth clients which expect WWW-Authenticate challenges will be redirected here. ${url} is replaced with the current URL, escaped to be safe in a query parameter https://www.example.com/sso-login?then=${url} ${query} is replaced with the current query string https://www.example.com/auth-proxy/oauth/authorize?${query} Required when challenge is set to true. clientCommonNames array (string) clientCommonNames is an optional list of common names to require a match from. If empty, any client certificate validated against the clientCA bundle is considered authoritative. emailHeaders array (string) emailHeaders is the set of headers to check for the email address headers array (string) headers is the set of headers to check for identity information loginURL string loginURL is a URL to redirect unauthenticated /authorize requests to. Unauthenticated requests from OAuth clients which expect interactive logins will be redirected here. ${url} is replaced with the current URL, escaped to be safe in a query parameter https://www.example.com/sso-login?then=${url} ${query} is replaced with the current query string https://www.example.com/auth-proxy/oauth/authorize?${query} Required when login is set to true. nameHeaders array (string) nameHeaders is the set of headers to check for the display name preferredUsernameHeaders array (string) preferredUsernameHeaders is the set of headers to check for the preferred username 19.1.31. .spec.identityProviders[].requestHeader.ca Description ca is a required reference to a config map by name containing the PEM-encoded CA bundle. It is used as a trust anchor to validate the TLS certificate presented by the remote server. Specifically, it allows verification of incoming requests to prevent header spoofing.
The key "ca.crt" is used to locate the data. If the config map or expected key is not found, the identity provider is not honored. If the specified ca data is not valid, the identity provider is not honored. The namespace for this config map is openshift-config. Type object Required name Property Type Description name string name is the metadata.name of the referenced config map 19.1.32. .spec.templates Description templates allow you to customize pages like the login page. Type object Property Type Description error object error is the name of a secret that specifies a go template to use to render error pages during the authentication or grant flow. The key "errors.html" is used to locate the template data. If specified and the secret or expected key is not found, the default error page is used. If the specified template is not valid, the default error page is used. If unspecified, the default error page is used. The namespace for this secret is openshift-config. login object login is the name of a secret that specifies a go template to use to render the login page. The key "login.html" is used to locate the template data. If specified and the secret or expected key is not found, the default login page is used. If the specified template is not valid, the default login page is used. If unspecified, the default login page is used. The namespace for this secret is openshift-config. providerSelection object providerSelection is the name of a secret that specifies a go template to use to render the provider selection page. The key "providers.html" is used to locate the template data. If specified and the secret or expected key is not found, the default provider selection page is used. If the specified template is not valid, the default provider selection page is used. If unspecified, the default provider selection page is used. The namespace for this secret is openshift-config. 19.1.33. .spec.templates.error Description error is the name of a secret that specifies a go template to use to render error pages during the authentication or grant flow. The key "errors.html" is used to locate the template data. If specified and the secret or expected key is not found, the default error page is used. If the specified template is not valid, the default error page is used. If unspecified, the default error page is used. The namespace for this secret is openshift-config. Type object Required name Property Type Description name string name is the metadata.name of the referenced secret 19.1.34. .spec.templates.login Description login is the name of a secret that specifies a go template to use to render the login page. The key "login.html" is used to locate the template data. If specified and the secret or expected key is not found, the default login page is used. If the specified template is not valid, the default login page is used. If unspecified, the default login page is used. The namespace for this secret is openshift-config. Type object Required name Property Type Description name string name is the metadata.name of the referenced secret 19.1.35. .spec.templates.providerSelection Description providerSelection is the name of a secret that specifies a go template to use to render the provider selection page. The key "providers.html" is used to locate the template data. If specified and the secret or expected key is not found, the default provider selection page is used. If the specified template is not valid, the default provider selection page is used. 
If unspecified, the default provider selection page is used. The namespace for this secret is openshift-config. Type object Required name Property Type Description name string name is the metadata.name of the referenced secret 19.1.36. .spec.tokenConfig Description tokenConfig contains options for authorization and access tokens Type object Property Type Description accessTokenInactivityTimeout string accessTokenInactivityTimeout defines the token inactivity timeout for tokens granted by any client. The value represents the maximum amount of time that can occur between consecutive uses of the token. Tokens become invalid if they are not used within this temporal window. The user will need to acquire a new token to regain access once a token times out. Takes valid time duration string such as "5m", "1.5h" or "2h45m". The minimum allowed value for duration is 300s (5 minutes). If the timeout is configured per client, then that value takes precedence. If the timeout value is not specified and the client does not override the value, then tokens are valid until their lifetime. WARNING: existing tokens' timeout will not be affected (lowered) by changing this value accessTokenInactivityTimeoutSeconds integer accessTokenInactivityTimeoutSeconds - DEPRECATED: setting this field has no effect. accessTokenMaxAgeSeconds integer accessTokenMaxAgeSeconds defines the maximum age of access tokens 19.1.37. .status Description status holds observed values from the cluster. They may not be overridden. Type object 19.2. API endpoints The following API endpoints are available: /apis/config.openshift.io/v1/oauths DELETE : delete collection of OAuth GET : list objects of kind OAuth POST : create an OAuth /apis/config.openshift.io/v1/oauths/{name} DELETE : delete an OAuth GET : read the specified OAuth PATCH : partially update the specified OAuth PUT : replace the specified OAuth /apis/config.openshift.io/v1/oauths/{name}/status GET : read status of the specified OAuth PATCH : partially update status of the specified OAuth PUT : replace status of the specified OAuth 19.2.1. /apis/config.openshift.io/v1/oauths Table 19.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of OAuth Table 19.2. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. 
Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. 
The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 19.3. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind OAuth Table 19.4. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. 
Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 19.5. HTTP responses HTTP code Reponse body 200 - OK OAuthList schema 401 - Unauthorized Empty HTTP method POST Description create an OAuth Table 19.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 19.7. Body parameters Parameter Type Description body OAuth schema Table 19.8. HTTP responses HTTP code Reponse body 200 - OK OAuth schema 201 - Created OAuth schema 202 - Accepted OAuth schema 401 - Unauthorized Empty 19.2.2. /apis/config.openshift.io/v1/oauths/{name} Table 19.9. Global path parameters Parameter Type Description name string name of the OAuth Table 19.10. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete an OAuth Table 19.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 19.12. Body parameters Parameter Type Description body DeleteOptions schema Table 19.13. 
HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified OAuth Table 19.14. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 19.15. HTTP responses HTTP code Reponse body 200 - OK OAuth schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified OAuth Table 19.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 19.17. Body parameters Parameter Type Description body Patch schema Table 19.18. HTTP responses HTTP code Reponse body 200 - OK OAuth schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified OAuth Table 19.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 19.20. Body parameters Parameter Type Description body OAuth schema Table 19.21. HTTP responses HTTP code Reponse body 200 - OK OAuth schema 201 - Created OAuth schema 401 - Unauthorized Empty 19.2.3. /apis/config.openshift.io/v1/oauths/{name}/status Table 19.22. Global path parameters Parameter Type Description name string name of the OAuth Table 19.23. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified OAuth Table 19.24. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 19.25. HTTP responses HTTP code Reponse body 200 - OK OAuth schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified OAuth Table 19.26. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. 
It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 19.27. Body parameters Parameter Type Description body Patch schema Table 19.28. HTTP responses HTTP code Reponse body 200 - OK OAuth schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified OAuth Table 19.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 19.30. Body parameters Parameter Type Description body OAuth schema Table 19.31. HTTP responses HTTP code Reponse body 200 - OK OAuth schema 201 - Created OAuth schema 401 - Unauthorized Empty
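The endpoints above can also be exercised with standard cluster tooling instead of raw HTTP. The following is a minimal sketch rather than an authoritative procedure: it assumes a logged-in oc client with permission to edit the cluster-scoped OAuth object named cluster, and the identity provider name, config map name, and timeout values it uses are illustrative.

# Read the cluster OAuth configuration (GET /apis/config.openshift.io/v1/oauths/cluster)
oc get oauth cluster -o yaml

# Partially update it (PATCH), for example to set a token inactivity timeout
oc patch oauth cluster --type merge \
  -p '{"spec":{"tokenConfig":{"accessTokenInactivityTimeout":"30m"}}}'

# Create or update the full configuration by applying a complete manifest
cat <<'EOF' | oc apply -f -
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: my-request-header            # illustrative provider name
    mappingMethod: claim
    type: RequestHeader
    requestHeader:
      challengeURL: "https://www.example.com/challenging-proxy/oauth/authorize?${query}"
      loginURL: "https://www.example.com/login-proxy/oauth/authorize?${query}"
      ca:
        name: my-ca-config-map         # config map in openshift-config containing key ca.crt
      headers:
      - X-Remote-User
  tokenConfig:
    accessTokenMaxAgeSeconds: 86400
EOF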
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/config_apis/oauth-config-openshift-io-v1
Chapter 4. Verifying OpenShift Data Foundation deployment
Chapter 4. Verifying OpenShift Data Foundation deployment Use this section to verify that OpenShift Data Foundation is deployed correctly. 4.1. Verifying the state of the pods Procedure Click Workloads Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. For more information on the expected number of pods for each component and how it varies depending on the number of nodes, see the following table: Set filter for Running and Completed pods to verify that the following pods are in Running and Completed state: Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any storage node) ocs-metrics-exporter-* (1 pod on any storage node) odf-operator-controller-manager-* (1 pod on any storage node) odf-console-* (1 pod on any storage node) csi-addons-controller-manager-* (1 pod on any storage node) ux-backend-server-* (1 pod on any storage node) ocs-client-operator-* (1 pod on any storage node) ocs-client-operator-console-* (1 pod on any storage node) ocs-provider-server-* (1 pod on any storage node) Rook-ceph Operator rook-ceph-operator-* (1 pod on any storage node) Multicloud Object Gateway noobaa-operator-* (1 pod on any storage node) noobaa-core-* (1 pod on any storage node) noobaa-db-pg-* (1 pod on any storage node) noobaa-endpoint-* (1 pod on any storage node) MON rook-ceph-mon-* (3 pods distributed across storage nodes) MGR rook-ceph-mgr-* (1 pod on any storage node) MDS rook-ceph-mds-ocs-storagecluster-cephfilesystem-* (2 pods distributed across storage nodes) CSI cephfs csi-cephfsplugin-* (1 pod on each storage node) csi-cephfsplugin-provisioner-* (2 pods distributed across storage nodes) rbd csi-rbdplugin-* (1 pod on each storage node) csi-rbdplugin-provisioner-* (2 pods distributed across storage nodes) rook-ceph-crashcollector rook-ceph-crashcollector-* (1 pod on each storage node) OSD rook-ceph-osd-* (1 pod for each device) rook-ceph-osd-prepare-ocs-deviceset-* (1 pod for each device) ceph-csi-operator ceph-csi-controller-manager-* (1 pod for each device) 4.2. Verifying the OpenShift Data Foundation cluster is healthy Procedure In the OpenShift Web Console, click Storage Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Block and File tab, verify that the Storage Cluster has a green tick. In the Details card, verify that the cluster information is displayed. For more information on the health of the OpenShift Data Foundation cluster using the Block and File dashboard, see Monitoring OpenShift Data Foundation . 4.3. Verifying the Multicloud Object Gateway is healthy Procedure In the OpenShift Web Console, click Storage Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. In the Details card, verify that the MCG information is displayed. For more information on the health of the OpenShift Data Foundation cluster using the object service dashboard, see Monitoring OpenShift Data Foundation . Important The Multicloud Object Gateway only has a single copy of the database (NooBaa DB). 
This means that if the NooBaa DB PVC gets corrupted and cannot be recovered, it can result in total data loss of the applicative data residing on the Multicloud Object Gateway. Because of this, Red Hat recommends taking a backup of the NooBaa DB PVC regularly. If the NooBaa DB fails and cannot be recovered, you can revert to the latest backed-up version. For instructions on backing up your NooBaa DB, follow the steps in this knowledgebase article. 4.4. Verifying that the specific storage classes exist Procedure Click Storage Storage Classes from the left pane of the OpenShift Web Console. Verify that the following storage classes are created with the OpenShift Data Foundation cluster creation: ocs-storagecluster-ceph-rbd ocs-storagecluster-cephfs openshift-storage.noobaa.io
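The checks in this chapter can also be run from the command line. The following is a minimal sketch, assuming the oc client is logged in to the cluster and that OpenShift Data Foundation was deployed to the default openshift-storage namespace; the resource and storage class names shown are the defaults and may differ in customized deployments.

# Verify that the pods listed in the table above are Running or Completed
oc get pods -n openshift-storage

# Verify the health reported by the storage cluster resource
oc get storagecluster -n openshift-storage

# Verify that the expected storage classes were created
oc get storageclass | grep -E 'ocs-storagecluster-ceph-rbd|ocs-storagecluster-cephfs|openshift-storage.noobaa.io'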
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/deploying_openshift_data_foundation_using_microsoft_azure/verifying_openshift_data_foundation_deployment
Chapter 49. RbacService
Chapter 49. RbacService 49.1. ListRoleBindings GET /v1/rbac/bindings 49.1.1. Description 49.1.2. Parameters 49.1.2.1. Query Parameters Name Description Required Default Pattern query - null pagination.limit - null pagination.offset - null pagination.sortOption.field - null pagination.sortOption.reversed - null pagination.sortOption.aggregateBy.aggrFunc - UNSET pagination.sortOption.aggregateBy.distinct - null 49.1.3. Return Type V1ListRoleBindingsResponse 49.1.4. Content Type application/json 49.1.5. Responses Table 49.1. HTTP Response Codes Code Message Datatype 200 A successful response. V1ListRoleBindingsResponse 0 An unexpected error response. GooglerpcStatus 49.1.6. Samples 49.1.7. Common object reference 49.1.7.1. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 49.1.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 49.1.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 49.1.7.3. 
StorageK8sRoleBinding Field Name Required Nullable Type Description Format id String name String namespace String clusterId String clusterName String clusterRole Boolean ClusterRole specifies whether the binding binds a cluster role. However, it cannot be used to determine whether the binding is a cluster role binding. This can be done in conjunction with the namespace. If the namespace is empty and cluster role is true, the binding is a cluster role binding. labels Map of string annotations Map of string createdAt Date date-time subjects List of StorageSubject roleId String 49.1.7.4. StorageSubject Field Name Required Nullable Type Description Format id String kind StorageSubjectKind UNSET_KIND, SERVICE_ACCOUNT, USER, GROUP, name String namespace String clusterId String clusterName String 49.1.7.5. StorageSubjectKind Enum Values UNSET_KIND SERVICE_ACCOUNT USER GROUP 49.1.7.6. V1ListRoleBindingsResponse Field Name Required Nullable Type Description Format bindings List of StorageK8sRoleBinding 49.2. GetRoleBinding GET /v1/rbac/bindings/{id} 49.2.1. Description 49.2.2. Parameters 49.2.2.1. Path Parameters Name Description Required Default Pattern id X null 49.2.3. Return Type V1GetRoleBindingResponse 49.2.4. Content Type application/json 49.2.5. Responses Table 49.2. HTTP Response Codes Code Message Datatype 200 A successful response. V1GetRoleBindingResponse 0 An unexpected error response. GooglerpcStatus 49.2.6. Samples 49.2.7. Common object reference 49.2.7.1. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 49.2.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 49.2.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. 
* An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 49.2.7.3. StorageK8sRoleBinding Field Name Required Nullable Type Description Format id String name String namespace String clusterId String clusterName String clusterRole Boolean ClusterRole specifies whether the binding binds a cluster role. However, it cannot be used to determine whether the binding is a cluster role binding. This can be done in conjunction with the namespace. If the namespace is empty and cluster role is true, the binding is a cluster role binding. labels Map of string annotations Map of string createdAt Date date-time subjects List of StorageSubject roleId String 49.2.7.4. StorageSubject Field Name Required Nullable Type Description Format id String kind StorageSubjectKind UNSET_KIND, SERVICE_ACCOUNT, USER, GROUP, name String namespace String clusterId String clusterName String 49.2.7.5. StorageSubjectKind Enum Values UNSET_KIND SERVICE_ACCOUNT USER GROUP 49.2.7.6. V1GetRoleBindingResponse Field Name Required Nullable Type Description Format binding StorageK8sRoleBinding 49.3. ListRoles GET /v1/rbac/roles 49.3.1. Description 49.3.2. Parameters 49.3.2.1. Query Parameters Name Description Required Default Pattern query - null pagination.limit - null pagination.offset - null pagination.sortOption.field - null pagination.sortOption.reversed - null pagination.sortOption.aggregateBy.aggrFunc - UNSET pagination.sortOption.aggregateBy.distinct - null 49.3.3. Return Type V1ListRolesResponse 49.3.4. Content Type application/json 49.3.5. Responses Table 49.3. HTTP Response Codes Code Message Datatype 200 A successful response. V1ListRolesResponse 0 An unexpected error response. GooglerpcStatus 49.3.6. Samples 49.3.7. Common object reference 49.3.7.1. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 49.3.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 49.3.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. 
Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 49.3.7.3. StorageK8sRole Field Name Required Nullable Type Description Format id String name String namespace String clusterId String clusterName String clusterRole Boolean labels Map of string annotations Map of string createdAt Date date-time rules List of StoragePolicyRule 49.3.7.4. StoragePolicyRule Field Name Required Nullable Type Description Format verbs List of string apiGroups List of string resources List of string nonResourceUrls List of string resourceNames List of string 49.3.7.5. V1ListRolesResponse Field Name Required Nullable Type Description Format roles List of StorageK8sRole 49.4. GetRole GET /v1/rbac/roles/{id} 49.4.1. Description 49.4.2. Parameters 49.4.2.1. Path Parameters Name Description Required Default Pattern id X null 49.4.3. Return Type V1GetRoleResponse 49.4.4. Content Type application/json 49.4.5. Responses Table 49.4. HTTP Response Codes Code Message Datatype 200 A successful response. V1GetRoleResponse 0 An unexpected error response. GooglerpcStatus 49.4.6. Samples 49.4.7. Common object reference 49.4.7.1. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 49.4.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. 
The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 49.4.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 49.4.7.3. StorageK8sRole Field Name Required Nullable Type Description Format id String name String namespace String clusterId String clusterName String clusterRole Boolean labels Map of string annotations Map of string createdAt Date date-time rules List of StoragePolicyRule 49.4.7.4. StoragePolicyRule Field Name Required Nullable Type Description Format verbs List of string apiGroups List of string resources List of string nonResourceUrls List of string resourceNames List of string 49.4.7.5. V1GetRoleResponse Field Name Required Nullable Type Description Format role StorageK8sRole 49.5. GetSubject GET /v1/rbac/subject/{id} Subjects served from this API are Groups and Users only. Id in this case is the Name field, since for users and groups, that is unique, and subjects do not have IDs. 49.5.1. Description 49.5.2. Parameters 49.5.2.1. Path Parameters Name Description Required Default Pattern id X null 49.5.3. Return Type V1GetSubjectResponse 49.5.4. Content Type application/json 49.5.5. Responses Table 49.5. HTTP Response Codes Code Message Datatype 200 A successful response. V1GetSubjectResponse 0 An unexpected error response. GooglerpcStatus 49.5.6. Samples 49.5.7. Common object reference 49.5.7.1. 
GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 49.5.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 49.5.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 49.5.7.3. StorageK8sRole Field Name Required Nullable Type Description Format id String name String namespace String clusterId String clusterName String clusterRole Boolean labels Map of string annotations Map of string createdAt Date date-time rules List of StoragePolicyRule 49.5.7.4. StoragePolicyRule Field Name Required Nullable Type Description Format verbs List of string apiGroups List of string resources List of string nonResourceUrls List of string resourceNames List of string 49.5.7.5. StorageSubject Field Name Required Nullable Type Description Format id String kind StorageSubjectKind UNSET_KIND, SERVICE_ACCOUNT, USER, GROUP, name String namespace String clusterId String clusterName String 49.5.7.6. StorageSubjectKind Enum Values UNSET_KIND SERVICE_ACCOUNT USER GROUP 49.5.7.7. 
V1GetSubjectResponse Field Name Required Nullable Type Description Format subject StorageSubject clusterRoles List of StorageK8sRole scopedRoles List of V1ScopedRoles 49.5.7.8. V1ScopedRoles Field Name Required Nullable Type Description Format namespace String roles List of StorageK8sRole 49.6. ListSubjects GET /v1/rbac/subjects 49.6.1. Description 49.6.2. Parameters 49.6.2.1. Query Parameters Name Description Required Default Pattern query - null pagination.limit - null pagination.offset - null pagination.sortOption.field - null pagination.sortOption.reversed - null pagination.sortOption.aggregateBy.aggrFunc - UNSET pagination.sortOption.aggregateBy.distinct - null 49.6.3. Return Type V1ListSubjectsResponse 49.6.4. Content Type application/json 49.6.5. Responses Table 49.6. HTTP Response Codes Code Message Datatype 200 A successful response. V1ListSubjectsResponse 0 An unexpected error response. GooglerpcStatus 49.6.6. Samples 49.6.7. Common object reference 49.6.7.1. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 49.6.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 49.6.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. 
As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 49.6.7.3. StorageK8sRole Field Name Required Nullable Type Description Format id String name String namespace String clusterId String clusterName String clusterRole Boolean labels Map of string annotations Map of string createdAt Date date-time rules List of StoragePolicyRule 49.6.7.4. StoragePolicyRule Field Name Required Nullable Type Description Format verbs List of string apiGroups List of string resources List of string nonResourceUrls List of string resourceNames List of string 49.6.7.5. StorageSubject Field Name Required Nullable Type Description Format id String kind StorageSubjectKind UNSET_KIND, SERVICE_ACCOUNT, USER, GROUP, name String namespace String clusterId String clusterName String 49.6.7.6. StorageSubjectKind Enum Values UNSET_KIND SERVICE_ACCOUNT USER GROUP 49.6.7.7. V1ListSubjectsResponse Field Name Required Nullable Type Description Format subjectAndRoles List of V1SubjectAndRoles 49.6.7.8. V1SubjectAndRoles Field Name Required Nullable Type Description Format subject StorageSubject roles List of StorageK8sRole
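All of the RbacService endpoints are read-only and can be exercised directly over HTTPS. The following is a minimal sketch, assuming Central is reachable at the illustrative address stored in ROX_ENDPOINT and that ROX_API_TOKEN holds an API token authorized to read Kubernetes RBAC objects; adjust or drop the -k flag depending on how Central's TLS certificate is trusted.

# List Kubernetes roles known to Central (GET /v1/rbac/roles)
curl -sk -H "Authorization: Bearer ${ROX_API_TOKEN}" \
  "https://${ROX_ENDPOINT}/v1/rbac/roles"

# List role bindings, filtered by a search query and paginated
curl -sk -H "Authorization: Bearer ${ROX_API_TOKEN}" \
  "https://${ROX_ENDPOINT}/v1/rbac/bindings?query=Namespace:kube-system&pagination.limit=10"

# Fetch a single subject; subjects have no IDs, so the subject name is used as the {id} path parameter
curl -sk -H "Authorization: Bearer ${ROX_API_TOKEN}" \
  "https://${ROX_ENDPOINT}/v1/rbac/subject/system:serviceaccounts"

Responses are JSON documents that follow the V1ListRolesResponse, V1ListRoleBindingsResponse, and V1GetSubjectResponse schemas described above.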
[ "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Properties of an individual k8s RoleBinding or ClusterRoleBinding. ////////////////////////////////////////", "Properties of an individual subjects who are granted roles via role bindings. ////////////////////////////////////////", "A list of k8s role bindings (free of scoped information) Next Tag: 2", "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Properties of an individual k8s RoleBinding or ClusterRoleBinding. ////////////////////////////////////////", "Properties of an individual subjects who are granted roles via role bindings. ////////////////////////////////////////", "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Properties of an individual k8s Role or ClusterRole. ////////////////////////////////////////", "Properties of an individual rules that grant permissions to resources. 
////////////////////////////////////////", "A list of k8s roles (free of scoped information) Next Tag: 2", "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Properties of an individual k8s Role or ClusterRole. ////////////////////////////////////////", "Properties of an individual rules that grant permissions to resources. ////////////////////////////////////////", "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Properties of an individual k8s Role or ClusterRole. ////////////////////////////////////////", "Properties of an individual rules that grant permissions to resources. ////////////////////////////////////////", "Properties of an individual subjects who are granted roles via role bindings. ////////////////////////////////////////", "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Properties of an individual k8s Role or ClusterRole. ////////////////////////////////////////", "Properties of an individual rules that grant permissions to resources. 
////////////////////////////////////////", "Properties of an individual subjects who are granted roles via role bindings. ////////////////////////////////////////", "A list of k8s subjects (users and groups only, for service accounts, try the service account service) Next Tag: 2" ]
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.6/html/api_reference/rbacservice
Chapter 2. About Network Observability
Chapter 2. About Network Observability Red Hat offers cluster administrators and developers the Network Observability Operator to observe the network traffic for OpenShift Container Platform clusters. The Network Observability Operator uses the eBPF technology to create network flows. The network flows are then enriched with OpenShift Container Platform information. They are available as Prometheus metrics or as logs in Loki. You can view and analyze the stored network flows information in the OpenShift Container Platform console for further insight and troubleshooting. 2.1. Optional dependencies of the Network Observability Operator Loki Operator: Loki is the backend that can be used to store all collected flows with a maximal level of details. You can choose to use Network Observability without Loki , but there are some considerations for doing this, as described in the linked section. If you choose to install Loki, it is recommended to use the Loki Operator, which is supported by Red Hat. AMQ Streams Operator: Kafka provides scalability, resiliency and high availability in the OpenShift Container Platform cluster for large scale deployments. If you choose to use Kafka, it is recommended to use the AMQ Streams Operator, because it is supported by Red Hat. 2.2. Network Observability Operator The Network Observability Operator provides the Flow Collector API custom resource definition. A Flow Collector instance is a cluster-scoped resource that enables configuration of network flow collection. The Flow Collector instance deploys pods and services that form a monitoring pipeline where network flows are then collected and enriched with the Kubernetes metadata before storing in Loki or generating Prometheus metrics. The eBPF agent, which is deployed as a daemonset object, creates the network flows. 2.3. OpenShift Container Platform console integration OpenShift Container Platform console integration offers overview, topology view, and traffic flow tables in both Administrator and Developer perspectives. In the Administrator perspective, you can find the Network Observability Overview , Traffic flows , and Topology views by clicking Observe Network Traffic . In the Developer perspective, you can view this information by clicking Observe . The Network Observability metrics dashboards in Observe Dashboards are only available to administrators. Note To enable multi-tenancy for the developer perspective and for administrators with limited access to namespaces, you must specify permissions by defining roles. For more information, see Enabling multi-tenancy in Network Observability . 2.3.1. Network Observability metrics dashboards On the Overview tab in the OpenShift Container Platform console, you can view the overall aggregated metrics of the network traffic flow on the cluster. You can choose to display the information by zone, node, namespace, owner, pod, and service. Filters and display options can further refine the metrics. For more information, see Observing the network traffic from the Overview view . In Observe Dashboards , the Netobserv dashboards provide a quick overview of the network flows in your OpenShift Container Platform cluster. The Netobserv/Health dashboard provides metrics about the health of the Operator. For more information, see Network Observability Metrics and Viewing health information . 2.3.2. 
Network Observability topology views The OpenShift Container Platform console offers the Topology tab, which displays a graphical representation of the network flows and the amount of traffic. The topology view represents traffic between the OpenShift Container Platform components as a network graph. You can refine the graph by using the filters and display options. You can access the information for zone, node, namespace, owner, pod, and service. 2.3.3. Traffic flow tables The Traffic flow table view provides a view for raw flows, non-aggregated filtering options, and configurable columns. The OpenShift Container Platform console offers the Traffic flows tab, which displays the data of the network flows and the amount of traffic. 2.4. Network Observability CLI You can quickly debug and troubleshoot networking issues with Network Observability by using the Network Observability CLI ( oc netobserv ). The Network Observability CLI is a flow and packet visualization tool that relies on eBPF agents to stream collected data to an ephemeral collector pod. It requires no persistent storage during the capture. After the run, the output is transferred to your local machine. This enables quick, live insight into packets and flow data without installing the Network Observability Operator.
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/network_observability/network-observability-overview
Chapter 3. Ceph Object Gateway and the S3 API
Chapter 3. Ceph Object Gateway and the S3 API As a developer, you can use a RESTful application programming interface (API) that is compatible with the Amazon S3 data access model. You can manage the buckets and objects stored in a Red Hat Ceph Storage cluster through the Ceph Object Gateway. Prerequisites A running Red Hat Ceph Storage cluster. A RESTful client. 3.1. S3 limitations Important The following limitations should be used with caution. There are implications related to your hardware selections, so you should always discuss these requirements with your Red Hat account team. Maximum object size when using Amazon S3: Individual Amazon S3 objects can range in size from a minimum of 0B to a maximum of 5TB. The largest object that can be uploaded in a single PUT is 5GB. For objects larger than 100MB, you should consider using the Multipart Upload capability. Maximum metadata size when using Amazon S3: There is no defined limit on the total size of user metadata that can be applied to an object, but a single HTTP request is limited to 16,000 bytes. The amount of data overhead Red Hat Ceph Storage cluster produces to store S3 objects and metadata: The estimate here is 200-300 bytes plus the length of the object name. Versioned objects consume additional space proportional to the number of versions. Also, transient overhead is produced during multi-part upload and other transactional updates, but these overheads are recovered during garbage collection. Additional Resources See the Red Hat Ceph Storage Developer Guide for details on the unsupported header fields . 3.2. Accessing the Ceph Object Gateway with the S3 API As a developer, you must configure access to the Ceph Object Gateway and the Secure Token Service (STS) before you can start using the Amazon S3 API. Prerequisites A running Red Hat Ceph Storage cluster. A running Ceph Object Gateway. A RESTful client. 3.2.1. S3 authentication Requests to the Ceph Object Gateway can be either authenticated or unauthenticated. Ceph Object Gateway assumes unauthenticated requests are sent by an anonymous user. Ceph Object Gateway supports canned ACLs. For most use cases, clients use existing open source libraries like the Amazon SDK's AmazonS3Client for Java, and Python Boto. With open source libraries you simply pass in the access key and secret key and the library builds the request header and authentication signature for you. However, you can create requests and sign them too. Authenticating a request requires including an access key and a base 64-encoded hash-based Message Authentication Code (HMAC) in the request before it is sent to the Ceph Object Gateway server. Ceph Object Gateway uses an S3-compatible authentication approach. Example In the above example, replace ACCESS_KEY with the value for the access key ID followed by a colon ( : ). Replace HASH_OF_HEADER_AND_SECRET with a hash of a canonicalized header string and the secret corresponding to the access key ID. Generate hash of header string and secret To generate the hash of the header string and secret: Get the value of the header string. Normalize the request header string into canonical form. Generate an HMAC using a SHA-1 hashing algorithm. Encode the hmac result as base-64. Normalize header To normalize the header into canonical form: Get all content- headers. Remove all content- headers except for content-type and content-md5 . Ensure the content- header names are lowercase. Sort the content- headers lexicographically. 
Ensure you have a Date header AND ensure the specified date uses GMT and not an offset. Get all headers beginning with x-amz- . Ensure that the x-amz- headers are all lowercase. Sort the x-amz- headers lexicographically. Combine multiple instances of the same field name into a single field and separate the field values with a comma. Replace white space and line breaks in header values with a single space. Remove white space before and after colons. Append a new line after each header. Merge the headers back into the request header. Replace the HASH_OF_HEADER_AND_SECRET with the base-64 encoded HMAC string. Additional Resources For additional details, consult the Signing and Authenticating REST Requests section of Amazon Simple Storage Service documentation. 3.2.2. S3-server-side encryption The Ceph Object Gateway supports server-side encryption of uploaded objects for the S3 application programming interface (API). Server-side encryption means that the S3 client sends data over HTTP in its unencrypted form, and the Ceph Object Gateway stores that data in the Red Hat Ceph Storage cluster in encrypted form. Note Red Hat does NOT support S3 object encryption of Static Large Object (SLO) or Dynamic Large Object (DLO). Important To use encryption, client requests MUST send requests over an SSL connection. Red Hat does not support S3 encryption from a client unless the Ceph Object Gateway uses SSL. However, for testing purposes, administrators can disable SSL during testing by setting the rgw_crypt_require_ssl configuration setting to false at runtime, using the ceph config set client.rgw command, and then restarting the Ceph Object Gateway instance. In a production environment, it might not be possible to send encrypted requests over SSL. In such a case, send requests using HTTP with server-side encryption. For information about how to configure HTTP with server-side encryption, see the Additional Resources section below. There are two options for the management of encryption keys: Customer-provided Keys When using customer-provided keys, the S3 client passes an encryption key along with each request to read or write encrypted data. It is the customer's responsibility to manage those keys. Customers must remember which key the Ceph Object Gateway used to encrypt each object. Ceph Object Gateway implements the customer-provided key behavior in the S3 API according to the Amazon SSE-C specification. Since the customer handles the key management and the S3 client passes keys to the Ceph Object Gateway, the Ceph Object Gateway requires no special configuration to support this encryption mode. Key Management Service When using a key management service, the secure key management service stores the keys and the Ceph Object Gateway retrieves them on demand to serve requests to encrypt or decrypt data. Ceph Object Gateway implements the key management service behavior in the S3 API according to the Amazon SSE-KMS specification. Important Currently, the only tested key management implementations are HashiCorp Vault, and OpenStack Barbican. However, OpenStack Barbican is a Technology Preview and is not supported for use in production systems. Additional Resources Amazon SSE-C Amazon SSE-KMS Configuring server-side encryption The HashiCorp Vault 3.2.3. S3 access control lists Ceph Object Gateway supports S3-compatible Access Control Lists (ACL) functionality. An ACL is a list of access grants that specify which operations a user can perform on a bucket or on an object. 
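As a companion to the request-signing steps described in the authentication section above (section 3.2.1), here is a minimal Python sketch of producing the Authorization header from an already canonicalized request. It assumes the HMAC-SHA1 signing scheme described there; the access key, secret key, bucket, and object names are placeholders. In practice, libraries such as Boto build this header for you, as noted above.

```python
import base64
import hashlib
import hmac

# Placeholders: substitute the access key and secret key generated for the radosgw user.
ACCESS_KEY = "MY_ACCESS_KEY"
SECRET_KEY = "MY_SECRET_KEY"

def authorization_header(method, content_md5, content_type, date, canonical_amz_headers, canonical_resource):
    """Build the Authorization header value from the canonicalized request pieces.

    canonical_amz_headers is the already normalized block of x-amz- headers
    (lowercased, sorted, each followed by a newline), and canonical_resource
    is the bucket or object path, for example "/my-new-bucket1/hello.txt".
    """
    string_to_sign = "\n".join([method, content_md5, content_type, date]) + "\n"
    string_to_sign += canonical_amz_headers + canonical_resource
    # Generate an HMAC using the SHA-1 hashing algorithm, then base64-encode the result.
    digest = hmac.new(SECRET_KEY.encode(), string_to_sign.encode(), hashlib.sha1).digest()
    signature = base64.b64encode(digest).decode()
    return f"AWS {ACCESS_KEY}:{signature}"

# Example: signing a simple GET of an object with no content headers.
print(authorization_header("GET", "", "", "Tue, 01 Aug 2023 12:00:00 GMT", "", "/my-new-bucket1/hello.txt"))
```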
Each grant has a different meaning when applied to a bucket versus applied to an object: Table 3.1. User Operations Permission Bucket Object READ Grantee can list the objects in the bucket. Grantee can read the object. WRITE Grantee can write or delete objects in the bucket. N/A READ_ACP Grantee can read bucket ACL. Grantee can read the object ACL. WRITE_ACP Grantee can write bucket ACL. Grantee can write to the object ACL. FULL_CONTROL Grantee has full permissions for object in the bucket. Grantee can read or write to the object ACL. 3.2.4. Preparing access to the Ceph Object Gateway using S3 You have to follow some pre-requisites on the Ceph Object Gateway node before attempting to access the gateway server. Prerequisites Installation of the Ceph Object Gateway software. Root-level access to the Ceph Object Gateway node. Procedure As root , open port 8080 on the firewall: Add a wildcard to the DNS server that you are using for the gateway as mentioned in the Object Gateway Configuration and Administration Guide . You can also set up the gateway node for local DNS caching. To do so, execute the following steps: As root , install and setup dnsmasq : Replace IP_OF_GATEWAY_NODE and FQDN_OF_GATEWAY_NODE with the IP address and FQDN of the gateway node. As root , stop NetworkManager: As root , set the gateway server's IP as the nameserver: Replace IP_OF_GATEWAY_NODE and FQDN_OF_GATEWAY_NODE with the IP address and FQDN of the gateway node. Verify subdomain requests: Replace FQDN_OF_GATEWAY_NODE with the FQDN of the gateway node. Warning Setting up the gateway server for local DNS caching is for testing purposes only. You won't be able to access the outside network after doing this. It is strongly recommended to use a proper DNS server for the Red Hat Ceph Storage cluster and gateway node. Create the radosgw user for S3 access carefully as mentioned in the Object Gateway Configuration and Administration Guide and copy the generated access_key and secret_key . You will need these keys for S3 access and subsequent bucket management tasks. 3.2.5. Accessing the Ceph Object Gateway using Ruby AWS S3 You can use Ruby programming language along with aws-s3 gem for S3 access. Execute the steps mentioned below on the node used for accessing the Ceph Object Gateway server with Ruby AWS::S3 . Prerequisites User-level access to Ceph Object Gateway. Root-level access to the node accessing the Ceph Object Gateway. Internet access. Procedure Install the ruby package: Note The above command will install ruby and its essential dependencies like rubygems and ruby-libs . If somehow the command does not install all the dependencies, install them separately. Install the aws-s3 Ruby package: Create a project directory: Create the connection file: Paste the following contents into the conn.rb file: Syntax Replace FQDN_OF_GATEWAY_NODE with the FQDN of the Ceph Object Gateway node. Replace MY_ACCESS_KEY and MY_SECRET_KEY with the access_key and secret_key that were generated when you created the radosgw user for S3 access as mentioned in the Red Hat Ceph Storage Object Gateway Configuration and Administration Guide . Example Save the file and exit the editor. Make the file executable: Run the file: If you have provided the values correctly in the file, the output of the command will be 0 . Create a new file for creating a bucket: Paste the following contents into the file: Save the file and exit the editor. 
Make the file executable: Run the file: If the output of the command is true it would mean that bucket my-new-bucket1 was created successfully. Create a new file for listing owned buckets: Paste the following content into the file: Save the file and exit the editor. Make the file executable: Run the file: The output should look something like this: Create a new file for creating an object: Paste the following contents into the file: Save the file and exit the editor. Make the file executable: Run the file: This will create a file hello.txt with the string Hello World! . Create a new file for listing a bucket's content: Paste the following content into the file: Save the file and exit the editor. Make the file executable. Run the file: The output will look something like this: Create a new file for deleting an empty bucket: Paste the following contents into the file: Save the file and exit the editor. Make the file executable: Run the file: If the bucket is successfully deleted, the command will return 0 as output. Note Edit the create_bucket.rb file to create empty buckets, for example, my-new-bucket4 , my-new-bucket5 . , edit the above-mentioned del_empty_bucket.rb file accordingly before trying to delete empty buckets. Create a new file for deleting non-empty buckets: Paste the following contents into the file: Save the file and exit the editor. Make the file executable: Run the file: If the bucket is successfully deleted, the command will return 0 as output. Create a new file for deleting an object: Paste the following contents into the file: Save the file and exit the editor. Make the file executable: Run the file: This will delete the object hello.txt . 3.2.6. Accessing the Ceph Object Gateway using Ruby AWS SDK You can use the Ruby programming language along with aws-sdk gem for S3 access. Execute the steps mentioned below on the node used for accessing the Ceph Object Gateway server with Ruby AWS::SDK . Prerequisites User-level access to Ceph Object Gateway. Root-level access to the node accessing the Ceph Object Gateway. Internet access. Procedure Install the ruby package: Note The above command will install ruby and its essential dependencies like rubygems and ruby-libs . If somehow the command does not install all the dependencies, install them separately. Install the aws-sdk Ruby package: Create a project directory: Create the connection file: Paste the following contents into the conn.rb file: Syntax Replace FQDN_OF_GATEWAY_NODE with the FQDN of the Ceph Object Gateway node. Replace MY_ACCESS_KEY and MY_SECRET_KEY with the access_key and secret_key that were generated when you created the radosgw user for S3 access as mentioned in the Red Hat Ceph Storage Object Gateway Configuration and Administration Guide . Example Save the file and exit the editor. Make the file executable: Run the file: If you have provided the values correctly in the file, the output of the command will be 0 . Create a new file for creating a bucket: Paste the following contents into the file: Syntax Save the file and exit the editor. Make the file executable: Run the file: If the output of the command is true , this means that bucket my-new-bucket2 was created successfully. Create a new file for listing owned buckets: Paste the following content into the file: Save the file and exit the editor. Make the file executable: Run the file: The output should look something like this: Create a new file for creating an object: Paste the following contents into the file: Save the file and exit the editor. 
Make the file executable: Run the file: This will create a file hello.txt with the string Hello World! . Create a new file for listing a bucket's content: Paste the following content into the file: Save the file and exit the editor. Make the file executable. Run the file: The output will look something like this: Create a new file for deleting an empty bucket: Paste the following contents into the file: Save the file and exit the editor. Make the file executable: Run the file: If the bucket is successfully deleted, the command will return 0 as output. Note Edit the create_bucket.rb file to create empty buckets, for example, my-new-bucket6 , my-new-bucket7 . , edit the above-mentioned del_empty_bucket.rb file accordingly before trying to delete empty buckets. Create a new file for deleting a non-empty bucket: Paste the following contents into the file: Save the file and exit the editor. Make the file executable: Run the file: If the bucket is successfully deleted, the command will return 0 as output. Create a new file for deleting an object: Paste the following contents into the file: Save the file and exit the editor. Make the file executable: Run the file: This will delete the object hello.txt . 3.2.7. Accessing the Ceph Object Gateway using PHP You can use PHP scripts for S3 access. This procedure provides some example PHP scripts to do various tasks, such as deleting a bucket or an object. Important The examples given below are tested against php v5.4.16 and aws-sdk v2.8.24 . Prerequisites Root-level access to a development workstation. Internet access. Procedure Install the php package: Download the zip archive of aws-sdk for PHP and extract it. Create a project directory: Copy the extracted aws directory to the project directory. For example: Create the connection file: Paste the following contents in the conn.php file: Syntax Replace FQDN_OF_GATEWAY_NODE with the FQDN of the gateway node. Replace MY_ACCESS_KEY and MY_SECRET_KEY with the access_key and secret_key that were generated when creating the radosgw user for S3 access as mentioned in the Red Hat Ceph Storage Object Gateway Configuration and Administration Guide . Replace PATH_TO_AWS with the absolute path to the extracted aws directory that you copied to the php project directory. Save the file and exit the editor. Run the file: If you have provided the values correctly in the file, the output of the command will be 0 . Create a new file for creating a bucket: Paste the following contents into the new file: Syntax Save the file and exit the editor. Run the file: Create a new file for listing owned buckets: Paste the following content into the file: Syntax Save the file and exit the editor. Run the file: The output should look similar to this: Create an object by first creating a source file named hello.txt : Create a new php file: Paste the following contents into the file: Syntax Save the file and exit the editor. Run the file: This will create the object hello.txt in bucket my-new-bucket3 . Create a new file for listing a bucket's content: Paste the following content into the file: Syntax Save the file and exit the editor. Run the file: The output will look similar to this: Create a new file for deleting an empty bucket: Paste the following contents into the file: Syntax Save the file and exit the editor. Run the file: If the bucket is successfully deleted, the command will return 0 as output. Note Edit the create_bucket.php file to create empty buckets, for example, my-new-bucket4 , my-new-bucket5 . 
Then, edit the above-mentioned del_empty_bucket.php file accordingly before trying to delete empty buckets. Important Deleting a non-empty bucket is currently not supported in version 2 and newer of the aws-sdk for PHP. Create a new file for deleting an object: Paste the following contents into the file: Syntax Save the file and exit the editor. Run the file: This will delete the object hello.txt . 3.2.8. Secure Token Service The Amazon Web Services' Secure Token Service (STS) returns a set of temporary security credentials for authenticating users. Red Hat Ceph Storage Object Gateway supports a subset of Amazon STS application programming interfaces (APIs) for identity and access management (IAM). Users first authenticate against STS and receive a short-lived S3 access key and secret key that can be used in subsequent requests. Red Hat Ceph Storage can authenticate S3 users by integrating with a Single Sign-On (SSO) solution that is configured with an OIDC provider. This feature enables Object Storage users to authenticate against an enterprise identity provider rather than the local Ceph Object Gateway database. For instance, if the SSO is connected to an enterprise IDP in the backend, Object Storage users can use their enterprise credentials to authenticate and get access to the Ceph Object Gateway S3 endpoint. By using STS along with the IAM role policy feature, you can create finely tuned authorization policies to control access to your data. This enables you to implement either a Role-Based Access Control (RBAC) or Attribute-Based Access Control (ABAC) authorization model for your object storage data, giving you complete control over who can access the data. Simplified workflow to access S3 resources with STS The user wants to access S3 resources in Red Hat Ceph Storage. The user needs to authenticate against the SSO provider. The SSO provider is federated with an IDP and checks whether the user credentials are valid; if they are, the user is authenticated and the SSO provides a token to the user. Using the token provided by the SSO, the user accesses the Ceph Object Gateway STS endpoint, asking to assume an IAM role that provides the user with access to S3 resources. The Red Hat Ceph Storage gateway receives the user token and asks the SSO to validate the token. Once the SSO validates the token, the user is allowed to assume the role. Through STS, the user is provided with temporary access and secret keys that give the user access to the S3 resources. Depending on the policies attached to the IAM role the user has assumed, the user can access a set of S3 resources. For example, read access to bucket A and write access to bucket B. Additional Resources Amazon Web Services Secure Token Service welcome page . See the Configuring and using STS Lite with Keystone section of the Red Hat Ceph Storage Developer Guide for details on STS Lite and Keystone. See the Working around the limitations of using STS Lite with Keystone section of the Red Hat Ceph Storage Developer Guide for details on the limitations of STS Lite and Keystone. 3.2.8.1. The Secure Token Service application programming interfaces The Ceph Object Gateway implements the following Secure Token Service (STS) application programming interfaces (APIs): AssumeRole This API returns a set of temporary credentials for cross-account access. These temporary credentials honor both the permission policies attached to the role and any policies passed with the AssumeRole API call. The RoleArn and the RoleSessionName request parameters are required, but the other request parameters are optional.
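Before the individual request parameters are described below, the following boto3 sketch shows a plausible AssumeRole call against a Ceph Object Gateway STS endpoint, using only the two required parameters. The endpoint URL, region name, role ARN, and credentials are placeholders, and the role is assumed to have been created as described later in this chapter.

```python
import boto3

# Placeholders: substitute your gateway endpoint, the assuming user's keys,
# and the ARN of a role that exists on the Ceph Object Gateway.
sts = boto3.client(
    "sts",
    endpoint_url="http://rgw.example.com:8080",
    aws_access_key_id="ASSUMING_USER_ACCESS_KEY",
    aws_secret_access_key="ASSUMING_USER_SECRET_KEY",
    region_name="us-east-1",  # arbitrary placeholder; boto3 needs a region name for signing
)

response = sts.assume_role(
    RoleArn="arn:aws:iam:::role/S3Access",  # required
    RoleSessionName="example-session",      # required
    DurationSeconds=3600,                   # optional, 3600 seconds is the default
)
creds = response["Credentials"]

# The temporary credentials can then be used for S3 calls against the same endpoint.
s3 = boto3.client(
    "s3",
    endpoint_url="http://rgw.example.com:8080",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
    region_name="us-east-1",
)
print(s3.list_buckets()["Buckets"])
```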
RoleArn Description The role to assume for the Amazon Resource Name (ARN) with a length of 20 to 2048 characters. Type String Required Yes RoleSessionName Description Identifying the role session name to assume. The role session name can uniquely identify a session when different principals or different reasons assume a role. This parameter's value has a length of 2 to 64 characters. The = , , , . , @ , and - characters are allowed, but no spaces allowed. Type String Required Yes Policy Description An identity and access management policy (IAM) in a JSON format for use in an inline session. This parameter's value has a length of 1 to 2048 characters. Type String Required No DurationSeconds Description The duration of the session in seconds, with a minimum value of 900 seconds to a maximum value of 43200 seconds. The default value is 3600 seconds. Type Integer Required No ExternalId Description When assuming a role for another account, provide the unique external identifier if available. This parameter's value has a length of 2 to 1224 characters. Type String Required No SerialNumber Description A user's identification number from their associated multi-factor authentication (MFA) device. The parameter's value can be the serial number of a hardware device or a virtual device, with a length of 9 to 256 characters. Type String Required No TokenCode Description The value generated from the multi-factor authentication (MFA) device, if the trust policy requires MFA. If an MFA device is required, and if this parameter's value is empty or expired, then AssumeRole call returns an "access denied" error message. This parameter's value has a fixed length of 6 characters. Type String Required No AssumeRoleWithWebIdentity This API returns a set of temporary credentials for users who have been authenticated by an application, such as OpenID Connect or OAuth 2.0 Identity Provider. The RoleArn and the RoleSessionName request parameters are required, but the other request parameters are optional. RoleArn Description The role to assume for the Amazon Resource Name (ARN) with a length of 20 to 2048 characters. Type String Required Yes RoleSessionName Description Identifying the role session name to assume. The role session name can uniquely identify a session when different principals or different reasons assume a role. This parameter's value has a length of 2 to 64 characters. The = , , , . , @ , and - characters are allowed, but no spaces are allowed. Type String Required Yes Policy Description An identity and access management policy (IAM) in a JSON format for use in an inline session. This parameter's value has a length of 1 to 2048 characters. Type String Required No DurationSeconds Description The duration of the session in seconds, with a minimum value of 900 seconds to a maximum value of 43200 seconds. The default value is 3600 seconds. Type Integer Required No ProviderId Description The fully qualified host component of the domain name from the identity provider. This parameter's value is only valid for OAuth 2.0 access tokens, with a length of 4 to 2048 characters. Type String Required No WebIdentityToken Description The OpenID Connect identity token or OAuth 2.0 access token provided from an identity provider. This parameter's value has a length of 4 to 2048 characters. Type String Required No Additional Resources See the Examples using the Secure Token Service APIs section of the Red Hat Ceph Storage Developer Guide for more details. 
Amazon Web Services Security Token Service, the AssumeRole action. Amazon Web Services Security Token Service, the AssumeRoleWithWebIdentity action. 3.2.8.2. Configuring the Secure Token Service Configure the Secure Token Service (STS) for use with the Ceph Object Gateway by setting the rgw_sts_key and rgw_s3_auth_use_sts options. Note The S3 and STS APIs co-exist in the same namespace, and both can be accessed from the same endpoint in the Ceph Object Gateway. Prerequisites A running Red Hat Ceph Storage cluster. A running Ceph Object Gateway. Root-level access to a Ceph Manager node. Procedure Set the following configuration options for the Ceph Object Gateway client: Syntax The rgw_sts_key is the STS key for encrypting or decrypting the session token and is exactly 16 hex characters. Important The STS key needs to be alphanumeric. Example Restart the Ceph Object Gateway for the added key to take effect. Note Use the output from the ceph orch ps command, under the NAME column, to get the SERVICE_TYPE . ID information. To restart the Ceph Object Gateway on an individual node in the storage cluster: Syntax Example To restart the Ceph Object Gateways on all nodes in the storage cluster: Syntax Example Additional Resources See the Secure Token Service application programming interfaces section in the Red Hat Ceph Storage Developer Guide for more details on the STS APIs. See the The basics of Ceph configuration chapter in the Red Hat Ceph Storage Configuration Guide for more details on using the Ceph configuration database. 3.2.8.3. Creating a user for an OpenID Connect provider To establish trust between the Ceph Object Gateway and the OpenID Connect Provider, create a user entity and a role trust policy. Prerequisites User-level access to the Ceph Object Gateway node. Secure Token Service configured. Procedure Create a new Ceph user: Syntax Example Configure the Ceph user capabilities: Syntax Example Add a condition to the role trust policy using the Secure Token Service (STS) API: Syntax Important The app_id in the syntax example above must match the AUD_FIELD field of the incoming token. Additional Resources See the Obtaining the Root CA Thumbprint for an OpenID Connect Identity Provider article on Amazon's website. See the Secure Token Service application programming interfaces section in the Red Hat Ceph Storage Developer Guide for more details on the STS APIs. See the Examples using the Secure Token Service APIs section of the Red Hat Ceph Storage Developer Guide for more details. 3.2.8.4. Obtaining a thumbprint of an OpenID Connect provider Get the OpenID Connect provider's (IDP) configuration document. Any SSO that follows the OIDC protocol standards is expected to work with the Ceph Object Gateway. Red Hat has tested with the following SSO providers: Red Hat Single Sign-On Keycloak Prerequisites Installation of the openssl and curl packages. Procedure Get the configuration document from the IDP's URL: Syntax Example Get the IDP certificate: Syntax Example Note The x5c certificate can be available at the /certs path or at the /jwks path, depending on the SSO provider. Copy the result of the "x5c" response from the command and paste it into the certificate.crt file. Include -----BEGIN CERTIFICATE----- at the beginning and -----END CERTIFICATE----- at the end. Example Get the certificate thumbprint: Syntax Example Remove all the colons from the SHA1 fingerprint and use this as the input for creating the IDP entity in the IAM request.
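For environments where scripting is preferred over the openssl and curl commands referenced above, the same thumbprint can be computed with a short Python sketch. The issuer URL is a placeholder, the requests library is a third-party dependency, and, as noted above, the exact path that exposes the x5c certificate can differ between SSO providers.

```python
import base64
import hashlib

import requests  # third-party dependency: pip install requests

# Placeholder: the issuer URL of your OIDC provider (for example, a Keycloak realm).
ISSUER = "https://idp.example.com/realms/ceph"

# 1. Get the configuration document from the IDP's URL.
config = requests.get(f"{ISSUER}/.well-known/openid-configuration").json()

# 2. Get the IDP certificate. The x5c entry may be exposed under /certs or /jwks
#    depending on the SSO provider, so follow the advertised jwks_uri; the first
#    key is used here for brevity, but pick the signing key your provider uses.
jwks = requests.get(config["jwks_uri"]).json()
x5c_b64 = jwks["keys"][0]["x5c"][0]  # base64-encoded DER certificate

# 3. The thumbprint is the SHA1 digest of the DER-encoded certificate, which is
#    the openssl fingerprint with the colons removed.
der = base64.b64decode(x5c_b64)
print(hashlib.sha1(der).hexdigest().upper())
```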
Additional Resources See the Obtaining the Root CA Thumbprint for an OpenID Connect Identity Provider article on Amazon's website. See the Secure Token Service application programming interfaces section in the Red Hat Ceph Storage Developer Guide for more details on the STS APIs. See the Examples using the Secure Token Service APIs section of the Red Hat Ceph Storage Developer Guide for more details. 3.2.8.5. Registering the OpenID Connect provider Register the OpenID Connect provider's (IDP) configuration document. Prerequisites Installation of the openssl and curl packages. Secure Token Service configured. User created for an OIDC provider. Thumbprint of an OIDC obtained. Procedure Extract URL from the token. Example Register the OIDC provider with Ceph Object Gateway. Example Verify that the OIDC provider is added to the Ceph Object Gateway. Example 3.2.8.6. Creating IAM roles and policies Create IAM roles and policies. Prerequisites Installation of the openssl and curl packages. Secure Token Service configured. User created for an OIDC provider. Thumbprint of an OIDC obtained. The OIDC provider in Ceph Object Gateway registered. Procedure Retrieve and validate JWT token. Example Verify the token. Example In this example, the jq filter is used by the subfield in the token and is set to ceph. Create a JSON file with role properties. Set Statement to Allow and the Action as AssumeRoleWithWebIdentity . Allow access to any user with the JWT token that matches the condition with sub:ceph . Example Create a Ceph Object Gateway role using the JSON file. Example . 3.2.8.7. Accessing S3 resources Verify the Assume Role with STS credentials to access S3 resources. Prerequisites Installation of the openssl and curl packages. Secure Token Service configured. User created for an OIDC provider. Thumbprint of an OIDC obtained. The OIDC provider in Ceph Object Gateway registered. IAM roles and policies created Procedure Following is an example of assume Role with STS to get temporary access and secret key to access S3 resources. Run the script. Example 3.2.9. Configuring and using STS Lite with Keystone (Technology Preview) The Amazon Secure Token Service (STS) and S3 APIs co-exist in the same namespace. The STS options can be configured in conjunction with the Keystone options. Note Both S3 and STS APIs can be accessed using the same endpoint in Ceph Object Gateway. Prerequisites Red Hat Ceph Storage 5.0 or higher. A running Ceph Object Gateway. Installation of the Boto Python module, version 3 or higher. Root-level access to a Ceph Manager node. User-level access to an OpenStack node. Procedure Set the following configuration options for the Ceph Object Gateway client: Syntax The rgw_sts_key is the STS key for encrypting or decrypting the session token and is exactly 16 hex characters. Important The STS key needs to be alphanumeric. Example Generate the EC2 credentials on the OpenStack node: Example Use the generated credentials to get back a set of temporary security credentials using GetSessionToken API: Example Obtaining the temporary credentials can be used for making S3 calls: Example Create a new S3Access role and configure a policy. Assign a user with administrative CAPS: Syntax Example Create the S3Access role: Syntax Example Attach a permission policy to the S3Access role: Syntax Example Now another user can assume the role of the gwadmin user. For example, the gwuser user can assume the permissions of the gwadmin user. Make a note of the assuming user's access_key and secret_key values. 
Example Use the AssumeRole API call, providing the access_key and secret_key values from the assuming user: Example Important The AssumeRole API requires the S3Access role. Additional Resources See the Test S3 Access section in the Red Hat Ceph Storage Object Gateway Guide for more information on installing the Boto Python module. See the Create a User section in the Red Hat Ceph Storage Object Gateway Guide for more information. 3.2.10. Working around the limitations of using STS Lite with Keystone (Technology Preview) A limitation with Keystone is that it does not support Secure Token Service (STS) requests. Another limitation is that the payload hash is not included with the request. To work around these two limitations, the Boto authentication code must be modified. Prerequisites A running Red Hat Ceph Storage cluster, version 5.0 or higher. A running Ceph Object Gateway. Installation of the Boto Python module, version 3 or higher. Procedure Open and edit Boto's auth.py file. Add the following four lines to the code block: class SigV4Auth(BaseSigner): """ Sign a request with Signature V4. """ REQUIRES_REGION = True def __init__(self, credentials, service_name, region_name): self.credentials = credentials # We initialize these value here so the unit tests can have # valid values. But these will get overriden in ``add_auth`` # later for real requests. self._region_name = region_name if service_name == 'sts': 1 self._service_name = 's3' 2 else: 3 self._service_name = service_name 4 Add the following two lines to the code block: def _modify_request_before_signing(self, request): if 'Authorization' in request.headers: del request.headers['Authorization'] self._set_necessary_date_headers(request) if self.credentials.token: if 'X-Amz-Security-Token' in request.headers: del request.headers['X-Amz-Security-Token'] request.headers['X-Amz-Security-Token'] = self.credentials.token if not request.context.get('payload_signing_enabled', True): if 'X-Amz-Content-SHA256' in request.headers: del request.headers['X-Amz-Content-SHA256'] request.headers['X-Amz-Content-SHA256'] = UNSIGNED_PAYLOAD 1 else: 2 request.headers['X-Amz-Content-SHA256'] = self.payload(request) Additional Resources See the Test S3 Access section in the Red Hat Ceph Storage Object Gateway Guide for more information on installing the Boto Python module. 3.3. S3 bucket operations As a developer, you can perform bucket operations with the Amazon S3 application programming interface (API) through the Ceph Object Gateway. The following table lists the Amazon S3 functional operations for buckets, along with the function's support status. Table 3.2. Bucket operations Feature Status Notes List Buckets Supported Create a Bucket Supported Different set of canned ACLs. Put Bucket Website Supported Get Bucket Website Supported Delete Bucket Website Supported Put Bucket replication Supported Get Bucket replication Supported Delete Bucket replication Supported Bucket Lifecycle Partially Supported Expiration , NoncurrentVersionExpiration and AbortIncompleteMultipartUpload supported. Put Bucket Lifecycle Partially Supported Expiration , NoncurrentVersionExpiration and AbortIncompleteMultipartUpload supported.
Delete Bucket Lifecycle Supported Get Bucket Objects Supported Bucket Location Supported Get Bucket Version Supported Put Bucket Version Supported Delete Bucket Supported Get Bucket ACLs Supported Different set of canned ACLs Put Bucket ACLs Supported Different set of canned ACLs Get Bucket cors Supported Put Bucket cors Supported Delete Bucket cors Supported List Bucket Object Versions Supported Head Bucket Supported List Bucket Multipart Uploads Supported Bucket Policies Partially Supported Get a Bucket Request Payment Supported Put a Bucket Request Payment Supported Multi-tenant Bucket Operations Supported GET PublicAccessBlock Supported PUT PublicAccessBlock Supported Delete PublicAccessBlock Supported Prerequisites A running Red Hat Ceph Storage cluster. A RESTful client. 3.3.1. S3 create bucket notifications Create bucket notifications at the bucket level. The notification configuration has the Red Hat Ceph Storage Object Gateway S3 events, ObjectCreated , ObjectRemoved , and ObjectLifecycle:Expiration . These need to be published and the destination to send the bucket notifications. Bucket notifications are S3 operations. To create a bucket notification for s3:objectCreate , s3:objectRemove and s3:ObjectLifecycle:Expiration events, use PUT: Example Important Red Hat supports ObjectCreate events, such as put , post , multipartUpload , and copy . Red Hat also supports ObjectRemove events, such as object_delete and s3_multi_object_delete . Request Entities NotificationConfiguration Description list of TopicConfiguration entities. Type Container Required Yes TopicConfiguration Description Id , Topic , and list of Event entities. Type Container Required Yes id Description Name of the notification. Type String Required Yes Topic Description Topic Amazon Resource Name(ARN) Note The topic must be created beforehand. Type String Required Yes Event Description List of supported events. Multiple event entities can be used. If omitted, all events are handled. Type String Required No Filter Description S3Key , S3Metadata and S3Tags entities. Type Container Required No S3Key Description A list of FilterRule entities, for filtering based on the object key. At most, 3 entities may be in the list, for example Name would be prefix , suffix , or regex . All filter rules in the list must match for the filter to match. Type Container Required No S3Metadata Description A list of FilterRule entities, for filtering based on object metadata. All filter rules in the list must match the metadata defined on the object. However, the object still matches if it has other metadata entries not listed in the filter. Type Container Required No S3Tags Description A list of FilterRule entities, for filtering based on object tags. All filter rules in the list must match the tags defined on the object. However, the object still matches if it has other tags not listed in the filter. Type Container Required No S3Key.FilterRule Description Name and Value entities. Name is : prefix , suffix , or regex . The Value would hold the key prefix, key suffix, or a regular expression for matching the key, accordingly. Type Container Required Yes S3Metadata.FilterRule Description Name and Value entities. Name is the name of the metadata attribute for example x-amz-meta-xxx . The value is the expected value for this attribute. Type Container Required Yes S3Tags.FilterRule Description Name and Value entities. Name is the tag key, and the value is the tag value. 
Type Container Required Yes HTTP response 400 Status Code MalformedXML Description The XML is not well-formed. 400 Status Code InvalidArgument Description Missing Id or missing or invalid topic ARN or invalid event. 404 Status Code NoSuchBucket Description The bucket does not exist. 404 Status Code NoSuchKey Description The topic does not exist. 3.3.2. S3 get bucket notifications Get a specific notification or list all the notifications configured on a bucket. Syntax Example Example Response Note The notification subresource returns the bucket notification configuration or an empty NotificationConfiguration element. The caller must be the bucket owner. Request Entities notification-id Description Name of the notification. All notifications are listed if the ID is not provided. Type String NotificationConfiguration Description list of TopicConfiguration entities. Type Container Required Yes TopicConfiguration Description Id , Topic , and list of Event entities. Type Container Required Yes id Description Name of the notification. Type String Required Yes Topic Description Topic Amazon Resource Name(ARN) Note The topic must be created beforehand. Type String Required Yes Event Description Handled event. Multiple event entities may exist. Type String Required Yes Filter Description The filters for the specified configuration. Type Container Required No HTTP response 404 Status Code NoSuchBucket Description The bucket does not exist. 404 Status Code NoSuchKey Description The notification does not exist if it has been provided. 3.3.3. S3 delete bucket notifications Delete a specific or all notifications from a bucket. Note Notification deletion is an extension to the S3 notification API. Any defined notifications on a bucket are deleted when the bucket is deleted. Deleting an unknown notification for example double delete , is not considered an error. To delete a specific or all notifications use DELETE: Syntax Example Request Entities notification-id Description Name of the notification. All notifications on the bucket are deleted if the notification ID is not provided. Type String HTTP response 404 Status Code NoSuchBucket Description The bucket does not exist. 3.3.4. Accessing bucket host names There are two different modes of accessing the buckets. The first, and preferred method identifies the bucket as the top-level directory in the URI. Example The second method identifies the bucket via a virtual bucket host name. Example Tip Red Hat prefers the first method, because the second method requires expensive domain certification and DNS wild cards. 3.3.5. S3 list buckets GET / returns a list of buckets created by the user making the request. GET / only returns buckets created by an authenticated user. You cannot make an anonymous request. Syntax Response Entities Buckets Description Container for list of buckets. Type Container Bucket Description Container for bucket information. Type Container Name Description Bucket name. Type String CreationDate Description UTC time when the bucket was created. Type Date ListAllMyBucketsResult Description A container for the result. Type Container Owner Description A container for the bucket owner's ID and DisplayName . Type Container ID Description The bucket owner's ID. Type String DisplayName Description The bucket owner's display name. Type String 3.3.6. S3 return a list of bucket objects Returns a list of bucket objects. Syntax Parameters prefix Description Only returns objects that contain the specified prefix. 
Type String delimiter Description The delimiter between the prefix and the rest of the object name. Type String marker Description A beginning index for the list of objects returned. Type String max-keys Description The maximum number of keys to return. Default is 1000. Type Integer HTTP Response 200 Status Code OK Description Buckets retrieved. GET / BUCKET returns a container for buckets with the following fields: Bucket Response Entities ListBucketResult Description The container for the list of objects. Type Entity Name Description The name of the bucket whose contents will be returned. Type String Prefix Description A prefix for the object keys. Type String Marker Description A beginning index for the list of objects returned. Type String MaxKeys Description The maximum number of keys returned. Type Integer Delimiter Description If set, objects with the same prefix will appear in the CommonPrefixes list. Type String IsTruncated Description If true , only a subset of the bucket's contents were returned. Type Boolean CommonPrefixes Description If multiple objects contain the same prefix, they will appear in this list. Type Container The ListBucketResult contains objects, where each object is within a Contents container. Object Response Entities Contents Description A container for the object. Type Object Key Description The object's key. Type String LastModified Description The object's last-modified date and time. Type Date ETag Description An MD-5 hash of the object. Etag is an entity tag. Type String Size Description The object's size. Type Integer StorageClass Description Should always return STANDARD . Type String 3.3.7. S3 create a new bucket Creates a new bucket. To create a bucket, you must have a user ID and a valid AWS Access Key ID to authenticate requests. You can not create buckets as an anonymous user. Constraints In general, bucket names should follow domain name constraints. Bucket names must be unique. Bucket names cannot be formatted as IP address. Bucket names can be between 3 and 63 characters long. Bucket names must not contain uppercase characters or underscores. Bucket names must start with a lowercase letter or number. Bucket names can contain a dash (-). Bucket names must be a series of one or more labels. Adjacent labels are separated by a single period (.). Bucket names can contain lowercase letters, numbers, and hyphens. Each label must start and end with a lowercase letter or a number. Note The above constraints are relaxed if rgw_relaxed_s3_bucket_names is set to true . The bucket names must still be unique, cannot be formatted as IP address, and can contain letters, numbers, periods, dashes, and underscores of up to 255 characters long. Syntax Parameters x-amz-acl Description Canned ACLs. Valid Values private , public-read , public-read-write , authenticated-read Required No HTTP Response If the bucket name is unique, within constraints, and unused, the operation will succeed. If a bucket with the same name already exists and the user is the bucket owner, the operation will succeed. If the bucket name is already in use, the operation will fail. 409 Status Code BucketAlreadyExists Description Bucket already exists under different user's ownership. 3.3.8. S3 put bucket website The put bucket website API sets the configuration of the website that is specified in the website subresource. To configure a bucket as a website, the website subresource can be added on the bucket. Note Put operation requires S3:PutBucketWebsite permission. 
By default, only the bucket owner can configure the website attached to a bucket. Syntax Example Additional Resources For more information about this API call, see S3 API . 3.3.9. S3 get bucket website The get bucket website API retrieves the configuration of the website that is specified in the website subresource. Note Get operation requires the S3:GetBucketWebsite permission. By default, only the bucket owner can read the bucket website configuration. Syntax Example Additional Resources For more information about this API call, see S3 API . 3.3.10. S3 delete bucket website The delete bucket website API removes the website configuration for a bucket. Syntax Example Additional Resources For more information about this API call, see S3 API . 3.3.11. S3 put bucket replication The put bucket replication API configures replication configuration for a bucket or replaces an existing one. Syntax Example 3.3.12. S3 get bucket replication The get bucket replication API returns the replication configuration of a bucket. Syntax Example 3.3.13. S3 delete bucket replication The delete bucket replication API deletes the replication configuration from a bucket. Syntax Example 3.3.14. S3 delete a bucket Deletes a bucket. You can reuse bucket names following a successful bucket removal. Syntax HTTP Response 204 Status Code No Content Description Bucket removed. 3.3.15. S3 bucket lifecycle You can use a bucket lifecycle configuration to manage your objects so they are stored effectively throughout their lifetime. The S3 API in the Ceph Object Gateway supports a subset of the AWS bucket lifecycle actions: Expiration : This defines the lifespan of objects within a bucket. It takes the number of days the object should live or expiration date, at which point Ceph Object Gateway will delete the object. If the bucket doesn't enable versioning, Ceph Object Gateway will delete the object permanently. If the bucket enables versioning, Ceph Object Gateway will create a delete marker for the current version, and then delete the current version. NoncurrentVersionExpiration : This defines the lifespan of noncurrent object versions within a bucket. To use this feature, you must enable bucket versioning. It takes the number of days a noncurrent object should live, at which point Ceph Object Gateway will delete the noncurrent object. NewerNoncurrentVersions : Specifies how many noncurrent object versions to retain. You can specify up to 100 noncurrent versions to retain. If the specified number to retain is more than 100, additional noncurrent versions are deleted. AbortIncompleteMultipartUpload : This defines the number of days an incomplete multipart upload should live before it is aborted. BlockPublicPolicy reject : This action is for public access block. It calls PUT access point policy and PUT bucket policy that are made through the access point if the specified policy (for either the access point or the underlying bucket) allows public access. The Amazon S3 Block Public Access feature is available in Red Hat Ceph Storage 5.x/ Ceph Pacific versions. It provides settings for access points, buckets, and accounts to help you manage public access to Amazon S3 resources. By default, new buckets, access points, and objects do not allow public access. However, you can modify bucket policies, access point policies, or object permissions to allow public access. S3 Block Public Access settings override these policies and permissions so that you can limit public access to these resources. 
The lifecycle configuration contains one or more rules using the <Rule> element. Example A lifecycle rule can apply to all or a subset of objects in a bucket based on the <Filter> element that you specify in the lifecycle rule. You can specify a filter in several ways: Key prefixes Object tags Both key prefix and one or more object tags Key prefixes You can apply a lifecycle rule to a subset of objects based on the key name prefix. For example, specifying <keypre/> would apply to objects that begin with keypre/ : You can also apply different lifecycle rules to objects with different key prefixes: Object tags You can apply a lifecycle rule to only objects with a specific tag using the <Key> and <Value> elements: Both prefix and one or more tags In a lifecycle rule, you can specify a filter based on both the key prefix and one or more tags. They must be wrapped in the <And> element. A filter can have only one prefix, and zero or more tags: Additional Resources See the S3 GET bucket lifecycle section in the Red Hat Ceph Storage Developer Guide for details on getting a bucket lifecycle. See the S3 create or replace a bucket lifecycle section in the Red Hat Ceph Storage Developer Guide for details on creating a bucket lifecycle. See the S3 delete a bucket lifecycle secton in the Red Hat Ceph Storage Developer Guide for details on deleting a bucket lifecycle. 3.3.16. S3 GET bucket lifecycle To get a bucket lifecycle, use GET and specify a destination bucket. Syntax Request Headers See the S3 common request headers in Appendix B for more information about common request headers. Response The response contains the bucket lifecycle and its elements. 3.3.17. S3 create or replace a bucket lifecycle To create or replace a bucket lifecycle, use PUT and specify a destination bucket and a lifecycle configuration. The Ceph Object Gateway only supports a subset of the S3 lifecycle functionality. Syntax Request Headers content-md5 Description A base64 encoded MD-5 hash of the message Valid Values String No defaults or constraints. Required No Additional Resources See the S3 common request headers section in Appendix B of the Red Hat Ceph Storage Developer Guide for more information on Amazon S3 common request headers. See the S3 bucket lifecycles section of the Red Hat Ceph Storage Developer Guide for more information on Amazon S3 bucket lifecycles. 3.3.18. S3 delete a bucket lifecycle To delete a bucket lifecycle, use DELETE and specify a destination bucket. Syntax Request Headers The request does not contain any special elements. Response The response returns common response status. Additional Resources See the S3 common request headers section in Appendix B of the Red Hat Ceph Storage Developer Guide for more information on Amazon S3 common request headers. See the S3 common response status codes section in Appendix C of Red Hat Ceph Storage Developer Guide for more information on Amazon S3 common response status codes. 3.3.19. S3 get bucket location Retrieves the bucket's zone group. The user needs to be the bucket owner to call this. A bucket can be constrained to a zone group by providing LocationConstraint during a PUT request. Add the location subresource to the bucket resource as shown below. Syntax Response Entities LocationConstraint Description The zone group where bucket resides, an empty string for default zone group. Type String 3.3.20. S3 get bucket versioning Retrieves the versioning state of a bucket. The user needs to be the bucket owner to call this. 
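A hedged boto3 sketch that exercises both the location query from the previous section and this versioning query (the bucket name and endpoint are placeholders):

import boto3

s3 = boto3.client('s3', endpoint_url='http://rgw.example.com:8080')  # placeholder endpoint

# GET ?location - returns the zone group, or an empty string for the default zone group.
location = s3.get_bucket_location(Bucket='my-new-bucket')
print(location.get('LocationConstraint'))

# GET ?versioning - returns no Status value at all if versioning was never configured.
versioning = s3.get_bucket_versioning(Bucket='my-new-bucket')
print(versioning.get('Status', 'unset'))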
Add the versioning subresource to the bucket resource as shown below. Syntax 3.3.21. S3 put bucket versioning This subresource set the versioning state of an existing bucket. The user needs to be the bucket owner to set the versioning state. If the versioning state has never been set on a bucket, then it has no versioning state. Doing a GET versioning request does not return a versioning state value. Setting the bucket versioning state: Enabled : Enables versioning for the objects in the bucket. All objects added to the bucket receive a unique version ID. Suspended : Disables versioning for the objects in the bucket. All objects added to the bucket receive the version ID null. Syntax Example Bucket Request Entities VersioningConfiguration Description A container for the request. Type Container Status Description Sets the versioning state of the bucket. Valid Values: Suspended/Enabled Type String 3.3.22. S3 get bucket access control lists Retrieves the bucket access control list. The user needs to be the bucket owner or to have been granted READ_ACP permission on the bucket. Add the acl subresource to the bucket request as shown below. Syntax Response Entities AccessControlPolicy Description A container for the response. Type Container AccessControlList Description A container for the ACL information. Type Container Owner Description A container for the bucket owner's ID and DisplayName . Type Container ID Description The bucket owner's ID. Type String DisplayName Description The bucket owner's display name. Type String Grant Description A container for Grantee and Permission . Type Container Grantee Description A container for the DisplayName and ID of the user receiving a grant of permission. Type Container Permission Description The permission given to the Grantee bucket. Type String 3.3.23. S3 put bucket Access Control Lists Sets an access control to an existing bucket. The user needs to be the bucket owner or to have been granted WRITE_ACP permission on the bucket. Add the acl subresource to the bucket request as shown below. Syntax Request Entities S3 list multipart uploads AccessControlList Description A container for the ACL information. Type Container Owner Description A container for the bucket owner's ID and DisplayName . Type Container ID Description The bucket owner's ID. Type String DisplayName Description The bucket owner's display name. Type String Grant Description A container for Grantee and Permission . Type Container Grantee Description A container for the DisplayName and ID of the user receiving a grant of permission. Type Container Permission Description The permission given to the Grantee bucket. Type String 3.3.24. S3 get bucket cors Retrieves the cors configuration information set for the bucket. The user needs to be the bucket owner or to have been granted READ_ACP permission on the bucket. Add the cors subresource to the bucket request as shown below. Syntax 3.3.25. S3 put bucket cors Sets the cors configuration for the bucket. The user needs to be the bucket owner or to have been granted READ_ACP permission on the bucket. Add the cors subresource to the bucket request as shown below. Syntax 3.3.26. S3 delete a bucket cors Deletes the cors configuration information set for the bucket. The user needs to be the bucket owner or to have been granted READ_ACP permission on the bucket. Add the cors subresource to the bucket request as shown below. Syntax 3.3.27. S3 list bucket object versions Returns a list of metadata about all the version of objects within a bucket. 
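As a minimal illustration, a boto3 sketch of this listing, with an assumed bucket name and prefix:

import boto3

s3 = boto3.client('s3', endpoint_url='http://rgw.example.com:8080')  # placeholder endpoint

# GET ?versions - list all versions (and delete markers) under a prefix.
resp = s3.list_object_versions(Bucket='my-new-bucket', Prefix='logs/', MaxKeys=100)
for version in resp.get('Versions', []):
    print(version['Key'], version['VersionId'], version['IsLatest'])
for marker in resp.get('DeleteMarkers', []):
    print('delete marker:', marker['Key'], marker['VersionId'])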
Requires READ access to the bucket. Add the versions subresource to the bucket request as shown below. Syntax You can specify parameters for GET / BUCKET ?versions , but none of them are required. Parameters prefix Description Returns in-progress uploads whose keys contain the specified prefix. Type String delimiter Description The delimiter between the prefix and the rest of the object name. Type String key-marker Description The beginning marker for the list of uploads. Type String max-keys Description The maximum number of in-progress uploads. The default is 1000. Type Integer version-id-marker Description Specifies the object version to begin the list. Type String Response Entities KeyMarker Description The key marker specified by the key-marker request parameter, if any. Type String NextKeyMarker Description The key marker to use in a subsequent request if IsTruncated is true . Type String NextUploadIdMarker Description The upload ID marker to use in a subsequent request if IsTruncated is true . Type String IsTruncated Description If true , only a subset of the bucket's upload contents were returned. Type Boolean Size Description The size of the uploaded part. Type Integer DisplayName Description The owner's display name. Type String ID Description The owner's ID. Type String Owner Description A container for the ID and DisplayName of the user who owns the object. Type Container StorageClass Description The method used to store the resulting object. STANDARD or REDUCED_REDUNDANCY Type String Version Description Container for the version information. Type Container versionId Description Version ID of an object. Type String versionIdMarker Description The last version of the key in a truncated response. Type String 3.3.28. S3 head bucket Calls HEAD on a bucket to determine if it exists and if the caller has access permissions. Returns 200 OK if the bucket exists and the caller has permissions; 404 Not Found if the bucket does not exist; and, 403 Forbidden if the bucket exists but the caller does not have access permissions. Syntax 3.3.29. S3 list multipart uploads GET /?uploads returns a list of the current in-progress multipart uploads, that is, the application initiates a multipart upload, but the service hasn't completed all the uploads yet. Syntax You can specify parameters for GET / BUCKET ?uploads , but none of them are required. Parameters prefix Description Returns in-progress uploads whose keys contain the specified prefix. Type String delimiter Description The delimiter between the prefix and the rest of the object name. Type String key-marker Description The beginning marker for the list of uploads. Type String max-keys Description The maximum number of in-progress uploads. The default is 1000. Type Integer max-uploads Description The maximum number of multipart uploads. The range is from 1-1000. The default is 1000. Type Integer version-id-marker Description Ignored if key-marker isn't specified. Specifies the ID of the first upload to list in lexicographical order at or following the ID . Type String Response Entities ListMultipartUploadsResult Description A container for the results. Type Container ListMultipartUploadsResult.Prefix Description The prefix specified by the prefix request parameter, if any. Type String Bucket Description The bucket that will receive the bucket contents. Type String KeyMarker Description The key marker specified by the key-marker request parameter, if any. 
Type String UploadIdMarker Description The marker specified by the upload-id-marker request parameter, if any. Type String NextKeyMarker Description The key marker to use in a subsequent request if IsTruncated is true . Type String NextUploadIdMarker Description The upload ID marker to use in a subsequent request if IsTruncated is true . Type String MaxUploads Description The max uploads specified by the max-uploads request parameter. Type Integer Delimiter Description If set, objects with the same prefix will appear in the CommonPrefixes list. Type String IsTruncated Description If true , only a subset of the bucket's upload contents were returned. Type Boolean Upload Description A container for Key , UploadId , InitiatorOwner , StorageClass , and Initiated elements. Type Container Key Description The key of the object once the multipart upload is complete. Type String UploadId Description The ID that identifies the multipart upload. Type String Initiator Description Contains the ID and DisplayName of the user who initiated the upload. Type Container DisplayName Description The initiator's display name. Type String ID Description The initiator's ID. Type String Owner Description A container for the ID and DisplayName of the user who owns the uploaded object. Type Container StorageClass Description The method used to store the resulting object. STANDARD or REDUCED_REDUNDANCY Type String Initiated Description The date and time the user initiated the upload. Type Date CommonPrefixes Description If multiple objects contain the same prefix, they will appear in this list. Type Container CommonPrefixes.Prefix Description The substring of the key after the prefix as defined by the prefix request parameter. Type String 3.3.30. S3 bucket policies The Ceph Object Gateway supports a subset of the Amazon S3 policy language applied to buckets. Creation and Removal Ceph Object Gateway manages S3 Bucket policies through standard S3 operations rather than using the radosgw-admin CLI tool. Administrators may use the s3cmd command to set or delete a policy. Example Limitations Ceph Object Gateway only supports the following S3 actions: s3:AbortMultipartUpload s3:CreateBucket s3:DeleteBucketPolicy s3:DeleteBucket s3:DeleteBucketWebsite s3:DeleteBucketReplication s3:DeleteReplicationConfiguration s3:DeleteObject s3:DeleteObjectVersion s3:GetBucketAcl s3:GetBucketCORS s3:GetBucketLocation s3:GetBucketPolicy s3:GetBucketRequestPayment s3:GetBucketVersioning s3:GetBucketWebsite s3:GetBucketReplication s3:GetReplicationConfiguration s3:GetLifecycleConfiguration s3:GetObjectAcl s3:GetObject s3:GetObjectTorrent s3:GetObjectVersionAcl s3:GetObjectVersion s3:GetObjectVersionTorrent s3:ListAllMyBuckets s3:ListBucketMultiPartUploads s3:ListBucket s3:ListBucketVersions s3:ListMultipartUploadParts s3:PutBucketAcl s3:PutBucketCORS s3:PutBucketPolicy s3:PutBucketRequestPayment s3:PutBucketVersioning s3:PutBucketWebsite s3:PutBucketReplication s3:PutReplicationConfiguration s3:PutLifecycleConfiguration s3:PutObjectAcl s3:PutObject s3:PutObjectVersionAcl Note Ceph Object Gateway does not support setting policies on users, groups, or roles. The Ceph Object Gateway uses the RGW tenant identifier in place of the Amazon twelve-digit account ID. Ceph Object Gateway administrators who want to use policies between Amazon Web Service (AWS) S3 and Ceph Object Gateway S3 will have to use the Amazon account ID as the tenant ID when creating users. With AWS S3, all tenants share a single namespace. 
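As an illustration of how such a policy is applied with the supported actions listed above, here is a hedged boto3 sketch; the principal, tenant, and bucket names are placeholders, with the tenant identifier standing in for the AWS account ID as just described.

import json
import boto3

s3 = boto3.client('s3', endpoint_url='http://rgw.example.com:8080')  # placeholder endpoint

# Grant read-only access on my-new-bucket to a user in another (placeholder) tenant.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": ["arn:aws:iam::tenant2:user/reader"]},
            "Action": ["s3:ListBucket", "s3:GetObject"],
            "Resource": [
                "arn:aws:s3:::my-new-bucket",
                "arn:aws:s3:::my-new-bucket/*",
            ],
        }
    ],
}

s3.put_bucket_policy(Bucket='my-new-bucket', Policy=json.dumps(policy))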
By contrast, Ceph Object Gateway gives every tenant its own namespace of buckets. At present, Ceph Object Gateway clients trying to access a bucket belonging to another tenant MUST address it as tenant:bucket in the S3 request. In the AWS, a bucket policy can grant access to another account, and that account owner can then grant access to individual users with user permissions. Since Ceph Object Gateway does not yet support user, role, and group permissions, account owners will need to grant access directly to individual users. Important Granting an entire account access to a bucket grants access to ALL users in that account. Bucket policies do NOT support string interpolation. Ceph Object Gateway supports the following condition keys: aws:CurrentTime aws:EpochTime aws:PrincipalType aws:Referer aws:SecureTransport aws:SourceIp aws:UserAgent aws:username Ceph Object Gateway ONLY supports the following condition keys for the ListBucket action: s3:prefix s3:delimiter s3:max-keys Impact on Swift Ceph Object Gateway provides no functionality to set bucket policies under the Swift API. However, bucket policies that are set with the S3 API govern Swift and S3 operations. Ceph Object Gateway matches Swift credentials against principals that are specified in a policy. 3.3.31. S3 get the request payment configuration on a bucket Uses the requestPayment subresource to return the request payment configuration of a bucket. The user needs to be the bucket owner or to have been granted READ_ACP permission on the bucket. Add the requestPayment subresource to the bucket request as shown below. Syntax 3.3.32. S3 set the request payment configuration on a bucket Uses the requestPayment subresource to set the request payment configuration of a bucket. By default, the bucket owner pays for downloads from the bucket. This configuration parameter enables the bucket owner to specify that the person requesting the download will be charged for the request and the data download from the bucket. Add the requestPayment subresource to the bucket request as shown below. Syntax Request Entities Payer Description Specifies who pays for the download and request fees. Type Enum RequestPaymentConfiguration Description A container for Payer . Type Container 3.3.33. Multi-tenant bucket operations When a client application accesses buckets, it always operates with the credentials of a particular user. In Red Hat Ceph Storage cluster, every user belongs to a tenant. Consequently, every bucket operation has an implicit tenant in its context if no tenant is specified explicitly. Thus multi-tenancy is completely backward compatible with releases, as long as the referred buckets and referring user belong to the same tenant. Extensions employed to specify an explicit tenant differ according to the protocol and authentication system used. In the following example, a colon character separates tenant and bucket. Thus a sample URL would be: By contrast, a simple Python example separates the tenant and bucket in the bucket method itself: Example Note It's not possible to use S3-style subdomains using multi-tenancy, since host names cannot contain colons or any other separators that are not already valid in bucket names. Using a period creates an ambiguous syntax. Therefore, the bucket-in-URL-path format has to be used with multi-tenancy. Additional Resources See the Multi Tenancy section under User Management in the Red Hat Ceph Storage Object Gateway Guide for additional details. 3.3.34. 
S3 Block Public Access You can use the S3 Block Public Access feature to set buckets and users to help you manage public access to Red Hat Ceph Storage object storage S3 resources. Using this feature, bucket policies, access point policies, and object permissions can be overridden to allow public access. By default, new buckets, access points, and objects do not allow public access. The S3 API in the Ceph Object Gateway supports a subset of the AWS public access settings: BlockPublicPolicy : This defines the setting to allow users to manage access point and bucket policies. This setting does not allow the users to publicly share the bucket or the objects it contains. Existing access point and bucket policies are not affected by enabling this setting. Setting this option to TRUE causes the S3: To reject calls to PUT Bucket policy. To reject calls to PUT access point policy for all of the bucket's same-account access points. Important Apply this setting at the user level so that users cannot alter a specific bucket's block public access setting. Note The TRUE setting only works if the specified policy allows public access. RestrictPublicBuckets : This defines the setting to restrict access to a bucket or access point with public policy. The restriction applies to only AWS service principals and authorized users within the bucket owner's account and access point owner's account. This blocks cross-account access to the access point or bucket, except for the cases specified, while still allowing users within the account to manage the access points or buckets. Enabling this setting does not affect existing access point or bucket policies. It only defines that Amazon S3 blocks public and cross-account access derived from any public access point or bucket policy, including non-public delegation to specific accounts. Note Access control lists (ACLs) are not currently supported by Red Hat Ceph Storage. Bucket policies are assumed to be public unless defined otherwise. To block public access a bucket policy must give access only to fixed values for one or more of the following: Note A fixed value does not contain a wildcard ( * ) or an AWS Identity and Access Management Policy Variable. An AWS principal, user, role, or service principal A set of Classless Inter-Domain Routings (CIDRs), using aws:SourceIp aws:SourceArn aws:SourceVpc aws:SourceVpce aws:SourceOwner aws:SourceAccount s3:x-amz-server-side-encryption-aws-kms-key-id aws:userid , outside the pattern AROLEID:* s3:DataAccessPointArn Note When used in a bucket policy, this value can contain a wildcard for the access point name without rendering the policy public, as long as the account ID is fixed. s3:DataAccessPointPointAccount The following example policy is considered public. Example To make a policy non-public, include any of the condition keys with a fixed value. Example Additional Resources See the S3 GET `PublicAccessBlock` section in the Red Hat Ceph Storage Developer Guide for details on getting a PublicAccessBlock. See the S3 PUT `PublicAccessBlock` section in the Red Hat Ceph Storage Developer Guide for details on creating or modifying a PublicAccessBlock. See the S3 Delete `PublicAccessBlock` section in the Red Hat Ceph Storage Developer Guide for details on deleting a PublicAccessBlock. See the S3 bucket policies section in the Red Hat Ceph Storage Developer Guide for details on bucket policies. See the Blocking public access to your Amazon S3 storage section of Amazon Simple Storage Service (S3) documentation. 3.3.35. 
S3 GET PublicAccessBlock To get the S3 Block Public Access feature configured, use GET and specify a destination AWS account. Syntax Request Headers See the S3 common request headers in Appendix B for more information about common request headers. Response The response is an HTTP 200 response and is returned in XML format. 3.3.36. S3 PUT PublicAccessBlock Use this to create or modify the PublicAccessBlock configuration for an S3 bucket. To use this operation, you must have the s3:PutBucketPublicAccessBlock permission. Important If the PublicAccessBlock configuration is different between the bucket and the account, Amazon S3 uses the most restrictive combination of the bucket-level and account-level settings. Syntax Request Headers See the S3 common request headers in Appendix B for more information about common request headers. Response The response is an HTTP 200 response and is returned with an empty HTTP body. 3.3.37. S3 delete PublicAccessBlock Use this to delete the PublicAccessBlock configuration for an S3 bucket. Syntax Request Headers See the S3 common request headers in Appendix B for more information about common request headers. Response The response is an HTTP 200 response and is returned with an empty HTTP body. 3.4. S3 object operations As a developer, you can perform object operations with the Amazon S3 application programming interface (API) through the Ceph Object Gateway. The following table list the Amazon S3 functional operations for objects, along with the function's support status. Table 3.3. Object operations Feature Status Get Object Supported Head object Supported Put Object Lock Supported Get Object Lock Supported Put Object Legal Hold Supported Get Object Legal Hold Supported Put Object Retention Supported Get Object Retention Supported Put Object Tagging Supported Get Object Tagging Supported Delete Object Tagging Supported Put Object Supported Delete Object Supported Delete Multiple Objects Supported Get Object ACLs Supported Put Object ACLs Supported Copy Object Supported Post Object Supported Options Object Supported Initiate Multipart Upload Supported Add a Part to a Multipart Upload Supported List Parts of a Multipart Upload Supported Assemble Multipart Upload Supported Copy Multipart Upload Supported Abort Multipart Upload Supported Multi-Tenancy Supported Prerequisites A running Red Hat Ceph Storage cluster. A RESTful client. 3.4.1. S3 get an object from a bucket Retrieves an object from a bucket: Syntax Add the versionId subresource to retrieve a particular version of the object: Syntax Request Headers partNumber Description Part number of the object being read. This enables a ranged GET request for the specified part. Using this request is useful for downloading just a part of an object. Valid Values A positive integer between 1 and 10,000. Required No range Description The range of the object to retrieve. Note Multiple ranges of data per GET request are not supported. Valid Values Range:bytes=beginbyte-endbyte Required No if-modified-since Description Gets only if modified since the timestamp. Valid Values Timestamp Required No if-unmodified-since Description Gets only if not modified since the timestamp. Valid Values Timestamp Required No if-match Description Gets only if object ETag matches ETag. Valid Values Entity Tag Required No if-none-match Description Gets only if object ETag does not match ETag. 
Valid Values Entity Tag Required No Sytnax with request headers Response Headers Content-Range Description Data range, will only be returned if the range header field was specified in the request. x-amz-version-id Description Returns the version ID or null. x-rgw-replicated-from Description Returns the source zone and any intermediate zones involved in an object's replication path within a Ceph multi-zone environment. This header is included in GetObject and HeadObject responses. x-rgw-replicated-at Description Returns a timestamp indicating when the object was replicated to its current location. You can calculate the duration for replication to complete by using this header with Last-Modified header. Note As of now, x-rgw-replicated-from and x-rgw-replicated-at are supported by client tools like s3cmd or curl verify at the replicated zone. These tools can be used in addition to radosgw-admin command for verification. With radosgw-admin object stat we have a known issue BZ-2312552 of missing header key x-rgw-replicated-from . 3.4.2. S3 get object attributes Use the S3 GetObjectAttributes API to retrieve the metadata of an object without returning the object's data. GetObjectAttributes API combines the functionality of HeadObject and ListParts. It provides all the information returned by these two calls in a single request, streamlining the process and reducing the number of API calls needed. Syntax Example The versionId subresource retrieves a particular version of the object. 3.4.2.1. Request entities Example 3.4.2.2. Get request headers Name Description Type / Valid values Required? Bucket The name of the bucket that contains the object. String Yes Key The object key. String Yes versionId The version ID used to reference a specific version of the object. String No x-amz-max-parts Sets the maximum number of parts to return. String No x-amz-object-attributes Specifies the fields at the root level that you want returned in the response. Fields that you do not specify are not returned. ETag,Checksum,ObjectParts,StorageClass, ObjectSize Yes x-amz-part-number-marker Specifies the part after which listing should begin. Only parts with higher part numbers will be listed. String No 3.4.2.3. Response entities Example 3.4.2.4. Get response headers Name Description last modified The creation date of the object. x-amz-delete-marker Specifies whether the object retrieved was (true) or was not (false) a delete marker. If false, this response header does not appear in the response. x-amz-request-charged If present, indicates that the requester was successfully charged for the request. x-amz-version-id The version ID of the object. GetObjectAttributesOutput TRoot level tag for the GetObjectAttributesOutput parameters. Checksum The checksum or digest of the object. ChecksumCRC32 (string) The base64-encoded, 32-bit CRC-32 checksum of the object. This will only be present if it was uploaded with the object. When you use an API operation on an object that was uploaded using multipart uploads, this value may not be a direct checksum value of the full object. Instead, it's a calculation based on the checksum values of each individual part. For more information about how checksums are calculated with multipart uploads, see Checking object integrity in the Amazon S3 User Guide . ChecksumCRC32C (string) The base64-encoded, 32-bit CRC-32C checksum of the object. This will only be present if it was uploaded with the object. 
When you use an API operation on an object that was uploaded using multipart uploads, this value may not be a direct checksum value of the full object. Instead, it's a calculation based on the checksum values of each individual part. For more information about how checksums are calculated with multipart uploads, see Checking object integrity in the Amazon S3 User Guide . ChecksumSHA1 (string) The base64-encoded, 160-bit SHA-1 digest of the object. This will only be present if it was uploaded with the object. When you use the API operation on an object that was uploaded using multipart uploads, this value may not be a direct checksum value of the full object. Instead, it's a calculation based on the checksum values of each individual part. For more information about how checksums are calculated with multipart uploads, see Checking object integrity in the Amazon S3 User Guide . ChecksumSHA256 (string) The base64-encoded, 256-bit SHA-256 digest of the object. This will only be present if it was uploaded with the object. When you use an API operation on an object that was uploaded using multipart uploads, this value may not be a direct checksum value of the full object. Instead, it's a calculation based on the checksum values of each individual part. For more information about how checksums are calculated with multipart uploads, see Checking object integrity in the Amazon S3 User Guide . ObjectParts The creation date of the object.A collection of parts associated with a multipart upload. ObjectParts (structure) A collection of parts associated with a multipart upload. TotalPartsCount (integer) The total number of parts. PartNumberMarker (integer) The marker for the current part. NextPartNumberMarker (integer) When a list is truncated, this element specifies the last part in the list, as well as the value to use for the PartNumberMarker request parameter in a subsequent request. MaxParts (integer) The maximum number of parts allowed in the response. IsTruncated (boolean) Indicates whether the returned list of parts is truncated. A value of true indicates that the list was truncated. A list can be truncated if the number of parts exceeds the limit returned in the MaxParts element. Parts (list) A container for elements related to a particular part. A response can contain zero or more Parts elements. Note General purpose buckets - For GetObjectAttributes , if a additional checksum (including x-amz-checksum-crc32 , x-amz-checksum-crc32c , x-amz-checksum-sha1 , or x-amz-checksum-sha256 ) isn't applied to the object specified in the request, the response doesn't return Part . Directory buckets - For GetObjectAttributes , no matter whether a additional checksum is applied to the object specified in the request, the response returns Part . (structure) A container for elements related to an individual part. PartNumber (integer) The part number identifying the part. This value is a positive integer between 1 and 10,000. Size (long) The size of the uploaded part in bytes. ChecksumCRC32 (string) This header can be used as a data integrity check to verify that the data received is the same data that was originally sent. This header specifies the base64-encoded, 32-bit CRC-32 checksum of the object. For more information, see Checking object integrity in the Amazon S3 User Guide . ChecksumCRC32C (string) The base64-encoded, 32-bit CRC-32C checksum of the object. This will only be present if it was uploaded with the object. 
When you use an API operation on an object that was uploaded using multipart uploads, this value may not be a direct checksum value of the full object. Instead, it's a calculation based on the checksum values of each individual part. For more information about how checksums are calculated with multipart uploads, see Checking object integrity in the Amazon S3 User Guide . ChecksumSHA1 (string) The base64-encoded, 160-bit SHA-1 digest of the object. This will only be present if it was uploaded with the object. When you use the API operation on an object that was uploaded using multipart uploads, this value may not be a direct checksum value of the full object. Instead, it's a calculation based on the checksum values of each individual part. For more information about how checksums are calculated with multipart uploads, see Checking object integrity in the Amazon S3 User Guide . ChecksumSHA256 (string) The base64-encoded, 256-bit SHA-256 digest of the object. This will only be present if it was uploaded with the object. When you use an API operation on an object that was uploaded using multipart uploads, this value may not be a direct checksum value of the full object. Instead, it is a calculation based on the checksum values of each individual part. For more information about how checksums are calculated with multipart uploads, see Checking object integrity in the Amazon S3 User Guide . ObjectSize The size of the object in bytes. StorageClass Provides the storage class information of the object. Amazon S3 returns this header for all objects except for S3 Standard storage class objects. 3.4.3. Retrieve sync replication Headers of object Returns information about an object. This request will return the same header information as with the Get Object request, but will include the metadata only, not the object data payload. Retrieves the current version of the object: Syntax Add the versionId subresource to retrieve info for a particular version: Syntax Request Headers range Description The range of the object to retrieve. Valid Values Range:bytes=beginbyte-endbyte Required No if-modified-since Description Gets only if modified since the timestamp. Valid Values Timestamp Required No if-match Description Gets only if object ETag matches ETag. Valid Values Entity Tag Required No if-none-match Description Gets only if object ETag matches ETag. Valid Values Entity Tag Required No Response Headers x-amz-version-id Description Returns the version ID or null. x-rgw-replicated-from Description Returns the source zone and any intermediate zones involved in an object's replication path within a Ceph multi-zone environment. This header is included in GetObject and HeadObject responses. x-rgw-replicated-at Description Returns a timestamp indicating when the object was replicated to its current location. You can calculate the duration for replication to complete by using this header with Last-Modified header. Note As of now, x-rgw-replicated-from and x-rgw-replicated-at are supported by client tools like s3cmd or curl verify at the replicated zone. These tools can be used in addition to radosgw-admin command for verification. With radosgw-admin object stat we have a known issue BZ-2312552 of missing header key x-rgw-replicated-from . 3.4.4. S3 put object lock The put object lock API places a lock configuration on the selected bucket. With object lock, you can store objects using a Write-Once-Read-Many (WORM) model. 
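A hedged boto3 sketch of applying a default lock rule, assuming a bucket created with object lock enabled and an illustrative retention period:

import boto3

s3 = boto3.client('s3', endpoint_url='http://rgw.example.com:8080')  # placeholder endpoint

# Object lock can only be configured on buckets created with it enabled.
s3.create_bucket(Bucket='locked-bucket', ObjectLockEnabledForBucket=True)

# PUT ?object-lock - apply a default GOVERNANCE retention of 30 days to new objects.
s3.put_object_lock_configuration(
    Bucket='locked-bucket',
    ObjectLockConfiguration={
        'ObjectLockEnabled': 'Enabled',
        'Rule': {
            'DefaultRetention': {'Mode': 'GOVERNANCE', 'Days': 30},
        },
    },
)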
Object lock ensures an object is not deleted or overwritten, for a fixed amount of time or indefinitely. The rule specified in the object lock configuration is applied by default to every new object placed in the selected bucket. Important Enable the object lock when creating a bucket otherwise, the operation fails. Syntax Example Request Entities ObjectLockConfiguration Description A container for the request. Type Container Required Yes ObjectLockEnabled Description Indicates whether this bucket has an object lock configuration enabled. Type String Required Yes Rule Description The object lock rule in place for the specified bucket. Type Container Required No DefaultRetention Description The default retention period applied to new objects placed in the specified bucket. Type Container Required No Mode Description The default object lock retention mode. Valid values: GOVERNANCE/COMPLIANCE. Type Container Required Yes Days Description The number of days specified for the default retention period. Type Integer Required No Years Description The number of years specified for the default retention period. Type Integer Required No HTTP Response 400 Status Code MalformedXML Description The XML is not well-formed. 409 Status Code InvalidBucketState Description The bucket object lock is not enabled. Additional Resources For more information about this API call, see S3 API . 3.4.5. S3 get object lock The get object lock API retrieves the lock configuration for a bucket. Syntax Example Response Entities ObjectLockConfiguration Description A container for the request. Type Container Required Yes ObjectLockEnabled Description Indicates whether this bucket has an object lock configuration enabled. Type String Required Yes Rule Description The object lock rule is in place for the specified bucket. Type Container Required No DefaultRetention Description The default retention period applied to new objects placed in the specified bucket. Type Container Required No Mode Description The default object lock retention mode. Valid values: GOVERNANCE/COMPLIANCE. Type Container Required Yes Days Description The number of days specified for the default retention period. Type Integer Required No Years Description The number of years specified for the default retention period. Type Integer Required No Additional Resources For more information about this API call, see S3 API . 3.4.6. S3 put object legal hold The put object legal hold API applies a legal hold configuration to the selected object. With a legal hold in place, you cannot overwrite or delete an object version. A legal hold does not have an associated retention period and remains in place until you explicitly remove it. Syntax Example The versionId subresource retrieves a particular version of the object. Request Entities LegalHold Description A container for the request. Type Container Required Yes Status Description Indicates whether the specified object has a legal hold in place. Valid values: ON/OFF Type String Required Yes Additional Resources For more information about this API call, see S3 API . 3.4.7. S3 get object legal hold The get object legal hold API retrieves an object's current legal hold status. Syntax Example The versionId subresource retrieves a particular version of the object. Response Entities LegalHold Description A container for the request. Type Container Required Yes Status Description Indicates whether the specified object has a legal hold in place. 
Valid values: ON/OFF Type String Required Yes Additional Resources For more information about this API call, see S3 API . 3.4.8. S3 put object retention The put object retention API places an object retention configuration on an object. A retention period protects an object version for a fixed amount of time. There are two modes: GOVERNANCE and COMPLIANCE. These two retention modes apply different levels of protection to your objects. Note During this period, your object is Write-Once-Read-Many-protected (WORM-protected) and cannot be overwritten or deleted. Syntax Example The versionId sub-resource retrieves a particular version of the object. Request Entities Retention Description A container for the request. Type Container Required Yes Mode Description Retention mode for the specified object. Valid values: GOVERNANCE, COMPLIANCE. Type String Required Yes RetainUntilDate Description Retention date. Format 2020-01-05T00:00:00.000Z Type Timestamp Required Yes Additional Resources For more information about this API call, see S3 API . 3.4.9. S3 get object retention The get object retention API retrieves an object retention configuration on an object. Syntax Example The versionId subresource retrieves a particular version of the object. Response Entities Retention Description A container for the request. Type Container Required Yes Mode Description Retention mode for the specified object. Valid values: GOVERNANCE/COMPLIANCE Type String Required Yes RetainUntilDate Description Retention date. Format: 2020-01-05T00:00:00.000Z Type Timestamp Required Yes Additional Resources For more information about this API call, see S3 API . 3.4.10. S3 put object tagging The put object tagging API associates tags with an object. A tag is a key-value pair. To put tags of any other version, use the versionId query parameter. You must have permission to perform the s3:PutObjectTagging action. By default, the bucket owner has this permission and can grant this permission to others. Syntax Example Request Entities Tagging Description A container for the request. Type Container Required Yes TagSet Description A collection of a set of tags. Type String Required Yes Additional Resources For more information about this API call, see S3 API . 3.4.11. S3 get object tagging The get object tagging API returns the tag of an object. By default, the GET operation returns information on the current version of an object. Note For a versioned bucket, you can have multiple versions of an object in your bucket. To retrieve tags of any other version, add the versionId query parameter in the request. Syntax Example Additional Resources For more information about this API call, see S3 API . 3.4.12. S3 delete object tagging The delete object tagging API removes the entire tag set from the specified object. You must have permission to perform the s3:DeleteObjectTagging action, to use this operation. Note To delete tags of a specific object version, add the versionId query parameter in the request. Syntax Example Additional Resources For more information about this API call, see S3 API . 3.4.13. S3 add an object to a bucket Adds an object to a bucket. You must have write permissions on the bucket to perform this operation. Syntax Request Headers content-md5 Description A base64 encoded MD-5 hash of the message. Valid Values A string. No defaults or constraints. Required No content-type Description A standard MIME type. Valid Values Any MIME type. Default: binary/octet-stream . Required No x-amz-meta-<... >* Description User metadata. 
Stored with the object. Valid Values A string up to 8kb. No defaults. Required No x-amz-acl Description A canned ACL. Valid Values private , public-read , public-read-write , authenticated-read Required No Response Headers x-amz-version-id Description Returns the version ID or null. 3.4.14. S3 delete an object Removes an object. Requires WRITE permission set on the containing bucket. Deletes an object. If object versioning is on, it creates a marker. Syntax To delete an object when versioning is on, you must specify the versionId subresource and the version of the object to delete. 3.4.15. S3 delete multiple objects This API call deletes multiple objects from a bucket. Syntax 3.4.16. S3 get an object's Access Control List (ACL) Returns the ACL for the current version of the object: Syntax Add the versionId subresource to retrieve the ACL for a particular version: Syntax Response Headers x-amz-version-id Description Returns the version ID or null. Response Entities AccessControlPolicy Description A container for the response. Type Container AccessControlList Description A container for the ACL information. Type Container Owner Description A container for the bucket owner's ID and DisplayName . Type Container ID Description The bucket owner's ID. Type String DisplayName Description The bucket owner's display name. Type String Grant Description A container for Grantee and Permission . Type Container Grantee Description A container for the DisplayName and ID of the user receiving a grant of permission. Type Container Permission Description The permission given to the Grantee bucket. Type String 3.4.17. S3 set an object's Access Control List (ACL) Sets an object ACL for the current version of the object. Syntax Request Entities AccessControlPolicy Description A container for the response. Type Container AccessControlList Description A container for the ACL information. Type Container Owner Description A container for the bucket owner's ID and DisplayName . Type Container ID Description The bucket owner's ID. Type String DisplayName Description The bucket owner's display name. Type String Grant Description A container for Grantee and Permission . Type Container Grantee Description A container for the DisplayName and ID of the user receiving a grant of permission. Type Container Permission Description The permission given to the Grantee bucket. Type String 3.4.18. S3 copy an object To copy an object, use PUT and specify a destination bucket and the object name. Syntax Request Headers x-amz-copy-source Description The source bucket name + object name. Valid Values BUCKET / OBJECT Required Yes x-amz-acl Description A canned ACL. Valid Values private , public-read , public-read-write , authenticated-read Required No x-amz-copy-if-modified-since Description Copies only if modified since the timestamp. Valid Values Timestamp Required No x-amz-copy-if-unmodified-since Description Copies only if unmodified since the timestamp. Valid Values Timestamp Required No x-amz-copy-if-match Description Copies only if object ETag matches ETag. Valid Values Entity Tag Required No x-amz-copy-if-none-match Description Copies only if object ETag matches ETag. Valid Values Entity Tag Required No Response Entities CopyObjectResult Description A container for the response elements. Type Container LastModified Description The last modified date of the source object. Type Date Etag Description The ETag of the new object. Type String 3.4.19. 
S3 add an object to a bucket using HTML forms Adds an object to a bucket using HTML forms. You must have write permissions on the bucket to perform this operation. Syntax 3.4.20. S3 determine options for a request A preflight request to determine if an actual request can be sent with the specific origin, HTTP method, and headers. Syntax 3.4.21. S3 initiate a multipart upload Initiates a multi-part upload process. Returns a UploadId , which you can specify when adding additional parts, listing parts, and completing or abandoning a multi-part upload. Syntax Request Headers content-md5 Description A base64 encoded MD-5 hash of the message. Valid Values A string. No defaults or constraints. Required No content-type Description A standard MIME type. Valid Values Any MIME type. Default: binary/octet-stream Required No x-amz-meta-<... > Description User metadata. Stored with the object. Valid Values A string up to 8kb. No defaults. Required No x-amz-acl Description A canned ACL. Valid Values private , public-read , public-read-write , authenticated-read Required No Response Entities InitiatedMultipartUploadsResult Description A container for the results. Type Container Bucket Description The bucket that will receive the object contents. Type String Key Description The key specified by the key request parameter, if any. Type String UploadId Description The ID specified by the upload-id request parameter identifying the multipart upload, if any. Type String 3.4.22. S3 add a part to a multipart upload Adds a part to a multi-part upload. Specify the uploadId subresource and the upload ID to add a part to a multi-part upload: Syntax The following HTTP response might be returned: HTTP Response 404 Status Code NoSuchUpload Description Specified upload-id does not match any initiated upload on this object. 3.4.23. S3 list the parts of a multipart upload Specify the uploadId subresource and the upload ID to list the parts of a multi-part upload: Syntax Response Entities InitiatedMultipartUploadsResult Description A container for the results. Type Container Bucket Description The bucket that will receive the object contents. Type String Key Description The key specified by the key request parameter, if any. Type String UploadId Description The ID specified by the upload-id request parameter identifying the multipart upload, if any. Type String Initiator Description Contains the ID and DisplayName of the user who initiated the upload. Type Container ID Description The initiator's ID. Type String DisplayName Description The initiator's display name. Type String Owner Description A container for the ID and DisplayName of the user who owns the uploaded object. Type Container StorageClass Description The method used to store the resulting object. STANDARD or REDUCED_REDUNDANCY Type String PartNumberMarker Description The part marker to use in a subsequent request if IsTruncated is true . Precedes the list. Type String NextPartNumberMarker Description The part marker to use in a subsequent request if IsTruncated is true . The end of the list. Type String IsTruncated Description If true , only a subset of the object's upload contents were returned. Type Boolean Part Description A container for Key , Part , InitiatorOwner , StorageClass , and Initiated elements. Type Container PartNumber Description A container for Key , Part , InitiatorOwner , StorageClass , and Initiated elements. Type Integer ETag Description The part's entity tag. Type String Size Description The size of the uploaded part. Type Integer 3.4.24. 
S3 assemble the uploaded parts Assembles uploaded parts and creates a new object, thereby completing a multipart upload. Specify the uploadId subresource and the upload ID to complete a multi-part upload: Syntax Request Entities CompleteMultipartUpload Description A container consisting of one or more parts. Type Container Required Yes Part Description A container for the PartNumber and ETag . Type Container Required Yes PartNumber Description The identifier of the part. Type Integer Required Yes ETag Description The part's entity tag. Type String Required Yes Response Entities CompleteMultipartUploadResult Description A container for the response. Type Container Location Description The resource identifier (path) of the new object. Type URI bucket Description The name of the bucket that contains the new object. Type String Key Description The object's key. Type String ETag Description The entity tag of the new object. Type String 3.4.25. S3 copy a multipart upload Uploads a part by copying data from an existing object as data source. Specify the uploadId subresource and the upload ID to perform a multi-part upload copy: Syntax Request Headers x-amz-copy-source Description The source bucket name and object name. Valid Values BUCKET / OBJECT Required Yes x-amz-copy-source-range Description The range of bytes to copy from the source object. Valid Values Range: bytes=first-last , where the first and last are the zero-based byte offsets to copy. For example, bytes=0-9 indicates that you want to copy the first ten bytes of the source. Required No Response Entities CopyPartResult Description A container for all response elements. Type Container ETag Description Returns the ETag of the new part. Type String LastModified Description Returns the date the part was last modified. Type String Additional Resources For more information about this feature, see the Amazon S3 site . 3.4.26. S3 abort a multipart upload Aborts a multipart upload. Specify the uploadId subresource and the upload ID to abort a multi-part upload: Syntax 3.4.27. S3 Hadoop interoperability For data analytics applications that require Hadoop Distributed File System (HDFS) access, the Ceph Object Gateway can be accessed using the Apache S3A connector for Hadoop. The S3A connector is an open-source tool that presents S3 compatible object storage as an HDFS file system with HDFS file system read and write semantics to the applications while data is stored in the Ceph Object Gateway. Ceph Object Gateway is fully compatible with the S3A connector that ships with Hadoop 2.7.3. Additional Resources See the Red Hat Ceph Storage Object Gateway Guide for details on multi-tenancy. 3.5. S3 select operations As a developer, you can run S3 select to accelerate throughput. Users can run S3 select queries directly without a mediator. There are three S3 select workflow - CSV, Apache Parquet (Parquet), and JSON that provide S3 select operations with CSV, Parquet, and JSON objects: A CSV file stores tabular data in plain text format. Each line of the file is a data record. Parquet is an open source, column-oriented data file format designed for efficient data storage and retrieval. It provides highly efficient data compression and encoding schemes with enhanced performance to handle complex data in bulk. Parquet enables the S3 select-engine to skip columns and chunks, thereby reducing IOPS dramatically (contrary to CSV and JSON format). JSON is a format structure. 
The S3 select engine enables the use of SQL statements on top of the JSON format input data using the JSON reader, enabling the scanning of highly nested and complex JSON formatted data. For example, a CSV, Parquet, or JSON S3 object with several gigabytes of data allows the user to extract a single column which is filtered by another column using the following query: Example Currently, the S3 object must retrieve data from the Ceph OSD through the Ceph Object Gateway before filtering and extracting data. There is improved performance when the object is large and the query is more specific. The Parquet format can be processed more efficiently than CSV. Prerequisites A running Red Hat Ceph Storage cluster. A RESTful client. A S3 user created with user access. 3.5.1. S3 select content from an object The select object content API filters the content of an object through the structured query language (SQL). See the Metadata collected by inventory section in the AWS Systems Manager User Guide for an example of the description of what should reside in the inventory object. The inventory content impacts the type of queries that should be run against that inventory. The number of SQL statements that potentially could provide essential information is large, but S3 select is an SQL-like utility and therefore, some operators are not supported, such as group-by and join . For CSV only, you must specify the data serialization format as comma-separated values of the object to retrieve the specified content. Parquet has no delimiter because it is in binary format. Amazon Web Services (AWS) command-line interface (CLI) select object content uses the CSV or Parquet format to parse object data into records and returns only the records specified in the query. You must specify the data serialization format for the response. You must have s3:GetObject permission for this operation. Note The InputSerialization element describes the format of the data in the object that is being queried. Objects can be in CSV or Parquet format. The OutputSerialization element is part of the AWS-CLI user client and describes how the output data is formatted. Ceph has implemented the server client for AWS-CLI and therefore, provides the same output according to OutputSerialization which currently is CSV only. The format of the InputSerialization does not need to match the format of the OutputSerialization . So, for example, you can specify Parquet in the InputSerialization and CSV in the OutputSerialization . Syntax Example Request entities Bucket Description The bucket to select object content from. Type String Required Yes Key Description The object key. Length Constraints Minimum length of 1. Type String Required Yes SelectObjectContentRequest Description Root level tag for the select object content request parameters. Type String Required Yes Expression Description The expression that is used to query the object. Type String Required Yes ExpressionType Description The type of the provided expression for example SQL. Type String Valid Values SQL Required Yes InputSerialization Description Describes the format of the data in the object that is being queried. Type String Required Yes OutputSerialization Description Format of data returned in comma separator and new-line. Type String Required Yes Response entities If the action is successful, the service sends back HTTP 200 response. Data is returned in XML format by the service: Payload Description Root level tag for the payload parameters. 
Type String Required Yes Records Description The records event. Type Base64-encoded binary data object Required No Stats Description The stats event. Type Long Required No The Ceph Object Gateway supports the following response: Example Syntax (for CSV) Example (for CSV) Syntax (for Parquet) Example (for Parquet) Syntax (for JSON) Example (for JSON) Example (for BOTO3) Supported features Currently, only part of the AWS s3 select command is supported: Features Details Description Example Arithmetic operators ^ * % / + - ( ) select (int(_1)+int(_2))*int(_9) from s3object; Arithmetic operators % modulo select count(*) from s3object where cast(_1 as int)%2 = 0; Arithmetic operators ^ power-of select cast(2^10 as int) from s3object; Compare operators > < >= ⇐ == != select _1,_2 from s3object where (int(_1)+int(_3))>int(_5); logical operator AND OR NOT select count(*) from s3object where not (int(1)>123 and int(_5)<200); logical operator is null Returns true/false for null indication in expression logical operator and NULL is not null Returns true/false for null indication in expression logical operator and NULL unknown state Review null-handle and observe the results of logical operations with NULL. The query returns 0 . select count(*) from s3object where null and (3>2); Arithmetic operator with NULL unknown state Review null-handle and observe the results of binary operations with NULL. The query returns 0 . select count(*) from s3object where (null+1) and (3>2); Compare with NULL unknown state Review null-handle and observe results of compare operations with NULL. The query returns 0 . select count(*) from s3object where (null*1.5) != 3; missing column unknown state select count(*) from s3object where _1 is null; projection column Similar to if or then or else select case when (1+1==(2+1)*3) then 'case_1' when 4*3)==(12 then 'case_2' else 'case_else' end, age*2 from s3object; projection column Similar to switch/case default select case cast(_1 as int) + 1 when 2 then "a" when 3 then "b" else "c" end from s3object; logical operator coalesce returns first non-null argument select coalesce(nullif(5,5),nullif(1,1.0),age+12) from s3object; logical operator nullif returns null in case both arguments are equal, or else the first one, nullif(1,1)=NULL nullif(null,1)=NULL nullif(2,1)=2 select nullif(cast(_1 as int),cast(_2 as int)) from s3object; logical operator {expression} in ( .. {expression} ..) 
select count(*) from s3object where 'ben' in (trim(_5),substring(_1,char_length(_1)-3,3),last_name); logical operator {expression} between {expression} and {expression} select _1 from s3object where cast(_1 as int) between 800 and 900; select count(*) from stdin where substring(_3,char_length(_3),1) between "x" and trim(_1) and substring(_3,char_length(_3)-1,1) = ":"; logical operator {expression} like {match-pattern} select count(*) from s3object where first_name like '%de_'; select count(*) from s3object where _1 like "%a[r-s]"; casting operator select cast(123 as int)%2 from s3object; casting operator select cast(123.456 as float)%2 from s3object; casting operator select cast('ABC0-9' as string),cast(substr('ab12cd',3,2) as int)*4 from s3object; casting operator select cast(substring('publish on 2007-01-01',12,10) as timestamp) from s3object; non AWS casting operator select int(_1),int( 1.2 + 3.4) from s3object; non AWS casting operator select float(1.2) from s3object; non AWS casting operator select to_timestamp('1999-10-10T12:23:44Z') from s3object; Aggregation Function sum select sum(int(_1)) from s3object; Aggregation Function avg select avg(cast(_1 as float) + cast(_2 as int)) from s3object; Aggregation Function min select min(cast(_1 as float) + cast(_2 as int)) from s3object; Aggregation Function max select max(float(_1)),min(int(_5)) from s3object; Aggregation Function count select count(*) from s3object where (int(_1)+int(_3))>int(_5); Timestamp Functions extract select count(*) from s3object where extract(year from to_timestamp(_2)) > 1950 and extract(year from to_timestamp(_1)) < 1960; Timestamp Functions dateadd select count(0) from s3object where date_diff(year,to_timestamp(_1),date_add(day,366,to_timestamp(_1))) = 1; Timestamp Functions datediff select count(0) from s3object where date_diff(month,to_timestamp(_1),to_timestamp(_2)) = 2; Timestamp Functions utcnow select count(0) from s3object where date_diff(hour,utcnow(),date_add(day,1,utcnow())) = 24; Timestamp Functions to_string select to_string( to_timestamp("2009-09-17T17:56:06.234567Z"), "yyyyMMdd-H:m:s") from s3object; String Functions substring select count(0) from s3object where int(substring(_1,1,4))>1950 and int(substring(_1,1,4))<1960; String Functions substring substring with from negative number is valid considered as first select substring("123456789" from -4) from s3object; String Functions substring substring with from zero for out-of-bound number is valid just as (first,last) select substring("123456789" from 0 for 100) from s3object; String Functions trim select trim(' foobar ') from s3object; String Functions trim select trim(trailing from ' foobar ') from s3object; String Functions trim select trim(leading from ' foobar ') from s3object; String Functions trim select trim(both '12' from '1112211foobar22211122') from s3object; String Functions lower or upper select lower('ABcD12#USDe') from s3object; String Functions char_length, character_length select count(*) from s3object where char_length(_3)=3; Complex queries select sum(cast(_1 as int)),max(cast(_3 as int)), substring('abcdefghijklm', (2-1)*3+sum(cast(_1 as int))/sum(cast(_1 as int))+1, (count() + count(0))/count(0)) from s3object; alias support select int(_1) as a1, int(_2) as a2 , (a1+a2) as a3 from s3object where a3>100 and a3<300; Additional Resources See Amazon's S3 Select Object Content API for more details. 3.5.2.
S3 supported select functions S3 select supports the following functions: .Timestamp to_timestamp(string) Description Converts string to timestamp basic type. In the string format, any missing 'time' value is populated with zero; for missing month and day value, 1 is the default value. 'Timezone' is in format +/-HH:mm or Z , where the letter 'Z' indicates Coordinated Universal Time (UTC). Value of timezone can range between - 12:00 and +14:00. Supported Currently it can convert the following string formats into timestamp: YYYY-MM-DDTHH:mm:ss.SSSSSS+/-HH:mm YYYY-MM-DDTHH:mm:ss.SSSSSSZ YYYY-MM-DDTHH:mm:ss+/-HH:mm YYYY-MM-DDTHH:mm:ssZ YYYY-MM-DDTHH:mm+/-HH:mm YYYY-MM-DDTHH:mmZ YYYY-MM-DDT YYYYT to_string(timestamp, format_pattern) Description Returns a string representation of the input timestamp in the given input string format. Parameters Format Example Description yy 69 2-year digit. y 1969 4-year digit. yyyy 1969 Zero-padded 4-digit year. M 1 Month of the year. MM 01 Zero-padded month of the year. MMM Jan Abbreviated month of the year name. MMMM January full month of the year name. MMMMM J Month of the year first letter. Not valid for use with the to_timestamp function. d 2 Day of the month (1-31). dd 02 Zero-padded day of the month (01-31). a AM AM or PM of day. h 3 Hour of the day (1-12). hh 03 Zero-padded hour of day (01-12). H 3 Hour of the day (0-23). HH 03 Zero-padded hour of the day (00-23). m 4 Minute of the hour (0-59). mm 04 Zero-padded minute of the hour (00-59). s 5 Second of the minute (0-59). ss 05 Zero-padded second of the minute (00-59). S 1 Fraction of the second (precision: 0.1, range: 0.0-0.9). SS 12 Fraction of the second (precision: 0.01, range: 0.0-0.99). SSS 123 Fraction of the second (precision: 0.01, range: 0.0-0.999). SSSS 1234 Fraction of the second (precision: 0.001, range: 0.0-0.9999). SSSSSS 123456 Fraction of the second (maximum precision: 1 nanosecond, range: 0.0-0.999999). n 60000000 Nano of second. X +07 or Z Offset in hours or "Z" if the offset is 0. XX or XXXX +0700 or Z Offset in hours and minutes or "Z" if the offset is 0. XXX or XXXXX +07:00 or Z Offset in hours and minutes or "Z" if the offset is 0. x 7 Offset in hours. xx or xxxx 700 Offset in hours and minutes. xxx or xxxxx +07:00 Offset in hours and minutes. extract(date-part from timestamp) Description Returns integer according to date-part extract from input timestamp. Supported year, month, week, day, hour, minute, second, timezone_hour, timezone_minute. date_add(date-part ,integer,timestamp) Description Returns timestamp, a calculation based on the results of input timestamp and date-part. Supported year, month, day, hour, minute, second. date_diff(date-part,timestamp,timestamp) Description Return an integer, a calculated result of the difference between two timestamps according to date-part. Supported year, month, day, hour, minute, second. utcnow() Description Return timestamp of current time. Aggregation count() Description Returns integers based on the number of rows that match a condition if there is one. sum(expression) Description Returns a summary of expression on each row that matches a condition if there is one. avg(expression) Description Returns an average expression on each row that matches a condition if there is one. max(expression) Description Returns the maximal result for all expressions that match a condition if there is one. min(expression) Description Returns the minimal result for all expressions that match a condition if there is one. 
String substring (string,from,for) Description Returns an extract of the input string according to the from and for inputs. Char_length Description Returns the number of characters in a string. Character_length does the same. trim([[leading | trailing | both remove_chars] from] string ) Description Trims leading/trailing (or both) characters from the target string. The default value is a blank character. Upper or lower Description Converts characters into uppercase or lowercase. NULL The NULL value is missing or unknown; that is, NULL cannot produce a value in any arithmetic operation. The same applies to arithmetic comparison: any comparison to NULL is NULL, that is, unknown. Table 3.4. The NULL use case A is NULL Result(NULL=UNKNOWN) Not A NULL A or False NULL A or True True A or A NULL A and False False A and True NULL A and A NULL Additional Resources See Amazon's S3 Select Object Content API for more details. 3.5.3. S3 alias programming construct The alias programming construct is an essential part of the s3 select language because it enables better programming with objects that contain many columns or complex queries. When a statement with an alias construct is parsed, it replaces the alias with a reference to the right projection column and on query execution, the reference is evaluated like any other expression. Alias maintains a result cache; that is, if an alias is used more than once, the same expression is not re-evaluated and the same result is returned because the cached result is used. Currently, Red Hat supports the column alias. Example 3.5.4. S3 parsing explained The S3 select engine has parsers for all three file formats - CSV, Parquet, and JSON - which separate the commands into more processable components, which are then attached to tags that define each component. 3.5.4.1. S3 CSV parsing The CSV definitions with input serialization use these default values: Use {\n} for row-delimiter. Use {"} for quote. Use {\} for escape characters. The csv-header-info is parsed upon USE appearing in the AWS-CLI; this is the first row in the input object containing the schema. Currently, output serialization and compression-type are not supported. The S3 select engine has a CSV parser which parses S3-objects: Each row ends with a row-delimiter. The field-separator separates the adjacent columns. The successive field separator defines the NULL column. The quote-character overrides the field-separator; that is, the field separator is any character between the quotes. The escape character disables any special character except the row delimiter. The following are examples of CSV parsing rules: Table 3.5. CSV parsing Feature Description Input (Tokens) NULL Successive field delimiter ,,1,,2, =⇒ {null}{null}{1}{null}{2}{null} QUOTE The quote character overrides the field delimiter. 11,22,"a,b,c,d",last =⇒ {11}{22}{"a,b,c,d"}{last} Escape The escape character overrides the meta-character. row delimiter There is no closed quote; row delimiter is the closing line. 11,22,a="str,44,55,66 =⇒ {11}{22}{a="str,44,55,66} csv header info FileHeaderInfo tag USE value means each token on the first line is the column-name; IGNORE value means to skip the first line. Additional Resources See Amazon's S3 Select Object Content API for more details. 3.5.4.2. S3 Parquet parsing Apache Parquet is an open-source, columnar data file format designed for efficient data storage and retrieval.
The S3 select engine's Parquet parser parses S3-objects as follows: Example In the above example, there are N columns in this table, split into M row groups. The file metadata contains the locations of all the column metadata start locations. Metadata is written after the data to allow for single pass writing. All the column chunks can be found in the file metadata which should later be read sequentially. The format is explicitly designed to separate the metadata from the data. This allows splitting columns into multiple files, as well as having a single metadata file reference multiple parquet files. 3.5.4.3. S3 JSON parsing JSON document enables nesting values within objects or arrays without limitations. When querying a specific value in a JSON document in the S3 select engine, the location of the value is specified through a path in the SELECT statement. The generic structure of a JSON document does not have a row and column structure like CSV and Parquet. Instead, it is the SQL statement itself that defines the rows and columns when querying a JSON document. The S3 select engine's JSON parser parses S3-objects as follows: The FROM clause in the SELECT statement defines the row boundaries. A row in a JSON document is similar to how the row delimiter is used to define rows for CSV objects, and how row groups are used to define rows for Parquet objects Consider the following example: Example The statement instructs the reader to search for the path aa.bb.cc and defines the row boundaries based on the occurrence of this path. A row begins when the reader encounters the path, and it ends when the reader exits the innermost part of the path, which in this case is the object cc . 3.5.5. Integrating Ceph Object Gateway with Trino Integrate the Ceph Object Gateway with Trino, an important utility that enables the user to run SQL queries 9x faster on S3 objects. Following are some benefits of using Trino: Trino is a complete SQL engine. Pushes down S3 select requests wherein the Trino engine identifies part of the SQL statement that is cost effective to run on the server-side. uses the optimization rules of Ceph/S3select to enhance performance. Leverages Red Hat Ceph Storage scalability and divides the original object into multiple equal parts, performs S3 select requests, and merges the request. Important If the s3select syntax does not work while querying through trino, use the SQL syntax. Prerequisites A running Red Hat Ceph Storage cluster with Ceph Object Gateway installed. Docker or Podman installed. Buckets created. Objects are uploaded. Procedure Deploy Trino and hive. Example Modify the hms_trino.yaml file with S3 endpoint, access key, and secret key. Example Modify the hive.properties file with S3 endpoint, access key, and secret key. Example Start a Trino container to integrate Ceph Object Gateway. Example Verify integration. Example Note The external location must point to the bucket name or a directory, and not the end of a file.
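As a complement to the AWS CLI and BOTO3 examples listed below, the following minimal boto3 sketch issues an S3 select request against a JSON object through the Ceph Object Gateway. It is an illustration of the select_object_content call rather than part of the original procedure; the endpoint URL, credentials, bucket name, and object key are placeholders, and the query reuses the aa.bb.cc path from the JSON parsing example above.

import boto3

# Placeholders: point these at your Ceph Object Gateway endpoint and S3 user.
s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:80",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Select from a JSON object; the FROM path defines the row boundaries.
response = s3.select_object_content(
    Bucket="testbucket",
    Key="testobject.json",
    ExpressionType="SQL",
    Expression="select count(0) from s3object[*].aa.bb.cc;",
    InputSerialization={"JSON": {"Type": "DOCUMENT"}, "CompressionType": "NONE"},
    OutputSerialization={"CSV": {}},
)

# The payload is an event stream; Records events carry the query output.
for event in response["Payload"]:
    if "Records" in event:
        print(event["Records"]["Payload"].decode("utf-8"))
    elif "Stats" in event:
        print(event["Stats"])

The same pattern applies to CSV or Parquet objects by swapping the InputSerialization block, as shown in the AWS CLI examples.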
[ "HTTP/1.1 PUT /buckets/bucket/object.mpeg Host: cname.domain.com Date: Mon, 2 Jan 2012 00:01:01 +0000 Content-Encoding: mpeg Content-Length: 9999999 Authorization: AWS ACCESS_KEY : HASH_OF_HEADER_AND_SECRET", "firewall-cmd --zone=public --add-port=8080/tcp --permanent firewall-cmd --reload", "yum install dnsmasq echo \"address=/. FQDN_OF_GATEWAY_NODE / IP_OF_GATEWAY_NODE \" | tee --append /etc/dnsmasq.conf systemctl start dnsmasq systemctl enable dnsmasq", "systemctl stop NetworkManager systemctl disable NetworkManager", "echo \"DNS1= IP_OF_GATEWAY_NODE \" | tee --append /etc/sysconfig/network-scripts/ifcfg-eth0 echo \" IP_OF_GATEWAY_NODE FQDN_OF_GATEWAY_NODE \" | tee --append /etc/hosts systemctl restart network systemctl enable network systemctl restart dnsmasq", "[user@rgw ~]USD ping mybucket. FQDN_OF_GATEWAY_NODE", "yum install ruby", "gem install aws-s3", "[user@dev ~]USD mkdir ruby_aws_s3 [user@dev ~]USD cd ruby_aws_s3", "[user@dev ~]USD vim conn.rb", "#!/usr/bin/env ruby require 'aws/s3' require 'resolv-replace' AWS::S3::Base.establish_connection!( :server => ' FQDN_OF_GATEWAY_NODE ', :port => '8080', :access_key_id => ' MY_ACCESS_KEY ', :secret_access_key => ' MY_SECRET_KEY ' )", "#!/usr/bin/env ruby require 'aws/s3' require 'resolv-replace' AWS::S3::Base.establish_connection!( :server => 'testclient.englab.pnq.redhat.com', :port => '8080', :access_key_id => '98J4R9P22P5CDL65HKP8', :secret_access_key => '6C+jcaP0dp0+FZfrRNgyGA9EzRy25pURldwje049' )", "[user@dev ~]USD chmod +x conn.rb", "[user@dev ~]USD ./conn.rb | echo USD?", "[user@dev ~]USD vim create_bucket.rb", "#!/usr/bin/env ruby load 'conn.rb' AWS::S3::Bucket.create('my-new-bucket1')", "[user@dev ~]USD chmod +x create_bucket.rb", "[user@dev ~]USD ./create_bucket.rb", "[user@dev ~]USD vim list_owned_buckets.rb", "#!/usr/bin/env ruby load 'conn.rb' AWS::S3::Service.buckets.each do |bucket| puts \"{bucket.name}\\t{bucket.creation_date}\" end", "[user@dev ~]USD chmod +x list_owned_buckets.rb", "[user@dev ~]USD ./list_owned_buckets.rb", "my-new-bucket1 2020-01-21 10:33:19 UTC", "[user@dev ~]USD vim create_object.rb", "#!/usr/bin/env ruby load 'conn.rb' AWS::S3::S3Object.store( 'hello.txt', 'Hello World!', 'my-new-bucket1', :content_type => 'text/plain' )", "[user@dev ~]USD chmod +x create_object.rb", "[user@dev ~]USD ./create_object.rb", "[user@dev ~]USD vim list_bucket_content.rb", "#!/usr/bin/env ruby load 'conn.rb' new_bucket = AWS::S3::Bucket.find('my-new-bucket1') new_bucket.each do |object| puts \"{object.key}\\t{object.about['content-length']}\\t{object.about['last-modified']}\" end", "[user@dev ~]USD chmod +x list_bucket_content.rb", "[user@dev ~]USD ./list_bucket_content.rb", "hello.txt 12 Fri, 22 Jan 2020 15:54:52 GMT", "[user@dev ~]USD vim del_empty_bucket.rb", "#!/usr/bin/env ruby load 'conn.rb' AWS::S3::Bucket.delete('my-new-bucket1')", "[user@dev ~]USD chmod +x del_empty_bucket.rb", "[user@dev ~]USD ./del_empty_bucket.rb | echo USD?", "[user@dev ~]USD vim del_non_empty_bucket.rb", "#!/usr/bin/env ruby load 'conn.rb' AWS::S3::Bucket.delete('my-new-bucket1', :force => true)", "[user@dev ~]USD chmod +x del_non_empty_bucket.rb", "[user@dev ~]USD ./del_non_empty_bucket.rb | echo USD?", "[user@dev ~]USD vim delete_object.rb", "#!/usr/bin/env ruby load 'conn.rb' AWS::S3::S3Object.delete('hello.txt', 'my-new-bucket1')", "[user@dev ~]USD chmod +x delete_object.rb", "[user@dev ~]USD ./delete_object.rb", "yum install ruby", "gem install aws-sdk", "[user@dev ~]USD mkdir ruby_aws_sdk [user@dev ~]USD cd ruby_aws_sdk", "[user@dev 
~]USD vim conn.rb", "#!/usr/bin/env ruby require 'aws-sdk' require 'resolv-replace' Aws.config.update( endpoint: 'http:// FQDN_OF_GATEWAY_NODE :8080', access_key_id: ' MY_ACCESS_KEY ', secret_access_key: ' MY_SECRET_KEY ', force_path_style: true, region: 'us-east-1' )", "#!/usr/bin/env ruby require 'aws-sdk' require 'resolv-replace' Aws.config.update( endpoint: 'http://testclient.englab.pnq.redhat.com:8080', access_key_id: '98J4R9P22P5CDL65HKP8', secret_access_key: '6C+jcaP0dp0+FZfrRNgyGA9EzRy25pURldwje049', force_path_style: true, region: 'us-east-1' )", "[user@dev ~]USD chmod +x conn.rb", "[user@dev ~]USD ./conn.rb | echo USD?", "[user@dev ~]USD vim create_bucket.rb", "#!/usr/bin/env ruby load 'conn.rb' s3_client = Aws::S3::Client.new s3_client.create_bucket(bucket: 'my-new-bucket2')", "[user@dev ~]USD chmod +x create_bucket.rb", "[user@dev ~]USD ./create_bucket.rb", "[user@dev ~]USD vim list_owned_buckets.rb", "#!/usr/bin/env ruby load 'conn.rb' s3_client = Aws::S3::Client.new s3_client.list_buckets.buckets.each do |bucket| puts \"{bucket.name}\\t{bucket.creation_date}\" end", "[user@dev ~]USD chmod +x list_owned_buckets.rb", "[user@dev ~]USD ./list_owned_buckets.rb", "my-new-bucket2 2020-01-21 10:33:19 UTC", "[user@dev ~]USD vim create_object.rb", "#!/usr/bin/env ruby load 'conn.rb' s3_client = Aws::S3::Client.new s3_client.put_object( key: 'hello.txt', body: 'Hello World!', bucket: 'my-new-bucket2', content_type: 'text/plain' )", "[user@dev ~]USD chmod +x create_object.rb", "[user@dev ~]USD ./create_object.rb", "[user@dev ~]USD vim list_bucket_content.rb", "#!/usr/bin/env ruby load 'conn.rb' s3_client = Aws::S3::Client.new s3_client.list_objects(bucket: 'my-new-bucket2').contents.each do |object| puts \"{object.key}\\t{object.size}\" end", "[user@dev ~]USD chmod +x list_bucket_content.rb", "[user@dev ~]USD ./list_bucket_content.rb", "hello.txt 12 Fri, 22 Jan 2020 15:54:52 GMT", "[user@dev ~]USD vim del_empty_bucket.rb", "#!/usr/bin/env ruby load 'conn.rb' s3_client = Aws::S3::Client.new s3_client.delete_bucket(bucket: 'my-new-bucket2')", "[user@dev ~]USD chmod +x del_empty_bucket.rb", "[user@dev ~]USD ./del_empty_bucket.rb | echo USD?", "[user@dev ~]USD vim del_non_empty_bucket.rb", "#!/usr/bin/env ruby load 'conn.rb' s3_client = Aws::S3::Client.new Aws::S3::Bucket.new('my-new-bucket2', client: s3_client).clear! 
s3_client.delete_bucket(bucket: 'my-new-bucket2')", "[user@dev ~]USD chmod +x del_non_empty_bucket.rb", "[user@dev ~]USD ./del_non_empty_bucket.rb | echo USD?", "[user@dev ~]USD vim delete_object.rb", "#!/usr/bin/env ruby load 'conn.rb' s3_client = Aws::S3::Client.new s3_client.delete_object(key: 'hello.txt', bucket: 'my-new-bucket2')", "[user@dev ~]USD chmod +x delete_object.rb", "[user@dev ~]USD ./delete_object.rb", "yum install php", "[user@dev ~]USD mkdir php_s3 [user@dev ~]USD cd php_s3", "[user@dev ~]USD cp -r ~/Downloads/aws/ ~/php_s3/", "[user@dev ~]USD vim conn.php", "<?php define('AWS_KEY', ' MY_ACCESS_KEY '); define('AWS_SECRET_KEY', ' MY_SECRET_KEY '); define('HOST', ' FQDN_OF_GATEWAY_NODE '); define('PORT', '8080'); // require the AWS SDK for php library require '/ PATH_TO_AWS /aws-autoloader.php'; use Aws\\S3\\S3Client; // Establish connection with host using S3 Client client = S3Client::factory(array( 'base_url' => HOST , 'port' => PORT , 'key' => AWS_KEY , 'secret' => AWS_SECRET_KEY )); ?>", "[user@dev ~]USD php -f conn.php | echo USD?", "[user@dev ~]USD vim create_bucket.php", "<?php include 'conn.php'; client->createBucket(array('Bucket' => 'my-new-bucket3')); ?>", "[user@dev ~]USD php -f create_bucket.php", "[user@dev ~]USD vim list_owned_buckets.php", "<?php include 'conn.php'; blist = client->listBuckets(); echo \"Buckets belonging to \" . blist['Owner']['ID'] . \":\\n\"; foreach (blist['Buckets'] as b) { echo \"{b['Name']}\\t{b['CreationDate']}\\n\"; } ?>", "[user@dev ~]USD php -f list_owned_buckets.php", "my-new-bucket3 2020-01-21 10:33:19 UTC", "[user@dev ~]USD echo \"Hello World!\" > hello.txt", "[user@dev ~]USD vim create_object.php", "<?php include 'conn.php'; key = 'hello.txt'; source_file = './hello.txt'; acl = 'private'; bucket = 'my-new-bucket3'; client->upload(bucket, key, fopen(source_file, 'r'), acl); ?>", "[user@dev ~]USD php -f create_object.php", "[user@dev ~]USD vim list_bucket_content.php", "<?php include 'conn.php'; o_iter = client->getIterator('ListObjects', array( 'Bucket' => 'my-new-bucket3' )); foreach (o_iter as o) { echo \"{o['Key']}\\t{o['Size']}\\t{o['LastModified']}\\n\"; } ?>", "[user@dev ~]USD php -f list_bucket_content.php", "hello.txt 12 Fri, 22 Jan 2020 15:54:52 GMT", "[user@dev ~]USD vim del_empty_bucket.php", "<?php include 'conn.php'; client->deleteBucket(array('Bucket' => 'my-new-bucket3')); ?>", "[user@dev ~]USD php -f del_empty_bucket.php | echo USD?", "[user@dev ~]USD vim delete_object.php", "<?php include 'conn.php'; client->deleteObject(array( 'Bucket' => 'my-new-bucket3', 'Key' => 'hello.txt', )); ?>", "[user@dev ~]USD php -f delete_object.php", "ceph config set RGW_CLIENT_NAME rgw_sts_key STS_KEY ceph config set RGW_CLIENT_NAME rgw_s3_auth_use_sts true", "ceph config set client.rgw rgw_sts_key 7f8fd8dd4700mnop ceph config set client.rgw rgw_s3_auth_use_sts true", "systemctl restart ceph- CLUSTER_ID @ SERVICE_TYPE . 
ID .service", "systemctl restart ceph-c4b34c6f-8365-11ba-dc31-529020a7702d@rgw.realm.zone.host01.gwasto.service", "ceph orch restart SERVICE_TYPE", "ceph orch restart rgw", "radosgw-admin --uid USER_NAME --display-name \" DISPLAY_NAME \" --access_key USER_NAME --secret SECRET user create", "[user@rgw ~]USD radosgw-admin --uid TESTER --display-name \"TestUser\" --access_key TESTER --secret test123 user create", "radosgw-admin caps add --uid=\" USER_NAME \" --caps=\"oidc-provider=*\"", "[user@rgw ~]USD radosgw-admin caps add --uid=\"TESTER\" --caps=\"oidc-provider=*\"", "\"{\\\"Version\\\":\\\"2020-01-17\\\",\\\"Statement\\\":[{\\\"Effect\\\":\\\"Allow\\\",\\\"Principal\\\":{\\\"Federated\\\":[\\\"arn:aws:iam:::oidc-provider/ IDP_URL \\\"]},\\\"Action\\\":[\\\"sts:AssumeRoleWithWebIdentity\\\"],\\\"Condition\\\":{\\\"StringEquals\\\":{\\\" IDP_URL :app_id\\\":\\\" AUD_FIELD \\\"\\}\\}\\}\\]\\}\"", "curl -k -v -X GET -H \"Content-Type: application/x-www-form-urlencoded\" \" IDP_URL :8000/ CONTEXT /realms/ REALM /.well-known/openid-configuration\" | jq .", "[user@client ~]USD curl -k -v -X GET -H \"Content-Type: application/x-www-form-urlencoded\" \"http://www.example.com:8000/auth/realms/quickstart/.well-known/openid-configuration\" | jq .", "curl -k -v -X GET -H \"Content-Type: application/x-www-form-urlencoded\" \" IDP_URL / CONTEXT /realms/ REALM /protocol/openid-connect/certs\" | jq .", "[user@client ~]USD curl -k -v -X GET -H \"Content-Type: application/x-www-form-urlencoded\" \"http://www.example.com/auth/realms/quickstart/protocol/openid-connect/certs\" | jq .", "-----BEGIN CERTIFICATE----- MIIDYjCCAkqgAwIBAgIEEEd2CDANBgkqhkiG9w0BAQsFADBzMQkwBwYDVQQGEwAxCTAHBgNVBAgTADEJMAcGA1UEBxMAMQkwBwYDVQQKEwAxCTAHBgNVBAsTADE6MDgGA1UEAxMxYXV0aHN2Yy1pbmxpbmVtZmEuZGV2LnZlcmlmeS5pYm1jbG91ZHNlY3VyaXR5LmNvbTAeFw0yMTA3MDUxMzU2MzZaFw0zMTA3MDMxMzU2MzZaMHMxCTAHBgNVBAYTADEJMAcGA1UECBMAMQkwBwYDVQQHEwAxCTAHBgNVBAoTADEJMAcGA1UECxMAMTowOAYDVQQDEzFhdXRoc3ZjLWlubGluZW1mYS5kZXYudmVyaWZ5LmlibWNsb3Vkc2VjdXJpdHkuY29tMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAphyu3HaAZ14JH/EXetZxtNnerNuqcnfxcmLhBz9SsTlFD59ta+BOVlRnK5SdYEqO3ws2iGEzTvC55rczF+hDVHFZEBJLVLQe8ABmi22RAtG1P0dA/Bq8ReFxpOFVWJUBc31QM+ummW0T4yw44wQJI51LZTMz7PznB0ScpObxKe+frFKd1TCMXPlWOSzmTeFYKzR83Fg9hsnz7Y8SKGxi+RoBbTLT+ektfWpR7O+oWZIf4INe1VYJRxZvn+qWcwI5uMRCtQkiMknc3Rj6Eupiqq6FlAjDs0p//EzsHAlW244jMYnHCGq0UP3oE7vViLJyiOmZw7J3rvs3m9mOQiPLoQIDAQABMA0GCSqGSIb3DQEBCwUAA4IBAQCeVqAzSh7Tp8LgaTIFUuRbdjBAKXC9Nw3+pRBHoiUTdhqO3ualyGih9m/js/clb8Vq/39zl0VPeaslWl2NNX9zaK7xo+ckVIOY3ucCaTC04ZUn1KzZu/7azlN0C5XSWg/CfXgU2P3BeMNzc1UNY1BASGyWn2lEplIVWKLaDZpNdSyyGyaoQAIBdzxeNCyzDfPCa2oSO8WH1czmFiNPqR5kdknHI96CmsQdi+DT4jwzVsYgrLfcHXmiWyIAb883hR3Pobp+Bsw7LUnxebQ5ewccjYmrJzOk5Wb5FpXBhaJH1B3AEd6RGalRUyc/zUKdvEy0nIRMDS9x2BP3NVvZSADD -----END CERTIFICATE-----", "openssl x509 -in CERT_FILE -fingerprint -noout", "[user@client ~]USD openssl x509 -in certificate.crt -fingerprint -noout SHA1 Fingerprint=F7:D7:B3:51:5D:D0:D3:19:DD:21:9A:43:A9:EA:72:7A:D6:06:52:87", "bash check_token_isv.sh | jq .iss \"https://keycloak-sso.apps.ocp.example.com/auth/realms/ceph\"", "aws --endpoint https://cephproxy1.example.com:8443 iam create-open-id-connect-provider --url https://keycloak-sso.apps.ocp.example.com/auth/realms/ceph --thumbprint-list 00E9CFD697E0B16DD13C86B0FFDC29957E5D24DF", "aws --endpoint https://cephproxy1.example.com:8443 iam list-open-id-connect-providers { \"OpenIDConnectProviderList\": [ { \"Arn\": \"arn:aws:iam:::oidc-provider/keycloak-sso.apps.ocp.example.com/auth/realms/ceph\" } ] }", 
"curl -k -q -L -X POST \"https://keycloak-sso.apps.example.com/auth/realms/ceph/protocol/openid-connect/ token\" -H 'Content-Type: application/x-www-form-urlencoded' --data-urlencode 'client_id=ceph' --data-urlencode 'grant_type=password' --data-urlencode 'client_secret=XXXXXXXXXXXXXXXXXXXXXXX' --data-urlencode 'scope=openid' --data-urlencode \"username=SSOUSERNAME\" --data-urlencode \"password=SSOPASSWORD\"", "cat check_token.sh USERNAME=USD1 PASSWORD=USD2 KC_CLIENT=\"ceph\" KC_CLIENT_SECRET=\"7sQXqyMSzHIeMcSALoKaljB6sNIBDRjU\" KC_ACCESS_TOKEN=\"USD(./get_web_token.sh USDUSERNAME USDPASSWORD | jq -r '.access_token')\" KC_SERVER=\"https://keycloak-sso.apps.ocp.stg.local\" KC_CONTEXT=\"auth\" KC_REALM=\"ceph\" curl -k -s -q -X POST -u \"USDKC_CLIENT:USDKC_CLIENT_SECRET\" -d \"token=USDKC_ACCESS_TOKEN\" \"USDKC_SERVER/USDKC_CONTEXT/realms/USDKC_REALM/protocol/openid-connect/token/introspect\" | jq . ./check_token.sh s3admin passw0rd | jq .sub \"ceph\"", "cat role-rgwadmins.json { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"Federated\": [ \"arn:aws:iam:::oidc-provider/keycloak-sso.apps.example.com/auth/realms/ceph\" ] }, \"Action\": [ \"sts:AssumeRoleWithWebIdentity\" ], \"Condition\": { \"StringLike\": { \"keycloak-sso.apps.example.com/auth/realms/ceph:sub\":\"ceph\" } } } ] }", "radosgw-admin role create --role-name rgwadmins --assume-role-policy-doc=USD(jq -rc . /root/role-rgwadmins.json)", "cat test-assume-role.sh #!/bin/bash export AWS_CA_BUNDLE=\"/etc/pki/ca-trust/source/anchors/cert.pem\" unset AWS_ACCESS_KEY_ID unset AWS_SECRET_ACCESS_KEY unset AWS_SESSION_TOKEN KC_ACCESS_TOKEN=USD(curl -k -q -L -X POST \"https://keycloak-sso.apps.ocp.example.com/auth/realms/ceph/protocol/openid-connect/ token\" -H 'Content-Type: application/x-www-form-urlencoded' --data-urlencode 'client_id=ceph' --data-urlencode 'grant_type=password' --data-urlencode 'client_secret=XXXXXXXXXXXXXXXXXXXXXXX' --data-urlencode 'scope=openid' --data-urlencode \"<varname>SSOUSERNAME</varname>\" --data-urlencode \"<varname>SSOPASSWORD</varname>\" | jq -r .access_token) echo USD{KC_ACCESS_TOKEN} IDM_ASSUME_ROLE_CREDS=USD(aws sts assume-role-with-web-identity --role-arn \"arn:aws:iam:::role/USD3\" --role-session-name testbr --endpoint=https://cephproxy1.example.com:8443 --web-identity-token=\"USDKC_ACCESS_TOKEN\") echo \"aws sts assume-role-with-web-identity --role-arn \"arn:aws:iam:::role/USD3\" --role-session-name testb --endpoint=https://cephproxy1.example.com:8443 --web-identity-token=\"USDKC_ACCESS_TOKEN\"\" echo USDIDM_ASSUME_ROLE_CREDS export AWS_ACCESS_KEY_ID=USD(echo USDIDM_ASSUME_ROLE_CREDS | jq -r .Credentials.AccessKeyId) export AWS_SECRET_ACCESS_KEY=USD(echo USDIDM_ASSUME_ROLE_CREDS | jq -r .Credentials.SecretAccessKey) export AWS_SESSION_TOKEN=USD(echo USDIDM_ASSUME_ROLE_CREDS | jq -r .Credentials.SessionToken)", "source ./test-assume-role.sh s3admin passw0rd rgwadmins aws s3 mb s3://testbucket aws s3 ls", "ceph config set RGW_CLIENT_NAME rgw_sts_key STS_KEY ceph config set RGW_CLIENT_NAME rgw_s3_auth_use_sts true", "ceph config set client.rgw rgw_sts_key 7f8fd8dd4700mnop ceph config set client.rgw rgw_s3_auth_use_sts true", "[user@osp ~]USD openstack ec2 credentials create +------------+--------------------------------------------------------+ | Field | Value | +------------+--------------------------------------------------------+ | access | b924dfc87d454d15896691182fdeb0ef | | links | {u'self': u'http://192.168.0.15/identity/v3/users/ | | | 
40a7140e424f493d8165abc652dc731c/credentials/ | | | OS-EC2/b924dfc87d454d15896691182fdeb0ef'} | | project_id | c703801dccaf4a0aaa39bec8c481e25a | | secret | 6a2142613c504c42a94ba2b82147dc28 | | trust_id | None | | user_id | 40a7140e424f493d8165abc652dc731c | +------------+--------------------------------------------------------+", "import boto3 access_key = b924dfc87d454d15896691182fdeb0ef secret_key = 6a2142613c504c42a94ba2b82147dc28 client = boto3.client('sts', aws_access_key_id=access_key, aws_secret_access_key=secret_key, endpoint_url=https://www.example.com/rgw, region_name='', ) response = client.get_session_token( DurationSeconds=43200 )", "s3client = boto3.client('s3', aws_access_key_id = response['Credentials']['AccessKeyId'], aws_secret_access_key = response['Credentials']['SecretAccessKey'], aws_session_token = response['Credentials']['SessionToken'], endpoint_url=https://www.example.com/s3, region_name='') bucket = s3client.create_bucket(Bucket='my-new-shiny-bucket') response = s3client.list_buckets() for bucket in response[\"Buckets\"]: print \"{name}\\t{created}\".format( name = bucket['Name'], created = bucket['CreationDate'], )", "radosgw-admin caps add --uid=\" USER \" --caps=\"roles=*\"", "radosgw-admin caps add --uid=\"gwadmin\" --caps=\"roles=*\"", "radosgw-admin role create --role-name= ROLE_NAME --path= PATH --assume-role-policy-doc= TRUST_POLICY_DOC", "radosgw-admin role create --role-name=S3Access --path=/application_abc/component_xyz/ --assume-role-policy-doc=\\{\\\"Version\\\":\\\"2012-10-17\\\",\\\"Statement\\\":\\[\\{\\\"Effect\\\":\\\"Allow\\\",\\\"Principal\\\":\\{\\\"AWS\\\":\\[\\\"arn:aws:iam:::user/TESTER\\\"\\]\\},\\\"Action\\\":\\[\\\"sts:AssumeRole\\\"\\]\\}\\]\\}", "radosgw-admin role-policy put --role-name= ROLE_NAME --policy-name= POLICY_NAME --policy-doc= PERMISSION_POLICY_DOC", "radosgw-admin role-policy put --role-name=S3Access --policy-name=Policy --policy-doc=\\{\\\"Version\\\":\\\"2012-10-17\\\",\\\"Statement\\\":\\[\\{\\\"Effect\\\":\\\"Allow\\\",\\\"Action\\\":\\[\\\"s3:*\\\"\\],\\\"Resource\\\":\\\"arn:aws:s3:::example_bucket\\\"\\}\\]\\}", "radosgw-admin user info --uid=gwuser | grep -A1 access_key", "import boto3 access_key = 11BS02LGFB6AL6H1ADMW secret_key = vzCEkuryfn060dfee4fgQPqFrncKEIkh3ZcdOANY client = boto3.client('sts', aws_access_key_id=access_key, aws_secret_access_key=secret_key, endpoint_url=https://www.example.com/rgw, region_name='', ) response = client.assume_role( RoleArn='arn:aws:iam:::role/application_abc/component_xyz/S3Access', RoleSessionName='Bob', DurationSeconds=3600 )", "class SigV4Auth(BaseSigner): \"\"\" Sign a request with Signature V4. \"\"\" REQUIRES_REGION = True def __init__(self, credentials, service_name, region_name): self.credentials = credentials # We initialize these value here so the unit tests can have # valid values. But these will get overriden in ``add_auth`` # later for real requests. 
self._region_name = region_name if service_name == 'sts': 1 self._service_name = 's3' 2 else: 3 self._service_name = service_name 4", "def _modify_request_before_signing(self, request): if 'Authorization' in request.headers: del request.headers['Authorization'] self._set_necessary_date_headers(request) if self.credentials.token: if 'X-Amz-Security-Token' in request.headers: del request.headers['X-Amz-Security-Token'] request.headers['X-Amz-Security-Token'] = self.credentials.token if not request.context.get('payload_signing_enabled', True): if 'X-Amz-Content-SHA256' in request.headers: del request.headers['X-Amz-Content-SHA256'] request.headers['X-Amz-Content-SHA256'] = UNSIGNED_PAYLOAD 1 else: 2 request.headers['X-Amz-Content-SHA256'] = self.payload(request)", "client.put_bucket_notification_configuration( Bucket=bucket_name, NotificationConfiguration={ 'TopicConfigurations': [ { 'Id': notification_name, 'TopicArn': topic_arn, 'Events': ['s3:ObjectCreated:*', 's3:ObjectRemoved:*', 's3:ObjectLifecycle:Expiration:*'] }]})", "Get / BUCKET ?notification= NOTIFICATION_ID HTTP/1.1 Host: cname.domain.com Date: date Authorization: AWS ACCESS_KEY : HASH_OF_HEADER_AND_SECRET", "Get /testbucket?notification=testnotificationID HTTP/1.1 Host: cname.domain.com Date: date Authorization: AWS ACCESS_KEY : HASH_OF_HEADER_AND_SECRET", "<NotificationConfiguration xmlns=\"http://s3.amazonaws.com/doc/2006-03-01/\"> <TopicConfiguration> <Id></Id> <Topic></Topic> <Event></Event> <Filter> <S3Key> <FilterRule> <Name></Name> <Value></Value> </FilterRule> </S3Key> <S3Metadata> <FilterRule> <Name></Name> <Value></Value> </FilterRule> </S3Metadata> <S3Tags> <FilterRule> <Name></Name> <Value></Value> </FilterRule> </S3Tags> </Filter> </TopicConfiguration> </NotificationConfiguration>", "DELETE / BUCKET ?notification= NOTIFICATION_ID HTTP/1.1", "DELETE /testbucket?notification=testnotificationID HTTP/1.1", "GET /mybucket HTTP/1.1 Host: cname.domain.com", "GET / HTTP/1.1 Host: mybucket.cname.domain.com", "GET / HTTP/1.1 Host: cname.domain.com Authorization: AWS ACCESS_KEY : HASH_OF_HEADER_AND_SECRET", "GET / BUCKET ?max-keys=25 HTTP/1.1 Host: cname.domain.com", "PUT / BUCKET HTTP/1.1 Host: cname.domain.com x-amz-acl: public-read-write Authorization: AWS ACCESS_KEY : HASH_OF_HEADER_AND_SECRET", "PUT / BUCKET ?website-configuration=HTTP/1.1", "PUT /testbucket?website-configuration=HTTP/1.1", "GET / BUCKET ?website-configuration=HTTP/1.1", "GET /testbucket?website-configuration=HTTP/1.1", "DELETE / BUCKET ?website-configuration=HTTP/1.1", "DELETE /testbucket?website-configuration=HTTP/1.1", "PUT / BUCKET ?replication HTTP/1.1", "PUT /testbucket?replication HTTP/1.1", "GET / BUCKET ?replication HTTP/1.1", "GET /testbucket?replication HTTP/1.1", "DELETE / BUCKET ?replication HTTP/1.1", "DELETE /testbucket?replication HTTP/1.1", "DELETE / BUCKET HTTP/1.1 Host: cname.domain.com Authorization: AWS ACCESS_KEY : HASH_OF_HEADER_AND_SECRET", "<LifecycleConfiguration> <Rule> <Prefix/> <Status>Enabled</Status> <Expiration> <Days>10</Days> </Expiration> </Rule> </LifecycleConfiguration>", "<LifecycleConfiguration> <Rule> <Status>Enabled</Status> <Filter> <Prefix>keypre/</Prefix> </Filter> </Rule> </LifecycleConfiguration>", "<LifecycleConfiguration> <Rule> <Status>Enabled</Status> <Filter> <Prefix>keypre/</Prefix> </Filter> </Rule> <Rule> <Status>Enabled</Status> <Filter> <Prefix>mypre/</Prefix> </Filter> </Rule> </LifecycleConfiguration>", "<LifecycleConfiguration> <Rule> <Status>Enabled</Status> <Filter> <Tag> <Key>key</Key> 
<Value>value</Value> </Tag> </Filter> </Rule> </LifecycleConfiguration>", "<LifecycleConfiguration> <Rule> <Status>Enabled</Status> <Filter> <And> <Prefix>key-prefix</Prefix> <Tag> <Key>key1</Key> <Value>value1</Value> </Tag> <Tag> <Key>key2</Key> <Value>value2</Value> </Tag> </And> </Filter> </Rule> </LifecycleConfiguration>", "GET / BUCKET ?lifecycle HTTP/1.1 Host: cname.domain.com Authorization: AWS ACCESS_KEY : HASH_OF_HEADER_AND_SECRET", "PUT / BUCKET ?lifecycle HTTP/1.1 Host: cname.domain.com Authorization: AWS ACCESS_KEY : HASH_OF_HEADER_AND_SECRET <LifecycleConfiguration> <Rule> <Expiration> <Days>10</Days> </Expiration> </Rule> <Rule> </Rule> </LifecycleConfiguration>", "DELETE / BUCKET ?lifecycle HTTP/1.1 Host: cname.domain.com Authorization: AWS ACCESS_KEY : HASH_OF_HEADER_AND_SECRET", "GET / BUCKET ?location HTTP/1.1 Host: cname.domain.com Authorization: AWS ACCESS_KEY : HASH_OF_HEADER_AND_SECRET", "GET / BUCKET ?versioning HTTP/1.1 Host: cname.domain.com Authorization: AWS ACCESS_KEY : HASH_OF_HEADER_AND_SECRET", "PUT / BUCKET ?versioning HTTP/1.1", "PUT /testbucket?versioning HTTP/1.1", "GET / BUCKET ?acl HTTP/1.1 Host: cname.domain.com Authorization: AWS ACCESS_KEY : HASH_OF_HEADER_AND_SECRET", "PUT / BUCKET ?acl HTTP/1.1", "GET / BUCKET ?cors HTTP/1.1 Host: cname.domain.com Authorization: AWS ACCESS_KEY : HASH_OF_HEADER_AND_SECRET", "PUT / BUCKET ?cors HTTP/1.1 Host: cname.domain.com Authorization: AWS ACCESS_KEY : HASH_OF_HEADER_AND_SECRET", "DELETE / BUCKET ?cors HTTP/1.1 Host: cname.domain.com Authorization: AWS ACCESS_KEY : HASH_OF_HEADER_AND_SECRET", "GET / BUCKET ?versions HTTP/1.1 Host: cname.domain.com Authorization: AWS ACCESS_KEY : HASH_OF_HEADER_AND_SECRET", "HEAD / BUCKET HTTP/1.1 Host: cname.domain.com Date: date Authorization: AWS ACCESS_KEY : HASH_OF_HEADER_AND_SECRET", "GET / BUCKET ?uploads HTTP/1.1", "cat > examplepol { \"Version\": \"2012-10-17\", \"Statement\": [{ \"Effect\": \"Allow\", \"Principal\": {\"AWS\": [\"arn:aws:iam::usfolks:user/fred\"]}, \"Action\": \"s3:PutObjectAcl\", \"Resource\": [ \"arn:aws:s3:::happybucket/*\" ] }] } s3cmd setpolicy examplepol s3://happybucket s3cmd delpolicy s3://happybucket", "GET / BUCKET ?requestPayment HTTP/1.1 Host: cname.domain.com Authorization: AWS ACCESS_KEY : HASH_OF_HEADER_AND_SECRET", "PUT / BUCKET ?requestPayment HTTP/1.1 Host: cname.domain.com", "https://rgw.domain.com/tenant:bucket", "from boto.s3.connection import S3Connection, OrdinaryCallingFormat c = S3Connection( aws_access_key_id=\"TESTER\", aws_secret_access_key=\"test123\", host=\"rgw.domain.com\", calling_format = OrdinaryCallingFormat() ) bucket = c.get_bucket(\"tenant:bucket\")", "{ \"Principal\": \"*\", \"Resource\": \"*\", \"Action\": \"s3:PutObject\", \"Effect\": \"Allow\", \"Condition\": { \"StringLike\": {\"aws:SourceVpc\": \"vpc-*\"}} }", "{ \"Principal\": \"*\", \"Resource\": \"*\", \"Action\": \"s3:PutObject\", \"Effect\": \"Allow\", \"Condition\": {\"StringEquals\": {\"aws:SourceVpc\": \"vpc-91237329\"}} }", "GET /v20180820/configuration/publicAccessBlock HTTP/1.1 Host: cname.domain.com x-amz-account-id: _ACCOUNTID_", "PUT /?publicAccessBlock HTTP/1.1 Host: Bucket.s3.amazonaws.com Content-MD5: ContentMD5 x-amz-sdk-checksum-algorithm: ChecksumAlgorithm x-amz-expected-bucket-owner: ExpectedBucketOwner <?xml version=\"1.0\" encoding=\"UTF-8\"?> <PublicAccessBlockConfiguration xmlns=\"http://s3.amazonaws.com/doc/2006-03-01/\"> <BlockPublicAcls>boolean</BlockPublicAcls> <IgnorePublicAcls>boolean</IgnorePublicAcls> 
<BlockPublicPolicy>boolean</BlockPublicPolicy> <RestrictPublicBuckets>boolean</RestrictPublicBuckets> </PublicAccessBlockConfiguration>", "DELETE /v20180820/configuration/publicAccessBlock HTTP/1.1 Host: s3-control.amazonaws.com x-amz-account-id: AccountId", "GET / BUCKET / OBJECT HTTP/1.1", "GET / BUCKET / OBJECT ?versionId= VERSION_ID HTTP/1.1", "GET / BUCKET / OBJECT ?partNumber= PARTNUMBER &versionId= VersionId HTTP/1.1 Host: Bucket.s3.amazonaws.com If-Match: IfMatch If-Modified-Since: IfModifiedSince If-None-Match: IfNoneMatch If-Unmodified-Since: IfUnmodifiedSince Range: Range", "GET /BUCKET/OBJECT?attributes&versionId=VersionId", "GET /testbucket/testobject?attributes&versionId=testversionid Host: Bucket.s3.amazonaws.com x-amz-max-parts: MaxParts x-amz-part-number-marker: PartNumberMarker x-amz-server-side-encryption-customer-algorithm: SSECustomerAlgorithm x-amz-server-side-encryption-customer-key: SSECustomerKey x-amz-server-side-encryption-customer-key-MD5: SSECustomerKeyMD5 x-amz-request-payer: RequestPayer x-amz-expected-bucket-owner: ExpectedBucketOwner x-amz-object-attributes: ObjectAttributes", "GET /{Key+}?attributes&versionId=VersionId HTTP/1.1 Host: Bucket.s3.amazonaws.com x-amz-max-parts: MaxParts x-amz-part-number-marker: PartNumberMarker x-amz-server-side-encryption-customer-algorithm: SSECustomerAlgorithm x-amz-server-side-encryption-customer-key: SSECustomerKey x-amz-server-side-encryption-customer-key-MD5: SSECustomerKeyMD5 x-amz-request-payer: RequestPayer x-amz-expected-bucket-owner: ExpectedBucketOwner x-amz-object-attributes: ObjectAttributes", "HTTP/1.1 200 x-amz-delete-marker: DeleteMarker Last-Modified: LastModified x-amz-version-id: VersionId x-amz-request-charged: RequestCharged <?xml version=\"1.0\" encoding=\"UTF-8\"?> <GetObjectAttributesOutput> <ETag>string</ETag> <Checksum> <ChecksumCRC32>string</ChecksumCRC32> <ChecksumCRC32C>string</ChecksumCRC32C> <ChecksumSHA1>string</ChecksumSHA1> <ChecksumSHA256>string</ChecksumSHA256> </Checksum> <ObjectParts> <IsTruncated>boolean</IsTruncated> <MaxParts>integer</MaxParts> <NextPartNumberMarker>integer</NextPartNumberMarker> <PartNumberMarker>integer</PartNumberMarker> <Part> <ChecksumCRC32>string</ChecksumCRC32> <ChecksumCRC32C>string</ChecksumCRC32C> <ChecksumSHA1>string</ChecksumSHA1> <ChecksumSHA256>string</ChecksumSHA256> <PartNumber>integer</PartNumber> <Size>long</Size> </Part> <PartsCount>integer</PartsCount> </ObjectParts> <StorageClass>string</StorageClass> <ObjectSize>long</ObjectSize> </GetObjectAttributesOutput>", "HEAD / BUCKET / OBJECT HTTP/1.1", "HEAD / BUCKET / OBJECT ?versionId= VERSION_ID HTTP/1.1", "PUT / BUCKET ?object-lock HTTP/1.1", "PUT /testbucket?object-lock HTTP/1.1", "GET / BUCKET ?object-lock HTTP/1.1", "GET /testbucket?object-lock HTTP/1.1", "PUT / BUCKET / OBJECT ?legal-hold&versionId= HTTP/1.1", "PUT /testbucket/testobject?legal-hold&versionId= HTTP/1.1", "GET / BUCKET / OBJECT ?legal-hold&versionId= HTTP/1.1", "GET /testbucket/testobject?legal-hold&versionId= HTTP/1.1", "PUT / BUCKET / OBJECT ?retention&versionId= HTTP/1.1", "PUT /testbucket/testobject?retention&versionId= HTTP/1.1", "GET / BUCKET / OBJECT ?retention&versionId= HTTP/1.1", "GET /testbucket/testobject?retention&versionId= HTTP/1.1", "PUT / BUCKET / OBJECT ?tagging&versionId= HTTP/1.1", "PUT /testbucket/testobject?tagging&versionId= HTTP/1.1", "GET / BUCKET / OBJECT ?tagging&versionId= HTTP/1.1", "GET /testbucket/testobject?tagging&versionId= HTTP/1.1", "DELETE / BUCKET / OBJECT ?tagging&versionId= HTTP/1.1", 
"DELETE /testbucket/testobject?tagging&versionId= HTTP/1.1", "PUT / BUCKET / OBJECT HTTP/1.1", "DELETE / BUCKET / OBJECT HTTP/1.1", "DELETE / BUCKET / OBJECT ?versionId= VERSION_ID HTTP/1.1", "POST / BUCKET / OBJECT ?delete HTTP/1.1", "GET / BUCKET / OBJECT ?acl HTTP/1.1", "GET / BUCKET / OBJECT ?versionId= VERSION_ID &acl HTTP/1.1", "PUT / BUCKET / OBJECT ?acl", "PUT / DEST_BUCKET / DEST_OBJECT HTTP/1.1 x-amz-copy-source: SOURCE_BUCKET / SOURCE_OBJECT", "POST / BUCKET / OBJECT HTTP/1.1", "OPTIONS / OBJECT HTTP/1.1", "POST / BUCKET / OBJECT ?uploads", "PUT / BUCKET / OBJECT ?partNumber=&uploadId= UPLOAD_ID HTTP/1.1", "GET / BUCKET / OBJECT ?uploadId= UPLOAD_ID HTTP/1.1", "POST / BUCKET / OBJECT ?uploadId= UPLOAD_ID HTTP/1.1", "PUT / BUCKET / OBJECT ?partNumber=PartNumber&uploadId= UPLOAD_ID HTTP/1.1 Host: cname.domain.com Authorization: AWS ACCESS_KEY : HASH_OF_HEADER_AND_SECRET", "DELETE / BUCKET / OBJECT ?uploadId= UPLOAD_ID HTTP/1.1", "select customerid from s3Object where age>30 and age<65;", "POST / BUCKET / KEY ?select&select-type=2 HTTP/1.1\\r\\n", "POST /testbucket/sample1csv?select&select-type=2 HTTP/1.1\\r\\n POST /testbucket/sample1parquet?select&select-type=2 HTTP/1.1\\r\\n", "{:event-type,records} {:content-type,application/octet-stream} {:message-type,event}", "aws --endpoint- URL http://localhost:80 s3api select-object-content --bucket BUCKET_NAME --expression-type 'SQL' --input-serialization '{\"CSV\": {\"FieldDelimiter\": \",\" , \"QuoteCharacter\": \"\\\"\" , \"RecordDelimiter\" : \"\\n\" , \"QuoteEscapeCharacter\" : \"\\\\\" , \"FileHeaderInfo\": \"USE\" }, \"CompressionType\": \"NONE\"}' --output-serialization '{\"CSV\": {}}' --key OBJECT_NAME .csv --expression \"select count(0) from s3object where int(_1)<10;\" output.csv", "aws --endpoint-url http://localhost:80 s3api select-object-content --bucket testbucket --expression-type 'SQL' --input-serialization '{\"CSV\": {\"FieldDelimiter\": \",\" , \"QuoteCharacter\": \"\\\"\" , \"RecordDelimiter\" : \"\\n\" , \"QuoteEscapeCharacter\" : \"\\\\\" , \"FileHeaderInfo\": \"USE\" }, \"CompressionType\": \"NONE\"}' --output-serialization '{\"CSV\": {}}' --key testobject.csv --expression \"select count(0) from s3object where int(_1)<10;\" output.csv", "aws --endpoint-url http://localhost:80 s3api select-object-content --bucket BUCKET_NAME --expression-type 'SQL' --input-serialization '{\"Parquet\": {}, {\"CompressionType\": \"NONE\"}' --output-serialization '{\"CSV\": {}}' --key OBJECT_NAME .parquet --expression \"select count(0) from s3object where int(_1)<10;\" output.csv", "aws --endpoint-url http://localhost:80 s3api select-object-content --bucket testbucket --expression-type 'SQL' --input-serialization '{\"Parquet\": {}, {\"CompressionType\": \"NONE\"}' --output-serialization '{\"CSV\": {}}' --key testobject.parquet --expression \"select count(0) from s3object where int(_1)<10;\" output.csv", "aws --endpoint- URL http://localhost:80 s3api select-object-content --bucket BUCKET_NAME --expression-type 'SQL' --input-serialization '{\"JSON\": {\"CompressionType\": \"NONE\"}' --output-serialization '{\"CSV\": {}}}' --key OBJECT_NAME .json --expression \"select count(0) from s3object where int(_1)<10;\" output.csv", "aws --endpoint-url http://localhost:80 s3api select-object-content --bucket testbucket --expression-type 'SQL' --input-serialization '{\"JSON\": {\"CompressionType\": \"NONE\"}' --output-serialization '{\"CSV\": {}}}' --key testobject.json --expression \"select count(0) from s3object where int(_1)<10;\" output.csv", 
"import pprint import boto3 from botocore.exceptions import ClientError def run_s3select(bucket,key,query,column_delim=\",\",row_delim=\"\\n\",quot_char='\"',esc_char='\\\\',csv_header_info=\"NONE\"): s3 = boto3.client('s3', endpoint_url=endpoint, aws_access_key_id=access_key, region_name=region_name, aws_secret_access_key=secret_key) result = \"\" try: r = s3.select_object_content( Bucket=bucket, Key=key, ExpressionType='SQL', InputSerialization = {\"CSV\": {\"RecordDelimiter\" : row_delim, \"FieldDelimiter\" : column_delim,\"QuoteEscapeCharacter\": esc_char, \"QuoteCharacter\": quot_char, \"FileHeaderInfo\": csv_header_info}, \"CompressionType\": \"NONE\"}, OutputSerialization = {\"CSV\": {}}, Expression=query, RequestProgress = {\"Enabled\": progress}) except ClientError as c: result += str(c) return result for event in r['Payload']: if 'Records' in event: result = \"\" records = event['Records']['Payload'].decode('utf-8') result += records if 'Progress' in event: print(\"progress\") pprint.pprint(event['Progress'],width=1) if 'Stats' in event: print(\"Stats\") pprint.pprint(event['Stats'],width=1) if 'End' in event: print(\"End\") pprint.pprint(event['End'],width=1) return result run_s3select( \"my_bucket\", \"my_csv_object\", \"select int(_1) as a1, int(_2) as a2 , (a1+a2) as a3 from s3object where a3>100 and a3<300;\")", "select int(_1) as a1, int(_2) as a2 , (a1+a2) as a3 from s3object where a3>100 and a3<300;\")", "4-byte magic number \"PAR1\" <Column 1 Chunk 1 + Column Metadata> <Column 2 Chunk 1 + Column Metadata> <Column N Chunk 1 + Column Metadata> <Column 1 Chunk 2 + Column Metadata> <Column 2 Chunk 2 + Column Metadata> <Column N Chunk 2 + Column Metadata> <Column 1 Chunk M + Column Metadata> <Column 2 Chunk M + Column Metadata> <Column N Chunk M + Column Metadata> File Metadata 4-byte length in bytes of file metadata 4-byte magic number \"PAR1\"", "{ \"firstName\": \"Joe\", \"lastName\": \"Jackson\", \"gender\": \"male\", \"age\": \"twenty\" }, { \"firstName\": \"Joe_2\", \"lastName\": \"Jackson_2\", \"gender\": \"male\", \"age\": 21 }, \"phoneNumbers\": [ { \"type\": \"home1\", \"number\": \"734928_1\",\"addr\": 11 }, { \"type\": \"home2\", \"number\": \"734928_2\",\"addr\": 22 } ], \"key_after_array\": \"XXX\", \"description\" : { \"main_desc\" : \"value_1\", \"second_desc\" : \"value_2\" } the from-clause define a single row. _1 points to root object level. _1.age appears twice in Documnet-row, the last value is used for the operation. query = \"select _1.firstname,_1.key_after_array,_1.age+4,_1.description.main_desc,_1.description.second_desc from s3object[*].aa.bb.cc;\"; expected_result = Joe_2,XXX,25,value_1,value_2", "[cephuser@host01 ~]USD git clone https://github.com/ceph/s3select.git [cephuser@host01 ~]USD cd s3select", "[cephuser@host01 s3select]USD cat container/trino/hms_trino.yaml version: '3' services: hms: image: galsl/hms:dev container_name: hms environment: # S3_ENDPOINT the CEPH/RGW end-point-url - S3_ENDPOINT=http://rgw_ip:port - S3_ACCESS_KEY=abc - S3_SECRET_KEY=abc # the container starts with booting the hive metastore command: sh -c '. 
~/.bashrc; start_hive_metastore' ports: - 9083:9083 networks: - trino_hms trino: image: trinodb/trino:405 container_name: trino volumes: # the trino directory contains the necessary configuration - ./trino:/etc/trino ports: - 8080:8080 networks: - trino_hms networks: trino_hm", "[cephuser@host01 s3select]USD cat container/trino/trino/catalog/hive.properties connector.name=hive hive.metastore.uri=thrift://hms:9083 #hive.metastore.warehouse.dir=s3a://hive/ hive.allow-drop-table=true hive.allow-rename-table=true hive.allow-add-column=true hive.allow-drop-column=true hive.allow-rename-column=true hive.non-managed-table-writes-enabled=true hive.s3select-pushdown.enabled=true hive.s3.aws-access-key=abc hive.s3.aws-secret-key=abc should modify per s3-endpoint-url hive.s3.endpoint=http://rgw_ip:port #hive.s3.max-connections=1 #hive.s3select-pushdown.max-connections=1 hive.s3.connect-timeout=100s hive.s3.socket-timeout=100s hive.max-splits-per-second=10000 hive.max-split-size=128MB", "[cephuser@host01 s3select]USD sudo docker compose -f ./container/trino/hms_trino.yaml up -d", "[cephuser@host01 s3select]USD sudo docker exec -it trino /bin/bash trino@66f753905e82:/USD trino trino> create schema hive.csvbkt1schema; trino> create table hive.csvbkt1schema.polariondatacsv(c1 varchar,c2 varchar, c3 varchar, c4 varchar, c5 varchar, c6 varchar, c7 varchar, c8 varchar, c9 varchar) WITH ( external_location = 's3a://csvbkt1/',format = 'CSV'); trino> select * from hive.csvbkt1schema.polariondatacsv;" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html/developer_guide/ceph-object-gateway-and-the-s3-api
4.3. Shared Storage Example: NFS for a Simple Migration
4.3. Shared Storage Example: NFS for a Simple Migration Important This example uses NFS to share guest virtual machine images with other KVM host physical machines. Although not practical for large installations, it is presented to demonstrate migration techniques only. Do not use this example for migrating or running more than a few guest virtual machines. In addition, the sync parameter must be enabled; this is required for proper export of the NFS storage. In addition, it is strongly recommended that the NFS share is mounted on the source host physical machine, and that the guest virtual machine's image is created in the NFS-mounted directory on the source host physical machine. It should also be noted that NFS file locking must not be used as it is not supported in KVM. iSCSI storage is a better choice for large deployments. Refer to Section 12.5, "iSCSI-based Storage Pools" for configuration details. Also note that the instructions provided in this section are not meant to replace the detailed instructions found in the Red Hat Enterprise Linux Storage Administration Guide. Refer to this guide for information on configuring NFS, opening IP tables, and configuring the firewall. Create a directory for the disk images This shared directory will contain the disk images for the guest virtual machines. To do this, create a directory in a location different from /var/lib/libvirt/images. For example: Add the new directory path to the NFS configuration file The NFS configuration file is a text file located in /etc/exports. Open the file and edit it, adding the path to the new directory you created in step 1. Start NFS Make sure that the ports for NFS in iptables (2049, for example) are opened and add NFS to the /etc/hosts.allow file. Start the NFS service: Mount the shared storage on both the source and the destination Mount the /var/lib/libvirt/images directory on both the source and destination system, running the following command twice: once on the source system and again on the destination system. Warning Make sure that the directories you create in this procedure are compliant with the requirements as outlined in Section 4.1, "Live Migration Requirements" . In addition, the directory may need to be labeled with the correct SELinux label. For more information consult the NFS chapter in the Red Hat Enterprise Linux Storage Administration Guide .
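After the mount step, it can help to confirm on both the source and the destination that /var/lib/libvirt/images is actually an NFS mount and not the local directory. The following short Python check is only an illustrative sketch, not part of the original procedure; it assumes the mount point used in this example.

#!/usr/bin/env python
# Illustrative check: is the libvirt images directory an NFS mount?
import os

def is_nfs_mount(path="/var/lib/libvirt/images"):
    """Return True if path is a mounted NFS file system."""
    if not os.path.ismount(path):
        return False
    with open("/proc/mounts") as mounts:
        for line in mounts:
            fields = line.split()
            # /proc/mounts fields: device, mount point, fs type, options, dump, pass
            if len(fields) >= 3 and fields[1] == path and fields[2].startswith("nfs"):
                return True
    return False

if __name__ == "__main__":
    print("NFS mount present at /var/lib/libvirt/images: %s" % is_nfs_mount())

Run the same check on both the source and the destination after mounting the shared storage with the commands listed below.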
[ "mkdir /var/lib/libvirt-img/images", "echo \"/var/lib/libvirt-img/images\" >> /etc/exports/[NFS-Config-FILENAME.txt]", "service nfs start", "mount source_host :/var/lib/libvirt-img/images /var/lib/libvirt/images" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sect-shared-storage-nfs-migration
Preface
Preface Open Java Development Kit (OpenJDK) is a free and open source implementation of the Java Platform, Standard Edition (Java SE). The Red Hat build of OpenJDK is available in two versions, Red Hat build of OpenJDK 8u and Red Hat build of OpenJDK 11u. Packages for the Red Hat build of OpenJDK are made available on Red Hat Enterprise Linux and Microsoft Windows and shipped as a JDK and JRE in the Red Hat Ecosystem Catalog.
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_red_hat_build_of_openjdk_11.0.11/pr01
Chapter 7. Installing the Migration Toolkit for Containers in a restricted network environment
Chapter 7. Installing the Migration Toolkit for Containers in a restricted network environment You can install the Migration Toolkit for Containers (MTC) on OpenShift Container Platform 3 and 4 in a restricted network environment by performing the following procedures: Create a mirrored Operator catalog . This process creates a mapping.txt file, which contains the mapping between the registry.redhat.io image and your mirror registry image. The mapping.txt file is required for installing the Operator on the source cluster. Install the Migration Toolkit for Containers Operator on the OpenShift Container Platform 4.11 target cluster by using Operator Lifecycle Manager. By default, the MTC web console and the Migration Controller pod run on the target cluster. You can configure the Migration Controller custom resource manifest to run the MTC web console and the Migration Controller pod on a source cluster or on a remote cluster . Install the legacy Migration Toolkit for Containers Operator on the OpenShift Container Platform 3 source cluster from the command line interface. Configure object storage to use as a replication repository. To uninstall MTC, see Uninstalling MTC and deleting resources . 7.1. Compatibility guidelines You must install the Migration Toolkit for Containers (MTC) Operator that is compatible with your OpenShift Container Platform version. Definitions legacy platform OpenShift Container Platform 4.5 and earlier. modern platform OpenShift Container Platform 4.6 and later. legacy operator The MTC Operator designed for legacy platforms. modern operator The MTC Operator designed for modern platforms. control cluster The cluster that runs the MTC controller and GUI. remote cluster A source or destination cluster for a migration that runs Velero. The Control Cluster communicates with Remote clusters via the Velero API to drive migrations. You must use the compatible MTC version for migrating your OpenShift Container Platform clusters. For the migration to succeed both your source cluster and the destination cluster must use the same version of MTC. MTC 1.7 supports migrations from OpenShift Container Platform 3.11 to 4.8. MTC 1.8 only supports migrations from OpenShift Container Platform 4.9 and later. Table 7.1. MTC compatibility: Migrating from a legacy or a modern platform Details OpenShift Container Platform 3.11 OpenShift Container Platform 4.0 to 4.5 OpenShift Container Platform 4.6 to 4.8 OpenShift Container Platform 4.9 or later Stable MTC version MTC v.1.7. z MTC v.1.7. z MTC v.1.7. z MTC v.1.8. z Installation Legacy MTC v.1.7. z operator: Install manually with the operator.yml file. [ IMPORTANT ] This cluster cannot be the control cluster. Install with OLM, release channel release-v1.7 Install with OLM, release channel release-v1.8 Edge cases exist in which network restrictions prevent modern clusters from connecting to other clusters involved in the migration. For example, when migrating from an OpenShift Container Platform 3.11 cluster on premises to a modern OpenShift Container Platform cluster in the cloud, where the modern cluster cannot connect to the OpenShift Container Platform 3.11 cluster. With MTC v.1.7. z , if one of the remote clusters is unable to communicate with the control cluster because of network restrictions, use the crane tunnel-api command. 
With the stable MTC release, although you should always designate the most modern cluster as the control cluster, in this specific case it is possible to designate the legacy cluster as the control cluster and push workloads to the remote cluster. 7.2. Installing the Migration Toolkit for Containers Operator on OpenShift Container Platform 4.11 You install the Migration Toolkit for Containers Operator on OpenShift Container Platform 4.11 by using the Operator Lifecycle Manager. Prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. You must create an Operator catalog from a mirror image in a local registry. Procedure In the OpenShift Container Platform web console, click Operators OperatorHub . Use the Filter by keyword field to find the Migration Toolkit for Containers Operator . Select the Migration Toolkit for Containers Operator and click Install . Click Install . On the Installed Operators page, the Migration Toolkit for Containers Operator appears in the openshift-migration project with the status Succeeded . Click Migration Toolkit for Containers Operator . Under Provided APIs , locate the Migration Controller tile, and click Create Instance . Click Create . Click Workloads Pods to verify that the MTC pods are running. 7.3. Installing the legacy Migration Toolkit for Containers Operator on OpenShift Container Platform 3 You can install the legacy Migration Toolkit for Containers Operator manually on OpenShift Container Platform 3. Prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. You must have access to registry.redhat.io . You must have podman installed. You must create an image stream secret and copy it to each node in the cluster. You must have a Linux workstation with network access in order to download files from registry.redhat.io . You must create a mirror image of the Operator catalog. You must install the Migration Toolkit for Containers Operator from the mirrored Operator catalog on OpenShift Container Platform 4.11. Procedure Log in to registry.redhat.io with your Red Hat Customer Portal credentials: USD podman login registry.redhat.io Download the operator.yml file by entering the following command: podman cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/operator.yml ./ Download the controller.yml file by entering the following command: podman cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/controller.yml ./ Obtain the Operator image mapping by running the following command: USD grep openshift-migration-legacy-rhel8-operator ./mapping.txt | grep rhmtc The mapping.txt file was created when you mirrored the Operator catalog. The output shows the mapping between the registry.redhat.io image and your mirror registry image. Example output registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a=<registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator Update the image values for the ansible and operator containers and the REGISTRY value in the operator.yml file: containers: - name: ansible image: <registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:<468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a> 1 ... 
- name: operator image: <registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:<468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a> 2 ... env: - name: REGISTRY value: <registry.apps.example.com> 3 1 2 Specify your mirror registry and the sha256 value of the Operator image. 3 Specify your mirror registry. Log in to your OpenShift Container Platform source cluster. Create the Migration Toolkit for Containers Operator object: USD oc create -f operator.yml Example output namespace/openshift-migration created rolebinding.rbac.authorization.k8s.io/system:deployers created serviceaccount/migration-operator created customresourcedefinition.apiextensions.k8s.io/migrationcontrollers.migration.openshift.io created role.rbac.authorization.k8s.io/migration-operator created rolebinding.rbac.authorization.k8s.io/migration-operator created clusterrolebinding.rbac.authorization.k8s.io/migration-operator created deployment.apps/migration-operator created Error from server (AlreadyExists): error when creating "./operator.yml": rolebindings.rbac.authorization.k8s.io "system:image-builders" already exists 1 Error from server (AlreadyExists): error when creating "./operator.yml": rolebindings.rbac.authorization.k8s.io "system:image-pullers" already exists 1 You can ignore Error from server (AlreadyExists) messages. They are caused by the Migration Toolkit for Containers Operator creating resources for earlier versions of OpenShift Container Platform 4 that are provided in later releases. Create the MigrationController object: USD oc create -f controller.yml Verify that the MTC pods are running: USD oc get pods -n openshift-migration 7.4. Proxy configuration For OpenShift Container Platform 4.1 and earlier versions, you must configure proxies in the MigrationController custom resource (CR) manifest after you install the Migration Toolkit for Containers Operator because these versions do not support a cluster-wide proxy object. For OpenShift Container Platform 4.2 to 4.11, the Migration Toolkit for Containers (MTC) inherits the cluster-wide proxy settings. You can change the proxy parameters if you want to override the cluster-wide proxy settings. 7.4.1. Direct volume migration Direct Volume Migration (DVM) was introduced in MTC 1.4.2. DVM supports only one proxy. The source cluster cannot access the route of the target cluster if the target cluster is also behind a proxy. If you want to perform a DVM from a source cluster behind a proxy, you must configure a TCP proxy that works at the transport layer and forwards the SSL connections transparently without decrypting and re-encrypting them with their own SSL certificates. A Stunnel proxy is an example of such a proxy. 7.4.1.1. TCP proxy setup for DVM You can set up a direct connection between the source and the target cluster through a TCP proxy and configure the stunnel_tcp_proxy variable in the MigrationController CR to use the proxy: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] stunnel_tcp_proxy: http://username:password@ip:port Direct volume migration (DVM) supports only basic authentication for the proxy. Moreover, DVM works only from behind proxies that can tunnel a TCP connection transparently. HTTP/HTTPS proxies in man-in-the-middle mode do not work. The existing cluster-wide proxies might not support this behavior. 
As a result, the proxy settings for DVM are intentionally kept different from the usual proxy configuration in MTC. 7.4.1.2. Why use a TCP proxy instead of an HTTP/HTTPS proxy? You can enable DVM by running Rsync between the source and the target cluster over an OpenShift route. Traffic is encrypted using Stunnel, a TCP proxy. The Stunnel running on the source cluster initiates a TLS connection with the target Stunnel and transfers data over an encrypted channel. Cluster-wide HTTP/HTTPS proxies in OpenShift are usually configured in man-in-the-middle mode where they negotiate their own TLS session with the outside servers. However, this does not work with Stunnel. Stunnel requires that its TLS session be untouched by the proxy, essentially making the proxy a transparent tunnel which simply forwards the TCP connection as-is. Therefore, you must use a TCP proxy. 7.4.1.3. Known issue Migration fails with error Upgrade request required The migration Controller uses the SPDY protocol to execute commands within remote pods. If the remote cluster is behind a proxy or a firewall that does not support the SPDY protocol, the migration controller fails to execute remote commands. The migration fails with the error message Upgrade request required . Workaround: Use a proxy that supports the SPDY protocol. In addition to supporting the SPDY protocol, the proxy or firewall also must pass the Upgrade HTTP header to the API server. The client uses this header to open a websocket connection with the API server. If the Upgrade header is blocked by the proxy or firewall, the migration fails with the error message Upgrade request required . Workaround: Ensure that the proxy forwards the Upgrade header. 7.4.2. Tuning network policies for migrations OpenShift supports restricting traffic to or from pods using NetworkPolicy or EgressFirewalls based on the network plugin used by the cluster. If any of the source namespaces involved in a migration use such mechanisms to restrict network traffic to pods, the restrictions might inadvertently stop traffic to Rsync pods during migration. Rsync pods running on both the source and the target clusters must connect to each other over an OpenShift Route. Existing NetworkPolicy or EgressNetworkPolicy objects can be configured to automatically exempt Rsync pods from these traffic restrictions. 7.4.2.1. NetworkPolicy configuration 7.4.2.1.1. Egress traffic from Rsync pods You can use the unique labels of Rsync pods to allow egress traffic to pass from them if the NetworkPolicy configuration in the source or destination namespaces blocks this type of traffic. The following policy allows all egress traffic from Rsync pods in the namespace: apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer egress: - {} policyTypes: - Egress 7.4.2.1.2. Ingress traffic to Rsync pods apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer ingress: - {} policyTypes: - Ingress 7.4.2.2. EgressNetworkPolicy configuration The EgressNetworkPolicy object or Egress Firewalls are OpenShift constructs designed to block egress traffic leaving the cluster. Unlike the NetworkPolicy object, the Egress Firewall works at a project level because it applies to all pods in the namespace. 
Therefore, the unique labels of Rsync pods do not exempt only Rsync pods from the restrictions. However, you can add the CIDR ranges of the source or target cluster to the Allow rule of the policy so that a direct connection can be setup between two clusters. Based on which cluster the Egress Firewall is present in, you can add the CIDR range of the other cluster to allow egress traffic between the two: apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: test-egress-policy namespace: <namespace> spec: egress: - to: cidrSelector: <cidr_of_source_or_target_cluster> type: Deny 7.4.2.3. Choosing alternate endpoints for data transfer By default, DVM uses an OpenShift Container Platform route as an endpoint to transfer PV data to destination clusters. You can choose another type of supported endpoint, if cluster topologies allow. For each cluster, you can configure an endpoint by setting the rsync_endpoint_type variable on the appropriate destination cluster in your MigrationController CR: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] rsync_endpoint_type: [NodePort|ClusterIP|Route] 7.4.2.4. Configuring supplemental groups for Rsync pods When your PVCs use a shared storage, you can configure the access to that storage by adding supplemental groups to Rsync pod definitions in order for the pods to allow access: Table 7.2. Supplementary groups for Rsync pods Variable Type Default Description src_supplemental_groups string Not set Comma-separated list of supplemental groups for source Rsync pods target_supplemental_groups string Not set Comma-separated list of supplemental groups for target Rsync pods Example usage The MigrationController CR can be updated to set values for these supplemental groups: spec: src_supplemental_groups: "1000,2000" target_supplemental_groups: "2000,3000" 7.4.3. Configuring proxies Prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. Procedure Get the MigrationController CR manifest: USD oc get migrationcontroller <migration_controller> -n openshift-migration Update the proxy parameters: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: <migration_controller> namespace: openshift-migration ... spec: stunnel_tcp_proxy: http://<username>:<password>@<ip>:<port> 1 noProxy: example.com 2 1 Stunnel proxy URL for direct volume migration. 2 Comma-separated list of destination domain names, domains, IP addresses, or other network CIDRs to exclude proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass proxy for all destinations. If you scale up workers that are not included in the network defined by the networking.machineNetwork[].cidr field from the installation configuration, you must add them to this list to prevent connection issues. This field is ignored if neither the httpProxy nor the httpsProxy field is set. Save the manifest as migration-controller.yaml . Apply the updated manifest: USD oc replace -f migration-controller.yaml -n openshift-migration For more information, see Configuring the cluster-wide proxy . 7.5. Configuring a replication repository The Multicloud Object Gateway is the only supported option for a restricted network environment. MTC supports the file system and snapshot data copy methods for migrating data from the source cluster to the target cluster. 
You can select a method that is suited for your environment and is supported by your storage provider. 7.5.1. Prerequisites All clusters must have uninterrupted network access to the replication repository. If you use a proxy server with an internally hosted replication repository, you must ensure that the proxy allows access to the replication repository. 7.5.2. Retrieving Multicloud Object Gateway credentials You must retrieve the Multicloud Object Gateway (MCG) credentials in order to create a Secret custom resource (CR) for the OpenShift API for Data Protection (OADP). MCG is a component of OpenShift Data Foundation. Prerequisites You must deploy OpenShift Data Foundation by using the appropriate OpenShift Data Foundation deployment guide . Procedure Obtain the S3 endpoint, AWS_ACCESS_KEY_ID , and AWS_SECRET_ACCESS_KEY by running the describe command on the NooBaa custom resource. 7.5.3. Additional resources Disconnected environment in the Red Hat OpenShift Data Foundation documentation. MTC workflow About data copy methods Adding a replication repository to the MTC web console 7.6. Uninstalling MTC and deleting resources You can uninstall the Migration Toolkit for Containers (MTC) and delete its resources to clean up the cluster. Note Deleting the velero CRDs removes Velero from the cluster. Prerequisites You must be logged in as a user with cluster-admin privileges. Procedure Delete the MigrationController custom resource (CR) on all clusters: USD oc delete migrationcontroller <migration_controller> Uninstall the Migration Toolkit for Containers Operator on OpenShift Container Platform 4 by using the Operator Lifecycle Manager. Delete cluster-scoped resources on all clusters by running the following commands: migration custom resource definitions (CRDs): USD oc delete USD(oc get crds -o name | grep 'migration.openshift.io') velero CRDs: USD oc delete USD(oc get crds -o name | grep 'velero') migration cluster roles: USD oc delete USD(oc get clusterroles -o name | grep 'migration.openshift.io') migration-operator cluster role: USD oc delete clusterrole migration-operator velero cluster roles: USD oc delete USD(oc get clusterroles -o name | grep 'velero') migration cluster role bindings: USD oc delete USD(oc get clusterrolebindings -o name | grep 'migration.openshift.io') migration-operator cluster role bindings: USD oc delete clusterrolebindings migration-operator velero cluster role bindings: USD oc delete USD(oc get clusterrolebindings -o name | grep 'velero')
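The procedure in section 7.5.2 refers to running the describe command on the NooBaa custom resource without showing it. The following is a minimal sketch, not taken from this document: it assumes OpenShift Data Foundation was deployed in the default openshift-storage namespace and that the Multicloud Object Gateway admin credentials are stored in a secret named noobaa-admin; both names are assumptions and may differ in your deployment.

# Show the NooBaa custom resource, including the reported S3 endpoint (namespace is an assumption)
oc describe noobaa -n openshift-storage

# Decode the MCG access keys; the secret name noobaa-admin is an assumption
oc get secret noobaa-admin -n openshift-storage -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 -d; echo
oc get secret noobaa-admin -n openshift-storage -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 -d; echo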
[ "podman login registry.redhat.io", "cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/operator.yml ./", "cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/controller.yml ./", "grep openshift-migration-legacy-rhel8-operator ./mapping.txt | grep rhmtc", "registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a=<registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator", "containers: - name: ansible image: <registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:<468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a> 1 - name: operator image: <registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:<468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a> 2 env: - name: REGISTRY value: <registry.apps.example.com> 3", "oc create -f operator.yml", "namespace/openshift-migration created rolebinding.rbac.authorization.k8s.io/system:deployers created serviceaccount/migration-operator created customresourcedefinition.apiextensions.k8s.io/migrationcontrollers.migration.openshift.io created role.rbac.authorization.k8s.io/migration-operator created rolebinding.rbac.authorization.k8s.io/migration-operator created clusterrolebinding.rbac.authorization.k8s.io/migration-operator created deployment.apps/migration-operator created Error from server (AlreadyExists): error when creating \"./operator.yml\": rolebindings.rbac.authorization.k8s.io \"system:image-builders\" already exists 1 Error from server (AlreadyExists): error when creating \"./operator.yml\": rolebindings.rbac.authorization.k8s.io \"system:image-pullers\" already exists", "oc create -f controller.yml", "oc get pods -n openshift-migration", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] stunnel_tcp_proxy: http://username:password@ip:port", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer egress: - {} policyTypes: - Egress", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer ingress: - {} policyTypes: - Ingress", "apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: test-egress-policy namespace: <namespace> spec: egress: - to: cidrSelector: <cidr_of_source_or_target_cluster> type: Deny", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] 
rsync_endpoint_type: [NodePort|ClusterIP|Route]", "spec: src_supplemental_groups: \"1000,2000\" target_supplemental_groups: \"2000,3000\"", "oc get migrationcontroller <migration_controller> -n openshift-migration", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: <migration_controller> namespace: openshift-migration spec: stunnel_tcp_proxy: http://<username>:<password>@<ip>:<port> 1 noProxy: example.com 2", "oc replace -f migration-controller.yaml -n openshift-migration", "oc delete migrationcontroller <migration_controller>", "oc delete USD(oc get crds -o name | grep 'migration.openshift.io')", "oc delete USD(oc get crds -o name | grep 'velero')", "oc delete USD(oc get clusterroles -o name | grep 'migration.openshift.io')", "oc delete clusterrole migration-operator", "oc delete USD(oc get clusterroles -o name | grep 'velero')", "oc delete USD(oc get clusterrolebindings -o name | grep 'migration.openshift.io')", "oc delete clusterrolebindings migration-operator", "oc delete USD(oc get clusterrolebindings -o name | grep 'velero')" ]
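As an alternative to saving and replacing the MigrationController manifest when configuring proxies, the same spec fields can be applied in place with a merge patch. This is only a sketch: the CR name migration-controller matches the examples above, and the proxy URL and noProxy value are placeholders to substitute with your own.

# Apply the Stunnel proxy and noProxy settings directly to the running CR
oc patch migrationcontroller migration-controller -n openshift-migration --type merge -p '{"spec":{"stunnel_tcp_proxy":"http://<username>:<password>@<ip>:<port>","noProxy":"example.com"}}'

# Confirm the change was picked up
oc get migrationcontroller migration-controller -n openshift-migration -o yaml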
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/migrating_from_version_3_to_4/installing-restricted-3-4
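If Rsync transfers stall in a namespace that restricts traffic, it can help to confirm that the network policies described above exist and that the Rsync transfer pods actually carry the labels the podSelector expects. A small sketch, assuming cluster-admin access; <namespace> is a placeholder for the migrated namespace.

# List the network policies in the migrated namespace
oc get networkpolicy -n <namespace>

# Check that the Rsync transfer pods carry the expected labels
oc get pods -n <namespace> -l app=directvolumemigration-rsync-transfer --show-labels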
8.5.3. Plug-in Descriptions
8.5.3. Plug-in Descriptions The following list provides descriptions and usage instructions for several useful yum plug-ins. Plug-ins are listed by names, brackets contain the name of the package. search-disabled-repos ( subscription-manager ) The search-disabled-repos plug-in allows you to temporarily or permanently enable disabled repositories to help resolve dependencies. With this plug-in enabled, when Yum fails to install a package due to failed dependency resolution, it offers to temporarily enable disabled repositories and try again. If the installation succeeds, Yum also offers to enable the used repositories permanently. Note that the plug-in works only with the repositories that are managed by subscription-manager and not with custom repositories. Important If yum is executed with the --assumeyes or -y option, or if the assumeyes directive is enabled in /etc/yum.conf , the plug-in enables disabled repositories, both temporarily and permanently, without prompting for confirmation. This may lead to problems, for example, enabling repositories that you do not want enabled. To configure the search-disabled-repos plug-in, edit the configuration file located in /etc/yum/pluginconf.d/search-disabled-repos.conf . For the list of directives you can use in the [main] section, see the table below. Table 8.4. Supported search-disabled-repos.conf directives Directive Description enabled = value Allows you to enable or disable the plug-in. The value must be either 1 (enabled), or 0 (disabled). The plug-in is enabled by default. notify_only = value Allows you to restrict the behavior of the plug-in to notifications only. The value must be either 1 (notify only without modifying the behavior of Yum), or 0 (modify the behavior of Yum). By default the plug-in only notifies the user. ignored_repos = repositories Allows you to specify the repositories that will not be enabled by the plug-in. kabi ( kabi-yum-plugins ) The kabi plug-in checks whether a driver update package conforms with official Red Hat kernel Application Binary Interface ( kABI ). With this plug-in enabled, when a user attempts to install a package that uses kernel symbols which are not on a whitelist, a warning message is written to the system log. Additionally, configuring the plug-in to run in enforcing mode prevents such packages from being installed at all. To configure the kabi plug-in, edit the configuration file located in /etc/yum/pluginconf.d/kabi.conf . See Table 8.5, "Supported kabi.conf directives" for a list of directives that can be used in the [main] section. Table 8.5. Supported kabi.conf directives Directive Description enabled = value Allows you to enable or disable the plug-in. The value must be either 1 (enabled), or 0 (disabled). When installed, the plug-in is enabled by default. whitelists = directory Allows you to specify the directory in which the files with supported kernel symbols are located. By default, the kabi plug-in uses files provided by the kernel-abi-whitelists package (that is, the /lib/modules/kabi/ directory). enforce = value Allows you to enable or disable enforcing mode. The value must be either 1 (enabled), or 0 (disabled). By default, this option is commented out and the kabi plug-in only displays a warning message. presto ( yum-presto ) The presto plug-in adds support to Yum for downloading delta RPM packages, during updates, from repositories which have presto metadata enabled. 
Delta RPMs contain only the differences between the version of the package installed on the client requesting the RPM package and the updated version in the repository. Downloading a delta RPM is much quicker than downloading the entire updated package, and can speed up updates considerably. Once the delta RPMs are downloaded, they must be rebuilt to apply the difference to the currently-installed package and thus create the full, updated package. This process takes CPU time on the installing machine. Using delta RPMs is therefore a compromise between time-to-download, which depends on the network connection, and time-to-rebuild, which is CPU-bound. Using the presto plug-in is recommended for fast machines and systems with slower network connections, while slower machines on very fast connections benefit more from downloading normal RPM packages, that is, by disabling presto . product-id ( subscription-manager ) The product-id plug-in manages product identity certificates for products installed from the Content Delivery Network. The product-id plug-in is installed by default. refresh-packagekit ( PackageKit-yum-plugin ) The refresh-packagekit plug-in updates metadata for PackageKit whenever yum is run. The refresh-packagekit plug-in is installed by default. rhnplugin ( yum-rhn-plugin ) The rhnplugin provides support for connecting to RHN Classic . This allows systems registered with RHN Classic to update and install packages from this system. Note that RHN Classic is only provided for older Red Hat Enterprise Linux systems (that is, Red Hat Enterprise Linux 4.x, Red Hat Enterprise Linux 5.x, and Satellite 5.x) in order to migrate them over to Red Hat Enterprise Linux 6. The rhnplugin is installed by default. See the rhnplugin (8) manual page for more information about the plug-in. security ( yum-plugin-security ) Discovering information about and applying security updates easily and often is important to all system administrators. For this reason Yum provides the security plug-in, which extends yum with a set of highly-useful security-related commands, subcommands and options. You can check for security-related updates as follows: You can then use either yum update --security or yum update-minimal --security to update those packages which are affected by security advisories. Both of these commands update all packages on the system for which a security advisory has been issued. yum update-minimal --security updates them to the latest packages which were released as part of a security advisory, while yum update --security will update all packages affected by a security advisory to the latest version of that package available . In other words, if: the kernel-2.6.30.8-16 package is installed on your system; the kernel-2.6.30.8-32 package was released as a security update; then kernel-2.6.30.8-64 was released as a bug fix update, ...then yum update-minimal --security will update you to kernel-2.6.30.8-32 , and yum update --security will update you to kernel-2.6.30.8-64 . Conservative system administrators probably want to use update-minimal to reduce the risk incurred by updating packages as much as possible. See the yum-security (8) manual page for usage details and further explanation of the enhancements the security plug-in adds to yum . subscription-manager ( subscription-manager ) The subscription-manager plug-in provides support for connecting to Red Hat Network . 
This allows systems registered with Red Hat Network to update and install packages from the certificate-based Content Delivery Network. The subscription-manager plug-in is installed by default. See Chapter 6, Registering the System and Managing Subscriptions for more information on how to manage product subscriptions and entitlements. yum-downloadonly ( yum-plugin-downloadonly ) The yum-downloadonly plug-in provides the --downloadonly command-line option, which can be used to download packages from Red Hat Network or a configured Yum repository without installing the packages. To install the plug-in package, follow the instructions in Section 8.5.2, "Installing Additional Yum Plug-ins" . After the installation, see the contents of the /etc/yum/pluginconf.d/downloadonly.conf file to ensure that the plug-in is enabled: In the following example, the yum install --downloadonly command is run to download the latest version of the httpd package, without installing it: By default, packages downloaded using the --downloadonly option are saved in one of the subdirectories of the /var/cache/yum directory, depending on the Red Hat Enterprise Linux variant and architecture. If you want to specify an alternate directory to save the packages, pass the --downloaddir option along with --downloadonly : Note As an alternative to the yum-downloadonly plug-in, you can use the yumdownloader utility provided by the yum-utils package to download packages without installing them.
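The note above mentions the yumdownloader utility as an alternative to the yum-downloadonly plug-in. A brief sketch of its use follows; it assumes the yum-utils package is installed, and the destination directory is arbitrary.

# Download httpd and its dependencies without installing anything
yumdownloader --resolve --destdir=/tmp/packages httpd

# Inspect the downloaded packages
ls /tmp/packages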
[ "~]# yum check-update --security Loaded plugins: product-id, refresh-packagekit, security, subscription-manager Updating Red Hat repositories. INFO:rhsm-app.repolib:repos updated: 0 Limiting package lists to security relevant ones Needed 3 of 7 packages, for security elinks.x86_64 0.12-0.13.el6 rhel kernel.x86_64 2.6.30.8-64.el6 rhel kernel-headers.x86_64 2.6.30.8-64.el6 rhel", "~]USD cat /etc/yum/pluginconf.d/downloadonly.conf [main] enabled=1", "~]# yum install httpd --downloadonly Loaded plugins: downloadonly, product-id, refresh-packagekit, rhnplugin, : subscription-manager Updating Red Hat repositories. Setting up Install Process Resolving Dependencies --> Running transaction check ---> Package httpd.x86_64 0:2.2.15-9.el6_1.2 will be updated ---> Package httpd.x86_64 0:2.2.15-15.el6_2.1 will be an update --> Processing Dependency: httpd-tools = 2.2.15-15.el6_2.1 for package: httpd-2.2.15-15.el6_2.1.x86_64 --> Running transaction check ---> Package httpd-tools.x86_64 0:2.2.15-9.el6_1.2 will be updated ---> Package httpd-tools.x86_64 0:2.2.15-15.el6_2.1 will be an update --> Finished Dependency Resolution Dependencies Resolved ================================================================================ Package Arch Version Repository Size ================================================================================ Updating: httpd x86_64 2.2.15-15.el6_2.1 rhel-x86_64-server-6 812 k Updating for dependencies: httpd-tools x86_64 2.2.15-15.el6_2.1 rhel-x86_64-server-6 70 k Transaction Summary ================================================================================ Upgrade 2 Package(s) Total download size: 882 k Is this ok [y/N]: y Downloading Packages: (1/2): httpd-2.2.15-15.el6_2.1.x86_64.rpm | 812 kB 00:00 (2/2): httpd-tools-2.2.15-15.el6_2.1.x86_64.rpm | 70 kB 00:00 -------------------------------------------------------------------------------- Total 301 kB/s | 882 kB 00:02 exiting because --downloadonly specified", "~]# yum install --downloadonly --downloaddir=/path/to/directory httpd" ]
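Because the presto trade-off described above depends on CPU speed versus network bandwidth, it can be convenient to disable a plug-in for a single transaction instead of editing its configuration file. A hedged example; both switches are standard yum command-line options.

# Run one update with the presto plug-in disabled, so full RPMs are downloaded for this run only
yum --disableplugin=presto update

# Run a single command with all plug-ins disabled
yum --noplugins list updates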
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sec-Plugin_Descriptions
Chapter 1. Overview
Chapter 1. Overview Security Red Hat Enterprise Linux 7.4 introduces support for Network Bound Disk Encryption (NBDE), which enables the system administrator to encrypt root volumes of hard drives on bare metal machines without requiring to manually enter password when systems are rebooted. The USBGuard software framework provides system protection against intrusive USB devices by implementing basic whitelisting and blacklisting capabilities based on device attributes. The OpenSSH libraries update includes the ability to resume interrupted uploads in Secure File Transfer Protocol (SFTP) and adds support for a new fingerprint type that uses the SHA-256 algorithm. This OpenSSH version also removes server-side support for the SSH-1 protocol. Multiple new Linux Audit capabilities have been added to enable easier administration, to filter the events logged by the Audit system, gather more information from critical events, and to interpret large numbers of records. The OpenSC set of libraries and utilities adds support for Common Access Card (CAC) cards and now provides also the CoolKey applet functionality. The OpenSSL update includes multiple enhancements, such as support for the Datagram Transport Layer Security (DTLS) version 1.2 protocol and Application-Layer Protocol Negotiation (ALPN). The OpenSCAP tools have been NIST-certified, which enables easier adoption in regulated environments. Cryptographic protocols and algorithms that are considered insecure have been deprecated. However, this version also introduces a lot of other cryptographic-related improvements. For more information, see Part V, "Deprecated Functionality" and the Enhancing the Security of the Operating System with Cryptography Changes in Red Hat Enterprise Linux 7.4 Knowledgebase article on the Red Hat Customer Portal. See Chapter 15, Security for more information on security enhancements. Identity Management The System Security Services Daemon (SSSD) in a container is now fully supported. The Identity Management (IdM) server container is available as a Technology Preview feature. Users are now able to install new Identity Management servers, replicas, and clients on systems with FIPS mode enabled. Several enhancements related to smart card authentication have been introduced. For detailed information on changes in IdM, see Chapter 5, Authentication and Interoperability . For details on deprecated capabilities related to IdM, see Part V, "Deprecated Functionality" . Networking NetworkManager supports additional features for routing, enables the Media Access Control Security (MACsec) technology, and is now able to handle unmanaged devices. Kernel Generic Routing Encapsulation (GRE) tunneling has been enhanced. For more networking features, see Chapter 14, Networking . Kernel Support for NVMe Over Fabric has been added to the NVM-Express kernel driver, which increases flexibility when accessing high performance NVMe storage devices located in the data center on both Ethernet or Infiniband fabric infrastructures. For further kernel-related changes, refer to Chapter 12, Kernel . Storage and File Systems LVM provides full support for RAID takeover, which allows users to convert a RAID logical volume from one RAID level to another, and for RAID reshaping, which allows users to reshape properties, such as the RAID algorithm, stripe size, or number of images. You can now enable SELinux support for containers when you use OverlayFS with Docker. 
NFS over RDMA (NFSoRDMA) server is now fully supported when accessed by Red Hat Enterprise Linux clients. See Chapter 17, Storage for further storage-related features and Chapter 9, File Systems for enhancements to file systems. Tools The Performance Co-Pilot (PCP) application has been enhanced to support new client tools, such as pcp2influxdb , pcp-mpstat , and pcp-pidstat . Additionally, new PCP performance metrics from several subsystems are available for a variety of Performance Co-Pilot analysis tools. For more information regarding updates to various tools, see Chapter 7, Compiler and Tools . High Availability Red Hat Enterprise Linux 7.4 introduces full support for the following features: clufter , a tool for transforming and analyzing cluster configuration formats Quorum devices (QDevice) in a Pacemaker cluster for managing stretch clusters Booth cluster ticket manager For more information on the high availability features introduced in this release, see Chapter 6, Clustering . Virtualization Red Hat Enterprise Linux 7 guest virtual machines now support the Elastic Network Adapter (ENA), and thus provide enhanced networking capabilities when running on the the Amazon Web Services (AWS) cloud. For further enhancements to Virtualization, see Chapter 19, Virtualization . Management and Automation Red Hat Enterprise Linux 7.4 includes Red Hat Enterprise Linux System Roles powered by Ansible , a configuration interface that simplifies management and maintenance of Red Hat Enterprise Linux deployments. This feature is available as a Technology Preview. For details, refer to Chapter 47, Red Hat Enterprise Linux System Roles Powered by Ansible . Red Hat Insights Since Red Hat Enterprise Linux 7.2, the Red Hat Insights service is available. Red Hat Insights is a proactive service designed to enable you to identify, examine, and resolve known technical issues before they affect your deployment. Insights leverages the combined knowledge of Red Hat Support Engineers, documented solutions, and resolved issues to deliver relevant, actionable information to system administrators. The service is hosted and delivered through the customer portal at https://access.redhat.com/insights/ or through Red Hat Satellite. For further information, data security, and limits, refer to https://access.redhat.com/insights/splash/ . Red Hat Customer Portal Labs Red Hat Customer Portal Labs is a set of tools in a section of the Customer Portal available at https://access.redhat.com/labs/ . The applications in Red Hat Customer Portal Labs can help you improve performance, quickly troubleshoot issues, identify security problems, and quickly deploy and configure complex applications. Some of the most popular applications are: Registration Assistant Code Browser Red Hat Product Certificates Red Hat Network (RHN) System List Exporter Kickstart Generator Log Reaper Load Balancer Configuration Tool Multipath Helper
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.4_release_notes/chap-red_hat_enterprise_linux-7.4_release_notes-overview
12.5. Tracking Certificates with certmonger
12.5. Tracking Certificates with certmonger certmonger can monitor the expiration date of a certificate and automatically renew the certificate at the end of its validity period. To track a certificate in this way, run the getcert start-tracking command. Note It is not required that you run getcert start-tracking after running getcert request , because the getcert request command by default automatically tracks and renews the requested certificate. The getcert start-tracking command is intended for situations when you have already obtained the key and certificate through some other process, and therefore you have to manually instruct certmonger to start the tracking. The getcert start-tracking command takes several options: -r automatically renews the certificate when its expiration date is close if the key pair already exists. This option is used by default. -I sets a name for the tracking request. certmonger uses this name to refer to the combination of storage locations and request options, and it is also displayed in the output of the getcert list command. If you do not specify this option, certmonger assigns an automatically generated name for the task. To cancel tracking for a certificate, run the getcert stop-tracking command.
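To see which tracking requests certmonger currently manages and then cancel one, the following minimal sketch can be used. The request name cert1-tracker matches the -I value used in the example for this section; substitute your own request name.

# Show all certificates tracked by certmonger, including their request names and states
getcert list

# Stop tracking the request that was started with -I cert1-tracker
getcert stop-tracking -i cert1-tracker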
[ "getcert start-tracking -I cert1-tracker -d /export/alias -n ServerCert" ]
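If a tracked certificate needs to be renewed before certmonger's automatic renewal is triggered, the request can be resubmitted manually. A sketch, again using the cert1-tracker name from the example above.

# Ask certmonger to renew the tracked certificate immediately
getcert resubmit -i cert1-tracker

# Review the request state afterwards; MONITORING indicates the renewed certificate is being tracked
getcert list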
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/system-level_authentication_guide/certmonger-tracking-certs
3.4. Multi-port Services and Load Balancer
3.4. Multi-port Services and Load Balancer LVS routers under any topology require extra configuration when creating multi-port Load Balancer services. Multi-port services can be created artificially by using firewall marks to bundle together different, but related protocols, such as HTTP (port 80) and HTTPS (port 443), or when Load Balancer is used with true multi-port protocols, such as FTP. In either case, the LVS router uses firewall marks to recognize that packets destined for different ports, but bearing the same firewall mark, should be handled identically. Also, when combined with persistence, firewall marks ensure connections from the client machine are routed to the same host, as long as the connections occur within the length of time specified by the persistence parameter. Although the mechanism used to balance the loads on the real servers, IPVS, can recognize the firewall marks assigned to a packet, it cannot itself assign firewall marks. The job of assigning firewall marks must be performed by the network packet filter, iptables . The default firewall administration tool in Red Hat Enterprise Linux 7 is firewalld , which can be used to configure iptables . If preferred, iptables can be used directly. See Red Hat Enterprise Linux 7 Security Guide for information on working with iptables in Red Hat Enterprise Linux 7. 3.4.1. Assigning Firewall Marks Using firewalld To assign firewall marks to a packet destined for a particular port, the administrator can use firewalld 's firewall-cmd utility. If required, confirm that firewalld is running: To start firewalld , enter: To ensure firewalld is enabled to start at system start: This section illustrates how to bundle HTTP and HTTPS as an example; however, FTP is another commonly clustered multi-port protocol. The basic rule to remember when using firewall marks is that for every protocol using a firewall mark in Keepalived there must be a commensurate firewall rule to assign marks to the network packets. Before creating network packet filter rules, make sure there are no rules already in place. To do this, open a shell prompt, login as root , and enter the following command: If no rich rules are present the prompt will instantly reappear. If firewalld is active and rich rules are present, it displays a set of rules. If the rules already in place are important, check the contents of /etc/firewalld/zones/ and copy any rules worth keeping to a safe place before proceeding. Delete unwanted rich rules using a command in the following format: firewall-cmd --zone= zone --remove-rich-rule=' rule ' --permanent The --permanent option makes the setting persistent, but the command will only take effect at system start. If required to make the setting take effect immediately, repeat the command omitting the --permanent option. The first load balancer related firewall rule to be configured is to allow VRRP traffic for the Keepalived service to function. Enter the following command: If the zone is omitted the default zone will be used. Below are rules which assign the same firewall mark, 80 , to incoming traffic destined for the floating IP address, n.n.n.n , on ports 80 and 443. If the zone is omitted the default zone will be used. See the Red Hat Enterprise Linux 7 Security Guide for more information on the use of firewalld 's rich language commands. 3.4.2. Assigning Firewall Marks Using iptables To assign firewall marks to a packet destined for a particular port, the administrator can use iptables . 
This section illustrates how to bundle HTTP and HTTPS as an example; however, FTP is another commonly clustered multi-port protocol. The basic rule to remember when using firewall marks is that for every protocol using a firewall mark in Keepalived there must be a commensurate firewall rule to assign marks to the network packets. Before creating network packet filter rules, make sure there are no rules already in place. To do this, open a shell prompt, log in as root , and enter the following command: /usr/sbin/service iptables status If iptables is not running, the prompt will instantly reappear. If iptables is active, it displays a set of rules. If rules are present, enter the following command: /sbin/service iptables stop If the rules already in place are important, check the contents of /etc/sysconfig/iptables and copy any rules worth keeping to a safe place before proceeding. The first load balancer-related firewall rule to configure is one that allows VRRP traffic for the Keepalived service to function. Below are rules which assign the same firewall mark, 80 , to incoming traffic destined for the floating IP address, n.n.n.n , on ports 80 and 443. Note that you must log in as root and load the module for iptables before issuing rules for the first time. In the above iptables commands, n.n.n.n should be replaced with the floating IP for your HTTP and HTTPS virtual servers. These commands have the net effect of assigning any traffic addressed to the VIP on the appropriate ports a firewall mark of 80, which in turn is recognized by IPVS and forwarded appropriately. Warning The commands above will take effect immediately, but do not persist through a reboot of the system.
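Because the warning above notes that these rules take effect immediately but do not persist across a reboot, they can be written to the file that the iptables service loads at boot. This is a sketch and assumes the iptables-services package is installed and the iptables service is enabled; on systems managed purely by firewalld, use the firewall-cmd rich rules with --permanent instead, as shown in the previous section.

# Write the current rules, including the mangle table marks, to the file read at boot
iptables-save > /etc/sysconfig/iptables

# Or, equivalently, with the iptables initscript wrapper
service iptables save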
[ "systemctl status firewalld firewalld.service - firewalld - dynamic firewall daemon Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled) Active: active (running) since Tue 2016-01-26 05:23:53 EST; 7h ago", "systemctl start firewalld", "systemctl enable firewalld", "firewall-cmd --list-rich-rules", "firewall-cmd --add-rich-rule='rule protocol value=\"vrrp\" accept' --permanent", "firewall-cmd --add-rich-rule='rule family=\"ipv4\" destination address=\"n.n.n.n/32\" port port=\"80\" protocol=\"tcp\" mark set=\"80\"' --permanent firewall-cmd --add-rich-rule='rule family=\"ipv4\" destination address=\"n.n.n.n/32\" port port=\"443\" protocol=\"tcp\" mark set=\"80\"' --permanent firewall-cmd --reload success firewall-cmd --list-rich-rules rule protocol value=\"vrrp\" accept rule family=\"ipv4\" destination address=\"n.n.n.n/32\" port port=\"80\" protocol=\"tcp\" mark set=80 rule family=\"ipv4\" destination address=\"n.n.n.n/32\" port port=\"443\" protocol=\"tcp\" mark set=80", "/usr/sbin/iptables -I INPUT -p vrrp -j ACCEPT", "/usr/sbin/iptables -t mangle -A PREROUTING -p tcp -d n.n.n.n/32 -m multiport --dports 80,443 -j MARK --set-mark 80" ]
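To confirm that the mark is being applied and that IPVS is routing on it, the mangle table counters and the IPVS table can be inspected. A minimal sketch; the FWM 80 entry only appears once Keepalived defines a virtual server for that firewall mark.

# Packet and byte counters for the PREROUTING rules that set the mark
iptables -t mangle -L PREROUTING -n -v

# IPVS view of the firewall-mark-based virtual service and its real servers
ipvsadm -L -n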
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/load_balancer_administration/s1-lvs-multi-vsa
Chapter 9. nova
Chapter 9. nova The following chapter contains information about the configuration options in the nova service. 9.1. nova.conf This section contains options for the /etc/nova/nova.conf file. 9.1.1. DEFAULT The following table outlines the options available under the [DEFAULT] group in the /etc/nova/nova.conf file. . Configuration option = Default value Type Description allow_resize_to_same_host = False boolean value Allow destination machine to match source for resize. Useful when testing in single-host environments. By default it is not allowed to resize to the same host. Setting this option to true will add the same host to the destination options. Also set to true if you allow the ServerGroupAffinityFilter and need to resize. For changes to this option to take effect, the nova-api service needs to be restarted. arq_binding_timeout = 300 integer value Timeout for Accelerator Request (ARQ) bind event message arrival. Number of seconds to wait for ARQ bind resolution event to arrive. The event indicates that every ARQ for an instance has either bound successfully or failed to bind. If it does not arrive, instance bringup is aborted with an exception. backdoor_port = None string value Enable eventlet backdoor. Acceptable values are 0, <port>, and <start>:<end>, where 0 results in listening on a random tcp port number; <port> results in listening on the specified port number (and not enabling backdoor if that port is in use); and <start>:<end> results in listening on the smallest unused port number within the specified range of port numbers. The chosen port is displayed in the service's log file. backdoor_socket = None string value Enable eventlet backdoor, using the provided path as a unix socket that can receive connections. This option is mutually exclusive with backdoor_port in that only one should be provided. If both are provided then the existence of this option overrides the usage of that option. Inside the path {pid} will be replaced with the PID of the current process. block_device_allocate_retries = 60 integer value The number of times to check for a volume to be "available" before attaching it during server create. When creating a server with block device mappings where source_type is one of blank , image or snapshot and the destination_type is volume , the nova-compute service will create a volume and then attach it to the server. Before the volume can be attached, it must be in status "available". This option controls how many times to check for the created volume to be "available" before it is attached. If the operation times out, the volume will be deleted if the block device mapping delete_on_termination value is True. It is recommended to configure the image cache in the block storage service to speed up this operation. See https://docs.openstack.org/cinder/latest/admin/blockstorage-image-volume-cache.html for details. Possible values: 60 (default) If value is 0, then one attempt is made. For any value > 0, total attempts are (value + 1) Related options: block_device_allocate_retries_interval - controls the interval between checks block_device_allocate_retries_interval = 3 integer value Interval (in seconds) between block device allocation retries on failures. This option allows the user to specify the time interval between consecutive retries. The block_device_allocate_retries option specifies the maximum number of retries. Possible values: 0: Disables the option. Any positive integer in seconds enables the option. 
Related options: block_device_allocate_retries - controls the number of retries cert = self.pem string value Path to SSL certificate file. Related options: key ssl_only [console] ssl_ciphers [console] ssl_minimum_version compute_driver = None string value Defines which driver to use for controlling virtualization. Possible values: libvirt.LibvirtDriver fake.FakeDriver ironic.IronicDriver vmwareapi.VMwareVCDriver hyperv.HyperVDriver zvm.ZVMDriver compute_monitors = [] list value A comma-separated list of monitors that can be used for getting compute metrics. You can use the alias/name from the setuptools entry points for nova.compute.monitors.* namespaces. If no namespace is supplied, the "cpu." namespace is assumed for backwards-compatibility. Note Only one monitor per namespace (For example: cpu) can be loaded at a time. Possible values: An empty list will disable the feature (Default). An example value that would enable the CPU bandwidth monitor that uses the virt driver variant compute_monitors = cpu.virt_driver config_drive_format = iso9660 string value Config drive format. Config drive format that will contain metadata attached to the instance when it boots. Related options: This option is meaningful when one of the following alternatives occur: force_config_drive option set to true the REST API call to create the instance contains an enable flag for config drive option the image used to create the instance requires a config drive, this is defined by img_config_drive property for that image. A compute node running Hyper-V hypervisor can be configured to attach config drive as a CD drive. To attach the config drive as a CD drive, set the [hyperv] config_drive_cdrom option to true. Deprecated since: 19.0.0 Reason: This option was originally added as a workaround for bug in libvirt, #1246201, that was resolved in libvirt v1.2.17. As a result, this option is no longer necessary or useful. conn_pool_min_size = 2 integer value The pool size limit for connections expiration policy conn_pool_ttl = 1200 integer value The time-to-live in sec of idle connections in the pool console_host = <based on operating system> string value Console proxy host to be used to connect to instances on this host. It is the publicly visible name for the console host. Possible values: Current hostname (default) or any string representing hostname. control_exchange = nova string value The default exchange under which topics are scoped. May be overridden by an exchange name specified in the transport_url option. cpu_allocation_ratio = None floating point value Virtual CPU to physical CPU allocation ratio. This option is used to influence the hosts selected by the Placement API by configuring the allocation ratio for VCPU inventory. note:: note:: Possible values: Any valid positive integer or float value Related options: initial_cpu_allocation_ratio daemon = False boolean value Run as a background process. debug = False boolean value If set to true, the logging level will be set to DEBUG instead of the default INFO level. default_access_ip_network_name = None string value Name of the network to be used to set access IPs for instances. If there are multiple IPs to choose from, an arbitrary one will be chosen. Possible values: None (default) Any string representing network name. default_availability_zone = nova string value Default availability zone for compute services. 
This option determines the default availability zone for nova-compute services, which will be used if the service(s) do not belong to aggregates with availability zone metadata. Possible values: Any string representing an existing availability zone name. default_ephemeral_format = None string value The default format an ephemeral_volume will be formatted with on creation. Possible values: ext2 ext3 ext4 xfs ntfs (only for Windows guests) default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] list value List of package logging levels in logger=LEVEL pairs. This option is ignored if log_config_append is set. default_schedule_zone = None string value Default availability zone for instances. This option determines the default availability zone for instances, which will be used when a user does not specify one when creating an instance. The instance(s) will be bound to this availability zone for their lifetime. Possible values: Any string representing an existing availability zone name. None, which means that the instance can move from one availability zone to another during its lifetime if it is moved from one compute node to another. Related options: [cinder]/cross_az_attach disk_allocation_ratio = None floating point value Virtual disk to physical disk allocation ratio. This option is used to influence the hosts selected by the Placement API by configuring the allocation ratio for DISK_GB inventory. When configured, a ratio greater than 1.0 will result in over-subscription of the available physical disk, which can be useful for more efficiently packing instances created with images that do not use the entire virtual disk, such as sparse or compressed images. It can be set to a value between 0.0 and 1.0 in order to preserve a percentage of the disk for uses other than instances. note:: note:: Possible values: Any valid positive integer or float value Related options: initial_disk_allocation_ratio enable_new_services = True boolean value Enable new nova-compute services on this host automatically. When a new nova-compute service starts up, it gets registered in the database as an enabled service. Sometimes it can be useful to register new compute services in disabled state and then enabled them at a later point in time. This option only sets this behavior for nova-compute services, it does not auto-disable other services like nova-conductor, nova-scheduler, or nova-osapi_compute. Possible values: True : Each new compute service is enabled as soon as it registers itself. False : Compute services must be enabled via an os-services REST API call or with the CLI with nova service-enable <hostname> <binary> , otherwise they are not ready to use. enabled_apis = ['osapi_compute', 'metadata'] list value List of APIs to be enabled by default. enabled_ssl_apis = [] list value List of APIs with enabled SSL. Nova provides SSL support for the API servers. enabled_ssl_apis option allows configuring the SSL support. 
executor_thread_pool_size = 64 integer value Size of executor thread pool when executor is threading or eventlet. fatal_deprecations = False boolean value Enables or disables fatal status of deprecations. flat_injected = False boolean value This option determines whether the network setup information is injected into the VM before it is booted. While it was originally designed to be used only by nova-network, it is also used by the vmware virt driver to control whether network information is injected into a VM. The libvirt virt driver also uses it when we use config_drive to configure network to control whether network information is injected into a VM. force_config_drive = False boolean value Force injection to take place on a config drive When this option is set to true config drive functionality will be forced enabled by default, otherwise users can still enable config drives via the REST API or image metadata properties. Launched instances are not affected by this option. Possible values: True: Force to use of config drive regardless the user's input in the REST API call. False: Do not force use of config drive. Config drives can still be enabled via the REST API or image metadata properties. Related options: Use the mkisofs_cmd flag to set the path where you install the genisoimage program. If genisoimage is in same path as the nova-compute service, you do not need to set this flag. To use a config drive with Hyper-V, you must set the mkisofs_cmd value to the full path to an mkisofs.exe installation. Additionally, you must set the qemu_img_cmd value in the hyperv configuration section to the full path to an qemu-img command installation. force_raw_images = True boolean value Force conversion of backing images to raw format. Possible values: True: Backing image files will be converted to raw image format False: Backing image files will not be converted Related options: compute_driver : Only the libvirt driver uses this option. [libvirt]/images_type : If images_type is rbd, setting this option to False is not allowed. See the bug https://bugs.launchpad.net/nova/+bug/1816686 for more details. graceful_shutdown_timeout = 60 integer value Specify a timeout after which a gracefully shutdown server will exit. Zero value means endless wait. heal_instance_info_cache_interval = 60 integer value Interval between instance network information cache updates. Number of seconds after which each compute node runs the task of querying Neutron for all of its instances networking information, then updates the Nova db with that information. Nova will never update it's cache if this option is set to 0. If we don't update the cache, the metadata service and nova-api endpoints will be proxying incorrect network data about the instance. So, it is not recommended to set this option to 0. Possible values: Any positive integer in seconds. Any value ⇐0 will disable the sync. This is not recommended. host = <based on operating system> host domain value Hostname, FQDN or IP address of this host. Used as: the oslo.messaging queue name for nova-compute worker we use this value for the binding_host sent to neutron. This means if you use a neutron agent, it should have the same value for host. cinder host attachment information Must be valid within AMQP key. Possible values: String with hostname, FQDN or IP address. Default is hostname of this host. initial_cpu_allocation_ratio = 4.0 floating point value Initial virtual CPU to physical CPU allocation ratio. 
This is only used when initially creating the computes_nodes table record for a given nova-compute service. See https://docs.openstack.org/nova/latest/admin/configuration/schedulers.html for more details and usage scenarios. Related options: cpu_allocation_ratio initial_disk_allocation_ratio = 1.0 floating point value Initial virtual disk to physical disk allocation ratio. This is only used when initially creating the computes_nodes table record for a given nova-compute service. See https://docs.openstack.org/nova/latest/admin/configuration/schedulers.html for more details and usage scenarios. Related options: disk_allocation_ratio initial_ram_allocation_ratio = 1.0 floating point value Initial virtual RAM to physical RAM allocation ratio. This is only used when initially creating the computes_nodes table record for a given nova-compute service. See https://docs.openstack.org/nova/latest/admin/configuration/schedulers.html for more details and usage scenarios. Related options: ram_allocation_ratio injected_network_template = USDpybasedir/nova/virt/interfaces.template string value Path to /etc/network/interfaces template. The path to a template file for the /etc/network/interfaces -style file, which will be populated by nova and subsequently used by cloudinit. This provides a method to configure network connectivity in environments without a DHCP server. The template will be rendered using Jinja2 template engine, and receive a top-level key called interfaces . This key will contain a list of dictionaries, one for each interface. Refer to the cloudinit documentation for more information: Possible values: A path to a Jinja2-formatted template for a Debian /etc/network/interfaces file. This applies even if using a non Debian-derived guest. Related options: flat_inject : This must be set to True to ensure nova embeds network configuration information in the metadata provided through the config drive. instance_build_timeout = 0 integer value Maximum time in seconds that an instance can take to build. If this timer expires, instance status will be changed to ERROR. Enabling this option will make sure an instance will not be stuck in BUILD state for a longer period. Possible values: 0: Disables the option (default) Any positive integer in seconds: Enables the option. instance_delete_interval = 300 integer value Interval for retrying failed instance file deletes. This option depends on maximum_instance_delete_attempts . This option specifies how often to retry deletes whereas maximum_instance_delete_attempts specifies the maximum number of retry attempts that can be made. Possible values: 0: Will run at the default periodic interval. Any value < 0: Disables the option. Any positive integer in seconds. Related options: maximum_instance_delete_attempts from instance_cleaning_opts group. `instance_format = [instance: %(uuid)s] ` string value The format for an instance that is passed with the log message. instance_name_template = instance-%08x string value Template string to be used to generate instance names. This template controls the creation of the database name of an instance. This is not the display name you enter when creating an instance (via Horizon or CLI). For a new deployment it is advisable to change the default value (which uses the database autoincrement) to another value which makes use of the attributes of an instance, like instance-%(uuid)s . If you already have instances in your deployment when you change this, your deployment will break. 
Possible values: A string which either uses the instance database ID (like the default) A string with a list of named database columns, for example %(id)d or %(uuid)s or %(hostname)s . instance_usage_audit = False boolean value This option enables periodic compute.instance.exists notifications. Each compute node must be configured to generate system usage data. These notifications are consumed by OpenStack Telemetry service. instance_usage_audit_period = month string value Time period to generate instance usages for. It is possible to define optional offset to given period by appending @ character followed by a number defining offset. Possible values: period, example: hour , day , month or year period with offset, example: month@15 will result in monthly audits starting on 15th day of month. `instance_uuid_format = [instance: %(uuid)s] ` string value The format for an instance UUID that is passed with the log message. instances_path = USDstate_path/instances string value Specifies where instances are stored on the hypervisor's disk. It can point to locally attached storage or a directory on NFS. Possible values: USDstate_path/instances where state_path is a config option that specifies the top-level directory for maintaining nova's state. (default) or Any string representing directory path. Related options: [workarounds]/ensure_libvirt_rbd_instance_dir_cleanup internal_service_availability_zone = internal string value Availability zone for internal services. This option determines the availability zone for the various internal nova services, such as nova-scheduler , nova-conductor , etc. Possible values: Any string representing an existing availability zone name. key = None string value SSL key file (if separate from cert). Related options: cert live_migration_retry_count = 30 integer value Maximum number of 1 second retries in live_migration. It specifies number of retries to iptables when it complains. It happens when an user continuously sends live-migration request to same host leading to concurrent request to iptables. Possible values: Any positive integer representing retry count. log-config-append = None string value The name of a logging configuration file. This file is appended to any existing logging configuration files. For details about logging configuration files, see the Python logging module documentation. Note that when logging configuration files are used then all logging configuration is set in the configuration file and other logging configuration options are ignored (for example, log-date-format). log-date-format = %Y-%m-%d %H:%M:%S string value Defines the format string for %%(asctime)s in log records. Default: %(default)s . This option is ignored if log_config_append is set. log-dir = None string value (Optional) The base directory used for relative log_file paths. This option is ignored if log_config_append is set. log-file = None string value (Optional) Name of log file to send logging output to. If no default is set, logging will go to stderr as defined by use_stderr. This option is ignored if log_config_append is set. log_options = True boolean value Enables or disables logging values of all registered options when starting a service (at DEBUG level). log_rotate_interval = 1 integer value The amount of time before the log files are rotated. This option is ignored unless log_rotation_type is set to "interval". log_rotate_interval_type = days string value Rotation interval type. 
The time of the last file change (or the time when the service was started) is used when scheduling the rotation. log_rotation_type = none string value Log rotation type. logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s string value Format string to use for log messages with context. Used by oslo_log.formatters.ContextFormatter logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d string value Additional data to append to log message when logging level for the message is DEBUG. Used by oslo_log.formatters.ContextFormatter logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s string value Format string to use for log messages when context is undefined. Used by oslo_log.formatters.ContextFormatter logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s string value Prefix each line of exception output with this format. Used by oslo_log.formatters.ContextFormatter logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s string value Defines the format string for %(user_identity)s that is used in logging_context_format_string. Used by oslo_log.formatters.ContextFormatter long_rpc_timeout = 1800 integer value This option allows setting an alternate timeout value for RPC calls that have the potential to take a long time. If set, RPC calls to other services will use this value for the timeout (in seconds) instead of the global rpc_response_timeout value. Operations with RPC calls that utilize this value: live migration scheduling enabling/disabling a compute service image pre-caching snapshot-based / cross-cell resize resize / cold migration volume attach Related options: rpc_response_timeout max_concurrent_builds = 10 integer value Limits the maximum number of instance builds to run concurrently by nova-compute. Compute service can attempt to build an infinite number of instances, if asked to do so. This limit is enforced to avoid building unlimited instance concurrently on a compute node. This value can be set per compute node. Possible Values: 0 : treated as unlimited. Any positive integer representing maximum concurrent builds. max_concurrent_live_migrations = 1 integer value Maximum number of live migrations to run concurrently. This limit is enforced to avoid outbound live migrations overwhelming the host/network and causing failures. It is not recommended that you change this unless you are very sure that doing so is safe and stable in your environment. Possible values: 0 : treated as unlimited. Any positive integer representing maximum number of live migrations to run concurrently. max_concurrent_snapshots = 5 integer value Maximum number of instance snapshot operations to run concurrently. This limit is enforced to prevent snapshots overwhelming the host/network/storage and causing failure. This value can be set per compute node. Possible Values: 0 : treated as unlimited. Any positive integer representing maximum concurrent snapshots. max_local_block_devices = 3 integer value Maximum number of devices that will result in a local image being created on the hypervisor node. A negative number means unlimited. Setting max_local_block_devices to 0 means that any request that attempts to create a local disk will fail. 
This option is meant to limit the number of local discs (so root local disc that is the result of imageRef being used when creating a server, and any other ephemeral and swap disks). 0 does not mean that images will be automatically converted to volumes and boot instances from volumes - it just means that all requests that attempt to create a local disk will fail. Possible values: 0: Creating a local disk is not allowed. Negative number: Allows unlimited number of local discs. Positive number: Allows only these many number of local discs. max_logfile_count = 30 integer value Maximum number of rotated log files. max_logfile_size_mb = 200 integer value Log file maximum size in MB. This option is ignored if "log_rotation_type" is not set to "size". maximum_instance_delete_attempts = 5 integer value The number of times to attempt to reap an instance's files. This option specifies the maximum number of retry attempts that can be made. Possible values: Any positive integer defines how many attempts are made. Related options: [DEFAULT] instance_delete_interval can be used to disable this option. metadata_listen = 0.0.0.0 string value IP address on which the metadata API will listen. The metadata API service listens on this IP address for incoming requests. metadata_listen_port = 8775 port value Port on which the metadata API will listen. The metadata API service listens on this port number for incoming requests. metadata_workers = <based on operating system> integer value Number of workers for metadata service. If not specified the number of available CPUs will be used. The metadata service can be configured to run as multi-process (workers). This overcomes the problem of reduction in throughput when API request concurrency increases. The metadata service will run in the specified number of processes. Possible Values: Any positive integer None (default value) migrate_max_retries = -1 integer value Number of times to retry live-migration before failing. Possible values: If == -1, try until out of hosts (default) If == 0, only try once, no retries Integer greater than 0 mkisofs_cmd = genisoimage string value Name or path of the tool used for ISO image creation. Use the mkisofs_cmd flag to set the path where you install the genisoimage program. If genisoimage is on the system path, you do not need to change the default value. To use a config drive with Hyper-V, you must set the mkisofs_cmd value to the full path to an mkisofs.exe installation. Additionally, you must set the qemu_img_cmd value in the hyperv configuration section to the full path to an qemu-img command installation. Possible values: Name of the ISO image creator program, in case it is in the same directory as the nova-compute service Path to ISO image creator program Related options: This option is meaningful when config drives are enabled. To use config drive with Hyper-V, you must set the qemu_img_cmd value in the hyperv configuration section to the full path to an qemu-img command installation. my_block_storage_ip = USDmy_ip string value The IP address which is used to connect to the block storage network. Possible values: String with valid IP address. Default is IP address of this host. Related options: my_ip - if my_block_storage_ip is not set, then my_ip value is used. my_ip = <based on operating system> string value The IP address which the host is using to connect to the management network. Possible values: String with valid IP address. Default is IPv4 address of this host. 
Related options: my_block_storage_ip network_allocate_retries = 0 integer value Number of times to retry network allocation. It is required to attempt network allocation retries if the virtual interface plug fails. Possible values: Any positive integer representing retry count. non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] list value Image properties that should not be inherited from the instance when taking a snapshot. This option gives an opportunity to select which image-properties should not be inherited by newly created snapshots. Note: The following image properties are never inherited regardless of the value of this option: cinder_encryption_key_id, cinder_encryption_key_deletion_policy, img_signature, img_signature_hash_method, img_signature_key_type, img_signature_certificate_uuid. Possible values: A comma-separated list whose item is an image property. Usually only the image properties that are only needed by base images can be included here, since the snapshots that are created from the base images don't need them. Default list: cache_in_nova, bittorrent osapi_compute_listen = 0.0.0.0 string value IP address on which the OpenStack API will listen. The OpenStack API service listens on this IP address for incoming requests. osapi_compute_listen_port = 8774 port value Port on which the OpenStack API will listen. The OpenStack API service listens on this port number for incoming requests. `osapi_compute_unique_server_name_scope = ` string value Sets the scope of the check for unique instance names. The default doesn't check for unique names. If a scope for the name check is set, a launch of a new instance or an update of an existing instance with a duplicate name will result in an 'InstanceExists' error. The uniqueness is case-insensitive. Setting this option can increase the usability for end users as they don't have to distinguish among instances with the same name by their IDs. osapi_compute_workers = None integer value Number of workers for OpenStack API service. The default will be the number of CPUs available. OpenStack API services can be configured to run as multi-process (workers). This overcomes the problem of reduction in throughput when API request concurrency increases. OpenStack API service will run in the specified number of processes. Possible Values: Any positive integer None (default value) password_length = 12 integer value Length of generated instance admin passwords. periodic_enable = True boolean value Enable periodic tasks. If set to true, this option allows services to periodically run tasks on the manager. In case of running multiple schedulers or conductors you may want to run periodic tasks on only one host - in this case disable this option for all hosts but one. periodic_fuzzy_delay = 60 integer value Number of seconds to randomly delay when starting the periodic task scheduler to reduce stampeding. When compute workers are restarted in unison across a cluster, they all end up running the periodic tasks at the same time causing problems for the external services. To mitigate this behavior, the periodic_fuzzy_delay option allows you to introduce a random initial delay when starting the periodic task scheduler. Possible Values: Any positive integer (in seconds) 0 : disable the random delay pointer_model = usbtablet string value Generic property to specify the pointer type. Input devices allow interaction with a graphical framebuffer. For example to provide a graphic tablet for absolute cursor movement. If set, either the hw_input_bus or hw_pointer_model image metadata properties will take precedence over this configuration option.
Related options: usbtablet must be configured with VNC enabled or SPICE enabled and SPICE agent disabled. When used with libvirt the instance mode should be configured as HVM. preallocate_images = none string value The image preallocation mode to use. Image preallocation allows storage for instance images to be allocated up front when the instance is initially provisioned. This ensures immediate feedback is given if enough space isn't available. In addition, it should significantly improve performance on writes to new blocks and may even improve I/O performance to prewritten blocks due to reduced fragmentation. publish_errors = False boolean value Enables or disables publication of error events. pybasedir = /usr/lib/python3.9/site-packages string value The directory where the Nova python modules are installed. This directory is used to store template files for networking and remote console access. It is also the default path for other config options which need to persist Nova internal data. It is very unlikely that you need to change this option from its default value. Possible values: The full path to a directory. Related options: state_path ram_allocation_ratio = None floating point value Virtual RAM to physical RAM allocation ratio. This option is used to influence the hosts selected by the Placement API by configuring the allocation ratio for MEMORY_MB inventory. Possible values: Any valid positive integer or float value Related options: initial_ram_allocation_ratio rate_limit_burst = 0 integer value Maximum number of logged messages per rate_limit_interval. rate_limit_except_level = CRITICAL string value Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG or empty string. Logs with level greater or equal to rate_limit_except_level are not filtered. An empty string means that all levels are filtered. rate_limit_interval = 0 integer value Interval, number of seconds, of log rate limiting. reboot_timeout = 0 integer value Time interval after which an instance is hard rebooted automatically. When doing a soft reboot, it is possible that a guest kernel is completely hung in a way that causes the soft reboot task to not ever finish. Setting this option to a time period in seconds will automatically hard reboot an instance if it has been stuck in a rebooting state longer than N seconds. Possible values: 0: Disables the option (default). Any positive integer in seconds: Enables the option. reclaim_instance_interval = 0 integer value Interval for reclaiming deleted instances. A value greater than 0 will enable SOFT_DELETE of instances. This option decides whether the server to be deleted will be put into the SOFT_DELETED state. If this value is greater than 0, the deleted server will not be deleted immediately, instead it will be put into a queue until it's too old (deleted time greater than the value of reclaim_instance_interval). The server can be recovered from the delete queue by using the restore action. If the deleted server remains longer than the value of reclaim_instance_interval, it will be deleted by a periodic task in the compute service automatically. Note that this option is read from both the API and compute nodes, and must be set globally otherwise servers could be put into a soft deleted state in the API and never actually reclaimed (deleted) on the compute node. Note: When using this option, you should also configure the [cinder] auth options, e.g. auth_type , auth_url , username , etc.
Since the reclaim happens in a periodic task, there is no user token to clean up volumes attached to any SOFT_DELETED servers, so nova must be configured with administrator role access to clean up those resources in cinder. Possible values: Any positive integer (in seconds) greater than 0 will enable this option. Any value <= 0 will disable the option. Related options: [cinder] auth options for cleaning up volumes attached to servers during the reclaim process record = None string value Filename that will be used for storing websocket frames received and sent by a proxy service (like VNC, spice, serial) running on this host. If this is not set, no recording will be done. reimage_timeout_per_gb = 20 integer value Timeout for reimaging a volume. Number of seconds to wait for volume-reimaged events to arrive before continuing or failing. This is a per gigabyte time which has a default value of 20 seconds and will be multiplied by the GB size of image. Eg: an image of 6 GB will have a timeout of 20 * 6 = 120 seconds. Try increasing the timeout if the image copy per GB takes more time and you are hitting timeout failures. report_interval = 10 integer value Number of seconds indicating how frequently the state of services on a given hypervisor is reported. Nova needs to know this to determine the overall health of the deployment. Related Options: service_down_time report_interval should be less than service_down_time. If service_down_time is less than report_interval, services will routinely be considered down, because they report in too rarely. rescue_timeout = 0 integer value Interval to wait before un-rescuing an instance stuck in RESCUE. Possible values: 0: Disables the option (default) Any positive integer in seconds: Enables the option. reserved_host_cpus = 0 integer value Number of host CPUs to reserve for host processes. The host resources usage is reported back to the scheduler continuously from nova-compute running on the compute node. This value is used to determine the reserved value reported to placement. This option cannot be set if the [compute] cpu_shared_set or [compute] cpu_dedicated_set config options have been defined. When these options are defined, any host CPUs not included in these values are considered reserved for the host. Possible values: Any positive integer representing number of physical CPUs to reserve for the host. Related options: [compute] cpu_shared_set [compute] cpu_dedicated_set reserved_host_disk_mb = 0 integer value Amount of disk resources in MB to make them always available to host. The disk usage gets reported back to the scheduler from nova-compute running on the compute nodes. To prevent the disk resources from being considered as available, this option can be used to reserve disk space for that host. Possible values: Any positive integer representing amount of disk in MB to reserve for the host. reserved_host_memory_mb = 512 integer value Amount of memory in MB to reserve for the host so that it is always available to host processes. The host resources usage is reported back to the scheduler continuously from nova-compute running on the compute node. To prevent the host memory from being considered as available, this option is used to reserve memory for the host. Possible values: Any positive integer representing amount of memory in MB to reserve for the host. reserved_huge_pages = None dict value Number of huge/large memory pages to reserve per NUMA host cell.
Possible values: A list of valid key=value which reflect NUMA node ID, page size (Default unit is KiB) and number of pages to be reserved. For example reserved_huge_pages = node:0,size:2048,count:64 reserved_huge_pages = node:1,size:1GB,count:1 resize_confirm_window = 0 integer value Automatically confirm resizes after N seconds. Resize functionality will save the existing server before resizing. After the resize completes, user is requested to confirm the resize. The user has the opportunity to either confirm or revert all changes. Confirm resize removes the original server and changes server status from resized to active. Setting this option to a time period (in seconds) will automatically confirm the resize if the server is in resized state longer than that time. Possible values: 0: Disables the option (default) Any positive integer in seconds: Enables the option. resize_fs_using_block_device = False boolean value Enable resizing of filesystems via a block device. If enabled, attempt to resize the filesystem by accessing the image over a block device. This is done by the host and may not be necessary if the image contains a recent version of cloud-init. Possible mechanisms require the nbd driver (for qcow and raw), or loop (for raw). resume_guests_state_on_host_boot = False boolean value This option specifies whether to start guests that were running before the host rebooted. It ensures that all of the instances on a Nova compute node resume their state each time the compute node boots or restarts. rootwrap_config = /etc/nova/rootwrap.conf string value Path to the rootwrap configuration file. Goal of the root wrapper is to allow a service-specific unprivileged user to run a number of actions as the root user in the safest manner possible. The configuration file used here must match the one defined in the sudoers entry. rpc_conn_pool_size = 30 integer value Size of RPC connection pool. rpc_ping_enabled = False boolean value Add an endpoint to answer to ping calls. Endpoint is named oslo_rpc_server_ping rpc_response_timeout = 60 integer value Seconds to wait for a response from a call. run_external_periodic_tasks = True boolean value Some periodic tasks can be run in a separate process. Should we run them here? running_deleted_instance_action = reap string value The compute service periodically checks for instances that have been deleted in the database but remain running on the compute node. The above option enables action to be taken when such instances are identified. Related options: running_deleted_instance_poll_interval running_deleted_instance_timeout running_deleted_instance_poll_interval = 1800 integer value Time interval in seconds to wait between runs for the clean up action. If set to 0, above check will be disabled. If "running_deleted_instance _action" is set to "log" or "reap", a value greater than 0 must be set. Possible values: Any positive integer in seconds enables the option. 0: Disables the option. 1800: Default value. Related options: running_deleted_instance_action running_deleted_instance_timeout = 0 integer value Time interval in seconds to wait for the instances that have been marked as deleted in database to be eligible for cleanup. Possible values: Any positive integer in seconds(default is 0). Related options: "running_deleted_instance_action" scheduler_instance_sync_interval = 120 integer value Interval between sending the scheduler a list of current instance UUIDs to verify that its view of instances is in sync with nova. 
If the CONF option scheduler_tracks_instance_changes is False, the sync calls will not be made. So, changing this option will have no effect. If the out of sync situations are not very common, this interval can be increased to lower the number of RPC messages being sent. Likewise, if sync issues turn out to be a problem, the interval can be lowered to check more frequently. Possible values: 0: Will run at the default periodic interval. Any value < 0: Disables the option. Any positive integer in seconds. Related options: This option has no impact if scheduler_tracks_instance_changes is set to False. service_down_time = 60 integer value Maximum time in seconds since last check-in for up service Each compute node periodically updates their database status based on the specified report interval. If the compute node hasn't updated the status for more than service_down_time, then the compute node is considered down. Related Options: report_interval (service_down_time should not be less than report_interval) servicegroup_driver = db string value This option specifies the driver to be used for the servicegroup service. ServiceGroup API in nova enables checking status of a compute node. When a compute worker running the nova-compute daemon starts, it calls the join API to join the compute group. Services like nova scheduler can query the ServiceGroup API to check if a node is alive. Internally, the ServiceGroup client driver automatically updates the compute worker status. There are multiple backend implementations for this service: Database ServiceGroup driver and Memcache ServiceGroup driver. Related Options: service_down_time (maximum time since last check-in for up service) shelved_offload_time = 0 integer value Time before a shelved instance is eligible for removal from a host. By default this option is set to 0 and the shelved instance will be removed from the hypervisor immediately after shelve operation. Otherwise, the instance will be kept for the value of shelved_offload_time(in seconds) so that during the time period the unshelve action will be faster, then the periodic task will remove the instance from hypervisor after shelved_offload_time passes. Possible values: 0: Instance will be immediately offloaded after being shelved. Any value < 0: An instance will never offload. Any positive integer in seconds: The instance will exist for the specified number of seconds before being offloaded. shelved_poll_interval = 3600 integer value Interval for polling shelved instances to offload. The periodic task runs for every shelved_poll_interval number of seconds and checks if there are any shelved instances. If it finds a shelved instance, based on the shelved_offload_time config value it offloads the shelved instances. Check shelved_offload_time config option description for details. Possible values: Any value ⇐ 0: Disables the option. Any positive integer in seconds. Related options: shelved_offload_time shutdown_timeout = 60 integer value Total time to wait in seconds for an instance to perform a clean shutdown. It determines the overall period (in seconds) a VM is allowed to perform a clean shutdown. While performing stop, rescue and shelve, rebuild operations, configuring this option gives the VM a chance to perform a controlled shutdown before the instance is powered off. The default timeout is 60 seconds. A value of 0 (zero) means the guest will be powered off immediately with no opportunity for guest OS clean-up. 
The timeout value can be overridden on a per image basis by means of os_shutdown_timeout that is an image metadata setting allowing different types of operating systems to specify how much time they need to shut down cleanly. Possible values: A positive integer or 0 (default value is 60). source_is_ipv6 = False boolean value Set to True if source host is addressed with IPv6. ssl_only = False boolean value Disallow non-encrypted connections. Related options: cert key state_path = USDpybasedir string value The top-level directory for maintaining Nova's state. This directory is used to store Nova's internal state. It is used by a variety of other config options which derive from this. In some scenarios (for example migrations) it makes sense to use a storage location which is shared between multiple compute hosts (for example via NFS). Unless the option instances_path gets overwritten, this directory can grow very large. Possible values: The full path to a directory. Defaults to value provided in pybasedir . sync_power_state_interval = 600 integer value Interval to sync power states between the database and the hypervisor. The interval that Nova checks the actual virtual machine power state and the power state that Nova has in its database. If a user powers down their VM, Nova updates the API to report the VM has been powered down. Should something turn on the VM unexpectedly, Nova will turn the VM back off to keep the system in the expected state. Possible values: 0: Will run at the default periodic interval. Any value < 0: Disables the option. Any positive integer in seconds. Related options: If handle_virt_lifecycle_events in the workarounds group is false and this option is negative, then instances that get out of sync between the hypervisor and the Nova database will have to be synchronized manually. sync_power_state_pool_size = 1000 integer value Number of greenthreads available for use to sync power states. This option can be used to reduce the number of concurrent requests made to the hypervisor or system with real instance power states for performance reasons, for example, with Ironic. Possible values: Any positive integer representing greenthreads count. syslog-log-facility = LOG_USER string value Syslog facility to receive log lines. This option is ignored if log_config_append is set. tempdir = None string value Explicitly specify the temporary working directory. timeout_nbd = 10 integer value Amount of time, in seconds, to wait for NBD device start up. transport_url = rabbit:// string value The network address and optional user credentials for connecting to the messaging backend, in URL format. The expected format is: driver://[user:pass@]host:port[,[userN:passN@]hostN:portN]/virtual_host?query Example: rabbit://rabbitmq:[email protected]:5672// For full details on the fields in the URL see the documentation of oslo_messaging.TransportURL at https://docs.openstack.org/oslo.messaging/latest/reference/transport.html update_resources_interval = 0 integer value Interval for updating compute resources. This option specifies how often the update_available_resource periodic task should run. A number less than 0 means to disable the task completely. Leaving this at the default of 0 will cause this to run at the default periodic interval. Setting it to any positive value will cause it to run at approximately that number of seconds. Possible values: 0: Will run at the default periodic interval. Any value < 0: Disables the option. Any positive integer in seconds. 
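For illustration only, the messaging and periodic-task options described above are also set in the [DEFAULT] section; the broker credentials and host below are placeholders:

[DEFAULT]
# Placeholder RabbitMQ credentials and host.
transport_url = rabbit://nova:PASSWORD@controller.example.com:5672/
# Check hypervisor power states against the database every 600 seconds (the default).
sync_power_state_interval = 600
# Allow guests 120 seconds for a clean shutdown instead of the 60-second default.
shutdown_timeout = 120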
use-journal = False boolean value Enable journald for logging. If running in a systemd environment you may wish to enable journal support. Doing so will use the journal native protocol which includes structured metadata in addition to log messages.This option is ignored if log_config_append is set. use-json = False boolean value Use JSON formatting for logging. This option is ignored if log_config_append is set. use-syslog = False boolean value Use syslog for logging. Existing syslog format is DEPRECATED and will be changed later to honor RFC5424. This option is ignored if log_config_append is set. use_cow_images = True boolean value Enable use of copy-on-write (cow) images. QEMU/KVM allow the use of qcow2 as backing files. By disabling this, backing files will not be used. use_eventlog = False boolean value Log output to Windows Event Log. use_rootwrap_daemon = False boolean value Start and use a daemon that can run the commands that need to be run with root privileges. This option is usually enabled on nodes that run nova compute processes. use_stderr = False boolean value Log output to standard error. This option is ignored if log_config_append is set. vcpu_pin_set = None string value Mask of host CPUs that can be used for VCPU resources. The behavior of this option depends on the definition of the [compute] cpu_dedicated_set option and affects the behavior of the [compute] cpu_shared_set option. If [compute] cpu_dedicated_set is defined, defining this option will result in an error. If [compute] cpu_dedicated_set is not defined, this option will be used to determine inventory for VCPU resources and to limit the host CPUs that both pinned and unpinned instances can be scheduled to, overriding the [compute] cpu_shared_set option. Possible values: A comma-separated list of physical CPU numbers that virtual CPUs can be allocated from. Each element should be either a single CPU number, a range of CPU numbers, or a caret followed by a CPU number to be excluded from a range. For example vcpu_pin_set = "4-12,^8,15" Related options: [compute] cpu_dedicated_set [compute] cpu_shared_set Deprecated since: 20.0.0 Reason: This option has been superseded by the ``[compute] cpu_dedicated_set`` and ``[compute] cpu_shared_set`` options, which allow things like the co-existence of pinned and unpinned instances on the same host (for the libvirt driver). vif_plugging_is_fatal = True boolean value Determine if instance should boot or fail on VIF plugging timeout. Nova sends a port update to Neutron after an instance has been scheduled, providing Neutron with the necessary information to finish setup of the port. Once completed, Neutron notifies Nova that it has finished setting up the port, at which point Nova resumes the boot of the instance since network connectivity is now supposed to be present. A timeout will occur if the reply is not received after a given interval. This option determines what Nova does when the VIF plugging timeout event happens. When enabled, the instance will error out. When disabled, the instance will continue to boot on the assumption that the port is ready. Possible values: True: Instances should fail after VIF plugging timeout False: Instances should continue booting after VIF plugging timeout vif_plugging_timeout = 300 integer value Timeout for Neutron VIF plugging event message arrival. Number of seconds to wait for Neutron vif plugging events to arrive before continuing or failing (see vif_plugging_is_fatal ). 
If you are hitting timeout failures at scale, consider running rootwrap in "daemon mode" in the neutron agent via the [agent]/root_helper_daemon neutron configuration option. Related options: vif_plugging_is_fatal - If vif_plugging_timeout is set to zero and vif_plugging_is_fatal is False, events should not be expected to arrive at all. virt_mkfs = [] multi valued Name of the mkfs commands for ephemeral device. The format is <os_type>=<mkfs command> volume_usage_poll_interval = 0 integer value Interval for gathering volume usages. This option updates the volume usage cache for every volume_usage_poll_interval number of seconds. Possible values: Any positive integer(in seconds) greater than 0 will enable this option. Any value ⇐0 will disable the option. watch-log-file = False boolean value Uses logging handler designed to watch file system. When log file is moved or removed this handler will open a new log file with specified path instantaneously. It makes sense only if log_file option is specified and Linux platform is used. This option is ignored if log_config_append is set. web = /usr/share/spice-html5 string value Path to directory with content which will be served by a web server. 9.1.2. api The following table outlines the options available under the [api] group in the /etc/nova/nova.conf file. Table 9.1. api Configuration option = Default value Type Description auth_strategy = keystone string value Determine the strategy to use for authentication. Deprecated since: 21.0.0 Reason: The only non-default choice, ``noauth2``, is for internal development and testing purposes only and should not be used in deployments. This option and its middleware, NoAuthMiddleware[V2_18], will be removed in a future release. compute_link_prefix = None string value This string is prepended to the normal URL that is returned in links to the OpenStack Compute API. If it is empty (the default), the URLs are returned unchanged. Possible values: Any string, including an empty string (the default). config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 string value When gathering the existing metadata for a config drive, the EC2-style metadata is returned for all versions that don't appear in this option. As of the Liberty release, the available versions are: 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 2009-04-04 The option is in the format of a single string, with each version separated by a space. Possible values: Any string that represents zero or more versions, separated by spaces. dhcp_domain = novalocal string value Domain name used to configure FQDN for instances. Configure a fully-qualified domain name for instance hostnames. The value is suffixed to the instance hostname from the database to construct the hostname that appears in the metadata API. To disable this behavior (for example in order to correctly support microversion's 2.94 FQDN hostnames), set this to the empty string. Possible values: Any string that is a valid domain name. enable_instance_password = True boolean value Enables returning of the instance password by the relevant server API calls such as create, rebuild, evacuate, or rescue. If the hypervisor does not support password injection, then the password returned will not be correct, so if your hypervisor does not support password injection, set this to False. glance_link_prefix = None string value This string is prepended to the normal URL that is returned in links to Glance resources. 
If it is empty (the default), the URLs are returned unchanged. Possible values: Any string, including an empty string (the default). instance_list_cells_batch_fixed_size = 100 integer value This controls the batch size of instances requested from each cell database if instance_list_cells_batch_strategy` is set to fixed . This integral value will define the limit issued to each cell every time a batch of instances is requested, regardless of the number of cells in the system or any other factors. Per the general logic called out in the documentation for instance_list_cells_batch_strategy , the minimum value for this is 100 records per batch. Related options: instance_list_cells_batch_strategy max_limit instance_list_cells_batch_strategy = distributed string value This controls the method by which the API queries cell databases in smaller batches during large instance list operations. If batching is performed, a large instance list operation will request some fraction of the overall API limit from each cell database initially, and will re-request that same batch size as records are consumed (returned) from each cell as necessary. Larger batches mean less chattiness between the API and the database, but potentially more wasted effort processing the results from the database which will not be returned to the user. Any strategy will yield a batch size of at least 100 records, to avoid a user causing many tiny database queries in their request. Related options: instance_list_cells_batch_fixed_size max_limit instance_list_per_project_cells = False boolean value When enabled, this will cause the API to only query cell databases in which the tenant has mapped instances. This requires an additional (fast) query in the API database before each list, but also (potentially) limits the number of cell databases that must be queried to provide the result. If you have a small number of cells, or tenants are likely to have instances in all cells, then this should be False. If you have many cells, especially if you confine tenants to a small subset of those cells, this should be True. list_records_by_skipping_down_cells = True boolean value When set to False, this will cause the API to return a 500 error if there is an infrastructure failure like non-responsive cells. If you want the API to skip the down cells and return the results from the up cells set this option to True. Note that from API microversion 2.69 there could be transient conditions in the deployment where certain records are not available and the results could be partial for certain requests containing those records. In those cases this option will be ignored. See "Handling Down Cells" section of the Compute API guide ( https://docs.openstack.org/api-guide/compute/down_cells.html ) for more information. local_metadata_per_cell = False boolean value Indicates that the nova-metadata API service has been deployed per-cell, so that we can have better performance and data isolation in a multi-cell deployment. Users should consider the use of this configuration depending on how neutron is setup. If you have networks that span cells, you might need to run nova-metadata API service globally. If your networks are segmented along cell boundaries, then you can run nova-metadata API service per cell. When running nova-metadata API service per cell, you should also configure each Neutron metadata-agent to point to the corresponding nova-metadata API service. 
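As a sketch only, the cell-related [api] options above might be combined as follows for a multi-cell deployment where tenants are confined to a subset of cells; the batch size is a placeholder value:

[api]
# Query only the cells in which the tenant has mapped instances.
instance_list_per_project_cells = True
# Request instances from each cell database in fixed-size batches.
instance_list_cells_batch_strategy = fixed
instance_list_cells_batch_fixed_size = 200
# Skip unreachable cells instead of returning a 500 error.
list_records_by_skipping_down_cells = True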
max_limit = 1000 integer value As a query can potentially return many thousands of items, you can limit the maximum number of items in a single response by setting this option. metadata_cache_expiration = 15 integer value This option is the time (in seconds) to cache metadata. When set to 0, metadata caching is disabled entirely; this is generally not recommended for performance reasons. Increasing this setting should improve response times of the metadata API when under heavy load. Higher values may increase memory usage, and result in longer times for host metadata changes to take effect. neutron_default_tenant_id = default string value Tenant ID for getting the default network from Neutron API (also referred in some places as the project ID ) to use. Related options: use_neutron_default_nets use_forwarded_for = False boolean value When True, the X-Forwarded-For header is treated as the canonical remote address. When False (the default), the remote_address header is used. You should only enable this if you have an HTML sanitizing proxy. Deprecated since: 26.0.0 *Reason:*This feature is duplicate of the HTTPProxyToWSGI middleware in oslo.middleware use_neutron_default_nets = False boolean value When True, the TenantNetworkController will query the Neutron API to get the default networks to use. Related options: neutron_default_tenant_id vendordata_dynamic_connect_timeout = 5 integer value Maximum wait time for an external REST service to connect. Possible values: Any integer with a value greater than three (the TCP packet retransmission timeout). Note that instance start may be blocked during this wait time, so this value should be kept small. Related options: vendordata_providers vendordata_dynamic_targets vendordata_dynamic_ssl_certfile vendordata_dynamic_read_timeout vendordata_dynamic_failure_fatal vendordata_dynamic_failure_fatal = False boolean value Should failures to fetch dynamic vendordata be fatal to instance boot? Related options: vendordata_providers vendordata_dynamic_targets vendordata_dynamic_ssl_certfile vendordata_dynamic_connect_timeout vendordata_dynamic_read_timeout vendordata_dynamic_read_timeout = 5 integer value Maximum wait time for an external REST service to return data once connected. Possible values: Any integer. Note that instance start is blocked during this wait time, so this value should be kept small. Related options: vendordata_providers vendordata_dynamic_targets vendordata_dynamic_ssl_certfile vendordata_dynamic_connect_timeout vendordata_dynamic_failure_fatal `vendordata_dynamic_ssl_certfile = ` string value Path to an optional certificate file or CA bundle to verify dynamic vendordata REST services ssl certificates against. Possible values: An empty string, or a path to a valid certificate file Related options: vendordata_providers vendordata_dynamic_targets vendordata_dynamic_connect_timeout vendordata_dynamic_read_timeout vendordata_dynamic_failure_fatal vendordata_dynamic_targets = [] list value A list of targets for the dynamic vendordata provider. These targets are of the form <name>@<url> . The dynamic vendordata provider collects metadata by contacting external REST services and querying them for information about the instance. This behaviour is documented in the vendordata.rst file in the nova developer reference. vendordata_jsonfile_path = None string value Cloud providers may store custom data in vendor data file that will then be available to the instances via the metadata service, and to the rendering of config-drive. 
The default class for this, JsonFileVendorData, loads this information from a JSON file, whose path is configured by this option. If there is no path set by this option, the class returns an empty dictionary. Note that when using this to provide static vendor data to a configuration drive, the nova-compute service must be configured with this option and the file must be accessible from the nova-compute host. Possible values: Any string representing the path to the data file, or an empty string (default). vendordata_providers = ['StaticJSON'] list value A list of vendordata providers. vendordata providers are how deployers can provide metadata via configdrive and metadata that is specific to their deployment. For more information on the requirements for implementing a vendordata dynamic endpoint, please see the vendordata.rst file in the nova developer reference. Related options: vendordata_dynamic_targets vendordata_dynamic_ssl_certfile vendordata_dynamic_connect_timeout vendordata_dynamic_read_timeout vendordata_dynamic_failure_fatal 9.1.3. api_database The following table outlines the options available under the [api_database] group in the /etc/nova/nova.conf file. Table 9.2. api_database Configuration option = Default value Type Description backend = sqlalchemy string value The back end to use for the database. connection = None string value The SQLAlchemy connection string to use to connect to the database. connection_debug = 0 integer value Verbosity of SQL debugging information: 0=None, 100=Everything. `connection_parameters = ` string value Optional URL parameters to append onto the connection URL at connect time; specify as param1=value1&param2=value2&... connection_recycle_time = 3600 integer value Connections which have been present in the connection pool longer than this number of seconds will be replaced with a new one the time they are checked out from the pool. connection_trace = False boolean value Add Python stack traces to SQL as comment strings. db_inc_retry_interval = True boolean value If True, increases the interval between retries of a database operation up to db_max_retry_interval. db_max_retries = 20 integer value Maximum retries in case of connection error or deadlock error before error is raised. Set to -1 to specify an infinite retry count. db_max_retry_interval = 10 integer value If db_inc_retry_interval is set, the maximum seconds between retries of a database operation. db_retry_interval = 1 integer value Seconds between retries of a database transaction. max_overflow = 50 integer value If set, use this value for max_overflow with SQLAlchemy. max_pool_size = 5 integer value Maximum number of SQL connections to keep open in a pool. Setting a value of 0 indicates no limit. max_retries = 10 integer value Maximum number of database connection retries during startup. Set to -1 to specify an infinite retry count. mysql_enable_ndb = False boolean value If True, transparently enables support for handling MySQL Cluster (NDB). Deprecated since: 12.1.0 *Reason:*Support for the MySQL NDB Cluster storage engine has been deprecated and will be removed in a future release. mysql_sql_mode = TRADITIONAL string value The SQL mode to be used for MySQL sessions. This option, including the default, overrides any server-set SQL mode. To use whatever SQL mode is set by the server configuration, set this to no value. Example: mysql_sql_mode= mysql_wsrep_sync_wait = None integer value For Galera only, configure wsrep_sync_wait causality checks on new connections. 
Default is None, meaning don't configure any setting. pool_timeout = None integer value If set, use this value for pool_timeout with SQLAlchemy. retry_interval = 10 integer value Interval between retries of opening a SQL connection. slave_connection = None string value The SQLAlchemy connection string to use to connect to the slave database. sqlite_synchronous = True boolean value If True, SQLite uses synchronous mode. 9.1.4. barbican The following table outlines the options available under the [barbican] group in the /etc/nova/nova.conf file. Table 9.3. barbican Configuration option = Default value Type Description auth_endpoint = http://localhost/identity/v3 string value Use this endpoint to connect to Keystone barbican_api_version = None string value Version of the Barbican API, for example: "v1" barbican_endpoint = None string value Use this endpoint to connect to Barbican, for example: "http://localhost:9311/" barbican_endpoint_type = public string value Specifies the type of endpoint. Allowed values are: public, private, and admin barbican_region_name = None string value Specifies the region of the chosen endpoint. number_of_retries = 60 integer value Number of times to retry poll for key creation completion retry_delay = 1 integer value Number of seconds to wait before retrying poll for key creation completion send_service_user_token = False boolean value When True, if sending a user token to a REST API, also send a service token. Nova often reuses the user token provided to the nova-api to talk to other REST APIs, such as Cinder, Glance and Neutron. It is possible that while the user token was valid when the request was made to Nova, the token may expire before it reaches the other service. To avoid any failures, and to make it clear it is Nova calling the service on the user's behalf, we include a service token along with the user token. Should the user's token have expired, a valid service token ensures the REST API request will still be accepted by the keystone middleware. verify_ssl = True boolean value Specifies whether to verify TLS (https) requests. If False, the server's certificate will not be validated; if True, the verify_ssl_path option can also be set. verify_ssl_path = None string value A path to a bundle or CA certs to check against, or None for requests to attempt to locate and use certificates when verify_ssl is True. If verify_ssl is False, this is ignored. 9.1.5. barbican_service_user The following table outlines the options available under the [barbican_service_user] group in the /etc/nova/nova.conf file. Table 9.4. barbican_service_user Configuration option = Default value Type Description auth_section = None string value Config Section from which to load plugin specific options auth_type = None string value Authentication type to load cafile = None string value PEM encoded Certificate Authority to use when verifying HTTPs connections. certfile = None string value PEM encoded client certificate cert file collect-timing = False boolean value Collect per-API call timing information. insecure = False boolean value Verify HTTPS connections. keyfile = None string value PEM encoded client certificate key file split-loggers = False boolean value Log requests to multiple loggers. timeout = None integer value Timeout value for http requests 9.1.6. cache The following table outlines the options available under the [cache] group in the /etc/nova/nova.conf file. Table 9.5.
cache Configuration option = Default value Type Description backend = dogpile.cache.null string value Cache backend module. For eventlet-based or environments with hundreds of threaded servers, Memcache with pooling (oslo_cache.memcache_pool) is recommended. For environments with fewer than 100 threaded servers, Memcached (dogpile.cache.memcached) or Redis (dogpile.cache.redis) is recommended. Test environments with a single instance of the server can use the dogpile.cache.memory backend. backend_argument = [] multi valued Arguments supplied to the backend module. Specify this option once per argument to be passed to the dogpile.cache backend. Example format: "<argname>:<value>". config_prefix = cache.oslo string value Prefix for building the configuration dictionary for the cache region. This should not need to be changed unless there is another dogpile.cache region with the same configuration name. dead_timeout = 60 floating point value Time in seconds before attempting to add a node back in the pool in the HashClient's internal mechanisms. debug_cache_backend = False boolean value Extra debugging from the cache backend (cache keys, get/set/delete/etc calls). This is only really useful if you need to see the specific cache-backend get/set/delete calls with the keys/values. Typically this should be left set to false. enable_retry_client = False boolean value Enable retry client mechanisms to handle failure. Those mechanisms can be used to wrap all kinds of pymemcache clients. The wrapper allows you to define how many attempts to make and how long to wait between attempts. enable_socket_keepalive = False boolean value Global toggle for the socket keepalive of dogpile's pymemcache backend enabled = False boolean value Global toggle for caching. expiration_time = 600 integer value Default TTL, in seconds, for any cached item in the dogpile.cache region. This applies to any cached method that doesn't have an explicit cache expiration time defined for it. hashclient_retry_attempts = 2 integer value Number of times a client should be tried before it is marked dead and removed from the pool in the HashClient's internal mechanisms. hashclient_retry_delay = 1 floating point value Time in seconds that should pass between retry attempts in the HashClient's internal mechanisms. memcache_dead_retry = 300 integer value Number of seconds memcached server is considered dead before it is tried again. (dogpile.cache.memcache and oslo_cache.memcache_pool backends only). `memcache_password = ` string value The password for memcached when SASL is enabled. memcache_pool_connection_get_timeout = 10 integer value Number of seconds that an operation will wait to get a memcache client connection. memcache_pool_flush_on_reconnect = False boolean value Global toggle if memcache will be flushed on reconnect. (oslo_cache.memcache_pool backend only). memcache_pool_maxsize = 10 integer value Max total number of open connections to every memcached server. (oslo_cache.memcache_pool backend only). memcache_pool_unused_timeout = 60 integer value Number of seconds a connection to memcached is held unused in the pool before it is closed. (oslo_cache.memcache_pool backend only). memcache_sasl_enabled = False boolean value Enable SASL (Simple Authentication and Security Layer) for memcached when set to true; otherwise it is disabled. memcache_servers = ['localhost:11211'] list value Memcache servers in the format of "host:port".
This is used by backends dependent on Memcached.If dogpile.cache.memcached or oslo_cache.memcache_pool is used and a given host refer to an IPv6 or a given domain refer to IPv6 then you should prefix the given address withthe address family ( inet6 ) (e.g inet6[::1]:11211 , inet6:[fd12:3456:789a:1::1]:11211 , inet6:[controller-0.internalapi]:11211 ). If the address family is not given then these backends will use the default inet address family which corresponds to IPv4 memcache_socket_timeout = 1.0 floating point value Timeout in seconds for every call to a server. (dogpile.cache.memcache and oslo_cache.memcache_pool backends only). `memcache_username = ` string value the user name for the memcached which SASL enabled proxies = [] list value Proxy classes to import that will affect the way the dogpile.cache backend functions. See the dogpile.cache documentation on changing-backend-behavior. retry_attempts = 2 integer value Number of times to attempt an action before failing. retry_delay = 0 floating point value Number of seconds to sleep between each attempt. socket_keepalive_count = 1 integer value The maximum number of keepalive probes TCP should send before dropping the connection. Should be a positive integer greater than zero. socket_keepalive_idle = 1 integer value The time (in seconds) the connection needs to remain idle before TCP starts sending keepalive probes. Should be a positive integer most greater than zero. socket_keepalive_interval = 1 integer value The time (in seconds) between individual keepalive probes. Should be a positive integer greater than zero. tls_allowed_ciphers = None string value Set the available ciphers for sockets created with the TLS context. It should be a string in the OpenSSL cipher list format. If not specified, all OpenSSL enabled ciphers will be available. tls_cafile = None string value Path to a file of concatenated CA certificates in PEM format necessary to establish the caching servers' authenticity. If tls_enabled is False, this option is ignored. tls_certfile = None string value Path to a single file in PEM format containing the client's certificate as well as any number of CA certificates needed to establish the certificate's authenticity. This file is only required when client side authentication is necessary. If tls_enabled is False, this option is ignored. tls_enabled = False boolean value Global toggle for TLS usage when comunicating with the caching servers. tls_keyfile = None string value Path to a single file containing the client's private key in. Otherwise the private key will be taken from the file specified in tls_certfile. If tls_enabled is False, this option is ignored. 9.1.7. cinder The following table outlines the options available under the [cinder] group in the /etc/nova/nova.conf file. Table 9.6. cinder Configuration option = Default value Type Description auth-url = None string value Authentication URL auth_section = None string value Config Section from which to load plugin specific options auth_type = None string value Authentication type to load cafile = None string value PEM encoded Certificate Authority to use when verifying HTTPs connections. catalog_info = volumev3::publicURL string value Info to match when looking for cinder in the service catalog. The <service_name> is optional and omitted by default since it should not be necessary in most deployments. 
Possible values: Format is separated values of the form: <service_type>:<service_name>:<endpoint_type> Note: Nova does not support the Cinder v2 API since the Nova 17.0.0 Queens release. Related options: endpoint_template - Setting this option will override catalog_info certfile = None string value PEM encoded client certificate cert file collect-timing = False boolean value Collect per-API call timing information. cross_az_attach = True boolean value Allow attach between instance and volume in different availability zones. If False, volumes attached to an instance must be in the same availability zone in Cinder as the instance availability zone in Nova. This also means care should be taken when booting an instance from a volume where source is not "volume" because Nova will attempt to create a volume using the same availability zone as what is assigned to the instance. If that AZ is not in Cinder (or allow_availability_zone_fallback=False in cinder.conf), the volume create request will fail and the instance will fail the build request. By default there is no availability zone restriction on volume attach. Related options: [DEFAULT]/default_schedule_zone debug = False boolean value Enable DEBUG logging with cinderclient and os_brick independently of the rest of Nova. default-domain-id = None string value Optional domain ID to use with v3 and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. default-domain-name = None string value Optional domain name to use with v3 API and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. domain-id = None string value Domain ID to scope to domain-name = None string value Domain name to scope to endpoint_template = None string value If this option is set then it will override service catalog lookup with this template for cinder endpoint Possible values: URL for cinder endpoint API e.g. http://localhost:8776/v3/%(project_id)s Note: Nova does not support the Cinder v2 API since the Nova 17.0.0 Queens release. Related options: catalog_info - If endpoint_template is not set, catalog_info will be used. http_retries = 3 integer value Number of times cinderclient should retry on any failed http call. 0 means connection is attempted only once. Setting it to any positive integer means that on failure connection is retried that many times e.g. setting it to 3 means total attempts to connect will be 4. Possible values: Any integer value. 0 means connection is attempted only once insecure = False boolean value Verify HTTPS connections. keyfile = None string value PEM encoded client certificate key file os_region_name = None string value Region name of this node. This is used when picking the URL in the service catalog. Possible values: Any string representing region name password = None string value User's password project-domain-id = None string value Domain ID containing project project-domain-name = None string value Domain name containing project project-id = None string value Project ID to scope to project-name = None string value Project name to scope to split-loggers = False boolean value Log requests to multiple loggers. 
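For illustration, a minimal sketch of how the [cinder] catalog and retry options in this table might be combined in /etc/nova/nova.conf is shown below; the region name is an assumption and the remaining values simply restate the documented defaults.
[cinder]
# Illustrative sketch only; RegionOne is an assumed region name
catalog_info = volumev3::publicURL
os_region_name = RegionOne
cross_az_attach = True
http_retries = 3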
system-scope = None string value Scope for system operations tenant-id = None string value Tenant ID tenant-name = None string value Tenant Name timeout = None integer value Timeout value for http requests trust-id = None string value ID of the trust to use as a trustee use user-domain-id = None string value User's domain id user-domain-name = None string value User's domain name user-id = None string value User ID username = None string value Username 9.1.8. compute The following table outlines the options available under the [compute] group in the /etc/nova/nova.conf file. Table 9.7. compute Configuration option = Default value Type Description consecutive_build_service_disable_threshold = 10 integer value Enables reporting of build failures to the scheduler. Any nonzero value will enable sending build failure statistics to the scheduler for use by the BuildFailureWeigher. Possible values: Any positive integer enables reporting build failures. Zero to disable reporting build failures. Related options: [filter_scheduler]/build_failure_weight_multiplier cpu_dedicated_set = None string value Mask of host CPUs that can be used for PCPU resources. The behavior of this option affects the behavior of the deprecated vcpu_pin_set option. If this option is defined, defining vcpu_pin_set will result in an error. If this option is not defined, vcpu_pin_set will be used to determine inventory for VCPU resources and to limit the host CPUs that both pinned and unpinned instances can be scheduled to. This behavior will be simplified in a future release when vcpu_pin_set is removed. Possible values: A comma-separated list of physical CPU numbers that instance VCPUs can be allocated from. Each element should be either a single CPU number, a range of CPU numbers, or a caret followed by a CPU number to be excluded from a range. For example cpu_dedicated_set = "4-12,^8,15" Related options: [compute] cpu_shared_set : This is the counterpart option for defining where VCPU resources should be allocated from. vcpu_pin_set : A legacy option that this option partially replaces. cpu_shared_set = None string value Mask of host CPUs that can be used for VCPU resources and offloaded emulator threads. The behavior of this option depends on the definition of the deprecated vcpu_pin_set option. If vcpu_pin_set is not defined, [compute] cpu_shared_set will be be used to provide VCPU inventory and to determine the host CPUs that unpinned instances can be scheduled to. It will also be used to determine the host CPUS that instance emulator threads should be offloaded to for instances configured with the share emulator thread policy ( hw:emulator_threads_policy=share ). If vcpu_pin_set is defined, [compute] cpu_shared_set will only be used to determine the host CPUs that instance emulator threads should be offloaded to for instances configured with the share emulator thread policy ( hw:emulator_threads_policy=share ). vcpu_pin_set will be used to provide VCPU inventory and to determine the host CPUs that both pinned and unpinned instances can be scheduled to. This behavior will be simplified in a future release when vcpu_pin_set is removed. Possible values: A comma-separated list of physical CPU numbers that instance VCPUs can be allocated from. Each element should be either a single CPU number, a range of CPU numbers, or a caret followed by a CPU number to be excluded from a range. 
For example cpu_shared_set = "4-12,^8,15" Related options: [compute] cpu_dedicated_set : This is the counterpart option for defining where PCPU resources should be allocated from. vcpu_pin_set : A legacy option whose definition may change the behavior of this option. image_type_exclude_list = [] list value A list of image formats that should not be advertised as supported by this compute node. In some situations, it may be desirable to have a compute node refuse to support an expensive or complex image format. This factors into the decisions made by the scheduler about which compute node to select when booted with a given image. Possible values: Any glance image disk_format name (i.e. raw , qcow2 , etc) Related options: [scheduler]query_placement_for_image_type_support - enables filtering computes based on supported image types, which is required to be enabled for this to take effect. live_migration_wait_for_vif_plug = True boolean value Determine if the source compute host should wait for a network-vif-plugged event from the (neutron) networking service before starting the actual transfer of the guest to the destination compute host. Note that this option is read on the destination host of a live migration. If you set this option the same on all of your compute hosts, which you should do if you use the same networking backend universally, you do not have to worry about this. Before starting the transfer of the guest, some setup occurs on the destination compute host, including plugging virtual interfaces. Depending on the networking backend on the destination host , a network-vif-plugged event may be triggered and then received on the source compute host and the source compute can wait for that event to ensure networking is set up on the destination host before starting the guest transfer in the hypervisor. note:: Possible values: True: wait for network-vif-plugged events before starting guest transfer False: do not wait for network-vif-plugged events before starting guest transfer (this is the legacy behavior) Related options: [DEFAULT]/vif_plugging_is_fatal: if live_migration_wait_for_vif_plug is True and vif_plugging_timeout is greater than 0, and a timeout is reached, the live migration process will fail with an error but the guest transfer will not have started to the destination host [DEFAULT]/vif_plugging_timeout: if live_migration_wait_for_vif_plug is True, this controls the amount of time to wait before timing out and either failing if vif_plugging_is_fatal is True, or simply continuing with the live migration max_concurrent_disk_ops = 0 integer value Number of concurrent disk-IO-intensive operations (glance image downloads, image format conversions, etc.) that we will do in parallel. If this is set too high then response time suffers. The default value of 0 means no limit. max_disk_devices_to_attach = -1 integer value Maximum number of disk devices allowed to attach to a single server. Note that the number of disks supported by an server depends on the bus used. For example, the ide disk bus is limited to 4 attached devices. The configured maximum is enforced during server create, rebuild, evacuate, unshelve, live migrate, and attach volume. Usually, disk bus is determined automatically from the device type or disk device, and the virtualization type. However, disk bus can also be specified via a block device mapping or an image property. 
See the disk_bus field in :doc: /user/block-device-mapping for more information about specifying disk bus in a block device mapping, and see https://docs.openstack.org/glance/latest/admin/useful-image-properties.html for more information about the hw_disk_bus image property. Operators changing the [compute]/max_disk_devices_to_attach on a compute service that is hosting servers should be aware that it could cause rebuilds to fail, if the maximum is decreased lower than the number of devices already attached to servers. For example, if server A has 26 devices attached and an operators changes [compute]/max_disk_devices_to_attach to 20, a request to rebuild server A will fail and go into ERROR state because 26 devices are already attached and exceed the new configured maximum of 20. Operators setting [compute]/max_disk_devices_to_attach should also be aware that during a cold migration, the configured maximum is only enforced in-place and the destination is not checked before the move. This means if an operator has set a maximum of 26 on compute host A and a maximum of 20 on compute host B, a cold migration of a server with 26 attached devices from compute host A to compute host B will succeed. Then, once the server is on compute host B, a subsequent request to rebuild the server will fail and go into ERROR state because 26 devices are already attached and exceed the configured maximum of 20 on compute host B. The configured maximum is not enforced on shelved offloaded servers, as they have no compute host. warning:: If this option is set to 0, the nova-compute service will fail to start, as 0 disk devices is an invalid configuration that would prevent instances from being able to boot. Possible values: -1 means unlimited Any integer >= 1 represents the maximum allowed. A value of 0 will cause the nova-compute service to fail to start, as 0 disk devices is an invalid configuration that would prevent instances from being able to boot. packing_host_numa_cells_allocation_strategy = False boolean value This option controls allocation strategy used to choose NUMA cells on host for placing VM's NUMA cells (for VMs with defined numa topology). By default host's NUMA cell with more resources consumed will be chosen last for placing attempt. When the packing_host_numa_cells_allocation_strategy variable is set to False , host's NUMA cell with more resources available will be used. When set to True cells with some usage will be packed with VM's cell until it will be completely exhausted, before a new free host's cell will be used. Possible values: True : Packing VM's NUMA cell on most used host NUMA cell. False : Spreading VM's NUMA cell on host's NUMA cells with more resources available. provider_config_location = /etc/nova/provider_config/ string value Location of YAML files containing resource provider configuration data. These files allow the operator to specify additional custom inventory and traits to assign to one or more resource providers. Additional documentation is available here: resource_provider_association_refresh = 300 integer value Interval for updating nova-compute-side cache of the compute node resource provider's inventories, aggregates, and traits. This option specifies the number of seconds between attempts to update a provider's inventories, aggregates and traits in the local cache of the compute node. A value of zero disables cache refresh completely. 
The cache can be cleared manually at any time by sending SIGHUP to the compute process, causing it to be repopulated the next time the data is accessed. Possible values: Any positive integer in seconds, or zero to disable refresh. shutdown_retry_interval = 10 integer value Time to wait in seconds before resending an ACPI shutdown signal to instances. The overall time to wait is set by shutdown_timeout . Possible values: Any integer greater than 0 in seconds Related options: shutdown_timeout vmdk_allowed_types = ['streamOptimized', 'monolithicSparse'] list value A list of strings describing the VMDK "create-type" subformats that will be allowed. It is recommended to only include single-file-with-sparse-header variants to avoid potential host file exposure due to processing named extents. If this list is empty, then no form of VMDK image will be allowed. 9.1.9. conductor The following table outlines the options available under the [conductor] group in the /etc/nova/nova.conf file. Table 9.8. conductor Configuration option = Default value Type Description workers = None integer value Number of workers for the OpenStack Conductor service. The default will be the number of CPUs available. 9.1.10. console The following table outlines the options available under the [console] group in the /etc/nova/nova.conf file. Table 9.9. console Configuration option = Default value Type Description allowed_origins = [] list value Adds a list of allowed origins to the console websocket proxy to allow connections from other origin hostnames. The websocket proxy matches the host header with the origin header to prevent cross-site requests. This list specifies any values other than the host that are allowed in the origin header. Possible values: A list where each element is an allowed origin hostname, else an empty list ssl_ciphers = None string value OpenSSL cipher preference string that specifies what ciphers to allow for TLS connections from clients. See the man page for the OpenSSL ciphers command for details of the cipher preference string format and allowed values. Related options: [DEFAULT] cert [DEFAULT] key ssl_minimum_version = default string value Minimum allowed SSL/TLS protocol version. Related options: [DEFAULT] cert [DEFAULT] key 9.1.11. consoleauth The following table outlines the options available under the [consoleauth] group in the /etc/nova/nova.conf file. Table 9.10. consoleauth Configuration option = Default value Type Description token_ttl = 600 integer value The lifetime of a console auth token (in seconds). A console auth token is used in authorizing console access for a user. Once the auth token time to live count has elapsed, the token is considered expired. Expired tokens are then deleted. 9.1.12. cors The following table outlines the options available under the [cors] group in the /etc/nova/nova.conf file. Table 9.11. cors Configuration option = Default value Type Description allow_credentials = True boolean value Indicate that the actual request can include user credentials allow_headers = ['X-Auth-Token', 'X-Openstack-Request-Id', 'X-Identity-Status', 'X-Roles', 'X-Service-Catalog', 'X-User-Id', 'X-Tenant-Id', 'X-OpenStack-Nova-API-Version', 'OpenStack-API-Version'] list value Indicate which header field names may be used during the actual request. allow_methods = ['GET', 'PUT', 'POST', 'DELETE', 'PATCH'] list value Indicate which methods can be used during the actual request.
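As a brief illustration of the [cors] options in this table, a minimal /etc/nova/nova.conf sketch is shown below; the origin is an assumed dashboard hostname following the allowed_origin format described in this table.
[cors]
# Illustrative sketch; https://horizon.example.com is an assumed origin
allowed_origin = https://horizon.example.com
allow_credentials = True
max_age = 3600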
allowed_origin = None list value Indicate whether this resource may be shared with the domain received in the requests "origin" header. Format: "<protocol>://<host>[:<port>]", no trailing slash. Example: https://horizon.example.com expose_headers = ['X-Auth-Token', 'X-Openstack-Request-Id', 'X-Subject-Token', 'X-Service-Token', 'X-OpenStack-Nova-API-Version', 'OpenStack-API-Version'] list value Indicate which headers are safe to expose to the API. Defaults to HTTP Simple Headers. max_age = 3600 integer value Maximum cache age of CORS preflight requests. 9.1.13. cyborg The following table outlines the options available under the [cyborg] group in the /etc/nova/nova.conf file. Table 9.12. cyborg Configuration option = Default value Type Description cafile = None string value PEM encoded Certificate Authority to use when verifying HTTPs connections. certfile = None string value PEM encoded client certificate cert file collect-timing = False boolean value Collect per-API call timing information. connect-retries = None integer value The maximum number of retries that should be attempted for connection errors. connect-retry-delay = None floating point value Delay (in seconds) between two retries for connection errors. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used. endpoint-override = None string value Always use this endpoint URL for requests for this client. NOTE: The unversioned endpoint should be specified here; to request a particular API version, use the version , min-version , and/or max-version options. insecure = False boolean value Verify HTTPS connections. keyfile = None string value PEM encoded client certificate key file region-name = None string value The default region_name for endpoint URL discovery. service-name = None string value The default service_name for endpoint URL discovery. service-type = accelerator string value The default service_type for endpoint URL discovery. split-loggers = False boolean value Log requests to multiple loggers. status-code-retries = None integer value The maximum number of retries that should be attempted for retriable HTTP status codes. status-code-retry-delay = None floating point value Delay (in seconds) between two retries for retriable status codes. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used. timeout = None integer value Timeout value for http requests valid-interfaces = ['internal', 'public'] list value List of interfaces, in order of preference, for endpoint URL. 9.1.14. database The following table outlines the options available under the [database] group in the /etc/nova/nova.conf file. Table 9.13. database Configuration option = Default value Type Description backend = sqlalchemy string value The back end to use for the database. connection = None string value The SQLAlchemy connection string to use to connect to the database. connection_debug = 0 integer value Verbosity of SQL debugging information: 0=None, 100=Everything. `connection_parameters = ` string value Optional URL parameters to append onto the connection URL at connect time; specify as param1=value1&param2=value2&... connection_recycle_time = 3600 integer value Connections which have been present in the connection pool longer than this number of seconds will be replaced with a new one the time they are checked out from the pool. connection_trace = False boolean value Add Python stack traces to SQL as comment strings. 
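For illustration, the [database] options in this table might be combined in /etc/nova/nova.conf as sketched below; the connection URL, host, and password are assumptions, and the pool values restate the documented defaults.
[database]
# Illustrative sketch; the connection URL below is an assumption
connection = mysql+pymysql://nova:SECRET@db.example.com/nova
connection_recycle_time = 3600
max_pool_size = 5
max_overflow = 50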
db_inc_retry_interval = True boolean value If True, increases the interval between retries of a database operation up to db_max_retry_interval. db_max_retries = 20 integer value Maximum retries in case of connection error or deadlock error before error is raised. Set to -1 to specify an infinite retry count. db_max_retry_interval = 10 integer value If db_inc_retry_interval is set, the maximum seconds between retries of a database operation. db_retry_interval = 1 integer value Seconds between retries of a database transaction. max_overflow = 50 integer value If set, use this value for max_overflow with SQLAlchemy. max_pool_size = 5 integer value Maximum number of SQL connections to keep open in a pool. Setting a value of 0 indicates no limit. max_retries = 10 integer value Maximum number of database connection retries during startup. Set to -1 to specify an infinite retry count. mysql_enable_ndb = False boolean value If True, transparently enables support for handling MySQL Cluster (NDB). Deprecated since: 12.1.0 *Reason:*Support for the MySQL NDB Cluster storage engine has been deprecated and will be removed in a future release. mysql_sql_mode = TRADITIONAL string value The SQL mode to be used for MySQL sessions. This option, including the default, overrides any server-set SQL mode. To use whatever SQL mode is set by the server configuration, set this to no value. Example: mysql_sql_mode= mysql_wsrep_sync_wait = None integer value For Galera only, configure wsrep_sync_wait causality checks on new connections. Default is None, meaning don't configure any setting. pool_timeout = None integer value If set, use this value for pool_timeout with SQLAlchemy. retry_interval = 10 integer value Interval between retries of opening a SQL connection. slave_connection = None string value The SQLAlchemy connection string to use to connect to the slave database. sqlite_synchronous = True boolean value If True, SQLite uses synchronous mode. 9.1.15. devices The following table outlines the options available under the [devices] group in the /etc/nova/nova.conf file. Table 9.14. devices Configuration option = Default value Type Description enabled_mdev_types = [] list value The mdev types enabled in the compute node. Some hardware (e.g. NVIDIA GRID K1) support different mdev types. User can use this option to specify a list of enabled mdev types that may be assigned to a guest instance. If more than one single mdev type is provided, then for each mdev type an additional section, [mdev_USD(MDEV_TYPE)] , must be added to the configuration file. Each section then must be configured with a single configuration option, device_addresses , which should be a list of PCI addresses corresponding to the physical GPU(s) or mdev-capable hardware to assign to this type. If one or more sections are missing (meaning that a specific type is not wanted to use for at least one physical device) or if no device addresses are provided , then Nova will only use the first type that was provided by [devices]/enabled_mdev_types . If the same PCI address is provided for two different types, nova-compute will return an InvalidLibvirtMdevConfig exception at restart. As an interim period, old configuration groups named [vgpu_USD(MDEV_TYPE)] will be accepted. A valid configuration could then be:: 9.1.16. ephemeral_storage_encryption The following table outlines the options available under the [ephemeral_storage_encryption] group in the /etc/nova/nova.conf file. Table 9.15. 
ephemeral_storage_encryption Configuration option = Default value Type Description cipher = aes-xts-plain64 string value Cipher-mode string to be used. The cipher and mode to be used to encrypt ephemeral storage. The set of cipher-mode combinations available depends on kernel support. According to the dm-crypt documentation, the cipher is expected to be in the format: "<cipher>-<chainmode>-<ivmode>". Possible values: Any crypto option listed in /proc/crypto . enabled = False boolean value Enables/disables LVM ephemeral storage encryption. key_size = 512 integer value Encryption key length in bits. The bit length of the encryption key to be used to encrypt ephemeral storage. In XTS mode only half of the bits are used for encryption key. 9.1.17. filter_scheduler The following table outlines the options available under the [filter_scheduler] group in the /etc/nova/nova.conf file. Table 9.16. filter_scheduler Configuration option = Default value Type Description aggregate_image_properties_isolation_namespace = None string value Image property namespace for use in the host aggregate. Images and hosts can be configured so that certain images can only be scheduled to hosts in a particular aggregate. This is done with metadata values set on the host aggregate that are identified by beginning with the value of this option. If the host is part of an aggregate with such a metadata key, the image in the request spec must have the value of that metadata in its properties in order for the scheduler to consider the host as acceptable. Note that this setting only affects scheduling if the AggregateImagePropertiesIsolation filter is enabled. Possible values: A string, where the string corresponds to an image property namespace Related options: [filter_scheduler] aggregate_image_properties_isolation_separator aggregate_image_properties_isolation_separator = . string value Separator character(s) for image property namespace and name. When using the aggregate_image_properties_isolation filter, the relevant metadata keys are prefixed with the namespace defined in the aggregate_image_properties_isolation_namespace configuration option plus a separator. This option defines the separator to be used. Note that this setting only affects scheduling if the AggregateImagePropertiesIsolation filter is enabled. Possible values: A string, where the string corresponds to an image property namespace separator character Related options: [filter_scheduler] aggregate_image_properties_isolation_namespace available_filters = ['nova.scheduler.filters.all_filters'] multi valued Filters that the scheduler can use. An unordered list of the filter classes the nova scheduler may apply. Only the filters specified in the [filter_scheduler] enabled_filters option will be used, but any filter appearing in that option must also be included in this list. By default, this is set to all filters that are included with nova. Possible values: A list of zero or more strings, where each string corresponds to the name of a filter that may be used for selecting a host Related options: [filter_scheduler] enabled_filters build_failure_weight_multiplier = 1000000.0 floating point value Multiplier used for weighing hosts that have had recent build failures. This option determines how much weight is placed on a compute node with recent build failures. Build failures may indicate a failing, misconfigured, or otherwise ailing compute node, and avoiding it during scheduling may be beneficial. 
The weight is inversely proportional to the number of recent build failures the compute node has experienced. This value should be set to some high value to offset weight given by other enabled weighers due to available resources. To disable weighing compute hosts by the number of recent failures, set this to zero. Note that this setting only affects scheduling if the BuildFailureWeigher weigher is enabled. Possible values: An integer or float value, where the value corresponds to the multiplier ratio for this weigher. Related options: [compute] consecutive_build_service_disable_threshold - Must be nonzero for a compute to report data considered by this weigher. [filter_scheduler] weight_classes cpu_weight_multiplier = 1.0 floating point value CPU weight multiplier ratio. Multiplier used for weighting free vCPUs. Negative numbers indicate stacking rather than spreading. Note that this setting only affects scheduling if the CPUWeigher weigher is enabled. Possible values: An integer or float value, where the value corresponds to the multipler ratio for this weigher. Related options: [filter_scheduler] weight_classes cross_cell_move_weight_multiplier = 1000000.0 floating point value Multiplier used for weighing hosts during a cross-cell move. This option determines how much weight is placed on a host which is within the same source cell when moving a server, for example during cross-cell resize. By default, when moving an instance, the scheduler will prefer hosts within the same cell since cross-cell move operations can be slower and riskier due to the complicated nature of cross-cell migrations. Note that this setting only affects scheduling if the CrossCellWeigher weigher is enabled. If your cloud is not configured to support cross-cell migrations, then this option has no effect. The value of this configuration option can be overridden per host aggregate by setting the aggregate metadata key with the same name ( cross_cell_move_weight_multiplier ). Possible values: An integer or float value, where the value corresponds to the multiplier ratio for this weigher. Positive values mean the weigher will prefer hosts within the same cell in which the instance is currently running. Negative values mean the weigher will prefer hosts in other cells from which the instance is currently running. Related options: [filter_scheduler] weight_classes disk_weight_multiplier = 1.0 floating point value Disk weight multipler ratio. Multiplier used for weighing free disk space. Negative numbers mean to stack vs spread. Note that this setting only affects scheduling if the DiskWeigher weigher is enabled. Possible values: An integer or float value, where the value corresponds to the multipler ratio for this weigher. enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] list value Filters that the scheduler will use. An ordered list of filter class names that will be used for filtering hosts. These filters will be applied in the order they are listed so place your most restrictive filters first to make the filtering process more efficient. All of the filters in this option must be present in the [scheduler_filter] available_filter option, or a SchedulerHostFilterNotFound exception will be raised. 
Possible values: A list of zero or more strings, where each string corresponds to the name of a filter to be used for selecting a host Related options: [filter_scheduler] available_filters host_subset_size = 1 integer value Size of subset of best hosts selected by scheduler. New instances will be scheduled on a host chosen randomly from a subset of the N best hosts, where N is the value set by this option. Setting this to a value greater than 1 will reduce the chance that multiple scheduler processes handling similar requests will select the same host, creating a potential race condition. By selecting a host randomly from the N hosts that best fit the request, the chance of a conflict is reduced. However, the higher you set this value, the less optimal the chosen host may be for a given request. Possible values: An integer, where the integer corresponds to the size of a host subset. hypervisor_version_weight_multiplier = 1.0 floating point value Hypervisor Version weight multiplier ratio. The multiplier is used for weighting hosts based on the reported hypervisor version. Negative numbers indicate preferring older hosts; the default is to prefer newer hosts to aid with upgrades. Possible values: An integer or float value, where the value corresponds to the multiplier ratio for this weigher. Example: a large negative value strongly prefers older hosts, a small positive value moderately prefers newer hosts, and a value of 0 disables the weigher's influence. Related options: [filter_scheduler] weight_classes image_properties_default_architecture = None string value The default architecture to be used when using the image properties filter. When using the ImagePropertiesFilter , it is possible that you want to define a default architecture to make the user experience easier and avoid having something like x86_64 images landing on AARCH64 compute nodes because the user did not specify the hw_architecture property in Glance. Possible values: CPU Architectures such as x86_64, aarch64, s390x. io_ops_weight_multiplier = -1.0 floating point value IO operations weight multiplier ratio. This option determines how hosts with differing workloads are weighed. Negative values, such as the default, will result in the scheduler preferring hosts with lighter workloads whereas positive values will prefer hosts with heavier workloads. Another way to look at it is that positive values for this option will tend to schedule instances onto hosts that are already busy, while negative values will tend to distribute the workload across more hosts. The absolute value, whether positive or negative, controls how strong the io_ops weigher is relative to other weighers. Note that this setting only affects scheduling if the IoOpsWeigher weigher is enabled. Possible values: An integer or float value, where the value corresponds to the multiplier ratio for this weigher. Related options: [filter_scheduler] weight_classes isolated_hosts = [] list value List of hosts that can only run certain images. If there is a need to restrict some images to only run on certain designated hosts, list those host names here. Note that this setting only affects scheduling if the IsolatedHostsFilter filter is enabled. Possible values: A list of strings, where each string corresponds to the name of a host Related options: [filter_scheduler] isolated_images [filter_scheduler] restrict_isolated_hosts_to_isolated_images isolated_images = [] list value List of UUIDs for images that can only be run on certain hosts.
If there is a need to restrict some images to only run on certain designated hosts, list those image UUIDs here. Note that this setting only affects scheduling if the IsolatedHostsFilter filter is enabled. Possible values: A list of UUID strings, where each string corresponds to the UUID of an image Related options: [filter_scheduler] isolated_hosts [filter_scheduler] restrict_isolated_hosts_to_isolated_images max_instances_per_host = 50 integer value Maximum number of instances that can exist on a host. If you need to limit the number of instances on any given host, set this option to the maximum number of instances you want to allow. The NumInstancesFilter and AggregateNumInstancesFilter will reject any host that has at least as many instances as this option's value. Note that this setting only affects scheduling if the NumInstancesFilter or AggregateNumInstancesFilter filter is enabled. Possible values: An integer, where the integer corresponds to the max instances that can be scheduled on a host. Related options: [filter_scheduler] enabled_filters max_io_ops_per_host = 8 integer value The number of instances that can be actively performing IO on a host. Instances performing IO includes those in the following states: build, resize, snapshot, migrate, rescue, unshelve. Note that this setting only affects scheduling if the IoOpsFilter filter is enabled. Possible values: An integer, where the integer corresponds to the max number of instances that can be actively performing IO on any given host. Related options: [filter_scheduler] enabled_filters pci_in_placement = False boolean value Enable scheduling and claiming PCI devices in Placement. This can be enabled after [pci]report_in_placement is enabled on all compute hosts. When enabled the scheduler queries Placement about the PCI device availability to select destination for a server with PCI request. The scheduler also allocates the selected PCI devices in Placement. Note that this logic does not replace the PCIPassthroughFilter but extends it. [pci] report_in_placement [pci] alias [pci] device_spec pci_weight_multiplier = 1.0 floating point value PCI device affinity weight multiplier. The PCI device affinity weighter computes a weighting based on the number of PCI devices on the host and the number of PCI devices requested by the instance. Note that this setting only affects scheduling if the PCIWeigher weigher and NUMATopologyFilter filter are enabled. Possible values: A positive integer or float value, where the value corresponds to the multiplier ratio for this weigher. Related options: [filter_scheduler] weight_classes ram_weight_multiplier = 1.0 floating point value RAM weight multipler ratio. This option determines how hosts with more or less available RAM are weighed. A positive value will result in the scheduler preferring hosts with more available RAM, and a negative number will result in the scheduler preferring hosts with less available RAM. Another way to look at it is that positive values for this option will tend to spread instances across many hosts, while negative values will tend to fill up (stack) hosts as much as possible before scheduling to a less-used host. The absolute value, whether positive or negative, controls how strong the RAM weigher is relative to other weighers. Note that this setting only affects scheduling if the RAMWeigher weigher is enabled. Possible values: An integer or float value, where the value corresponds to the multipler ratio for this weigher. 
Related options: [filter_scheduler] weight_classes restrict_isolated_hosts_to_isolated_images = True boolean value Prevent non-isolated images from being built on isolated hosts. Note that this setting only affects scheduling if the IsolatedHostsFilter filter is enabled. Even then, this option doesn't affect the behavior of requests for isolated images, which will always be restricted to isolated hosts. Related options: [filter_scheduler] isolated_images [filter_scheduler] isolated_hosts shuffle_best_same_weighed_hosts = False boolean value Enable spreading the instances between hosts with the same best weight. Enabling it is beneficial for cases when [filter_scheduler] host_subset_size is 1 (default), but there is a large number of hosts with same maximal weight. This scenario is common in Ironic deployments where there are typically many baremetal nodes with identical weights returned to the scheduler. In such case enabling this option will reduce contention and chances for rescheduling events. At the same time it will make the instance packing (even in unweighed case) less dense. soft_affinity_weight_multiplier = 1.0 floating point value Multiplier used for weighing hosts for group soft-affinity. Note that this setting only affects scheduling if the ServerGroupSoftAffinityWeigher weigher is enabled. Possible values: A non-negative integer or float value, where the value corresponds to weight multiplier for hosts with group soft affinity. Related options: [filter_scheduler] weight_classes soft_anti_affinity_weight_multiplier = 1.0 floating point value Multiplier used for weighing hosts for group soft-anti-affinity. Note that this setting only affects scheduling if the ServerGroupSoftAntiAffinityWeigher weigher is enabled. Possible values: A non-negative integer or float value, where the value corresponds to weight multiplier for hosts with group soft anti-affinity. Related options: [filter_scheduler] weight_classes track_instance_changes = True boolean value Enable querying of individual hosts for instance information. The scheduler may need information about the instances on a host in order to evaluate its filters and weighers. The most common need for this information is for the (anti-)affinity filters, which need to choose a host based on the instances already running on a host. If the configured filters and weighers do not need this information, disabling this option will improve performance. It may also be disabled when the tracking overhead proves too heavy, although this will cause classes requiring host usage data to query the database on each request instead. note:: Related options: [filter_scheduler] enabled_filters [workarounds] disable_group_policy_check_upcall weight_classes = ['nova.scheduler.weights.all_weighers'] list value Weighers that the scheduler will use. Only hosts which pass the filters are weighed. The weight for any host starts at 0, and the weighers order these hosts by adding to or subtracting from the weight assigned by the weigher. Weights may become negative. An instance will be scheduled to one of the N most-weighted hosts, where N is [filter_scheduler] host_subset_size . By default, this is set to all weighers that are included with Nova. Possible values: A list of zero or more strings, where each string corresponds to the name of a weigher that will be used for selecting a host 9.1.18. glance The following table outlines the options available under the [glance] group in the /etc/nova/nova.conf file. Table 9.17. 
glance Configuration option = Default value Type Description api_servers = None list value List of glance api servers endpoints available to nova. https is used for ssl-based glance api servers. Note The preferred mechanism for endpoint discovery is via keystoneauth1 loading options. Only use api_servers if you need multiple endpoints and are unable to use a load balancer for some reason. Possible values: A list of any fully qualified url of the form "scheme://hostname:port[/path]" (i.e. "http://10.0.1.0:9292" or "https://my.glance.server/image"). Deprecated since: 21.0.0 Reason: Support for image service configuration via standard keystoneauth1 Adapter options was added in the 17.0.0 Queens release. The api_servers option was retained temporarily to allow consumers time to cut over to a real load balancing solution. cafile = None string value PEM encoded Certificate Authority to use when verifying HTTPs connections. certfile = None string value PEM encoded client certificate cert file collect-timing = False boolean value Collect per-API call timing information. connect-retries = None integer value The maximum number of retries that should be attempted for connection errors. connect-retry-delay = None floating point value Delay (in seconds) between two retries for connection errors. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used. debug = False boolean value Enable or disable debug logging with glanceclient. default_trusted_certificate_ids = [] list value List of certificate IDs for certificates that should be trusted. May be used as a default list of trusted certificate IDs for certificate validation. The value of this option will be ignored if the user provides a list of trusted certificate IDs with an instance API request. The value of this option will be persisted with the instance data if signature verification and certificate validation are enabled and if the user did not provide an alternative list. If left empty when certificate validation is enabled the user must provide a list of trusted certificate IDs otherwise certificate validation will fail. Related options: The value of this option may be used if both verify_glance_signatures and enable_certificate_validation are enabled. enable_certificate_validation = False boolean value Enable certificate validation for image signature verification. During image signature verification nova will first verify the validity of the image's signing certificate using the set of trusted certificates associated with the instance. If certificate validation fails, signature verification will not be performed and the instance will be placed into an error state. This provides end users with stronger assurances that the image data is unmodified and trustworthy. If left disabled, image signature verification can still occur but the end user will not have any assurance that the signing certificate used to generate the image signature is still trustworthy. Related options: This option only takes effect if verify_glance_signatures is enabled. The value of default_trusted_certificate_ids may be used when this option is enabled. Deprecated since: 16.0.0 Reason: This option is intended to ease the transition for deployments leveraging image signature verification. The intended state long-term is for signature verification and certificate validation to always happen together. enable_rbd_download = False boolean value Enable Glance image downloads directly via RBD. 
Allow non-rbd computes using local storage to download and cache images from Ceph via rbd rather than the Glance API via http. note:: This option should only be enabled when the compute itself is not also using Ceph as a backing store. For example with the libvirt driver it should only be enabled when :oslo.config:option: libvirt.images_type is not set to rbd . Related options: :oslo.config:option: glance.rbd_user :oslo.config:option: glance.rbd_connect_timeout :oslo.config:option: glance.rbd_pool :oslo.config:option: glance.rbd_ceph_conf :oslo.config:option: libvirt.images_type endpoint-override = None string value Always use this endpoint URL for requests for this client. NOTE: The unversioned endpoint should be specified here; to request a particular API version, use the version , min-version , and/or max-version options. insecure = False boolean value Verify HTTPS connections. keyfile = None string value PEM encoded client certificate key file num_retries = 3 integer value Enable glance operation retries. Specifies the number of retries when uploading / downloading an image to / from glance. 0 means no retries. `rbd_ceph_conf = ` string value Path to the ceph configuration file to use. Related options: This option is only used if :oslo.config:option: glance.enable_rbd_download is set to True . rbd_connect_timeout = 5 integer value The RADOS client timeout in seconds when initially connecting to the cluster. Related options: This option is only used if :oslo.config:option: glance.enable_rbd_download is set to True . `rbd_pool = ` string value The RADOS pool in which the Glance images are stored as rbd volumes. Related options: This option is only used if :oslo.config:option: glance.enable_rbd_download is set to True . `rbd_user = ` string value The RADOS client name for accessing Glance images stored as rbd volumes. Related options: This option is only used if :oslo.config:option: glance.enable_rbd_download is set to True . region-name = None string value The default region_name for endpoint URL discovery. service-name = None string value The default service_name for endpoint URL discovery. service-type = image string value The default service_type for endpoint URL discovery. split-loggers = False boolean value Log requests to multiple loggers. status-code-retries = None integer value The maximum number of retries that should be attempted for retriable HTTP status codes. status-code-retry-delay = None floating point value Delay (in seconds) between two retries for retriable status codes. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used. timeout = None integer value Timeout value for http requests valid-interfaces = ['internal', 'public'] list value List of interfaces, in order of preference, for endpoint URL. verify_glance_signatures = False boolean value Enable image signature verification. nova uses the image signature metadata from glance and verifies the signature of a signed image while downloading that image. If the image signature cannot be verified or if the image signature metadata is either incomplete or unavailable, then nova will not boot the image and instead will place the instance into an error state. This provides end users with stronger assurances of the integrity of the image data they are using to create servers. Related options: The options in the key_manager group, as the key_manager is used for the signature validation. 
Both enable_certificate_validation and default_trusted_certificate_ids below depend on this option being enabled. 9.1.19. guestfs The following table outlines the options available under the [guestfs] group in the /etc/nova/nova.conf file. Table 9.18. guestfs Configuration option = Default value Type Description debug = False boolean value Enables/disables guestfs logging. This configures guestfs to emit debug messages and push them to the OpenStack logging system. When set to True, it traces libguestfs API calls and enables verbose debug messages. In order to use this feature, the "libguestfs" package must be installed. Related options: Since libguestfs accesses and modifies VMs managed by libvirt, the below options should be set to give access to those VMs. libvirt.inject_key libvirt.inject_partition libvirt.inject_password 9.1.20. healthcheck The following table outlines the options available under the [healthcheck] group in the /etc/nova/nova.conf file. Table 9.19. healthcheck Configuration option = Default value Type Description backends = [] list value Additional backends that can perform health checks and report that information back as part of a request. detailed = False boolean value Show more detailed information as part of the response. Security note: Enabling this option may expose sensitive details about the service being monitored. Be sure to verify that it will not violate your security policies. disable_by_file_path = None string value Check the presence of a file to determine if an application is running on a port. Used by DisableByFileHealthcheck plugin. disable_by_file_paths = [] list value Check the presence of a file based on a port to determine if an application is running on a port. Expects a "port:path" list of strings. Used by DisableByFilesPortsHealthcheck plugin. path = /healthcheck string value The path to respond to healthcheck requests on. 9.1.21. hyperv The following table outlines the options available under the [hyperv] group in the /etc/nova/nova.conf file. Table 9.20. hyperv Configuration option = Default value Type Description config_drive_cdrom = False boolean value Mount config drive as a CD drive. OpenStack can be configured to write instance metadata to a config drive, which is then attached to the instance before it boots. The config drive can be attached as a disk drive (default) or as a CD drive. Related options: This option is meaningful with the force_config_drive option set to True or when the REST API call to create an instance has the --config-drive=True flag. The config_drive_format option must be set to iso9660 in order to use a CD drive as the config drive image. To use config drive with Hyper-V, you must set the mkisofs_cmd value to the full path to an mkisofs.exe installation. Additionally, you must set the qemu_img_cmd value to the full path to a qemu-img command installation. You can configure the Compute service to always create a configuration drive by setting the force_config_drive option to True . config_drive_inject_password = False boolean value Inject password to config drive. When enabled, the admin password will be available from the config drive image. Related options: This option is meaningful when used with other options that enable config drive usage with Hyper-V, such as force_config_drive . dynamic_memory_ratio = 1.0 floating point value Dynamic memory ratio Enables dynamic memory allocation (ballooning) when set to a value greater than 1. The value expresses the ratio between the total RAM assigned to an instance and its startup RAM amount.
For example a ratio of 2.0 for an instance with 1024MB of RAM implies 512MB of RAM allocated at startup. Possible values: 1.0: Disables dynamic memory allocation (Default). Float values greater than 1.0: Enables allocation of total implied RAM divided by this value for startup. enable_instance_metrics_collection = False boolean value Enable instance metrics collection Enables metrics collections for an instance by using Hyper-V's metric APIs. Collected data can be retrieved by other apps and services, e.g.: Ceilometer. enable_remotefx = False boolean value Enable RemoteFX feature This requires at least one DirectX 11 capable graphics adapter for Windows / Hyper-V Server 2012 R2 or newer and RDS-Virtualization feature has to be enabled. Instances with RemoteFX can be requested with the following flavor extra specs: os:resolution . Guest VM screen resolution size. Acceptable values 1024x768, 1280x1024, 1600x1200, 1920x1200, 2560x1600, 3840x2160 3840x2160 is only available on Windows / Hyper-V Server 2016. os:monitors . Guest VM number of monitors. Acceptable values [1, 4] - Windows / Hyper-V Server 2012 R2 [1, 8] - Windows / Hyper-V Server 2016 os:vram . Guest VM VRAM amount. Only available on Windows / Hyper-V Server 2016. Acceptable values:: `instances_path_share = ` string value Instances path share The name of a Windows share mapped to the "instances_path" dir and used by the resize feature to copy files to the target host. If left blank, an administrative share (hidden network share) will be used, looking for the same "instances_path" used locally. Possible values: "": An administrative share will be used (Default). Name of a Windows share. Related options: "instances_path": The directory which will be used if this option here is left blank. iscsi_initiator_list = [] list value List of iSCSI initiators that will be used for establishing iSCSI sessions. If none are specified, the Microsoft iSCSI initiator service will choose the initiator. limit_cpu_features = False boolean value Limit CPU features This flag is needed to support live migration to hosts with different CPU features and checked during instance creation in order to limit the CPU features used by the instance. mounted_disk_query_retry_count = 10 integer value Mounted disk query retry count The number of times to retry checking for a mounted disk. The query runs until the device can be found or the retry count is reached. Possible values: Positive integer values. Values greater than 1 is recommended (Default: 10). Related options: Time interval between disk mount retries is declared with "mounted_disk_query_retry_interval" option. mounted_disk_query_retry_interval = 5 integer value Mounted disk query retry interval Interval between checks for a mounted disk, in seconds. Possible values: Time in seconds (Default: 5). Related options: This option is meaningful when the mounted_disk_query_retry_count is greater than 1. The retry loop runs with mounted_disk_query_retry_count and mounted_disk_query_retry_interval configuration options. power_state_check_timeframe = 60 integer value Power state check timeframe The timeframe to be checked for instance power state changes. This option is used to fetch the state of the instance from Hyper-V through the WMI interface, within the specified timeframe. Possible values: Timeframe in seconds (Default: 60). power_state_event_polling_interval = 2 integer value Power state event polling interval Instance power state change event polling frequency. 
Sets the listener interval for power state events to the given value. This option enhances the internal lifecycle notifications of instances that reboot themselves. It is unlikely that an operator has to change this value. Possible values: Time in seconds (Default: 2). qemu_img_cmd = qemu-img.exe string value qemu-img command qemu-img is required for some of the image related operations like converting between different image types. You can get it from here: ( http://qemu.weilnetz.de/ ) or you can install the Cloudbase OpenStack Hyper-V Compute Driver ( https://cloudbase.it/openstack-hyperv-driver/ ) which automatically sets the proper path for this config option. You can either give the full path of qemu-img.exe or set its path in the PATH environment variable and leave this option to the default value. Possible values: Name of the qemu-img executable, in case it is in the same directory as the nova-compute service or its path is in the PATH environment variable (Default). Path of qemu-img command (DRIVELETTER:\PATH\TO\QEMU-IMG\COMMAND). Related options: If the config_drive_cdrom option is False, qemu-img will be used to convert the ISO to a VHD, otherwise the config drive will remain an ISO. To use config drive with Hyper-V, you must set the mkisofs_cmd value to the full path to an mkisofs.exe installation. use_multipath_io = False boolean value Use multipath connections when attaching iSCSI or FC disks. This requires the Multipath IO Windows feature to be enabled. MPIO must be configured to claim such devices. volume_attach_retry_count = 10 integer value Volume attach retry count The number of times to retry attaching a volume. Volume attachment is retried until success or the given retry count is reached. Possible values: Positive integer values (Default: 10). Related options: Time interval between attachment attempts is declared with volume_attach_retry_interval option. volume_attach_retry_interval = 5 integer value Volume attach retry interval Interval between volume attachment attempts, in seconds. Possible values: Time in seconds (Default: 5). Related options: This options is meaningful when volume_attach_retry_count is greater than 1. The retry loop runs with volume_attach_retry_count and volume_attach_retry_interval configuration options. vswitch_name = None string value External virtual switch name The Hyper-V Virtual Switch is a software-based layer-2 Ethernet network switch that is available with the installation of the Hyper-V server role. The switch includes programmatically managed and extensible capabilities to connect virtual machines to both virtual networks and the physical network. In addition, Hyper-V Virtual Switch provides policy enforcement for security, isolation, and service levels. The vSwitch represented by this config option must be an external one (not internal or private). Possible values: If not provided, the first of a list of available vswitches is used. This list is queried using WQL. Virtual switch name. wait_soft_reboot_seconds = 60 integer value Wait soft reboot seconds Number of seconds to wait for instance to shut down after soft reboot request is made. We fall back to hard reboot if instance does not shutdown within this window. Possible values: Time in seconds (Default: 60). 9.1.22. image_cache The following table outlines the options available under the [image_cache] group in the /etc/nova/nova.conf file. Table 9.21. 
image_cache Configuration option = Default value Type Description manager_interval = 2400 integer value Number of seconds to wait between runs of the image cache manager. Note that when using shared storage for the [DEFAULT]/instances_path configuration option across multiple nova-compute services, this periodic could process a large number of instances. Similarly, using a compute driver that manages a cluster (like vmwareapi.VMwareVCDriver) could result in processing a large number of instances. Therefore you may need to adjust the time interval for the anticipated load, or only run on one nova-compute service within a shared storage aggregate. Additional note, every time the image_cache_manager runs the timestamps of images in [DEFAULT]/instances_path are updated. Possible values: 0: run at the default interval of 60 seconds (not recommended) -1: disable Any other value Related options: [DEFAULT]/compute_driver [DEFAULT]/instances_path precache_concurrency = 1 integer value Maximum number of compute hosts to trigger image precaching in parallel. When an image precache request is made, compute nodes will be contacted to initiate the download. This number constrains the number of those that will happen in parallel. Higher numbers will cause more computes to work in parallel and may result in reduced time to complete the operation, but may also DDoS the image service. Lower numbers will result in more sequential operation, lower image service load, but likely longer runtime to completion. remove_unused_base_images = True boolean value Should unused base images be removed? When there are no remaining instances on the hypervisor created from this base image or linked to it, the base image is considered unused. remove_unused_original_minimum_age_seconds = 86400 integer value Unused unresized base images younger than this will not be removed. remove_unused_resized_minimum_age_seconds = 3600 integer value Unused resized base images younger than this will not be removed. subdirectory_name = _base string value Location of cached images. This is NOT the full path - just a folder name relative to USDinstances_path . For per-compute-host cached images, set to base USDmy_ip 9.1.23. ironic The following table outlines the options available under the [ironic] group in the /etc/nova/nova.conf file. Table 9.22. ironic Configuration option = Default value Type Description api_max_retries = 60 integer value The number of times to retry when a request conflicts. If set to 0, only try once, no retries. Related options: api_retry_interval api_retry_interval = 2 integer value The number of seconds to wait before retrying the request. Related options: api_max_retries auth-url = None string value Authentication URL auth_section = None string value Config Section from which to load plugin specific options auth_type = None string value Authentication type to load cafile = None string value PEM encoded Certificate Authority to use when verifying HTTPs connections. certfile = None string value PEM encoded client certificate cert file collect-timing = False boolean value Collect per-API call timing information. connect-retries = None integer value The maximum number of retries that should be attempted for connection errors. connect-retry-delay = None floating point value Delay (in seconds) between two retries for connection errors. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used. 
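To make the retry and authentication options in this [ironic] table more concrete, the following is a minimal nova.conf sketch; the Keystone URL, project, and credential values are invented placeholders, not values taken from this reference, and the underscore spelling of option names (auth_url) is the form commonly used in deployed configuration files.

[ironic]
# Retry a conflicting request up to 60 times, waiting 2 seconds between
# attempts (the documented defaults, repeated here only for illustration).
api_max_retries = 60
api_retry_interval = 2
# Keystone authentication for reaching the Bare Metal service; every value
# below is a hypothetical placeholder.
auth_type = password
auth_url = http://keystone.example.com:5000/v3
project_name = service
project_domain_name = Default
username = ironic
user_domain_name = Default
password = IRONIC_SERVICE_PASSWORD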
domain-id = None string value Domain ID to scope to domain-name = None string value Domain name to scope to endpoint-override = None string value Always use this endpoint URL for requests for this client. NOTE: The unversioned endpoint should be specified here; to request a particular API version, use the version , min-version , and/or max-version options. insecure = False boolean value Verify HTTPS connections. keyfile = None string value PEM encoded client certificate key file partition_key = None string value Case-insensitive key to limit the set of nodes that may be managed by this service to the set of nodes in Ironic which have a matching conductor_group property. If unset, all available nodes will be eligible to be managed by this service. Note that setting this to the empty string ( "" ) will match the default conductor group, and is different than leaving the option unset. password = None string value User's password peer_list = [] list value List of hostnames for all nova-compute services (including this host) with this partition_key config value. Nodes matching the partition_key value will be distributed between all services specified here. If partition_key is unset, this option is ignored. project-domain-id = None string value Domain ID containing project project-domain-name = None string value Domain name containing project project-id = None string value Project ID to scope to project-name = None string value Project name to scope to region-name = None string value The default region_name for endpoint URL discovery. serial_console_state_timeout = 10 integer value Timeout (seconds) to wait for node serial console state changed. Set to 0 to disable timeout. service-name = None string value The default service_name for endpoint URL discovery. service-type = baremetal string value The default service_type for endpoint URL discovery. split-loggers = False boolean value Log requests to multiple loggers. status-code-retries = None integer value The maximum number of retries that should be attempted for retriable HTTP status codes. status-code-retry-delay = None floating point value Delay (in seconds) between two retries for retriable status codes. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used. system-scope = None string value Scope for system operations timeout = None integer value Timeout value for http requests trust-id = None string value ID of the trust to use as a trustee use user-domain-id = None string value User's domain id user-domain-name = None string value User's domain name user-id = None string value User ID username = None string value Username valid-interfaces = ['internal', 'public'] list value List of interfaces, in order of preference, for endpoint URL. 9.1.24. key_manager The following table outlines the options available under the [key_manager] group in the /etc/nova/nova.conf file. Table 9.23. key_manager Configuration option = Default value Type Description auth_type = None string value The type of authentication credential to create. Possible values are token , password , keystone_token , and keystone_password . Required if no context is passed to the credential factory. auth_url = None string value Use this endpoint to connect to Keystone. backend = barbican string value Specify the key manager implementation. Options are "barbican" and "vault". Default is "barbican". Will support the values earlier set using [key_manager]/api_class for some time. domain_id = None string value Domain ID for domain scoping. 
Optional for keystone_token and keystone_password auth_type. domain_name = None string value Domain name for domain scoping. Optional for keystone_token and keystone_password auth_type. fixed_key = None string value Fixed key returned by key manager, specified in hex. Possible values: Empty string or a key in hex value password = None string value Password for authentication. Required for password and keystone_password auth_type. project_domain_id = None string value Project's domain ID for project. Optional for keystone_token and keystone_password auth_type. project_domain_name = None string value Project's domain name for project. Optional for keystone_token and keystone_password auth_type. project_id = None string value Project ID for project scoping. Optional for keystone_token and keystone_password auth_type. project_name = None string value Project name for project scoping. Optional for keystone_token and keystone_password auth_type. reauthenticate = True boolean value Allow fetching a new token if the current one is going to expire. Optional for keystone_token and keystone_password auth_type. token = None string value Token for authentication. Required for token and keystone_token auth_type if no context is passed to the credential factory. trust_id = None string value Trust ID for trust scoping. Optional for keystone_token and keystone_password auth_type. user_domain_id = None string value User's domain ID for authentication. Optional for keystone_token and keystone_password auth_type. user_domain_name = None string value User's domain name for authentication. Optional for keystone_token and keystone_password auth_type. user_id = None string value User ID for authentication. Optional for keystone_token and keystone_password auth_type. username = None string value Username for authentication. Required for password auth_type. Optional for the keystone_password auth_type. 9.1.25. keystone The following table outlines the options available under the [keystone] group in the /etc/nova/nova.conf file. Table 9.24. keystone Configuration option = Default value Type Description auth-url = None string value Authentication URL auth_section = None string value Config Section from which to load plugin specific options auth_type = None string value Authentication type to load cafile = None string value PEM encoded Certificate Authority to use when verifying HTTPs connections. certfile = None string value PEM encoded client certificate cert file collect-timing = False boolean value Collect per-API call timing information. connect-retries = None integer value The maximum number of retries that should be attempted for connection errors. connect-retry-delay = None floating point value Delay (in seconds) between two retries for connection errors. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used. default-domain-id = None string value Optional domain ID to use with v3 and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. default-domain-name = None string value Optional domain name to use with v3 API and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. domain-id = None string value Domain ID to scope to domain-name = None string value Domain name to scope to endpoint-override = None string value Always use this endpoint URL for requests for this client. 
NOTE: The unversioned endpoint should be specified here; to request a particular API version, use the version , min-version , and/or max-version options. insecure = False boolean value Verify HTTPS connections. keyfile = None string value PEM encoded client certificate key file password = None string value User's password project-domain-id = None string value Domain ID containing project project-domain-name = None string value Domain name containing project project-id = None string value Project ID to scope to project-name = None string value Project name to scope to region-name = None string value The default region_name for endpoint URL discovery. service-name = None string value The default service_name for endpoint URL discovery. service-type = identity string value The default service_type for endpoint URL discovery. split-loggers = False boolean value Log requests to multiple loggers. status-code-retries = None integer value The maximum number of retries that should be attempted for retriable HTTP status codes. status-code-retry-delay = None floating point value Delay (in seconds) between two retries for retriable status codes. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used. system-scope = None string value Scope for system operations tenant-id = None string value Tenant ID tenant-name = None string value Tenant Name timeout = None integer value Timeout value for http requests trust-id = None string value ID of the trust to use as a trustee use user-domain-id = None string value User's domain id user-domain-name = None string value User's domain name user-id = None string value User ID username = None string value Username valid-interfaces = ['internal', 'public'] list value List of interfaces, in order of preference, for endpoint URL. 9.1.26. keystone_authtoken The following table outlines the options available under the [keystone_authtoken] group in the /etc/nova/nova.conf file. Table 9.25. keystone_authtoken Configuration option = Default value Type Description auth_section = None string value Config Section from which to load plugin specific options auth_type = None string value Authentication type to load auth_uri = None string value Complete "public" Identity API endpoint. This endpoint should not be an "admin" endpoint, as it should be accessible by all end users. Unauthenticated clients are redirected to this endpoint to authenticate. Although this endpoint should ideally be unversioned, client support in the wild varies. If you're using a versioned v2 endpoint here, then this should not be the same endpoint the service user utilizes for validating tokens, because normal end users may not be able to reach that endpoint. This option is deprecated in favor of www_authenticate_uri and will be removed in the S release. Deprecated since: Queens *Reason:*The auth_uri option is deprecated in favor of www_authenticate_uri and will be removed in the S release. auth_version = None string value API version of the Identity API endpoint. cache = None string value Request environment key where the Swift cache object is stored. When auth_token middleware is deployed with a Swift cache, use this option to have the middleware share a caching backend with swift. Otherwise, use the memcached_servers option instead. cafile = None string value A PEM encoded Certificate Authority to use when verifying HTTPs connections. Defaults to system CAs. 
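As a hedged illustration of how the [keystone_authtoken] options in this table are typically combined, the sketch below configures token validation against Keystone using www_authenticate_uri (the replacement for the deprecated auth_uri option described above); every URL and credential shown is an invented placeholder.

[keystone_authtoken]
# Public Identity endpoint that unauthenticated clients are redirected to;
# this replaces the deprecated auth_uri option.
www_authenticate_uri = http://keystone.example.com:5000/
# Authentication plugin and service credentials used when validating incoming
# tokens; the values below are hypothetical placeholders.
auth_type = password
auth_url = http://keystone.example.com:5000/
project_name = service
project_domain_name = Default
username = nova
user_domain_name = Default
password = NOVA_SERVICE_PASSWORD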
certfile = None string value Required if identity server requires client certificate delay_auth_decision = False boolean value Do not handle authorization requests within the middleware, but delegate the authorization decision to downstream WSGI components. enforce_token_bind = permissive string value Used to control the use and type of token binding. Can be set to: "disabled" to not check token binding. "permissive" (default) to validate binding information if the bind type is of a form known to the server and ignore it if not. "strict" like "permissive" but if the bind type is unknown the token will be rejected. "required" any form of token binding is needed to be allowed. Finally the name of a binding method that must be present in tokens. http_connect_timeout = None integer value Request timeout value for communicating with Identity API server. http_request_max_retries = 3 integer value How many times are we trying to reconnect when communicating with Identity API Server. include_service_catalog = True boolean value (Optional) Indicate whether to set the X-Service-Catalog header. If False, middleware will not ask for service catalog on token validation and will not set the X-Service-Catalog header. insecure = False boolean value Verify HTTPS connections. interface = internal string value Interface to use for the Identity API endpoint. Valid values are "public", "internal" (default) or "admin". keyfile = None string value Required if identity server requires client certificate memcache_pool_conn_get_timeout = 10 integer value (Optional) Number of seconds that an operation will wait to get a memcached client connection from the pool. memcache_pool_dead_retry = 300 integer value (Optional) Number of seconds memcached server is considered dead before it is tried again. memcache_pool_maxsize = 10 integer value (Optional) Maximum total number of open connections to every memcached server. memcache_pool_socket_timeout = 3 integer value (Optional) Socket timeout in seconds for communicating with a memcached server. memcache_pool_unused_timeout = 60 integer value (Optional) Number of seconds a connection to memcached is held unused in the pool before it is closed. memcache_secret_key = None string value (Optional, mandatory if memcache_security_strategy is defined) This string is used for key derivation. memcache_security_strategy = None string value (Optional) If defined, indicate whether token data should be authenticated or authenticated and encrypted. If MAC, token data is authenticated (with HMAC) in the cache. If ENCRYPT, token data is encrypted and authenticated in the cache. If the value is not one of these options or empty, auth_token will raise an exception on initialization. memcache_use_advanced_pool = True boolean value (Optional) Use the advanced (eventlet safe) memcached client pool. memcached_servers = None list value Optionally specify a list of memcached server(s) to use for caching. If left undefined, tokens will instead be cached in-process. region_name = None string value The region in which the identity server can be found. service_token_roles = ['service'] list value A choice of roles that must be present in a service token. Service tokens are allowed to request that an expired token can be used and so this check should tightly control that only actual services should be sending this token. Roles here are applied as an ANY check so any role in this list must be present. For backwards compatibility reasons this currently only affects the allow_expired check. 
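The memcache-related options above are easier to read as a working group; the following is one plausible sketch for caching validated tokens in memcached, with the server address and secret key as placeholders.

[keystone_authtoken]
# Cache validated tokens in memcached rather than in-process.
memcached_servers = controller.example.com:11211
# Authenticate and encrypt the cached token data; the key is a placeholder
# and must be kept secret.
memcache_security_strategy = ENCRYPT
memcache_secret_key = REPLACE_WITH_RANDOM_SECRET
# Keep the eventlet-safe advanced pool (the documented default) and cap the
# number of open connections per memcached server.
memcache_use_advanced_pool = True
memcache_pool_maxsize = 10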
service_token_roles_required = False boolean value For backwards compatibility reasons we must let valid service tokens pass that don't pass the service_token_roles check as valid. Setting this true will become the default in a future release and should be enabled if possible. service_type = None string value The name or type of the service as it appears in the service catalog. This is used to validate tokens that have restricted access rules. token_cache_time = 300 integer value In order to prevent excessive effort spent validating tokens, the middleware caches previously-seen tokens for a configurable duration (in seconds). Set to -1 to disable caching completely. www_authenticate_uri = None string value Complete "public" Identity API endpoint. This endpoint should not be an "admin" endpoint, as it should be accessible by all end users. Unauthenticated clients are redirected to this endpoint to authenticate. Although this endpoint should ideally be unversioned, client support in the wild varies. If you're using a versioned v2 endpoint here, then this should not be the same endpoint the service user utilizes for validating tokens, because normal end users may not be able to reach that endpoint. 9.1.27. libvirt The following table outlines the options available under the [libvirt] group in the /etc/nova/nova.conf file. Table 9.26. libvirt Configuration option = Default value Type Description `connection_uri = ` string value Overrides the default libvirt URI of the chosen virtualization type. If set, Nova will use this URI to connect to libvirt. Possible values: A URI like qemu:///system . Related options: virt_type : Influences what is used as the default value here. cpu_mode = None string value Is used to set the CPU mode an instance should have. If virt_type="kvm|qemu" , it will default to host-model , otherwise it will default to none . Related options: cpu_models : This should be set ONLY when cpu_mode is set to custom . Otherwise, it would result in an error and the instance launch will fail. cpu_model_extra_flags = [] list value Enable or disable guest CPU flags. To explicitly enable or disable CPU flags, use the +flag or -flag notation - the + sign will enable the CPU flag for the guest, while a - sign will disable it. If neither + nor - is specified, the flag will be enabled, which is the default behaviour. For example, if you specify flags as in the first sketch below (assuming the said CPU model and features are supported by the host hardware and software), Nova will disable the hle and rtm flags for the guest; and it will enable ssbd and mtrr (because they were specified with neither + nor - prefix). The CPU flags are case-insensitive. In the second sketch below, the pdpe1gb flag will be disabled for the guest; vmx and pcid flags will be enabled. Specifying extra CPU flags is valid in combination with all the three possible values of the cpu_mode config attribute: custom (this also requires an explicit CPU model to be specified via the cpu_models config attribute), host-model , or host-passthrough . There can be scenarios where you may need to configure extra CPU flags even for host-passthrough CPU mode, because sometimes QEMU may disable certain CPU features. An example of this is Intel's "invtsc" (Invariable Time Stamp Counter) CPU flag - if you need to expose this flag to a Nova instance, you need to explicitly enable it. The possible values for cpu_model_extra_flags depend on the CPU model in use. Refer to /usr/share/libvirt/cpu_map/*.xml for possible CPU feature flags for a given CPU model.
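The two literal examples referred to in the cpu_model_extra_flags description above appear to have been dropped during conversion; the sketches below reconstruct them in a hedged form. The flag names match the surrounding text, but the CPU model name Cascadelake-Server and the cpu_mode values are only illustrative assumptions, not values mandated by this reference.

First sketch - disable hle and rtm, enable ssbd and mtrr:

[libvirt]
cpu_mode = custom
cpu_models = Cascadelake-Server
# Flags without a prefix (mtrr) are enabled, the same as with a + prefix.
cpu_model_extra_flags = -hle, -rtm, +ssbd, mtrr

Second sketch - disable pdpe1gb, enable vmx and pcid:

[libvirt]
cpu_mode = host-model
cpu_model_extra_flags = -pdpe1gb, vmx, +pcid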
A special note on a particular CPU flag: pcid (an Intel processor feature that alleviates guest performance degradation as a result of applying the Meltdown CVE fixes). When configuring this flag with the custom CPU mode, not all CPU models (as defined by QEMU and libvirt) need it: The only virtual CPU models that include the pcid capability are Intel "Haswell", "Broadwell", and "Skylake" variants. The libvirt / QEMU CPU models "Nehalem", "Westmere", "SandyBridge", and "IvyBridge" will not expose the pcid capability by default, even if the host CPUs by the same name include it. I.e. PCID needs to be explicitly specified when using the said virtual CPU models. The libvirt driver's default CPU mode, host-model , will do the right thing with respect to handling the PCID CPU flag for the guest - assuming you are running updated processor microcode, host and guest kernel, libvirt, and QEMU. The other mode, host-passthrough , checks if PCID is available in the hardware, and if so directly passes it through to the Nova guests. Thus, in the context of PCID , with either of these CPU modes ( host-model or host-passthrough ), there is no need to use the cpu_model_extra_flags . Related options: cpu_mode cpu_models cpu_models = [] list value An ordered list of CPU models the host supports. It is expected that the list is ordered so that the more common and less advanced CPU models are listed earlier. Here is an example: SandyBridge,IvyBridge,Haswell,Broadwell , where each later CPU model's feature set is richer than that of the model before it. Possible values: The named CPU models can be found via virsh cpu-models ARCH , where ARCH is your host architecture. Related options: cpu_mode : This should be set to custom ONLY when you want to configure (via cpu_models ) a specific named CPU model. Otherwise, it would result in an error and the instance launch will fail. virt_type : Only the virtualization types kvm and qemu use this. Note: Be careful to only specify models which can be fully supported in hardware. cpu_power_governor_high = performance string value Governor to use in order to have the best CPU performance cpu_power_governor_low = powersave string value Governor to use in order to reduce CPU power consumption cpu_power_management = False boolean value Use libvirt to manage CPU cores performance. cpu_power_management_strategy = cpu_state string value Tuning strategy to reduce CPU power consumption when unused device_detach_attempts = 8 integer value Maximum number of attempts the driver tries to detach a device in libvirt. Related options: :oslo.config:option: libvirt.device_detach_timeout device_detach_timeout = 20 integer value Maximum number of seconds the driver waits for the success or the failure event from libvirt for a given device detach attempt before it re-triggers the detach. Related options: :oslo.config:option: libvirt.device_detach_attempts disk_cachemodes = [] list value Specific cache modes to use for different disk types. For example: file=directsync,block=none,network=writeback For local or direct-attached storage, it is recommended that you use writethrough (default) mode, as it ensures data integrity and has acceptable I/O performance for applications running in the guest, especially for read operations. However, caching mode none is recommended for remote NFS storage, because direct I/O operations (O_DIRECT) perform better than synchronous I/O operations (with O_SYNC).
Caching mode none effectively turns all guest I/O operations into direct I/O operations on the host, which is the NFS client in this environment. Possible cache modes: default: "It Depends" - For Nova-managed disks, none , if the host file system is capable of Linux's O_DIRECT semantics; otherwise writeback . For volume drivers, the default is driver-dependent: none for everything except for SMBFS and Virtuzzo (which use writeback ). none: With caching mode set to none, the host page cache is disabled, but the disk write cache is enabled for the guest. In this mode, the write performance in the guest is optimal because write operations bypass the host page cache and go directly to the disk write cache. If the disk write cache is battery-backed, or if the applications or storage stack in the guest transfer data properly (either through fsync operations or file system barriers), then data integrity can be ensured. However, because the host page cache is disabled, the read performance in the guest would not be as good as in the modes where the host page cache is enabled, such as writethrough mode. Shareable disk devices, like for a multi-attachable block storage volume, will have their cache mode set to none regardless of configuration. writethrough: With caching set to writethrough mode, the host page cache is enabled, but the disk write cache is disabled for the guest. Consequently, this caching mode ensures data integrity even if the applications and storage stack in the guest do not transfer data to permanent storage properly (either through fsync operations or file system barriers). Because the host page cache is enabled in this mode, the read performance for applications running in the guest is generally better. However, the write performance might be reduced because the disk write cache is disabled. writeback: With caching set to writeback mode, both the host page cache and the disk write cache are enabled for the guest. Because of this, the I/O performance for applications running in the guest is good, but the data is not protected in a power failure. As a result, this caching mode is recommended only for temporary data where potential data loss is not a concern. NOTE: Certain backend disk mechanisms may provide safe writeback cache semantics. Specifically those that bypass the host page cache, such as QEMU's integrated RBD driver. Ceph documentation recommends setting this to writeback for maximum performance while maintaining data safety. directsync: Like "writethrough", but it bypasses the host page cache. unsafe: Caching mode of unsafe ignores cache transfer operations completely. As its name implies, this caching mode should be used only for temporary data where data loss is not a concern. This mode can be useful for speeding up guest installations, but you should switch to another caching mode in production environments. disk_prefix = None string value Override the default disk prefix for the devices attached to an instance. If set, this is used to identify a free disk device name for a bus. Possible values: Any prefix which will result in a valid disk device name like sda or hda for example. This is only necessary if the device names differ to the commonly known device name prefixes for a virtualization type such as: sd, xvd, uvd, vd. Related options: virt_type : Influences which device type is used, which determines the default disk prefix. enabled_perf_events = [] list value Performance events to monitor and collect statistics for. 
This will allow you to specify a list of events to monitor low-level performance of guests, and collect related statistics via the libvirt driver, which in turn uses the Linux kernel's perf infrastructure. With this config attribute set, Nova will generate libvirt guest XML to monitor the specified events. For example, you can monitor the count of CPU cycles (total/elapsed) and the count of cache misses by listing the corresponding perf event names in this option. Possible values: A string list. The list of supported events can be found at https://libvirt.org/formatdomain.html#elementsPerf . Note that Intel CMT events - cmt , mbmbt and mbml - are unsupported by recent Linux kernel versions (4.14+) and will be ignored by nova. file_backed_memory = 0 integer value Available capacity in MiB for file-backed memory. Set to 0 to disable file-backed memory. When enabled, instances will create memory files in the directory specified in /etc/libvirt/qemu.conf 's memory_backing_dir option. The default location is /var/lib/libvirt/qemu/ram . When enabled, the value defined for this option is reported as the node memory capacity. Compute node system memory will be used as a cache for file-backed memory, via the kernel's pagecache mechanism. Note: This feature is not compatible with hugepages. Note: This feature is not compatible with memory overcommit. Related options: virt_type must be set to kvm or qemu . ram_allocation_ratio must be set to 1.0. gid_maps = [] list value List of gid targets and ranges. Syntax is guest-gid:host-gid:count. Maximum of 5 allowed. hw_disk_discard = None string value Discard option for nova managed disks. Requires: Libvirt >= 1.0.6 Qemu >= 1.5 (raw format) Qemu >= 1.6 (qcow2 format) hw_machine_type = None list value For qemu or KVM guests, set this option to specify a default machine type per host architecture. You can find a list of supported machine types in your environment by checking the output of the virsh capabilities command. The format of the value for this config option is host-arch=machine-type . For example: x86_64=machinetype1,armv7l=machinetype2 . `images_rbd_ceph_conf = ` string value Path to the ceph configuration file to use images_rbd_glance_copy_poll_interval = 15 integer value The interval in seconds with which to poll Glance after asking for it to copy an image to the local rbd store. This affects how often we ask Glance to report on copy completion, and thus should be short enough that we notice quickly, but not so aggressive that we generate undue load on the Glance server. Related options: images_type - must be set to rbd images_rbd_glance_store_name - must be set to a store name images_rbd_glance_copy_timeout = 600 integer value The overall maximum time we will wait for Glance to complete an image copy to our local rbd store. This should be long enough to allow large images to be copied over the network link between our local store and the one where images typically reside. The downside of setting this too long is just to catch the case where the image copy is stalled or proceeding too slowly to be useful. Actual errors will be reported by Glance and noticed according to the poll interval. Related options: images_type - must be set to rbd images_rbd_glance_store_name - must be set to a store name images_rbd_glance_copy_poll_interval - controls the failure time-to-notice `images_rbd_glance_store_name = ` string value The name of the Glance store that represents the rbd cluster in use by this node.
If set, this will allow Nova to request that Glance copy an image from an existing non-local store into the one named by this option before booting so that proper Copy-on-Write behavior is maintained. Related options: images_type - must be set to rbd images_rbd_glance_copy_poll_interval - controls the status poll frequency images_rbd_glance_copy_timeout - controls the overall copy timeout images_rbd_pool = rbd string value The RADOS pool in which rbd volumes are stored images_type = default string value VM Images format. If default is specified, then use_cow_images flag is used instead of this one. Related options: compute.use_cow_images images_volume_group [workarounds]/ensure_libvirt_rbd_instance_dir_cleanup compute.force_raw_images images_volume_group = None string value LVM Volume Group that is used for VM images, when you specify images_type=lvm Related options: images_type inject_key = False boolean value Allow the injection of an SSH key at boot time. There is no agent needed within the image to do this. If libguestfs is available on the host, it will be used. Otherwise nbd is used. The file system of the image will be mounted and the SSH key, which is provided in the REST API call will be injected as SSH key for the root user and appended to the authorized_keys of that user. The SELinux context will be set if necessary. Be aware that the injection is not possible when the instance gets launched from a volume. This config option will enable directly modifying the instance disk and does not affect what cloud-init may do using data from config_drive option or the metadata service. Linux distribution guest only. Related options: inject_partition : That option will decide about the discovery and usage of the file system. It also can disable the injection at all. inject_partition = -2 integer value Determines how the file system is chosen to inject data into it. libguestfs is used to inject data. If libguestfs is not able to determine the root partition (because there are more or less than one root partition) or cannot mount the file system it will result in an error and the instance won't boot. Possible values: -2 β‡’ disable the injection of data. -1 β‡’ find the root partition with the file system to mount with libguestfs 0 β‡’ The image is not partitioned >0 β‡’ The number of the partition to use for the injection Linux distribution guest only. Related options: inject_key : If this option allows the injection of a SSH key it depends on value greater or equal to -1 for inject_partition . inject_password : If this option allows the injection of an admin password it depends on value greater or equal to -1 for inject_partition . [guestfs]/debug You can enable the debug log level of libguestfs with this config option. A more verbose output will help in debugging issues. virt_type : If you use lxc as virt_type it will be treated as a single partition image inject_password = False boolean value Allow the injection of an admin password for instance only at create and rebuild process. There is no agent needed within the image to do this. If libguestfs is available on the host, it will be used. Otherwise nbd is used. The file system of the image will be mounted and the admin password, which is provided in the REST API call will be injected as password for the root user. If no root user is available, the instance won't be launched and an error is thrown. Be aware that the injection is not possible when the instance gets launched from a volume. Linux distribution guest only. 
Possible values: True: Allows the injection. False: Disallows the injection. Any via the REST API provided admin password will be silently ignored. Related options: inject_partition : That option will decide about the discovery and usage of the file system. It also can disable the injection at all. iscsi_iface = None string value The iSCSI transport iface to use to connect to target in case offload support is desired. Default format is of the form <transport_name>.<hwaddress> , where <transport_name> is one of ( be2iscsi , bnx2i , cxgb3i , cxgb4i , qla4xxx , ocs , tcp ) and <hwaddress> is the MAC address of the interface and can be generated via the iscsiadm -m iface command. Do not confuse the iscsi_iface parameter to be provided here with the actual transport name. iser_use_multipath = False boolean value Use multipath connection of the iSER volume. iSER volumes can be connected as multipath devices. This will provide high availability and fault tolerance. live_migration_bandwidth = 0 integer value Maximum bandwidth(in MiB/s) to be used during migration. If set to 0, the hypervisor will choose a suitable default. Some hypervisors do not support this feature and will return an error if bandwidth is not 0. Please refer to the libvirt documentation for further details. live_migration_completion_timeout = 800 integer value Time to wait, in seconds, for migration to successfully complete transferring data before aborting the operation. Value is per GiB of guest RAM + disk to be transferred, with lower bound of a minimum of 2 GiB. Should usually be larger than downtime delay * downtime steps. Set to 0 to disable timeouts. Related options: live_migration_downtime live_migration_downtime_steps live_migration_downtime_delay live_migration_downtime = 500 integer value Target maximum period of time Nova will try to keep the instance paused during the last part of the memory copy, in milliseconds . Will be rounded up to a minimum of 100ms. You can increase this value if you want to allow live-migrations to complete faster, or avoid live-migration timeout errors by allowing the guest to be paused for longer during the live-migration switch over. This value may be exceeded if there is any reduction on the transfer rate after the VM is paused. Related options: live_migration_completion_timeout live_migration_downtime_delay = 75 integer value Time to wait, in seconds, between each step increase of the migration downtime. Minimum delay is 3 seconds. Value is per GiB of guest RAM + disk to be transferred, with lower bound of a minimum of 2 GiB per device. live_migration_downtime_steps = 10 integer value Number of incremental steps to reach max downtime value. Will be rounded up to a minimum of 3 steps. live_migration_inbound_addr = None host domain value IP address used as the live migration address for this host. This option indicates the IP address which should be used as the target for live migration traffic when migrating to this hypervisor. This metadata is then used by the source of the live migration traffic to construct a migration URI. If this option is set to None, the hostname of the migration target compute node will be used. This option is useful in environments where the live-migration traffic can impact the network plane significantly. A separate network for live-migration traffic can then use this config option and avoids the impact on the management network. live_migration_permit_auto_converge = False boolean value This option allows nova to start live migration with auto converge on. 
Auto converge throttles down CPU if a progress of on-going live migration is slow. Auto converge will only be used if this flag is set to True and post copy is not permitted or post copy is unavailable due to the version of libvirt and QEMU in use. Related options: live_migration_permit_post_copy live_migration_permit_post_copy = False boolean value This option allows nova to switch an on-going live migration to post-copy mode, i.e., switch the active VM to the one on the destination node before the migration is complete, therefore ensuring an upper bound on the memory that needs to be transferred. Post-copy requires libvirt>=1.3.3 and QEMU>=2.5.0. When permitted, post-copy mode will be automatically activated if we reach the timeout defined by live_migration_completion_timeout and live_migration_timeout_action is set to force_complete . Note if you change to no timeout or choose to use abort , i.e. live_migration_completion_timeout = 0 , then there will be no automatic switch to post-copy. The live-migration force complete API also uses post-copy when permitted. If post-copy mode is not available, force complete falls back to pausing the VM to ensure the live-migration operation will complete. When using post-copy mode, if the source and destination hosts lose network connectivity, the VM being live-migrated will need to be rebooted. For more details, please see the Administration guide. Related options: live_migration_permit_auto_converge live_migration_timeout_action live_migration_scheme = None string value URI scheme for live migration used by the source of live migration traffic. Override the default libvirt live migration scheme (which is dependent on virt_type). If this option is set to None, nova will automatically choose a sensible default based on the hypervisor. It is not recommended that you change this unless you are very sure that hypervisor supports a particular scheme. Related options: virt_type : This option is meaningful only when virt_type is set to kvm or qemu . live_migration_uri : If live_migration_uri value is not None, the scheme used for live migration is taken from live_migration_uri instead. live_migration_timeout_action = abort string value This option will be used to determine what action will be taken against a VM after live_migration_completion_timeout expires. By default, the live migrate operation will be aborted after completion timeout. If it is set to force_complete , the compute service will either pause the VM or trigger post-copy depending on if post copy is enabled and available ( live_migration_permit_post_copy is set to True). Related options: live_migration_completion_timeout live_migration_permit_post_copy live_migration_tunnelled = False boolean value Enable tunnelled migration. This option enables the tunnelled migration feature, where migration data is transported over the libvirtd connection. If enabled, we use the VIR_MIGRATE_TUNNELLED migration flag, avoiding the need to configure the network to allow direct hypervisor to hypervisor communication. If False, use the native transport. If not set, Nova will choose a sensible default based on, for example the availability of native encryption support in the hypervisor. Enabling this option will definitely impact performance massively. Note that this option is NOT compatible with use of block migration. 
Deprecated since: 23.0.0 Reason: The "tunnelled live migration" has two inherent limitations: it cannot handle live migration of disks in a non-shared storage setup; and it has a huge performance cost. Both these problems are solved by ``live_migration_with_native_tls`` (requires a pre-configured TLS environment), which is the recommended approach for securing all live migration streams. live_migration_uri = None string value Live migration target URI used by the source of live migration traffic. Override the default libvirt live migration target URI (which is dependent on virt_type). Any included "%s" is replaced with the migration target hostname, or live_migration_inbound_addr if set. If this option is set to None (which is the default), Nova will automatically generate the live_migration_uri value based on only 4 supported virt_type in following list: kvm : qemu+tcp://%s/system qemu : qemu+tcp://%s/system parallels : parallels+tcp://%s/system Related options: live_migration_inbound_addr : If live_migration_inbound_addr value is not None and live_migration_tunnelled is False, the ip/hostname address of target compute node is used instead of live_migration_uri as the uri for live migration. live_migration_scheme : If live_migration_uri is not set, the scheme used for live migration is taken from live_migration_scheme instead. Deprecated since: 15.0.0 Reason: live_migration_uri is deprecated for removal in favor of two other options that allow to change live migration scheme and target URI: ``live_migration_scheme`` and ``live_migration_inbound_addr`` respectively. live_migration_with_native_tls = False boolean value Use QEMU-native TLS encryption when live migrating. This option will allow both migration stream (guest RAM plus device state) and disk stream to be transported over native TLS, i.e. TLS support built into QEMU. Prerequisite: TLS environment is configured correctly on all relevant Compute nodes. This means, Certificate Authority (CA), server, client certificates, their corresponding keys, and their file permissions are in place, and are validated. Notes: To have encryption for migration stream and disk stream (also called: "block migration"), live_migration_with_native_tls is the preferred config attribute instead of live_migration_tunnelled . The live_migration_tunnelled will be deprecated in the long-term for two main reasons: (a) it incurs a huge performance penalty; and (b) it is not compatible with block migration. Therefore, if your compute nodes have at least libvirt 4.4.0 and QEMU 2.11.0, it is strongly recommended to use live_migration_with_native_tls . The live_migration_tunnelled and live_migration_with_native_tls should not be used at the same time. Unlike live_migration_tunnelled , the live_migration_with_native_tls is compatible with block migration. That is, with this option, NBD stream, over which disks are migrated to a target host, will be encrypted. Related options: live_migration_tunnelled : This transports migration stream (but not disk stream) over libvirtd. max_queues = None integer value The maximum number of virtio queue pairs that can be enabled when creating a multiqueue guest. The number of virtio queues allocated will be the lesser of the CPUs requested by the guest and the max value defined. By default, this value is set to none meaning the legacy limits based on the reported kernel major version will be used. mem_stats_period_seconds = 10 integer value A number of seconds to memory usage statistics period. 
Zero or negative value mean to disable memory usage statistics. nfs_mount_options = None string value Mount options passed to the NFS client. See section of the nfs man page for details. Mount options controls the way the filesystem is mounted and how the NFS client behaves when accessing files on this mount point. Possible values: Any string representing mount options separated by commas. Example string: vers=3,lookupcache=pos nfs_mount_point_base = USDstate_path/mnt string value Directory where the NFS volume is mounted on the compute node. The default is mnt directory of the location where nova's Python module is installed. NFS provides shared storage for the OpenStack Block Storage service. Possible values: A string representing absolute path of mount point. num_aoe_discover_tries = 3 integer value Number of times to rediscover AoE target to find volume. Nova provides support for block storage attaching to hosts via AOE (ATA over Ethernet). This option allows the user to specify the maximum number of retry attempts that can be made to discover the AoE device. num_iser_scan_tries = 5 integer value Number of times to scan iSER target to find volume. iSER is a server network protocol that extends iSCSI protocol to use Remote Direct Memory Access (RDMA). This option allows the user to specify the maximum number of scan attempts that can be made to find iSER volume. num_memory_encrypted_guests = None integer value Maximum number of guests with encrypted memory which can run concurrently on this compute host. For now this is only relevant for AMD machines which support SEV (Secure Encrypted Virtualization). Such machines have a limited number of slots in their memory controller for storing encryption keys. Each running guest with encrypted memory will consume one of these slots. The option may be reused for other equivalent technologies in the future. If the machine does not support memory encryption, the option will be ignored and inventory will be set to 0. If the machine does support memory encryption, for now a value of None means an effectively unlimited inventory, i.e. no limit will be imposed by Nova on the number of SEV guests which can be launched, even though the underlying hardware will enforce its own limit. However it is expected that in the future, auto-detection of the inventory from the hardware will become possible, at which point None will cause auto-detection to automatically impose the correct limit. note:: Related options: :oslo.config:option: libvirt.virt_type must be set to kvm . It's recommended to consider including x86_64=q35 in :oslo.config:option: libvirt.hw_machine_type ; see :ref: deploying-sev-capable-infrastructure for more on this. num_nvme_discover_tries = 5 integer value Number of times to rediscover NVMe target to find volume Nova provides support for block storage attaching to hosts via NVMe (Non-Volatile Memory Express). This option allows the user to specify the maximum number of retry attempts that can be made to discover the NVMe device. num_pcie_ports = 0 integer value The number of PCIe ports an instance will get. Libvirt allows a custom number of PCIe ports (pcie-root-port controllers) a target instance will get. Some will be used by default, rest will be available for hotplug use. By default we have just 1-2 free ports which limits hotplug. More info: https://github.com/qemu/qemu/blob/master/docs/pcie.txt Due to QEMU limitations for aarch64/virt maximum value is set to 28 . Default value 0 moves calculating amount of ports to libvirt. 
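Pulling together the num_memory_encrypted_guests guidance above, a hedged sketch of a compute node prepared for SEV guests might look like the following; the slot count of 15 is an arbitrary illustrative number, not a documented hardware limit.

[libvirt]
# SEV requires KVM as the virtualization type.
virt_type = kvm
# Advertise a fixed number of memory-encryption slots as inventory; leave
# this unset to expose an effectively unlimited inventory.
num_memory_encrypted_guests = 15
# The q35 machine type is recommended for hosts that will run SEV guests.
hw_machine_type = x86_64=q35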
num_volume_scan_tries = 5 integer value Number of times to scan given storage protocol to find volume. pmem_namespaces = [] list value Configure persistent memory(pmem) namespaces. These namespaces must have been already created on the host. This config option is in the following format:: USDNSNAME is the name of the pmem namespace. USDLABEL represents one resource class, this is used to generate the resource class name as CUSTOM_PMEM_NAMESPACE_USDLABEL . For example [libvirt] pmem_namespaces=128G:ns0|ns1|ns2|ns3,262144MB:ns4|ns5,MEDIUM:ns6|ns7 quobyte_client_cfg = None string value Path to a Quobyte Client configuration file. quobyte_mount_point_base = USDstate_path/mnt string value Directory where the Quobyte volume is mounted on the compute node. Nova supports Quobyte volume driver that enables storing Block Storage service volumes on a Quobyte storage back end. This Option specifies the path of the directory where Quobyte volume is mounted. Possible values: A string representing absolute path of mount point. rbd_connect_timeout = 5 integer value The RADOS client timeout in seconds when initially connecting to the cluster. rbd_destroy_volume_retries = 12 integer value Number of retries to destroy a RBD volume. Related options: [libvirt]/images_type = rbd rbd_destroy_volume_retry_interval = 5 integer value Number of seconds to wait between each consecutive retry to destroy a RBD volume. Related options: [libvirt]/images_type = rbd rbd_secret_uuid = None string value The libvirt UUID of the secret for the rbd_user volumes. rbd_user = None string value The RADOS client name for accessing rbd(RADOS Block Devices) volumes. Libvirt will refer to this user when connecting and authenticating with the Ceph RBD server. realtime_scheduler_priority = 1 integer value In a realtime host context vCPUs for guest will run in that scheduling priority. Priority depends on the host kernel (usually 1-99) remote_filesystem_transport = ssh string value libvirt's transport method for remote file operations. Because libvirt cannot use RPC to copy files over network to/from other compute nodes, other method must be used for: creating directory on remote host creating file on remote host removing file from remote host copying file to remote host rescue_image_id = None string value The ID of the image to boot from to rescue data from a corrupted instance. If the rescue REST API operation doesn't provide an ID of an image to use, the image which is referenced by this ID is used. If this option is not set, the image from the instance is used. Possible values: An ID of an image or nothing. If it points to an Amazon Machine Image (AMI), consider to set the config options rescue_kernel_id and rescue_ramdisk_id too. If nothing is set, the image of the instance is used. Related options: rescue_kernel_id : If the chosen rescue image allows the separate definition of its kernel disk, the value of this option is used, if specified. This is the case when Amazon 's AMI/AKI/ARI image format is used for the rescue image. rescue_ramdisk_id : If the chosen rescue image allows the separate definition of its RAM disk, the value of this option is used if, specified. This is the case when Amazon 's AMI/AKI/ARI image format is used for the rescue image. rescue_kernel_id = None string value The ID of the kernel (AKI) image to use with the rescue image. If the chosen rescue image allows the separate definition of its kernel disk, the value of this option is used, if specified. 
This is the case when Amazon 's AMI/AKI/ARI image format is used for the rescue image. Possible values: An ID of an kernel image or nothing. If nothing is specified, the kernel disk from the instance is used if it was launched with one. Related options: rescue_image_id : If that option points to an image in Amazon 's AMI/AKI/ARI image format, it's useful to use rescue_kernel_id too. rescue_ramdisk_id = None string value The ID of the RAM disk (ARI) image to use with the rescue image. If the chosen rescue image allows the separate definition of its RAM disk, the value of this option is used, if specified. This is the case when Amazon 's AMI/AKI/ARI image format is used for the rescue image. Possible values: An ID of a RAM disk image or nothing. If nothing is specified, the RAM disk from the instance is used if it was launched with one. Related options: rescue_image_id : If that option points to an image in Amazon 's AMI/AKI/ARI image format, it's useful to use rescue_ramdisk_id too. rng_dev_path = /dev/urandom string value The path to an RNG (Random Number Generator) device that will be used as the source of entropy on the host. Since libvirt 1.3.4, any path (that returns random numbers when read) is accepted. The recommended source of entropy is /dev/urandom - it is non-blocking, therefore relatively fast; and avoids the limitations of /dev/random , which is a legacy interface. For more details (and comparison between different RNG sources), refer to the "Usage" section in the Linux kernel API documentation for [u]random : http://man7.org/linux/man-pages/man4/urandom.4.html and http://man7.org/linux/man-pages/man7/random.7.html . rx_queue_size = None integer value Configure virtio rx queue size. This option is only usable for virtio-net device with vhost and vhost-user backend. Available only with QEMU/KVM. Requires libvirt v2.3 QEMU v2.7. `smbfs_mount_options = ` string value Mount options passed to the SMBFS client. Provide SMBFS options as a single string containing all parameters. See mount.cifs man page for details. Note that the libvirt-qemu uid and gid must be specified. smbfs_mount_point_base = USDstate_path/mnt string value Directory where the SMBFS shares are mounted on the compute node. snapshot_compression = False boolean value Enable snapshot compression for qcow2 images. Note: you can set snapshot_image_format to qcow2 to force all snapshots to be in qcow2 format, independently from their original image type. Related options: snapshot_image_format snapshot_image_format = None string value Determine the snapshot image format when sending to the image service. If set, this decides what format is used when sending the snapshot to the image service. If not set, defaults to same type as source image. snapshots_directory = USDinstances_path/snapshots string value Location where libvirt driver will store snapshots before uploading them to image service sparse_logical_volumes = False boolean value Create sparse logical volumes (with virtualsize) if this flag is set to True. Deprecated since: 18.0.0 Reason: Sparse logical volumes is a feature that is not tested hence not supported. LVM logical volumes are preallocated by default. If you want thin provisioning, use Cinder thin-provisioned volumes. swtpm_enabled = False boolean value Enable emulated TPM (Trusted Platform Module) in guests. swtpm_group = tss string value Group that swtpm binary runs as. When using emulated TPM, the swtpm binary will run to emulate a TPM device. 
The user this binary runs as depends on libvirt configuration, with tss being the default. In order to support cold migration and resize, nova needs to know what group the swtpm binary is running as in order to ensure that files get the proper ownership after being moved between nodes. Related options: swtpm_user must also be set. swtpm_user = tss string value User that swtpm binary runs as. When using emulated TPM, the swtpm binary will run to emulate a TPM device. The user this binary runs as depends on libvirt configuration, with tss being the default. In order to support cold migration and resize, nova needs to know what user the swtpm binary is running as in order to ensure that files get the proper ownership after being moved between nodes. Related options: swtpm_group must also be set. sysinfo_serial = unique string value The data source used to the populate the host "serial" UUID exposed to guest in the virtual BIOS. All choices except unique will change the serial when migrating the instance to another host. Changing the choice of this option will also affect existing instances on this host once they are stopped and started again. It is recommended to use the default choice ( unique ) since that will not change when an instance is migrated. However, if you have a need for per-host serials in addition to per-instance serial numbers, then consider restricting flavors via host aggregates. tx_queue_size = None integer value Configure virtio tx queue size. This option is only usable for virtio-net device with vhost-user backend. Available only with QEMU/KVM. Requires libvirt v3.7 QEMU v2.10. uid_maps = [] list value List of uid targets and ranges.Syntax is guest-uid:host-uid:count. Maximum of 5 allowed. use_virtio_for_bridges = True boolean value Use virtio for bridge interfaces with KVM/QEMU virt_type = kvm string value Describes the virtualization type (or so called domain type) libvirt should use. The choice of this type must match the underlying virtualization strategy you have chosen for this host. Related options: connection_uri : depends on this disk_prefix : depends on this cpu_mode : depends on this cpu_models : depends on this volume_clear = zero string value Method used to wipe ephemeral disks when they are deleted. Only takes effect if LVM is set as backing storage. Related options: images_type - must be set to lvm volume_clear_size volume_clear_size = 0 integer value Size of area in MiB, counting from the beginning of the allocated volume, that will be cleared using method set in volume_clear option. Possible values: 0 - clear whole volume >0 - clear specified amount of MiB Related options: images_type - must be set to lvm volume_clear - must be set and the value must be different than none for this option to have any impact volume_use_multipath = False boolean value Use multipath connection of the iSCSI or FC volume Volumes can be connected in the LibVirt as multipath devices. This will provide high availability and fault tolerance. vzstorage_cache_path = None string value Path to the SSD cache file. You can attach an SSD drive to a client and configure the drive to store a local cache of frequently accessed data. By having a local cache on a client's SSD drive, you can increase the overall cluster performance by up to 10 and more times. WARNING! There is a lot of SSD models which are not server grade and may loose arbitrary set of data changes on power loss. Such SSDs should not be used in Vstorage and are dangerous as may lead to data corruptions and inconsistencies. 
Please consult with the manual on which SSD models are known to be safe or verify it using vstorage-hwflush-check(1) utility. This option defines the path which should include "%(cluster_name)s" template to separate caches from multiple shares. Related options: vzstorage_mount_opts may include more detailed cache options. vzstorage_log_path = /var/log/vstorage/%(cluster_name)s/nova.log.gz string value Path to vzstorage client log. This option defines the log of cluster operations, it should include "%(cluster_name)s" template to separate logs from multiple shares. Related options: vzstorage_mount_opts may include more detailed logging options. vzstorage_mount_group = qemu string value Mount owner group name. This option defines the owner group of Vzstorage cluster mountpoint. Related options: vzstorage_mount_* group of parameters vzstorage_mount_opts = [] list value Extra mount options for pstorage-mount For full description of them, see https://static.openvz.org/vz-man/man1/pstorage-mount.1.gz.html Format is a python string representation of arguments list, like: "[ -v , -R , 500 ]" Shouldn't include -c, -l, -C, -u, -g and -m as those have explicit vzstorage_* options. Related options: All other vzstorage_* options vzstorage_mount_perms = 0770 string value Mount access mode. This option defines the access bits of Vzstorage cluster mountpoint, in the format similar to one of chmod(1) utility, like this: 0770. It consists of one to four digits ranging from 0 to 7, with missing lead digits assumed to be 0's. Related options: vzstorage_mount_* group of parameters vzstorage_mount_point_base = USDstate_path/mnt string value Directory where the Virtuozzo Storage clusters are mounted on the compute node. This option defines non-standard mountpoint for Vzstorage cluster. Related options: vzstorage_mount_* group of parameters vzstorage_mount_user = stack string value Mount owner user name. This option defines the owner user of Vzstorage cluster mountpoint. Related options: vzstorage_mount_* group of parameters wait_soft_reboot_seconds = 120 integer value Number of seconds to wait for instance to shut down after soft reboot request is made. We fall back to hard reboot if instance does not shutdown within this window. 9.1.28. metrics The following table outlines the options available under the [metrics] group in the /etc/nova/nova.conf file. Table 9.27. metrics Configuration option = Default value Type Description required = True boolean value Whether metrics are required. This setting determines how any unavailable metrics are treated. If this option is set to True, any hosts for which a metric is unavailable will raise an exception, so it is recommended to also use the MetricFilter to filter out those hosts before weighing. Possible values: A boolean value, where False ensures any metric being unavailable for a host will set the host weight to [metrics] weight_of_unavailable . Related options: [metrics] weight_of_unavailable weight_multiplier = 1.0 floating point value Multiplier used for weighing hosts based on reported metrics. 
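A minimal sketch of the [metrics] weigher configuration, assuming cpu.frequency and ram.usage are metrics that your compute nodes actually report:
[metrics]
# Reject hosts that are missing a metric instead of weighing them with weight_of_unavailable
required = True
# Scale the combined metric weight relative to the other weighers
weight_multiplier = 1.0
# name=ratio pairs; the metric names here are illustrative
weight_setting = cpu.frequency=1, ram.usage=-1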
When using metrics to weight the suitability of a host, you can use this option to change how the calculated weight influences the weight assigned to a host as follows: >1.0 : increases the effect of the metric on overall weight 1.0 : no change to the calculated weight >0.0,<1.0 : reduces the effect of the metric on overall weight 0.0 : the metric value is ignored, and the value of the [metrics] weight_of_unavailable option is returned instead >-1.0,<0.0 : the effect is reduced and reversed -1.0 : the effect is reversed <-1.0 : the effect is increased proportionally and reversed Possible values: An integer or float value, where the value corresponds to the multiplier ratio for this weigher. Related options: [filter_scheduler] weight_classes [metrics] weight_of_unavailable weight_of_unavailable = -10000.0 floating point value Default weight for unavailable metrics. When any of the following conditions are met, this value will be used in place of any actual metric value: One of the metrics named in [metrics] weight_setting is not available for a host, and the value of required is False . The ratio specified for a metric in [metrics] weight_setting is 0. The [metrics] weight_multiplier option is set to 0. Possible values: An integer or float value, where the value corresponds to the multiplier ratio for this weigher. Related options: [metrics] weight_setting [metrics] required [metrics] weight_multiplier weight_setting = [] list value Mapping of metric to weight modifier. This setting specifies the metrics to be weighed and the relative ratios for each metric. This should be a single string value, consisting of a series of one or more name=ratio pairs, separated by commas, where name is the name of the metric to be weighed, and ratio is the relative weight for that metric. Note that if the ratio is set to 0, the metric value is ignored, and instead the weight will be set to the value of the [metrics] weight_of_unavailable option. As an example, let's consider the case where this option is set to: The final weight will be: Possible values: A list of zero or more key/value pairs separated by commas, where the key is a string representing the name of a metric and the value is a numeric weight for that metric. If any value is set to 0, the value is ignored and the weight will be set to the value of the [metrics] weight_of_unavailable option. Related options: [metrics] weight_of_unavailable 9.1.29. mks The following table outlines the options available under the [mks] group in the /etc/nova/nova.conf file. Table 9.28. mks Configuration option = Default value Type Description enabled = False boolean value Enables graphical console access for virtual machines. mksproxy_base_url = http://127.0.0.1:6090/ uri value Location of MKS web console proxy The URL in the response points to a WebMKS proxy which starts proxying between client and corresponding vCenter server where instance runs. In order to use the web based console access, WebMKS proxy should be installed and configured Possible values: Must be a valid URL of the form: http://host:port/ or https://host:port/ 9.1.30. neutron The following table outlines the options available under the [neutron] group in the /etc/nova/nova.conf file. Table 9.29. 
neutron Configuration option = Default value Type Description auth-url = None string value Authentication URL auth_section = None string value Config Section from which to load plugin specific options auth_type = None string value Authentication type to load cafile = None string value PEM encoded Certificate Authority to use when verifying HTTPs connections. certfile = None string value PEM encoded client certificate cert file collect-timing = False boolean value Collect per-API call timing information. connect-retries = None integer value The maximum number of retries that should be attempted for connection errors. connect-retry-delay = None floating point value Delay (in seconds) between two retries for connection errors. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used. default-domain-id = None string value Optional domain ID to use with v3 and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. default-domain-name = None string value Optional domain name to use with v3 API and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. default_floating_pool = nova string value Default name for the floating IP pool. Specifies the name of floating IP pool used for allocating floating IPs. This option is only used if Neutron does not specify the floating IP pool name in port binding responses. domain-id = None string value Domain ID to scope to domain-name = None string value Domain name to scope to endpoint-override = None string value Always use this endpoint URL for requests for this client. NOTE: The unversioned endpoint should be specified here; to request a particular API version, use the version , min-version , and/or max-version options. extension_sync_interval = 600 integer value Integer value representing the number of seconds to wait before querying Neutron for extensions. After this number of seconds the time Nova needs to create a resource in Neutron it will requery Neutron for the extensions that it has loaded. Setting value to 0 will refresh the extensions with no wait. http_retries = 3 integer value Number of times neutronclient should retry on any failed http call. 0 means connection is attempted only once. Setting it to any positive integer means that on failure connection is retried that many times e.g. setting it to 3 means total attempts to connect will be 4. Possible values: Any integer value. 0 means connection is attempted only once insecure = False boolean value Verify HTTPS connections. keyfile = None string value PEM encoded client certificate key file `metadata_proxy_shared_secret = ` string value This option holds the shared secret string used to validate proxy requests to Neutron metadata requests. In order to be used, the X-Metadata-Provider-Signature header must be supplied in the request. Related options: service_metadata_proxy ovs_bridge = br-int string value Default name for the Open vSwitch integration bridge. Specifies the name of an integration bridge interface used by OpenvSwitch. This option is only used if Neutron does not specify the OVS bridge name in port binding responses. password = None string value User's password physnets = [] list value List of physnets present on this host. For each physnet listed, an additional section, [neutron_physnet_USDPHYSNET] , will be added to the configuration file. 
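For illustration, a host attached to two physical networks might be configured as follows; the physnet names foo and bar and the NUMA node IDs are examples only:
[neutron]
physnets = foo, bar

[neutron_physnet_foo]
numa_nodes = 0

[neutron_physnet_bar]
numa_nodes = 0, 1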
Each section must be configured with a single configuration option, numa_nodes , which should be a list of node IDs for all NUMA nodes this physnet is associated with. For example:: Any physnet that is not listed using this option will be treated as having no particular NUMA node affinity. Tunnelled networks (VXLAN, GRE, ... ) cannot be accounted for in this way and are instead configured using the [neutron_tunnel] group. For example:: Related options: [neutron_tunnel] numa_nodes can be used to configure NUMA affinity for all tunneled networks [neutron_physnet_USDPHYSNET] numa_nodes must be configured for each value of USDPHYSNET specified by this option project-domain-id = None string value Domain ID containing project project-domain-name = None string value Domain name containing project project-id = None string value Project ID to scope to project-name = None string value Project name to scope to region-name = None string value The default region_name for endpoint URL discovery. service-name = None string value The default service_name for endpoint URL discovery. service-type = network string value The default service_type for endpoint URL discovery. service_metadata_proxy = False boolean value When set to True, this option indicates that Neutron will be used to proxy metadata requests and resolve instance ids. Otherwise, the instance ID must be passed to the metadata request in the X-Instance-ID header. Related options: metadata_proxy_shared_secret split-loggers = False boolean value Log requests to multiple loggers. status-code-retries = None integer value The maximum number of retries that should be attempted for retriable HTTP status codes. status-code-retry-delay = None floating point value Delay (in seconds) between two retries for retriable status codes. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used. system-scope = None string value Scope for system operations tenant-id = None string value Tenant ID tenant-name = None string value Tenant Name timeout = None integer value Timeout value for http requests trust-id = None string value ID of the trust to use as a trustee use user-domain-id = None string value User's domain id user-domain-name = None string value User's domain name user-id = None string value User ID username = None string value Username valid-interfaces = ['internal', 'public'] list value List of interfaces, in order of preference, for endpoint URL. 9.1.31. notifications The following table outlines the options available under the [notifications] group in the /etc/nova/nova.conf file. Table 9.30. notifications Configuration option = Default value Type Description bdms_in_notifications = False boolean value If enabled, include block device information in the versioned notification payload. Sending block device information is disabled by default as providing that information can incur some overhead on the system since the information may need to be loaded from the database. default_level = INFO string value Default notification level for outgoing notifications. notification_format = unversioned string value Specifies which notification format shall be emitted by nova. The versioned notification interface are in feature parity with the legacy interface and the versioned interface is actively developed so new consumers should used the versioned interface. However, the legacy interface is heavily used by ceilometer and other mature OpenStack components so it remains the default. 
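As an example, a deployment whose consumers understand the newer payloads could switch to the versioned format while keeping the default topic (a sketch; adjust to what your consumers expect):
[notifications]
# Emit only the newer, versioned notification payloads
notification_format = versioned
# Default topic; add entries if a third-party service needs its own queue
versioned_notifications_topics = versioned_notifications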
Note that notifications can be completely disabled by setting driver=noop in the [oslo_messaging_notifications] group. The list of versioned notifications is visible in https://docs.openstack.org/nova/latest/reference/notifications.html notify_on_state_change = None string value If set, send compute.instance.update notifications on instance state changes. Please refer to https://docs.openstack.org/nova/latest/reference/notifications.html for additional information on notifications. versioned_notifications_topics = ['versioned_notifications'] list value Specifies the topics for the versioned notifications issued by nova. The default value is fine for most deployments and rarely needs to be changed. However, if you have a third-party service that consumes versioned notifications, it might be worth getting a topic for that service. Nova will send a message containing a versioned notification payload to each topic queue in this list. The list of versioned notifications is visible in https://docs.openstack.org/nova/latest/reference/notifications.html 9.1.32. os_vif_linux_bridge The following table outlines the options available under the [os_vif_linux_bridge] group in the /etc/nova/nova.conf file. Table 9.31. os_vif_linux_bridge Configuration option = Default value Type Description flat_interface = None string value FlatDhcp will bridge into this interface if set forward_bridge_interface = ['all'] multi valued An interface that bridges can forward to. If this is set to all then all traffic will be forwarded. Can be specified multiple times. `iptables_bottom_regex = ` string value Regular expression to match the iptables rule that should always be on the bottom. iptables_drop_action = DROP string value The table that iptables to jump to when a packet is to be dropped. `iptables_top_regex = ` string value Regular expression to match the iptables rule that should always be on the top. network_device_mtu = 1500 integer value MTU setting for network interface. use_ipv6 = False boolean value Use IPv6 vlan_interface = None string value VLANs will bridge into this interface if set 9.1.33. os_vif_ovs The following table outlines the options available under the [os_vif_ovs] group in the /etc/nova/nova.conf file. Table 9.32. os_vif_ovs Configuration option = Default value Type Description isolate_vif = False boolean value Controls if VIF should be isolated when plugged to the ovs bridge. This should only be set to True when using the neutron ovs ml2 agent. network_device_mtu = 1500 integer value MTU setting for network interface. ovs_vsctl_timeout = 120 integer value Amount of time, in seconds, that ovs_vsctl should wait for a response from the database. 0 is to wait forever. ovsdb_connection = tcp:127.0.0.1:6640 string value The connection string for the OVSDB backend. When executing commands using the native or vsctl ovsdb interface drivers this config option defines the ovsdb endpoint used. ovsdb_interface = native string value The interface for interacting with the OVSDB Deprecated since: 2.2.0 Reason: os-vif has supported ovsdb access via python bindings since Stein (1.15.0), starting in Victoria (2.2.0) the ovs-vsctl driver is now deprecated for removal and in future releases it will be be removed. per_port_bridge = False boolean value Controls if VIF should be plugged into a per-port bridge. This is experimental and controls the plugging behavior when not using hybrid-plug.This is only used on linux and should be set to false in all other cases such as ironic smartnic ports. 9.1.34. 
oslo_concurrency The following table outlines the options available under the [oslo_concurrency] group in the /etc/nova/nova.conf file. Table 9.33. oslo_concurrency Configuration option = Default value Type Description disable_process_locking = False boolean value Enables or disables inter-process locks. lock_path = None string value Directory to use for lock files. For security, the specified directory should only be writable by the user running the processes that need locking. Defaults to environment variable OSLO_LOCK_PATH. If external locks are used, a lock path must be set. 9.1.35. oslo_limit The following table outlines the options available under the [oslo_limit] group in the /etc/nova/nova.conf file. Table 9.34. oslo_limit Configuration option = Default value Type Description auth-url = None string value Authentication URL cafile = None string value PEM encoded Certificate Authority to use when verifying HTTPs connections. certfile = None string value PEM encoded client certificate cert file collect-timing = False boolean value Collect per-API call timing information. connect-retries = None integer value The maximum number of retries that should be attempted for connection errors. connect-retry-delay = None floating point value Delay (in seconds) between two retries for connection errors. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used. default-domain-id = None string value Optional domain ID to use with v3 and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. default-domain-name = None string value Optional domain name to use with v3 API and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. domain-id = None string value Domain ID to scope to domain-name = None string value Domain name to scope to endpoint-override = None string value Always use this endpoint URL for requests for this client. NOTE: The unversioned endpoint should be specified here; to request a particular API version, use the version , min-version , and/or max-version options. endpoint_id = None string value The service's endpoint id which is registered in Keystone. insecure = False boolean value Verify HTTPS connections. keyfile = None string value PEM encoded client certificate key file max-version = None string value The maximum major version of a given API, intended to be used as the upper bound of a range with min_version. Mutually exclusive with version. min-version = None string value The minimum major version of a given API, intended to be used as the lower bound of a range with max_version. Mutually exclusive with version. If min_version is given with no max_version it is as if max version is "latest". password = None string value User's password project-domain-id = None string value Domain ID containing project project-domain-name = None string value Domain name containing project project-id = None string value Project ID to scope to project-name = None string value Project name to scope to region-name = None string value The default region_name for endpoint URL discovery. service-name = None string value The default service_name for endpoint URL discovery. service-type = None string value The default service_type for endpoint URL discovery. split-loggers = False boolean value Log requests to multiple loggers. status-code-retries = None integer value The maximum number of retries that should be attempted for retriable HTTP status codes. 
status-code-retry-delay = None floating point value Delay (in seconds) between two retries for retriable status codes. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used. system-scope = None string value Scope for system operations tenant-id = None string value Tenant ID tenant-name = None string value Tenant Name timeout = None integer value Timeout value for http requests trust-id = None string value ID of the trust to use as a trustee use user-domain-id = None string value User's domain id user-domain-name = None string value User's domain name user-id = None string value User ID username = None string value Username valid-interfaces = None list value List of interfaces, in order of preference, for endpoint URL. version = None string value Minimum Major API version within a given Major API version for endpoint URL discovery. Mutually exclusive with min_version and max_version 9.1.36. oslo_messaging_amqp The following table outlines the options available under the [oslo_messaging_amqp] group in the /etc/nova/nova.conf file. Table 9.35. oslo_messaging_amqp Configuration option = Default value Type Description addressing_mode = dynamic string value Indicates the addressing mode used by the driver. Permitted values: legacy - use legacy non-routable addressing routable - use routable addresses dynamic - use legacy addresses if the message bus does not support routing otherwise use routable addressing anycast_address = anycast string value Appended to the address prefix when sending to a group of consumers. Used by the message bus to identify messages that should be delivered in a round-robin fashion across consumers. broadcast_prefix = broadcast string value address prefix used when broadcasting to all servers connection_retry_backoff = 2 integer value Increase the connection_retry_interval by this many seconds after each unsuccessful failover attempt. connection_retry_interval = 1 integer value Seconds to pause before attempting to re-connect. connection_retry_interval_max = 30 integer value Maximum limit for connection_retry_interval + connection_retry_backoff container_name = None string value Name for the AMQP container. must be globally unique. Defaults to a generated UUID default_notification_exchange = None string value Exchange name used in notification addresses. Exchange name resolution precedence: Target.exchange if set else default_notification_exchange if set else control_exchange if set else notify default_notify_timeout = 30 integer value The deadline for a sent notification message delivery. Only used when caller does not provide a timeout expiry. default_reply_retry = 0 integer value The maximum number of attempts to re-send a reply message which failed due to a recoverable error. default_reply_timeout = 30 integer value The deadline for an rpc reply message delivery. default_rpc_exchange = None string value Exchange name used in RPC addresses. Exchange name resolution precedence: Target.exchange if set else default_rpc_exchange if set else control_exchange if set else rpc default_send_timeout = 30 integer value The deadline for an rpc cast or call message delivery. Only used when caller does not provide a timeout expiry. default_sender_link_timeout = 600 integer value The duration to schedule a purge of idle sender links. Detach link after expiry. 
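To make the interplay of the connection retry options above concrete, the defaults amount to the following, shown explicitly only for illustration:
[oslo_messaging_amqp]
# Reconnect attempts start 1 second apart, grow by 2 seconds after each failure,
# and are capped at 30 seconds between attempts
connection_retry_interval = 1
connection_retry_backoff = 2
connection_retry_interval_max = 30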
group_request_prefix = unicast string value address prefix when sending to any server in group idle_timeout = 0 integer value Timeout for inactive connections (in seconds) link_retry_delay = 10 integer value Time to pause between re-connecting an AMQP 1.0 link that failed due to a recoverable error. multicast_address = multicast string value Appended to the address prefix when sending a fanout message. Used by the message bus to identify fanout messages. notify_address_prefix = openstack.org/om/notify string value Address prefix for all generated Notification addresses notify_server_credit = 100 integer value Window size for incoming Notification messages pre_settled = ['rpc-cast', 'rpc-reply'] multi valued Send messages of this type pre-settled. Pre-settled messages will not receive acknowledgement from the peer. Note well: pre-settled messages may be silently discarded if the delivery fails. Permitted values: rpc-call - send RPC Calls pre-settled rpc-reply - send RPC Replies pre-settled rpc-cast - Send RPC Casts pre-settled notify - Send Notifications pre-settled pseudo_vhost = True boolean value Enable virtual host support for those message buses that do not natively support virtual hosting (such as qpidd). When set to true the virtual host name will be added to all message bus addresses, effectively creating a private subnet per virtual host. Set to False if the message bus supports virtual hosting using the hostname field in the AMQP 1.0 Open performative as the name of the virtual host. reply_link_credit = 200 integer value Window size for incoming RPC Reply messages. rpc_address_prefix = openstack.org/om/rpc string value Address prefix for all generated RPC addresses rpc_server_credit = 100 integer value Window size for incoming RPC Request messages `sasl_config_dir = ` string value Path to directory that contains the SASL configuration `sasl_config_name = ` string value Name of configuration file (without .conf suffix) `sasl_default_realm = ` string value SASL realm to use if no realm present in username `sasl_mechanisms = ` string value Space separated list of acceptable SASL mechanisms server_request_prefix = exclusive string value address prefix used when sending to a specific server ssl = False boolean value Attempt to connect via SSL. If no other ssl-related parameters are given, it will use the system's CA-bundle to verify the server's certificate. `ssl_ca_file = ` string value CA certificate PEM file used to verify the server's certificate `ssl_cert_file = ` string value Self-identifying certificate PEM file for client authentication `ssl_key_file = ` string value Private key PEM file used to sign ssl_cert_file certificate (optional) ssl_key_password = None string value Password for decrypting ssl_key_file (if encrypted) ssl_verify_vhost = False boolean value By default SSL checks that the name in the server's certificate matches the hostname in the transport_url. In some configurations it may be preferable to use the virtual hostname instead, for example if the server uses the Server Name Indication TLS extension (rfc6066) to provide a certificate per virtual host. Set ssl_verify_vhost to True if the server's SSL certificate uses the virtual host name instead of the DNS name. trace = False boolean value Debug: dump AMQP frames to stdout unicast_address = unicast string value Appended to the address prefix when sending to a particular RPC/Notification server. Used by the message bus to identify messages sent to a single destination. 9.1.37. 
oslo_messaging_kafka The following table outlines the options available under the [oslo_messaging_kafka] group in the /etc/nova/nova.conf file. Table 9.36. oslo_messaging_kafka Configuration option = Default value Type Description compression_codec = none string value The compression codec for all data generated by the producer. If not set, compression will not be used. Note that the allowed values of this depend on the kafka version conn_pool_min_size = 2 integer value The pool size limit for connections expiration policy conn_pool_ttl = 1200 integer value The time-to-live in sec of idle connections in the pool consumer_group = oslo_messaging_consumer string value Group id for Kafka consumer. Consumers in one group will coordinate message consumption enable_auto_commit = False boolean value Enable asynchronous consumer commits kafka_consumer_timeout = 1.0 floating point value Default timeout(s) for Kafka consumers kafka_max_fetch_bytes = 1048576 integer value Max fetch bytes of Kafka consumer max_poll_records = 500 integer value The maximum number of records returned in a poll call pool_size = 10 integer value Pool Size for Kafka Consumers producer_batch_size = 16384 integer value Size of batch for the producer async send producer_batch_timeout = 0.0 floating point value Upper bound on the delay for KafkaProducer batching in seconds sasl_mechanism = PLAIN string value Mechanism when security protocol is SASL security_protocol = PLAINTEXT string value Protocol used to communicate with brokers `ssl_cafile = ` string value CA certificate PEM file used to verify the server certificate `ssl_client_cert_file = ` string value Client certificate PEM file used for authentication. `ssl_client_key_file = ` string value Client key PEM file used for authentication. `ssl_client_key_password = ` string value Client key password file used for authentication. 9.1.38. oslo_messaging_notifications The following table outlines the options available under the [oslo_messaging_notifications] group in the /etc/nova/nova.conf file. Table 9.37. oslo_messaging_notifications Configuration option = Default value Type Description driver = [] multi valued The Drivers(s) to handle sending notifications. Possible values are messaging, messagingv2, routing, log, test, noop retry = -1 integer value The maximum number of attempts to re-send a notification message which failed to be delivered due to a recoverable error. 0 - No retry, -1 - indefinite topics = ['notifications'] list value AMQP topic used for OpenStack notifications. transport_url = None string value A URL representing the messaging driver to use for notifications. If not set, we fall back to the same configuration used for RPC. 9.1.39. oslo_messaging_rabbit The following table outlines the options available under the [oslo_messaging_rabbit] group in the /etc/nova/nova.conf file. Table 9.38. oslo_messaging_rabbit Configuration option = Default value Type Description amqp_auto_delete = False boolean value Auto-delete queues in AMQP. amqp_durable_queues = False boolean value Use durable queues in AMQP. If rabbit_quorum_queue is enabled, queues will be durable and this value will be ignored. direct_mandatory_flag = True boolean value (DEPRECATED) Enable/Disable the RabbitMQ mandatory flag for direct send. 
The direct send is used as reply, so the MessageUndeliverable exception is raised in case the client queue does not exist. The MessageUndeliverable exception is then used to loop for a timeout, giving the sender a chance to recover. This flag is deprecated and it will not be possible to deactivate this functionality anymore. enable_cancel_on_failover = False boolean value Enable the x-cancel-on-ha-failover flag so that the RabbitMQ server will cancel and notify consumers when a queue is down heartbeat_in_pthread = False boolean value Run the health check heartbeat thread through a native python thread by default. If this option is equal to False then the health check heartbeat will inherit the execution model from the parent process. For example if the parent process has monkey patched the stdlib by using eventlet/greenlet then the heartbeat will be run through a green thread. This option should be set to True only for the wsgi services. heartbeat_rate = 2 integer value How many times during the heartbeat_timeout_threshold we check the heartbeat. heartbeat_timeout_threshold = 60 integer value Number of seconds after which the Rabbit broker is considered down if heartbeat's keep-alive fails (0 disables heartbeat). kombu_compression = None string value EXPERIMENTAL: Possible values are: gzip, bz2. If not set compression will not be used. This option may not be available in future versions. kombu_failover_strategy = round-robin string value Determines how the RabbitMQ node is chosen in case the one we are currently connected to becomes unavailable. Takes effect only if more than one RabbitMQ node is provided in config. kombu_missing_consumer_retry_timeout = 60 integer value How long to wait for a missing client before abandoning the attempt to send it its replies. This value should not be longer than rpc_response_timeout. kombu_reconnect_delay = 1.0 floating point value How long to wait (in seconds) before reconnecting in response to an AMQP consumer cancel notification. rabbit_ha_queues = False boolean value Try to use HA queues in RabbitMQ (x-ha-policy: all). If you change this option, you must wipe the RabbitMQ database. In RabbitMQ 3.0, queue mirroring is no longer controlled by the x-ha-policy argument when declaring a queue. If you just want to make sure that all queues (except those with auto-generated names) are mirrored across all nodes, run: "rabbitmqctl set_policy HA ^(?!amq\.).* {"ha-mode": "all"} " rabbit_interval_max = 30 integer value Maximum interval of RabbitMQ connection retries. Default is 30 seconds. rabbit_login_method = AMQPLAIN string value The RabbitMQ login method. rabbit_qos_prefetch_count = 0 integer value Specifies the number of messages to prefetch. Setting to zero allows unlimited messages. rabbit_quorum_delivery_limit = 0 integer value Each time a message is redelivered to a consumer, a counter is incremented. Once the redelivery count exceeds the delivery limit, the message gets dropped or dead-lettered (if a DLX exchange has been configured). Used only when rabbit_quorum_queue is enabled. The default of 0 means no limit is set. rabbit_quorum_max_memory_bytes = 0 integer value By default all messages are maintained in memory; if a quorum queue grows in length it can put memory pressure on a cluster. This option can limit the number of memory bytes used by the quorum queue. Used only when rabbit_quorum_queue is enabled. The default of 0 means no limit is set.
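For example, a deployment opting into quorum queues might use a sketch like the following; the limits are illustrative, and note that the classic mirrored (HA) queues must stay disabled when quorum queues are used:
[oslo_messaging_rabbit]
# Use quorum queues instead of classic mirrored (HA) queues
rabbit_quorum_queue = True
rabbit_ha_queues = False
# Optional safety limits; 0 (the default) means unlimited
rabbit_quorum_delivery_limit = 10
rabbit_quorum_max_memory_length = 100000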
rabbit_quorum_max_memory_length = 0 integer value By default all messages are maintained in memory if a quorum queue grows in length it can put memory pressure on a cluster. This option can limit the number of messages in the quorum queue. Used only when rabbit_quorum_queue is enabled, Default 0 which means dont set a limit. rabbit_quorum_queue = False boolean value Use quorum queues in RabbitMQ (x-queue-type: quorum). The quorum queue is a modern queue type for RabbitMQ implementing a durable, replicated FIFO queue based on the Raft consensus algorithm. It is available as of RabbitMQ 3.8.0. If set this option will conflict with the HA queues ( rabbit_ha_queues ) aka mirrored queues, in other words the HA queues should be disabled, quorum queues durable by default so the amqp_durable_queues opion is ignored when this option enabled. rabbit_retry_backoff = 2 integer value How long to backoff for between retries when connecting to RabbitMQ. rabbit_retry_interval = 1 integer value How frequently to retry connecting with RabbitMQ. rabbit_transient_queues_ttl = 1800 integer value Positive integer representing duration in seconds for queue TTL (x-expires). Queues which are unused for the duration of the TTL are automatically deleted. The parameter affects only reply and fanout queues. ssl = False boolean value Connect over SSL. `ssl_ca_file = ` string value SSL certification authority file (valid only if SSL enabled). `ssl_cert_file = ` string value SSL cert file (valid only if SSL enabled). ssl_enforce_fips_mode = False boolean value Global toggle for enforcing the OpenSSL FIPS mode. This feature requires Python support. This is available in Python 3.9 in all environments and may have been backported to older Python versions on select environments. If the Python executable used does not support OpenSSL FIPS mode, an exception will be raised. `ssl_key_file = ` string value SSL key file (valid only if SSL enabled). `ssl_version = ` string value SSL version to use (valid only if SSL enabled). Valid values are TLSv1 and SSLv23. SSLv2, SSLv3, TLSv1_1, and TLSv1_2 may be available on some distributions. 9.1.40. oslo_middleware The following table outlines the options available under the [oslo_middleware] group in the /etc/nova/nova.conf file. Table 9.39. oslo_middleware Configuration option = Default value Type Description enable_proxy_headers_parsing = False boolean value Whether the application is behind a proxy or not. This determines if the middleware should parse the headers or not. http_basic_auth_user_file = /etc/htpasswd string value HTTP basic auth password file. max_request_body_size = 114688 integer value The maximum body size for each request, in bytes. secure_proxy_ssl_header = X-Forwarded-Proto string value The HTTP Header that will be used to determine what the original request protocol scheme was, even if it was hidden by a SSL termination proxy. 9.1.41. oslo_policy The following table outlines the options available under the [oslo_policy] group in the /etc/nova/nova.conf file. Table 9.40. oslo_policy Configuration option = Default value Type Description enforce_new_defaults = True boolean value This option controls whether or not to use old deprecated defaults when evaluating policies. If True , the old deprecated defaults are not going to be evaluated. This means if any existing token is allowed for old defaults but is disallowed for new defaults, it will be disallowed. 
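Taken together, the two enforcement flags are typically set as follows; the policy_file entry is only needed if you carry local rule overrides (a sketch, not a mandated layout):
[oslo_policy]
# Evaluate only the new policy defaults and enforce token scope
enforce_new_defaults = True
enforce_scope = True
# Optional file with local rule overrides, relative to the nova configuration directory
policy_file = policy.yaml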
It is encouraged to enable this flag along with the enforce_scope flag so that you can get the benefits of new defaults and scope_type together. If False , the deprecated policy check string is logically OR'd with the new policy check string, allowing for a graceful upgrade experience between releases with new policies, which is the default behavior. enforce_scope = True boolean value This option controls whether or not to enforce scope when evaluating policies. If True , the scope of the token used in the request is compared to the scope_types of the policy being enforced. If the scopes do not match, an InvalidScope exception will be raised. If False , a message will be logged informing operators that policies are being invoked with mismatching scope. policy_default_rule = default string value Default rule. Enforced when a requested rule is not found. policy_dirs = ['policy.d'] multi valued Directories where policy configuration files are stored. They can be relative to any directory in the search path defined by the config_dir option, or absolute paths. The file defined by policy_file must exist for these directories to be searched. Missing or empty directories are ignored. policy_file = policy.yaml string value The relative or absolute path of a file that maps roles to permissions for a given service. Relative paths must be specified in relation to the configuration file setting this option. remote_content_type = application/x-www-form-urlencoded string value Content Type to send and receive data for REST based policy check remote_ssl_ca_crt_file = None string value Absolute path to ca cert file for REST based policy check remote_ssl_client_crt_file = None string value Absolute path to client cert for REST based policy check remote_ssl_client_key_file = None string value Absolute path client key file REST based policy check remote_ssl_verify_server_crt = False boolean value server identity verification for REST based policy check 9.1.42. oslo_reports The following table outlines the options available under the [oslo_reports] group in the /etc/nova/nova.conf file. Table 9.41. oslo_reports Configuration option = Default value Type Description file_event_handler = None string value The path to a file to watch for changes to trigger the reports, instead of signals. Setting this option disables the signal trigger for the reports. If application is running as a WSGI application it is recommended to use this instead of signals. file_event_handler_interval = 1 integer value How many seconds to wait between polls when file_event_handler is set log_dir = None string value Path to a log directory where to create a file 9.1.43. pci The following table outlines the options available under the [pci] group in the /etc/nova/nova.conf file. Table 9.42. pci Configuration option = Default value Type Description alias = [] multi valued An alias for a PCI passthrough device requirement. This allows users to specify the alias in the extra specs for a flavor, without needing to repeat all the PCI property requirements. This should be configured for the nova-api service and, assuming you wish to use move operations, for each nova-compute service. Possible Values: A dictionary of JSON values which describe the aliases. 
For example:: Supports multiple aliases by repeating the option (not by specifying a list value) alias = { "name": "QuickAssist-1", "product_id": "0443", "vendor_id": "8086", "device_type": "type-PCI", "numa_policy": "required" } alias = { "name": "QuickAssist-2", "product_id": "0444", "vendor_id": "8086", "device_type": "type-PCI", "numa_policy": "required" } device_spec = [] multi valued Specify the PCI devices available to VMs. Possible values: A JSON dictionary which describe a PCI device. It should take the following format ["vendor_id": "<id>",] ["product_id": "<id>",] ["address": "[[[[<domain>]:]<bus>]:][<slot>][.[<function>]]" | "devname": "<name>",] {"<tag>": "<tag_value>",} domain - 0xFFFF bus - 0xFF slot - 0x1F function - 0x7 physical_network trusted remote_managed - a VF is managed remotely by an off-path networking backend. May have boolean-like string values case-insensitive values: "true" or "false". By default, "false" is assumed for all devices. Using this option requires a networking service backend capable of handling those devices. PCI devices are also required to have a PCI VPD capability with a card serial number (either on a VF itself on its corresponding PF), otherwise they will be ignored and not available for allocation. resource_class - optional Placement resource class name to be used to track the matching PCI devices in Placement when [pci]report_in_placement is True. It can be a standard resource class from the os-resource-classes lib. Or can be any string. In that case Nova will normalize it to a proper Placement resource class by making it upper case, replacing any consecutive character outside of [A-Z0-9_] with a single _ , and prefixing the name with CUSTOM_ if not yet prefixed. The maximum allowed length is 255 character including the prefix. If resource_class is not provided Nova will generate it from the PCI device's vendor_id and product_id in the form of CUSTOM_PCI_{vendor_id}_{product_id} . The resource_class can be requested from a [pci]alias traits - optional comma separated list of Placement trait names to report on the resource provider that will represent the matching PCI device. Each trait can be a standard trait from os-traits lib or can be any string. If it is not a standard trait then Nova will normalize the trait name by making it upper case, replacing any consecutive character outside of [A-Z0-9_] with a single _ , and prefixing the name with CUSTOM_ if not yet prefixed. The maximum allowed length of a trait name is 255 character including the prefix. Any trait from traits can be requested from a [pci]alias . Valid examples are device_spec = {"devname":"eth0", "physical_network":"physnet"} device_spec = {"address":" :0a:00. "} device_spec = {"address":":0a:00.", "physical_network":"physnet1"} device_spec = {"vendor_id":"1137", "product_id":"0071"} device_spec = {"vendor_id":"1137", "product_id":"0071", "address": "0000:0a:00.1", "physical_network":"physnet1"} device_spec = {"address":{"domain": ". ", "bus": "02", "slot": "01", "function": "[2-7]"}, "physical_network":"physnet1"} device_spec = {"address":{"domain": ". 
", "bus": "02", "slot": "0[1-2]", "function": ".*"}, "physical_network":"physnet1"} device_spec = {"devname": "eth0", "physical_network":"physnet1", "trusted": "true"} device_spec = {"vendor_id":"a2d6", "product_id":"15b3", "remote_managed": "true"} device_spec = {"vendor_id":"a2d6", "product_id":"15b3", "address": "0000:82:00.0", "physical_network":"physnet1", "remote_managed": "true"} device_spec = {"vendor_id":"1002", "product_id":"6929", "address": "0000:82:00.0", "resource_class": "PGPU", "traits": "HW_GPU_API_VULKAN,my-awesome-gpu"} The following are invalid, as they specify mutually exclusive options device_spec = {"devname":"eth0", "physical_network":"physnet", "address":" :0a:00. "} Nova Compute service startup device_spec = {"address": "0000:82:00.0", "product_id": "a2d6", "vendor_id": "15b3", "physical_network": null, "remote_managed": "true"} A JSON list of JSON dictionaries corresponding to the above format. For example device_spec = [{"product_id":"0001", "vendor_id":"8086"}, {"product_id":"0002", "vendor_id":"8086"}] report_in_placement = False boolean value Enable PCI resource inventory reporting to Placement. If it is enabled then the nova-compute service will report PCI resource inventories to Placement according to the [pci]device_spec configuration and the PCI devices reported by the hypervisor. Once it is enabled it cannot be disabled any more. In a future release the default of this config will be change to True. Related options: [pci]device_spec: to define which PCI devices nova are allowed to track and assign to guests. 9.1.44. placement The following table outlines the options available under the [placement] group in the /etc/nova/nova.conf file. Table 9.43. placement Configuration option = Default value Type Description auth-url = None string value Authentication URL auth_section = None string value Config Section from which to load plugin specific options auth_type = None string value Authentication type to load cafile = None string value PEM encoded Certificate Authority to use when verifying HTTPs connections. certfile = None string value PEM encoded client certificate cert file collect-timing = False boolean value Collect per-API call timing information. connect-retries = None integer value The maximum number of retries that should be attempted for connection errors. connect-retry-delay = None floating point value Delay (in seconds) between two retries for connection errors. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used. default-domain-id = None string value Optional domain ID to use with v3 and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. default-domain-name = None string value Optional domain name to use with v3 API and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. domain-id = None string value Domain ID to scope to domain-name = None string value Domain name to scope to endpoint-override = None string value Always use this endpoint URL for requests for this client. NOTE: The unversioned endpoint should be specified here; to request a particular API version, use the version , min-version , and/or max-version options. insecure = False boolean value Verify HTTPS connections. 
keyfile = None string value PEM encoded client certificate key file password = None string value User's password project-domain-id = None string value Domain ID containing project project-domain-name = None string value Domain name containing project project-id = None string value Project ID to scope to project-name = None string value Project name to scope to region-name = None string value The default region_name for endpoint URL discovery. service-name = None string value The default service_name for endpoint URL discovery. service-type = placement string value The default service_type for endpoint URL discovery. split-loggers = False boolean value Log requests to multiple loggers. status-code-retries = None integer value The maximum number of retries that should be attempted for retriable HTTP status codes. status-code-retry-delay = None floating point value Delay (in seconds) between two retries for retriable status codes. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used. system-scope = None string value Scope for system operations tenant-id = None string value Tenant ID tenant-name = None string value Tenant Name timeout = None integer value Timeout value for http requests trust-id = None string value ID of the trust to use as a trustee use user-domain-id = None string value User's domain id user-domain-name = None string value User's domain name user-id = None string value User ID username = None string value Username valid-interfaces = ['internal', 'public'] list value List of interfaces, in order of preference, for endpoint URL. 9.1.45. privsep The following table outlines the options available under the [privsep] group in the /etc/nova/nova.conf file. Table 9.44. privsep Configuration option = Default value Type Description capabilities = [] list value List of Linux capabilities retained by the privsep daemon. group = None string value Group that the privsep daemon should run as. helper_command = None string value Command to invoke to start the privsep daemon if not using the "fork" method. If not specified, a default is generated using "sudo privsep-helper" and arguments designed to recreate the current configuration. This command must accept suitable --privsep_context and --privsep_sock_path arguments. logger_name = oslo_privsep.daemon string value Logger name to use for this privsep context. By default all contexts log with oslo_privsep.daemon. thread_pool_size = <based on operating system> integer value The number of threads available for privsep to concurrently run processes. Defaults to the number of CPU cores in the system. user = None string value User that the privsep daemon should run as. 9.1.46. profiler The following table outlines the options available under the [profiler] group in the /etc/nova/nova.conf file. Table 9.45. profiler Configuration option = Default value Type Description connection_string = messaging:// string value Connection string for a notifier backend. Default value is messaging:// which sets the notifier to oslo_messaging. Examples of possible values: messaging:// - use oslo_messaging driver for sending spans. redis://127.0.0.1:6379 - use redis driver for sending spans. mongodb://127.0.0.1:27017 - use mongodb driver for sending spans. elasticsearch://127.0.0.1:9200 - use elasticsearch driver for sending spans. jaeger://127.0.0.1:6831 - use jaeger tracing as driver for sending spans. enabled = False boolean value Enable the profiling for all services on this node. 
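To make this concrete, enabling profiling usually involves at least the following; the HMAC key is a placeholder and must match the key used by the other services you want to appear in the same trace:
[profiler]
enabled = True
# Shared secret; must match the other OpenStack services participating in the trace
hmac_keys = SECRET_KEY
# Where to send spans; messaging:// reuses the oslo.messaging driver
connection_string = messaging://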
Default value is False (fully disable the profiling feature). Possible values: True: Enables the feature False: Disables the feature. The profiling cannot be started via this project operations. If the profiling is triggered by another project, this project part will be empty. es_doc_type = notification string value Document type for notification indexing in elasticsearch. es_scroll_size = 10000 integer value Elasticsearch splits large requests in batches. This parameter defines maximum size of each batch (for example: es_scroll_size=10000). es_scroll_time = 2m string value This parameter is a time value parameter (for example: es_scroll_time=2m), indicating for how long the nodes that participate in the search will maintain relevant resources in order to continue and support it. filter_error_trace = False boolean value Enable filter traces that contain error/exception to a separated place. Default value is set to False. Possible values: True: Enable filter traces that contain error/exception. False: Disable the filter. hmac_keys = SECRET_KEY string value Secret key(s) to use for encrypting context data for performance profiling. This string value should have the following format: <key1>[,<key2>,... <keyn>], where each key is some random string. A user who triggers the profiling via the REST API has to set one of these keys in the headers of the REST API call to include profiling results of this node for this particular project. Both "enabled" flag and "hmac_keys" config options should be set to enable profiling. Also, to generate correct profiling information across all services at least one key needs to be consistent between OpenStack projects. This ensures it can be used from client side to generate the trace, containing information from all possible resources. sentinel_service_name = mymaster string value Redissentinel uses a service name to identify a master redis service. This parameter defines the name (for example: sentinal_service_name=mymaster ). socket_timeout = 0.1 floating point value Redissentinel provides a timeout option on the connections. This parameter defines that timeout (for example: socket_timeout=0.1). trace_sqlalchemy = False boolean value Enable SQL requests profiling in services. Default value is False (SQL requests won't be traced). Possible values: True: Enables SQL requests profiling. Each SQL query will be part of the trace and can the be analyzed by how much time was spent for that. False: Disables SQL requests profiling. The spent time is only shown on a higher level of operations. Single SQL queries cannot be analyzed this way. 9.1.47. quota The following table outlines the options available under the [quota] group in the /etc/nova/nova.conf file. Table 9.46. quota Configuration option = Default value Type Description cores = 20 integer value The number of instance cores or vCPUs allowed per project. Possible values: A positive integer or 0. -1 to disable the quota. count_usage_from_placement = False boolean value Enable the counting of quota usage from the placement service. Starting in Train, it is possible to count quota usage for cores and ram from the placement service and instances from the API database instead of counting from cell databases. This works well if there is only one Nova deployment running per placement deployment. 
However, if an operator is running more than one Nova deployment sharing a placement deployment, they should not set this option to True because currently the placement service has no way to partition resource providers per Nova deployment. When this option is left as the default or set to False, Nova will use the legacy counting method to count quota usage for instances, cores, and ram from its cell databases. Note that quota usage behavior related to resizes will be affected if this option is set to True. Placement resource allocations are claimed on the destination while holding allocations on the source during a resize, until the resize is confirmed or reverted. During this time, when the server is in VERIFY_RESIZE state, quota usage will reflect resource consumption on both the source and the destination. This can be beneficial as it reserves space for a revert of a downsize, but it also means quota usage will be inflated until a resize is confirmed or reverted. Behavior will also be different for unscheduled servers in ERROR state. A server in ERROR state that has never been scheduled to a compute host will not have placement allocations, so it will not consume quota usage for cores and ram. Behavior will be different for servers in SHELVED_OFFLOADED state. A server in SHELVED_OFFLOADED state will not have placement allocations, so it will not consume quota usage for cores and ram. Note that because of this, it will be possible for a request to unshelve a server to be rejected if the user does not have enough quota available to support the cores and ram needed by the server to be unshelved. The populate_queued_for_delete and populate_user_id online data migrations must be completed before usage can be counted from placement. Until the data migration is complete, the system will fall back to legacy quota usage counting from cell databases depending on the result of an EXISTS database query during each quota check, if this configuration option is set to True. Operators who want to avoid the performance hit from the EXISTS queries should wait to set this configuration option to True until after they have completed their online data migrations via nova-manage db online_data_migrations . driver = nova.quota.DbQuotaDriver string value Provides abstraction for quota checks. Users can configure a specific driver to use for quota checks. injected_file_content_bytes = 10240 integer value The number of bytes allowed per injected file. Possible values: A positive integer or 0. -1 to disable the quota. injected_file_path_length = 255 integer value The maximum allowed injected file path length. Possible values: A positive integer or 0. -1 to disable the quota. injected_files = 5 integer value The number of injected files allowed. File injection allows users to customize the personality of an instance by injecting data into it upon boot. Only text file injection is permitted: binary or ZIP files are not accepted. During file injection, any existing files that match specified files are renamed to include .bak extension appended with a timestamp. Possible values: A positive integer or 0. -1 to disable the quota. instances = 10 integer value The number of instances allowed per project. Possible Values A positive integer or 0. -1 to disable the quota. key_pairs = 100 integer value The maximum number of key pairs allowed per user. Users can create at least one key pair for each project and use the key pair for multiple instances that belong to that project. Possible values: A positive integer or 0. 
-1 to disable the quota. metadata_items = 128 integer value The number of metadata items allowed per instance. Users can associate metadata with an instance during instance creation. This metadata takes the form of key-value pairs. Possible values: A positive integer or 0. -1 to disable the quota. ram = 51200 integer value The number of megabytes of instance RAM allowed per project. Possible values: A positive integer or 0. -1 to disable the quota. recheck_quota = True boolean value Recheck quota after resource creation to prevent allowing quota to be exceeded. This defaults to True (recheck quota after resource creation) but can be set to False to avoid additional load if allowing quota to be exceeded because of racing requests is considered acceptable. For example, when set to False, if a user makes highly parallel REST API requests to create servers, it will be possible for them to create more servers than their allowed quota during the race. If their quota is 10 servers, they might be able to create 50 during the burst. After the burst, they will not be able to create any more servers but they will be able to keep their 50 servers until they delete them. The initial quota check is done before resources are created, so if multiple parallel requests arrive at the same time, all could pass the quota check and create resources, potentially exceeding quota. When recheck_quota is True, quota will be checked a second time after resources have been created and if the resource is over quota, it will be deleted and OverQuota will be raised, usually resulting in a 403 response to the REST API user. This makes it impossible for a user to exceed their quota with the caveat that it will, however, be possible for a REST API user to be rejected with a 403 response in the event of a collision close to reaching their quota limit, even if the user has enough quota available when they made the request. server_group_members = 10 integer value The maximum number of servers per server group. Possible values: A positive integer or 0. -1 to disable the quota. server_groups = 10 integer value The maximum number of server groups per project. Server groups are used to control the affinity and anti-affinity scheduling policy for a group of servers or instances. Reducing the quota will not affect any existing group, but new servers will not be allowed into groups that have become over quota. Possible values: A positive integer or 0. -1 to disable the quota. 9.1.48. rdp The following table outlines the options available under the [rdp] group in the /etc/nova/nova.conf file. Table 9.47. rdp Configuration option = Default value Type Description enabled = False boolean value Enable Remote Desktop Protocol (RDP) related features. Hyper-V, unlike the majority of the hypervisors employed on Nova compute nodes, uses RDP instead of VNC and SPICE as a desktop sharing protocol to provide instance console access. This option enables RDP for graphical console access for virtual machines created by Hyper-V. Note: RDP should only be enabled on compute nodes that support the Hyper-V virtualization platform. Related options: compute_driver : Must be hyperv. html5_proxy_base_url = http://127.0.0.1:6083/ uri value The URL an end user would use to connect to the RDP HTML5 console proxy. The console proxy service is called with this token-embedded URL and establishes the connection to the proper instance. An RDP HTML5 console proxy service will need to be configured to listen on the address configured here. 
Typically the console proxy service would be run on a controller node. The localhost address used as default would only work in a single node environment, i.e. devstack. An RDP HTML5 proxy allows a user to access via the web the text or graphical console of any Windows server or workstation using RDP. RDP HTML5 console proxy services include FreeRDP, wsgate. See https://github.com/FreeRDP/FreeRDP-WebConnect Possible values: <scheme>://<ip-address>:<port-number>/ Related options: rdp.enabled : Must be set to True for html5_proxy_base_url to be effective. 9.1.49. remote_debug The following table outlines the options available under the [remote_debug] group in the /etc/nova/nova.conf file. Table 9.48. remote_debug Configuration option = Default value Type Description host = None host address value Debug host (IP or name) to connect to. This command line parameter is used when you want to connect to a nova service via a debugger running on a different host. Note that using the remote debug option changes how nova uses the eventlet library to support async IO. This could result in failures that do not occur under normal operation. Use at your own risk. Possible Values: IP address of a remote host as a command line parameter to a nova service. For example nova-compute --config-file /etc/nova/nova.conf --remote_debug-host <IP address of the debugger> port = None port value Debug port to connect to. This command line parameter allows you to specify the port you want to use to connect to a nova service via a debugger running on a different host. Note that using the remote debug option changes how nova uses the eventlet library to support async IO. This could result in failures that do not occur under normal operation. Use at your own risk. Possible Values: Port number you want to use as a command line parameter to a nova service. For example nova-compute --config-file /etc/nova/nova.conf --remote_debug-host <IP address of the debugger> --remote_debug-port <port debugger is listening on>. 9.1.50. scheduler The following table outlines the options available under the [scheduler] group in the /etc/nova/nova.conf file. Table 9.49. scheduler Configuration option = Default value Type Description discover_hosts_in_cells_interval = -1 integer value Periodic task interval. This value controls how often (in seconds) the scheduler should attempt to discover new hosts that have been added to cells. If negative (the default), no automatic discovery will occur. Deployments where compute nodes come and go frequently may want this enabled, where others may prefer to manually discover hosts when one is added to avoid any overhead from constantly checking. If enabled, every time this runs, we will select any unmapped hosts out of each cell database on every run. Possible values: An integer, where the integer corresponds to periodic task interval in seconds. 0 uses the default interval (60 seconds). A negative value disables periodic tasks. enable_isolated_aggregate_filtering = False boolean value Restrict use of aggregates to instances with matching metadata. This setting allows the scheduler to restrict hosts in aggregates based on matching required traits in the aggregate metadata and the instance flavor/image. If an aggregate is configured with a property with key trait:$TRAIT_NAME and value required , the instance flavor extra_specs and/or image metadata must also contain trait:$TRAIT_NAME=required to be eligible to be scheduled to hosts in that aggregate.
More technical details at https://docs.openstack.org/nova/latest/reference/isolate-aggregates.html Possible values: A boolean value. image_metadata_prefilter = False boolean value Use placement to filter hosts based on image metadata. This setting causes the scheduler to transform well known image metadata properties into placement required traits to filter host based on image metadata. This feature requires host support and is currently supported by the following compute drivers: libvirt.LibvirtDriver (since Ussuri (21.0.0)) Possible values: A boolean value. Related options: [compute] compute_driver limit_tenants_to_placement_aggregate = False boolean value Restrict tenants to specific placement aggregates. This setting causes the scheduler to look up a host aggregate with the metadata key of filter_tenant_id set to the project of an incoming request, and request results from placement be limited to that aggregate. Multiple tenants may be added to a single aggregate by appending a serial number to the key, such as filter_tenant_id:123 . The matching aggregate UUID must be mirrored in placement for proper operation. If no host aggregate with the tenant id is found, or that aggregate does not match one in placement, the result will be the same as not finding any suitable hosts for the request. Possible values: A boolean value. Related options: [scheduler] placement_aggregate_required_for_tenants max_attempts = 3 integer value The maximum number of schedule attempts. This is the maximum number of attempts that will be made for a given instance build/move operation. It limits the number of alternate hosts returned by the scheduler. When that list of hosts is exhausted, a MaxRetriesExceeded exception is raised and the instance is set to an error state. Possible values: A positive integer, where the integer corresponds to the max number of attempts that can be made when building or moving an instance. max_placement_results = 1000 integer value The maximum number of placement results to request. This setting determines the maximum limit on results received from the placement service during a scheduling operation. It effectively limits the number of hosts that may be considered for scheduling requests that match a large number of candidates. A value of 1 (the minimum) will effectively defer scheduling to the placement service strictly on "will it fit" grounds. A higher value will put an upper cap on the number of results the scheduler will consider during the filtering and weighing process. Large deployments may need to set this lower than the total number of hosts available to limit memory consumption, network traffic, etc. of the scheduler. Possible values: An integer, where the integer corresponds to the number of placement results to return. placement_aggregate_required_for_tenants = False boolean value Require a placement aggregate association for all tenants. This setting, when limit_tenants_to_placement_aggregate=True, will control whether or not a tenant with no aggregate affinity will be allowed to schedule to any available node. If aggregates are used to limit some tenants but not all, then this should be False. If all tenants should be confined via aggregate, then this should be True to prevent them from receiving unrestricted scheduling to any available node. Possible values: A boolean value. Related options: [scheduler] placement_aggregate_required_for_tenants query_placement_for_availability_zone = True boolean value Use placement to determine availability zones. 
This setting causes the scheduler to look up a host aggregate with the metadata key of availability_zone set to the value provided by an incoming request, and request results from placement be limited to that aggregate. The matching aggregate UUID must be mirrored in placement for proper operation. If no host aggregate with the availability_zone key is found, or that aggregate does not match one in placement, the result will be the same as not finding any suitable hosts. Note that if you disable this flag, you must enable the (less efficient) AvailabilityZoneFilter in the scheduler in order for availability zones to work correctly. Possible values: A boolean value. Related options: [filter_scheduler] enabled_filters Deprecated since: 24.0.0 Reason: Since the introduction of placement pre-filters in 18.0.0 (Rocky), we have supported tracking Availability Zones either natively in placement or using the legacy AvailabilityZoneFilter scheduler filter. In 24.0.0 (Xena), the filter-based approach has been deprecated for removal in favor of the placement-based approach. As a result, this config option has also been deprecated and will be removed when the AvailabilityZoneFilter filter is removed. query_placement_for_image_type_support = False boolean value Use placement to determine host support for the instance's image type. This setting causes the scheduler to ask placement only for compute hosts that support the disk_format of the image used in the request. Possible values: A boolean value. query_placement_for_routed_network_aggregates = False boolean value Enable the scheduler to filter compute hosts affined to routed network segment aggregates. See https://docs.openstack.org/neutron/latest/admin/config-routed-networks.html for details. workers = None integer value Number of workers for the nova-scheduler service. Defaults to the number of CPUs available. Possible values: An integer, where the integer corresponds to the number of worker processes. 9.1.51. serial_console The following table outlines the options available under the [serial_console] group in the /etc/nova/nova.conf file. Table 9.50. serial_console Configuration option = Default value Type Description base_url = ws://127.0.0.1:6083/ uri value The URL an end user would use to connect to the nova-serialproxy service. The nova-serialproxy service is called with this token-enriched URL and establishes the connection to the proper instance. Related options: The IP address must be identical to the address to which the nova-serialproxy service is listening (see option serialproxy_host in this section). The port must be the same as in the option serialproxy_port of this section. If you choose to use a secured websocket connection, then start this option with wss:// instead of the unsecured ws:// . The options cert and key in the [DEFAULT] section have to be set for that. enabled = False boolean value Enable the serial console feature. In order to use this feature, the service nova-serialproxy needs to run. This service is typically executed on the controller node. port_range = 10000:20000 string value A range of TCP ports a guest can use for its backend. Each instance which gets created will use one port out of this range. If the range is not big enough to provide another port for a new instance, this instance won't get launched. Possible values: Each string which passes the regex ^\d+:\d+$ For example 10000:20000 .
Be sure that the first port number is lower than the second port number and that both are in range from 0 to 65535. proxyclient_address = 127.0.0.1 string value The IP address to which proxy clients (like nova-serialproxy ) should connect to get the serial console of an instance. This is typically the IP address of the host of a nova-compute service. serialproxy_host = 0.0.0.0 string value The IP address which is used by the nova-serialproxy service to listen for incoming requests. The nova-serialproxy service listens on this IP address for incoming connection requests to instances which expose serial console. Related options: Ensure that this is the same IP address which is defined in the option base_url of this section or use 0.0.0.0 to listen on all addresses. serialproxy_port = 6083 port value The port number which is used by the nova-serialproxy service to listen for incoming requests. The nova-serialproxy service listens on this port number for incoming connection requests to instances which expose serial console. Related options: Ensure that this is the same port number which is defined in the option base_url of this section. 9.1.52. service_user The following table outlines the options available under the [service_user] group in the /etc/nova/nova.conf file. Table 9.51. service_user Configuration option = Default value Type Description auth-url = None string value Authentication URL auth_section = None string value Config Section from which to load plugin specific options auth_type = None string value Authentication type to load cafile = None string value PEM encoded Certificate Authority to use when verifying HTTPs connections. certfile = None string value PEM encoded client certificate cert file collect-timing = False boolean value Collect per-API call timing information. default-domain-id = None string value Optional domain ID to use with v3 and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. default-domain-name = None string value Optional domain name to use with v3 API and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. domain-id = None string value Domain ID to scope to domain-name = None string value Domain name to scope to insecure = False boolean value Verify HTTPS connections. keyfile = None string value PEM encoded client certificate key file password = None string value User's password project-domain-id = None string value Domain ID containing project project-domain-name = None string value Domain name containing project project-id = None string value Project ID to scope to project-name = None string value Project name to scope to send_service_user_token = False boolean value When True, if sending a user token to a REST API, also send a service token. Nova often reuses the user token provided to the nova-api to talk to other REST APIs, such as Cinder, Glance and Neutron. It is possible that while the user token was valid when the request was made to Nova, the token may expire before it reaches the other service. To avoid any failures, and to make it clear it is Nova calling the service on the user's behalf, we include a service token along with the user token. Should the user's token have expired, a valid service token ensures the REST API request will still be accepted by the keystone middleware. split-loggers = False boolean value Log requests to multiple loggers. 
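To illustrate how the send_service_user_token behaviour described above is typically wired up, the following is a minimal sketch of a [service_user] section. It is an assumption for illustration only: the keystone endpoint, credentials, and domain and project names are placeholders rather than defaults, and the option names are written with underscores, the spelling commonly used in nova.conf for these keystoneauth options.

[service_user]
send_service_user_token = true
auth_type = password
auth_url = http://keystone.example.com:5000/v3
username = nova
password = <service-password>
user_domain_name = Default
project_name = service
project_domain_name = Default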
system-scope = None string value Scope for system operations tenant-id = None string value Tenant ID tenant-name = None string value Tenant Name timeout = None integer value Timeout value for http requests trust-id = None string value ID of the trust to use as a trustee use user-domain-id = None string value User's domain id user-domain-name = None string value User's domain name user-id = None string value User ID username = None string value Username 9.1.53. spice The following table outlines the options available under the [spice] group in the /etc/nova/nova.conf file. Table 9.52. spice Configuration option = Default value Type Description agent_enabled = True boolean value Enable the SPICE guest agent support on the instances. The Spice agent works with the Spice protocol to offer a better guest console experience. However, the Spice console can still be used without the Spice Agent. With the Spice agent installed the following features are enabled: Copy & Paste of text and images between the guest and client machine Automatic adjustment of resolution when the client screen changes - e.g. if you make the Spice console full screen the guest resolution will adjust to match it rather than letterboxing. Better mouse integration - The mouse can be captured and released without needing to click inside the console or press keys to release it. The performance of mouse movement is also improved. enabled = False boolean value Enable SPICE related features. Related options: VNC must be explicitly disabled to get access to the SPICE console. Set the enabled option to False in the [vnc] section to disable the VNC console. html5proxy_base_url = http://127.0.0.1:6082/spice_auto.html uri value Location of the SPICE HTML5 console proxy. End user would use this URL to connect to the nova-spicehtml5proxy service. This service will forward request to the console of an instance. In order to use SPICE console, the service nova-spicehtml5proxy should be running. This service is typically launched on the controller node. Possible values: Must be a valid URL of the form: http://host:port/spice_auto.html where host is the node running nova-spicehtml5proxy and the port is typically 6082. Consider not using default value as it is not well defined for any real deployment. Related options: This option depends on html5proxy_host and html5proxy_port options. The access URL returned by the compute node must have the host and port where the nova-spicehtml5proxy service is listening. html5proxy_host = 0.0.0.0 host address value IP address or a hostname on which the nova-spicehtml5proxy service listens for incoming requests. Related options: This option depends on the html5proxy_base_url option. The nova-spicehtml5proxy service must be listening on a host that is accessible from the HTML5 client. html5proxy_port = 6082 port value Port on which the nova-spicehtml5proxy service listens for incoming requests. Related options: This option depends on the html5proxy_base_url option. The nova-spicehtml5proxy service must be listening on a port that is accessible from the HTML5 client. image_compression = None string value Configure the SPICE image compression (lossless). jpeg_compression = None string value Configure the SPICE wan image compression (lossy for slow links). playback_compression = None boolean value Enable the SPICE audio stream compression (using celt). server_listen = 127.0.0.1 string value The address where the SPICE server running on the instances should listen. 
Typically, the nova-spicehtml5proxy proxy client runs on the controller node and connects over the private network to this address on the compute node(s). Possible values: IP address to listen on. server_proxyclient_address = 127.0.0.1 string value The address used by nova-spicehtml5proxy client to connect to instance console. Typically, the nova-spicehtml5proxy proxy client runs on the controller node and connects over the private network to this address on the compute node(s). Possible values: Any valid IP address on the compute node. Related options: This option depends on the server_listen option. The proxy client must be able to access the address specified in server_listen using the value of this option. streaming_mode = None string value Configure the SPICE video stream detection and (lossy) compression. zlib_compression = None string value Configure the SPICE wan image compression (lossless for slow links). 9.1.54. upgrade_levels The following table outlines the options available under the [upgrade_levels] group in the /etc/nova/nova.conf file. Table 9.53. upgrade_levels Configuration option = Default value Type Description baseapi = None string value Base API RPC API version cap. Possible values: By default send the latest version the client knows about A string representing a version number in the format N.N ; for example, possible values might be 1.12 or 2.0 . An OpenStack release name, in lower case, such as mitaka or liberty . cert = None string value Cert RPC API version cap. Possible values: By default send the latest version the client knows about A string representing a version number in the format N.N ; for example, possible values might be 1.12 or 2.0 . An OpenStack release name, in lower case, such as mitaka or liberty . Deprecated since: 18.0.0 Reason: The nova-cert service was removed in 16.0.0 (Pike) so this option is no longer used. compute = None string value Compute RPC API version cap. By default, we always send messages using the most recent version the client knows about. Where you have old and new compute services running, you should set this to the lowest deployed version. This is to guarantee that all services never send messages that one of the compute nodes can't understand. Note that we only support upgrading from release N to release N+1. Set this option to "auto" if you want to let the compute RPC module automatically determine what version to use based on the service versions in the deployment. Possible values: By default send the latest version the client knows about auto : Automatically determines what version to use based on the service versions in the deployment. A string representing a version number in the format N.N ; for example, possible values might be 1.12 or 2.0 . An OpenStack release name, in lower case, such as mitaka or liberty . conductor = None string value Conductor RPC API version cap. Possible values: By default send the latest version the client knows about A string representing a version number in the format N.N ; for example, possible values might be 1.12 or 2.0 . An OpenStack release name, in lower case, such as mitaka or liberty . scheduler = None string value Scheduler RPC API version cap. Possible values: By default send the latest version the client knows about A string representing a version number in the format N.N ; for example, possible values might be 1.12 or 2.0 . An OpenStack release name, in lower case, such as mitaka or liberty . 9.1.55. 
vault The following table outlines the options available under the [vault] group in the /etc/nova/nova.conf file. Table 9.54. vault Configuration option = Default value Type Description approle_role_id = None string value AppRole role_id for authentication with vault approle_secret_id = None string value AppRole secret_id for authentication with vault kv_mountpoint = secret string value Mountpoint of KV store in Vault to use, for example: secret kv_version = 2 integer value Version of KV store in Vault to use, for example: 2 namespace = None string value Vault Namespace to use for all requests to Vault. Vault Namespaces feature is available only in Vault Enterprise root_token_id = None string value root token for vault ssl_ca_crt_file = None string value Absolute path to ca cert file use_ssl = False boolean value SSL Enabled/Disabled vault_url = http://127.0.0.1:8200 string value Use this endpoint to connect to Vault, for example: "http://127.0.0.1:8200" 9.1.56. vendordata_dynamic_auth The following table outlines the options available under the [vendordata_dynamic_auth] group in the /etc/nova/nova.conf file. Table 9.55. vendordata_dynamic_auth Configuration option = Default value Type Description auth-url = None string value Authentication URL auth_section = None string value Config Section from which to load plugin specific options auth_type = None string value Authentication type to load cafile = None string value PEM encoded Certificate Authority to use when verifying HTTPs connections. certfile = None string value PEM encoded client certificate cert file collect-timing = False boolean value Collect per-API call timing information. default-domain-id = None string value Optional domain ID to use with v3 and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. default-domain-name = None string value Optional domain name to use with v3 API and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. domain-id = None string value Domain ID to scope to domain-name = None string value Domain name to scope to insecure = False boolean value Verify HTTPS connections. keyfile = None string value PEM encoded client certificate key file password = None string value User's password project-domain-id = None string value Domain ID containing project project-domain-name = None string value Domain name containing project project-id = None string value Project ID to scope to project-name = None string value Project name to scope to split-loggers = False boolean value Log requests to multiple loggers. system-scope = None string value Scope for system operations tenant-id = None string value Tenant ID tenant-name = None string value Tenant Name timeout = None integer value Timeout value for http requests trust-id = None string value ID of the trust to use as a trustee use user-domain-id = None string value User's domain id user-domain-name = None string value User's domain name user-id = None string value User ID username = None string value Username 9.1.57. vmware The following table outlines the options available under the [vmware] group in the /etc/nova/nova.conf file. Table 9.56. vmware Configuration option = Default value Type Description api_retry_count = 10 integer value Number of times VMware vCenter server API must be retried on connection failures, e.g. socket error, etc. ca_file = None string value Specifies the CA bundle file to be used in verifying the vCenter server certificate. 
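To show how the vCenter connection and certificate verification options in this group fit together, a minimal, illustrative [vmware] sketch follows. The vCenter address, credentials, cluster name, and CA bundle path are placeholders, not recommended values.

[vmware]
host_ip = vcenter.example.com
host_username = administrator@vsphere.local
host_password = <vcenter-password>
cluster_name = NovaCluster
ca_file = /etc/nova/vcenter-ca.pem
insecure = False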
cache_prefix = None string value This option adds a prefix to the folder where cached images are stored This is not the full path - just a folder prefix. This should only be used when a datastore cache is shared between compute nodes. Note: This should only be used when the compute nodes are running on same host or they have a shared file system. Possible values: Any string representing the cache prefix to the folder cluster_name = None string value Name of a VMware Cluster ComputeResource. connection_pool_size = 10 integer value This option sets the http connection pool size The connection pool size is the maximum number of connections from nova to vSphere. It should only be increased if there are warnings indicating that the connection pool is full, otherwise, the default should suffice. console_delay_seconds = None integer value Set this value if affected by an increased network latency causing repeated characters when typing in a remote console. datastore_regex = None string value Regular expression pattern to match the name of datastore. The datastore_regex setting specifies the datastores to use with Compute. For example, datastore_regex="nas.*" selects all the data stores that have a name starting with "nas". Note If no regex is given, it just picks the datastore with the most freespace. Possible values: Any matching regular expression to a datastore must be given host_ip = None host address value Hostname or IP address for connection to VMware vCenter host. host_password = None string value Password for connection to VMware vCenter host. host_port = 443 port value Port for connection to VMware vCenter host. host_username = None string value Username for connection to VMware vCenter host. insecure = False boolean value If true, the vCenter server certificate is not verified. If false, then the default CA truststore is used for verification. Related options: ca_file: This option is ignored if "ca_file" is set. integration_bridge = None string value This option should be configured only when using the NSX-MH Neutron plugin. This is the name of the integration bridge on the ESXi server or host. This should not be set for any other Neutron plugin. Hence the default value is not set. Possible values: Any valid string representing the name of the integration bridge maximum_objects = 100 integer value This option specifies the limit on the maximum number of objects to return in a single result. A positive value will cause the operation to suspend the retrieval when the count of objects reaches the specified limit. The server may still limit the count to something less than the configured value. Any remaining objects may be retrieved with additional requests. pbm_default_policy = None string value This option specifies the default policy to be used. If pbm_enabled is set and there is no defined storage policy for the specific request, then this policy will be used. Possible values: Any valid storage policy such as VSAN default storage policy Related options: pbm_enabled pbm_enabled = False boolean value This option enables or disables storage policy based placement of instances. Related options: pbm_default_policy pbm_wsdl_location = None string value This option specifies the PBM service WSDL file location URL. Setting this will disable storage policy based placement of instances. 
Possible values: Any valid file path e.g file:///opt/SDK/spbm/wsdl/pbmService.wsdl serial_log_dir = /opt/vmware/vspc string value Specifies the directory where the Virtual Serial Port Concentrator is storing console log files. It should match the serial_log_dir config value of VSPC. serial_port_proxy_uri = None uri value Identifies a proxy service that provides network access to the serial_port_service_uri. Possible values: Any valid URI (The scheme is telnet or telnets .) Related options: This option is ignored if serial_port_service_uri is not specified. serial_port_service_uri serial_port_service_uri = None string value Identifies the remote system where the serial port traffic will be sent. This option adds a virtual serial port which sends console output to a configurable service URI. At the service URI address there will be virtual serial port concentrator that will collect console logs. If this is not set, no serial ports will be added to the created VMs. Possible values: Any valid URI task_poll_interval = 0.5 floating point value Time interval in seconds to poll remote tasks invoked on VMware VC server. use_linked_clone = True boolean value This option enables/disables the use of linked clone. The ESX hypervisor requires a copy of the VMDK file in order to boot up a virtual machine. The compute driver must download the VMDK via HTTP from the OpenStack Image service to a datastore that is visible to the hypervisor and cache it. Subsequent virtual machines that need the VMDK use the cached version and don't have to copy the file again from the OpenStack Image service. If set to false, even with a cached VMDK, there is still a copy operation from the cache location to the hypervisor file directory in the shared datastore. If set to true, the above copy operation is avoided as it creates copy of the virtual machine that shares virtual disks with its parent VM. vnc_keymap = en-us string value Keymap for VNC. The keyboard mapping (keymap) determines which keyboard layout a VNC session should use by default. Possible values: A keyboard layout which is supported by the underlying hypervisor on this node. This is usually an IETF language tag (for example en-us ). vnc_port = 5900 port value This option specifies VNC starting port. Every VM created by ESX host has an option of enabling VNC client for remote connection. Above option vnc_port helps you to set default starting port for the VNC client. Possible values: Any valid port number within 5900 -(5900 + vnc_port_total) Related options: Below options should be set to enable VNC client. vnc.enabled = True vnc_port_total vnc_port_total = 10000 integer value Total number of VNC ports. 9.1.58. vnc The following table outlines the options available under the [vnc] group in the /etc/nova/nova.conf file. Table 9.57. vnc Configuration option = Default value Type Description auth_schemes = ['none'] list value The authentication schemes to use with the compute node. Control what RFB authentication schemes are permitted for connections between the proxy and the compute host. If multiple schemes are enabled, the first matching scheme will be used, thus the strongest schemes should be listed first. Related options: [vnc]vencrypt_client_key , [vnc]vencrypt_client_cert : must also be set enabled = True boolean value Enable VNC related features. Guests will get created with graphical devices to support this. Clients (for example Horizon) can then establish a VNC connection to the guest. 
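As a concrete sketch of how the console options in this group are commonly combined on a compute node, the following [vnc] settings could be used. The compute node address and the controller hostname in the proxy URL are placeholders; adjust them to your deployment.

[vnc]
enabled = true
server_listen = 0.0.0.0
server_proxyclient_address = 192.0.2.10
novncproxy_base_url = http://controller.example.com:6080/vnc_auto.html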
novncproxy_base_url = http://127.0.0.1:6080/vnc_auto.html uri value Public address of noVNC VNC console proxy. The VNC proxy is an OpenStack component that enables compute service users to access their instances through VNC clients. noVNC provides VNC support through a websocket-based client. This option sets the public base URL to which client systems will connect. noVNC clients can use this address to connect to the noVNC instance and, by extension, the VNC sessions. If using noVNC >= 1.0.0, you should use vnc_lite.html instead of vnc_auto.html . Related options: novncproxy_host novncproxy_port novncproxy_host = 0.0.0.0 string value IP address that the noVNC console proxy should bind to. The VNC proxy is an OpenStack component that enables compute service users to access their instances through VNC clients. noVNC provides VNC support through a websocket-based client. This option sets the private address to which the noVNC console proxy service should bind. Related options: novncproxy_port novncproxy_base_url novncproxy_port = 6080 port value Port that the noVNC console proxy should bind to. The VNC proxy is an OpenStack component that enables compute service users to access their instances through VNC clients. noVNC provides VNC support through a websocket-based client. This option sets the private port to which the noVNC console proxy service should bind. Related options: novncproxy_host novncproxy_base_url server_listen = 127.0.0.1 host address value The IP address or hostname on which an instance should listen for incoming VNC connection requests on this node. server_proxyclient_address = 127.0.0.1 host address value Private, internal IP address or hostname of VNC console proxy. The VNC proxy is an OpenStack component that enables compute service users to access their instances through VNC clients. This option sets the private address to which proxy clients, such as nova-novncproxy, should connect. vencrypt_ca_certs = None string value The path to the CA certificate PEM file The fully qualified path to a PEM file containing one or more x509 certificates for the certificate authorities used by the compute node VNC server. Related options: vnc.auth_schemes : must include vencrypt vencrypt_client_cert = None string value The path to the client certificate PEM file (for x509) The fully qualified path to a PEM file containing the x509 certificate which the VNC proxy server presents to the compute node during VNC authentication. Related options: vnc.auth_schemes : must include vencrypt vnc.vencrypt_client_key : must also be set vencrypt_client_key = None string value The path to the client key file (for x509) The fully qualified path to a PEM file containing the private key which the VNC proxy server presents to the compute node during VNC authentication. Related options: vnc.auth_schemes : must include vencrypt vnc.vencrypt_client_cert : must also be set 9.1.59. workarounds The following table outlines the options available under the [workarounds] group in the /etc/nova/nova.conf file. Table 9.58. workarounds Configuration option = Default value Type Description disable_compute_service_check_for_ffu = False boolean value If this is set, the normal safety check for old compute services will be treated as a warning instead of an error. This is only to be enabled to facilitate a Fast-Forward upgrade where new control services are being started before compute nodes have been able to update their service record.
In an FFU, the service records in the database will be more than one version old until the compute nodes start up, but control services need to be online first. disable_deep_image_inspection = False boolean value This disables the additional deep image inspection that the compute node does when downloading from glance. This includes backing-file, data-file, and known-features detection before passing the image to qemu-img. Generally, this inspection should be enabled for maximum safety, but this workaround option allows disabling it if there is a compatibility concern. disable_fallback_pcpu_query = False boolean value Disable fallback request for VCPU allocations when using pinned instances. Starting in Train, compute nodes using the libvirt virt driver can report PCPU inventory and will use this for pinned instances. The scheduler will automatically translate requests using the legacy CPU pinning-related flavor extra specs, hw:cpu_policy and hw:cpu_thread_policy , their image metadata property equivalents, and the emulator threads pinning flavor extra spec, hw:emulator_threads_policy , to new placement requests. However, compute nodes require additional configuration in order to report PCPU inventory and this configuration may not be present immediately after an upgrade. To ensure pinned instances can be created without this additional configuration, the scheduler will make a second request to placement for old-style VCPU -based allocations and fallback to these allocation candidates if necessary. This has a slight performance impact and is not necessary on new or upgraded deployments where the new configuration has been set on all hosts. By setting this option, the second lookup is disabled and the scheduler will only request PCPU -based allocations. Deprecated since: 20.0.0 *Reason:*None disable_group_policy_check_upcall = False boolean value Disable the server group policy check upcall in compute. In order to detect races with server group affinity policy, the compute service attempts to validate that the policy was not violated by the scheduler. It does this by making an upcall to the API database to list the instances in the server group for one that it is booting, which violates our api/cell isolation goals. Eventually this will be solved by proper affinity guarantees in the scheduler and placement service, but until then, this late check is needed to ensure proper affinity policy. Operators that desire api/cell isolation over this check should enable this flag, which will avoid making that upcall from compute. Related options: [filter_scheduler]/track_instance_changes also relies on upcalls from the compute service to the scheduler service. disable_libvirt_livesnapshot = False boolean value Disable live snapshots when using the libvirt driver. Live snapshots allow the snapshot of the disk to happen without an interruption to the guest, using coordination with a guest agent to quiesce the filesystem. When using libvirt 1.2.2 live snapshots fail intermittently under load (likely related to concurrent libvirt/qemu operations). This config option provides a mechanism to disable live snapshot, in favor of cold snapshot, while this is resolved. Cold snapshot causes an instance outage while the guest is going through the snapshotting process. 
For more information, refer to the bug report: Possible values: True: Live snapshot is disabled when using libvirt False: Live snapshots are always used when snapshotting (as long as there is a new enough libvirt and the backend storage supports it) Deprecated since: 19.0.0 Reason: This option was added to work around issues with libvirt 1.2.2. We no longer support this version of libvirt, which means this workaround is no longer necessary. It will be removed in a future release. disable_rootwrap = False boolean value Use sudo instead of rootwrap. Allow fallback to sudo for performance reasons. For more information, refer to the bug report: Possible values: True: Use sudo instead of rootwrap False: Use rootwrap as usual Interdependencies to other options: Any options that affect rootwrap will be ignored. enable_numa_live_migration = False boolean value Enable live migration of instances with NUMA topologies. Live migration of instances with NUMA topologies when using the libvirt driver is only supported in deployments that have been fully upgraded to Train. In versions, or in mixed Stein/Train deployments with a rolling upgrade in progress, live migration of instances with NUMA topologies is disabled by default when using the libvirt driver. This includes live migration of instances with CPU pinning or hugepages. CPU pinning and huge page information for such instances is not currently re-calculated, as noted in `bug #1289064`_. This means that if instances were already present on the destination host, the migrated instance could be placed on the same dedicated cores as these instances or use hugepages allocated for another instance. Alternately, if the host platforms were not homogeneous, the instance could be assigned to non-existent cores or be inadvertently split across host NUMA nodes. Despite these known issues, there may be cases where live migration is necessary. By enabling this option, operators that are aware of the issues and are willing to manually work around them can enable live migration support for these instances. Related options: compute_driver : Only the libvirt driver is affected. _bug #1289064: https://bugs.launchpad.net/nova/+bug/1289064 Deprecated since: 20.0.0 *Reason:*This option was added to mitigate known issues when live migrating instances with a NUMA topology with the libvirt driver. Those issues are resolved in Train. Clouds using the libvirt driver and fully upgraded to Train support NUMA-aware live migration. This option will be removed in a future release. enable_qemu_monitor_announce_self = False boolean value If it is set to True the libvirt driver will try as a best effort to send the announce-self command to the QEMU monitor so that it generates RARP frames to update network switches in the post live migration phase on the destination. Please note that this causes the domain to be considered tainted by libvirt. Related options: :oslo.config:option: DEFAULT.compute_driver (libvirt) ensure_libvirt_rbd_instance_dir_cleanup = False boolean value Ensure the instance directory is removed during clean up when using rbd. When enabled this workaround will ensure that the instance directory is always removed during cleanup on hosts using [libvirt]/images_type=rbd . 
This avoids the following bugs with evacuation and revert resize clean up that lead to the instance directory remaining on the host: https://bugs.launchpad.net/nova/+bug/1414895 https://bugs.launchpad.net/nova/+bug/1761062 Both of these bugs can then result in DestinationDiskExists errors being raised if the instances ever attempt to return to the host. Warning: Operators will need to ensure that the instance directory itself, specified by [DEFAULT]/instances_path , is not shared between computes before enabling this workaround, otherwise the console.log, kernels, ramdisks and any additional files being used by the running instance will be lost. Related options: compute_driver (libvirt) [libvirt]/images_type (rbd) instances_path handle_virt_lifecycle_events = True boolean value Enable handling of events emitted from compute drivers. Many compute drivers emit lifecycle events, which are events that occur when, for example, an instance is starting or stopping. If the instance is going through task state changes due to an API operation, like resize, the events are ignored. This is an advanced feature which allows the hypervisor to signal to the compute service that an unexpected state change has occurred in an instance and that the instance can be shut down automatically. Unfortunately, this can race in some conditions, for example in reboot operations or when the compute service or the host is rebooted (planned or due to an outage). If such races are common, then it is advisable to disable this feature. Care should be taken when this feature is disabled and sync_power_state_interval is set to a negative value. In this case, any instances that get out of sync between the hypervisor and the Nova database will have to be synchronized manually. For more information, refer to the bug report: https://bugs.launchpad.net/bugs/1444630 Interdependencies to other options: If sync_power_state_interval is negative and this feature is disabled, then instances that get out of sync between the hypervisor and the Nova database will have to be synchronized manually. libvirt_disable_apic = False boolean value With some kernels initializing the guest apic can result in a kernel hang that renders the guest unusable. This happens as a result of a kernel bug. In most cases the correct fix is to update the guest image kernel to one that is patched; however, in some cases this is not possible. This workaround allows the emulation of an apic to be disabled per host; however, it is not recommended for use outside of a CI or developer cloud. never_download_image_if_on_rbd = False boolean value When booting from an image on a ceph-backed compute node, if the image does not already reside on the ceph cluster (as would be the case if glance is also using the same cluster), nova will download the image from glance and upload it to ceph itself. If using multiple ceph clusters, this may cause nova to unintentionally duplicate the image in a non-COW-able way in the local ceph deployment, wasting space. For more information, refer to the bug report: https://bugs.launchpad.net/nova/+bug/1858877 Enabling this option will cause nova to refuse to boot an instance if it would require downloading the image from glance and uploading it to ceph itself. Related options: compute_driver (libvirt) [libvirt]/images_type (rbd) qemu_monitor_announce_self_count = 3 integer value The total number of times to send the announce_self command to the QEMU monitor when enable_qemu_monitor_announce_self is enabled.
Related options: WORKAROUNDS.enable_qemu_monitor_announce_self (libvirt) qemu_monitor_announce_self_interval = 1 integer value The number of seconds to wait before re-sending the announce_self command to the QEMU monitor. Related options: WORKAROUNDS.enable_qemu_monitor_announce_self (libvirt) reserve_disk_resource_for_image_cache = False boolean value If it is set to True then the libvirt driver will reserve DISK_GB resource for the images stored in the image cache. If DEFAULT.instances_path is on a different disk partition than the image cache directory then the driver will not reserve resource for the cache. Such disk reservation is done by a periodic task in the resource tracker that runs every update_resources_interval seconds. So the reservation is not updated immediately when an image is cached. Related options: DEFAULT.instances_path image_cache.subdirectory_name update_resources_interval skip_cpu_compare_at_startup = False boolean value This will skip the CPU comparison call at the startup of the Compute service and lets libvirt handle it. skip_cpu_compare_on_dest = False boolean value With the libvirt driver, during live migration, skip comparing guest CPU with the destination host. When using QEMU >= 2.9 and libvirt >= 4.4.0, libvirt will do the correct thing with respect to checking CPU compatibility on the destination host during live migration. skip_hypervisor_version_check_on_lm = False boolean value When this is enabled, it will skip version-checking of hypervisors during live migration. skip_reserve_in_use_ironic_nodes = False boolean value This may be useful if you use the Ironic driver, but don't have automatic cleaning enabled in Ironic. Nova, by default, will mark Ironic nodes as reserved as soon as they are in use. When you free the Ironic node (by deleting the nova instance) it takes a while for Nova to un-reserve that Ironic node in placement. Usually this is a good idea, because it avoids placement providing an Ironic node as a valid candidate when it is still being cleaned. However, if you don't use automatic cleaning, it can cause an extra delay before an Ironic node is available for building a new Nova instance. unified_limits_count_pcpu_as_vcpu = False boolean value When using unified limits, use VCPU + PCPU for VCPU quota usage. If the deployment is configured to use unified limits via [quota]driver=nova.quota.UnifiedLimitsDriver , by default VCPU resources are counted independently from PCPU resources, consistent with how they are represented in the placement service. Legacy quota behavior counts PCPU as VCPU and returns the sum of VCPU + PCPU usage as the usage count for VCPU. Operators relying on the aggregation of VCPU and PCPU resource usage counts should set this option to True. Related options: quota.driver wait_for_vif_plugged_event_during_hard_reboot = [] list value The libvirt virt driver implements power on and hard reboot by tearing down every vif of the instance being rebooted, then plugging them again. By default nova does not wait for the network-vif-plugged event from neutron before it lets the instance run. This can cause the instance to request the IP via DHCP before the neutron backend has a chance to set up the networking backend after the vif plug. This flag defines which vifs nova expects network-vif-plugged events from during hard reboot.
The possible values are neutron port vnic types: normal direct macvtap baremetal direct-physical virtio-forwarder smart-nic vdpa accelerator-direct accelerator-direct-physical remote-managed Adding a vnic_type to this configuration makes Nova wait for a network-vif-plugged event for each of the instance's vifs having the specific vnic_type before unpausing the instance, similarly to how new instance creation works. Please note that not all neutron networking backends send plug time events, for certain vnic_type therefore this config is empty by default. The ml2/ovs and the networking-odl backends are known to send plug time events for ports with normal vnic_type so it is safe to add normal to this config if you are using only those backends in the compute host. The neutron in-tree SRIOV backend does not reliably send network-vif-plugged event during plug time for ports with direct vnic_type and never sends that event for port with direct-physical vnic_type during plug time. For other vnic_type and backend pairs, please consult the developers of the backend. Related options: :oslo.config:option: DEFAULT.vif_plugging_timeout 9.1.60. wsgi The following table outlines the options available under the [wsgi] group in the /etc/nova/nova.conf file. Table 9.59. wsgi Configuration option = Default value Type Description api_paste_config = api-paste.ini string value This option represents a file name for the paste.deploy config for nova-api. Possible values: A string representing file name for the paste.deploy config. client_socket_timeout = 900 integer value This option specifies the timeout for client connections' socket operations. If an incoming connection is idle for this number of seconds it will be closed. It indicates timeout on individual read/writes on the socket connection. To wait forever set to 0. default_pool_size = 1000 integer value This option specifies the size of the pool of greenthreads used by wsgi. It is possible to limit the number of concurrent connections using this option. keep_alive = True boolean value This option allows using the same TCP connection to send and receive multiple HTTP requests/responses, as opposed to opening a new one for every single request/response pair. HTTP keep-alive indicates HTTP connection reuse. Possible values: True : reuse HTTP connection. False : closes the client socket connection explicitly. Related options: tcp_keepidle max_header_line = 16384 integer value This option specifies the maximum line size of message headers to be accepted. max_header_line may need to be increased when using large tokens (typically those generated by the Keystone v3 API with big service catalogs). Since TCP is a stream based protocol, in order to reuse a connection, the HTTP has to have a way to indicate the end of the response and beginning of the . Hence, in a keep_alive case, all messages must have a self-defined message length. secure_proxy_ssl_header = None string value This option specifies the HTTP header used to determine the protocol scheme for the original request, even if it was removed by a SSL terminating proxy. Possible values: None (default) - the request scheme is not influenced by any HTTP headers Valid HTTP header, like HTTP_X_FORWARDED_PROTO Warning Do not set this unless you know what you are doing. Make sure ALL of the following are true before setting this (assuming the values from the example above): Your API is behind a proxy. Your proxy strips the X-Forwarded-Proto header from all incoming requests. 
In other words, if end users include that header in their requests, the proxy will discard it. Your proxy sets the X-Forwarded-Proto header and sends it to the API, but only for requests that originally come in via HTTPS. If any of those are not true, you should keep this setting set to None. ssl_ca_file = None string value This option allows setting the path to the CA certificate file that should be used to verify connecting clients. Possible values: String representing the path to the CA certificate file. Related options: enabled_ssl_apis ssl_cert_file = None string value This option allows setting the path to the SSL certificate of the API server. Possible values: String representing the path to the SSL certificate. Related options: enabled_ssl_apis ssl_key_file = None string value This option specifies the path to the file where the SSL private key of the API server is stored when SSL is in effect. Possible values: String representing the path to the SSL private key. Related options: enabled_ssl_apis tcp_keepidle = 600 integer value This option sets the value of TCP_KEEPIDLE in seconds for each server socket. It specifies the duration of time to keep the connection active. TCP generates a KEEPALIVE transmission for an application that requests to keep the connection active. Not supported on OS X. Related options: keep_alive wsgi_log_format = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f string value It represents a Python format string that is used as the template to generate log lines. The following values can be formatted into it: client_ip, date_time, request_line, status_code, body_length, wall_seconds. This option is used for building custom request loglines when running nova-api under eventlet. If used under uwsgi or apache, this option has no effect. Possible values: %(client_ip)s "%(request_line)s" status: %(status_code)s ' 'len: %(body_length)s time: %(wall_seconds).7f (default) Any formatted string formed by specific values. Deprecated since: 16.0.0 Reason: This option only works when running nova-api under eventlet, and encodes very eventlet-specific pieces of information. Starting in Pike the preferred model for running nova-api is under uwsgi or apache mod_wsgi. 9.1.61. zvm The following table outlines the options available under the [zvm] group in the /etc/nova/nova.conf file. Table 9.60. zvm Configuration option = Default value Type Description ca_file = None string value CA certificate file to be verified in the httpd server with TLS enabled. A string; it must be a path to a CA bundle to use. cloud_connector_url = None uri value URL to be used to communicate with z/VM Cloud Connector. image_tmp_path = USDstate_path/images string value The path at which images will be stored (snapshot, deploy, etc). Images used for deploy and images captured via snapshot need to be stored on the local disk of the compute host. This configuration identifies the directory location. Possible values: A file system path on the host running the compute service. reachable_timeout = 300 integer value Timeout (seconds) to wait for an instance to start. The z/VM driver relies on communication between the instance and cloud connector. After an instance is created, it must have enough time to wait for all the network info to be written into the user directory. The driver will keep rechecking the network status of the instance until the timeout value is reached. If setting up the network fails, it will notify the user that starting the instance failed and put the instance in ERROR state. 
The underlying z/VM guest will then be deleted. Possible values: Any positive integer. Recommended to be at least 300 seconds (5 minutes), but it will vary depending on the instance and system load. A value of 0 is used for debugging. In this case, the underlying z/VM guest will not be deleted when the instance is marked in ERROR state.
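For illustration only, the following is a minimal sketch of how a few of the options described above might be combined in /etc/nova/nova.conf . The group placement follows the tables above ( [workarounds] , [wsgi] , [zvm] ), but the chosen values are assumptions for a deployment that uses only the ml2/ovs backend and should be adapted to your environment.

```ini
[workarounds]
# Wait for network-vif-plugged events for "normal" ports during hard reboot.
# Only safe when every networking backend on this compute host sends plug-time events.
wait_for_vif_plugged_event_during_hard_reboot = normal

[wsgi]
# Trust the scheme reported by an SSL-terminating proxy. Set this only when the
# proxy strips and re-adds the header exactly as described above.
secure_proxy_ssl_header = HTTP_X_FORWARDED_PROTO

[zvm]
# Give z/VM instances up to five minutes to report their network status.
reachable_timeout = 300
```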
[ "This option does not affect `PCPU` inventory, which cannot be overcommitted.", "If this option is set to something *other than* `None` or `0.0`, the allocation ratio will be overwritten by the value of this option, otherwise, the allocation ratio will not change. Once set to a non-default value, it is not possible to \"unset\" the config to get back to the default behavior. If you want to reset back to the initial value, explicitly specify it to the value of `initial_cpu_allocation_ratio`.", "If the value is set to `>1`, we recommend keeping track of the free disk space, as the value approaching `0` may result in the incorrect functioning of instances using it at the moment.", "If this option is set to something *other than* `None` or `0.0`, the allocation ratio will be overwritten by the value of this option, otherwise, the allocation ratio will not change. Once set to a non-default value, it is not possible to \"unset\" the config to get back to the default behavior. If you want to reset back to the initial value, explicitly specify it to the value of `initial_disk_allocation_ratio`.", "https://cloudinit.readthedocs.io/en/latest/topics/datasources.html", "The following image properties are *never* inherited regardless of whether they are listed in this configuration option or not:", "If this option is set to something *other than* `None` or `0.0`, the allocation ratio will be overwritten by the value of this option, otherwise, the allocation ratio will not change. Once set to a non-default value, it is not possible to \"unset\" the config to get back to the default behavior. If you want to reset back to the initial value, explicitly specify it to the value of `initial_ram_allocation_ratio`.", "In this example we are reserving on NUMA node 0 64 pages of 2MiB and on NUMA node 1 1 page of 1GiB.", "The compute service cannot reliably determine which types of virtual interfaces (`port.binding:vif_type`) will send `network-vif-plugged` events without an accompanying port `binding:host_id` change. Open vSwitch and linuxbridge should be OK, but OpenDaylight is at least one known backend that will not currently work in this case, see bug https://launchpad.net/bugs/1755890 for more details.", "https://docs.openstack.org/nova/latest/admin/managing-resource-providers.html", "ssl_ciphers = \"kEECDH+aECDSA+AES:kEECDH+AES+aRSA:kEDH+aRSA+AES\"", "https://www.openssl.org/docs/man1.1.0/man1/ciphers.html", "[devices] enabled_mdev_types = nvidia-35, nvidia-36", "[mdev_nvidia-35] device_addresses = 0000:84:00.0,0000:85:00.0", "[vgpu_nvidia-36] device_addresses = 0000:86:00.0", "[filter_scheduler] hypervisor_version_weight_multiplier=-1000", "[filter_scheduler] hypervisor_version_weight_multiplier=2.5", "[filter_scheduler] hypervisor_version_weight_multiplier=0", "In a multi-cell (v2) setup where the cell MQ is separated from the top-level, computes cannot directly communicate with the scheduler. Thus, this option cannot be enabled in that scenario. 
See also the `[workarounds] disable_group_policy_check_upcall` option.", "64, 128, 256, 512, 1024", "This is only necessary if the URI differs to the commonly known URIs for the chosen virtualization type.", "[libvirt] cpu_mode = custom cpu_models = Cascadelake-Server cpu_model_extra_flags = -hle, -rtm, +ssbd, mtrr", "[libvirt] cpu_mode = custom cpu_models = Haswell-noTSX-IBRS cpu_model_extra_flags = -PDPE1GB, +VMX, pcid", "[libvirt] enabled_perf_events = cpu_clock, cache_misses", "It is recommended to read :ref:`the deployment documentation's section on this option <num_memory_encrypted_guests>` before deciding whether to configure this setting or leave it at the default.", "\"USDLABEL:USDNSNAME[&verbar;USDNSNAME][,USDLABEL:USDNSNAME[&verbar;USDNSNAME]]\"", "`name1=1.0, name2=-1.3`", "`(name1.value * 1.0) + (name2.value * -1.3)`", "[neutron] physnets = foo, bar", "[neutron_physnet_foo] numa_nodes = 0", "[neutron_physnet_bar] numa_nodes = 0,1", "[neutron_tunnel] numa_nodes = 1", "alias = { \"name\": \"QuickAssist\", \"product_id\": \"0443\", \"vendor_id\": \"8086\", \"device_type\": \"type-PCI\", \"numa_policy\": \"required\" }", "This defines an alias for the Intel QuickAssist card. (multi valued). Valid key values are :", "`name` Name of the PCI alias.", "`product_id` Product ID of the device in hexadecimal.", "`vendor_id` Vendor ID of the device in hexadecimal.", "`device_type` Type of PCI device. Valid values are: `type-PCI`, `type-PF` and `type-VF`. Note that `\"device_type\": \"type-PF\"` **must** be specified if you wish to passthrough a device that supports SR-IOV in its entirety.", "`numa_policy` Required NUMA affinity of device. Valid values are: `legacy`, `preferred` and `required`.", "`resource_class` The optional Placement resource class name that is used to track the requested PCI devices in Placement. It can be a standard resource class from the `os-resource-classes` lib. Or it can be an arbitrary string. If it is an non-standard resource class then Nova will normalize it to a proper Placement resource class by making it upper case, replacing any consecutive character outside of `[A-Z0-9_]` with a single '_', and prefixing the name with `CUSTOM_` if not yet prefixed. The maximum allowed length is 255 character including the prefix. If `resource_class` is not provided Nova will generate it from `vendor_id` and `product_id` values of the alias in the form of `CUSTOM_PCI_{vendor_id}_{product_id}`. The `resource_class` requested in the alias is matched against the `resource_class` defined in the `[pci]device_spec`. This field can only be used only if `[filter_scheduler]pci_in_placement` is enabled.", "`traits` An optional comma separated list of Placement trait names requested to be present on the resource provider that fulfills this alias. Each trait can be a standard trait from `os-traits` lib or it can be an arbitrary string. If it is a non-standard trait then Nova will normalize the trait name by making it upper case, replacing any consecutive character outside of `[A-Z0-9_]` with a single '_', and prefixing the name with `CUSTOM_` if not yet prefixed. The maximum allowed length of a trait name is 255 character including the prefix. Every trait in `traits` requested in the alias ensured to be in the list of traits provided in the `traits` field of the `[pci]device_spec` when scheduling the request. 
This field can only be used only if `[filter_scheduler]pci_in_placement` is enabled.", "Where `[` indicates zero or one occurrences, `{` indicates zero or multiple occurrences, and `&verbar;` mutually exclusive options. Note that any missing fields are automatically wildcarded.", "Valid key values are :", "`vendor_id` Vendor ID of the device in hexadecimal.", "`product_id` Product ID of the device in hexadecimal.", "`address` PCI address of the device. Both traditional glob style and regular expression syntax is supported. Please note that the address fields are restricted to the following maximum values:", "`devname` Device name of the device (for e.g. interface name). Not all PCI devices have a name.", "`<tag>` Additional `<tag>` and `<tag_value>` used for specifying PCI devices. Supported `<tag>` values are :", "The following example is invalid because it specifies the `remote_managed` tag for a PF - it will result in an error during config validation at the", "The scheme must be identical to the scheme configured for the RDP HTML5 console proxy service. It is `http` or `https`.", "The IP address must be identical to the address on which the RDP HTML5 console proxy service is listening.", "The port must be identical to the port on which the RDP HTML5 console proxy service is listening.", "https://bugs.launchpad.net/nova/+bug/1334398", "https://bugs.launchpad.net/nova/+bug/1415106" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/configuration_reference/nova_4
Chapter 11. Network Interfaces
Chapter 11. Network Interfaces Under Red Hat Enterprise Linux, all network communications occur between configured software interfaces and physical networking devices connected to the system. The configuration files for network interfaces are located in the /etc/sysconfig/network-scripts/ directory. The scripts used to activate and deactivate these network interfaces are also located here. Although the number and type of interface files can differ from system to system, there are three categories of files that exist in this directory: Interface configuration files Interface control scripts Network function files The files in each of these categories work together to enable various network devices. This chapter explores the relationship between these files and how they are used. 11.1. Network Configuration Files Before delving into the interface configuration files, let us first itemize the primary configuration files used in network configuration. Understanding the role these files play in setting up the network stack can be helpful when customizing a Red Hat Enterprise Linux system. The primary network configuration files are as follows: /etc/hosts The main purpose of this file is to resolve host names that cannot be resolved any other way. It can also be used to resolve host names on small networks with no DNS server. Regardless of the type of network the computer is on, this file should contain a line specifying the IP address of the loopback device ( 127.0.0.1 ) as localhost.localdomain . For more information, see the hosts(5) manual page. /etc/resolv.conf This file specifies the IP addresses of DNS servers and the search domain. Unless configured to do otherwise, the network initialization scripts populate this file. For more information about this file, see the resolv.conf(5) manual page. /etc/sysconfig/network This file specifies routing and host information for all network interfaces. It contains directives that have a global effect and are not interface-specific. For more information about this file and the directives it accepts, see Section D.1.14, "/etc/sysconfig/network" . /etc/sysconfig/network-scripts/ifcfg- interface-name For each network interface, there is a corresponding interface configuration script. Each of these files provides information specific to a particular network interface. See Section 11.2, "Interface Configuration Files" for more information on this type of file and the directives it accepts. Important Network interface names may be different on different hardware types. See Appendix A, Consistent Network Device Naming for more information. Warning The /etc/sysconfig/networking/ directory is used by the now deprecated Network Administration Tool ( system-config-network ). Its contents should not be edited manually. Using only one method for network configuration is strongly encouraged, due to the risk of configuration deletion. For more information about configuring network interfaces using graphical configuration tools, see Chapter 10, NetworkManager . 11.1.1. Setting the Host Name To permanently change the static host name, change the HOSTNAME directive in the /etc/sysconfig/network file. For example: Red Hat recommends that the static host name match the fully qualified domain name (FQDN) used for the machine in DNS, such as host.example.com. 
It is also recommended that the static host name consist only of 7-bit ASCII lower-case characters, no spaces or dots, and be limited to the format allowed for DNS domain name labels, even though this is not a strict requirement. Older specifications do not permit the underscore, and so its use is not recommended. Changes will only take effect when the networking service, or the system, is restarted. Note that the FQDN of the host can be supplied by a DNS resolver, by settings in /etc/sysconfig/network , or by the /etc/hosts file. The default setting of hosts: files dns in /etc/nsswitch.conf causes the configuration files to be checked before a resolver. The default setting of multi on in the /etc/host.conf file means that all valid values in the /etc/hosts file are returned, not just the first. Sometimes you may need to use the host table in the /etc/hosts file instead of the HOSTNAME directive in /etc/sysconfig/network , for example, when DNS is not running during system bootup. To change the host name using the /etc/hosts file, add lines to it in the following format: 192.168.1.2 penguin.example.com penguin
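Putting the pieces above together, a minimal sketch of a permanent host name change might look as follows. The host name and IP address are the example values used in this chapter, and the restart command assumes the standard Red Hat Enterprise Linux 6 init scripts.

```
# /etc/sysconfig/network
HOSTNAME=penguin.example.com

# /etc/hosts (used, for example, when DNS is not running during system bootup)
192.168.1.2    penguin.example.com penguin

# Restart the networking service so the change takes effect
service network restart
```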
[ "HOSTNAME=penguin.example.com" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/ch-Network_Interfaces
3.9. Example: Attach Storage Domains to Data Center
3.9. Example: Attach Storage Domains to Data Center The following example attaches the data1 and iso1 storage domains to the Default data center. Example 3.9. Attach data1 storage domain to the Default data center Request: cURL command: Example 3.10. Attach iso1 storage domain to the Default data center Request: cURL command: These POST requests place our two new storage_domain resources in the storagedomains sub-collection of the Default data center. This means the storagedomains sub-collection contains attached storage domains of the data center.
[ "POST /ovirt-engine/api/datacenters/01a45ff0-915a-11e0-8b87-5254004ac988/storagedomains HTTP/1.1 Accept: application/xml Content-type: application/xml <storage_domain> <name>data1</name> </storage_domain>", "curl -X POST -H \"Accept: application/xml\" -H \"Content-Type: application/xml\" -u [USER:PASS] --cacert [CERT] -d \"<storage_domain><name>data1</name></storage_domain>\" https:// [RHEVM Host] :443/ovirt-engine/api/datacenters/01a45ff0-915a-11e0-8b87-5254004ac988/storagedomains", "POST /ovirt-engine/api/datacenters/01a45ff0-915a-11e0-8b87-5254004ac988/storagedomains HTTP/1.1 Accept: application/xml Content-type: application/xml <storage_domain> <name>iso1</name> </storage_domain>", "curl -X POST -H \"Accept: application/xml\" -H \"Content-Type: application/xml\" -u [USER:PASS] --cacert [CERT] -d \"<storage_domain><name>iso1</name></storage_domain>\" https:// [RHEVM Host] :443/ovirt-engine/api/datacenters/01a45ff0-915a-11e0-8b87-5254004ac988/storagedomains" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/version_3_rest_api_guide/example_attach_storage_domains_to_data_center
Chapter 34. Downloading the Red Hat Process Automation Manager installation files
Chapter 34. Downloading the Red Hat Process Automation Manager installation files You can use the installer JAR file or deployable ZIP files to install Red Hat Process Automation Manager. You can run the installer in interactive or command line interface (CLI) mode. Alternatively, you can extract and configure the Business Central and KIE Server deployable ZIP files. If you want to run Business Central without deploying it to an application server, download the Business Central Standalone JAR file. Download a Red Hat Process Automation Manager distribution that meets your environment and installation requirements. Procedure Navigate to the Software Downloads page in the Red Hat Customer Portal (login required), and select the product and version from the drop-down options: Product: Process Automation Manager Version: 7.13.5 Download one of the following product distributions, depending on your preferred installation method: Note You only need to download one of these distributions. If you want to use the installer to install Red Hat Process Automation Manager on Red Hat JBoss Web Server, download Red Hat Process Automation Manager 7.13.5 Installer ( rhpam-installer-7.13.5.jar ). The installer graphical user interface guides you through the installation process. To install KIE Server on Red Hat JBoss Web Server using the deployable ZIP files, download the following files: Red Hat Process Automation Manager 7.13.5 Add Ons ( rhpam-7.13.5-add-ons.zip ) Red Hat Process Automation Manager 7.13.5 Maven Repository ( rhpam-7.13.5-maven-repository.zip ) To run Business Central without needing to deploy it to an application server, download Red Hat Process Automation Manager 7.13.5 Business Central Standalone ( rhpam-7.13.5-business-central-standalone.jar ).
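As a rough sketch of what happens after the download, the following commands show one way to start the installer and the standalone JAR from the directory that contains the downloaded files. A supported JDK on the PATH and the extraction directory name are assumptions, and the installer's interactive prompts and optional flags are not covered here.

```
# Launch the installer in its graphical, interactive mode
java -jar rhpam-installer-7.13.5.jar

# Alternatively, extract the deployable add-ons for a manual KIE Server setup
unzip rhpam-7.13.5-add-ons.zip -d rhpam-add-ons

# Run Business Central without deploying it to an application server
java -jar rhpam-7.13.5-business-central-standalone.jar
```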
null
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/installing_and_configuring_red_hat_process_automation_manager/install-download-proc_install-on-jws
17.8. Managing Tokens Used by the Subsystems
17.8. Managing Tokens Used by the Subsystems Certificate System manages two groups of tokens: tokens used by the subsystems to perform PKI tasks and tokens issued through the subsystem. These management tasks refer specifically to tokens that are used by the subsystems. For information on managing smart card tokens, see Chapter 6, Using and Configuring the Token Management System: TPS and TKS . 17.8.1. Detecting Tokens To see if a token can be detected by Certificate System to be installed or configured, use the TokenInfo utility. This utility will return all tokens which can be detected by the Certificate System, not only tokens which are installed in the Certificate System. 17.8.2. Viewing Tokens To view a list of the tokens currently installed for a Certificate System instance, use the modutil utility. Open the instance alias directory. For example: Show information about the installed PKCS #11 modules as well as information on the corresponding tokens using the modutil tool. 17.8.3. Changing a Token's Password The token, internal or external, that stores the key pairs and certificates for the subsystems is protected (encrypted) by a password. To decrypt the key pairs or to gain access to them, enter the token password. This password is set when the token is first accessed, usually during Certificate System installation. It is good security practice to change the password that protects the server's keys and certificates periodically. Changing the password minimizes the risk of someone finding out the password. To change a token's password, use the certutil command-line utility. For information about certutil , see http://www.mozilla.org/projects/security/pki/nss/tools/ . The single sign-on password cache stores token passwords in the password.conf file. This file must be manually updated every time the token password is changed. For more information on managing passwords through the password.conf file, see Red Hat Certificate System Planning, Installation, and Deployment Guide .
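The section above points to certutil for changing the token password but does not show an invocation. The following is a minimal sketch; the instance alias directory matches the examples above, and the hardware token name is a placeholder.

```
# Change the password of the internal (software) token stored in the
# instance's NSS database
certutil -W -d /var/lib/pki/instance_name/alias

# For an external hardware token, name the token explicitly (placeholder name)
certutil -W -d /var/lib/pki/instance_name/alias -h "hsm_token"

# Remember to update the corresponding entry in the password.conf file afterwards
```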
[ "TokenInfo /var/lib/pki/ instance_name /alias Database Path: /var/lib/pki/ instance_name /alias Found external module 'NSS Internal PKCS #11 Module'", "cd /var/lib/pki/ instance_name /alias", "modutil -dbdir . -nocertdb -list" ]
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide/managing_tokens_used_by_the_subsystems
Chapter 23. Manually Recovering File Split-brain
Chapter 23. Manually Recovering File Split-brain This chapter provides steps to manually recover from split-brain. Run the following command to obtain the path of the file that is in split-brain: From the command output, identify the files for which file operations performed from the client keep failing with Input/Output error. Close the applications that opened the split-brain file from the mount point. If you are using a virtual machine, you must power off the machine. Obtain and verify the AFR changelog extended attributes of the file using the getfattr command. Then identify the type of split-brain to determine which of the bricks contains the 'good copy' of the file. For example, The extended attributes with trusted.afr.VOLNAME-client-<subvolume-index> are used by AFR to maintain the changelog of the file. The values of the trusted.afr.VOLNAME-client-<subvolume-index> attributes are calculated by the glusterFS client (FUSE or NFS-server) processes. When the glusterFS client modifies a file or directory, the client contacts each brick and updates the changelog extended attribute according to the response of the brick. subvolume-index is the brick number - 1 of gluster volume info VOLNAME output. For example, In the example above: Each file in a brick maintains the changelog of itself and that of the files present in all the other bricks in its replica set as seen by that brick. In the example volume given above, all files in brick1 will have 2 entries, one for itself and the other for the file present in its replica pair (brick2): trusted.afr.vol-client-0=0x000000000000000000000000 - changelog for itself (brick1) trusted.afr.vol-client-1=0x000000000000000000000000 - changelog for brick2 as seen by brick1 Likewise, all files in brick2 will have the following: trusted.afr.vol-client-0=0x000000000000000000000000 - changelog for brick1 as seen by brick2 trusted.afr.vol-client-1=0x000000000000000000000000 - changelog for itself (brick2) Note These files do not have entries for themselves, only for the other bricks in the replica. For example, brick1 will only have trusted.afr.vol-client-1 set and brick2 will only have trusted.afr.vol-client-0 set. Interpreting the changelog remains the same as explained below. The same can be extended for other replica pairs. Interpreting changelog (approximate pending operation count) value Each extended attribute has a value which is 24 hexadecimal digits. The first 8 digits represent the changelog of data. The second 8 digits represent the changelog of metadata. The last 8 digits represent the changelog of directory entries. Pictorially representing the same is as follows: For directories, metadata and entry changelogs are valid. For regular files, data and metadata changelogs are valid. For special files like device files and so on, metadata changelog is valid. When a file split-brain happens, it can be either a data split-brain, a metadata split-brain, or both. The following is an example of both data and metadata split-brain on the same file: Scrutinize the changelogs The changelog extended attributes on file /rhgs/brick1/a are as follows: The first 8 digits of trusted.afr.vol-client-0 are all zeros (0x00000000................), and the first 8 digits of trusted.afr.vol-client-1 are not all zeros (0x000003d7................). So the changelog on /rhgs/brick1/a implies that some data operations succeeded on itself but failed on /rhgs/brick2/a . The second 8 digits of trusted.afr.vol-client-0 are all zeros (0x........00000000........)
, and the second 8 digits of trusted.afr.vol-client-1 are not all zeros (0x........00000001........). So the changelog on /rhgs/brick1/a implies that some metadata operations succeeded on itself but failed on /rhgs/brick2/a . The changelog extended attributes on file /rhgs/brick2/a are as follows: The first 8 digits of trusted.afr.vol-client-0 are not all zeros (0x000003b0................). The first 8 digits of trusted.afr.vol-client-1 are all zeros (0x00000000................). So the changelog on /rhgs/brick2/a implies that some data operations succeeded on itself but failed on /rhgs/brick1/a . The second 8 digits of trusted.afr.vol-client-0 are not all zeros (0x........00000001........), and the second 8 digits of trusted.afr.vol-client-1 are all zeros (0x........00000000........). So the changelog on /rhgs/brick2/a implies that some metadata operations succeeded on itself but failed on /rhgs/brick1/a . Here, both copies have data and metadata changes that are not on the other file. Hence, it is both a data and metadata split-brain. Deciding on the correct copy You must inspect the stat and getfattr output of the files to decide which metadata to retain, and the contents of the file to decide which data to retain. To continue with the example above, we are retaining the data of /rhgs/brick1/a and the metadata of /rhgs/brick2/a . Resetting the relevant changelogs to resolve the split-brain Resolving data split-brain You must change the changelog extended attributes on the files as if some data operations succeeded on /rhgs/brick1/a but failed on /rhgs/brick2/a . But /rhgs/brick2/a should not have any changelog showing data operations succeeded on /rhgs/brick2/a but failed on /rhgs/brick1/a . You must reset the data part of the changelog on trusted.afr.vol-client-0 of /rhgs/brick2/a . Resolving metadata split-brain You must change the changelog extended attributes on the files as if some metadata operations succeeded on /rhgs/brick2/a but failed on /rhgs/brick1/a . But /rhgs/brick1/a should not have any changelog which says some metadata operations succeeded on /rhgs/brick1/a but failed on /rhgs/brick2/a . You must reset the metadata part of the changelog on trusted.afr.vol-client-1 of /rhgs/brick1/a . Run the following commands to reset the extended attributes. On /rhgs/brick2/a , for trusted.afr.vol-client-0 0x000003b00000000100000000 to 0x000000000000000100000000 , execute the following command: On /rhgs/brick1/a , for trusted.afr.vol-client-1 0x000003d70000000100000000 to 0x000003d70000000000000000 , execute the following command: After you reset the extended attributes, the changelogs would look similar to the following: Resolving Directory entry split-brain AFR has the ability to conservatively merge different entries in the directories when there is a split-brain on a directory. If on one brick the directory storage has entries 1 , 2 and has entries 3 , 4 on the other brick, then AFR will merge all of the entries in the directory to have 1, 2, 3, 4 entries in the same directory. However, this may result in deleted files re-appearing if the split-brain happened because of the deletion of files in the directory. Split-brain resolution needs human intervention when there is at least one entry which has the same file name but a different gfid in that directory. For example: On brick-a the directory has 2 entries file1 with gfid_x and file2 . On brick-b the directory has 2 entries file1 with gfid_y and file3 . Here, the gfids of file1 on the bricks are different. 
These kinds of directory split-brain need human intervention to resolve the issue. You must remove either file1 on brick-a or file1 on brick-b to resolve the split-brain. In addition, the corresponding gfid-link file must be removed. The gfid-link files are present in the .glusterfs directory in the top-level directory of the brick. If the gfid of the file is 0x307a5c9efddd4e7c96e94fd4bcdcbd1b (the trusted.gfid extended attribute received from the getfattr command earlier), the gfid-link file can be found at /rhgs/brick1/.glusterfs/30/7a/307a5c9efddd4e7c96e94fd4bcdcbd1b . Warning Before deleting the gfid-link , you must ensure that there are no hard links to the file present on that brick. If hard links exist, you must delete them. Trigger self-heal by running the following command: or
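Before removing a gfid-link as described in the warning above, it can help to check for remaining hard links first. The following is a sketch that uses the gfid and brick path from the example above; the path of the unwanted file1 copy is a placeholder for whichever path you decided to remove.

```
# The "Links:" count shows how many hard links still point at this inode
stat /rhgs/brick1/.glusterfs/30/7a/307a5c9efddd4e7c96e94fd4bcdcbd1b

# List every path on the brick that shares the same inode
find /rhgs/brick1 -samefile /rhgs/brick1/.glusterfs/30/7a/307a5c9efddd4e7c96e94fd4bcdcbd1b

# Remove the unwanted copy of the file and then its gfid-link
rm /rhgs/brick1/<path-to>/file1
rm /rhgs/brick1/.glusterfs/30/7a/307a5c9efddd4e7c96e94fd4bcdcbd1b
```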
[ "gluster volume heal VOLNAME info split-brain", "getfattr -d -m . -e hex <file-path-on-brick>", "getfattr -d -e hex -m. brick-a/file.txt #file: brick-a/file.txt security.selinux=0x726f6f743a6f626a6563745f723a66696c655f743a733000 trusted.afr.vol-client-2=0x000000000000000000000000 trusted.afr.vol-client-3=0x000000000200000000000000 trusted.gfid=0x307a5c9efddd4e7c96e94fd4bcdcbd1b", "gluster volume info vol Volume Name: vol Type: Distributed-Replicate Volume ID: 4f2d7849-fbd6-40a2-b346-d13420978a01 Status: Created Number of Bricks: 4 x 2 = 8 Transport-type: tcp Bricks: brick1: server1:/rhgs/brick1 brick2: server1:/rhgs/brick2 brick3: server1:/rhgs/brick3 brick4: server1:/rhgs/brick4 brick5: server1:/rhgs/brick5 brick6: server1:/rhgs/brick6 brick7: server1:/rhgs/brick7 brick8: server1:/rhgs/brick8", "Brick | Replica set | Brick subvolume index ---------------------------------------------------------------------------- /rhgs/brick1 | 0 | 0 /rhgs/brick2 | 0 | 1 /rhgs/brick3 | 1 | 2 /rhgs/brick4 | 1 | 3 /rhgs/brick5 | 2 | 4 /rhgs/brick6 | 2 | 5 /rhgs/brick7 | 3 | 6 /rhgs/brick8 | 3 | 7 ```", "0x 000003d7 00000001 00000000110 | | | | | \\_ changelog of directory entries | \\_ changelog of metadata \\ _ changelog of data", "getfattr -d -m . -e hex /rhgs/brick?/a getfattr: Removing leading '/' from absolute path names #file: rhgs/brick1/a trusted.afr.vol-client-0=0x000000000000000000000000 trusted.afr.vol-client-1=0x000003d70000000100000000 trusted.gfid=0x80acdbd886524f6fbefa21fc356fed57 #file: rhgs/brick2/a trusted.afr.vol-client-0=0x000003b00000000100000000 trusted.afr.vol-client-1=0x000000000000000000000000 trusted.gfid=0x80acdbd886524f6fbefa21fc356fed57", "setfattr -n trusted.afr.vol-client-0 -v 0x000000000000000100000000 /rhgs/brick2/a", "setfattr -n trusted.afr.vol-client-1 -v 0x000003d70000000000000000 /rhgs/brick1/a", "getfattr -d -m . -e hex /rhgs/brick?/a getfattr: Removing leading '/' from absolute path names #file: rhgs/brick1/a trusted.afr.vol-client-0=0x000000000000000000000000 trusted.afr.vol-client-1=0x000003d70000000000000000 trusted.gfid=0x80acdbd886524f6fbefa21fc356fed57 #file: rhgs/brick2/a trusted.afr.vol-client-0=0x000000000000000100000000 trusted.afr.vol-client-1=0x000000000000000000000000 trusted.gfid=0x80acdbd886524f6fbefa21fc356fed57", "ls -l <file-path-on-gluster-mount>", "gluster volume heal VOLNAME" ]
https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/administration_guide/chap-Manually_Resolving_Split-brains
Chapter 5. Querying
Chapter 5. Querying Infinispan Query can execute Lucene queries and retrieve domain objects from a Red Hat JBoss Data Grid cache. Procedure 5.1. Prepare and Execute a Query Get the SearchManager of an indexing-enabled cache as follows: Create a QueryBuilder to build queries for Myth.class as follows: Create an Apache Lucene query that queries the attributes of the Myth class as follows: 5.1. Building Queries Query Module queries are built on Lucene queries, allowing users to use any Lucene query type. When the query is built, Infinispan Query uses org.infinispan.query.CacheQuery as the query manipulation API for further query processing. 5.1.1. Building a Lucene Query Using the Lucene-based Query API With the Lucene API, use either the query parser (simple queries) or the Lucene programmatic API (complex queries). For details, see the online Lucene documentation or a copy of Lucene in Action or Hibernate Search in Action . 5.1.2. Building a Lucene Query Using the Lucene programmatic API, it is possible to write full-text queries. However, when using the Lucene programmatic API, the parameters must be converted to their string equivalent, and the correct analyzer must also be applied to the right field. An ngram analyzer, for example, uses several ngrams as the tokens for a given word and should be searched as such. It is recommended to use the QueryBuilder for this task. The Lucene-based query API is fluent. This API has the following key characteristics: Method names are in English. As a result, API operations can be read and understood as a series of English phrases and instructions. It uses IDE autocompletion, which suggests possible completions for the current input prefix and allows the user to choose the right option. It often uses the chaining method pattern. It is easy to use and read the API operations. To use the API, first create a query builder that is attached to a given indexed type. This QueryBuilder knows what analyzer to use and what field bridge to apply. Several QueryBuilder s (one for each type involved in the root of your query) can be created. The QueryBuilder is derived from the SearchFactory . The analyzer used for a given field or fields can also be overridden. The query builder is now used to build Lucene queries. 5.1.2.1. Keyword Queries The following example shows how to search for a specific word: Example 5.1. Keyword Search Table 5.1. Keyword query parameters Parameter Description keyword() Use this parameter to find a specific word onField() Use this parameter to specify in which Lucene field to search for the word matching() Use this parameter to specify the match for the search string createQuery() Creates the Lucene query object The value "storm" is passed through the history FieldBridge . This is useful when numbers or dates are involved. The field bridge value is then passed to the analyzer used to index the field history . This ensures that the query uses the same term transformation as the indexing (lower case, ngram, stemming and so on). If the analyzing process generates several terms for a given word, a boolean query is used with the SHOULD logic (roughly an OR logic). To search a property that is not of type string, see the following example. Note In plain Lucene, the Date object had to be converted to its string representation (in this case, the year). This conversion works for any object, provided that the FieldBridge has an objectToString method (and all built-in FieldBridge implementations do). The example searches a field that uses ngram analyzers. 
The ngram analyzers index a succession of ngrams of words, which helps to avoid user typos. For example, the 3-grams of the word hibernate are hib, ibe, ber, rna, nat, ate. Example 5.2. Searching Using Ngram Analyzers The matching word "Sisiphus" will be lower-cased and then split into 3-grams: sis, isi, sip, phu, hus. Each of these ngrams will be part of the query. The user is then able to find the Sysiphus myth (with a y ). All that is transparently done for the user. Note If the user does not want a specific field to use the field bridge or the analyzer, then the ignoreAnalyzer() or ignoreFieldBridge() functions can be called. To search for multiple possible words in the same field, add them all to the matching clause. Example 5.3. Searching for Multiple Words To search the same word on multiple fields, use the onFields method. Example 5.4. Searching Multiple Fields In some cases, one field must be treated differently from another field even if searching the same term. In this case, use the andField() method. Example 5.5. Using the andField Method In the example, only the field name is boosted to 5. 5.1.2.2. Fuzzy Queries To execute a fuzzy query (based on the Levenshtein distance algorithm), start like a keyword query and add the fuzzy flag. Example 5.6. Fuzzy Query The threshold is the limit above which two terms are considered matching. It is a decimal between 0 and 1, and the default value is 0.5. The prefixLength is the length of the prefix ignored by the "fuzziness". While the default value is 0, a non-zero value is recommended for indexes containing a huge amount of distinct terms. 5.1.2.3. Wildcard Queries Wildcard queries can also be executed (queries where some parts of the word are unknown). The ? represents a single character and * represents any character sequence. Note that for performance purposes, it is recommended that the query does not start with either ? or * . Example 5.7. Wildcard Query Note Wildcard queries do not apply the analyzer on the matching terms. Otherwise, the risk of * or ? being mangled is too high. 5.1.2.4. Phrase Queries So far we have been looking for words or sets of words; the user can also search for exact or approximate sentences. Use phrase() to do so. Example 5.8. Phrase Query Approximate sentences can be searched by adding a slop factor. The slop factor represents the number of other words permitted in the sentence: this works like a within or near operator. Example 5.9. Adding Slop Factor 5.1.2.5. Range Queries A range query searches for a value in between given boundaries (included or not) or for a value below or above a given boundary (included or not). Example 5.10. Range Query 5.1.2.6. Combining Queries Queries can be aggregated (combined) to create more complex queries. The following aggregation operators are available: SHOULD : the query should contain the matching elements of the subquery. MUST : the query must contain the matching elements of the subquery. MUST NOT : the query must not contain the matching elements of the subquery. The subqueries can be any Lucene query including a boolean query itself. The following are some examples: Example 5.11. Combining Subqueries 5.1.2.7. Query Options The following is a summary of query options for query types and fields: boostedTo (on query type and on field) boosts the query or field to a provided factor. withConstantScore (on query) returns all results that match the query and have a constant score equal to the boost. 
filteredBy(Filter) (on query) filters query results using the Filter instance. ignoreAnalyzer (on field) ignores the analyzer when processing this field. ignoreFieldBridge (on field) ignores the field bridge when processing this field. The following example illustrates how to use these options: Example 5.12. Querying Options 5.1.3. Build a Query with Infinispan Query 5.1.3.1. Generality After building the Lucene query, wrap it within an Infinispan CacheQuery. The query searches all indexed entities and returns all types of indexed classes unless explicitly configured not to do so. Example 5.13. Wrapping a Lucene Query in an Infinispan CacheQuery For improved performance, restrict the returned types as follows: Example 5.14. Filtering the Search Result by Entity Type The first part of the second example only returns the matching Customer instances. The second part of the same example returns matching Actor and Item instances. The type restriction is polymorphic. As a result, if the two subclasses Salesman and Customer of the base class Person are to be returned, specify Person.class to filter based on result types. 5.1.3.2. Pagination To avoid performance degradation, it is recommended to restrict the number of returned objects per query. A user navigating from one page to another page is a very common use case. The way to define pagination is similar to defining pagination in a plain HQL or Criteria query. Example 5.15. Defining pagination for a search query Note The total number of matching elements, despite the pagination, is accessible via cacheQuery.getResultSize() . 5.1.3.3. Sorting Apache Lucene contains a flexible and powerful result sorting mechanism. The default sorting is by relevance and is appropriate for a large variety of use cases. The sorting mechanism can be changed to sort by other properties using the Lucene Sort object to apply a Lucene sorting strategy. Example 5.16. Specifying a Lucene Sort Note Fields used for sorting must not be tokenized. For more information about tokenizing, see Section 4.1.2, "@Field" . 5.1.3.4. Projection In some cases, only a small subset of the properties is required. Use Infinispan Query to return a subset of properties as follows: Example 5.17. Using Projection Instead of Returning the Full Domain Object The Query Module extracts properties from the Lucene index, converts them to their object representation, and returns a list of Object[] . Projections prevent a time-consuming database round-trip. However, they have the following constraints: The properties projected must be stored in the index ( @Field(store=Store.YES) ), which increases the index size. The properties projected must use a FieldBridge implementing org.infinispan.query.bridge.TwoWayFieldBridge or org.infinispan.query.bridge.TwoWayStringBridge , the latter being the simpler version. Note All Lucene-based Query API built-in types are two-way. Only the simple properties of the indexed entity or its embedded associations can be projected. Therefore a whole embedded entity cannot be projected. Projection does not work on collections or maps which are indexed via @IndexedEmbedded . Lucene provides metadata information about query results. Use projection constants to retrieve the metadata. Example 5.18. 
Using Projection to Retrieve Metadata Fields can be mixed with the following projection constants: FullTextQuery.THIS returns the initialized and managed entity as a non-projected query does. FullTextQuery.DOCUMENT returns the Lucene Document related to the projected object. FullTextQuery.OBJECT_CLASS returns the indexed entity's class. FullTextQuery.SCORE returns the document score in the query. Use scores to compare one result against another for a given query. However, scores are not relevant for comparing the results of two different queries. FullTextQuery.ID is the ID property value of the projected object. FullTextQuery.DOCUMENT_ID is the Lucene document ID. The Lucene document ID changes between two IndexReader openings. FullTextQuery.EXPLANATION returns the Lucene Explanation object for the matching object/document in the query. This is not suitable for retrieving large amounts of data. Running FullTextQuery.EXPLANATION is as expensive as running a Lucene query for each matching element. As a result, projection is recommended. 5.1.3.5. Limiting the Time of a Query Limit the time a query takes in Infinispan Query as follows: Raise an exception when arriving at the limit. Limit the number of results retrieved when the time limit is raised. 5.1.3.6. Raise an Exception on Time Limit If a query uses more than the defined amount of time, a custom exception might be defined to be thrown. To define the limit when using the CacheQuery API, use the following approach: Example 5.19. Defining a Timeout in Query Execution The getResultSize() , iterate() and scroll() methods honor the timeout until the end of the method call. As a result, Iterable or the ScrollableResults ignore the timeout. Additionally, explain() does not honor this timeout period. This method is used for debugging and to check the reasons for slow performance of a query. Important The example code does not guarantee that the query stops at the specified results amount.
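As a small usage sketch that pulls together pagination, sorting, and the total result size from the sections above, the following assumes the Book entity, cache, and luceneQuery used in the earlier examples.

```java
// Paginated, sorted query plus the total hit count (sketch; assumes the Book
// entity, cache and luceneQuery from the examples above)
CacheQuery cacheQuery = Search.getSearchManager(cache).getQuery(luceneQuery, Book.class);
cacheQuery.sort(new Sort(new SortField("title", SortField.STRING)));
cacheQuery.firstResult(15);  // start from the 15th element
cacheQuery.maxResults(10);   // return at most 10 elements
List results = cacheQuery.list();
int total = cacheQuery.getResultSize();  // total matches, regardless of pagination
```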
[ "SearchManager manager = Search.getSearchManager(cache);", "final org.hibernate.search.query.dsl.QueryBuilder queryBuilder = manager.buildQueryBuilderForClass(Myth.class).get();", "org.apache.lucene.search.Query query = queryBuilder.keyword() .onField(\"history\").boostedTo(3) .matching(\"storm\") .createQuery(); // wrap Lucene query in a org.infinispan.query.CacheQuery CacheQuery cacheQuery = manager.getQuery(query); // Get query result List<Object> result = cacheQuery.list();", "Search.getSearchManager(cache).buildQueryBuilderForClass(Myth.class).get();", "SearchFactory searchFactory = Search.getSearchManager(cache).getSearchFactory(); QueryBuilder mythQB = searchFactory.buildQueryBuilder() .forEntity(Myth.class) .overridesForField(\"history\",\"stem_analyzer_definition\") .get();", "Query luceneQuery = mythQB.keyword().onField(\"history\").matching(\"storm\").createQuery();", "@Indexed public class Myth { @Field(analyze = Analyze.NO) @DateBridge(resolution = Resolution.YEAR) public Date getCreationDate() { return creationDate; } public Date setCreationDate(Date creationDate) { this.creationDate = creationDate; } private Date creationDate; } Date birthdate = ...; Query luceneQuery = mythQb.keyword() .onField(\"creationDate\") .matching(birthdate) .createQuery();", "@AnalyzerDef(name = \"ngram\", tokenizer = @TokenizerDef(factory = StandardTokenizerFactory.class), filters = { @TokenFilterDef(factory = StandardFilterFactory.class), @TokenFilterDef(factory = LowerCaseFilterFactory.class), @TokenFilterDef(factory = StopFilterFactory.class), @TokenFilterDef(factory = NGramFilterFactory.class, params = { @Parameter(name = \"minGramSize\", value = \"3\"), @Parameter(name = \"maxGramSize\", value = \"3\")}) }) public class Myth { @Field(analyzer = @Analyzer(definition = \"ngram\")) public String getName() { return name; } public String setName(String name) { this.name = name; } private String name; } Date birthdate = ...; Query luceneQuery = mythQb.keyword() .onField(\"name\") .matching(\"Sisiphus\") .createQuery();", "//search document with storm or lightning in their history Query luceneQuery = mythQB.keyword().onField(\"history\").matching(\"storm lightning\").createQuery();", "Query luceneQuery = mythQB .keyword() .onFields(\"history\",\"description\",\"name\") .matching(\"storm\") .createQuery();", "Query luceneQuery = mythQB.keyword() .onField(\"history\") .andField(\"name\") .boostedTo(5) .andField(\"description\") .matching(\"storm\") .createQuery();", "Query luceneQuery = mythQB.keyword() .fuzzy() .withThreshold(.8f) .withPrefixLength(1) .onField(\"history\") .matching(\"starm\") .createQuery();", "Query luceneQuery = mythQB.keyword() .wildcard() .onField(\"history\") .matching(\"sto*\") .createQuery();", "Query luceneQuery = mythQB.phrase() .onField(\"history\") .sentence(\"Thou shalt not kill\") .createQuery();", "Query luceneQuery = mythQB.phrase() .withSlop(3) .onField(\"history\") .sentence(\"Thou kill\") .createQuery();", "//look for 0 <= starred < 3 Query luceneQuery = mythQB.range() .onField(\"starred\") .from(0).to(3).excludeLimit() .createQuery(); //look for myths strictly BC Date beforeChrist = ...; Query luceneQuery = mythQB.range() .onField(\"creationDate\") .below(beforeChrist).excludeLimit() .createQuery();", "//look for popular modern myths that are not urban Date twentiethCentury = ...; Query luceneQuery = mythQB.bool() .must(mythQB.keyword().onField(\"description\").matching(\"urban\").createQuery()) .not() 
.must(mythQB.range().onField(\"starred\").above(4).createQuery()) .must(mythQB.range() .onField(\"creationDate\") .above(twentiethCentury) .createQuery()) .createQuery(); //look for popular myths that are preferably urban Query luceneQuery = mythQB .bool() .should(mythQB.keyword() .onField(\"description\") .matching(\"urban\") .createQuery()) .must(mythQB.range().onField(\"starred\").above(4).createQuery()) .createQuery(); //look for all myths except religious ones Query luceneQuery = mythQB.all() .except(mythQb.keyword() .onField(\"description_stem\") .matching(\"religion\") .createQuery()) .createQuery();", "Query luceneQuery = mythQB .bool() .should(mythQB.keyword().onField(\"description\").matching(\"urban\").createQuery()) .should(mythQB .keyword() .onField(\"name\") .boostedTo(3) .ignoreAnalyzer() .matching(\"urban\").createQuery()) .must(mythQB .range() .boostedTo(5) .withConstantScore() .onField(\"starred\") .above(4).createQuery()) .createQuery();", "CacheQuery cacheQuery = Search.getSearchManager(cache).getQuery(luceneQuery);", "CacheQuery cacheQuery = Search.getSearchManager(cache).getQuery(luceneQuery, Customer.class); // or CacheQuery cacheQuery = Search.getSearchManager(cache).getQuery(luceneQuery, Item.class, Actor.class);", "CacheQuery cacheQuery = Search.getSearchManager(cache) .getQuery(luceneQuery, Customer.class); cacheQuery.firstResult(15); //start from the 15th element cacheQuery.maxResults(10); //return 10 elements", "org.infinispan.query.CacheQuery cacheQuery = Search.getSearchManager(cache).getQuery(luceneQuery, Book.class); org.apache.lucene.search.Sort sort = new Sort( new SortField(\"title\", SortField.STRING)); cacheQuery.sort(sort); List results = cacheQuery.list();", "SearchManager searchManager = Search.getSearchManager(cache); CacheQuery cacheQuery = searchManager.getQuery(luceneQuery, Book.class); cacheQuery.projection(\"id\", \"summary\", \"body\", \"mainAuthor.name\"); List results = cacheQuery.list(); Object[] firstResult = (Object[]) results.get(0); Integer id = (Integer) firstResult[0]; String summary = (String) firstResult[1]; String body = (String) firstResult[2]; String authorName = (String) firstResult[3];", "SearchManager searchManager = Search.getSearchManager(cache); CacheQuery cacheQuery = searchManager.getQuery(luceneQuery, Book.class); cacheQuery.projection(\"mainAuthor.name\"); List results = cacheQuery.list(); Object[] firstResult = (Object[]) results.get(0); float score = (Float) firstResult[0]; Book book = (Book) firstResult[1]; String authorName = (String) firstResult[2];", "SearchManagerImplementor searchManager = (SearchManagerImplementor) Search.getSearchManager(cache); searchManager.setTimeoutExceptionFactory(new MyTimeoutExceptionFactory()); CacheQuery cacheQuery = searchManager.getQuery(luceneQuery, Book.class); //define the timeout in seconds cacheQuery.timeout(2, TimeUnit.SECONDS) try { query.list(); } catch (MyTimeoutException e) { //do something, too slow } private static class MyTimeoutExceptionFactory implements TimeoutExceptionFactory { @Override public RuntimeException createTimeoutException(String message, Query query) { return new MyTimeoutException(); } } public static class MyTimeoutException extends RuntimeException { }" ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/infinispan_query_guide/chap-querying
Chapter 3. Clair security scanner
Chapter 3. Clair security scanner 3.1. Clair vulnerability databases Clair uses the following vulnerability databases to report issues in your images: Ubuntu Oval database Debian Security Tracker Red Hat Enterprise Linux (RHEL) Oval database SUSE Oval database Oracle Oval database Alpine SecDB database VMware Photon OS database Amazon Web Services (AWS) UpdateInfo Open Source Vulnerability (OSV) Database For information about how Clair does security mapping with the different databases, see Claircore Severity Mapping . 3.1.1. Information about Open Source Vulnerability (OSV) database for Clair Open Source Vulnerability (OSV) is a vulnerability database and monitoring service that focuses on tracking and managing security vulnerabilities in open source software. OSV provides a comprehensive and up-to-date database of known security vulnerabilities in open source projects. It covers a wide range of open source software, including libraries, frameworks, and other components that are used in software development. For a full list of included ecosystems, see defined ecosystems . Clair also reports vulnerability and security information for golang , java , and ruby ecosystems through the Open Source Vulnerability (OSV) database. By leveraging OSV, developers and organizations can proactively monitor and address security vulnerabilities in open source components that they use, which helps to reduce the risk of security breaches and data compromises in projects. For more information about OSV, see the OSV website . 3.2. Clair on OpenShift Container Platform To set up Clair v4 (Clair) on a Red Hat Quay deployment on OpenShift Container Platform, it is recommended to use the Red Hat Quay Operator. By default, the Red Hat Quay Operator installs or upgrades a Clair deployment along with your Red Hat Quay deployment and configures Clair automatically. 3.3. Testing Clair Use the following procedure to test Clair on either a standalone Red Hat Quay deployment, or on an OpenShift Container Platform Operator-based deployment. Prerequisites You have deployed the Clair container image. Procedure Pull a sample image by entering the following command: USD podman pull ubuntu:20.04 Tag the image to your registry by entering the following command: USD sudo podman tag docker.io/library/ubuntu:20.04 <quay-server.example.com>/<user-name>/ubuntu:20.04 Push the image to your Red Hat Quay registry by entering the following command: USD sudo podman push --tls-verify=false quay-server.example.com/quayadmin/ubuntu:20.04 Log in to your Red Hat Quay deployment through the UI. Click the repository name, for example, quayadmin/ubuntu . In the navigation pane, click Tags . Report summary Click the image report, for example, 45 medium , to show a more detailed report: Report details Note In some cases, Clair shows duplicate reports on images, for example, ubi8/nodejs-12 or ubi8/nodejs-16 . This occurs because vulnerabilities with the same name are for different packages. This behavior is expected with Clair vulnerability reporting and will not be addressed as a bug. 3.4. Advanced Clair configuration Use the procedures in the following sections to configure advanced Clair settings. 3.4.1. Unmanaged Clair configuration Red Hat Quay users can run an unmanaged Clair configuration with the Red Hat Quay OpenShift Container Platform Operator. This feature allows users to create an unmanaged Clair database, or run their custom Clair configuration without an unmanaged database. 
An unmanaged Clair database allows the Red Hat Quay Operator to work in a geo-replicated environment, where multiple instances of the Operator must communicate with the same database. An unmanaged Clair database can also be used when a user requires a highly-available (HA) Clair database that exists outside of a cluster. 3.4.1.1. Running a custom Clair configuration with an unmanaged Clair database Use the following procedure to set your Clair database to unmanaged. Procedure In the Quay Operator, set the clairpostgres component of the QuayRegistry custom resource to managed: false : apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: quay370 spec: configBundleSecret: config-bundle-secret components: - kind: objectstorage managed: false - kind: route managed: true - kind: tls managed: false - kind: clairpostgres managed: false 3.4.1.2. Configuring a custom Clair database with an unmanaged Clair database Red Hat Quay on OpenShift Container Platform allows users to provide their own Clair database. Use the following procedure to create a custom Clair database. Note The following procedure sets up Clair with SSL/TLS certificates. To view a similar procedure that does not set up Clair with SSL/TLS certificates, see "Configuring a custom Clair database with a managed Clair configuration". Procedure Create a Quay configuration bundle secret that includes the clair-config.yaml by entering the following command: USD oc create secret generic --from-file config.yaml=./config.yaml --from-file extra_ca_cert_rds-ca-2019-root.pem=./rds-ca-2019-root.pem --from-file clair-config.yaml=./clair-config.yaml --from-file ssl.cert=./ssl.cert --from-file ssl.key=./ssl.key config-bundle-secret Example Clair config.yaml file indexer: connstring: host=quay-server.example.com port=5432 dbname=quay user=quayrdsdb password=quayrdsdb sslrootcert=/run/certs/rds-ca-2019-root.pem sslmode=verify-ca layer_scan_concurrency: 6 migrations: true scanlock_retry: 11 log_level: debug matcher: connstring: host=quay-server.example.com port=5432 dbname=quay user=quayrdsdb password=quayrdsdb sslrootcert=/run/certs/rds-ca-2019-root.pem sslmode=verify-ca migrations: true metrics: name: prometheus notifier: connstring: host=quay-server.example.com port=5432 dbname=quay user=quayrdsdb password=quayrdsdb sslrootcert=/run/certs/rds-ca-2019-root.pem sslmode=verify-ca migrations: true Note The database certificate is mounted under /run/certs/rds-ca-2019-root.pem on the Clair application pod in the clair-config.yaml . It must be specified when configuring your clair-config.yaml . An example clair-config.yaml can be found at Clair on OpenShift config . Add the clair-config.yaml file to your bundle secret, for example: apiVersion: v1 kind: Secret metadata: name: config-bundle-secret namespace: quay-enterprise data: config.yaml: <base64 encoded Quay config> clair-config.yaml: <base64 encoded Clair config> extra_ca_cert_<name>: <base64 encoded ca cert> ssl.crt: <base64 encoded SSL certificate> ssl.key: <base64 encoded SSL private key> Note When updated, the provided clair-config.yaml file is mounted into the Clair pod. Any fields not provided are automatically populated with defaults using the Clair configuration module. You can check the status of your Clair pod by clicking the commit in the Build History page, or by running oc get pods -n <namespace> . For example: Example output 3.4.2. 
Running a custom Clair configuration with a managed Clair database In some cases, users might want to run a custom Clair configuration with a managed Clair database. This is useful in the following scenarios: When a user wants to disable specific updater resources. When a user is running Red Hat Quay in a disconnected environment. For more information about running Clair in a disconnected environment, see Clair in disconnected environments . Note If you are running Red Hat Quay in a disconnected environment, the airgap parameter of your clair-config.yaml must be set to true . If you are running Red Hat Quay in a disconnected environment, you should disable all updater components. 3.4.2.1. Setting a Clair database to managed Use the following procedure to set your Clair database to managed. Procedure In the Quay Operator, set the clairpostgres component of the QuayRegistry custom resource to managed: true : apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: quay370 spec: configBundleSecret: config-bundle-secret components: - kind: objectstorage managed: false - kind: route managed: true - kind: tls managed: false - kind: clairpostgres managed: true 3.4.2.2. Configuring a custom Clair database with a managed Clair configuration Red Hat Quay on OpenShift Container Platform allows users to provide their own Clair database. Use the following procedure to create a custom Clair database. Procedure Create a Quay configuration bundle secret that includes the clair-config.yaml by entering the following command: USD oc create secret generic --from-file config.yaml=./config.yaml --from-file extra_ca_cert_rds-ca-2019-root.pem=./rds-ca-2019-root.pem --from-file clair-config.yaml=./clair-config.yaml config-bundle-secret Example Clair config.yaml file indexer: connstring: host=quay-server.example.com port=5432 dbname=quay user=quayrdsdb password=quayrdsdb sslmode=disable layer_scan_concurrency: 6 migrations: true scanlock_retry: 11 log_level: debug matcher: connstring: host=quay-server.example.com port=5432 dbname=quay user=quayrdsdb password=quayrdsdb sslmode=disable migrations: true metrics: name: prometheus notifier: connstring: host=quay-server.example.com port=5432 dbname=quay user=quayrdsdb password=quayrdsdb sslmode=disable migrations: true Note The database certificate is mounted under /run/certs/rds-ca-2019-root.pem on the Clair application pod in the clair-config.yaml . It must be specified when configuring your clair-config.yaml . An example clair-config.yaml can be found at Clair on OpenShift config . Add the clair-config.yaml file to your bundle secret, for example: apiVersion: v1 kind: Secret metadata: name: config-bundle-secret namespace: quay-enterprise data: config.yaml: <base64 encoded Quay config> clair-config.yaml: <base64 encoded Clair config> Note When updated, the provided clair-config.yaml file is mounted into the Clair pod. Any fields not provided are automatically populated with defaults using the Clair configuration module. You can check the status of your Clair pod by clicking the commit in the Build History page, or by running oc get pods -n <namespace> . For example: Example output 3.4.3. Clair in disconnected environments Note Currently, deploying Clair in disconnected environments is not supported on IBM Power and IBM Z. Clair uses a set of components called updaters to handle the fetching and parsing of data from various vulnerability databases.
Updaters are set up by default to pull vulnerability data directly from the internet and work for immediate use. However, some users might require Red Hat Quay to run in a disconnected environment, or an environment without direct access to the internet. Clair supports disconnected environments by working with different types of update workflows that take network isolation into consideration. This works by using the clairctl command line interface tool, which obtains updater data from the internet by using an open host, securely transferring the data to an isolated host, and then importing the updater data on the isolated host into Clair. Use this guide to deploy Clair in a disconnected environment. Important Due to known issue PROJQUAY-6577 , the Red Hat Quay Operator does not properly render customized Clair config.yaml files. As a result, the following procedure does not currently work. Users must create the entire Clair configuration themselves, from the beginning, instead of relying on the Operator to populate the fields. To do this, follow the instructions at Procedure to enable Clair scanning of images in disconnected environments . Note Currently, Clair enrichment data is CVSS data. Enrichment data is currently unsupported in disconnected environments. For more information about Clair updaters, see "Clair updaters". 3.4.3.1. Setting up Clair in a disconnected OpenShift Container Platform cluster Use the following procedures to set up an OpenShift Container Platform provisioned Clair pod in a disconnected OpenShift Container Platform cluster. Important Due to known issue PROJQUAY-6577 , the Red Hat Quay Operator does not properly render customized Clair config.yaml files. As a result, the following procedure does not currently work. Users must create the entire Clair configuration themselves, from the beginning, instead of relying on the Operator to populate the fields. To do this, follow the instructions at Procedure to enable Clair scanning of images in disconnected environments . 3.4.3.1.1. Installing the clairctl command line utility tool for OpenShift Container Platform deployments Use the following procedure to install the clairctl CLI tool for OpenShift Container Platform deployments. Procedure Install the clairctl program for a Clair deployment in an OpenShift Container Platform cluster by entering the following command: USD oc -n quay-enterprise exec example-registry-clair-app-64dd48f866-6ptgw -- cat /usr/bin/clairctl > clairctl Note Unofficially, the clairctl tool can be downloaded Set the permissions of the clairctl file so that it can be executed and run by the user, for example: USD chmod u+x ./clairctl 3.4.3.1.2. Retrieving and decoding the Clair configuration secret for Clair deployments on OpenShift Container Platform Use the following procedure to retrieve and decode the configuration secret for an OpenShift Container Platform provisioned Clair instance on OpenShift Container Platform. Prerequisites You have installed the clairctl command line utility tool. Procedure Enter the following command to retrieve and decode the configuration secret, and then save it to a Clair configuration YAML: USD oc get secret -n quay-enterprise example-registry-clair-config-secret -o "jsonpath={USD.data['config\.yaml']}" | base64 -d > clair-config.yaml Update the clair-config.yaml file so that the disable_updaters and airgap parameters are set to true , for example: --- indexer: airgap: true --- matcher: disable_updaters: true --- 3.4.3.1.3.
Exporting the updaters bundle from a connected Clair instance Use the following procedure to export the updaters bundle from a Clair instance that has access to the internet. Prerequisites You have installed the clairctl command line utility tool. You have retrieved and decoded the Clair configuration secret, and saved it to a Clair config.yaml file. The disable_updaters and airgap parameters are set to true in your Clair config.yaml file. Procedure From a Clair instance that has access to the internet, use the clairctl CLI tool with your configuration file to export the updaters bundle. For example: USD ./clairctl --config ./config.yaml export-updaters updates.gz 3.4.3.1.4. Configuring access to the Clair database in the disconnected OpenShift Container Platform cluster Use the following procedure to configure access to the Clair database in your disconnected OpenShift Container Platform cluster. Prerequisites You have installed the clairctl command line utility tool. You have retrieved and decoded the Clair configuration secret, and saved it to a Clair config.yaml file. The disable_updaters and airgap parameters are set to true in your Clair config.yaml file. You have exported the updaters bundle from a Clair instance that has access to the internet. Procedure Determine your Clair database service by using the oc CLI tool, for example: USD oc get svc -n quay-enterprise Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE example-registry-clair-app ClusterIP 172.30.224.93 <none> 80/TCP,8089/TCP 4d21h example-registry-clair-postgres ClusterIP 172.30.246.88 <none> 5432/TCP 4d21h ... Forward the Clair database port so that it is accessible from the local machine. For example: USD oc port-forward -n quay-enterprise service/example-registry-clair-postgres 5432:5432 Update your Clair config.yaml file, for example: indexer: connstring: host=localhost port=5432 dbname=postgres user=postgres password=postgres sslmode=disable 1 scanlock_retry: 10 layer_scan_concurrency: 5 migrations: true scanner: repo: rhel-repository-scanner: 2 repo2cpe_mapping_file: /data/cpe-map.json package: rhel_containerscanner: 3 name2repos_mapping_file: /data/repo-map.json 1 Replace the value of the host in the multiple connstring fields with localhost . 2 For more information about the rhel-repository-scanner parameter, see "Mapping repositories to Common Product Enumeration information". 3 For more information about the rhel_containerscanner parameter, see "Mapping repositories to Common Product Enumeration information". 3.4.3.1.5. Importing the updaters bundle into the disconnected OpenShift Container Platform cluster Use the following procedure to import the updaters bundle into your disconnected OpenShift Container Platform cluster. Prerequisites You have installed the clairctl command line utility tool. You have retrieved and decoded the Clair configuration secret, and saved it to a Clair config.yaml file. The disable_updaters and airgap parameters are set to true in your Clair config.yaml file. You have exported the updaters bundle from a Clair instance that has access to the internet. You have transferred the updaters bundle into your disconnected environment. Procedure Use the clairctl CLI tool to import the updaters bundle into the Clair database that is deployed by OpenShift Container Platform. For example: USD ./clairctl --config ./clair-config.yaml import-updaters updates.gz 3.4.3.2. 
Setting up a self-managed deployment of Clair for a disconnected OpenShift Container Platform cluster Use the following procedures to set up a self-managed deployment of Clair for a disconnected OpenShift Container Platform cluster. Important Due to known issue PROJQUAY-6577 , the Red Hat Quay Operator does not properly render customized Clair config.yaml files. As a result, the following procedure does not currently work. Users must create the entire Clair configuration themselves, from the beginning, instead of relying on the Operator to populate the fields. To do this, follow the instructions at Procedure to enable Clair scanning of images in disconnected environments . 3.4.3.2.1. Installing the clairctl command line utility tool for a self-managed Clair deployment on OpenShift Container Platform Use the following procedure to install the clairctl CLI tool for self-managed Clair deployments on OpenShift Container Platform. Procedure Install the clairctl program for a self-managed Clair deployment by using the podman cp command, for example: USD sudo podman cp clairv4:/usr/bin/clairctl ./clairctl Set the permissions of the clairctl file so that it can be executed and run by the user, for example: USD chmod u+x ./clairctl 3.4.3.2.2. Deploying a self-managed Clair container for disconnected OpenShift Container Platform clusters Use the following procedure to deploy a self-managed Clair container for disconnected OpenShift Container Platform clusters. Prerequisites You have installed the clairctl command line utility tool. Procedure Create a folder for your Clair configuration file, for example: USD mkdir /etc/clairv4/config/ Create a Clair configuration file with the disable_updaters and airgap parameters set to true , for example: --- indexer: airgap: true --- matcher: disable_updaters: true --- Start Clair by using the container image, mounting in the configuration from the file you created: 3.4.3.2.3. Exporting the updaters bundle from a connected Clair instance Use the following procedure to export the updaters bundle from a Clair instance that has access to the internet. Prerequisites You have installed the clairctl command line utility tool. You have deployed Clair. The disable_updaters and airgap parameters are set to true in your Clair config.yaml file. Procedure From a Clair instance that has access to the internet, use the clairctl CLI tool with your configuration file to export the updaters bundle. For example: USD ./clairctl --config ./config.yaml export-updaters updates.gz 3.4.3.2.4. Configuring access to the Clair database in the disconnected OpenShift Container Platform cluster Use the following procedure to configure access to the Clair database in your disconnected OpenShift Container Platform cluster. Prerequisites You have installed the clairctl command line utility tool. You have deployed Clair. The disable_updaters and airgap parameters are set to true in your Clair config.yaml file. You have exported the updaters bundle from a Clair instance that has access to the internet. Procedure Determine your Clair database service by using the oc CLI tool, for example: USD oc get svc -n quay-enterprise Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE example-registry-clair-app ClusterIP 172.30.224.93 <none> 80/TCP,8089/TCP 4d21h example-registry-clair-postgres ClusterIP 172.30.246.88 <none> 5432/TCP 4d21h ... Forward the Clair database port so that it is accessible from the local machine.
For example: USD oc port-forward -n quay-enterprise service/example-registry-clair-postgres 5432:5432 Update your Clair config.yaml file, for example: indexer: connstring: host=localhost port=5432 dbname=postgres user=postgres password=postgres sslmode=disable 1 scanlock_retry: 10 layer_scan_concurrency: 5 migrations: true scanner: repo: rhel-repository-scanner: 2 repo2cpe_mapping_file: /data/cpe-map.json package: rhel_containerscanner: 3 name2repos_mapping_file: /data/repo-map.json 1 Replace the value of the host in the multiple connstring fields with localhost . 2 For more information about the rhel-repository-scanner parameter, see "Mapping repositories to Common Product Enumeration information". 3 For more information about the rhel_containerscanner parameter, see "Mapping repositories to Common Product Enumeration information". 3.4.3.2.5. Importing the updaters bundle into the disconnected OpenShift Container Platform cluster Use the following procedure to import the updaters bundle into your disconnected OpenShift Container Platform cluster. Prerequisites You have installed the clairctl command line utility tool. You have deployed Clair. The disable_updaters and airgap parameters are set to true in your Clair config.yaml file. You have exported the updaters bundle from a Clair instance that has access to the internet. You have transferred the updaters bundle into your disconnected environment. Procedure Use the clairctl CLI tool to import the updaters bundle into the Clair database that is deployed by OpenShift Container Platform: USD ./clairctl --config ./clair-config.yaml import-updaters updates.gz 3.4.4. Mapping repositories to Common Product Enumeration information Note Currently, mapping repositories to Common Product Enumeration information is not supported on IBM Power and IBM Z. Clair's Red Hat Enterprise Linux (RHEL) scanner relies on a Common Product Enumeration (CPE) file to map RPM packages to the corresponding security data to produce matching results. These files are owned by product security and updated daily. The CPE file must be present, or access to the file must be allowed, for the scanner to properly process RPM packages. If the file is not present, RPM packages installed in the container image will not be scanned. Table 3.1. Clair CPE mapping files CPE Link to JSON mapping file repos2cpe Red Hat Repository-to-CPE JSON names2repos Red Hat Name-to-Repos JSON . In addition to uploading CVE information to the database for disconnected Clair installations, you must also make the mapping file available locally: For standalone Red Hat Quay and Clair deployments, the mapping file must be loaded into the Clair pod. For Red Hat Quay on OpenShift Container Platform deployments, you must set the Clair component to unmanaged . Then, Clair must be deployed manually, setting the configuration to load a local copy of the mapping file. 3.4.4.1. Mapping repositories to Common Product Enumeration example configuration Use the repo2cpe_mapping_file and name2repos_mapping_file fields in your Clair configuration to include the CPE JSON mapping files. For example: indexer: scanner: repo: rhel-repository-scanner: repo2cpe_mapping_file: /data/cpe-map.json package: rhel_containerscanner: name2repos_mapping_file: /data/repo-map.json For more information, see How to accurately match OVAL security data to installed RPMs .
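Red Hat publishes both mapping files listed in Table 3.1 as part of its security data, so for a disconnected deployment you typically download them on a connected host and copy them to the location that repo2cpe_mapping_file and name2repos_mapping_file point to. The commands below are a rough sketch of that step; the download URLs and the /data target directory are assumptions based on the linked Red Hat resources and the example configuration above, so confirm them against the JSON mapping file links in Table 3.1 before relying on them.

# The URLs and the /data directory below are assumptions; verify them against the links in Table 3.1
USD mkdir -p /data
USD curl -L -o /data/cpe-map.json https://access.redhat.com/security/data/metrics/repository-to-cpe.json
USD curl -L -o /data/repo-map.json https://access.redhat.com/security/data/metrics/container-name-repos-map.json

Because the mapping files are updated daily, refreshing the local copies on a regular schedule keeps the RHEL scanner results accurate.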
[ "podman pull ubuntu:20.04", "sudo podman tag docker.io/library/ubuntu:20.04 <quay-server.example.com>/<user-name>/ubuntu:20.04", "sudo podman push --tls-verify=false quay-server.example.com/quayadmin/ubuntu:20.04", "apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: quay370 spec: configBundleSecret: config-bundle-secret components: - kind: objectstorage managed: false - kind: route managed: true - kind: tls managed: false - kind: clairpostgres managed: false", "oc create secret generic --from-file config.yaml=./config.yaml --from-file extra_ca_cert_rds-ca-2019-root.pem=./rds-ca-2019-root.pem --from-file clair-config.yaml=./clair-config.yaml --from-file ssl.cert=./ssl.cert --from-file ssl.key=./ssl.key config-bundle-secret", "indexer: connstring: host=quay-server.example.com port=5432 dbname=quay user=quayrdsdb password=quayrdsdb sslrootcert=/run/certs/rds-ca-2019-root.pem sslmode=verify-ca layer_scan_concurrency: 6 migrations: true scanlock_retry: 11 log_level: debug matcher: connstring: host=quay-server.example.com port=5432 dbname=quay user=quayrdsdb password=quayrdsdb sslrootcert=/run/certs/rds-ca-2019-root.pem sslmode=verify-ca migrations: true metrics: name: prometheus notifier: connstring: host=quay-server.example.com port=5432 dbname=quay user=quayrdsdb password=quayrdsdb sslrootcert=/run/certs/rds-ca-2019-root.pem sslmode=verify-ca migrations: true", "apiVersion: v1 kind: Secret metadata: name: config-bundle-secret namespace: quay-enterprise data: config.yaml: <base64 encoded Quay config> clair-config.yaml: <base64 encoded Clair config> extra_ca_cert_<name>: <base64 encoded ca cert> ssl.crt: <base64 encoded SSL certificate> ssl.key: <base64 encoded SSL private key>", "oc get pods -n <namespace>", "NAME READY STATUS RESTARTS AGE f192fe4a-c802-4275-bcce-d2031e635126-9l2b5-25lg2 1/1 Running 0 7s", "apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: quay370 spec: configBundleSecret: config-bundle-secret components: - kind: objectstorage managed: false - kind: route managed: true - kind: tls managed: false - kind: clairpostgres managed: true", "oc create secret generic --from-file config.yaml=./config.yaml --from-file extra_ca_cert_rds-ca-2019-root.pem=./rds-ca-2019-root.pem --from-file clair-config.yaml=./clair-config.yaml config-bundle-secret", "indexer: connstring: host=quay-server.example.com port=5432 dbname=quay user=quayrdsdb password=quayrdsdb sslmode=disable layer_scan_concurrency: 6 migrations: true scanlock_retry: 11 log_level: debug matcher: connstring: host=quay-server.example.com port=5432 dbname=quay user=quayrdsdb password=quayrdsdb sslmode=disable migrations: true metrics: name: prometheus notifier: connstring: host=quay-server.example.com port=5432 dbname=quay user=quayrdsdb password=quayrdsdb sslmode=disable migrations: true", "apiVersion: v1 kind: Secret metadata: name: config-bundle-secret namespace: quay-enterprise data: config.yaml: <base64 encoded Quay config> clair-config.yaml: <base64 encoded Clair config>", "oc get pods -n <namespace>", "NAME READY STATUS RESTARTS AGE f192fe4a-c802-4275-bcce-d2031e635126-9l2b5-25lg2 1/1 Running 0 7s", "oc -n quay-enterprise exec example-registry-clair-app-64dd48f866-6ptgw -- cat /usr/bin/clairctl > clairctl", "chmod u+x ./clairctl", "oc get secret -n quay-enterprise example-registry-clair-config-secret -o \"jsonpath={USD.data['config\\.yaml']}\" | base64 -d > clair-config.yaml", "--- indexer: airgap: true --- matcher: disable_updaters: true ---", "./clairctl --config ./config.yaml 
export-updaters updates.gz", "oc get svc -n quay-enterprise", "NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE example-registry-clair-app ClusterIP 172.30.224.93 <none> 80/TCP,8089/TCP 4d21h example-registry-clair-postgres ClusterIP 172.30.246.88 <none> 5432/TCP 4d21h", "oc port-forward -n quay-enterprise service/example-registry-clair-postgres 5432:5432", "indexer: connstring: host=localhost port=5432 dbname=postgres user=postgres password=postgres sslmode=disable 1 scanlock_retry: 10 layer_scan_concurrency: 5 migrations: true scanner: repo: rhel-repository-scanner: 2 repo2cpe_mapping_file: /data/cpe-map.json package: rhel_containerscanner: 3 name2repos_mapping_file: /data/repo-map.json", "./clairctl --config ./clair-config.yaml import-updaters updates.gz", "sudo podman cp clairv4:/usr/bin/clairctl ./clairctl", "chmod u+x ./clairctl", "mkdir /etc/clairv4/config/", "--- indexer: airgap: true --- matcher: disable_updaters: true ---", "sudo podman run -it --rm --name clairv4 -p 8081:8081 -p 8088:8088 -e CLAIR_CONF=/clair/config.yaml -e CLAIR_MODE=combo -v /etc/clairv4/config:/clair:Z registry.redhat.io/quay/clair-rhel8:v3.10.9", "./clairctl --config ./config.yaml export-updaters updates.gz", "oc get svc -n quay-enterprise", "NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE example-registry-clair-app ClusterIP 172.30.224.93 <none> 80/TCP,8089/TCP 4d21h example-registry-clair-postgres ClusterIP 172.30.246.88 <none> 5432/TCP 4d21h", "oc port-forward -n quay-enterprise service/example-registry-clair-postgres 5432:5432", "indexer: connstring: host=localhost port=5432 dbname=postgres user=postgres password=postgres sslmode=disable 1 scanlock_retry: 10 layer_scan_concurrency: 5 migrations: true scanner: repo: rhel-repository-scanner: 2 repo2cpe_mapping_file: /data/cpe-map.json package: rhel_containerscanner: 3 name2repos_mapping_file: /data/repo-map.json", "./clairctl --config ./clair-config.yaml import-updaters updates.gz", "indexer: scanner: repo: rhel-repository-scanner: repo2cpe_mapping_file: /data/cpe-map.json package: rhel_containerscanner: name2repos_mapping_file: /data/repo-map.json" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3.10/html/red_hat_quay_operator_features/clair-vulnerability-scanner
Chapter 17. Data objects
Chapter 17. Data objects Data objects are the building blocks for the rule assets that you create. Data objects are custom data types implemented as Java objects in specified packages of your project. For example, you might create a Person object with data fields Name , Address , and DateOfBirth to specify personal details for loan application rules. These custom data types determine what data your assets and your decision services are based on. 17.1. Creating data objects The following procedure is a generic overview of creating data objects. It is not specific to a particular business asset. Procedure In Business Central, go to Menu Design Projects and click the project name. Click Add Asset Data Object . Enter a unique Data Object name and select the Package where you want the data object to be available for other rule assets. Data objects with the same name cannot exist in the same package. In the specified DRL file, you can import a data object from any package. Importing data objects from other packages You can import an existing data object from another package directly into the asset designers like guided rules or guided decision table designers. Select the relevant rule asset within the project and in the asset designer, go to Data Objects New item to select the object to be imported. To make your data object persistable, select the Persistable checkbox. Persistable data objects are able to be stored in a database according to the JPA specification. The default JPA is Hibernate. Click Ok . In the data object designer, click add field to add a field to the object with the attributes Id , Label , and Type . Required attributes are marked with an asterisk (*). Id: Enter the unique ID of the field. Label: (Optional) Enter a label for the field. Type: Enter the data type of the field. List: (Optional) Select this check box to enable the field to hold multiple items for the specified type. Figure 17.1. Add data fields to a data object Click Create to add the new field, or click Create and continue to add the new field and continue adding other fields. Note To edit a field, select the field row and use the general properties on the right side of the screen.
null
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/developing_decision_services_in_red_hat_process_automation_manager/data-objects-con_drl-rules
Chapter 7. Working with Helm charts
Chapter 7. Working with Helm charts 7.1. Understanding Helm Helm is a software package manager that simplifies deployment of applications and services to OpenShift Container Platform clusters. Helm uses a packaging format called charts . A Helm chart is a collection of files that describes the OpenShift Container Platform resources. A running instance of the chart in a cluster is called a release . A new release is created every time a chart is installed on the cluster. Each time a chart is installed, or a release is upgraded or rolled back, an incremental revision is created. 7.1.1. Key features Helm provides the ability to: Search through a large collection of charts stored in the chart repository. Modify existing charts. Create your own charts with OpenShift Container Platform or Kubernetes resources. Package and share your applications as charts. 7.1.2. Red Hat Certification of Helm charts for OpenShift You can choose to verify and certify your Helm charts by Red Hat for all the components you will be deploying on the Red Hat OpenShift Container Platform. Charts go through an automated Red Hat OpenShift certification workflow that guarantees security compliance as well as best integration and experience with the platform. Certification assures the integrity of the chart and ensures that the Helm chart works seamlessly on Red Hat OpenShift clusters. 7.1.3. Additional resources For more information on how to certify your Helm charts as a Red Hat partner, see Red Hat Certification of Helm charts for OpenShift . For more information on OpenShift and Container certification guides for Red Hat partners, see Partner Guide for OpenShift and Container Certification . For a list of the charts, see the Red Hat Helm index file . You can view the available charts at the Red Hat Marketplace . For more information, see Using the Red Hat Marketplace . 7.2. Installing Helm The following section describes how to install Helm on different platforms using the CLI. You can also find the URL to the latest binaries from the OpenShift Container Platform web console by clicking the ? icon in the upper-right corner and selecting Command Line Tools . Prerequisites You have installed Go, version 1.13 or higher. 7.2.1. On Linux Download the Helm binary and add it to your path: Linux (x86_64, amd64) # curl -L https://mirror.openshift.com/pub/openshift-v4/clients/helm/latest/helm-linux-amd64 -o /usr/local/bin/helm Linux on IBM Z and LinuxONE (s390x) # curl -L https://mirror.openshift.com/pub/openshift-v4/clients/helm/latest/helm-linux-s390x -o /usr/local/bin/helm Linux on IBM Power (ppc64le) # curl -L https://mirror.openshift.com/pub/openshift-v4/clients/helm/latest/helm-linux-ppc64le -o /usr/local/bin/helm Make the binary file executable: # chmod +x /usr/local/bin/helm Check the installed version: USD helm version Example output version.BuildInfo{Version:"v3.0", GitCommit:"b31719aab7963acf4887a1c1e6d5e53378e34d93", GitTreeState:"clean", GoVersion:"go1.13.4"} 7.2.2. On Windows 7/8 Download the latest .exe file and put in a directory of your preference. Right click Start and click Control Panel . Select System and Security and then click System . From the menu on the left, select Advanced systems settings and click Environment Variables at the bottom. Select Path from the Variable section and click Edit . Click New and type the path to the folder with the .exe file into the field or click Browse and select the directory, and click OK . 7.2.3. 
On Windows 10 Download the latest .exe file and put in a directory of your preference. Click Search and type env or environment . Select Edit environment variables for your account . Select Path from the Variable section and click Edit . Click New and type the path to the directory with the exe file into the field or click Browse and select the directory, and click OK . 7.2.4. On MacOS Download the Helm binary and add it to your path: # curl -L https://mirror.openshift.com/pub/openshift-v4/clients/helm/latest/helm-darwin-amd64 -o /usr/local/bin/helm Make the binary file executable: # chmod +x /usr/local/bin/helm Check the installed version: USD helm version Example output version.BuildInfo{Version:"v3.0", GitCommit:"b31719aab7963acf4887a1c1e6d5e53378e34d93", GitTreeState:"clean", GoVersion:"go1.13.4"} 7.3. Configuring custom Helm chart repositories You can install Helm charts on an OpenShift Container Platform cluster using the following methods: The CLI. The Developer perspective of the web console. The Developer Catalog , in the Developer perspective of the web console, displays the Helm charts available in the cluster. By default, it lists the Helm charts from the Red Hat OpenShift Helm chart repository. For a list of the charts, see the Red Hat Helm index file . As a cluster administrator, you can add multiple cluster-scoped and namespace-scoped Helm chart repositories, separate from the default cluster-scoped Helm repository, and display the Helm charts from these repositories in the Developer Catalog . As a regular user or project member with the appropriate role-based access control (RBAC) permissions, you can add multiple namespace-scoped Helm chart repositories, apart from the default cluster-scoped Helm repository, and display the Helm charts from these repositories in the Developer Catalog . 7.3.1. Installing a Helm chart on an OpenShift Container Platform cluster Prerequisites You have a running OpenShift Container Platform cluster and you have logged into it. You have installed Helm. Procedure Create a new project: USD oc new-project vault Add a repository of Helm charts to your local Helm client: USD helm repo add openshift-helm-charts https://charts.openshift.io/ Example output "openshift-helm-charts" has been added to your repositories Update the repository: USD helm repo update Install an example HashiCorp Vault: USD helm install example-vault openshift-helm-charts/hashicorp-vault Example output NAME: example-vault LAST DEPLOYED: Fri Mar 11 12:02:12 2022 NAMESPACE: vault STATUS: deployed REVISION: 1 NOTES: Thank you for installing HashiCorp Vault! Verify that the chart has installed successfully: USD helm list Example output NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION example-vault vault 1 2022-03-11 12:02:12.296226673 +0530 IST deployed vault-0.19.0 1.9.2 7.3.2. Installing Helm charts using the Developer perspective You can use either the Developer perspective in the web console or the CLI to select and install a chart from the Helm charts listed in the Developer Catalog . You can create Helm releases by installing Helm charts and see them in the Developer perspective of the web console. Prerequisites You have logged in to the web console and have switched to the Developer perspective . Procedure To create Helm releases from the Helm charts provided in the Developer Catalog : In the Developer perspective, navigate to the +Add view and select a project. Then click Helm Chart option to see all the Helm Charts in the Developer Catalog . 
Select a chart and read the description, README, and other details about the chart. Click Install Helm Chart . Figure 7.1. Helm charts in developer catalog In the Install Helm Chart page: Enter a unique name for the release in the Release Name field. Select the required chart version from the Chart Version drop-down list. Configure your Helm chart by using the Form View or the YAML View . Note Where available, you can switch between the YAML View and Form View . The data is persisted when switching between the views. Click Install to create a Helm release. You will be redirected to the Topology view where the release is displayed. If the Helm chart has release notes, the chart is pre-selected and the right panel displays the release notes for that release. You can upgrade, rollback, or uninstall a Helm release by using the Actions button on the side panel or by right-clicking a Helm release. 7.3.3. Using Helm in the web terminal You can use Helm by Accessing the web terminal in the Developer perspective of the web console. 7.3.4. Creating a custom Helm chart on OpenShift Container Platform Procedure Create a new project: USD oc new-project nodejs-ex-k Download an example Node.js chart that contains OpenShift Container Platform objects: USD git clone https://github.com/redhat-developer/redhat-helm-charts Go to the directory with the sample chart: USD cd redhat-helm-charts/alpha/nodejs-ex-k/ Edit the Chart.yaml file and add a description of your chart: apiVersion: v2 1 name: nodejs-ex-k 2 description: A Helm chart for OpenShift 3 icon: https://static.redhat.com/libs/redhat/brand-assets/latest/corp/logo.svg 4 version: 0.2.1 5 1 The chart API version. It should be v2 for Helm charts that require at least Helm 3. 2 The name of your chart. 3 The description of your chart. 4 The URL to an image to be used as an icon. 5 The Version of your chart as per the Semantic Versioning (SemVer) 2.0.0 Specification. Verify that the chart is formatted properly: USD helm lint Example output [INFO] Chart.yaml: icon is recommended 1 chart(s) linted, 0 chart(s) failed Navigate to the directory level: USD cd .. Install the chart: USD helm install nodejs-chart nodejs-ex-k Verify that the chart has installed successfully: USD helm list Example output NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION nodejs-chart nodejs-ex-k 1 2019-12-05 15:06:51.379134163 -0500 EST deployed nodejs-0.1.0 1.16.0 7.3.5. Adding custom Helm chart repositories As a cluster administrator, you can add custom Helm chart repositories to your cluster and enable access to the Helm charts from these repositories in the Developer Catalog . Procedure To add a new Helm Chart Repository, you must add the Helm Chart Repository custom resource (CR) to your cluster. Sample Helm Chart Repository CR apiVersion: helm.openshift.io/v1beta1 kind: HelmChartRepository metadata: name: <name> spec: # optional name that might be used by console # name: <chart-display-name> connectionConfig: url: <helm-chart-repository-url> For example, to add an Azure sample chart repository, run: USD cat <<EOF | oc apply -f - apiVersion: helm.openshift.io/v1beta1 kind: HelmChartRepository metadata: name: azure-sample-repo spec: name: azure-sample-repo connectionConfig: url: https://raw.githubusercontent.com/Azure-Samples/helm-charts/master/docs EOF Navigate to the Developer Catalog in the web console to verify that the Helm charts from the chart repository are displayed. For example, use the Chart repositories filter to search for a Helm chart from the repository. 
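If you prefer to confirm the new repository from the command line instead of the web console, you can query the cluster-scoped resource directly. This is a small sketch that is not part of the documented procedure; it assumes the azure-sample-repo example created in the previous step.

USD oc get helmchartrepositories
# azure-sample-repo is the example name used above; substitute your own repository name
USD oc get helmchartrepository azure-sample-repo -o yaml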
Figure 7.2. Chart repositories filter Note If a cluster administrator removes all of the chart repositories, then you cannot view the Helm option in the +Add view, Developer Catalog , and left navigation panel. 7.3.6. Adding namespace-scoped custom Helm chart repositories The cluster-scoped HelmChartRepository custom resource definition (CRD) for Helm repository provides the ability for administrators to add Helm repositories as custom resources. The namespace-scoped ProjectHelmChartRepository CRD allows project members with the appropriate role-based access control (RBAC) permissions to create Helm repository resources of their choice but scoped to their namespace. Such project members can see charts from both cluster-scoped and namespace-scoped Helm repository resources. Note Administrators can limit users from creating namespace-scoped Helm repository resources. By limiting users, administrators have the flexibility to control the RBAC through a namespace role instead of a cluster role. This avoids unnecessary permission elevation for the user and prevents access to unauthorized services or applications. The addition of the namespace-scoped Helm repository does not impact the behavior of the existing cluster-scoped Helm repository. As a regular user or project member with the appropriate RBAC permissions, you can add custom namespace-scoped Helm chart repositories to your cluster and enable access to the Helm charts from these repositories in the Developer Catalog . Procedure To add a new namespace-scoped Helm Chart Repository, you must add the Helm Chart Repository custom resource (CR) to your namespace. Sample Namespace-scoped Helm Chart Repository CR apiVersion: helm.openshift.io/v1beta1 kind: ProjectHelmChartRepository metadata: name: <name> spec: url: https://my.chart-repo.org/stable # optional name that might be used by console name: <chart-repo-display-name> # optional and only needed for UI purposes description: <My private chart repo> # required: chart repository URL connectionConfig: url: <helm-chart-repository-url> For example, to add an Azure sample chart repository scoped to your my-namespace namespace, run: USD cat <<EOF | oc apply --namespace my-namespace -f - apiVersion: helm.openshift.io/v1beta1 kind: ProjectHelmChartRepository metadata: name: azure-sample-repo spec: name: azure-sample-repo connectionConfig: url: https://raw.githubusercontent.com/Azure-Samples/helm-charts/master/docs EOF The output verifies that the namespace-scoped Helm Chart Repository CR is created: Example output Navigate to the Developer Catalog in the web console to verify that the Helm charts from the chart repository are displayed in your my-namespace namespace. For example, use the Chart repositories filter to search for a Helm chart from the repository. Figure 7.3. Chart repositories filter in your namespace Alternatively, run: USD oc get projecthelmchartrepositories --namespace my-namespace Example output Note If a cluster administrator or a regular user with appropriate RBAC permissions removes all of the chart repositories in a specific namespace, then you cannot view the Helm option in the +Add view, Developer Catalog , and left navigation panel for that specific namespace. 7.3.7. Creating credentials and CA certificates to add Helm chart repositories Some Helm chart repositories need credentials and custom certificate authority (CA) certificates to connect to it. You can use the web console as well as the CLI to add credentials and certificates. 
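Before you start the following procedure, it can save troubleshooting time to confirm that the CA certificate and client TLS files you plan to use are valid PEM-encoded files. The openssl checks below are a small sketch for that purpose and are not part of the documented procedure; the file paths mirror the placeholders used in the next steps, and the last command assumes the client key is an RSA key.

# Paths are placeholders matching the procedure that follows
USD openssl x509 -in /path/to/certs/ca.crt -noout -subject -enddate
USD openssl x509 -in /path/to/certs/client.crt -noout -subject -enddate
USD openssl rsa -in /path/to/certs/client.key -check -noout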
Procedure To configure the credentials and certificates, and then add a Helm chart repository using the CLI: In the openshift-config namespace, create a ConfigMap object with a custom CA certificate in PEM encoded format, and store it under the ca-bundle.crt key within the config map: USD oc create configmap helm-ca-cert \ --from-file=ca-bundle.crt=/path/to/certs/ca.crt \ -n openshift-config In the openshift-config namespace, create a Secret object to add the client TLS configurations: USD oc create secret tls helm-tls-configs \ --cert=/path/to/certs/client.crt \ --key=/path/to/certs/client.key \ -n openshift-config Note that the client certificate and key must be in PEM encoded format and stored under the keys tls.crt and tls.key , respectively. Add the Helm repository as follows: USD cat <<EOF | oc apply -f - apiVersion: helm.openshift.io/v1beta1 kind: HelmChartRepository metadata: name: <helm-repository> spec: name: <helm-repository> connectionConfig: url: <URL for the Helm repository> tlsConfig: name: helm-tls-configs ca: name: helm-ca-cert EOF The ConfigMap and Secret are consumed in the HelmChartRepository CR using the tlsConfig and ca fields. These certificates are used to connect to the Helm repository URL. By default, all authenticated users have access to all configured charts. However, for chart repositories where certificates are needed, you must provide users with read access to the helm-ca-cert config map and helm-tls-configs secret in the openshift-config namespace, as follows: USD cat <<EOF | kubectl apply -f - apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: namespace: openshift-config name: helm-chartrepos-tls-conf-viewer rules: - apiGroups: [""] resources: ["configmaps"] resourceNames: ["helm-ca-cert"] verbs: ["get"] - apiGroups: [""] resources: ["secrets"] resourceNames: ["helm-tls-configs"] verbs: ["get"] --- kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: namespace: openshift-config name: helm-chartrepos-tls-conf-viewer subjects: - kind: Group apiGroup: rbac.authorization.k8s.io name: 'system:authenticated' roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: helm-chartrepos-tls-conf-viewer EOF 7.3.8. Filtering Helm Charts by their certification level You can filter Helm charts based on their certification level in the Developer Catalog . Procedure In the Developer perspective, navigate to the +Add view and select a project. From the Developer Catalog tile, select the Helm Chart option to see all the Helm charts in the Developer Catalog . Use the filters to the left of the list of Helm charts to filter the required charts: Use the Chart Repositories filter to filter charts provided by Red Hat Certification Charts or OpenShift Helm Charts . Use the Source filter to filter charts sourced from Partners , Community , or Red Hat . Certified charts are indicated with the ( ) icon. Note The Source filter will not be visible when there is only one provider type. You can now select the required chart and install it. 7.3.9. Disabling Helm Chart repositories You can disable Helm Charts from a particular Helm Chart Repository in the catalog by setting the disabled property in the HelmChartRepository custom resource to true . Procedure To disable a Helm Chart repository by using CLI, add the disabled: true flag to the custom resource. 
For example, to remove an Azure sample chart repository, run: To disable a recently added Helm Chart repository by using Web Console: Go to Custom Resource Definitions and search for the HelmChartRepository custom resource. Go to Instances , find the repository you want to disable, and click its name. Go to the YAML tab, add the disabled: true flag in the spec section, and click Save . Example The repository is now disabled and will not appear in the catalog. 7.4. Working with Helm releases You can use the Developer perspective in the web console to update, rollback, or uninstall a Helm release. 7.4.1. Prerequisites You have logged in to the web console and have switched to the Developer perspective . 7.4.2. Upgrading a Helm release You can upgrade a Helm release to upgrade to a new chart version or update your release configuration. Procedure In the Topology view, select the Helm release to see the side panel. Click Actions Upgrade Helm Release . In the Upgrade Helm Release page, select the Chart Version you want to upgrade to, and then click Upgrade to create another Helm release. The Helm Releases page displays the two revisions. 7.4.3. Rolling back a Helm release If a release fails, you can rollback the Helm release to a version. Procedure To rollback a release using the Helm view: In the Developer perspective, navigate to the Helm view to see the Helm Releases in the namespace. Click the Options menu adjoining the listed release, and select Rollback . In the Rollback Helm Release page, select the Revision you want to rollback to and click Rollback . In the Helm Releases page, click on the chart to see the details and resources for that release. Go to the Revision History tab to see all the revisions for the chart. Figure 7.4. Helm revision history If required, you can further use the Options menu adjoining a particular revision and select the revision to rollback to. 7.4.4. Uninstalling a Helm release Procedure In the Topology view, right-click the Helm release and select Uninstall Helm Release . In the confirmation prompt, enter the name of the chart and click Uninstall .
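The upgrade, rollback, and uninstall actions described above can also be performed with the Helm CLI instead of the web console. The commands below are a brief sketch of that equivalent workflow; the release name example-vault and the chart reference openshift-helm-charts/hashicorp-vault reuse the earlier installation example, and the revision number passed to helm rollback is illustrative.

USD helm history example-vault
USD helm upgrade example-vault openshift-helm-charts/hashicorp-vault
# Roll back to revision 1; pick the revision shown by helm history
USD helm rollback example-vault 1
USD helm uninstall example-vault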
[ "curl -L https://mirror.openshift.com/pub/openshift-v4/clients/helm/latest/helm-linux-amd64 -o /usr/local/bin/helm", "curl -L https://mirror.openshift.com/pub/openshift-v4/clients/helm/latest/helm-linux-s390x -o /usr/local/bin/helm", "curl -L https://mirror.openshift.com/pub/openshift-v4/clients/helm/latest/helm-linux-ppc64le -o /usr/local/bin/helm", "chmod +x /usr/local/bin/helm", "helm version", "version.BuildInfo{Version:\"v3.0\", GitCommit:\"b31719aab7963acf4887a1c1e6d5e53378e34d93\", GitTreeState:\"clean\", GoVersion:\"go1.13.4\"}", "curl -L https://mirror.openshift.com/pub/openshift-v4/clients/helm/latest/helm-darwin-amd64 -o /usr/local/bin/helm", "chmod +x /usr/local/bin/helm", "helm version", "version.BuildInfo{Version:\"v3.0\", GitCommit:\"b31719aab7963acf4887a1c1e6d5e53378e34d93\", GitTreeState:\"clean\", GoVersion:\"go1.13.4\"}", "oc new-project vault", "helm repo add openshift-helm-charts https://charts.openshift.io/", "\"openshift-helm-charts\" has been added to your repositories", "helm repo update", "helm install example-vault openshift-helm-charts/hashicorp-vault", "NAME: example-vault LAST DEPLOYED: Fri Mar 11 12:02:12 2022 NAMESPACE: vault STATUS: deployed REVISION: 1 NOTES: Thank you for installing HashiCorp Vault!", "helm list", "NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION example-vault vault 1 2022-03-11 12:02:12.296226673 +0530 IST deployed vault-0.19.0 1.9.2", "oc new-project nodejs-ex-k", "git clone https://github.com/redhat-developer/redhat-helm-charts", "cd redhat-helm-charts/alpha/nodejs-ex-k/", "apiVersion: v2 1 name: nodejs-ex-k 2 description: A Helm chart for OpenShift 3 icon: https://static.redhat.com/libs/redhat/brand-assets/latest/corp/logo.svg 4 version: 0.2.1 5", "helm lint", "[INFO] Chart.yaml: icon is recommended 1 chart(s) linted, 0 chart(s) failed", "cd ..", "helm install nodejs-chart nodejs-ex-k", "helm list", "NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION nodejs-chart nodejs-ex-k 1 2019-12-05 15:06:51.379134163 -0500 EST deployed nodejs-0.1.0 1.16.0", "apiVersion: helm.openshift.io/v1beta1 kind: HelmChartRepository metadata: name: <name> spec: # optional name that might be used by console # name: <chart-display-name> connectionConfig: url: <helm-chart-repository-url>", "cat <<EOF | oc apply -f - apiVersion: helm.openshift.io/v1beta1 kind: HelmChartRepository metadata: name: azure-sample-repo spec: name: azure-sample-repo connectionConfig: url: https://raw.githubusercontent.com/Azure-Samples/helm-charts/master/docs EOF", "apiVersion: helm.openshift.io/v1beta1 kind: ProjectHelmChartRepository metadata: name: <name> spec: url: https://my.chart-repo.org/stable # optional name that might be used by console name: <chart-repo-display-name> # optional and only needed for UI purposes description: <My private chart repo> # required: chart repository URL connectionConfig: url: <helm-chart-repository-url>", "cat <<EOF | oc apply --namespace my-namespace -f - apiVersion: helm.openshift.io/v1beta1 kind: ProjectHelmChartRepository metadata: name: azure-sample-repo spec: name: azure-sample-repo connectionConfig: url: https://raw.githubusercontent.com/Azure-Samples/helm-charts/master/docs EOF", "projecthelmchartrepository.helm.openshift.io/azure-sample-repo created", "oc get projecthelmchartrepositories --namespace my-namespace", "NAME AGE azure-sample-repo 1m", "oc create configmap helm-ca-cert --from-file=ca-bundle.crt=/path/to/certs/ca.crt -n openshift-config", "oc create secret tls helm-tls-configs --cert=/path/to/certs/client.crt 
--key=/path/to/certs/client.key -n openshift-config", "cat <<EOF | oc apply -f - apiVersion: helm.openshift.io/v1beta1 kind: HelmChartRepository metadata: name: <helm-repository> spec: name: <helm-repository> connectionConfig: url: <URL for the Helm repository> tlsConfig: name: helm-tls-configs ca: name: helm-ca-cert EOF", "cat <<EOF | kubectl apply -f - apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: namespace: openshift-config name: helm-chartrepos-tls-conf-viewer rules: - apiGroups: [\"\"] resources: [\"configmaps\"] resourceNames: [\"helm-ca-cert\"] verbs: [\"get\"] - apiGroups: [\"\"] resources: [\"secrets\"] resourceNames: [\"helm-tls-configs\"] verbs: [\"get\"] --- kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: namespace: openshift-config name: helm-chartrepos-tls-conf-viewer subjects: - kind: Group apiGroup: rbac.authorization.k8s.io name: 'system:authenticated' roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: helm-chartrepos-tls-conf-viewer EOF", "cat <<EOF | oc apply -f - apiVersion: helm.openshift.io/v1beta1 kind: HelmChartRepository metadata: name: azure-sample-repo spec: connectionConfig: url:https://raw.githubusercontent.com/Azure-Samples/helm-charts/master/docs disabled: true EOF", "spec: connectionConfig: url: <url-of-the-repositoru-to-be-disabled> disabled: true" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/building_applications/working-with-helm-charts
Chapter 8. Memory
Chapter 8. Memory This chapter covers memory optimization options for virtualized environments. 8.1. Memory Tuning Tips To optimize memory performance in a virtualized environment, consider the following: Do not allocate more resources to a guest than it will use. If possible, assign a guest to a single NUMA node, provided that resources are sufficient on that NUMA node. For more information on using NUMA, see Chapter 9, NUMA .
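To apply the NUMA recommendation above, it helps to check the host topology and how much memory is free on each node before placing a guest. The commands below are a general sketch using standard tooling rather than steps from this guide; the numactl package might need to be installed separately, and the guest name rhel7-guest is a placeholder.

USD numactl --hardware
USD virsh freecell --all
# rhel7-guest is a placeholder guest name; numatune output appears only if NUMA tuning is configured
USD virsh dumpxml rhel7-guest | grep -A 3 numatune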
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_tuning_and_optimization_guide/chap-Virtualization_Tuning_Optimization_Guide-Memory
Chapter 5. Root DSE attributes
Chapter 5. Root DSE attributes The attributes in this section are used to define the root directory server entry (DSE) for the server instance. The information defined in the DSE relates to the actual configuration of the server instance, such as the controls, mechanisms, or features supported in that version of the server software. It also contains information specific to the instance, like its build number and installation date. The DSE is a special entry, outside the normal DIT, and can be returned by searching with a null search base. For example: # ldapsearch -D "cn=Directory Manager" -W -p 389 -h server.example.com -x -s base -b "" "objectclass=*" 5.1. dataversion This attribute contains a timestamp which shows the most recent edit time for any data in the directory. dataversion: 020090923175302020090923175302 OID Syntax GeneralizedTime Multi- or Single-Valued Single-valued Defined in Directory Server 5.2. defaultNamingContext Corresponds to the naming context, out of all configured naming contexts, which clients should use by default. OID Syntax DN Multi- or Single-Valued Single-valued Defined in Directory Server 5.3. lastusn The USN Plug-in assigns a sequence number to every entry whenever a write operation - add, modify, delete, and modrdn - is performed for that entry. The USN is assigned in the entryUSN operational attribute for the entry. The USN Plug-in has two modes: local and global. In local mode, each database maintained for a server instance has its own instance of the USN Plug-in with a separate USN counter per back end database. The most recent USN assigned for any entry in the database is displayed in the lastusn attribute. When the USN Plug-in is set to local mode, the lastUSN attribute shows both the database which assigned the USN and the USN: lastusn;database_name: USN For example: lastusn;example1: 213 lastusn;example2: 207 In global mode, when the database uses a shared USN counter, the lastUSN value shows the latest USN assigned by any database: lastusn: 420 5.4. namingContexts Corresponds to a naming context the server is controlling or shadowing. When Directory Server does not control any information (such as when it is an LDAP gateway to a public X.500 directory), this attribute is absent. When Directory Server believes it contains the entire directory, the attribute has a single value, and that value is the empty string (indicating the null DN of the root). This attribute permits a client contacting a server to choose suitable base objects for searching. OID 1.3.6.1.4.1.1466.101.120.5 Syntax DN Multi- or Single-Valued Multi-valued Defined in RFC 2252 5.5. netscapemdsuffix This attribute contains the DN for the top suffix of the directory tree for machine data maintained in the server. The DN itself points to an LDAP URL. For example: cn=ldap://dc=server_name,dc=example,dc=com:389 OID 2.16.840.1.113730.3.1.212 Syntax DN Multi- or Single-Valued Single-valued Defined in Directory Server 5.6. supportedControl The values of this attribute are the object identifiers (OIDs) that identify the controls supported by the server. When the server does not support controls, this attribute is absent. OID 1.3.6.1.4.1.1466.101.120.13 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2252 5.7. supportedExtension The values of this attribute are the object identifiers (OIDs) that identify the extended operations supported by the server.
When the server does not support extended operations, this attribute is absent. OID 1.3.6.1.4.1.1466.101.120.7 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2252 5.8. supportedFeatures This attribute contains features supported by the current version of Red Hat Directory Server. OID 1.3.6.1.4.1.4203.1.3.5 Syntax OID Multi- or Single-Valued Multi-valued Defined in RFC 3674 5.9. supportedLDAPVersion This attribute identifies the versions of the LDAP protocol implemented by the server. OID 1.3.6.1.4.1.1466.101.120.15 Syntax Integer Multi- or Single-Valued Multi-valued Defined in RFC 2252 5.10. supportedSASLMechanisms This attribute identifies the names of the SASL mechanisms supported by the server. When the server does not support SASL attributes, this attribute is absent. OID 1.3.6.1.4.1.1466.101.120.14 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2252 5.11. vendorName This attribute contains the name of the server vendor. OID 1.3.6.1.1.4 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in RFC 3045 5.12. vendorVersion This attribute shows the vendor's version number for the server. OID 1.3.6.1.1.5 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in RFC 3045
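If you only need a few of the attributes described in this chapter, you can limit the output of the root DSE search by listing those attribute names at the end of the command. The following is a small sketch that builds on the ldapsearch example at the beginning of this chapter; the bind DN and host name are the same placeholders used there.

# server.example.com and the bind DN are the placeholders from the earlier example
# ldapsearch -D "cn=Directory Manager" -W -p 389 -h server.example.com -x -s base -b "" "objectclass=*" supportedControl supportedExtension supportedLDAPVersion vendorName vendorVersion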
[ "ldapsearch -D \"cn=Directory Manager\" -W -p 389 -h server.example.com -x -s base -b \"\" \"objectclass=*\"", "dataversion: 020090923175302020090923175302", "lastusn;pass:quotes[ database_name ]:pass:quotes[ USN ]", "lastusn;example1: 213 lastusn;example2: 207", "lastusn: 420", "cn=ldap://dc=pass:quotes[ server_name ],dc=example,dc=com:389" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/12/html/configuration_and_schema_reference/assembly_root-dse-attributes_config-schema-reference-title
Chapter 2. Installing a user-provisioned cluster on bare metal
Chapter 2. Installing a user-provisioned cluster on bare metal In OpenShift Container Platform 4.13, you can install a cluster on bare metal infrastructure that you provision. Important While you might be able to follow this procedure to deploy a cluster on virtualized or cloud environments, you must be aware of additional considerations for non-bare metal platforms. Review the information in the guidelines for deploying OpenShift Container Platform on non-tested platforms before you attempt to install an OpenShift Container Platform cluster in such an environment. 2.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . If you use a firewall, you configured it to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. 2.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.13, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. Additional resources See Installing a user-provisioned bare metal cluster on a restricted network for more information about performing a restricted network installation on bare metal infrastructure that you provision. 2.3. Requirements for a cluster with user-provisioned infrastructure For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. This section describes the requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. 2.3.1. Required machines for cluster installation The smallest OpenShift Container Platform clusters require the following hosts: Table 2.1. Minimum required hosts Hosts Description One temporary bootstrap machine The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster. Three control plane machines The control plane machines run the Kubernetes and OpenShift Container Platform services that form the control plane. At least two compute machines, which are also known as worker machines. The workloads requested by OpenShift Container Platform users run on the compute machines. Note As an exception, you can run zero compute machines in a bare metal cluster that consists of three control plane machines only. This provides smaller, more resource efficient clusters for cluster administrators and developers to use for testing, development, and production. 
Running one compute machine is not supported. Important To maintain high availability of your cluster, use separate physical hosts for these cluster machines. The bootstrap and control plane machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. However, the compute machines can choose between Red Hat Enterprise Linux CoreOS (RHCOS), Red Hat Enterprise Linux (RHEL) 8.6 and later. Note that RHCOS is based on Red Hat Enterprise Linux (RHEL) 9.2 and inherits all of its hardware certifications and requirements. See Red Hat Enterprise Linux technology capabilities and limits . 2.3.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 2.2. Minimum resource requirements Machine Operating System CPU [1] RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One CPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = CPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see RHEL Architectures . If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 2.3.3. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. Additional resources See Configuring a three-node cluster for details about deploying three-node clusters in bare metal environments. 
See Approving the certificate signing requests for your machines for more information about approving cluster certificate signing requests after installation. 2.3.4. Requirements for baremetal clusters on vSphere Ensure you enable the disk.EnableUUID parameter on all virtual machines in your cluster. Additional resources See Installing RHCOS and starting the OpenShift Container Platform bootstrap process for details on setting the disk.EnableUUID parameter's value to TRUE on VMware vSphere for user-provisioned infrastructure. 2.3.5. Networking requirements for user-provisioned infrastructure All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking to be configured in initramfs during boot to fetch their Ignition config files. During the initial boot, the machines require an IP address configuration that is set either through a DHCP server or statically by providing the required boot options. After a network connection is established, the machines download their Ignition config files from an HTTP or HTTPS server. The Ignition config files are then used to set the exact state of each machine. The Machine Config Operator completes more changes to the machines, such as the application of new certificates or keys, after installation. It is recommended to use a DHCP server for long-term management of the cluster machines. Ensure that the DHCP server is configured to provide persistent IP addresses, DNS server information, and hostnames to the cluster machines. Note If a DHCP service is not available for your user-provisioned infrastructure, you can instead provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. The Kubernetes API server must be able to resolve the node names of the cluster machines. If the API servers and worker nodes are in different zones, you can configure a default DNS search zone to allow the API server to resolve the node names. Another supported approach is to always refer to hosts by their fully-qualified domain names in both the node objects and all DNS requests. 2.3.5.1. Setting the cluster node hostnames through DHCP On Red Hat Enterprise Linux CoreOS (RHCOS) machines, the hostname is set through NetworkManager. By default, the machines obtain their hostname through DHCP. If the hostname is not provided by DHCP, set statically through kernel arguments, or another method, it is obtained through a reverse DNS lookup. Reverse DNS lookup occurs after the network has been initialized on a node and can take time to resolve. Other system services can start prior to this and detect the hostname as localhost or similar. You can avoid this by using DHCP to provide the hostname for each cluster node. Additionally, setting the hostnames through DHCP can bypass any manual DNS record name configuration errors in environments that have a DNS split-horizon implementation. 2.3.5.2. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster. This section provides details about the ports that are required. 
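For example, once the cluster machines are on the network, you can spot-check name resolution and the reachability of a required port from one machine to another with standard tools. The following sketch is illustrative only: the host name is the example name used elsewhere in this chapter, and it assumes that bash, getent, and timeout are available on the node:

$ getent hosts control-plane0.ocp4.example.com    # verify that the peer host name resolves
$ timeout 3 bash -c '</dev/tcp/control-plane0.ocp4.example.com/6443' && echo "6443 reachable"     # Kubernetes API server port
$ timeout 3 bash -c '</dev/tcp/control-plane0.ocp4.example.com/10250' && echo "10250 reachable"   # kubelet port from the following tables

If a check fails, review your DNS records and firewall rules against the port tables in this section.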
Important In connected OpenShift Container Platform environments, all nodes are required to have internet access to pull images for platform containers and provide telemetry data to Red Hat. Table 2.3. Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves 10256 openshift-sdn UDP 4789 VXLAN 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets 123 Network Time Protocol (NTP) on UDP port 123 If an external NTP time server is configured, you must open UDP port 123 . TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 2.4. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 2.5. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports NTP configuration for user-provisioned infrastructure OpenShift Container Platform clusters are configured to use a public Network Time Protocol (NTP) server by default. If you want to use a local enterprise NTP server, or if your cluster is being deployed in a disconnected network, you can configure the cluster to use a specific time server. For more information, see the documentation for Configuring chrony time service . If a DHCP server provides NTP server information, the chrony time service on the Red Hat Enterprise Linux CoreOS (RHCOS) machines read the information and can sync the clock with the NTP servers. Additional resources Configuring chrony time service 2.3.6. User-provisioned DNS requirements In OpenShift Container Platform deployments, DNS name resolution is required for the following components: The Kubernetes API The OpenShift Container Platform application wildcard The bootstrap, control plane, and compute machines Reverse DNS resolution is also required for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. DNS A/AAAA or CNAME records are used for name resolution and PTR records are used for reverse name resolution. The reverse records are important because Red Hat Enterprise Linux CoreOS (RHCOS) uses the reverse records to set the hostnames for all the nodes, unless the hostnames are provided by DHCP. Additionally, the reverse records are used to generate the certificate signing requests (CSR) that OpenShift Container Platform needs to operate. Note It is recommended to use a DHCP server to provide the hostnames to each cluster node. See the DHCP recommendations for user-provisioned infrastructure section for more information. The following DNS records are required for a user-provisioned OpenShift Container Platform cluster and they must be in place before installation. In each record, <cluster_name> is the cluster name and <base_domain> is the base domain that you specify in the install-config.yaml file. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 2.6. Required DNS records Component Record Description Kubernetes API api.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the API load balancer. 
These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. api-int.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to internally identify the API load balancer. These records must be resolvable from all the nodes within the cluster. Important The API server must be able to resolve the worker nodes by the hostnames that are recorded in Kubernetes. If the API server cannot resolve the node names, then proxied API calls can fail, and you cannot retrieve logs from pods. Routes *.apps.<cluster_name>.<base_domain>. A wildcard DNS A/AAAA or CNAME record that refers to the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. For example, console-openshift-console.apps.<cluster_name>.<base_domain> is used as a wildcard route to the OpenShift Container Platform console. Bootstrap machine bootstrap.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the bootstrap machine. These records must be resolvable by the nodes within the cluster. Control plane machines <control_plane><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the control plane nodes. These records must be resolvable by the nodes within the cluster. Compute machines <compute><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the worker nodes. These records must be resolvable by the nodes within the cluster. Note In OpenShift Container Platform 4.4 and later, you do not need to specify etcd host and SRV records in your DNS configuration. Tip You can use the dig command to verify name and reverse name resolution. See the section on Validating DNS resolution for user-provisioned infrastructure for detailed validation steps. 2.3.6.1. Example DNS configuration for user-provisioned clusters This section provides A and PTR record configuration samples that meet the DNS requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. The samples are not meant to provide advice for choosing one DNS solution over another. In the examples, the cluster name is ocp4 and the base domain is example.com . Example DNS A record configuration for a user-provisioned cluster The following example is a BIND zone file that shows sample A records for name resolution in a user-provisioned cluster. Example 2.1. Sample DNS zone database USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. 
IN A 192.168.1.7 9 ; ;EOF 1 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer. 2 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer and is used for internal cluster communications. 3 Provides name resolution for the wildcard routes. The record refers to the IP address of the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. 4 Provides name resolution for the bootstrap machine. 5 6 7 Provides name resolution for the control plane machines. 8 9 Provides name resolution for the compute machines. Example DNS PTR record configuration for a user-provisioned cluster The following example BIND zone file shows sample PTR records for reverse name resolution in a user-provisioned cluster. Example 2.2. Sample DNS zone database for reverse records USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF 1 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer. 2 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer and is used for internal cluster communications. 3 Provides reverse DNS resolution for the bootstrap machine. 4 5 6 Provides reverse DNS resolution for the control plane machines. 7 8 Provides reverse DNS resolution for the compute machines. Note A PTR record is not required for the OpenShift Container Platform application wildcard. Additional resources Validating DNS resolution for user-provisioned infrastructure 2.3.7. Load balancing requirements for user-provisioned infrastructure Before you install OpenShift Container Platform, you must provision the API and application Ingress load balancing infrastructure. In production scenarios, you can deploy the API and application Ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you want to deploy the API and application Ingress load balancers with a Red Hat Enterprise Linux (RHEL) instance, you must purchase the RHEL subscription separately. The load balancing infrastructure must meet the following requirements: API load balancer : Provides a common endpoint for users, both human and machine, to interact with and configure the platform. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. 
A stateless load balancing algorithm. The options vary based on the load balancer implementation. Important Do not configure session persistence for an API load balancer. Configuring session persistence for a Kubernetes API server might cause performance issues from excess application traffic for your OpenShift Container Platform cluster and the Kubernetes API that runs inside the cluster. Configure the following ports on both the front and back of the load balancers: Table 2.7. API load balancer Port Back-end machines (pool members) Internal External Description 6443 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. You must configure the /readyz endpoint for the API server health check probe. X X Kubernetes API server 22623 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. X Machine config server Note The load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool. Within the time frame after /readyz returns an error or becomes healthy, the endpoint must have been removed or added. Probing every 5 or 10 seconds, with two successful requests to become healthy and three to become unhealthy, are well-tested values. Application Ingress load balancer : Provides an ingress point for application traffic flowing in from outside the cluster. A working configuration for the Ingress router is required for an OpenShift Container Platform cluster. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A connection-based or session-based persistence is recommended, based on the options available and types of applications that will be hosted on the platform. Tip If the true IP address of the client can be seen by the application Ingress load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption. Configure the following ports on both the front and back of the load balancers: Table 2.8. Application Ingress load balancer Port Back-end machines (pool members) Internal External Description 443 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTPS traffic 80 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTP traffic Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. 2.3.7.1. Example load balancer configuration for user-provisioned clusters This section provides an example API and application Ingress load balancer configuration that meets the load balancing requirements for user-provisioned clusters. The sample is an /etc/haproxy/haproxy.cfg configuration for an HAProxy load balancer. The example is not meant to provide advice for choosing one load balancing solution over another. In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. 
In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you are using HAProxy as a load balancer and SELinux is set to enforcing , you must ensure that the HAProxy service can bind to the configured TCP port by running setsebool -P haproxy_connect_any=1 . Example 2.3. Sample API and application Ingress load balancer configuration global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s 1 Port 6443 handles the Kubernetes API traffic and points to the control plane machines. 2 4 The bootstrap entries must be in place before the OpenShift Container Platform cluster installation and they must be removed after the bootstrap process is complete. 3 Port 22623 handles the machine config server traffic and points to the control plane machines. 5 Port 443 handles the HTTPS traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. 6 Port 80 handles the HTTP traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. Tip If you are using HAProxy as a load balancer, you can check that the haproxy process is listening on ports 6443 , 22623 , 443 , and 80 by running netstat -nltupe on the HAProxy node. 2.4. Preparing the user-provisioned infrastructure Before you install OpenShift Container Platform on user-provisioned infrastructure, you must prepare the underlying infrastructure. This section provides details about the high-level steps required to set up your cluster infrastructure in preparation for an OpenShift Container Platform installation. 
This includes configuring IP networking and network connectivity for your cluster nodes, enabling the required ports through your firewall, and setting up the required DNS and load balancing infrastructure. After preparation, your cluster infrastructure must meet the requirements outlined in the Requirements for a cluster with user-provisioned infrastructure section. Prerequisites You have reviewed the OpenShift Container Platform 4.x Tested Integrations page. You have reviewed the infrastructure requirements detailed in the Requirements for a cluster with user-provisioned infrastructure section. Procedure If you are using DHCP to provide the IP networking configuration to your cluster nodes, configure your DHCP service. Add persistent IP addresses for the nodes to your DHCP server configuration. In your configuration, match the MAC address of the relevant network interface to the intended IP address for each node. When you use DHCP to configure IP addressing for the cluster machines, the machines also obtain the DNS server information through DHCP. Define the persistent DNS server address that is used by the cluster nodes through your DHCP server configuration. Note If you are not using a DHCP service, you must provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. Define the hostnames of your cluster nodes in your DHCP server configuration. See the Setting the cluster node hostnames through DHCP section for details about hostname considerations. Note If you are not using a DHCP service, the cluster nodes obtain their hostname through a reverse DNS lookup. Ensure that your network infrastructure provides the required network connectivity between the cluster components. See the Networking requirements for user-provisioned infrastructure section for details about the requirements. Configure your firewall to enable the ports required for the OpenShift Container Platform cluster components to communicate. See Networking requirements for user-provisioned infrastructure section for details about the ports that are required. Important By default, port 1936 is accessible for an OpenShift Container Platform cluster, because each control plane node needs access to this port. Avoid using the Ingress load balancer to expose this port, because doing so might result in the exposure of sensitive information, such as statistics and metrics, related to Ingress Controllers. Setup the required DNS infrastructure for your cluster. Configure DNS name resolution for the Kubernetes API, the application wildcard, the bootstrap machine, the control plane machines, and the compute machines. Configure reverse DNS resolution for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. See the User-provisioned DNS requirements section for more information about the OpenShift Container Platform DNS requirements. Validate your DNS configuration. From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses in the responses correspond to the correct components. 
From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names in the responses correspond to the correct components. See the Validating DNS resolution for user-provisioned infrastructure section for detailed DNS validation steps. Provision the required API and application ingress load balancing infrastructure. See the Load balancing requirements for user-provisioned infrastructure section for more information about the requirements. Note Some load balancing solutions require the DNS name resolution for the cluster nodes to be in place before the load balancing is initialized. Additional resources Requirements for a cluster with user-provisioned infrastructure Installing RHCOS and starting the OpenShift Container Platform bootstrap process Setting the cluster node hostnames through DHCP Advanced RHCOS installation configuration Networking requirements for user-provisioned infrastructure User-provisioned DNS requirements Validating DNS resolution for user-provisioned infrastructure Load balancing requirements for user-provisioned infrastructure 2.5. Validating DNS resolution for user-provisioned infrastructure You can validate your DNS configuration before installing OpenShift Container Platform on user-provisioned infrastructure. Important The validation steps detailed in this section must succeed before you install your cluster. Prerequisites You have configured the required DNS records for your user-provisioned infrastructure. Procedure From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses contained in the responses correspond to the correct components. Perform a lookup against the Kubernetes API record name. Check that the result points to the IP address of the API load balancer: $ dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1 1 Replace <nameserver_ip> with the IP address of the nameserver, <cluster_name> with your cluster name, and <base_domain> with your base domain name. Example output api.ocp4.example.com. 604800 IN A 192.168.1.5 Perform a lookup against the Kubernetes internal API record name. Check that the result points to the IP address of the API load balancer: $ dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain> Example output api-int.ocp4.example.com. 604800 IN A 192.168.1.5 Test an example *.apps.<cluster_name>.<base_domain> DNS wildcard lookup. All of the application wildcard lookups must resolve to the IP address of the application ingress load balancer: $ dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain> Example output random.apps.ocp4.example.com. 604800 IN A 192.168.1.5 Note In the example outputs, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. You can replace random with another wildcard value. For example, you can query the route to the OpenShift Container Platform console: $ dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain> Example output console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5 Run a lookup against the bootstrap DNS record name.
Check that the result points to the IP address of the bootstrap node: $ dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain> Example output bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96 Use this method to perform lookups against the DNS record names for the control plane and compute nodes. Check that the results correspond to the IP addresses of each node. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names contained in the responses correspond to the correct components. Perform a reverse lookup against the IP address of the API load balancer. Check that the response includes the record names for the Kubernetes API and the Kubernetes internal API: $ dig +noall +answer @<nameserver_ip> -x 192.168.1.5 Example output 5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2 1 Provides the record name for the Kubernetes internal API. 2 Provides the record name for the Kubernetes API. Note A PTR record is not required for the OpenShift Container Platform application wildcard. No validation step is needed for reverse DNS resolution against the IP address of the application ingress load balancer. Perform a reverse lookup against the IP address of the bootstrap node. Check that the result points to the DNS record name of the bootstrap node: $ dig +noall +answer @<nameserver_ip> -x 192.168.1.96 Example output 96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com. Use this method to perform reverse lookups against the IP addresses for the control plane and compute nodes. Check that the results correspond to the DNS record names of each node. Additional resources User-provisioned DNS requirements Load balancing requirements for user-provisioned infrastructure 2.6. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging are required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: $ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory.
View the public SSH key: $ cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: $ cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: $ eval "$(ssh-agent -s)" Example output Agent pid 31874 Add your SSH private key to the ssh-agent : $ ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. If you install a cluster on infrastructure that you provision, you must provide the key to the installation program. Additional resources Verifying node health 2.7. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: $ tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 2.8. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.13. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure.
Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.13 Linux Client entry and save the file. Unpack the archive: $ tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: $ echo $PATH Verification After you install the OpenShift CLI, it is available using the oc command: $ oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.13 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.13 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.13 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: $ echo $PATH Verification After you install the OpenShift CLI, it is available using the oc command: $ oc <command> 2.9. Manually creating the installation configuration file Installing the cluster requires that you manually create the installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: $ mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the next step of the installation process. You must back it up now.
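As an illustration only, the preceding steps might look like the following on a Linux installation host; the directory name ocp4-install and the backup file name are placeholders, not required values:

$ mkdir ~/ocp4-install                                                   # new, empty installation directory
$ vi ~/ocp4-install/install-config.yaml                                  # paste the sample template and customize it for your environment
$ cp ~/ocp4-install/install-config.yaml ~/install-config.yaml.backup     # keep a copy outside the directory, because the installer consumes the original in a later step

2.9.1.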
Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide a customized install-config.yaml installation configuration file that describes the details for your environment. Note After installation, you cannot modify these parameters in the install-config.yaml file. 2.9.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 2.9. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installation program may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters and hyphens ( - ), such as dev . platform The configuration for the specific platform upon which to perform the installation: alibabacloud , aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , ovirt , powervs , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 2.9.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. If you use the Red Hat OpenShift Networking OVN-Kubernetes network plugin, both IPv4 and IPv6 address families are supported. If you use the Red Hat OpenShift Networking OpenShift SDN network plugin, only the IPv4 address family is supported. If you configure your cluster to use both IP address families, review the following requirements: Both IP families must use the same network interface for the default gateway. Both IP families must have the default gateway. You must specify IPv4 and IPv6 addresses in the same order for all network configuration parameters. For example, in the following configuration IPv4 addresses are listed before IPv6 addresses. networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd00:10:128::/56 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd00:172:16::/112 Note Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions. For regional disaster recovery scenarios, ensure that you use a nonoverlapping range of private IP addresses for the cluster and service networks in each cluster. Table 2.10. Network parameters Parameter Description Values networking The configuration for the cluster network. 
Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The Red Hat OpenShift Networking network plugin to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI plugin for all-Linux networks. OVNKubernetes is a CNI plugin for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OVNKubernetes . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd01::/48 hostPrefix: 64 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. If you use the OpenShift SDN network plugin, specify an IPv4 network. If you use the OVN-Kubernetes network plugin, you can specify IPv4 and IPv6 networks. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . The prefix length for an IPv6 block is between 0 and 128 . For example, 10.128.0.0/14 or fd01::/48 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. For an IPv4 network the default value is 23 . For an IPv6 network the default value is 64 . The default value is also the minimum value for IPv6. networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. If you use the OVN-Kubernetes network plugin, you can specify an IP address block for both of the IPv4 and IPv6 address families. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 - fd02::/112 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt and IBM Power Virtual Server. For libvirt, the default value is 192.168.126.0/24 . For IBM Power Virtual Server, the default value is 192.168.0.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 or fd00::/48 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 2.9.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 2.11. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String capabilities Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. For more information, see the "Cluster capabilities" page in Installing . 
String array capabilities.baselineCapabilitySet Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 , v4.12 and vCurrent . The default value is vCurrent . String capabilities.additionalEnabledCapabilities Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . You may specify multiple capabilities in this parameter. String array cpuPartitioningMode Enables workload partitioning, which isolates OpenShift Container Platform services, cluster management workloads, and infrastructure pods to run on a reserved set of CPUs. Workload partitioning can only be enabled during installation and cannot be disabled after installation. While this field enables workload partitioning, it does not configure workloads to use specific CPUs. For more information, see the Workload partitioning page in the Scalability and Performance section. None or AllNodes . None is the default value. compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 and arm64 . String compute: hyperthreading: Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , powervs , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . featureSet Enables the cluster for a feature set. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. For more information about enabling a feature set during installation, see "Enabling features using feature gates". String. The name of the feature set to enable, such as TechPreviewNoUpgrade . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 and arm64 . String controlPlane: hyperthreading: Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . 
Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , powervs , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint , Passthrough or Manual . Mint , Passthrough , Manual or an empty string ( "" ). imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . The default value is External . Setting this field to Internal is not supported on non-cloud platforms. Important If the value of the field is set to Internal , the cluster will become non-functional. For more information, refer to BZ#1953035 . sshKey The SSH key to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the "Managing cloud provider credentials" entry in the Authentication and authorization content. 2.9.2. Sample install-config.yaml file for bare metal You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{"auths": ...}' 15 sshKey: 'ssh-ed25519 AAAA...' 16 1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 2 5 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 
3 6 Specifies whether to enable or disable simultaneous multithreading (SMT), or hyperthreading. By default, SMT is enabled to increase the performance of the cores in your machines. You can disable it by setting the parameter value to Disabled . If you disable SMT, you must disable it in all cluster machines; this includes both control plane and compute machines. Note Simultaneous multithreading (SMT) is enabled by default. If SMT is not enabled in your BIOS settings, the hyperthreading parameter has no effect. Important If you disable hyperthreading , whether in the BIOS or in the install-config.yaml file, ensure that your capacity planning accounts for the dramatically decreased machine performance. 4 You must set this value to 0 when you install OpenShift Container Platform on user-provisioned infrastructure. In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. In user-provisioned installations, you must manually deploy the compute machines before you finish installing the cluster. Note If you are installing a three-node cluster, do not deploy any compute machines when you install the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 7 The number of control plane machines that you add to the cluster. Because the cluster uses these values as the number of etcd endpoints in the cluster, the value must match the number of control plane machines that you deploy. 8 The cluster name that you specified in your DNS records. 9 A block of IP addresses from which pod IP addresses are allocated. This block must not overlap with existing physical networks. These IP addresses are used for the pod network. If you need to access the pods from an external network, you must configure load balancers and routers to manage the traffic. Note Class E CIDR range is reserved for a future use. To use the Class E CIDR range, you must ensure your networking environment accepts the IP addresses within the Class E CIDR range. 10 The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 , then each node is assigned a /23 subnet out of the given cidr , which allows for 510 (2^(32 - 23) - 2) pod IP addresses. If you are required to provide access to nodes from an external network, configure load balancers and routers to manage the traffic. 11 The cluster network plugin to install. The supported values are OVNKubernetes and OpenShiftSDN . The default value is OVNKubernetes . 12 The IP address pool to use for service IP addresses. You can enter only one IP address pool. This block must not overlap with existing physical networks. If you need to access the services from an external network, configure load balancers and routers to manage the traffic. 13 You must set the platform to none . You cannot provide additional platform configuration variables for your platform. Important Clusters that are installed with the platform type none are unable to use some features, such as managing compute machines with the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that would normally support the feature. This parameter cannot be changed after installation. 14 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. Important OpenShift Container Platform 4.13 is based on Red Hat Enterprise Linux (RHEL) 9.2. RHEL 9.2 cryptographic modules have not yet been submitted for FIPS validation. 
For more information, see "About this release" in the 4.13 OpenShift Container Platform Release Notes . 15 The pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 16 The SSH public key for the core user in Red Hat Enterprise Linux CoreOS (RHCOS). Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Additional resources See Load balancing requirements for user-provisioned infrastructure for more information on the API and application ingress load balancing requirements. See Enabling cluster capabilities for more information on enabling cluster capabilities that were disabled prior to installation. See Optional cluster capabilities in OpenShift Container Platform 4.13 for more information about the features provided by each capability. 2.9.3. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Note For bare metal installations, if you do not assign node IP addresses from the range that is specified in the networking.machineNetwork[].cidr field in the install-config.yaml file, you must include them in the proxy.noProxy field. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 
4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 2.9.4. Configuring a three-node cluster Optionally, you can deploy zero compute machines in a bare metal cluster that consists of three control plane machines only. This provides smaller, more resource efficient clusters for cluster administrators and developers to use for testing, development, and production. In three-node OpenShift Container Platform environments, the three control plane machines are schedulable, which means that your application workloads are scheduled to run on them. Prerequisites You have an existing install-config.yaml file. Procedure Ensure that the number of compute replicas is set to 0 in your install-config.yaml file, as shown in the following compute stanza: compute: - name: worker platform: {} replicas: 0 Note You must set the value of the replicas parameter for the compute machines to 0 when you install OpenShift Container Platform on user-provisioned infrastructure, regardless of the number of compute machines you are deploying. In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. This does not apply to user-provisioned installations, where the compute machines are deployed manually. For three-node cluster installations, follow these steps: If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. See the Load balancing requirements for user-provisioned infrastructure section for more information. 
When you create the Kubernetes manifest files in the following procedure, ensure that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml file is set to true . This enables your application workloads to run on the control plane nodes. Do not deploy any compute nodes when you create the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 2.10. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Prerequisites You obtained the OpenShift Container Platform installation program. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Warning If you are installing a three-node cluster, skip the following step to allow the control plane nodes to be schedulable. Important When you configure control plane nodes from the default unschedulable to schedulable, additional subscriptions are required. This is because control plane nodes then become compute nodes. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. 
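If you want to confirm what was generated, listing the installation directory should show the default assets. The following is an illustrative check rather than part of the documented procedure, and assumes no custom asset names:
USD ls <installation_directory>
Example output
auth  bootstrap.ign  master.ign  metadata.json  worker.ign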
The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory: Additional resources See Recovering from expired control plane certificates for more information about recovering kubelet certificates. 2.11. Installing RHCOS and starting the OpenShift Container Platform bootstrap process To install OpenShift Container Platform on bare metal infrastructure that you provision, you must install Red Hat Enterprise Linux CoreOS (RHCOS) on the machines. When you install RHCOS, you must provide the Ignition config file that was generated by the OpenShift Container Platform installation program for the type of machine you are installing. If you have configured suitable networking, DNS, and load balancing infrastructure, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS machines have rebooted. To install RHCOS on the machines, follow either the steps to use an ISO image or network PXE booting. Note The compute node deployment steps included in this installation document are RHCOS-specific. If you choose instead to deploy RHEL-based compute nodes, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Only RHEL 8 compute machines are supported. You can configure RHCOS during ISO and PXE installations by using the following methods: Kernel arguments: You can use kernel arguments to provide installation-specific information. For example, you can specify the locations of the RHCOS installation files that you uploaded to your HTTP server and the location of the Ignition config file for the type of node you are installing. For a PXE installation, you can use the APPEND parameter to pass the arguments to the kernel of the live installer. For an ISO installation, you can interrupt the live installation boot process to add the kernel arguments. In both installation cases, you can use special coreos.inst.* arguments to direct the live installer, as well as standard installation boot arguments for turning standard kernel services on or off. Ignition configs: OpenShift Container Platform Ignition config files ( *.ign ) are specific to the type of node you are installing. You pass the location of a bootstrap, control plane, or compute node Ignition config file during the RHCOS installation so that it takes effect on first boot. In special cases, you can create a separate, limited Ignition config to pass to the live system. That Ignition config could do a certain set of tasks, such as reporting success to a provisioning system after completing installation. This special Ignition config is consumed by the coreos-installer to be applied on first boot of the installed system. Do not provide the standard control plane and compute node Ignition configs to the live ISO directly. coreos-installer : You can boot the live ISO installer to a shell prompt, which allows you to prepare the permanent system in a variety of ways before first boot. In particular, you can run the coreos-installer command to identify various artifacts to include, work with disk partitions, and set up networking. In some cases, you can configure features on the live system and copy them to the installed system. Whether to use an ISO or PXE install depends on your situation. A PXE install requires an available DHCP service and more preparation, but can make the installation process more automated. 
An ISO install is a more manual process and can be inconvenient if you are setting up more than a few machines. 2.11.1. Installing RHCOS by using an ISO image You can use an ISO image to install RHCOS on the machines. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS and load balancing infrastructure. You have an HTTP server that can be accessed from your computer, and from the machines that you create. You have reviewed the Advanced RHCOS installation configuration section for different ways to configure features, such as networking and disk partitioning. Procedure Obtain the SHA512 digest for each of your Ignition config files. For example, you can use the following on a system running Linux to get the SHA512 digest for your bootstrap.ign Ignition config file: USD sha512sum <installation_directory>/bootstrap.ign The digests are provided to the coreos-installer in a later step to validate the authenticity of the Ignition config files on the cluster nodes. Upload the bootstrap, control plane, and compute node Ignition config files that the installation program created to your HTTP server. Note the URLs of these files. Important You can add or change configuration settings in your Ignition configs before saving them to your HTTP server. If you plan to add more compute machines to your cluster after you finish installation, do not delete these files. From the installation host, validate that the Ignition config files are available on the URLs. The following example gets the Ignition config file for the bootstrap node: USD curl -k http://<HTTP_server>/bootstrap.ign 1 Example output % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{"ignition":{"version":"3.2.0"},"passwd":{"users":[{"name":"core","sshAuthorizedKeys":["ssh-rsa... Replace bootstrap.ign with master.ign or worker.ign in the command to validate that the Ignition config files for the control plane and compute nodes are also available. Although it is possible to obtain the RHCOS images that are required for your preferred method of installing operating system instances from the RHCOS image mirror page, the recommended way to obtain the correct version of your RHCOS images is from the output of the openshift-install command: USD openshift-install coreos print-stream-json | grep '\.iso[^.]' Example output "location": "<url>/art/storage/releases/rhcos-4.13-aarch64/<release>/aarch64/rhcos-<release>-live.aarch64.iso", "location": "<url>/art/storage/releases/rhcos-4.13-ppc64le/<release>/ppc64le/rhcos-<release>-live.ppc64le.iso", "location": "<url>/art/storage/releases/rhcos-4.13-s390x/<release>/s390x/rhcos-<release>-live.s390x.iso", "location": "<url>/art/storage/releases/rhcos-4.13/<release>/x86_64/rhcos-<release>-live.x86_64.iso", Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image versions that match your OpenShift Container Platform version if they are available. Use only ISO images for this procedure. RHCOS qcow2 images are not supported for this installation type. ISO file names resemble the following example: rhcos-<version>-live.<architecture>.iso Use the ISO to start the RHCOS installation.
Use one of the following installation options: Burn the ISO image to a disk and boot it directly. Use ISO redirection by using a lights-out management (LOM) interface. Boot the RHCOS ISO image without specifying any options or interrupting the live boot sequence. Wait for the installer to boot into a shell prompt in the RHCOS live environment. Note It is possible to interrupt the RHCOS installation boot process to add kernel arguments. However, for this ISO procedure you should use the coreos-installer command as outlined in the following steps, instead of adding kernel arguments. Run the coreos-installer command and specify the options that meet your installation requirements. At a minimum, you must specify the URL that points to the Ignition config file for the node type, and the device that you are installing to: USD sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2 1 1 You must run the coreos-installer command by using sudo , because the core user does not have the required root privileges to perform the installation. 2 The --ignition-hash option is required when the Ignition config file is obtained through an HTTP URL to validate the authenticity of the Ignition config file on the cluster node. <digest> is the Ignition config file SHA512 digest obtained in a preceding step. Note If you want to provide your Ignition config files through an HTTPS server that uses TLS, you can add the internal certificate authority (CA) to the system trust store before running coreos-installer . The following example initializes a bootstrap node installation to the /dev/sda device. The Ignition config file for the bootstrap node is obtained from an HTTP web server with the IP address 192.168.1.2: USD sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b Monitor the progress of the RHCOS installation on the console of the machine. Important Be sure that the installation is successful on each node before commencing with the OpenShift Container Platform installation. Observing the installation process can also help to determine the cause of RHCOS installation issues that might arise. After RHCOS installs, you must reboot the system. During the system reboot, it applies the Ignition config file that you specified. Check the console output to verify that Ignition ran. Example command Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied Continue to create the other machines for your cluster. Important You must create the bootstrap and control plane machines at this time. If the control plane machines are not made schedulable, also create at least two compute machines before you install OpenShift Container Platform. If the required network, DNS, and load balancer infrastructure are in place, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS nodes have rebooted. Note RHCOS nodes do not include a default password for the core user. You can access the nodes by running ssh core@<node>.<cluster_name>.<base_domain> as a user with access to the SSH private key that is paired to the public key that you specified in your install_config.yaml file. 
OpenShift Container Platform 4 cluster nodes running RHCOS are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. However, when investigating installation issues, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on a target node, SSH access might be required for debugging or disaster recovery. 2.11.2. Installing RHCOS by using PXE or iPXE booting You can use PXE or iPXE booting to install RHCOS on the machines. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS and load balancing infrastructure. You have configured suitable PXE or iPXE infrastructure. You have an HTTP server that can be accessed from your computer, and from the machines that you create. You have reviewed the Advanced RHCOS installation configuration section for different ways to configure features, such as networking and disk partitioning. Procedure Upload the bootstrap, control plane, and compute node Ignition config files that the installation program created to your HTTP server. Note the URLs of these files. Important You can add or change configuration settings in your Ignition configs before saving them to your HTTP server. If you plan to add more compute machines to your cluster after you finish installation, do not delete these files. From the installation host, validate that the Ignition config files are available on the URLs. The following example gets the Ignition config file for the bootstrap node: USD curl -k http://<HTTP_server>/bootstrap.ign 1 Example output % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{"ignition":{"version":"3.2.0"},"passwd":{"users":[{"name":"core","sshAuthorizedKeys":["ssh-rsa... Replace bootstrap.ign with master.ign or worker.ign in the command to validate that the Ignition config files for the control plane and compute nodes are also available. 
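As a convenience, the same availability check can be scripted for all three node types. The following loop is a minimal sketch, assuming all Ignition config files were uploaded under the same <HTTP_server> path; it prints the HTTP status code returned for each file, and a 200 response indicates that the file is reachable:
USD for ign in bootstrap.ign master.ign worker.ign; do curl -k -s -o /dev/null -w "%{http_code} ${ign}\n" http://<HTTP_server>/${ign}; done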
Although it is possible to obtain the RHCOS kernel , initramfs and rootfs files that are required for your preferred method of installing operating system instances from the RHCOS image mirror page, the recommended way to obtain the correct version of your RHCOS files is from the output of the openshift-install command: USD openshift-install coreos print-stream-json | grep -Eo '"https.*(kernel-|initramfs.|rootfs.)\w+(\.img)?"' Example output "<url>/art/storage/releases/rhcos-4.13-aarch64/<release>/aarch64/rhcos-<release>-live-kernel-aarch64" "<url>/art/storage/releases/rhcos-4.13-aarch64/<release>/aarch64/rhcos-<release>-live-initramfs.aarch64.img" "<url>/art/storage/releases/rhcos-4.13-aarch64/<release>/aarch64/rhcos-<release>-live-rootfs.aarch64.img" "<url>/art/storage/releases/rhcos-4.13-ppc64le/<release>/ppc64le/rhcos-<release>-live-kernel-ppc64le" "<url>/art/storage/releases/rhcos-4.13-ppc64le/<release>/ppc64le/rhcos-<release>-live-initramfs.ppc64le.img" "<url>/art/storage/releases/rhcos-4.13-ppc64le/<release>/ppc64le/rhcos-<release>-live-rootfs.ppc64le.img" "<url>/art/storage/releases/rhcos-4.13-s390x/<release>/s390x/rhcos-<release>-live-kernel-s390x" "<url>/art/storage/releases/rhcos-4.13-s390x/<release>/s390x/rhcos-<release>-live-initramfs.s390x.img" "<url>/art/storage/releases/rhcos-4.13-s390x/<release>/s390x/rhcos-<release>-live-rootfs.s390x.img" "<url>/art/storage/releases/rhcos-4.13/<release>/x86_64/rhcos-<release>-live-kernel-x86_64" "<url>/art/storage/releases/rhcos-4.13/<release>/x86_64/rhcos-<release>-live-initramfs.x86_64.img" "<url>/art/storage/releases/rhcos-4.13/<release>/x86_64/rhcos-<release>-live-rootfs.x86_64.img" Important The RHCOS artifacts might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Only use the appropriate kernel , initramfs , and rootfs artifacts described below for this procedure. RHCOS QCOW2 images are not supported for this installation type. The file names contain the OpenShift Container Platform version number. They resemble the following examples: kernel : rhcos-<version>-live-kernel-<architecture> initramfs : rhcos-<version>-live-initramfs.<architecture>.img rootfs : rhcos-<version>-live-rootfs.<architecture>.img Upload the rootfs , kernel , and initramfs files to your HTTP server. Important If you plan to add more compute machines to your cluster after you finish installation, do not delete these files. Configure the network boot infrastructure so that the machines boot from their local disks after RHCOS is installed on them. Configure PXE or iPXE installation for the RHCOS images and begin the installation. Modify one of the following example menu entries for your environment and verify that the image and Ignition files are properly accessible: For PXE ( x86_64 ): 1 1 Specify the location of the live kernel file that you uploaded to your HTTP server. The URL must be HTTP, TFTP, or FTP; HTTPS and NFS are not supported. 2 If you use multiple NICs, specify a single interface in the ip option. For example, to use DHCP on a NIC that is named eno1 , set ip=eno1:dhcp . 3 Specify the locations of the RHCOS files that you uploaded to your HTTP server.
The initrd parameter value is the location of the initramfs file, the coreos.live.rootfs_url parameter value is the location of the rootfs file, and the coreos.inst.ignition_url parameter value is the location of the bootstrap Ignition config file. You can also add more kernel arguments to the APPEND line to configure networking or other boot options. Note This configuration does not enable serial console access on machines with a graphical console. To configure a different console, add one or more console= arguments to the APPEND line. For example, add console=tty0 console=ttyS0 to set the first PC serial port as the primary console and the graphical console as a secondary console. For more information, see How does one set up a serial terminal and/or console in Red Hat Enterprise Linux? and "Enabling the serial console for PXE and ISO installation" in the "Advanced RHCOS installation configuration" section. For iPXE ( x86_64 + aarch64 ): 1 Specify the locations of the RHCOS files that you uploaded to your HTTP server. The kernel parameter value is the location of the kernel file, the initrd=main argument is needed for booting on UEFI systems, the coreos.live.rootfs_url parameter value is the location of the rootfs file, and the coreos.inst.ignition_url parameter value is the location of the bootstrap Ignition config file. 2 If you use multiple NICs, specify a single interface in the ip option. For example, to use DHCP on a NIC that is named eno1 , set ip=eno1:dhcp . 3 Specify the location of the initramfs file that you uploaded to your HTTP server. Note This configuration does not enable serial console access on machines with a graphical console. To configure a different console, add one or more console= arguments to the kernel line. For example, add console=tty0 console=ttyS0 to set the first PC serial port as the primary console and the graphical console as a secondary console. For more information, see How does one set up a serial terminal and/or console in Red Hat Enterprise Linux? and "Enabling the serial console for PXE and ISO installation" in the "Advanced RHCOS installation configuration" section. Note To network boot the CoreOS kernel on aarch64 architecture, you need to use a version of iPXE build with the IMAGE_GZIP option enabled. See IMAGE_GZIP option in iPXE . For PXE (with UEFI and Grub as second stage) on aarch64 : 1 Specify the locations of the RHCOS files that you uploaded to your HTTP/TFTP server. The kernel parameter value is the location of the kernel file on your TFTP server. The coreos.live.rootfs_url parameter value is the location of the rootfs file, and the coreos.inst.ignition_url parameter value is the location of the bootstrap Ignition config file on your HTTP Server. 2 If you use multiple NICs, specify a single interface in the ip option. For example, to use DHCP on a NIC that is named eno1 , set ip=eno1:dhcp . 3 Specify the location of the initramfs file that you uploaded to your TFTP server. Monitor the progress of the RHCOS installation on the console of the machine. Important Be sure that the installation is successful on each node before commencing with the OpenShift Container Platform installation. Observing the installation process can also help to determine the cause of RHCOS installation issues that might arise. After RHCOS installs, the system reboots. During reboot, the system applies the Ignition config file that you specified. Check the console output to verify that Ignition ran. 
Example command Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied Continue to create the machines for your cluster. Important You must create the bootstrap and control plane machines at this time. If the control plane machines are not made schedulable, also create at least two compute machines before you install the cluster. If the required network, DNS, and load balancer infrastructure are in place, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS nodes have rebooted. Note RHCOS nodes do not include a default password for the core user. You can access the nodes by running ssh core@<node>.<cluster_name>.<base_domain> as a user with access to the SSH private key that is paired to the public key that you specified in your install_config.yaml file. OpenShift Container Platform 4 cluster nodes running RHCOS are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. However, when investigating installation issues, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on a target node, SSH access might be required for debugging or disaster recovery. 2.11.3. Advanced RHCOS installation configuration A key benefit for manually provisioning the Red Hat Enterprise Linux CoreOS (RHCOS) nodes for OpenShift Container Platform is to be able to do configuration that is not available through default OpenShift Container Platform installation methods. This section describes some of the configurations that you can do using techniques that include: Passing kernel arguments to the live installer Running coreos-installer manually from the live system Customizing a live ISO or PXE boot image The advanced configuration topics for manual Red Hat Enterprise Linux CoreOS (RHCOS) installations detailed in this section relate to disk partitioning, networking, and using Ignition configs in different ways. 2.11.3.1. Using advanced networking options for PXE and ISO installations Networking for OpenShift Container Platform nodes uses DHCP by default to gather all necessary configuration settings. To set up static IP addresses or configure special settings, such as bonding, you can do one of the following: Pass special kernel parameters when you boot the live installer. Use a machine config to copy networking files to the installed system. Configure networking from a live installer shell prompt, then copy those settings to the installed system so that they take effect when the installed system first boots. To configure a PXE or iPXE installation, use one of the following options: See the "Advanced RHCOS installation reference" tables. Use a machine config to copy networking files to the installed system. To configure an ISO installation, use the following procedure. Procedure Boot the ISO installer. From the live system shell prompt, configure networking for the live system using available RHEL tools, such as nmcli or nmtui . Run the coreos-installer command to install the system, adding the --copy-network option to copy networking configuration. For example: USD sudo coreos-installer install --copy-network \ --ignition-url=http://host/worker.ign /dev/disk/by-id/scsi-<serial_number> Important The --copy-network option only copies networking configuration found under /etc/NetworkManager/system-connections . In particular, it does not copy the system hostname. Reboot into the installed system. 
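For reference, the networking step of this procedure can be completed with a few nmcli commands from the live shell. The following is a minimal sketch only; the connection profile name, interface, and addresses shown are illustrative placeholders, not values required by this document:
USD sudo nmcli connection show
USD sudo nmcli connection modify 'Wired connection 1' ipv4.method manual ipv4.addresses 10.10.10.2/24 ipv4.gateway 10.10.10.254 ipv4.dns 4.4.4.41
USD sudo nmcli connection up 'Wired connection 1'
NetworkManager stores the resulting profile as a keyfile under /etc/NetworkManager/system-connections , which is the configuration that the --copy-network option copies to the installed system.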
Additional resources See Getting started with nmcli and Getting started with nmtui in the RHEL 8 documentation for more information about the nmcli and nmtui tools. 2.11.3.2. Disk partitioning Disk partitions are created on OpenShift Container Platform cluster nodes during the Red Hat Enterprise Linux CoreOS (RHCOS) installation. Each RHCOS node of a particular architecture uses the same partition layout, unless you override the default partitioning configuration. During the RHCOS installation, the size of the root file system is increased to use any remaining available space on the target device. Important The use of a custom partition scheme on your node might result in OpenShift Container Platform not monitoring or alerting on some node partitions. If you override the default partitioning, see Understanding OpenShift File System Monitoring (eviction conditions) for more information about how OpenShift Container Platform monitors your host file systems. OpenShift Container Platform monitors the following two filesystem identifiers: nodefs , which is the filesystem that contains /var/lib/kubelet imagefs , which is the filesystem that contains /var/lib/containers For the default partition scheme, nodefs and imagefs monitor the same root filesystem, / . To override the default partitioning when installing RHCOS on an OpenShift Container Platform cluster node, you must create separate partitions. Important For disk sizes larger than 100GB, and especially disk sizes larger than 1TB, create a separate /var partition. See "Creating a separate /var partition" and this Red Hat Knowledgebase article for more information. Consider a situation where you want to add a separate storage partition for your containers and container images. For example, by mounting /var/lib/containers in a separate partition, the kubelet separately monitors /var/lib/containers as the imagefs directory and the root file system as the nodefs directory. Important If you have resized your disk size to host a larger file system, consider creating a separate /var/lib/containers partition. Consider resizing a disk that has an xfs format to reduce CPU time issues caused by a high number of allocation groups. 2.11.3.2.1. Creating a separate /var partition In general, you should use the default disk partitioning that is created during the RHCOS installation. However, there are cases where you might want to create a separate partition for a directory that you expect to grow. OpenShift Container Platform supports the addition of a single partition to attach storage to either the /var directory or a subdirectory of /var . For example: /var/lib/containers : Holds container-related content that can grow as more images and containers are added to a system. /var/lib/etcd : Holds data that you might want to keep separate for purposes such as performance optimization of etcd storage. /var : Holds data that you might want to keep separate for purposes such as auditing. Important For disk sizes larger than 100GB, and especially larger than 1TB, create a separate /var partition. Storing the contents of a /var directory separately makes it easier to grow storage for those areas as needed and reinstall OpenShift Container Platform at a later date and keep that data intact. With this method, you will not have to pull all your containers again, nor will you have to copy massive log files when you update systems. 
The use of a separate partition for the /var directory or a subdirectory of /var also prevents data growth in the partitioned directory from filling up the root file system. The following procedure sets up a separate /var partition by adding a machine config manifest that is wrapped into the Ignition config file for a node type during the preparation phase of an installation. Procedure On your installation host, change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD openshift-install create manifests --dir <installation_directory> Create a Butane config that configures the additional partition. For example, name the file USDHOME/clusterconfig/98-var-partition.bu , change the disk device name to the name of the storage device on the worker systems, and set the storage size as appropriate. This example places the /var directory on a separate partition: variant: openshift version: 4.13.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true 1 The storage device name of the disk that you want to partition. 2 When adding a data partition to the boot disk, a minimum offset value of 25000 mebibytes is recommended. The root file system is automatically resized to fill all available space up to the specified offset. If no offset value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition. 3 The size of the data partition in mebibytes. 4 The prjquota mount option must be enabled for filesystems used for container storage. Note When creating a separate /var partition, you cannot use different instance types for compute nodes, if the different instance types do not have the same device name. Create a manifest from the Butane config and save it to the clusterconfig/openshift directory. For example, run the following command: USD butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml Create the Ignition config files: USD openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory: The files in the <installation_directory>/manifest and <installation_directory>/openshift directories are wrapped into the Ignition config files, including the file that contains the 98-var-partition custom MachineConfig object. steps You can apply the custom disk partitioning by referencing the Ignition config files during the RHCOS installations. 2.11.3.2.2. Retaining existing partitions For an ISO installation, you can add options to the coreos-installer command that cause the installer to maintain one or more existing partitions. For a PXE installation, you can add coreos.inst.* options to the APPEND parameter to preserve partitions. Saved partitions might be data partitions from an existing OpenShift Container Platform system. 
You can identify the disk partitions you want to keep either by partition label or by number. Note If you save existing partitions, and those partitions do not leave enough space for RHCOS, the installation will fail without damaging the saved partitions. Retaining existing partitions during an ISO installation This example preserves any partition in which the partition label begins with data ( data* ): # coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign \ --save-partlabel 'data*' /dev/disk/by-id/scsi-<serial_number> The following example illustrates running the coreos-installer in a way that preserves the sixth (6) partition on the disk: # coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign \ --save-partindex 6 /dev/disk/by-id/scsi-<serial_number> This example preserves partitions 5 and higher: # coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partindex 5- /dev/disk/by-id/scsi-<serial_number> In the examples where partition saving is used, coreos-installer recreates the partition immediately. Retaining existing partitions during a PXE installation This APPEND option preserves any partition in which the partition label begins with 'data' ('data*'): coreos.inst.save_partlabel=data* This APPEND option preserves partitions 5 and higher: coreos.inst.save_partindex=5- This APPEND option preserves partition 6: coreos.inst.save_partindex=6 2.11.3.3. Identifying Ignition configs When doing an RHCOS manual installation, there are two types of Ignition configs that you can provide, with different reasons for providing each one: Permanent install Ignition config : Every manual RHCOS installation needs to pass one of the Ignition config files generated by openshift-installer , such as bootstrap.ign , master.ign and worker.ign , to carry out the installation. Important It is not recommended to modify these Ignition config files directly. You can update the manifest files that are wrapped into the Ignition config files, as outlined in examples in the preceding sections. For PXE installations, you pass the Ignition configs on the APPEND line using the coreos.inst.ignition_url= option. For ISO installations, after the ISO boots to the shell prompt, you identify the Ignition config on the coreos-installer command line with the --ignition-url= option. In both cases, only HTTP and HTTPS protocols are supported. Live install Ignition config : This type can be created by using the coreos-installer customize subcommand and its various options. With this method, the Ignition config passes to the live install medium, runs immediately upon booting, and performs setup tasks before or after the RHCOS system installs to disk. This method should only be used for performing tasks that must be done once and not applied again later, such as with advanced partitioning that cannot be done using a machine config. For PXE or ISO boots, you can create the Ignition config and APPEND the ignition.config.url= option to identify the location of the Ignition config. You also need to append ignition.firstboot ignition.platform.id=metal or the ignition.config.url option will be ignored. 2.11.3.4. Default console configuration Red Hat Enterprise Linux CoreOS (RHCOS) nodes installed from an OpenShift Container Platform 4.13 boot image use a default console that is meant to accommodate most virtualized and bare metal setups. Different cloud and virtualization platforms may use different default settings depending on the chosen architecture.
Bare metal installations use the kernel default settings, which typically means the graphical console is the primary console and the serial console is disabled. The default consoles may not match your specific hardware configuration or you might have specific needs that require you to adjust the default console. For example: You want to access the emergency shell on the console for debugging purposes. Your cloud platform does not provide interactive access to the graphical console, but provides a serial console. You want to enable multiple consoles. Console configuration is inherited from the boot image. This means that new nodes in existing clusters are unaffected by changes to the default console. You can configure the console for bare metal installations in the following ways: Using coreos-installer manually on the command line. Using the coreos-installer iso customize or coreos-installer pxe customize subcommands with the --dest-console option to create a custom image that automates the process. Note For advanced customization, perform console configuration using the coreos-installer iso or coreos-installer pxe subcommands, and not kernel arguments. 2.11.3.5. Enabling the serial console for PXE and ISO installations By default, the Red Hat Enterprise Linux CoreOS (RHCOS) serial console is disabled and all output is written to the graphical console. You can enable the serial console for an ISO installation and reconfigure the bootloader so that output is sent to both the serial console and the graphical console. Procedure Boot the ISO installer. Run the coreos-installer command to install the system, adding the --console option once to specify the graphical console, and a second time to specify the serial console: USD coreos-installer install \ --console=tty0 \ 1 --console=ttyS0,<options> \ 2 --ignition-url=http://host/worker.ign /dev/disk/by-id/scsi-<serial_number> 1 The desired secondary console. In this case, the graphical console. Omitting this option will disable the graphical console. 2 The desired primary console. In this case, the serial console. The options field defines the baud rate and other settings. A common value for this field is 115200n8 . If no options are provided, the default kernel value of 9600n8 is used. For more information on the format of this option, see the Linux kernel serial console documentation. Reboot into the installed system. Note A similar outcome can be obtained by using the coreos-installer install --append-karg option, and specifying the console with console= . However, this will only set the console for the kernel and not the bootloader. To configure a PXE installation, make sure the coreos.inst.install_dev kernel command line option is omitted, and use the shell prompt to run coreos-installer manually using the above ISO installation procedure. 2.11.3.6. Customizing a live RHCOS ISO or PXE install You can use the live ISO image or PXE environment to install RHCOS by injecting an Ignition config file directly into the image. This creates a customized image that you can use to provision your system. For an ISO image, the mechanism to do this is the coreos-installer iso customize subcommand, which modifies the .iso file with your configuration. Similarly, the mechanism for a PXE environment is the coreos-installer pxe customize subcommand, which creates a new initramfs file that includes your customizations. The customize subcommand is a general purpose tool that can embed other types of customizations as well.
The following tasks are examples of some of the more common customizations: Inject custom CA certificates for when corporate security policy requires their use. Configure network settings without the need for kernel arguments. Embed arbitrary preinstall and post-install scripts or binaries. 2.11.3.7. Customizing a live RHCOS ISO image You can customize a live RHCOS ISO image directly with the coreos-installer iso customize subcommand. When you boot the ISO image, the customizations are applied automatically. You can use this feature to configure the ISO image to automatically install RHCOS. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. Retrieve the RHCOS ISO image from the RHCOS image mirror page and the Ignition config file, and then run the following command to inject the Ignition config directly into the ISO image: USD coreos-installer iso customize rhcos-<version>-live.x86_64.iso \ --dest-ignition bootstrap.ign \ 1 --dest-device /dev/disk/by-id/scsi-<serial_number> 2 1 The Ignition config file that is generated from the openshift-installer installation program. 2 When you specify this option, the ISO image automatically runs an installation. Otherwise, the image remains configured for installation, but does not install automatically unless you specify the coreos.inst.install_dev kernel argument. Optional: To remove the ISO image customizations and return the image to its pristine state, run: USD coreos-installer iso reset rhcos-<version>-live.x86_64.iso You can now re-customize the live ISO image or use it in its pristine state. Applying your customizations affects every subsequent boot of RHCOS. 2.11.3.7.1. Modifying a live install ISO image to enable the serial console On clusters installed with OpenShift Container Platform 4.12 and above, the serial console is disabled by default and all output is written to the graphical console. You can enable the serial console with the following procedure. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. Retrieve the RHCOS ISO image from the RHCOS image mirror page and run the following command to customize the ISO image to enable the serial console to receive output: USD coreos-installer iso customize rhcos-<version>-live.x86_64.iso \ --dest-ignition <path> \ 1 --dest-console tty0 \ 2 --dest-console ttyS0,<options> \ 3 --dest-device /dev/disk/by-id/scsi-<serial_number> 4 1 The location of the Ignition config to install. 2 The desired secondary console. In this case, the graphical console. Omitting this option will disable the graphical console. 3 The desired primary console. In this case, the serial console. The options field defines the baud rate and other settings. A common value for this field is 115200n8 . If no options are provided, the default kernel value of 9600n8 is used. For more information on the format of this option, see the Linux kernel serial console documentation. 4 The specified disk to install to. If you omit this option, the ISO image automatically runs the installation program which will fail unless you also specify the coreos.inst.install_dev kernel argument. Note The --dest-console option affects the installed system and not the live ISO system. To modify the console for a live ISO system, use the --live-karg-append option and specify the console with console= . Your customizations are applied and affect every subsequent boot of the ISO image. 
Optional: To remove the ISO image customizations and return the image to its original state, run the following command: USD coreos-installer iso reset rhcos-<version>-live.x86_64.iso You can now recustomize the live ISO image or use it in its original state. 2.11.3.7.2. Modifying a live install ISO image to use a custom certificate authority You can provide certificate authority (CA) certificates to Ignition with the --ignition-ca flag of the customize subcommand. You can use the CA certificates during both the installation boot and when provisioning the installed system. Note Custom CA certificates affect how Ignition fetches remote resources but they do not affect the certificates installed onto the system. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. Retrieve the RHCOS ISO image from the RHCOS image mirror page and run the following command to customize the ISO image for use with a custom CA: USD coreos-installer iso customize rhcos-<version>-live.x86_64.iso --ignition-ca cert.pem Important The coreos.inst.ignition_url kernel parameter does not work with the --ignition-ca flag. You must use the --dest-ignition flag to create a customized image for each cluster. Applying your custom CA certificate affects every subsequent boot of RHCOS. 2.11.3.7.3. Modifying a live install ISO image with customized network settings You can embed a NetworkManager keyfile into the live ISO image and pass it through to the installed system with the --network-keyfile flag of the customize subcommand. Warning When creating a connection profile, you must use a .nmconnection filename extension in the filename of the connection profile. If you do not use a .nmconnection filename extension, the cluster will apply the connection profile to the live environment, but it will not apply the configuration when the cluster first boots up the nodes, resulting in a setup that does not work. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. Create a connection profile for a bonded interface. For example, create the bond0.nmconnection file in your local directory with the following content: [connection] id=bond0 type=bond interface-name=bond0 multi-connect=1 [bond] miimon=100 mode=active-backup [ipv4] method=auto [ipv6] method=auto Create a connection profile for a secondary interface to add to the bond. For example, create the bond0-proxy-em1.nmconnection file in your local directory with the following content: [connection] id=em1 type=ethernet interface-name=em1 master=bond0 multi-connect=1 slave-type=bond Create a connection profile for a secondary interface to add to the bond. For example, create the bond0-proxy-em2.nmconnection file in your local directory with the following content: [connection] id=em2 type=ethernet interface-name=em2 master=bond0 multi-connect=1 slave-type=bond Retrieve the RHCOS ISO image from the RHCOS image mirror page and run the following command to customize the ISO image with your configured networking: USD coreos-installer iso customize rhcos-<version>-live.x86_64.iso \ --network-keyfile bond0.nmconnection \ --network-keyfile bond0-proxy-em1.nmconnection \ --network-keyfile bond0-proxy-em2.nmconnection Network settings are applied to the live system and are carried over to the destination system. 2.11.3.8. Customizing a live RHCOS PXE environment You can customize a live RHCOS PXE environment directly with the coreos-installer pxe customize subcommand. 
When you boot the PXE environment, the customizations are applied automatically. You can use this feature to configure the PXE environment to automatically install RHCOS. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. Retrieve the RHCOS kernel , initramfs and rootfs files from the RHCOS image mirror page and the Ignition config file, and then run the following command to create a new initramfs file that contains the customizations from your Ignition config: USD coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img \ --dest-ignition bootstrap.ign \ 1 --dest-device /dev/disk/by-id/scsi-<serial_number> \ 2 -o rhcos-<version>-custom-initramfs.x86_64.img 3 1 The Ignition config file that is generated from openshift-installer . 2 When you specify this option, the PXE environment automatically runs an install. Otherwise, the image remains configured for installing, but does not do so automatically unless you specify the coreos.inst.install_dev kernel argument. 3 Use the customized initramfs file in your PXE configuration. Add the ignition.firstboot and ignition.platform.id=metal kernel arguments if they are not already present. Applying your customizations affects every subsequent boot of RHCOS. 2.11.3.8.1. Modifying a live install PXE environment to enable the serial console On clusters installed with OpenShift Container Platform 4.12 and above, the serial console is disabled by default and all output is written to the graphical console. You can enable the serial console with the following procedure. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. Retrieve the RHCOS kernel , initramfs and rootfs files from the RHCOS image mirror page and the Ignition config file, and then run the following command to create a new customized initramfs file that enables the serial console to receive output: USD coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img \ --dest-ignition <path> \ 1 --dest-console tty0 \ 2 --dest-console ttyS0,<options> \ 3 --dest-device /dev/disk/by-id/scsi-<serial_number> \ 4 -o rhcos-<version>-custom-initramfs.x86_64.img 5 1 The location of the Ignition config to install. 2 The desired secondary console. In this case, the graphical console. Omitting this option will disable the graphical console. 3 The desired primary console. In this case, the serial console. The options field defines the baud rate and other settings. A common value for this field is 115200n8 . If no options are provided, the default kernel value of 9600n8 is used. For more information on the format of this option, see the Linux kernel serial console documentation. 4 The specified disk to install to. If you omit this option, the PXE environment automatically runs the installer which will fail unless you also specify the coreos.inst.install_dev kernel argument. 5 Use the customized initramfs file in your PXE configuration. Add the ignition.firstboot and ignition.platform.id=metal kernel arguments if they are not already present. Your customizations are applied and affect every subsequent boot of the PXE environment. 2.11.3.8.2. Modifying a live install PXE environment to use a custom certificate authority You can provide certificate authority (CA) certificates to Ignition with the --ignition-ca flag of the customize subcommand. You can use the CA certificates during both the installation boot and when provisioning the installed system. 
Note Custom CA certificates affect how Ignition fetches remote resources but they do not affect the certificates installed onto the system. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. Retrieve the RHCOS kernel , initramfs and rootfs files from the RHCOS image mirror page and run the following command to create a new customized initramfs file for use with a custom CA: USD coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img \ --ignition-ca cert.pem \ -o rhcos-<version>-custom-initramfs.x86_64.img Use the customized initramfs file in your PXE configuration. Add the ignition.firstboot and ignition.platform.id=metal kernel arguments if they are not already present. Important The coreos.inst.ignition_url kernel parameter does not work with the --ignition-ca flag. You must use the --dest-ignition flag to create a customized image for each cluster. Applying your custom CA certificate affects every subsequent boot of RHCOS. 2.11.3.8.3. Modifying a live install PXE environment with customized network settings You can embed a NetworkManager keyfile into the live PXE environment and pass it through to the installed system with the --network-keyfile flag of the customize subcommand. Warning When creating a connection profile, you must use a .nmconnection filename extension in the filename of the connection profile. If you do not use a .nmconnection filename extension, the cluster will apply the connection profile to the live environment, but it will not apply the configuration when the cluster first boots up the nodes, resulting in a setup that does not work. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. Create a connection profile for a bonded interface. For example, create the bond0.nmconnection file in your local directory with the following content: [connection] id=bond0 type=bond interface-name=bond0 multi-connect=1 [bond] miimon=100 mode=active-backup [ipv4] method=auto [ipv6] method=auto Create a connection profile for a secondary interface to add to the bond. For example, create the bond0-proxy-em1.nmconnection file in your local directory with the following content: [connection] id=em1 type=ethernet interface-name=em1 master=bond0 multi-connect=1 slave-type=bond Create a connection profile for a secondary interface to add to the bond. For example, create the bond0-proxy-em2.nmconnection file in your local directory with the following content: [connection] id=em2 type=ethernet interface-name=em2 master=bond0 multi-connect=1 slave-type=bond Retrieve the RHCOS kernel , initramfs and rootfs files from the RHCOS image mirror page and run the following command to create a new customized initramfs file that contains your configured networking: USD coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img \ --network-keyfile bond0.nmconnection \ --network-keyfile bond0-proxy-em1.nmconnection \ --network-keyfile bond0-proxy-em2.nmconnection \ -o rhcos-<version>-custom-initramfs.x86_64.img Use the customized initramfs file in your PXE configuration. Add the ignition.firstboot and ignition.platform.id=metal kernel arguments if they are not already present. Network settings are applied to the live system and are carried over to the destination system. 2.11.3.9. Advanced RHCOS installation reference This section illustrates the networking configuration and other advanced options that allow you to modify the Red Hat Enterprise Linux CoreOS (RHCOS) manual installation process. 
The following tables describe the kernel arguments and command-line options you can use with the RHCOS live installer and the coreos-installer command. 2.11.3.9.1. Networking and bonding options for ISO installations If you install RHCOS from an ISO image, you can add kernel arguments manually when you boot the image to configure networking for a node. If no networking arguments are specified, DHCP is activated in the initramfs when RHCOS detects that networking is required to fetch the Ignition config file. Important When adding networking arguments manually, you must also add the rd.neednet=1 kernel argument to bring the network up in the initramfs. The following information provides examples for configuring networking and bonding on your RHCOS nodes for ISO installations. The examples describe how to use the ip= , nameserver= , and bond= kernel arguments. Note Ordering is important when adding the kernel arguments: ip= , nameserver= , and then bond= . The networking options are passed to the dracut tool during system boot. For more information about the networking options supported by dracut , see the dracut.cmdline manual page . The following examples are the networking options for ISO installation. Configuring DHCP or static IP addresses To configure an IP address, either use DHCP ( ip=dhcp ) or set an individual static IP address ( ip=<host_ip> ). If setting a static IP, you must then identify the DNS server IP address ( nameserver=<dns_ip> ) on each node. The following example sets: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The hostname to core0.example.com The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41 Note When you use DHCP to configure IP addressing for the RHCOS machines, the machines also obtain the DNS server information through DHCP. For DHCP-based deployments, you can define the DNS server address that is used by the RHCOS nodes through your DHCP server configuration. Configuring an IP address without a static hostname You can configure an IP address without assigning a static hostname. If a static hostname is not set by the user, it will be picked up and automatically set by a reverse DNS lookup. To configure an IP address without a static hostname refer to the following example: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41 Specifying multiple network interfaces You can specify multiple network interfaces by setting multiple ip= entries. ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring default gateway and route Optional: You can configure routes to additional networks by setting an rd.route= value. Note When you configure one or multiple networks, one default gateway is required. If the additional network gateway is different from the primary network gateway, the default gateway must be the primary network gateway. 
Run the following command to configure the default gateway: ip=::10.10.10.254:::: Enter the following command to configure the route for the additional network: rd.route=20.20.20.0/24:20.20.20.254:enp2s0 Disabling DHCP on a single interface You can disable DHCP on a single interface, such as when there are two or more network interfaces and only one interface is being used. In the example, the enp1s0 interface has a static networking configuration and DHCP is disabled for enp2s0 , which is not used: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none Combining DHCP and static IP configurations You can combine DHCP and static IP configurations on systems with multiple network interfaces, for example: ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring VLANs on individual interfaces Optional: You can configure VLANs on individual interfaces by using the vlan= parameter. To configure a VLAN on a network interface and use a static IP address, run the following command: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0 To configure a VLAN on a network interface and to use DHCP, run the following command: ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0 Providing multiple DNS servers You can provide multiple DNS servers by adding a nameserver= entry for each server, for example: nameserver=1.1.1.1 nameserver=8.8.8.8 Bonding multiple network interfaces to a single interface Optional: You can bond multiple network interfaces to a single interface by using the bond= option. Refer to the following examples: The syntax for configuring a bonded interface is: bond=<name>[:<network_interfaces>][:options] <name> is the bonding device name ( bond0 ), <network_interfaces> represents a comma-separated list of physical (ethernet) interfaces ( em1,em2 ), and options is a comma-separated list of bonding options. Enter modinfo bonding to see available options. When you create a bonded interface using bond= , you must specify how the IP address is assigned and other information for the bonded interface. To configure the bonded interface to use DHCP, set the bond's IP address to dhcp . For example: bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp To configure the bonded interface to use a static IP address, enter the specific IP address you want and related information. For example: bond=bond0:em1,em2:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none Bonding multiple SR-IOV network interfaces to a dual port NIC interface Important Support for Day 1 operations associated with enabling NIC partitioning for SR-IOV devices is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Optional: You can bond multiple SR-IOV network interfaces to a dual port NIC interface by using the bond= option. On each node, you must perform the following tasks: Create the SR-IOV virtual functions (VFs) following the guidance in Managing SR-IOV devices . 
Follow the procedure in the "Attaching SR-IOV networking devices to virtual machines" section. Create the bond, attach the desired VFs to the bond and set the bond link state up following the guidance in Configuring network bonding . Follow any of the described procedures to create the bond. The following examples illustrate the syntax you must use: The syntax for configuring a bonded interface is bond=<name>[:<network_interfaces>][:options] . <name> is the bonding device name ( bond0 ), <network_interfaces> represents the virtual functions (VFs) by their known name in the kernel and shown in the output of the ip link command( eno1f0 , eno2f0 ), and options is a comma-separated list of bonding options. Enter modinfo bonding to see available options. When you create a bonded interface using bond= , you must specify how the IP address is assigned and other information for the bonded interface. To configure the bonded interface to use DHCP, set the bond's IP address to dhcp . For example: bond=bond0:eno1f0,eno2f0:mode=active-backup ip=bond0:dhcp To configure the bonded interface to use a static IP address, enter the specific IP address you want and related information. For example: bond=bond0:eno1f0,eno2f0:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none Using network teaming Optional: You can use a network teaming as an alternative to bonding by using the team= parameter: The syntax for configuring a team interface is: team=name[:network_interfaces] name is the team device name ( team0 ) and network_interfaces represents a comma-separated list of physical (ethernet) interfaces ( em1, em2 ). Note Teaming is planned to be deprecated when RHCOS switches to an upcoming version of RHEL. For more information, see this Red Hat Knowledgebase Article . Use the following example to configure a network team: team=team0:em1,em2 ip=team0:dhcp 2.11.3.9.2. coreos-installer options for ISO and PXE installations You can install RHCOS by running coreos-installer install <options> <device> at the command prompt, after booting into the RHCOS live environment from an ISO image. The following table shows the subcommands, options, and arguments you can pass to the coreos-installer command. Table 2.12. coreos-installer subcommands, command-line options, and arguments coreos-installer install subcommand Subcommand Description USD coreos-installer install <options> <device> Embed an Ignition config in an ISO image. coreos-installer install subcommand options Option Description -u , --image-url <url> Specify the image URL manually. -f , --image-file <path> Specify a local image file manually. Used for debugging. -i, --ignition-file <path> Embed an Ignition config from a file. -I , --ignition-url <URL> Embed an Ignition config from a URL. --ignition-hash <digest> Digest type-value of the Ignition config. -p , --platform <name> Override the Ignition platform ID for the installed system. --console <spec> Set the kernel and bootloader console for the installed system. For more information about the format of <spec> , see the Linux kernel serial console documentation. --append-karg <arg>... Append a default kernel argument to the installed system. --delete-karg <arg>... Delete a default kernel argument from the installed system. -n , --copy-network Copy the network configuration from the install environment. Important The --copy-network option only copies networking configuration found under /etc/NetworkManager/system-connections . 
In particular, it does not copy the system hostname. --network-dir <path> For use with -n . Default is /etc/NetworkManager/system-connections/ . --save-partlabel <lx>.. Save partitions with this label glob. --save-partindex <id>... Save partitions with this number or range. --insecure Skip RHCOS image signature verification. --insecure-ignition Allow Ignition URL without HTTPS or hash. --architecture <name> Target CPU architecture. Valid values are x86_64 and aarch64 . --preserve-on-error Do not clear partition table on error. -h , --help Print help information. coreos-installer install subcommand argument Argument Description <device> The destination device. coreos-installer ISO subcommands Subcommand Description USD coreos-installer iso customize <options> <ISO_image> Customize a RHCOS live ISO image. coreos-installer iso reset <options> <ISO_image> Restore a RHCOS live ISO image to default settings. coreos-installer iso ignition remove <options> <ISO_image> Remove the embedded Ignition config from an ISO image. coreos-installer ISO customize subcommand options Option Description --dest-ignition <path> Merge the specified Ignition config file into a new configuration fragment for the destination system. --dest-console <spec> Specify the kernel and bootloader console for the destination system. --dest-device <path> Install and overwrite the specified destination device. --dest-karg-append <arg> Add a kernel argument to each boot of the destination system. --dest-karg-delete <arg> Delete a kernel argument from each boot of the destination system. --network-keyfile <path> Configure networking by using the specified NetworkManager keyfile for live and destination systems. --ignition-ca <path> Specify an additional TLS certificate authority to be trusted by Ignition. --pre-install <path> Run the specified script before installation. --post-install <path> Run the specified script after installation. --installer-config <path> Apply the specified installer configuration file. --live-ignition <path> Merge the specified Ignition config file into a new configuration fragment for the live environment. --live-karg-append <arg> Add a kernel argument to each boot of the live environment. --live-karg-delete <arg> Delete a kernel argument from each boot of the live environment. --live-karg-replace <k=o=n> Replace a kernel argument in each boot of the live environment, in the form key=old=new . -f , --force Overwrite an existing Ignition config. -o , --output <path> Write the ISO to a new output file. -h , --help Print help information. coreos-installer PXE subcommands Subcommand Description Note that not all of these options are accepted by all subcommands. coreos-installer pxe customize <options> <path> Customize a RHCOS live PXE boot config. coreos-installer pxe ignition wrap <options> Wrap an Ignition config in an image. coreos-installer pxe ignition unwrap <options> <image_name> Show the wrapped Ignition config in an image. coreos-installer PXE customize subcommand options Option Description Note that not all of these options are accepted by all subcommands. --dest-ignition <path> Merge the specified Ignition config file into a new configuration fragment for the destination system. --dest-console <spec> Specify the kernel and bootloader console for the destination system. --dest-device <path> Install and overwrite the specified destination device. --network-keyfile <path> Configure networking by using the specified NetworkManager keyfile for live and destination systems. 
--ignition-ca <path> Specify an additional TLS certificate authority to be trusted by Ignition. --pre-install <path> Run the specified script before installation. --post-install <path> Run the specified script after installation. --installer-config <path> Apply the specified installer configuration file. --live-ignition <path> Merge the specified Ignition config file into a new configuration fragment for the live environment. -o, --output <path> Write the initramfs to a new output file. Note This option is required for PXE environments. -h , --help Print help information. 2.11.3.9.3. coreos.inst boot options for ISO or PXE installations You can automatically invoke coreos-installer options at boot time by passing coreos.inst boot arguments to the RHCOS live installer. These are provided in addition to the standard boot arguments. For ISO installations, the coreos.inst options can be added by interrupting the automatic boot at the bootloader menu. You can interrupt the automatic boot by pressing TAB while the RHEL CoreOS (Live) menu option is highlighted. For PXE or iPXE installations, the coreos.inst options must be added to the APPEND line before the RHCOS live installer is booted. The following table shows the RHCOS live installer coreos.inst boot options for ISO and PXE installations. Table 2.13. coreos.inst boot options Argument Description coreos.inst.install_dev Required. The block device on the system to install to. It is recommended to use the full path, such as /dev/sda , although sda is allowed. coreos.inst.ignition_url Optional: The URL of the Ignition config to embed into the installed system. If no URL is specified, no Ignition config is embedded. Only HTTP and HTTPS protocols are supported. coreos.inst.save_partlabel Optional: Comma-separated labels of partitions to preserve during the install. Glob-style wildcards are permitted. The specified partitions do not need to exist. coreos.inst.save_partindex Optional: Comma-separated indexes of partitions to preserve during the install. Ranges m-n are permitted, and either m or n can be omitted. The specified partitions do not need to exist. coreos.inst.insecure Optional: Permits the OS image that is specified by coreos.inst.image_url to be unsigned. coreos.inst.image_url Optional: Download and install the specified RHCOS image. This argument should not be used in production environments and is intended for debugging purposes only. While this argument can be used to install a version of RHCOS that does not match the live media, it is recommended that you instead use the media that matches the version you want to install. If you are using coreos.inst.image_url , you must also use coreos.inst.insecure . This is because the bare-metal media are not GPG-signed for OpenShift Container Platform. Only HTTP and HTTPS protocols are supported. coreos.inst.skip_reboot Optional: The system will not reboot after installing. After the install finishes, you will receive a prompt that allows you to inspect what is happening during installation. This argument should not be used in production environments and is intended for debugging purposes only. coreos.inst.platform_id Optional: The Ignition platform ID of the platform the RHCOS image is being installed on. Default is metal . This option determines whether or not to request an Ignition config from the cloud provider, such as VMware. For example: coreos.inst.platform_id=vmware . ignition.config.url Optional: The URL of the Ignition config for the live boot.
For example, this can be used to customize how coreos-installer is invoked, or to run code before or after the installation. This is different from coreos.inst.ignition_url , which is the Ignition config for the installed system. 2.11.4. Enabling multipathing with kernel arguments on RHCOS RHCOS supports multipathing on the primary disk, allowing stronger resilience to hardware failure to achieve higher host availability. You can enable multipathing at installation time for nodes that were provisioned in OpenShift Container Platform 4.8 or later. While postinstallation support is available by activating multipathing via the machine config, enabling multipathing during installation is recommended. In setups where any I/O to non-optimized paths results in I/O system errors, you must enable multipathing at installation time. Important On IBM Z and IBM(R) LinuxONE, you can enable multipathing only if you configured your cluster for it during installation. For more information, see "Installing RHCOS and starting the OpenShift Container Platform bootstrap process" in Installing a cluster with z/VM on IBM Z and IBM(R) LinuxONE . The following procedure enables multipath at installation time and appends kernel arguments to the coreos-installer install command so that the installed system itself will use multipath beginning from the first boot. Note OpenShift Container Platform does not support enabling multipathing as a day-2 activity on nodes that have been upgraded from 4.6 or earlier. Prerequisites You have created the Ignition config files for your cluster. You have reviewed Installing RHCOS and starting the OpenShift Container Platform bootstrap process . Procedure To enable multipath and start the multipathd daemon, run the following command on the installation host: USD mpathconf --enable && systemctl start multipathd.service Optional: If booting the PXE or ISO, you can instead enable multipath by adding rd.multipath=default from the kernel command line. Append the kernel arguments by invoking the coreos-installer program: If there is only one multipath device connected to the machine, it should be available at path /dev/mapper/mpatha . For example: USD coreos-installer install /dev/mapper/mpatha \ 1 --ignition-url=http://host/worker.ign \ --append-karg rd.multipath=default \ --append-karg root=/dev/disk/by-label/dm-mpath-root \ --append-karg rw 1 Indicates the path of the single multipathed device. If there are multiple multipath devices connected to the machine, or to be more explicit, instead of using /dev/mapper/mpatha , it is recommended to use the World Wide Name (WWN) symlink available in /dev/disk/by-id . For example: USD coreos-installer install /dev/disk/by-id/wwn-<wwn_ID> \ 1 --ignition-url=http://host/worker.ign \ --append-karg rd.multipath=default \ --append-karg root=/dev/disk/by-label/dm-mpath-root \ --append-karg rw 1 Indicates the WWN ID of the target multipathed device. For example, 0xx194e957fcedb4841 . This symlink can also be used as the coreos.inst.install_dev kernel argument when using special coreos.inst.* arguments to direct the live installer. For more information, see "Installing RHCOS and starting the OpenShift Container Platform bootstrap process". Reboot into the installed system. Check that the kernel arguments worked by going to one of the worker nodes and listing the kernel command line arguments (in /proc/cmdline on the host): USD oc debug node/ip-10-0-141-105.ec2.internal Example output Starting pod/ip-10-0-141-105ec2internal-debug ... 
To use host binaries, run `chroot /host` sh-4.2# cat /host/proc/cmdline ... rd.multipath=default root=/dev/disk/by-label/dm-mpath-root ... sh-4.2# exit You should see the added kernel arguments. 2.11.4.1. Enabling multipathing on secondary disks RHCOS also supports multipathing on a secondary disk. Instead of kernel arguments, you use Ignition to enable multipathing for the secondary disk at installation time. Prerequisites You have read the section Disk partitioning . You have read Enabling multipathing with kernel arguments on RHCOS . You have installed the Butane utility. Procedure Create a Butane config with information similar to the following: Example multipath-config.bu variant: openshift version: 4.13.0 systemd: units: - name: mpath-configure.service enabled: true contents: | [Unit] Description=Configure Multipath on Secondary Disk ConditionFirstBoot=true ConditionPathExists=!/etc/multipath.conf Before=multipathd.service 1 DefaultDependencies=no [Service] Type=oneshot ExecStart=/usr/sbin/mpathconf --enable 2 [Install] WantedBy=multi-user.target - name: mpath-var-lib-container.service enabled: true contents: | [Unit] Description=Set Up Multipath On /var/lib/containers ConditionFirstBoot=true 3 Requires=dev-mapper-mpatha.device After=dev-mapper-mpatha.device After=ostree-remount.service Before=kubelet.service DefaultDependencies=no [Service] 4 Type=oneshot ExecStart=/usr/sbin/mkfs.xfs -L containers -m reflink=1 /dev/mapper/mpatha ExecStart=/usr/bin/mkdir -p /var/lib/containers [Install] WantedBy=multi-user.target - name: var-lib-containers.mount enabled: true contents: | [Unit] Description=Mount /var/lib/containers After=mpath-var-lib-containers.service Before=kubelet.service 5 [Mount] 6 What=/dev/disk/by-label/dm-mpath-containers Where=/var/lib/containers Type=xfs [Install] WantedBy=multi-user.target 1 The configuration must be set before launching the multipath daemon. 2 Starts the mpathconf utility. 3 This field must be set to the value true . 4 Creates the filesystem and directory /var/lib/containers . 5 The device must be mounted before starting any nodes. 6 Mounts the device to the /var/lib/containers mount point. This location cannot be a symlink. Create the Ignition configuration by running the following command: USD butane --pretty --strict multipath-config.bu > multipath-config.ign Continue with the rest of the first boot RHCOS installation process. Important Do not add the rd.multipath or root kernel arguments on the command-line during installation unless the primary disk is also multipathed. Additional resources See Installing RHCOS and starting the OpenShift Container Platform bootstrap process for more information on using special coreos.inst.* arguments to direct the live installer. 2.12. Waiting for the bootstrap process to complete The OpenShift Container Platform bootstrap process begins after the cluster nodes first boot into the persistent RHCOS environment that has been installed to disk. The configuration information provided through the Ignition config files is used to initialize the bootstrap process and install OpenShift Container Platform on the machines. You must wait for the bootstrap process to complete. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS and load balancing infrastructure. You have obtained the installation program and generated the Ignition config files for your cluster. 
You installed RHCOS on your cluster machines and provided the Ignition config files that the OpenShift Container Platform installation program generated. Your machines have direct internet access or have an HTTP or HTTPS proxy available. Procedure Monitor the bootstrap process: USD ./openshift-install --dir <installation_directory> wait-for bootstrap-complete \ 1 --log-level=info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . Example output INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443... INFO API v1.26.0 up INFO Waiting up to 30m0s for bootstrapping to complete... INFO It is now safe to remove the bootstrap resources The command succeeds when the Kubernetes API server signals that it has been bootstrapped on the control plane machines. After the bootstrap process is complete, remove the bootstrap machine from the load balancer. Important You must remove the bootstrap machine from the load balancer at this point. You can also remove or reformat the bootstrap machine itself. Additional resources See Monitoring installation progress for more information about monitoring the installation logs and retrieving diagnostic data if installation issues arise. 2.13. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 2.14. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.26.0 master-1 Ready master 63m v1.26.0 master-2 Ready master 64m v1.26.0 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. 
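Optional: To make the pending requests easier to see when many CSRs are listed, you can filter the output. The following command is an illustrative convenience only and is not part of the documented procedure: USD oc get csr | grep Pending The filtered output helps you confirm that a client request exists for each machine that you added before you approve the requests.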
If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.26.0 master-1 Ready master 73m v1.26.0 master-2 Ready master 74m v1.26.0 worker-0 Ready worker 11m v1.26.0 worker-1 Ready worker 11m v1.26.0 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 2.15. Initial Operator configuration After the control plane initializes, you must immediately configure some Operators so that they all become available. 
Prerequisites Your control plane has initialized. Procedure Watch the cluster components come online: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.13.0 True False False 19m baremetal 4.13.0 True False False 37m cloud-credential 4.13.0 True False False 40m cluster-autoscaler 4.13.0 True False False 37m config-operator 4.13.0 True False False 38m console 4.13.0 True False False 26m csi-snapshot-controller 4.13.0 True False False 37m dns 4.13.0 True False False 37m etcd 4.13.0 True False False 36m image-registry 4.13.0 True False False 31m ingress 4.13.0 True False False 30m insights 4.13.0 True False False 31m kube-apiserver 4.13.0 True False False 26m kube-controller-manager 4.13.0 True False False 36m kube-scheduler 4.13.0 True False False 36m kube-storage-version-migrator 4.13.0 True False False 37m machine-api 4.13.0 True False False 29m machine-approver 4.13.0 True False False 37m machine-config 4.13.0 True False False 36m marketplace 4.13.0 True False False 37m monitoring 4.13.0 True False False 29m network 4.13.0 True False False 38m node-tuning 4.13.0 True False False 37m openshift-apiserver 4.13.0 True False False 32m openshift-controller-manager 4.13.0 True False False 30m openshift-samples 4.13.0 True False False 32m operator-lifecycle-manager 4.13.0 True False False 37m operator-lifecycle-manager-catalog 4.13.0 True False False 37m operator-lifecycle-manager-packageserver 4.13.0 True False False 32m service-ca 4.13.0 True False False 38m storage 4.13.0 True False False 37m Configure the Operators that are not available. Additional resources See Gathering logs from a failed installation for details about gathering data in the event of a failed OpenShift Container Platform installation. See Troubleshooting Operator issues for steps to check Operator pod health across the cluster and gather Operator logs for diagnosis. 2.15.1. Image registry removed during installation On platforms that do not provide shareable object storage, the OpenShift Image Registry Operator bootstraps itself as Removed . This allows openshift-installer to complete installations on these platform types. After installation, you must edit the Image Registry Operator configuration to switch the managementState from Removed to Managed . When this has completed, you must configure storage. 2.15.2. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 2.15.2.1. Configuring registry storage for bare metal and other manual installations As a cluster administrator, following installation you must configure your registry to use storage. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have a cluster that uses manually-provisioned Red Hat Enterprise Linux CoreOS (RHCOS) nodes, such as bare metal. You have provisioned persistent storage for your cluster, such as Red Hat OpenShift Data Foundation. 
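Optional: Before you configure the registry storage, you can confirm which storage classes your provisioned storage exposes. This check is an illustrative sketch and is not part of the documented prerequisites: USD oc get storageclass If the persistent storage was provisioned with a dynamic provisioner, the command lists at least one storage class.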
Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. ReadWriteOnce access also requires that the registry uses the Recreate rollout strategy. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. The registry storage must have 100Gi capacity. Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When you use shared storage, review your security settings to prevent outside access. Verify that you do not have a registry pod: USD oc get pod -n openshift-image-registry -l docker-registry=default Example output No resources found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure. Check the registry configuration: USD oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: Leave the claim field blank to allow the automatic creation of an image-registry-storage PVC. Check the clusteroperator status: USD oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.13 True False False 6h50m Ensure that your registry is set to managed to enable building and pushing of images. Run: USD oc edit configs.imageregistry.operator.openshift.io Then, change the line managementState: Removed to managementState: Managed . 2.15.2.2. Configuring storage for the image registry in non-production clusters You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry. Procedure To set the image registry storage to an empty directory: USD oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}' Warning Configure this option for only non-production clusters. If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error: Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found Wait a few minutes and run the command again. 2.15.2.3. Configuring block registry storage for bare metal To allow the image registry to use block storage types during upgrades as a cluster administrator, you can use the Recreate rollout strategy. Important Block storage volumes, or block persistent volumes, are supported but not recommended for use with the image registry on production clusters. An installation where the registry is configured on block storage is not highly available because the registry cannot have more than one replica. If you choose to use a block storage volume with the image registry, you must use a filesystem persistent volume claim (PVC). Procedure Enter the following command to set the image registry storage as a block storage type, patch the registry so that it uses the Recreate rollout strategy, and runs with only one ( 1 ) replica: USD oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}' Provision the PV for the block storage device, and create a PVC for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode.
Create a pvc.yaml file with the following contents to define a VMware vSphere PersistentVolumeClaim object: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4 1 A unique name that represents the PersistentVolumeClaim object. 2 The namespace for the PersistentVolumeClaim object, which is openshift-image-registry . 3 The access mode of the persistent volume claim. With ReadWriteOnce , the volume can be mounted with read and write permissions by a single node. 4 The size of the persistent volume claim. Enter the following command to create the PersistentVolumeClaim object from the file: USD oc create -f pvc.yaml -n openshift-image-registry Enter the following command to edit the registry configuration so that it references the correct PVC: USD oc edit config.imageregistry.operator.openshift.io -o yaml Example output storage: pvc: claim: 1 1 By creating a custom PVC, you can leave the claim field blank for the default automatic creation of an image-registry-storage PVC. 2.16. Completing installation on user-provisioned infrastructure After you complete the Operator configuration, you can finish installing the cluster on infrastructure that you provide. Prerequisites Your control plane has initialized. You have completed the initial Operator configuration. Procedure Confirm that all the cluster components are online with the following command: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.13.0 True False False 19m baremetal 4.13.0 True False False 37m cloud-credential 4.13.0 True False False 40m cluster-autoscaler 4.13.0 True False False 37m config-operator 4.13.0 True False False 38m console 4.13.0 True False False 26m csi-snapshot-controller 4.13.0 True False False 37m dns 4.13.0 True False False 37m etcd 4.13.0 True False False 36m image-registry 4.13.0 True False False 31m ingress 4.13.0 True False False 30m insights 4.13.0 True False False 31m kube-apiserver 4.13.0 True False False 26m kube-controller-manager 4.13.0 True False False 36m kube-scheduler 4.13.0 True False False 36m kube-storage-version-migrator 4.13.0 True False False 37m machine-api 4.13.0 True False False 29m machine-approver 4.13.0 True False False 37m machine-config 4.13.0 True False False 36m marketplace 4.13.0 True False False 37m monitoring 4.13.0 True False False 29m network 4.13.0 True False False 38m node-tuning 4.13.0 True False False 37m openshift-apiserver 4.13.0 True False False 32m openshift-controller-manager 4.13.0 True False False 30m openshift-samples 4.13.0 True False False 32m operator-lifecycle-manager 4.13.0 True False False 37m operator-lifecycle-manager-catalog 4.13.0 True False False 37m operator-lifecycle-manager-packageserver 4.13.0 True False False 32m service-ca 4.13.0 True False False 38m storage 4.13.0 True False False 37m Alternatively, the following command notifies you when all of the clusters are available. It also retrieves and displays credentials: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output INFO Waiting up to 30m0s for the cluster to initialize... The command succeeds when the Cluster Version Operator finishes deploying the OpenShift Container Platform cluster from Kubernetes API server. 
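Optional: You can also confirm the version that the Cluster Version Operator reports once the deployment finishes. This check is an illustrative sketch and is not part of the documented procedure: USD oc get clusterversion The output lists the installed cluster version and shows whether it is reported as available.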
Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Confirm that the Kubernetes API server is communicating with the pods. To view a list of all pods, use the following command: USD oc get pods --all-namespaces Example output NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m ... View the logs for a pod that is listed in the output of the command by using the following command: USD oc logs <pod_name> -n <namespace> 1 1 Specify the pod name and namespace, as shown in the output of the command. If the pod logs display, the Kubernetes API server can communicate with the cluster machines. For an installation with Fibre Channel Protocol (FCP), additional steps are required to enable multipathing. Do not enable multipathing during installation. See "Enabling multipathing with kernel arguments on RHCOS" in the Post-installation machine configuration tasks documentation for more information. 2.17. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.13, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console . After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 2.18. steps Validating an installation . Customize your cluster . If necessary, you can opt out of remote health reporting . Set up your registry and configure registry storage .
[ "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF", "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF", "global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s", "dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1", "api.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>", "api-int.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>", "random.apps.ocp4.example.com. 
604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>", "console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>", "bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96", "dig +noall +answer @<nameserver_ip> -x 192.168.1.5", "5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2", "dig +noall +answer @<nameserver_ip> -x 192.168.1.96", "96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com.", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "mkdir <installation_directory>", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd00:10:128::/56 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd00:172:16::/112", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd01::/48 hostPrefix: 64", "networking: serviceNetwork: - 172.30.0.0/16 - fd02::/112", "networking: machineNetwork: - cidr: 10.0.0.0/16", "apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{\"auths\": ...}' 15 sshKey: 'ssh-ed25519 AAAA...' 16", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "compute: - name: worker platform: {} replicas: 0", "./openshift-install create manifests --dir <installation_directory> 1", "./openshift-install create ignition-configs --dir <installation_directory> 1", ". 
β”œβ”€β”€ auth β”‚ β”œβ”€β”€ kubeadmin-password β”‚ └── kubeconfig β”œβ”€β”€ bootstrap.ign β”œβ”€β”€ master.ign β”œβ”€β”€ metadata.json └── worker.ign", "sha512sum <installation_directory>/bootstrap.ign", "curl -k http://<HTTP_server>/bootstrap.ign 1", "% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa", "openshift-install coreos print-stream-json | grep '\\.iso[^.]'", "\"location\": \"<url>/art/storage/releases/rhcos-4.13-aarch64/<release>/aarch64/rhcos-<release>-live.aarch64.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.13-ppc64le/<release>/ppc64le/rhcos-<release>-live.ppc64le.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.13-s390x/<release>/s390x/rhcos-<release>-live.s390x.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.13/<release>/x86_64/rhcos-<release>-live.x86_64.iso\",", "sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2", "sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b", "Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied", "curl -k http://<HTTP_server>/bootstrap.ign 1", "% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa", "openshift-install coreos print-stream-json | grep -Eo '\"https.*(kernel-|initramfs.|rootfs.)\\w+(\\.img)?\"'", "\"<url>/art/storage/releases/rhcos-4.13-aarch64/<release>/aarch64/rhcos-<release>-live-kernel-aarch64\" \"<url>/art/storage/releases/rhcos-4.13-aarch64/<release>/aarch64/rhcos-<release>-live-initramfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.13-aarch64/<release>/aarch64/rhcos-<release>-live-rootfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.13-ppc64le/49.84.202110081256-0/ppc64le/rhcos-<release>-live-kernel-ppc64le\" \"<url>/art/storage/releases/rhcos-4.13-ppc64le/<release>/ppc64le/rhcos-<release>-live-initramfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.13-ppc64le/<release>/ppc64le/rhcos-<release>-live-rootfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.13-s390x/<release>/s390x/rhcos-<release>-live-kernel-s390x\" \"<url>/art/storage/releases/rhcos-4.13-s390x/<release>/s390x/rhcos-<release>-live-initramfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.13-s390x/<release>/s390x/rhcos-<release>-live-rootfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.13/<release>/x86_64/rhcos-<release>-live-kernel-x86_64\" \"<url>/art/storage/releases/rhcos-4.13/<release>/x86_64/rhcos-<release>-live-initramfs.x86_64.img\" \"<url>/art/storage/releases/rhcos-4.13/<release>/x86_64/rhcos-<release>-live-rootfs.x86_64.img\"", "DEFAULT pxeboot TIMEOUT 20 PROMPT 0 LABEL pxeboot KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> 1 APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda 
coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 2 3", "kernel http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> initrd=main coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 1 2 initrd --name main http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img 3 boot", "menuentry 'Install CoreOS' { linux rhcos-<version>-live-kernel-<architecture> coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 1 2 initrd rhcos-<version>-live-initramfs.<architecture>.img 3 }", "Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied", "sudo coreos-installer install --copy-network --ignition-url=http://host/worker.ign /dev/disk/by-id/scsi-<serial_number>", "openshift-install create manifests --dir <installation_directory>", "variant: openshift version: 4.13.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true", "butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml", "openshift-install create ignition-configs --dir <installation_directory> 1", ". β”œβ”€β”€ auth β”‚ β”œβ”€β”€ kubeadmin-password β”‚ └── kubeconfig β”œβ”€β”€ bootstrap.ign β”œβ”€β”€ master.ign β”œβ”€β”€ metadata.json └── worker.ign", "coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partlabel 'data*' /dev/disk/by-id/scsi-<serial_number>", "coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partindex 6 /dev/disk/by-id/scsi-<serial_number>", "coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partindex 5- /dev/disk/by-id/scsi-<serial_number>", "coreos.inst.save_partlabel=data*", "coreos.inst.save_partindex=5-", "coreos.inst.save_partindex=6", "coreos-installer install --console=tty0 \\ 1 --console=ttyS0,<options> \\ 2 --ignition-url=http://host/worker.ign /dev/disk/by-id/scsi-<serial_number>", "coreos-installer iso customize rhcos-<version>-live.x86_64.iso --dest-ignition bootstrap.ign \\ 1 --dest-device /dev/disk/by-id/scsi-<serial_number> 2", "coreos-installer iso reset rhcos-<version>-live.x86_64.iso", "coreos-installer iso customize rhcos-<version>-live.x86_64.iso --dest-ignition <path> \\ 1 --dest-console tty0 \\ 2 --dest-console ttyS0,<options> \\ 3 --dest-device /dev/disk/by-id/scsi-<serial_number> 4", "coreos-installer iso reset rhcos-<version>-live.x86_64.iso", "coreos-installer iso customize rhcos-<version>-live.x86_64.iso --ignition-ca cert.pem", "[connection] id=bond0 type=bond interface-name=bond0 multi-connect=1 [bond] miimon=100 mode=active-backup [ipv4] method=auto [ipv6] method=auto", "[connection] id=em1 type=ethernet interface-name=em1 master=bond0 multi-connect=1 slave-type=bond", "[connection] id=em2 type=ethernet interface-name=em2 master=bond0 multi-connect=1 slave-type=bond", "coreos-installer iso customize rhcos-<version>-live.x86_64.iso --network-keyfile bond0.nmconnection --network-keyfile bond0-proxy-em1.nmconnection 
--network-keyfile bond0-proxy-em2.nmconnection", "coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img --dest-ignition bootstrap.ign \\ 1 --dest-device /dev/disk/by-id/scsi-<serial_number> \\ 2 -o rhcos-<version>-custom-initramfs.x86_64.img 3", "coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img --dest-ignition <path> \\ 1 --dest-console tty0 \\ 2 --dest-console ttyS0,<options> \\ 3 --dest-device /dev/disk/by-id/scsi-<serial_number> \\ 4 -o rhcos-<version>-custom-initramfs.x86_64.img 5", "coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img --ignition-ca cert.pem -o rhcos-<version>-custom-initramfs.x86_64.img", "[connection] id=bond0 type=bond interface-name=bond0 multi-connect=1 [bond] miimon=100 mode=active-backup [ipv4] method=auto [ipv6] method=auto", "[connection] id=em1 type=ethernet interface-name=em1 master=bond0 multi-connect=1 slave-type=bond", "[connection] id=em2 type=ethernet interface-name=em2 master=bond0 multi-connect=1 slave-type=bond", "coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img --network-keyfile bond0.nmconnection --network-keyfile bond0-proxy-em1.nmconnection --network-keyfile bond0-proxy-em2.nmconnection -o rhcos-<version>-custom-initramfs.x86_64.img", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", "ip=::10.10.10.254::::", "rd.route=20.20.20.0/24:20.20.20.254:enp2s0", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none", "ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0", "ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0", "nameserver=1.1.1.1 nameserver=8.8.8.8", "bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp", "bond=bond0:em1,em2:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none", "bond=bond0:eno1f0,eno2f0:mode=active-backup ip=bond0:dhcp", "bond=bond0:eno1f0,eno2f0:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none", "team=team0:em1,em2 ip=team0:dhcp", "mpathconf --enable && systemctl start multipathd.service", "coreos-installer install /dev/mapper/mpatha \\ 1 --ignition-url=http://host/worker.ign --append-karg rd.multipath=default --append-karg root=/dev/disk/by-label/dm-mpath-root --append-karg rw", "coreos-installer install /dev/disk/by-id/wwn-<wwn_ID> \\ 1 --ignition-url=http://host/worker.ign --append-karg rd.multipath=default --append-karg root=/dev/disk/by-label/dm-mpath-root --append-karg rw", "oc debug node/ip-10-0-141-105.ec2.internal", "Starting pod/ip-10-0-141-105ec2internal-debug To use host binaries, run `chroot /host` sh-4.2# cat /host/proc/cmdline rd.multipath=default root=/dev/disk/by-label/dm-mpath-root sh-4.2# exit", "variant: openshift version: 4.13.0 systemd: units: - name: mpath-configure.service enabled: true contents: | [Unit] Description=Configure Multipath on Secondary Disk ConditionFirstBoot=true ConditionPathExists=!/etc/multipath.conf Before=multipathd.service 1 DefaultDependencies=no [Service] Type=oneshot ExecStart=/usr/sbin/mpathconf --enable 2 [Install] WantedBy=multi-user.target - name: 
mpath-var-lib-container.service enabled: true contents: | [Unit] Description=Set Up Multipath On /var/lib/containers ConditionFirstBoot=true 3 Requires=dev-mapper-mpatha.device After=dev-mapper-mpatha.device After=ostree-remount.service Before=kubelet.service DefaultDependencies=no [Service] 4 Type=oneshot ExecStart=/usr/sbin/mkfs.xfs -L containers -m reflink=1 /dev/mapper/mpatha ExecStart=/usr/bin/mkdir -p /var/lib/containers [Install] WantedBy=multi-user.target - name: var-lib-containers.mount enabled: true contents: | [Unit] Description=Mount /var/lib/containers After=mpath-var-lib-containers.service Before=kubelet.service 5 [Mount] 6 What=/dev/disk/by-label/dm-mpath-containers Where=/var/lib/containers Type=xfs [Install] WantedBy=multi-user.target", "butane --pretty --strict multipath-config.bu > multipath-config.ign", "./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2", "INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.26.0 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.26.0 master-1 Ready master 63m v1.26.0 master-2 Ready master 64m v1.26.0", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.26.0 master-1 Ready master 73m v1.26.0 master-2 Ready master 74m v1.26.0 worker-0 Ready worker 11m v1.26.0 worker-1 Ready worker 11m v1.26.0", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.13.0 True False False 19m baremetal 4.13.0 True False False 37m cloud-credential 4.13.0 True False False 40m cluster-autoscaler 4.13.0 True False False 37m config-operator 4.13.0 True False False 38m console 4.13.0 True False False 26m csi-snapshot-controller 4.13.0 True False False 37m dns 4.13.0 True False False 37m etcd 4.13.0 True False False 36m image-registry 4.13.0 True False False 31m ingress 4.13.0 True False False 30m insights 4.13.0 True False False 31m kube-apiserver 4.13.0 True False False 26m kube-controller-manager 4.13.0 True False False 36m kube-scheduler 4.13.0 True False False 36m kube-storage-version-migrator 4.13.0 True False False 37m machine-api 4.13.0 True False False 29m machine-approver 4.13.0 True False False 37m machine-config 4.13.0 True False False 36m marketplace 4.13.0 True False False 37m monitoring 4.13.0 True False False 29m network 4.13.0 True False False 38m node-tuning 4.13.0 True False False 37m openshift-apiserver 4.13.0 True False False 32m 
openshift-controller-manager 4.13.0 True False False 30m openshift-samples 4.13.0 True False False 32m operator-lifecycle-manager 4.13.0 True False False 37m operator-lifecycle-manager-catalog 4.13.0 True False False 37m operator-lifecycle-manager-packageserver 4.13.0 True False False 32m service-ca 4.13.0 True False False 38m storage 4.13.0 True False False 37m", "oc get pod -n openshift-image-registry -l docker-registry=default", "No resources found in openshift-image-registry namespace", "oc edit configs.imageregistry.operator.openshift.io", "storage: pvc: claim:", "oc get clusteroperator image-registry", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.13 True False False 6h50m", "oc edit configs.imageregistry/cluster", "managementState: Removed", "managementState: Managed", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'", "Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found", "oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4", "oc create -f pvc.yaml -n openshift-image-registry", "oc edit config.imageregistry.operator.openshift.io -o yaml", "storage: pvc: claim: 1", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.13.0 True False False 19m baremetal 4.13.0 True False False 37m cloud-credential 4.13.0 True False False 40m cluster-autoscaler 4.13.0 True False False 37m config-operator 4.13.0 True False False 38m console 4.13.0 True False False 26m csi-snapshot-controller 4.13.0 True False False 37m dns 4.13.0 True False False 37m etcd 4.13.0 True False False 36m image-registry 4.13.0 True False False 31m ingress 4.13.0 True False False 30m insights 4.13.0 True False False 31m kube-apiserver 4.13.0 True False False 26m kube-controller-manager 4.13.0 True False False 36m kube-scheduler 4.13.0 True False False 36m kube-storage-version-migrator 4.13.0 True False False 37m machine-api 4.13.0 True False False 29m machine-approver 4.13.0 True False False 37m machine-config 4.13.0 True False False 36m marketplace 4.13.0 True False False 37m monitoring 4.13.0 True False False 29m network 4.13.0 True False False 38m node-tuning 4.13.0 True False False 37m openshift-apiserver 4.13.0 True False False 32m openshift-controller-manager 4.13.0 True False False 30m openshift-samples 4.13.0 True False False 32m operator-lifecycle-manager 4.13.0 True False False 37m operator-lifecycle-manager-catalog 4.13.0 True False False 37m operator-lifecycle-manager-packageserver 4.13.0 True False False 32m service-ca 4.13.0 True False False 38m storage 4.13.0 True False False 37m", "./openshift-install --dir <installation_directory> wait-for install-complete 1", "INFO Waiting up to 30m0s for the cluster to initialize", "oc get pods --all-namespaces", "NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 
5m", "oc logs <pod_name> -n <namespace> 1" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/installing_on_bare_metal/installing-bare-metal
Chapter 6. Load balancing traffic with HAProxy
Chapter 6. Load balancing traffic with HAProxy
The HAProxy service provides load balancing of traffic to Controller nodes in the high availability cluster, as well as logging and sample configurations. The haproxy package contains the haproxy daemon, which corresponds to the systemd service of the same name. Pacemaker manages the HAProxy service as a highly available service called haproxy-bundle.
For more information about HAProxy, see the HAProxy 1.8 documentation.
For information on verifying that HAProxy is configured correctly, see the KCS article How can I verify my haproxy.cfg is correctly configured to load balance openstack services?
6.1. How HAProxy works
Director can configure most Red Hat OpenStack Platform services to use the HAProxy service. Director configures those services in the /var/lib/config-data/haproxy/etc/haproxy/haproxy.cfg file, which instructs HAProxy to run in a dedicated container on each overcloud node.
The following table shows the list of services that HAProxy manages:
Table 6.1. Services managed by HAProxy
aodh
cinder
glance_api
gnocchi
haproxy.stats
heat_api
heat_cfn
horizon
keystone_admin
keystone_public
mysql
neutron
nova_metadata
nova_novncproxy
nova_osapi
nova_placement
For each service in the haproxy.cfg file, you can see the following properties:
listen : The name of the service that is listening for requests.
bind : The IP address and TCP port number on which the service is listening.
server : The name of each Controller node server that uses HAProxy, the IP address and listening port, and additional information about the server.
The following example shows the OpenStack Block Storage (cinder) service configuration in the haproxy.cfg file:
This example output shows the following information about the OpenStack Block Storage (cinder) service:
172.16.0.10:8776 : Virtual IP address and port on the Internal API network (VLAN201) to use within the overcloud.
192.168.1.150:8776 : Virtual IP address and port on the External network (VLAN100) that provides access to the API network from outside the overcloud.
8777 : Port number on which the OpenStack Block Storage (cinder) service is listening.
server : Controller node names and IP addresses. HAProxy can direct requests made to those IP addresses to one of the Controller nodes listed in the server output.
httpchk : Enables health checks on the Controller node servers.
fall 5 : Number of failed health checks required to determine that the service is offline.
inter 2000 : Interval between two consecutive health checks, in milliseconds.
rise 2 : Number of successful health checks required to determine that the service is running.
For more information about settings you can use in the haproxy.cfg file, see the /usr/share/doc/haproxy-[VERSION]/configuration.txt file on any node where the haproxy package is installed.
6.2. Viewing HAProxy stats
By default, director enables HAProxy Stats (statistics reporting) on all HA deployments. With this feature, you can view detailed information about data transfer, connections, and server states on the HAProxy Stats page. Director also sets the IP:Port address that you use to reach the HAProxy Stats page and stores this information in the haproxy.cfg file.
Procedure
Open the /var/lib/config-data/haproxy/etc/haproxy/haproxy.cfg file on any Controller node where HAProxy is installed.
Locate the listen haproxy.stats section:
In a web browser, navigate to 10.200.0.6:1993 and enter the credentials from the stats auth row to view the HAProxy Stats page.
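HAProxy can also return the same statistics in CSV format when you append ;csv to the stats URI, which is convenient for scripting or quick checks from a shell on a Controller node. The following command is a minimal sketch that assumes the 10.200.0.6:1993 bind address and the admin user from the listen haproxy.stats example; replace <haproxy-stats-password> with the value set on the stats auth line:
curl -u admin:<haproxy-stats-password> "http://10.200.0.6:1993/;csv"
Each row of the CSV output describes one frontend, backend, or server entry, including its status column, so you can confirm whether a specific Controller node backend is UP or DOWN without opening the web interface.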
[ "listen cinder bind 172.16.0.10:8776 bind 192.168.1.150:8776 mode http http-request set-header X-Forwarded-Proto https if { ssl_fc } http-request set-header X-Forwarded-Proto http if !{ ssl_fc } option httpchk server overcloud-controller-0 172.16.0.13:8777 check fall 5 inter 2000 rise 2 server overcloud-controller-1 172.16.0.14:8777 check fall 5 inter 2000 rise 2 server overcloud-controller-2 172.16.0.15:8777 check fall 5 inter 2000 rise 2", "listen haproxy.stats bind 10.200.0.6:1993 mode http stats enable stats uri / stats auth admin:<haproxy-stats-password>" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/high_availability_deployment_and_usage/assembly_haproxy