title | content | commands | url |
---|---|---|---|
Part IX. Set Up Cache Writing | Part IX. Set Up Cache Writing | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/part-Set_Up_Cache_Writing |
2.34. RHEA-2011:0608 - new package: python-rhsm | 2.34. RHEA-2011:0608 - new package: python-rhsm A new python-rhsm package is now available for Red Hat Enterprise Linux 6. The new python-rhsm package provides access to the Subscription Management tools. It helps users to understand specific products which are installed on their machines and specific subscriptions which their machines consume. This enhancement update adds a new python-rhsm package to Red Hat Enterprise Linux 6. (BZ# 661863 ) All users requiring python-rhsm should install this newly-released package, which adds this enhancement. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.1_technical_notes/python-rhsm_new |
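A brief sketch of acting on the advisory above, assuming a Red Hat Enterprise Linux 6 system that is already subscribed to the appropriate channels:

    yum install python-rhsm    # install the newly released package
    rpm -q python-rhsm         # verify the installed version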
Chapter 5. Erasure code pools overview | Chapter 5. Erasure code pools overview Ceph storage strategies involve defining data durability requirements. Data durability means the ability to sustain the loss of one or more OSDs without losing data. Ceph stores data in pools and there are two types of pools: replicated and erasure-coded. Ceph uses replicated pools by default, meaning that Ceph copies every object from a primary OSD node to one or more secondary OSDs. Erasure-coded pools reduce the amount of disk space required to ensure data durability, but erasure coding is computationally somewhat more expensive than replication. Erasure coding is a method of storing an object in the Ceph storage cluster durably where the erasure code algorithm breaks the object into data chunks ( k ) and coding chunks ( m ), and stores those chunks in different OSDs. In the event of the failure of an OSD, Ceph retrieves the remaining data ( k ) and coding ( m ) chunks from the other OSDs and the erasure code algorithm restores the object from those chunks. Note Red Hat recommends min_size for erasure-coded pools to be K+1 or more to prevent loss of writes and data. Erasure coding uses storage capacity more efficiently than replication. The n-replication approach maintains n copies of an object (3x by default in Ceph), whereas erasure coding maintains only k + m chunks. For example, 3 data and 2 coding chunks use 1.5x the storage space of the original object. While erasure coding uses less storage overhead than replication, the erasure code algorithm uses more RAM and CPU than replication when it accesses or recovers objects. Erasure coding is advantageous when data storage must be durable and fault tolerant, but does not require fast read performance (for example, cold storage, historical records, and so on). For the mathematical and detailed explanation on how erasure code works in Ceph, see the Ceph Erasure Coding section in the Architecture Guide for Red Hat Ceph Storage 6. Ceph creates a default erasure code profile when initializing a cluster with k=2 and m=2 . This means that Ceph will spread the object data over four OSDs ( k+m == 4 ) and Ceph can lose one of those OSDs without losing data. To learn more about erasure code profiles, see the Erasure Code Profiles section. Important Configure only the .rgw.buckets pool as erasure-coded and all other Ceph Object Gateway pools as replicated, otherwise an attempt to create a new bucket fails with the following error: The reason for this is that erasure-coded pools do not support the omap operations and certain Ceph Object Gateway metadata pools require the omap support. 5.1. Creating a sample erasure-coded pool The ceph osd pool create command creates an erasure-coded pool with the default profile, unless another profile is specified. Profiles define the redundancy of data by setting two parameters, k and m . These parameters define the number of chunks into which a piece of data is split and the number of coding chunks that are created. The simplest erasure coded pool is equivalent to RAID5 and requires at least three hosts: Example Note The 32 in pool create stands for the number of placement groups. 5.2. Erasure code profiles Ceph defines an erasure-coded pool with a profile . Ceph uses a profile when creating an erasure-coded pool and the associated CRUSH rule. Ceph creates a default erasure code profile when initializing a cluster and it provides the same level of redundancy as two copies in a replicated pool. However, it uses 25% less storage capacity. 
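As an aside on the min_size note earlier in this chapter, the following is a minimal sketch of checking and applying the K+1 recommendation; it assumes an erasure-coded pool named ecpool created with the default k=2, m=2 profile, and the pool name is a placeholder:

    ceph osd pool get ecpool min_size     # show the current min_size
    ceph osd pool set ecpool min_size 3   # K+1 for the default k=2 profile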
The default profiles define k=2 and m=2 , meaning Ceph will spread the object data over four OSDs ( k+m=4 ) and Ceph can lose one of those OSDs without losing data. The default erasure code profile can sustain the loss of a single OSD. It is equivalent to a replicated pool with a size of two, but requires 1.5 TB instead of 2 TB to store 1 TB of data. To display the default profile use the following command: You can create a new profile to improve redundancy without increasing raw storage requirements. For instance, a profile with k=8 and m=4 can sustain the loss of four ( m=4 ) OSDs by distributing an object across 12 ( k+m=12 ) OSDs. Ceph divides the object into 8 chunks and computes 4 coding chunks for recovery. For example, if the object size is 8 MB, each data chunk is 1 MB and each coding chunk has the same size as the data chunk, that is also 1 MB. The object will not be lost even if four OSDs fail simultaneously. The most important parameters of the profile are k , m and crush-failure-domain , because they define the storage overhead and the data durability. Important Choosing the correct profile is important because you cannot change the profile after you create the pool. To modify a profile, you must create a new pool with a different profile and migrate the objects from the old pool to the new pool. For instance, if the desired architecture must sustain the loss of two racks with a storage overhead of 40%, the following profile can be defined: The primary OSD will divide the NYAN object into four ( k=4 ) data chunks and create two additional chunks ( m=2 ). The value of m defines how many OSDs can be lost simultaneously without losing any data. The crush-failure-domain=rack will create a CRUSH rule that ensures no two chunks are stored in the same rack. Important Red Hat supports the following jerasure coding values for k and m : k=8 m=3 k=8 m=4 k=4 m=2 Important If the number of OSDs lost equals the number of coding chunks ( m ), some placement groups in the erasure coding pool will go into incomplete state. If the number of OSDs lost is less than m , no placement groups will go into incomplete state. In either situation, no data loss will occur. If placement groups are in incomplete state, temporarily reducing min_size of an erasure coded pool will allow recovery. 5.2.1. Setting OSD erasure-code-profile To create a new erasure code profile: Syntax Where: directory Description Set the directory name from which the erasure code plug-in is loaded. Type String Required No. Default /usr/lib/ceph/erasure-code plugin Description Use the erasure code plug-in to compute coding chunks and recover missing chunks. See the Erasure Code Plug-ins section for details. Type String Required No. Default jerasure stripe_unit Description The amount of data in a data chunk, per stripe. For example, a profile with 2 data chunks and stripe_unit=4K would put the range 0-4K in chunk 0, 4K-8K in chunk 1, then 8K-12K in chunk 0 again. This should be a multiple of 4K for best performance. The default value is taken from the monitor config option osd_pool_erasure_code_stripe_unit when a pool is created. The stripe_width of a pool using this profile will be the number of data chunks multiplied by this stripe_unit . Type String Required No. Default 4K crush-device-class Description The device class, such as hdd or ssd . Type String Required No Default none , meaning CRUSH uses all devices regardless of class. crush-failure-domain Description The failure domain, such as host or rack . 
Type String Required No Default host key Description The semantics of the remaining key-value pairs are defined by the erasure code plug-in. Type String Required No. --force Description Override an existing profile by the same name. Type String Required No. 5.2.2. Removing OSD erasure-code-profile To remove an erasure code profile: Syntax If the profile is referenced by a pool, the deletion fails. Warning Removing an erasure code profile using the osd erasure-code-profile rm command does not automatically delete the CRUSH rule associated with the erasure code profile. Red Hat recommends manually removing the associated CRUSH rule using the ceph osd crush rule remove RULE_NAME command to avoid unexpected behavior. 5.2.3. Getting OSD erasure-code-profile To display an erasure code profile: Syntax 5.2.4. Listing OSD erasure-code-profile To list the names of all erasure code profiles: Syntax 5.3. Erasure Coding with Overwrites By default, erasure coded pools only work with the Ceph Object Gateway, which performs full object writes and appends. Using erasure coded pools with overwrites allows Ceph Block Devices and CephFS to store their data in an erasure coded pool: Syntax Example Erasure coded pools with overwrites can only reside on BlueStore OSDs, because BlueStore's checksumming is used to detect bit rot or other corruption during deep scrubs. Erasure coded pools do not support omap. To use erasure coded pools with Ceph Block Devices and CephFS, store the data in an erasure coded pool, and the metadata in a replicated pool. For Ceph Block Devices, use the --data-pool option during image creation: Syntax Example If using erasure coded pools for CephFS, the overwrites must be set in a file layout. 5.4. Erasure Code Plugins Ceph supports erasure coding with a plug-in architecture, which means you can create erasure coded pools using different types of algorithms. Ceph supports Jerasure. 5.4.1. Creating a new erasure code profile using jerasure erasure code plugin The jerasure plug-in is the most generic and flexible plug-in. It is also the default for Ceph erasure coded pools. The jerasure plug-in encapsulates the JerasureH library. For detailed information about the parameters, see the jerasure documentation. To create a new erasure code profile using the jerasure plug-in, run the following command: Syntax Where: k Description Each object is split into data-chunks parts, each stored on a different OSD. Type Integer Required Yes. Example 4 m Description Compute coding chunks for each object and store them on different OSDs. The number of coding chunks is also the number of OSDs that can be down without losing data. Type Integer Required Yes. Example 2 technique Description The more flexible technique is reed_sol_van ; it is enough to set k and m . The cauchy_good technique can be faster but you need to choose the packetsize carefully. All of reed_sol_r6_op , liberation , blaum_roth , liber8tion are RAID6 equivalents in the sense that they can only be configured with m=2 . Type String Required No. Valid Settings reed_sol_van reed_sol_r6_op cauchy_orig cauchy_good liberation blaum_roth liber8tion Default reed_sol_van packetsize Description The encoding is done on packets of packetsize bytes at a time. Choosing the correct packet size is difficult. The jerasure documentation contains extensive information on this topic. Type Integer Required No. Default 2048 crush-root Description The name of the CRUSH bucket used for the first step of the rule. 
For instance step take default . Type String Required No. Default default crush-failure-domain Description Ensure that no two chunks are in a bucket with the same failure domain. For instance, if the failure domain is host no two chunks will be stored on the same host. It is used to create a rule step such as step chooseleaf host . Type String Required No. Default host directory Description Set the directory name from which the erasure code plug-in is loaded. Type String Required No. Default /usr/lib/ceph/erasure-code --force Description Override an existing profile by the same name. Type String Required No. 5.4.2. Controlling CRUSH Placement The default CRUSH rule provides OSDs that are on different hosts. For instance: needs exactly 8 OSDs, one for each chunk. If the hosts are in two adjacent racks, the first four chunks can be placed in the first rack and the last four in the second rack. Recovering from the loss of a single OSD does not require using bandwidth between the two racks. For instance: creates a rule that selects two crush buckets of type rack and for each of them choose four OSDs, each of them located in a different bucket of type host . The rule can also be created manually for finer control. | [
"set_req_state_err err_no=95 resorting to 500",
"ceph osd pool create ecpool 32 32 erasure pool 'ecpool' created echo ABCDEFGHI | rados --pool ecpool put NYAN - rados --pool ecpool get NYAN - ABCDEFGHI",
"ceph osd erasure-code-profile get default k=2 m=2 plugin=jerasure technique=reed_sol_van",
"ceph osd erasure-code-profile set myprofile k=4 m=2 crush-failure-domain=rack ceph osd pool create ecpool 12 12 erasure *myprofile* echo ABCDEFGHIJKL | rados --pool ecpool put NYAN - rados --pool ecpool get NYAN - ABCDEFGHIJKL",
"ceph osd erasure-code-profile set NAME [<directory= DIRECTORY >] [<plugin= PLUGIN >] [<stripe_unit= STRIPE_UNIT >] [<_CRUSH_DEVICE_CLASS_>] [<_CRUSH_FAILURE_DOMAIN_>] [<key=value> ...] [--force]",
"ceph osd erasure-code-profile rm NAME",
"ceph osd erasure-code-profile get NAME",
"ceph osd erasure-code-profile ls",
"ceph osd pool set ERASURE_CODED_POOL_NAME allow_ec_overwrites true",
"ceph osd pool set ec_pool allow_ec_overwrites true",
"rbd create --size IMAGE_SIZE_M|G|T --data-pool _ERASURE_CODED_POOL_NAME REPLICATED_POOL_NAME / IMAGE_NAME",
"rbd create --size 1G --data-pool ec_pool rep_pool/image01",
"ceph osd erasure-code-profile set NAME plugin=jerasure k= DATA_CHUNKS m= DATA_CHUNKS technique= TECHNIQUE [crush-root= ROOT ] [crush-failure-domain= BUCKET_TYPE ] [directory= DIRECTORY ] [--force]",
"chunk nr 01234567 step 1 _cDD_cDD step 2 cDDD____ step 3 ____cDDD",
"crush-steps='[ [ \"choose\", \"rack\", 2 ], [ \"chooseleaf\", \"host\", 4 ] ]'"
] | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/6/html/storage_strategies_guide/erasure-code-pools-overview_strategy |
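A hedged sketch of creating and inspecting one of the supported profiles discussed in the chapter above (k=8, m=4); the profile and pool names are placeholders and the placement group count simply mirrors the earlier ecpool example:

    ceph osd erasure-code-profile set ec-8-4 k=8 m=4 crush-failure-domain=host
    ceph osd erasure-code-profile get ec-8-4
    ceph osd pool create ecpool84 32 32 erasure ec-8-4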
Chapter 4. Installing Knative Serving | Chapter 4. Installing Knative Serving Installing Knative Serving allows you to create Knative services and functions on your cluster. It also allows you to use additional functionality such as autoscaling and networking options for your applications. After you install the OpenShift Serverless Operator, you can install Knative Serving by using the default settings, or configure more advanced settings in the KnativeServing custom resource (CR). For more information about configuration options for the KnativeServing CR, see Global configuration . Important If you want to use Red Hat OpenShift distributed tracing with OpenShift Serverless , you must install and configure Red Hat OpenShift distributed tracing before you install Knative Serving. 4.1. Installing Knative Serving by using the web console After you install the OpenShift Serverless Operator, install Knative Serving by using the OpenShift Container Platform web console. You can install Knative Serving by using the default settings or configure more advanced settings in the KnativeServing custom resource (CR). Prerequisites You have cluster administrator permissions on OpenShift Container Platform, or you have cluster or dedicated administrator permissions on Red Hat OpenShift Service on AWS or OpenShift Dedicated. You have logged in to the OpenShift Container Platform web console. You have installed the OpenShift Serverless Operator. Procedure In the Administrator perspective of the OpenShift Container Platform web console, navigate to Operators Installed Operators . Check that the Project dropdown at the top of the page is set to Project: knative-serving . Click Knative Serving in the list of Provided APIs for the OpenShift Serverless Operator to go to the Knative Serving tab. Click Create Knative Serving . In the Create Knative Serving page, you can install Knative Serving using the default settings by clicking Create . You can also modify settings for the Knative Serving installation by editing the KnativeServing object using either the form provided, or by editing the YAML. Using the form is recommended for simpler configurations that do not require full control of KnativeServing object creation. Editing the YAML is recommended for more complex configurations that require full control of KnativeServing object creation. You can access the YAML by clicking the edit YAML link in the top right of the Create Knative Serving page. After you complete the form, or have finished modifying the YAML, click Create . Note For more information about configuration options for the KnativeServing custom resource definition, see the documentation on Advanced installation configuration options . After you have installed Knative Serving, the KnativeServing object is created, and you are automatically directed to the Knative Serving tab. You will see the knative-serving custom resource in the list of resources. Verification Click on knative-serving custom resource in the Knative Serving tab. You will be automatically directed to the Knative Serving Overview page. Scroll down to look at the list of Conditions . You should see a list of conditions with a status of True , as shown in the example image. Note It may take a few seconds for the Knative Serving resources to be created. You can check their status in the Resources tab. If the conditions have a status of Unknown or False , wait a few moments and then check again after you have confirmed that the resources have been created. 4.2. 
Installing Knative Serving by using YAML After you install the OpenShift Serverless Operator, you can install Knative Serving by using the default settings, or configure more advanced settings in the KnativeServing custom resource (CR). You can use the following procedure to install Knative Serving by using YAML files and the oc CLI. Prerequisites You have cluster administrator permissions on OpenShift Container Platform, or you have cluster or dedicated administrator permissions on Red Hat OpenShift Service on AWS or OpenShift Dedicated. You have installed the OpenShift Serverless Operator. Install the OpenShift CLI ( oc ). Procedure Create a file named serving.yaml and copy the following example YAML into it: apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing metadata: name: knative-serving namespace: knative-serving Apply the serving.yaml file: USD oc apply -f serving.yaml Verification To verify the installation is complete, enter the following command: USD oc get knativeserving.operator.knative.dev/knative-serving -n knative-serving --template='{{range .status.conditions}}{{printf "%s=%s\n" .type .status}}{{end}}' Example output DependenciesInstalled=True DeploymentsAvailable=True InstallSucceeded=True Ready=True Note It may take a few seconds for the Knative Serving resources to be created. If the conditions have a status of Unknown or False , wait a few moments and then check again after you have confirmed that the resources have been created. Check that the Knative Serving resources have been created: USD oc get pods -n knative-serving Example output NAME READY STATUS RESTARTS AGE activator-67ddf8c9d7-p7rm5 2/2 Running 0 4m activator-67ddf8c9d7-q84fz 2/2 Running 0 4m autoscaler-5d87bc6dbf-6nqc6 2/2 Running 0 3m59s autoscaler-5d87bc6dbf-h64rl 2/2 Running 0 3m59s autoscaler-hpa-77f85f5cc4-lrts7 2/2 Running 0 3m57s autoscaler-hpa-77f85f5cc4-zx7hl 2/2 Running 0 3m56s controller-5cfc7cb8db-nlccl 2/2 Running 0 3m50s controller-5cfc7cb8db-rmv7r 2/2 Running 0 3m18s domain-mapping-86d84bb6b4-r746m 2/2 Running 0 3m58s domain-mapping-86d84bb6b4-v7nh8 2/2 Running 0 3m58s domainmapping-webhook-769d679d45-bkcnj 2/2 Running 0 3m58s domainmapping-webhook-769d679d45-fff68 2/2 Running 0 3m58s storage-version-migration-serving-serving-0.26.0--1-6qlkb 0/1 Completed 0 3m56s webhook-5fb774f8d8-6bqrt 2/2 Running 0 3m57s webhook-5fb774f8d8-b8lt5 2/2 Running 0 3m57s Check that the necessary networking components have been installed to the automatically created knative-serving-ingress namespace: USD oc get pods -n knative-serving-ingress Example output NAME READY STATUS RESTARTS AGE net-kourier-controller-7d4b6c5d95-62mkf 1/1 Running 0 76s net-kourier-controller-7d4b6c5d95-qmgm2 1/1 Running 0 76s 3scale-kourier-gateway-6688b49568-987qz 1/1 Running 0 75s 3scale-kourier-gateway-6688b49568-b5tnp 1/1 Running 0 75s 4.3. Additional resources Kourier and Istio ingresses 4.4. steps If you want to use Knative event-driven architecture you can install Knative Eventing . | [
"apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing metadata: name: knative-serving namespace: knative-serving",
"oc apply -f serving.yaml",
"oc get knativeserving.operator.knative.dev/knative-serving -n knative-serving --template='{{range .status.conditions}}{{printf \"%s=%s\\n\" .type .status}}{{end}}'",
"DependenciesInstalled=True DeploymentsAvailable=True InstallSucceeded=True Ready=True",
"oc get pods -n knative-serving",
"NAME READY STATUS RESTARTS AGE activator-67ddf8c9d7-p7rm5 2/2 Running 0 4m activator-67ddf8c9d7-q84fz 2/2 Running 0 4m autoscaler-5d87bc6dbf-6nqc6 2/2 Running 0 3m59s autoscaler-5d87bc6dbf-h64rl 2/2 Running 0 3m59s autoscaler-hpa-77f85f5cc4-lrts7 2/2 Running 0 3m57s autoscaler-hpa-77f85f5cc4-zx7hl 2/2 Running 0 3m56s controller-5cfc7cb8db-nlccl 2/2 Running 0 3m50s controller-5cfc7cb8db-rmv7r 2/2 Running 0 3m18s domain-mapping-86d84bb6b4-r746m 2/2 Running 0 3m58s domain-mapping-86d84bb6b4-v7nh8 2/2 Running 0 3m58s domainmapping-webhook-769d679d45-bkcnj 2/2 Running 0 3m58s domainmapping-webhook-769d679d45-fff68 2/2 Running 0 3m58s storage-version-migration-serving-serving-0.26.0--1-6qlkb 0/1 Completed 0 3m56s webhook-5fb774f8d8-6bqrt 2/2 Running 0 3m57s webhook-5fb774f8d8-b8lt5 2/2 Running 0 3m57s",
"oc get pods -n knative-serving-ingress",
"NAME READY STATUS RESTARTS AGE net-kourier-controller-7d4b6c5d95-62mkf 1/1 Running 0 76s net-kourier-controller-7d4b6c5d95-qmgm2 1/1 Running 0 76s 3scale-kourier-gateway-6688b49568-987qz 1/1 Running 0 75s 3scale-kourier-gateway-6688b49568-b5tnp 1/1 Running 0 75s"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_serverless/1.33/html/installing_openshift_serverless/installing-knative-serving |
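As a hedged alternative to repeatedly checking the conditions shown in the verification steps above, you can wait for the KnativeServing resource to report Ready; the timeout value here is an arbitrary example:

    oc wait knativeserving.operator.knative.dev/knative-serving \
      -n knative-serving --for=condition=Ready --timeout=300s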
Chapter 39. Gathering System Information | Chapter 39. Gathering System Information Before you learn how to configure your system, you should learn how to gather essential system information. For example, you should know how to find the amount of free memory, the amount of available hard drive space, how your hard drive is partitioned, and what processes are running. This chapter discusses how to retrieve this type of information from your Red Hat Enterprise Linux system using simple commands and a few simple programs. 39.1. System Processes The ps ax command displays a list of current system processes, including processes owned by other users. To display the owner alongside each process, use the ps aux command. This list is a static list; in other words, it is a snapshot of what was running when you invoked the command. If you want a constantly updated list of running processes, use top as described below. The ps output can be long. To prevent it from scrolling off the screen, you can pipe it through less: You can use the ps command in combination with the grep command to see if a process is running. For example, to determine if Emacs is running, use the following command: The top command displays currently running processes and important information about them including their memory and CPU usage. The list is both real-time and interactive. An example of output from the top command is provided as follows: To exit top , press the q key. Table 39.1, "Interactive top commands" contains useful interactive commands that you can use with top . For more information, refer to the top (1) manual page. Table 39.1. Interactive top commands Command Description Space Immediately refresh the display h Display a help screen k Kill a process. You are prompted for the process ID and the signal to send to it. n Change the number of processes displayed. You are prompted to enter the number. u Sort by user. M Sort by memory usage. P Sort by CPU usage. If you prefer a graphical interface for top , you can use the GNOME System Monitor . To start it from the desktop, select System => Administration => System Monitor or type gnome-system-monitor at a shell prompt (such as an XTerm). Select the Process Listing tab. The GNOME System Monitor allows you to search for a process in the list of running processes. Using the Gnome System Monitor, you can also view all processes, your processes, or active processes. The Edit menu item allows you to: Stop a process. Continue or start a process. End a processes. Kill a process. Change the priority of a selected process. Edit the System Monitor preferences. These include changing the interval seconds to refresh the list and selecting process fields to display in the System Monitor window. The View menu item allows you to: View only active processes. View all processes. View my processes. View process dependencies. Hide a process. View hidden processes. View memory maps. View the files opened by the selected process. To stop a process, select it and click End Process . Alternatively you can also stop a process by selecting it, clicking Edit on your menu and selecting Stop Process . To sort the information by a specific column, click on the name of the column. This sorts the information by the selected column in ascending order. Click on the name of the column again to toggle the sort between ascending and descending order. Figure 39.1. GNOME System Monitor | [
"ps aux | less",
"ps ax | grep emacs",
"top - 15:02:46 up 35 min, 4 users, load average: 0.17, 0.65, 1.00 Tasks: 110 total, 1 running, 107 sleeping, 0 stopped, 2 zombie Cpu(s): 41.1% us, 2.0% sy, 0.0% ni, 56.6% id, 0.0% wa, 0.3% hi, 0.0% si Mem: 775024k total, 772028k used, 2996k free, 68468k buffers Swap: 1048568k total, 176k used, 1048392k free, 441172k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 4624 root 15 0 40192 18m 7228 S 28.4 2.4 1:23.21 X 4926 mhideo 15 0 55564 33m 9784 S 13.5 4.4 0:25.96 gnome-terminal 6475 mhideo 16 0 3612 968 760 R 0.7 0.1 0:00.11 top 4920 mhideo 15 0 20872 10m 7808 S 0.3 1.4 0:01.61 wnck-applet 1 root 16 0 1732 548 472 S 0.0 0.1 0:00.23 init 2 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/0 3 root 5 -10 0 0 0 S 0.0 0.0 0:00.03 events/0 4 root 6 -10 0 0 0 S 0.0 0.0 0:00.02 khelper 5 root 5 -10 0 0 0 S 0.0 0.0 0:00.00 kacpid 29 root 5 -10 0 0 0 S 0.0 0.0 0:00.00 kblockd/0 47 root 16 0 0 0 0 S 0.0 0.0 0:01.74 pdflush 50 root 11 -10 0 0 0 S 0.0 0.0 0:00.00 aio/0 30 root 15 0 0 0 0 S 0.0 0.0 0:00.05 khubd 49 root 16 0 0 0 0 S 0.0 0.0 0:01.44 kswapd0"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/ch-sysinfo |
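The chapter introduction above also mentions checking free memory and available disk space; as a brief sketch, two standard commands cover those:

    free -m    # memory usage in megabytes
    df -h      # disk space per mounted file system, human-readable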
function::ns_ppid | function::ns_ppid Name function::ns_ppid - Returns the process ID of a target process's parent process as seen in a pid namespace Synopsis Arguments None Description This function returns the process ID of the target process's parent process as seen in the target pid namespace if provided, or the stap process namespace. | [
"ns_ppid:long()"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-ns-ppid |
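A hedged usage sketch for the tapset function documented above; the target PID and the syscall probe point are hypothetical examples, and running stap requires the usual privileges and debuginfo:

    # Print the namespace-visible parent PID the next time process 1234 calls openat
    stap -x 1234 -e 'probe syscall.openat { if (pid() == target()) { printf("ns_ppid=%d\n", ns_ppid()); exit() } }'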
Chapter 2. Planning your undercloud | Chapter 2. Planning your undercloud 2.1. Containerized undercloud The undercloud is the node that controls the configuration, installation, and management of your final Red Hat OpenStack Platform (RHOSP) environment, which is called the overcloud. The undercloud itself uses OpenStack Platform components in the form of containers to create a toolset called director. This means that the undercloud pulls a set of container images from a registry source, generates configuration for the containers, and runs each OpenStack Platform service as a container. As a result, the undercloud provides a containerized set of services that you can use as a toolset to create and manage your overcloud. Since both the undercloud and overcloud use containers, both use the same architecture to pull, configure, and run containers. This architecture is based on the OpenStack Orchestration service (heat) for provisioning nodes and uses Ansible to configure services and containers. It is useful to have some familiarity with heat and Ansible to help you troubleshoot issues that you might encounter. 2.2. Preparing your undercloud networking The undercloud requires access to two main networks: The Provisioning or Control Plane network , which is the network that director uses to provision your nodes and access them over SSH when executing Ansible configuration. This network also enables SSH access from the undercloud to overcloud nodes. The undercloud contains DHCP services for introspection and provisioning other nodes on this network, which means that no other DHCP services should exist on this network. The director configures the interface for this network. The External network , which enables access to OpenStack Platform repositories, container image sources, and other servers such as DNS servers or NTP servers. Use this network for standard access the undercloud from your workstation. You must manually configure an interface on the undercloud to access the external network. The undercloud requires a minimum of 2 x 1 Gbps Network Interface Cards: one for the Provisioning or Control Plane network and one for the External network . However, it is recommended to use a 10 Gbps interface for Provisioning network traffic, especially if you want to provision a large number of nodes in your overcloud environment. Note: Do not use the same Provisioning or Control Plane NIC as the one that you use to access the director machine from your workstation. The director installation creates a bridge by using the Provisioning NIC, which drops any remote connections. Use the External NIC for remote connections to the director system. The Provisioning network requires an IP range that fits your environment size. Use the following guidelines to determine the total number of IP addresses to include in this range: Include at least one temporary IP address for each node that connects to the Provisioning network during introspection. Include at least one permanent IP address for each node that connects to the Provisioning network during deployment. Include an extra IP address for the virtual IP of the overcloud high availability cluster on the Provisioning network. Include additional IP addresses within this range for scaling the environment. 2.3. Determining environment scale Before you install the undercloud, determine the scale of your environment. Include the following factors when you plan your environment: How many nodes do you want to deploy in your overcloud? 
The undercloud manages each node within an overcloud. Provisioning overcloud nodes consumes resources on the undercloud. You must provide your undercloud with enough resources to adequately provision and control all of your overcloud nodes. How many simultaneous operations do you want the undercloud to perform? Most OpenStack services on the undercloud use a set of workers. Each worker performs an operation specific to that service. Multiple workers provide simultaneous operations. The default number of workers on the undercloud is determined by halving the total CPU thread count on the undercloud [1] . For example, if your undercloud has a CPU with 16 threads, then the director services spawn 8 workers by default. Director also uses a set of minimum and maximum caps by default: Service Minimum Maximum OpenStack Orchestration (heat) 4 24 All other service 2 12 The undercloud has the following minimum CPU and memory requirements: An 8-thread 64-bit x86 processor with support for the Intel 64 or AMD64 CPU extensions. This provides 4 workers for each undercloud service. A minimum of 24 GB of RAM. The ceph-ansible playbook consumes 1 GB resident set size (RSS) for every 10 hosts that the undercloud deploys. If you want to use a new or existing Ceph cluster in your deployment, you must provision the undercloud RAM accordingly. To use a larger number of workers, increase the vCPUs and memory of your undercloud using the following recommendations: Minimum: Use 1.5 GB of memory for each thread. For example, a machine with 48 threads requires 72 GB of RAM to provide the minimum coverage for 24 heat workers and 12 workers for other services. Recommended: Use 3 GB of memory for each thread. For example, a machine with 48 threads requires 144 GB of RAM to provide the recommended coverage for 24 heat workers and 12 workers for other services. 2.4. Undercloud disk sizing The recommended minimum undercloud disk size is 100 GB of available disk space on the root disk: 20 GB for container images 10 GB to accommodate QCOW2 image conversion and caching during the node provisioning process 70 GB+ for general usage, logging, metrics, and growth 2.5. Virtualization support Red Hat only supports a virtualized undercloud on the following platforms: Platform Notes Kernel-based Virtual Machine (KVM) Hosted by Red Hat Enterprise Linux 8, as listed on certified hypervisors. Red Hat Virtualization Hosted by Red Hat Virtualization 4.x, as listed on certified hypervisors. Microsoft Hyper-V Hosted by versions of Hyper-V as listed on the Red Hat Customer Portal Certification Catalogue . VMware ESX and ESXi Hosted by versions of ESX and ESXi as listed on the Red Hat Customer Portal Certification Catalogue . Important Red Hat OpenStack Platform director requires that the latest version of Red Hat Enterprise Linux 8 is installed as the host operating system. This means your virtualization platform must also support the underlying Red Hat Enterprise Linux version. Virtual Machine Requirements Resource requirements for a virtual undercloud are similar to those of a bare metal undercloud. You should consider the various tuning options when provisioning such as network model, guest CPU capabilities, storage backend, storage format, and caching mode. Network Considerations Note the following network considerations for your virtualized undercloud: Power Management The undercloud VM requires access to the overcloud nodes' power management devices. This is the IP address set for the pm_addr parameter when registering nodes. 
Provisioning network The NIC used for the provisioning ( ctlplane ) network requires the ability to broadcast and serve DHCP requests to the NICs of the overcloud's bare metal nodes. As a recommendation, create a bridge that connects the VM's NIC to the same network as the bare metal NICs. Note A common problem occurs when the hypervisor technology blocks the undercloud from transmitting traffic from an unknown address. - If using Red Hat Enterprise Virtualization, disable anti-mac-spoofing to prevent this. - If using VMware ESX or ESXi, allow forged transmits to prevent this. You must power off and on the director VM after you apply these settings. Rebooting the VM is not sufficient. 2.6. Character encoding configuration Red Hat OpenStack Platform has special character encoding requirements as part of the locale settings: Use UTF-8 encoding on all nodes. Ensure the LANG environment variable is set to en_US.UTF-8 on all nodes. Avoid using non-ASCII characters if you use Red Hat Ansible Tower to automate the creation of Red Hat OpenStack Platform resources. 2.7. Considerations when running the undercloud with a proxy If your environment uses a proxy, review these considerations to best understand the different configuration methods of integrating parts of Red Hat OpenStack Platform with a proxy and the limitations of each method. System-wide proxy configuration Use this method to configure proxy communication for all network traffic on the undercloud. To configure the proxy settings, edit the /etc/environment file and set the following environment variables: http_proxy The proxy that you want to use for standard HTTP requests. https_proxy The proxy that you want to use for HTTPs requests. no_proxy A comma-separated list of domains that you want to exclude from proxy communications. The system-wide proxy method has the following limitations: The no_proxy variable primarily uses domain names ( www.example.com ), domain suffixes ( example.com ), and domains with a wildcard ( *.example.com ). Most Red Hat OpenStack Platform services interpret IP addresses in no_proxy but certain services, such as container health checks, do not interpret IP addresses in the no_proxy environment variable due to limitations with cURL and wget . To use a system-wide proxy with the undercloud, disable container health checks with the container_healthcheck_disabled parameter in the undercloud.conf file during installation. For more information, see BZ#1837458 - Container health checks fail to honor no_proxy CIDR notation . Some containers bind and parse the environment variables in /etc/environments incorrectly, which causes problems when running these services. For more information, see BZ#1916070 - proxy configuration updates in /etc/environment files are not being picked up in containers correctly and BZ#1918408 - mistral_executor container fails to properly set no_proxy environment parameter . dnf proxy configuration Use this method to configure dnf to run all traffic through a proxy. To configure the proxy settings, edit the /etc/dnf/dnf.conf file and set the following parameters: proxy The URL of the proxy server. proxy_username The username that you want to use to connect to the proxy server. proxy_password The password that you want to use to connect to the proxy server. proxy_auth_method The authentication method used by the proxy server. For more information about these options, run man dnf.conf . The dnf proxy method has the following limitations: This method provides proxy support only for dnf . 
The dnf proxy method does not include an option to exclude certain hosts from proxy communication. Red Hat Subscription Manager proxy Use this method to configure Red Hat Subscription Manager to run all traffic through a proxy. To configure the proxy settings, edit the /etc/rhsm/rhsm.conf file and set the following parameters: proxy_hostname Host for the proxy. proxy_scheme The scheme for the proxy when writing out the proxy to repo definitions. proxy_port The port for the proxy. proxy_username The username that you want to use to connect to the proxy server. proxy_password The password to use for connecting to the proxy server. no_proxy A comma-separated list of hostname suffixes for specific hosts that you want to exclude from proxy communication. For more information about these options, run man rhsm.conf . The Red Hat Subscription Manager proxy method has the following limitations: This method provides proxy support only for Red Hat Subscription Manager. The values for the Red Hat Subscription Manager proxy configuration override any values set for the system-wide environment variables. Transparent proxy If your network uses a transparent proxy to manage application layer traffic, you do not need to configure the undercloud itself to interact with the proxy because proxy management occurs automatically. A transparent proxy can help overcome limitations associated with client-based proxy configuration in Red Hat OpenStack Platform. 2.8. Undercloud repositories Red Hat OpenStack Platform 16.0 runs on Red Hat Enterprise Linux 8.1. Before enabling repositories, lock the director to a version with the subscription-manager release command: Enable the following repositories for the installation and configuration of the undercloud. Core repositories The following table lists core repositories for installing the undercloud. Name Repository Description of requirement Red Hat Enterprise Linux 8 for x86_64 - BaseOS (RPMs) Extended Update Support (EUS) rhel-8-for-x86_64-baseos-eus-rpms Base operating system repository for x86_64 systems. Red Hat Enterprise Linux 8 for x86_64 - AppStream (RPMs) Extended Update Support (EUS) rhel-8-for-x86_64-appstream-eus-rpms Contains Red Hat OpenStack Platform dependencies. Red Hat Enterprise Linux 8 for x86_64 - High Availability (RPMs) Extended Update Support (EUS) rhel-8-for-x86_64-highavailability-eus-rpms High availability tools for Red Hat Enterprise Linux. Used for Controller node high availability. Red Hat Ansible Engine 2.8 for RHEL 8 x86_64 (RPMs) ansible-2.8-for-rhel-8-x86_64-rpms Ansible Engine for Red Hat Enterprise Linux. Used to provide the latest version of Ansible. Red Hat Satellite Tools for RHEL 8 Server RPMs x86_64 satellite-tools-6.5-for-rhel-8-x86_64-rpms Tools for managing hosts with Red Hat Satellite 6. Red Hat OpenStack Platform 16.0 for RHEL 8 (RPMs) openstack-16-for-rhel-8-x86_64-rpms Core Red Hat OpenStack Platform repository, which contains packages for Red Hat OpenStack Platform director. Red Hat Fast Datapath for RHEL 8 (RPMS) fast-datapath-for-rhel-8-x86_64-rpms Provides Open vSwitch (OVS) packages for OpenStack Platform. IBM POWER repositories The following table contains a list of repositories for Red Hat Openstack Platform on POWER PC architecture. Use these repositories in place of equivalents in the Core repositories. Name Repository Description of requirement Red Hat Enterprise Linux for IBM Power, little endian - BaseOS (RPMs) rhel-8-for-ppc64le-baseos-rpms Base operating system repository for ppc64le systems. 
Red Hat Enterprise Linux 8 for IBM Power, little endian - AppStream (RPMs) rhel-8-for-ppc64le-appstream-rpms Contains Red Hat OpenStack Platform dependencies. Red Hat Enterprise Linux 8 for IBM Power, little endian - High Availability (RPMs) rhel-8-for-ppc64le-highavailability-rpms High availability tools for Red Hat Enterprise Linux. Used for Controller node high availability. Red Hat Ansible Engine 2.8 for RHEL 8 IBM Power, little endian (RPMs) ansible-2.8-for-rhel-8-ppc64le-rpms Ansible Engine for Red Hat Enterprise Linux. Provides the latest version of Ansible. Red Hat OpenStack Platform 16.0 for RHEL 8 (RPMs) openstack-16-for-rhel-8-ppc64le-rpms Core Red Hat OpenStack Platform repository for ppc64le systems. [1] In this instance, thread count refers to the number of CPU cores multiplied by the hyper-threading value | [
"sudo subscription-manager release --set=8.1"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/director_installation_and_usage/planning-your-undercloud |
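A sketch of enabling the core repositories listed in the table above after locking the release; the repository IDs are taken directly from that table and the command assumes an x86_64 undercloud already registered with subscription-manager:

    sudo subscription-manager repos \
      --enable=rhel-8-for-x86_64-baseos-eus-rpms \
      --enable=rhel-8-for-x86_64-appstream-eus-rpms \
      --enable=rhel-8-for-x86_64-highavailability-eus-rpms \
      --enable=ansible-2.8-for-rhel-8-x86_64-rpms \
      --enable=satellite-tools-6.5-for-rhel-8-x86_64-rpms \
      --enable=openstack-16-for-rhel-8-x86_64-rpms \
      --enable=fast-datapath-for-rhel-8-x86_64-rpms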
3.7. Turning on Packet Forwarding and Nonlocal Binding | 3.7. Turning on Packet Forwarding and Nonlocal Binding In order for the Keepalived service to forward network packets properly to the real servers, each router node must have IP forwarding turned on in the kernel. Log in as root and change the line which reads net.ipv4.ip_forward = 0 in /etc/sysctl.conf to the following: The changes take effect when you reboot the system. Load balancing in HAProxy and Keepalived at the same time also requires the ability to bind to an IP address that is nonlocal , meaning that it is not assigned to a device on the local system. This allows a running load balancer instance to bind to an IP that is not local for failover. To enable nonlocal binding, edit the line in /etc/sysctl.conf that reads net.ipv4.ip_nonlocal_bind to the following: The changes take effect when you reboot the system. To check if IP forwarding is turned on, issue the following command as root : /usr/sbin/sysctl net.ipv4.ip_forward To check if nonlocal binding is turned on, issue the following command as root : /usr/sbin/sysctl net.ipv4.ip_nonlocal_bind If both the above commands return a 1 , then the respective settings are enabled. | [
"net.ipv4.ip_forward = 1",
"net.ipv4.ip_nonlocal_bind = 1"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/load_balancer_administration/s1-initial-setup-forwarding-VSA |
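The section above notes that the changes take effect after a reboot; as a hedged aside, the settings can usually also be loaded immediately from /etc/sysctl.conf and verified without rebooting:

    /usr/sbin/sysctl -p                                               # reload settings from /etc/sysctl.conf
    /usr/sbin/sysctl net.ipv4.ip_forward net.ipv4.ip_nonlocal_bind    # both should report 1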
Chapter 3. Installing the Network Observability Operator | Chapter 3. Installing the Network Observability Operator Installing Loki is a recommended prerequisite for using the Network Observability Operator. You can choose to use Network Observability without Loki , but there are some considerations for doing this, described in the previously linked section. The Loki Operator integrates a gateway that implements multi-tenancy and authentication with Loki for data flow storage. The LokiStack resource manages Loki, which is a scalable, highly-available, multi-tenant log aggregation system, and a web proxy with OpenShift Container Platform authentication. The LokiStack proxy uses OpenShift Container Platform authentication to enforce multi-tenancy and facilitate the saving and indexing of data in Loki log stores. Note The Loki Operator can also be used for configuring the LokiStack log store . The Network Observability Operator requires a dedicated LokiStack separate from the logging. 3.1. Network Observability without Loki You can use Network Observability without Loki by not performing the Loki installation steps and skipping directly to "Installing the Network Observability Operator". If you only want to export flows to a Kafka consumer or IPFIX collector, or you only need dashboard metrics, then you do not need to install Loki or provide storage for Loki. The following table compares available features with and without Loki. Table 3.1. Comparison of feature availability with and without Loki With Loki Without Loki Exporters Multi-tenancy Complete filtering and aggregations capabilities [1] Partial filtering and aggregations capabilities [2] Flow-based metrics and dashboards Traffic flows view overview [3] Traffic flows view table Topology view OpenShift Container Platform console Network Traffic tab integration Such as per pod. Such as per workload or namespace. Statistics on packet drops are only available with Loki. Additional resources Export enriched network flow data . 3.2. Installing the Loki Operator The Loki Operator versions 5.7+ are the supported Loki Operator versions for Network Observability; these versions provide the ability to create a LokiStack instance using the openshift-network tenant configuration mode and provide fully-automatic, in-cluster authentication and authorization support for Network Observability. There are several ways you can install Loki. One way is by using the OpenShift Container Platform web console Operator Hub. Prerequisites Supported Log Store (AWS S3, Google Cloud Storage, Azure, Swift, Minio, OpenShift Data Foundation) OpenShift Container Platform 4.10+ Linux Kernel 4.18+ Procedure In the OpenShift Container Platform web console, click Operators OperatorHub . Choose Loki Operator from the list of available Operators, and click Install . Under Installation Mode , select All namespaces on the cluster . Verification Verify that you installed the Loki Operator. Visit the Operators Installed Operators page and look for Loki Operator . Verify that Loki Operator is listed with Status as Succeeded in all the projects. Important To uninstall Loki, refer to the uninstallation process that corresponds with the method you used to install Loki. You might have remaining ClusterRoles and ClusterRoleBindings , data stored in object store, and persistent volume that must be removed. 3.2.1. Creating a secret for Loki storage The Loki Operator supports a few log storage options, such as AWS S3, Google Cloud Storage, Azure, Swift, Minio, OpenShift Data Foundation. 
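If you prefer the CLI to the web console steps that follow, here is a hedged sketch of creating the S3 storage secret with oc; the bucket, endpoint, region, and credential values are placeholders that mirror the YAML example below:

    oc create secret generic loki-s3 -n netobserv \
      --from-literal=bucketnames=s3-bucket-name \
      --from-literal=endpoint=https://s3.eu-central-1.amazonaws.com \
      --from-literal=region=eu-central-1 \
      --from-literal=access_key_id=<ACCESS_KEY_ID> \
      --from-literal=access_key_secret=<ACCESS_KEY_SECRET>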
The following example shows how to create a secret for AWS S3 storage. The secret created in this example, loki-s3 , is referenced in "Creating a LokiStack resource". You can create this secret in the web console or CLI. Using the web console, navigate to the Project All Projects dropdown and select Create Project . Name the project netobserv and click Create . Navigate to the Import icon, + , in the top right corner. Paste your YAML file into the editor. The following shows an example secret YAML file for S3 storage: apiVersion: v1 kind: Secret metadata: name: loki-s3 namespace: netobserv 1 stringData: access_key_id: QUtJQUlPU0ZPRE5ON0VYQU1QTEUK access_key_secret: d0phbHJYVXRuRkVNSS9LN01ERU5HL2JQeFJmaUNZRVhBTVBMRUtFWQo= bucketnames: s3-bucket-name endpoint: https://s3.eu-central-1.amazonaws.com region: eu-central-1 1 The installation examples in this documentation use the same namespace, netobserv , across all components. You can optionally use a different namespace for the different components Verification Once you create the secret, you should see it listed under Workloads Secrets in the web console. Additional resources Flow Collector API Reference Flow Collector sample resource Loki object storage 3.2.2. Creating a LokiStack custom resource You can deploy a LokiStack custom resource (CR) by using the web console or OpenShift CLI ( oc ) to create a namespace, or new project. Procedure Navigate to Operators Installed Operators , viewing All projects from the Project dropdown. Look for Loki Operator . In the details, under Provided APIs , select LokiStack . Click Create LokiStack . Ensure the following fields are specified in either Form View or YAML view : apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: loki namespace: netobserv 1 spec: size: 1x.small 2 storage: schemas: - version: v12 effectiveDate: '2022-06-01' secret: name: loki-s3 type: s3 storageClassName: gp3 3 tenants: mode: openshift-network 1 The installation examples in this documentation use the same namespace, netobserv , across all components. You can optionally use a different namespace. 2 Specify the deployment size. In the Loki Operator 5.8 and later versions, the supported size options for production instances of Loki are 1x.extra-small , 1x.small , or 1x.medium . Important It is not possible to change the number 1x for the deployment size. 3 Use a storage class name that is available on the cluster for ReadWriteOnce access mode. You can use oc get storageclasses to see what is available on your cluster. Important You must not reuse the same LokiStack CR that is used for logging. Click Create . 3.2.3. Creating a new group for the cluster-admin user role Important Querying application logs for multiple namespaces as a cluster-admin user, where the sum total of characters of all of the namespaces in the cluster is greater than 5120, results in the error Parse error: input size too long (XXXX > 5120) . For better control over access to logs in LokiStack, make the cluster-admin user a member of the cluster-admin group. If the cluster-admin group does not exist, create it and add the desired users to it. Use the following procedure to create a new group for users with cluster-admin permissions. 
Procedure Enter the following command to create a new group: USD oc adm groups new cluster-admin Enter the following command to add the desired user to the cluster-admin group: USD oc adm groups add-users cluster-admin <username> Enter the following command to add cluster-admin user role to the group: USD oc adm policy add-cluster-role-to-group cluster-admin cluster-admin 3.2.4. Custom admin group access If you need to see cluster-wide logs without necessarily being an administrator, or if you already have any group defined that you want to use here, you can specify a custom group using the adminGroup field. Users who are members of any group specified in the adminGroups field of the LokiStack custom resource (CR) have the same read access to logs as administrators. Administrator users have access to all application logs in all namespaces, if they also get assigned the cluster-logging-application-view role. Administrator users have access to all network logs across the cluster. Example LokiStack CR apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: loki namespace: netobserv spec: tenants: mode: openshift-network 1 openshift: adminGroups: 2 - cluster-admin - custom-admin-group 3 1 Custom admin groups are only available in this mode. 2 Entering an empty list [] value for this field disables admin groups. 3 Overrides the default groups ( system:cluster-admins , cluster-admin , dedicated-admin ) 3.2.5. Loki deployment sizing Sizing for Loki follows the format of 1x.<size> where the value 1x is number of instances and <size> specifies performance capabilities. Important It is not possible to change the number 1x for the deployment size. Table 3.2. Loki sizing 1x.demo 1x.extra-small 1x.small 1x.medium Data transfer Demo use only 100GB/day 500GB/day 2TB/day Queries per second (QPS) Demo use only 1-25 QPS at 200ms 25-50 QPS at 200ms 25-75 QPS at 200ms Replication factor None 2 2 2 Total CPU requests None 14 vCPUs 34 vCPUs 54 vCPUs Total memory requests None 31Gi 67Gi 139Gi Total disk requests 40Gi 430Gi 430Gi 590Gi 3.2.6. LokiStack ingestion limits and health alerts The LokiStack instance comes with default settings according to the configured size. It is possible to override some of these settings, such as the ingestion and query limits. You might want to update them if you get Loki errors showing up in the Console plugin, or in flowlogs-pipeline logs. An automatic alert in the web console notifies you when these limits are reached. Here is an example of configured limits: spec: limits: global: ingestion: ingestionBurstSize: 40 ingestionRate: 20 maxGlobalStreamsPerTenant: 25000 queries: maxChunksPerQuery: 2000000 maxEntriesLimitPerQuery: 10000 maxQuerySeries: 3000 For more information about these settings, see the LokiStack API reference . 3.3. Installing the Network Observability Operator You can install the Network Observability Operator using the OpenShift Container Platform web console Operator Hub. When you install the Operator, it provides the FlowCollector custom resource definition (CRD). You can set specifications in the web console when you create the FlowCollector . Important The actual memory consumption of the Operator depends on your cluster size and the number of resources deployed. Memory consumption might need to be adjusted accordingly. For more information refer to "Network Observability controller manager pod runs out of memory" in the "Important Flow Collector configuration considerations" section. 
Prerequisites If you choose to use Loki, install the Loki Operator version 5.7+ . You must have cluster-admin privileges. One of the following supported architectures is required: amd64 , ppc64le , arm64 , or s390x . Any CPU supported by Red Hat Enterprise Linux (RHEL) 9. Must be configured with OVN-Kubernetes or OpenShift SDN as the main network plugin, and optionally using secondary interfaces with Multus and SR-IOV. Note Additionally, this installation example uses the netobserv namespace, which is used across all components. You can optionally use a different namespace. Procedure In the OpenShift Container Platform web console, click Operators OperatorHub . Choose Network Observability Operator from the list of available Operators in the OperatorHub , and click Install . Select the checkbox Enable Operator recommended cluster monitoring on this Namespace . Navigate to Operators Installed Operators . Under Provided APIs for Network Observability, select the Flow Collector link. Navigate to the Flow Collector tab, and click Create FlowCollector . Make the following selections in the form view: spec.agent.ebpf.Sampling : Specify a sampling size for flows. Lower sampling sizes will have higher impact on resource utilization. For more information, see the "FlowCollector API reference", spec.agent.ebpf . If you are not using Loki, click Loki client settings and change Enable to False . The setting is True by default. If you are using Loki, set the following specifications: spec.loki.mode : Set this to the LokiStack mode, which automatically sets URLs, TLS, cluster roles and a cluster role binding, as well as the authToken value. Alternatively, the Manual mode allows more control over configuration of these settings. spec.loki.lokistack.name : Set this to the name of your LokiStack resource. In this documentation, loki is used. Optional: If you are in a large-scale environment, consider configuring the FlowCollector with Kafka for forwarding data in a more resilient, scalable way. See "Configuring the Flow Collector resource with Kafka storage" in the "Important Flow Collector configuration considerations" section. Optional: Configure other optional settings before the step of creating the FlowCollector . For example, if you choose not to use Loki, then you can configure exporting flows to Kafka or IPFIX. See "Export enriched network flow data to Kafka and IPFIX" and more in the "Important Flow Collector configuration considerations" section. Click Create . Verification To confirm this was successful, when you navigate to Observe you should see Network Traffic listed in the options. In the absence of Application Traffic within the OpenShift Container Platform cluster, default filters might show that there are "No results", which results in no visual flow. Beside the filter selections, select Clear all filters to see the flow. 3.4. Enabling multi-tenancy in Network Observability Multi-tenancy in the Network Observability Operator allows and restricts individual user access, or group access, to the flows stored in Loki and or Prometheus. Access is enabled for project administrators. Project administrators who have limited access to some namespaces can access flows for only those namespaces. For Developers, multi-tenancy is available for both Loki and Prometheus but requires different access rights. Prerequisite If you are using Loki, you have installed at least Loki Operator version 5.7 . You must be logged in as a project administrator. 
Procedure For per-tenant access, you must have the netobserv-reader cluster role and the netobserv-metrics-reader namespace role to use the developer perspective. Run the following commands for this level of access: USD oc adm policy add-cluster-role-to-user netobserv-reader <user_group_or_name> USD oc adm policy add-role-to-user netobserv-metrics-reader <user_group_or_name> -n <namespace> For cluster-wide access, non-cluster-administrators must have the netobserv-reader , cluster-monitoring-view , and netobserv-metrics-reader cluster roles. In this scenario, you can use either the admin perspective or the developer perspective. Run the following commands for this level of access: USD oc adm policy add-cluster-role-to-user netobserv-reader <user_group_or_name> USD oc adm policy add-cluster-role-to-user cluster-monitoring-view <user_group_or_name> USD oc adm policy add-cluster-role-to-user netobserv-metrics-reader <user_group_or_name> 3.5. Important Flow Collector configuration considerations Once you create the FlowCollector instance, you can reconfigure it, but the pods are terminated and recreated again, which can be disruptive. Therefore, you can consider configuring the following options when creating the FlowCollector for the first time: Configuring the Flow Collector resource with Kafka Export enriched network flow data to Kafka or IPFIX Configuring monitoring for SR-IOV interface traffic Working with conversation tracking Working with DNS tracking Working with packet drops Additional resources For more general information about Flow Collector specifications and the Network Observability Operator architecture and resource use, see the following resources: Flow Collector API Reference Flow Collector sample resource Resource considerations Troubleshooting Network Observability controller manager pod runs out of memory Network Observability architecture 3.5.1. Migrating removed stored versions of the FlowCollector CRD Network Observability Operator version 1.6 removes the old and deprecated v1alpha1 version of the FlowCollector API. If you previously installed this version on your cluster, it might still be referenced in the storedVersion of the FlowCollector CRD, even if it is removed from the etcd store, which blocks the upgrade process. These references need to be manually removed. There are two options to remove stored versions: Use the Storage Version Migrator Operator. Uninstall and reinstall the Network Observability Operator, ensuring that the installation is in a clean state. Prerequisites You have an older version of the Operator installed, and you want to prepare your cluster to install the latest version of the Operator. Or you have attempted to install the Network Observability Operator 1.6 and run into the error: Failed risk of data loss updating "flowcollectors.flows.netobserv.io": new CRD removes version v1alpha1 that is listed as a stored version on the existing CRD . Procedure Verify that the old FlowCollector CRD version is still referenced in the storedVersion : USD oc get crd flowcollectors.flows.netobserv.io -ojsonpath='{.status.storedVersions}' If v1alpha1 appears in the list of results, proceed with Step a to use the Kubernetes Storage Version Migrator or Step b to uninstall and reinstall the CRD and the Operator. 
Option 1: Kubernetes Storage Version Migrator : Create a YAML file to define the StorageVersionMigration object, for example migrate-flowcollector-v1alpha1.yaml : apiVersion: migration.k8s.io/v1alpha1 kind: StorageVersionMigration metadata: name: migrate-flowcollector-v1alpha1 spec: resource: group: flows.netobserv.io resource: flowcollectors version: v1alpha1 Save the file. Apply the StorageVersionMigration by running the following command: USD oc apply -f migrate-flowcollector-v1alpha1.yaml Update the FlowCollector CRD to manually remove v1alpha1 from the storedVersion : USD oc edit crd flowcollectors.flows.netobserv.io Option 2: Reinstall : Save the Network Observability Operator 1.5 version of the FlowCollector CR to a file, for example flowcollector-1.5.yaml . USD oc get flowcollector cluster -o yaml > flowcollector-1.5.yaml Follow the steps in "Uninstalling the Network Observability Operator", which uninstalls the Operator and removes the existing FlowCollector CRD. Install the latest version of the Network Observability Operator, 1.6.0. Create the FlowCollector using the backup that was saved in Step b. Verification Run the following command: USD oc get crd flowcollectors.flows.netobserv.io -ojsonpath='{.status.storedVersions}' The list of results should no longer show v1alpha1 and only show the latest version, v1beta1 . Additional resources Kubernetes Storage Version Migrator Operator 3.6. Installing Kafka (optional) The Kafka Operator is supported for large-scale environments. Kafka provides high-throughput and low-latency data feeds for forwarding network flow data in a more resilient, scalable way. You can install the Kafka Operator as Red Hat AMQ Streams from the Operator Hub, just as the Loki Operator and Network Observability Operator were installed. Refer to "Configuring the FlowCollector resource with Kafka" to configure Kafka as a storage option. Note To uninstall Kafka, refer to the uninstallation process that corresponds with the method you used to install. Additional resources Configuring the FlowCollector resource with Kafka . 3.7. Uninstalling the Network Observability Operator You can uninstall the Network Observability Operator using the OpenShift Container Platform web console Operator Hub, working in the Operators Installed Operators area. Procedure Remove the FlowCollector custom resource. Click Flow Collector , which is next to the Network Observability Operator in the Provided APIs column. Click the options menu for the cluster and select Delete FlowCollector . Uninstall the Network Observability Operator. Navigate back to the Operators Installed Operators area. Click the options menu next to the Network Observability Operator and select Uninstall Operator . Navigate to Home Projects and select openshift-netobserv-operator . Navigate to Actions and select Delete Project . Remove the FlowCollector custom resource definition (CRD). Navigate to Administration CustomResourceDefinitions . Look for FlowCollector and click the options menu . Select Delete CustomResourceDefinition . Important The Loki Operator and Kafka remain if they were installed and must be removed separately. Additionally, you might have remaining data stored in an object store, and a persistent volume that must be removed. | [
"apiVersion: v1 kind: Secret metadata: name: loki-s3 namespace: netobserv 1 stringData: access_key_id: QUtJQUlPU0ZPRE5ON0VYQU1QTEUK access_key_secret: d0phbHJYVXRuRkVNSS9LN01ERU5HL2JQeFJmaUNZRVhBTVBMRUtFWQo= bucketnames: s3-bucket-name endpoint: https://s3.eu-central-1.amazonaws.com region: eu-central-1",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: loki namespace: netobserv 1 spec: size: 1x.small 2 storage: schemas: - version: v12 effectiveDate: '2022-06-01' secret: name: loki-s3 type: s3 storageClassName: gp3 3 tenants: mode: openshift-network",
"oc adm groups new cluster-admin",
"oc adm groups add-users cluster-admin <username>",
"oc adm policy add-cluster-role-to-group cluster-admin cluster-admin",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: loki namespace: netobserv spec: tenants: mode: openshift-network 1 openshift: adminGroups: 2 - cluster-admin - custom-admin-group 3",
"spec: limits: global: ingestion: ingestionBurstSize: 40 ingestionRate: 20 maxGlobalStreamsPerTenant: 25000 queries: maxChunksPerQuery: 2000000 maxEntriesLimitPerQuery: 10000 maxQuerySeries: 3000",
"oc adm policy add-cluster-role-to-user netobserv-reader <user_group_or_name>",
"oc adm policy add-role-to-user netobserv-metrics-reader <user_group_or_name> -n <namespace>",
"oc adm policy add-cluster-role-to-user netobserv-reader <user_group_or_name>",
"oc adm policy add-cluster-role-to-user cluster-monitoring-view <user_group_or_name>",
"oc adm policy add-cluster-role-to-user netobserv-metrics-reader <user_group_or_name>",
"oc get crd flowcollectors.flows.netobserv.io -ojsonpath='{.status.storedVersions}'",
"apiVersion: migration.k8s.io/v1alpha1 kind: StorageVersionMigration metadata: name: migrate-flowcollector-v1alpha1 spec: resource: group: flows.netobserv.io resource: flowcollectors version: v1alpha1",
"oc apply -f migrate-flowcollector-v1alpha1.yaml",
"oc edit crd flowcollectors.flows.netobserv.io",
"oc get flowcollector cluster -o yaml > flowcollector-1.5.yaml",
"oc get crd flowcollectors.flows.netobserv.io -ojsonpath='{.status.storedVersions}'"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/network_observability/installing-network-observability-operators |
Chapter 1. Introduction to Provisioning | Chapter 1. Introduction to Provisioning 1.1. Provisioning Overview Provisioning is a process that starts with a bare physical or virtual machine and ends with a fully configured, ready-to-use operating system. Using Red Hat Satellite, you can define and automate fine-grained provisioning for a large number of hosts. There are many provisioning methods. For example, you can use Satellite Server's integrated Capsule or an external Capsule Server to provision bare metal hosts using both PXE based and non-PXE based methods. You can also provision cloud instances from specific providers through their APIs. These provisioning methods are part of the Red Hat Satellite application life cycle to create, manage, and update hosts. Red Hat Satellite has different methods for provisioning hosts: Bare Metal Provisioning Satellite provisions bare metal hosts primarily through PXE boot and MAC address identification. You can create host entries and specify the MAC address of the physical host to provision. You can also boot blank hosts to use Satellite's discovery service, which creates a pool of ready-to-provision hosts. Cloud Providers Satellite connects to private and public cloud providers to provision instances of hosts from images that are stored with the Cloud environment. This also includes selecting which hardware profile or flavor to use. Virtualization Infrastructure Satellite connects to virtualization infrastructure services such as Red Hat Virtualization and VMware to provision virtual machines from virtual image templates or using the same PXE-based boot methods as bare metal providers. 1.2. Supported Cloud Providers You can connect the following cloud providers as compute resources to Satellite: Red Hat OpenStack Platform Amazon EC2 Google Compute Engine Microsoft Azure 1.3. Supported Virtualization Infrastructure You can connect the following virtualization infrastructure as compute resources to Satellite: KVM (libvirt) Red Hat Virtualization (deprecated) VMware OpenShift Virtualization 1.4. Network Boot Provisioning Workflow For physical or virtual BIOS hosts: Set the first booting device as boot configuration with network. Set the second booting device as boot from hard drive. Satellite manages TFTP boot configuration files so hosts can be easily provisioned just by rebooting. For physical or virtual EFI hosts: Set the first booting device as boot configuration with network. Depending on the EFI firmware type and configuration, the OS installer typically configures the OS boot loader as the first entry. To reboot into installer again, use efibootmgr utility to switch back to boot from network. The provisioning process follows a basic PXE workflow: You create a host and select a domain and subnet. Satellite requests an available IP address from the DHCP Capsule Server that is associated with the subnet or from the PostgreSQL database in Satellite. Satellite loads this IP address into the IP address field in the Create Host window. When you complete all the options for the new host, submit the new host request. Depending on the configuration specifications of the host and its domain and subnet, Satellite creates the following settings: A DHCP record on Capsule Server that is associated with the subnet. A forward DNS record on Capsule Server that is associated with the domain. A reverse DNS record on the DNS Capsule Server that is associated with the subnet. 
PXELinux, Grub, Grub2, and iPXE configuration files for the host in the TFTP Capsule Server that is associated with the subnet. A Puppet certificate on the associated Puppet server. A realm on the associated identity server. The host is configured to boot from the network as the first device and HDD as the second device. The new host requests a DHCP reservation from the DHCP server. The DHCP server responds to the reservation request and returns TFTP -server and filename options. The host requests the boot loader and menu from the TFTP server according to the PXELoader setting. A boot loader is returned over TFTP. The boot loader fetches configuration for the host through its provisioning interface MAC address. The boot loader fetches the operating system installer kernel, init RAM disk, and boot parameters. The installer requests the provisioning template from Satellite. Satellite renders the provision template and returns the result to the host. The installer performs installation of the operating system. The installer registers the host to Satellite using Subscription Manager. The installer installs management tools such as katello-agent and puppet . The installer notifies Satellite of a successful build in the postinstall script. The PXE configuration files revert to a local boot template. The host reboots. The new host requests a DHCP reservation from the DHCP server. The DHCP server responds to the reservation request and returns TFTP -server and filename options. The host requests the bootloader and menu from the TFTP server according to the PXELoader setting. A boot loader is returned over TFTP. The boot loader fetches the configuration for the host through its provision interface MAC address. The boot loader initiates boot from the local drive. If you configured the host to use any Puppet classes, the host configures itself using the modules. The fully provisioned host performs the following workflow: The host is configured to boot from the network as the first device and HDD as the second device. The new host requests a DHCP reservation from the DHCP server. The DHCP server responds to the reservation request and returns TFTP -server and filename options. The host requests the boot loader and menu from the TFTP server according to the PXELoader setting. A boot loader is returned over TFTP. The boot loader fetches the configuration settings for the host through its provisioning interface MAC address. For BIOS hosts: The boot loader returns non-bootable device so BIOS skips to the device (boot from HDD). For EFI hosts: The boot loader finds Grub2 on a ESP partition and chainboots it. If the host is unknown to Satellite, a default bootloader configuration is provided. When Discovery service is enabled, it boots into discovery, otherwise it boots from HDD. This workflow differs depending on custom options. For example: Discovery If you use the discovery service, Satellite automatically detects the MAC address of the new host and restarts the host after you submit a request. Note that TCP port 8443 must be reachable by the Capsule to which the host is attached for Satellite to restart the host. PXE-less Provisioning After you submit a new host request, you must boot the specific host with the boot disk that you download from Satellite and transfer using a USB port of the host. Compute Resources Satellite creates the virtual machine and retrieves the MAC address and stores the MAC address in Satellite. 
If you use image-based provisioning, the host does not follow the standard PXE boot and operating system installation. The compute resource creates a copy of the image for the host to use. Depending on image settings in Satellite, seed data can be passed in for initial configuration, for example using cloud-init . Satellite can connect using SSH to the host and execute a template to finish the customization. Note By default, deleting the provisioned profile host from Satellite does not destroy the actual VM on the external compute resource. To destroy the VM when deleting the host entry on Satellite, navigate to Administer > Settings > Provisioning and configure this behavior using the destroy_vm_on_host_delete setting. If you do not destroy the associated VM and attempt to create a new VM with the same resource name later, it will fail because that VM name already exists in the external compute resource. You can still register the existing VM to Satellite using the standard host registration workflow you would use for any already provisioned host. | null | https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/provisioning_hosts/Introduction_to_Provisioning_provisioning |
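The destroy_vm_on_host_delete setting described in the preceding note can also be managed from the command line. The following is a hedged sketch rather than a definitive procedure; it assumes the hammer CLI is installed and configured to communicate with your Satellite Server:
# Destroy the backing VM when the host entry is deleted
hammer settings set --name destroy_vm_on_host_delete --value true
# Confirm the current value
hammer settings list --search 'name = destroy_vm_on_host_delete'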
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. Let us know how we can improve it. Submitting feedback through Jira (account required) Log in to the Jira website. Click Create in the top navigation bar. Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Click Create at the bottom of the dialogue. | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/configuring_a_red_hat_high_availability_cluster_on_red_hat_openstack_platform/proc_providing-feedback-on-red-hat-documentation_configurng-a-red-hat-high-availability-cluster-on-red-hat-openstack-platform |
Chapter 2. Preparation for deploying Red Hat Process Automation Manager in your OpenShift environment | Chapter 2. Preparation for deploying Red Hat Process Automation Manager in your OpenShift environment Before deploying Red Hat Process Automation Manager in your OpenShift environment, you must complete several procedures. You do not need to repeat these procedures if you want to deploy additional images, for example, for new versions of processes or for other processes. Note If you are deploying a trial environment, complete the procedure described in Section 2.1, "Ensuring your environment is authenticated to the Red Hat registry" and do not complete any other preparation procedures. 2.1. Ensuring your environment is authenticated to the Red Hat registry To deploy Red Hat Process Automation Manager components of Red Hat OpenShift Container Platform, you must ensure that OpenShift can download the correct images from the Red Hat registry. OpenShift must be configured to authenticate with the Red Hat registry using your service account user name and password. This configuration is specific for a namespace, and if operators work, the configuration is already completed for the openshift namespace. However, if the image streams for Red Hat Process Automation Manager are not found in the openshift namespace or if the operator is configured to update Red Hat Process Automation Manager to a new version automatically, the operator needs to download images into the namespace of your project. You must complete the authentication configuration for this namespace. Procedure Ensure you are logged in to OpenShift with the oc command and that your project is active. Complete the steps documented in Registry Service Accounts for Shared Environments . You must log in to Red Hat Customer Portal to access the document and to complete the steps to create a registry service account. Select the OpenShift Secret tab and click the link under Download secret to download the YAML secret file. View the downloaded file and note the name that is listed in the name: entry. Run the following commands: Replace <file_name> with the name of the downloaded file and <secret_name> with the name that is listed in the name: entry of the file. 2.2. Creating the secrets for KIE Server OpenShift uses objects called secrets to hold sensitive information such as passwords or keystores. For more information about OpenShift secrets, see What is a secret in the Red Hat OpenShift Container Platform documentation. In order to provide HTTPS access, KIE Server uses an SSL certificate. The deployment can create a sample secret automatically. However, in production environments you must create an SSL certificate for KIE Server and provide it to your OpenShift environment as a secret. Procedure Generate an SSL keystore named keystore.jks with a private and public key for SSL encryption for KIE Server. For more information about creating keystores and using certificates, see How to Configure Server Security . Note In a production environment, generate a valid signed certificate that matches the expected URL for KIE Server. Record the name of the certificate. The default value for this name in Red Hat Process Automation Manager configuration is jboss . Record the password of the keystore file. The default value for this name in Red Hat Process Automation Manager configuration is mykeystorepass . Use the oc command to generate a secret named kieserver-app-secret from the new keystore file: 2.3. 
Creating the secrets for Business Central In order to provide HTTPS access, Business Central uses an SSL certificate. The deployment can create a sample secret automatically. However, in production environments you must create an SSL certificate for Business Central and provide it to your OpenShift environment as a secret. Do not use the same certificate and keystore for Business Central and KIE Server. Procedure Generate an SSL keystore named keystore.jks with a private and public key for SSL encryption for Business Central. For more information about creating keystores and using certificates, see How to Configure Server Security . Note In a production environment, generate a valid signed certificate that matches the expected URL for Business Central. Record the name of the certificate. The default value for this name in Red Hat Process Automation Manager configuration is jboss . Record the password of the keystore file. The default value for this name in Red Hat Process Automation Manager configuration is mykeystorepass . Use the oc command to generate a secret named businesscentral-app-secret from the new keystore file: 2.4. Creating the secrets for the AMQ broker connection If you want to connect any KIE Server to an AMQ broker and to use SSL for the AMQ broker connection, you must create an SSL certificate for the connection and provide it to your OpenShift environment as a secret. Procedure Generate an SSL keystore named keystore.jks with a private and public key for SSL encryption for the AMQ broker connection. For more information about creating keystores and using certificates, see How to Configure Server Security . Note In a production environment, generate a valid signed certificate that matches the expected URL for the AMQ broker connection. Record the name of the certificate. The default value for this name in Red Hat Process Automation Manager configuration is jboss . Record the password of the keystore file. The default value for this name in Red Hat Process Automation Manager configuration is mykeystorepass . Use the oc command to generate a secret named broker-app-secret from the new keystore file: 2.5. Creating the secrets for Smart Router In order to provide HTTPS access, Smart Router uses an SSL certificate. The deployment can create a sample secret automatically. However, in production environments you must create an SSL certificate for Smart Router and provide it to your OpenShift environment as a secret. Do not use the same certificate and keystore for Smart Router as the ones used for KIE Server or Business Central. Procedure Generate an SSL keystore named keystore.jks with a private and public key for SSL encryption for Smart Router. For more information about creating keystores and using certificates, see How to Configure Server Security . Note In a production environment, generate a valid signed certificate that matches the expected URL for Smart Router. Record the name of the certificate. The default value for this name in Red Hat Process Automation Manager configuration is jboss . Record the password of the keystore file. The default value for this name in Red Hat Process Automation Manager configuration is mykeystorepass . Use the oc command to generate a secret named smartrouter-app-secret from the new keystore file: 2.6.
Building a custom KIE Server extension image for an external database If you want to use an external database server for a KIE Server and the database server is not a MySQL or PostgreSQL server, you must build a custom KIE Server extension image with drivers for this server before deploying your environment. Complete the steps in this build procedure to provide drivers for any of the following database servers: Microsoft SQL Server IBM DB2 Oracle Database Sybase Optionally, you can use this procedure to build a new version of drivers for any of the following database servers: MySQL MariaDB PostgreSQL For the supported versions of the database servers, see Red Hat Process Automation Manager 7 Supported Configurations . The build procedure creates a custom extension image that extends the existing KIE Server image. You must import this custom extension image into your OpenShift environment and then reference it in the EXTENSIONS_IMAGE parameter. Prerequisites You are logged in to your OpenShift environment using the oc command. Your OpenShift user must have the registry-viewer role. For more information about assigning the registry-viewer role, see the "Accessing the registry" section in the "Registry" chapter of the OpenShift Container Platform 4.10 Documentation . For Oracle Database, IBM DB2, or Sybase, you downloaded the JDBC driver from the database server vendor. You have installed the following required software: Docker: For installation instructions, see Get Docker . CEKit version 3.11.0 or higher: For installation instructions, see Installation . The following libraries and extensions for CEKit. For more information, see Dependencies . docker , provided by the python3-docker package or similar package docker-squash , provided by the python3-docker-squash package or similar package behave , provided by the python3-behave package or similar package Procedure For IBM DB2, Oracle Database, or Sybase, provide the JDBC driver JAR file in a local directory. Download the rhpam-7.13.5-openshift-templates.zip product deliverable file from the Software Downloads page of the Red Hat Customer Portal. Unzip the file and, using the command line, change to the contrib/jdbc/cekit directory of the unzipped file. This directory contains the source code for the custom build. Enter one of the following commands, depending on the database server type: For Microsoft SQL Server: For MySQL: For PostgreSQL: For MariaDB: For IBM DB2: In this command, replace /tmp/db2jcc4.jar with the path name of the IBM DB2 driver and 10.2 with the version of the driver. For Oracle Database: In this command, replace /tmp/ojdbc7.jar with the path name of the Oracle Database driver and 7.0 with the version of the driver. For Sybase: In this command, replace /tmp/jconn4-16.0_PL05.jar with the path name of the downloaded Sybase driver and 16.0_PL05 with the version of the driver. Alternatively, if you need to update the driver class or driver XA class for the Sybase driver, you can set the DRIVER_CLASS or DRIVER_XA_CLASS variable for this command, for example: Enter the following command to list the Docker images that are available locally: Note the name of the image that was built, for example, jboss-kie-db2-extension-openshift-image , and the version tag of the image, for example, 11.1.4.4 (not the latest tag). Access the registry of your OpenShift environment directly and push the image to the registry. Depending on your user permissions, you can push the image into the openshift namespace or into a project namespace. 
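A typical push sequence looks like the following sketch. The registry host name default-route-openshift-image-registry.apps.<cluster_domain> is an assumption that only holds when the default registry route is exposed; replace it, the image name, and the tag with the values noted for your build:
# Log in to the exposed internal registry with your OpenShift token
REGISTRY=default-route-openshift-image-registry.apps.<cluster_domain>
docker login -u $(oc whoami) -p $(oc whoami -t) $REGISTRY
# Tag and push the extension image into the openshift namespace (or your project namespace)
docker tag jboss-kie-db2-extension-openshift-image:11.1.4.4 $REGISTRY/openshift/jboss-kie-db2-extension-openshift-image:11.1.4.4
docker push $REGISTRY/openshift/jboss-kie-db2-extension-openshift-image:11.1.4.4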
For instructions about accessing the registry and pushing the images, see Accessing registry directly from the cluster in the Red Hat OpenShift Container Platform product documentation. 2.7. Preparing Git hooks In an authoring environment you can use Git hooks to execute custom operations when the source code of a project in Business Central is changed. The typical use of Git hooks is for interaction with an upstream repository. To enable Git hooks to interact with an upstream repository using SSH authentication, you must also provide a secret key and a known hosts file for authentication with the repository. Skip this procedure if you do not want to configure Git hooks. Procedure Create the Git hooks files. For instructions, see the Git hooks reference documentation . Note A pre-commit script is not supported in Business Central. Use a post-commit script. Create a configuration map (ConfigMap) or persistent volume with the files. For more information about ConfigMaps, see KIE configuration and ConfigMaps . If the Git hooks consist of one or several fixed script files, use the oc command to create a configuration map. For example: If the Git hooks consist of long files or depend on binaries, such as executable or JAR files, use a persistent volume. You must create a persistent volume, create a persistent volume claim and associate the volume with the claim, and transfer files to the volume. For instructions about persistent volumes and persistent volume claims, see Storage in the Red Hat OpenShift Container Platform documentation. For instructions about copying files onto a persistent volume, see Transferring files in and out of containers . If the Git hooks scripts must interact with an upstream repository using SSH authentication, prepare a secret with the necessary files: Prepare the id_rsa file with a private key that matches a public key stored in the repository. Prepare the known_hosts file with the correct name, address, and public key for the repository. Create a secret with the two files using the oc command, for example: Note When the deployment uses this secret, it mounts the id_rsa and known_hosts files into the /home/jboss/.ssh directory on Business Central pods. 2.8. Provisioning persistent volumes with ReadWriteMany access mode using NFS If you want to deploy Business Central Monitoring or high-availability Business Central, your environment must provision persistent volumes with ReadWriteMany access mode. If your configuration requires provisioning persistent volumes with ReadWriteMany access mode but your environment does not support such provisioning, use NFS to provision the volumes. Otherwise, skip this procedure. Procedure Deploy an NFS server and provision the persistent volumes using NFS. For information about provisioning persistent volumes using NFS, see the "Persistent storage using NFS" section of the OpenShift Container Platform Storage guide. 2.9. Extracting the source code from Business Central for use in an S2I build If you are planning to create immutable KIE servers using the source-to-image (S2I) process, you must provide the source code for your services in a Git repository. If you are using Business Central for authoring services, you can extract the source code for your service and place it into a separate Git repository, such as GitHub or an on-premise installation of GitLab, for use in the S2I build. Skip this procedure if you are not planning to use the S2I process or if you are not using Business Central for authoring services. 
Procedure Use the following command to extract the source code: In this command, replace the following variables: <business-central-host> with the host on which Business Central is running <MySpace> with the name of the Business Central space in which the project is located <MyProject> with the name of the project Note To view the full Git URL for a project in Business Central, click Menu Design <MyProject> Settings . Note If you are using self-signed certificates for HTTPS communication, the command might fail with an SSL certificate problem error message. In this case, disable SSL certificate verification in git , for example, using the GIT_SSL_NO_VERIFY environment variable: Upload the source code to another Git repository, such as GitHub or GitLab, for the S2I build. 2.10. Preparing for deployment in a restricted network You can deploy Red Hat Process Automation Manager in a restricted network that is not connected to the public Internet. For instructions about operator deployment in a restricted network, see Using Operator Lifecycle Manager on restricted networks in Red Hat OpenShift Container Platform documentation. Important In Red Hat Process Automation Manager 7.13, deployment on restricted networks is for Technology Preview only. For more information on Red Hat Technology Preview features, see Technology Preview Features Scope . In order to use a deployment that does not have outgoing access to the public Internet, you must also prepare a Maven repository with a mirror of all the necessary artifacts. For instructions about creating this repository, see Section 2.11, "Preparing a Maven mirror repository for offline use" . 2.11. Preparing a Maven mirror repository for offline use If your Red Hat OpenShift Container Platform environment does not have outgoing access to the public Internet, you must prepare a Maven repository with a mirror of all the necessary artifacts and make this repository available to your environment. Note You do not need to complete this procedure if your Red Hat OpenShift Container Platform environment is connected to the Internet. Prerequisites A computer that has outgoing access to the public Internet is available. Procedure Configure a Maven release repository to which you have write access. The repository must allow read access without authentication and your OpenShift environment must have network access to this repository. You can deploy a Nexus repository manager in the OpenShift environment. For instructions about setting up Nexus on OpenShift, see Setting up Nexus in the Red Hat OpenShift Container Platform 3.11 documentation. The documented procedure is applicable to Red Hat OpenShift Container Platform 4. Use this repository as a mirror to host the publicly available Maven artifacts. You can also provide your own services in this repository in order to deploy these services on immutable servers or to deploy them on managed servers using Business Central monitoring. On the computer that has an outgoing connection to the public Internet, complete the following steps: Navigate to the Software Downloads page in the Red Hat Customer Portal (login required), and select the product and version from the drop-down options: Product: Process Automation Manager Version: 7.13.5 Download and extract the Red Hat Process Automation Manager 7.13.5 Offliner Content List ( rhpam-7.13.5-offliner.zip ) product deliverable file. Extract the contents of the rhpam-7.13.5-offliner.zip file into any directory. 
Change to the directory and enter the following command: This command creates the repository subdirectory and downloads the necessary artifacts into this subdirectory. This is the mirror repository. If a message reports that some downloads have failed, run the same command again. If downloads fail again, contact Red Hat support. Upload all artifacts from the repository subdirectory to the Maven mirror repository that you prepared. You can use the Maven Repository Provisioner utility, available from the Maven repository tools Git repository, to upload the artifacts. If you developed services outside of Business Central and they have additional dependencies, add the dependencies to the mirror repository. If you developed the services as Maven projects, you can use the following steps to prepare these dependencies automatically. Complete the steps on the computer that has an outgoing connection to the public Internet. Create a backup of the local Maven cache directory ( ~/.m2/repository ) and then clear the directory. Build the source of your projects using the mvn clean install command. For every project, enter the following command to ensure that Maven downloads all runtime dependencies for all the artifacts generated by the project: Replace /path/to/project/pom.xml with the path of the pom.xml file of the project. Upload all artifacts from the local Maven cache directory ( ~/.m2/repository ) to the Maven mirror repository that you prepared. You can use the Maven Repository Provisioner utility, available from the Maven repository tools Git repository, to upload the artifacts. | [
"create -f <file_name>.yaml secrets link default <secret_name> --for=pull secrets link builder <secret_name> --for=pull",
"oc create secret generic kieserver-app-secret --from-file=keystore.jks",
"oc create secret generic businesscentral-app-secret --from-file=keystore.jks",
"oc create secret generic broker-app-secret --from-file=keystore.jks",
"oc create secret generic smartrouter-app-secret --from-file=keystore.jks",
"make mssql",
"make mysql",
"make postgresql",
"make mariadb",
"make db2 artifact=/tmp/db2jcc4.jar version=10.2",
"make oracle artifact=/tmp/ojdbc7.jar version=7.0",
"make build sybase artifact=/tmp/jconn4-16.0_PL05.jar version=16.0_PL05",
"export DRIVER_CLASS=another.class.Sybase && make sybase artifact=/tmp/jconn4-16.0_PL05.jar version=16.0_PL05",
"docker images",
"create configmap git-hooks --from-file=post-commit=post-commit",
"create secret git-hooks-secret --from-file=id_rsa=id_rsa --from-file=known_hosts=known_hosts",
"git clone https://<business-central-host>:443/git/<MySpace>/<MyProject>",
"env GIT_SSL_NO_VERIFY=true git clone https://<business-central-host>:443/git/<MySpace>/<MyProject>",
"./offline-repo-builder.sh offliner.txt",
"mvn -e -DskipTests dependency:go-offline -f /path/to/project/pom.xml --batch-mode -Djava.net.preferIPv4Stack=true"
] | https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/deploying_red_hat_process_automation_manager_on_red_hat_openshift_container_platform/dm-openshift-prepare-con_openshift-operator |
Chapter 2. Sizing requirements for Red Hat Developer Hub | Chapter 2. Sizing requirements for Red Hat Developer Hub Scalability of Red Hat Developer Hub requires significant resource allocation. The following table lists the sizing requirements for installing and running Red Hat Developer Hub, including both the Developer Hub application and Developer Hub database components. Table 2.1. Recommended sizing for running Red Hat Developer Hub Components Red Hat Developer Hub application Red Hat Developer Hub database Central Processing Unit (CPU) 4 vCPU 2 vCPU Memory 16 GB 8 GB Storage size 2 GB 20 GB Replicas 2 or more 3 or more | null | https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.2/html/getting_started_with_red_hat_developer_hub/ref-rhdh-sizing_rhdh-getting-started |
Chapter 9. Gathering the observability data from multiple clusters | Chapter 9. Gathering the observability data from multiple clusters For a multicluster configuration, you can create one OpenTelemetry Collector instance in each one of the remote clusters and then forward all the telemetry data to one OpenTelemetry Collector instance. Prerequisites The Red Hat build of OpenTelemetry Operator is installed. The Tempo Operator is installed. A TempoStack instance is deployed on the cluster. The following mounted certificates: Issuer, self-signed certificate, CA issuer, client and server certificates. To create any of these certificates, see step 1. Procedure Mount the following certificates in the OpenTelemetry Collector instance, skipping already mounted certificates. An Issuer to generate the certificates by using the cert-manager Operator for Red Hat OpenShift. apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: selfsigned-issuer spec: selfSigned: {} A self-signed certificate. apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: ca spec: isCA: true commonName: ca subject: organizations: - <your_organization_name> organizationalUnits: - Widgets secretName: ca-secret privateKey: algorithm: ECDSA size: 256 issuerRef: name: selfsigned-issuer kind: Issuer group: cert-manager.io A CA issuer. apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: test-ca-issuer spec: ca: secretName: ca-secret The client and server certificates. apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: server spec: secretName: server-tls isCA: false usages: - server auth - client auth dnsNames: - "otel.observability.svc.cluster.local" 1 issuerRef: name: ca-issuer --- apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: client spec: secretName: client-tls isCA: false usages: - server auth - client auth dnsNames: - "otel.observability.svc.cluster.local" 2 issuerRef: name: ca-issuer 1 List of exact DNS names to be mapped to a solver in the server OpenTelemetry Collector instance. 2 List of exact DNS names to be mapped to a solver in the client OpenTelemetry Collector instance. Create a service account for the OpenTelemetry Collector instance. Example ServiceAccount apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector-deployment Create a cluster role for the service account. Example ClusterRole apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector rules: 1 2 - apiGroups: ["", "config.openshift.io"] resources: ["pods", "namespaces", "infrastructures", "infrastructures/status"] verbs: ["get", "watch", "list"] 1 The k8sattributesprocessor requires permissions for pods and namespace resources. 2 The resourcedetectionprocessor requires permissions for infrastructures and status. Bind the cluster role to the service account. Example ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector subjects: - kind: ServiceAccount name: otel-collector-deployment namespace: otel-collector-<example> roleRef: kind: ClusterRole name: otel-collector apiGroup: rbac.authorization.k8s.io Create the YAML file to define the OpenTelemetryCollector custom resource (CR) in the edge clusters. 
Example OpenTelemetryCollector custom resource for the edge clusters apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: otel-collector-<example> spec: mode: daemonset serviceAccount: otel-collector-deployment config: receivers: jaeger: protocols: grpc: {} thrift_binary: {} thrift_compact: {} thrift_http: {} opencensus: otlp: protocols: grpc: {} http: {} zipkin: {} processors: batch: {} k8sattributes: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 resourcedetection: detectors: [openshift] exporters: otlphttp: endpoint: https://observability-cluster.com:443 1 tls: insecure: false cert_file: /certs/server.crt key_file: /certs/server.key ca_file: /certs/ca.crt service: pipelines: traces: receivers: [jaeger, opencensus, otlp, zipkin] processors: [memory_limiter, k8sattributes, resourcedetection, batch] exporters: [otlp] volumes: - name: otel-certs secret: name: otel-certs volumeMounts: - name: otel-certs mountPath: /certs 1 The Collector exporter is configured to export OTLP HTTP and points to the OpenTelemetry Collector from the central cluster. Create the YAML file to define the OpenTelemetryCollector custom resource (CR) in the central cluster. Example OpenTelemetryCollector custom resource for the central cluster apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otlp-receiver namespace: observability spec: mode: "deployment" ingress: type: route route: termination: "passthrough" config: receivers: otlp: protocols: http: tls: 1 cert_file: /certs/server.crt key_file: /certs/server.key client_ca_file: /certs/ca.crt exporters: otlp: endpoint: "tempo-<simplest>-distributor:4317" 2 tls: insecure: true service: pipelines: traces: receivers: [otlp] processors: [] exporters: [otlp] volumes: - name: otel-certs secret: name: otel-certs volumeMounts: - name: otel-certs mountPath: /certs 1 The Collector receiver requires the certificates listed in the first step. 2 The Collector exporter is configured to export OTLP and points to the Tempo distributor endpoint, which in this example is "tempo-simplest-distributor:4317" and already created. | [
"apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: selfsigned-issuer spec: selfSigned: {}",
"apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: ca spec: isCA: true commonName: ca subject: organizations: - <your_organization_name> organizationalUnits: - Widgets secretName: ca-secret privateKey: algorithm: ECDSA size: 256 issuerRef: name: selfsigned-issuer kind: Issuer group: cert-manager.io",
"apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: test-ca-issuer spec: ca: secretName: ca-secret",
"apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: server spec: secretName: server-tls isCA: false usages: - server auth - client auth dnsNames: - \"otel.observability.svc.cluster.local\" 1 issuerRef: name: ca-issuer --- apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: client spec: secretName: client-tls isCA: false usages: - server auth - client auth dnsNames: - \"otel.observability.svc.cluster.local\" 2 issuerRef: name: ca-issuer",
"apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector-deployment",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector rules: 1 2 - apiGroups: [\"\", \"config.openshift.io\"] resources: [\"pods\", \"namespaces\", \"infrastructures\", \"infrastructures/status\"] verbs: [\"get\", \"watch\", \"list\"]",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector subjects: - kind: ServiceAccount name: otel-collector-deployment namespace: otel-collector-<example> roleRef: kind: ClusterRole name: otel-collector apiGroup: rbac.authorization.k8s.io",
"apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: otel-collector-<example> spec: mode: daemonset serviceAccount: otel-collector-deployment config: receivers: jaeger: protocols: grpc: {} thrift_binary: {} thrift_compact: {} thrift_http: {} opencensus: otlp: protocols: grpc: {} http: {} zipkin: {} processors: batch: {} k8sattributes: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 resourcedetection: detectors: [openshift] exporters: otlphttp: endpoint: https://observability-cluster.com:443 1 tls: insecure: false cert_file: /certs/server.crt key_file: /certs/server.key ca_file: /certs/ca.crt service: pipelines: traces: receivers: [jaeger, opencensus, otlp, zipkin] processors: [memory_limiter, k8sattributes, resourcedetection, batch] exporters: [otlp] volumes: - name: otel-certs secret: name: otel-certs volumeMounts: - name: otel-certs mountPath: /certs",
"apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otlp-receiver namespace: observability spec: mode: \"deployment\" ingress: type: route route: termination: \"passthrough\" config: receivers: otlp: protocols: http: tls: 1 cert_file: /certs/server.crt key_file: /certs/server.key client_ca_file: /certs/ca.crt exporters: otlp: endpoint: \"tempo-<simplest>-distributor:4317\" 2 tls: insecure: true service: pipelines: traces: receivers: [otlp] processors: [] exporters: [otlp] volumes: - name: otel-certs secret: name: otel-certs volumeMounts: - name: otel-certs mountPath: /certs"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/red_hat_build_of_opentelemetry/otel-gathering-observability-data-from-multiple-clusters |
7.257. trace-cmd | 7.257. trace-cmd 7.257.1. RHBA-2013:0423 - trace-cmd bug fix and enhancement update Updated trace-cmd packages that fix two bugs and add one enhancement are now available for Red Hat Enterprise Linux 6. The trace-cmd packages contain a command-line tool that interfaces with the ftrace utility in the kernel. Bug Fixes BZ#746656 The trace-cmd extract command read a buffer multiple times even after an EOF condition. Consequently, the output of the trace-cmd command contained duplicate data. With this update, the trace-cmd utility has been modified to respect the EOF condition and avoid duplication of data in its output. BZ#879792 When using the latency tracer, the start_threads() function was not called. Calling the stop_threads() function without first calling start_threads() caused the trace-cmd record command to terminate with a segmentation fault because PIDs were not initialized. Consequently, the trace.dat file was not generated. With this update, stop_threads() is not called unless start_threads() is called first. As a result, the segmentation fault no longer occurs. Enhancement BZ# 838746 Previously, the trace-cmd record command was able to filter ftrace data based on a single PID only. With this update, multiple PIDs can be specified by using the "-P" option. Users of trace-cmd are advised to upgrade to these updated packages, which fix these bugs and add this enhancement. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/trace-cmd |
Appendix A. The Ceph RESTful API specifications | Appendix A. The Ceph RESTful API specifications As a storage administrator, you can access the various Ceph sub-systems through the Ceph RESTful API endpoints. This is a reference guide for the available Ceph RESTful API methods. The available Ceph API endpoints: Section A.1, "Ceph summary" Section A.2, "Authentication" Section A.3, "Ceph File System" Section A.4, "Storage cluster configuration" Section A.5, "CRUSH rules" Section A.6, "Erasure code profiles" Section A.7, "Feature toggles" Section A.8, "Grafana" Section A.9, "Storage cluster health" Section A.11, "Logs" Section A.12, "Ceph Manager modules" Section A.13, "Ceph Monitor" Section A.14, "Ceph OSD" Section A.15, "Ceph Object Gateway" Section A.16, "REST APIs for manipulating a role" Section A.17, "NFS Ganesha" Section A.18, "Ceph Orchestrator" Section A.19, "Pools" Section A.20, "Prometheus" Section A.21, "RADOS block device" Section A.22, "Performance counters" Section A.23, "Roles" Section A.24, "Services" Section A.25, "Settings" Section A.26, "Ceph task" Section A.27, "Telemetry" Section A.28, "Ceph users" Prerequisites An understanding of how to use a RESTful API. A healthy running Red Hat Ceph Storage cluster. The Ceph Manager dashboard module is enabled. A.1. Ceph summary The method reference for using the Ceph RESTful API summary endpoint to display the Ceph summary details. GET /api/summary Description Display a summary of Ceph details. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. Additional Resources See the Ceph RESTful API chapter in the Red Hat Ceph Storage Developer Guide for more details. A.2. Authentication The method reference for using the Ceph RESTful API auth endpoint to initiate a session with Red Hat Ceph Storage. POST /api/auth Curl Example Example Status Codes 201 Created - Resource created. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. POST /api/auth/check Description Check the requirement for an authentication token. Example Status Codes 201 Created - Resource created. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. POST /api/auth/logout Status Codes 201 Created - Resource created. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. 
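As a minimal sketch of how these authentication endpoints are called — the host, user name, and password below are placeholders, and the default dashboard SSL port 8443 is an assumption for your environment — a token is first obtained from POST /api/auth and then passed as a Bearer token on subsequent requests:
# Request a token; the JSON response contains a "token" field
curl -k -X POST "https://<ceph-mgr-host>:8443/api/auth" -H "Accept: application/vnd.ceph.api.v1.0+json" -H "Content-Type: application/json" -d '{"username": "<user>", "password": "<password>"}'
# Use the token on any other endpoint, for example the summary endpoint
curl -k -X GET "https://<ceph-mgr-host>:8443/api/summary" -H "Accept: application/vnd.ceph.api.v1.0+json" -H "Authorization: Bearer <token>"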
Additional Resources See the Ceph RESTful API chapter in the Red Hat Ceph Storage Developer Guide for more details. A.3. Ceph File System The method reference for using the Ceph RESTful API cephfs endpoint to manage Ceph File Systems (CephFS). GET /api/cephfs Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/cephfs/ FS_ID Parameters Replace FS_ID with the Ceph File System identifier string. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. DELETE /api/cephfs/ FS_ID /client/ CLIENT_ID Parameters Replace FS_ID with the Ceph File System identifier string. Replace CLIENT_ID with the Ceph client identifier string. Status Codes 202 Accepted - Operation is still executing. Please check the task queue. 204 No Content - Resource deleted. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/cephfs/ FS_ID /clients Parameters Replace FS_ID with the Ceph File System identifier string. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/cephfs/ FS_ID /get_root_directory Description The root directory that can not be fetched using the ls_dir API call. Parameters Replace FS_ID with the Ceph File System identifier string. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/cephfs/ FS_ID /ls_dir Description List directories for a given path. Parameters Replace FS_ID with the Ceph File System identifier string. Queries: path - The string value where you want to start the listing. The default path is / , if not given. depth - An integer value specifying the number of steps to go down the directory tree. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/cephfs/ FS_ID /mds_counters Parameters Replace FS_ID with the Ceph File System identifier string. Queries: counters - An integer value. Example Status Codes 200 OK - Okay. 
400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/cephfs/ FS_ID /quota Description Display the CephFS quotas for the given path. Parameters Replace FS_ID with the Ceph File System identifier string. Queries: path - A required string value specifying the directory path. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. PUT /api/cephfs/ FS_ID /quota Description Sets the quota for a given path. Parameters Replace FS_ID with the Ceph File System identifier string. max_bytes - A string value defining the byte limit. max_files - A string value defining the file limit. path - A string value defining the path to the directory or file. Example Status Codes 200 OK - Okay. 202 Accepted - Operation is still executing, check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. DELETE /api/cephfs/ FS_ID /snapshot Description Remove a snapshot. Parameters Replace FS_ID with the Ceph File System identifier string. Queries: name - A required string value specifying the snapshot name. path - A required string value defining the path to the directory. Status Codes 202 Accepted - Operation is still executing, check the task queue. 204 No Content - Resource deleted. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. POST /api/cephfs/ FS_ID /snapshot Description Create a snapshot. Parameters Replace FS_ID with the Ceph File System identifier string. name - A string value specifying the snapshot name. If no name is specified, then a name using the current time in RFC3339 UTC format is generated. path - A string value defining the path to the directory. Example Status Codes 201 Created - Resource created. 202 Accepted - Operation is still executing, check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. DELETE /api/cephfs/ FS_ID /tree Description Remove a directory. Parameters Replace FS_ID with the Ceph File System identifier string. Queries: path - A required string value defining the path to the directory. Status Codes 202 Accepted - Operation is still executing, check the task queue. 204 No Content - Resource deleted. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access.
Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. POST /api/cephfs/ FS_ID /tree Description Creates a directory. Parameters Replace FS_ID with the Ceph File System identifier string. path - A string value defining the path to the directory. Example Status Codes 201 Created - Resource created. 202 Accepted - Operation is still executing, check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. Additional Resources See the Ceph RESTful API chapter in the Red Hat Ceph Storage Developer Guide for more details. A.4. Storage cluster configuration The method reference for using the Ceph RESTful API cluster_conf endpoint to manage the Red Hat Ceph Storage cluster. GET /api/cluster_conf Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. POST /api/cluster_conf Example Status Codes 201 Created - Resource created. 202 Accepted - Operation is still executing, check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. PUT /api/cluster_conf Example Status Codes 200 OK - Okay. 202 Accepted - Operation is still executing, check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/cluster_conf/filter Description Display the storage cluster configuration by name. Parameters Queries: names - A string value for the configuration option names. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. DELETE /api/cluster_conf/ NAME Parameters Replace NAME with the storage cluster configuration name. Queries: section - A required string value. Status Codes 202 Accepted - Operation is still executing, check the task queue. 204 No Content - Resource deleted. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/cluster_conf/ NAME Parameters Replace NAME with the storage cluster configuration name. 
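For illustration, a call to this endpoint might look like the following sketch. The host ceph-host:8443, the bearer token placeholder TOKEN (obtained earlier from the Dashboard authentication endpoint), and the option name osd_max_backfills are assumptions for the example, not values taken from this guide; the Example and Status Codes for the endpoint follow.

curl -k -X GET "https://ceph-host:8443/api/cluster_conf/osd_max_backfills" \
     -H "Accept: application/vnd.ceph.api.v1.0+json" \
     -H "Authorization: Bearer TOKEN"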
Example
Status Codes
200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace.
Additional Resources
See the Ceph RESTful API chapter in the Red Hat Ceph Storage Developer Guide for more details.
A.5. CRUSH rules
The method reference for using the Ceph RESTful API crush_rule endpoint to manage the CRUSH rules.
GET /api/crush_rule
Description
List the CRUSH rule configuration.
Example
Status Codes
200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace.
POST /api/crush_rule
Example
Status Codes
201 Created - Resource created. 202 Accepted - Operation is still executing, check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace.
DELETE /api/crush_rule/ NAME
Parameters
Replace NAME with the rule name.
Status Codes
202 Accepted - Operation is still executing, check the task queue. 204 No Content - Resource deleted. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace.
GET /api/crush_rule/ NAME
Parameters
Replace NAME with the rule name.
Example
Status Codes
200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace.
Additional Resources
See the Ceph RESTful API chapter in the Red Hat Ceph Storage Developer Guide for more details.
A.6. Erasure code profiles
The method reference for using the Ceph RESTful API erasure_code_profile endpoint to manage the profiles for erasure coding.
GET /api/erasure_code_profile
Description
List erasure-coded profile information.
Example
Status Codes
200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace.
POST /api/erasure_code_profile
Example
Status Codes
201 Created - Resource created. 202 Accepted - Operation is still executing, check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions.
500 Internal Server Error - Unexpected error. Please check the response body for the stack trace.
DELETE /api/erasure_code_profile/ NAME
Parameters
Replace NAME with the profile name.
Status Codes
202 Accepted - Operation is still executing, check the task queue. 204 No Content - Resource deleted. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace.
GET /api/erasure_code_profile/ NAME
Parameters
Replace NAME with the profile name.
Example
Status Codes
200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace.
Additional Resources
See the Ceph RESTful API chapter in the Red Hat Ceph Storage Developer Guide for more details.
A.7. Feature toggles
The method reference for using the Ceph RESTful API feature_toggles endpoint to list the features of Red Hat Ceph Storage.
GET /api/feature_toggles
Description
List the features of Red Hat Ceph Storage.
Example
Status Codes
200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace.
Additional Resources
See the Ceph RESTful API chapter in the Red Hat Ceph Storage Developer Guide for more details.
A.8. Grafana
The method reference for using the Ceph RESTful API grafana endpoint to manage Grafana.
POST /api/grafana/dashboards
Status Codes
201 Created - Resource created. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace.
GET /api/grafana/url
Description
Display the Grafana URL instance.
Example
Status Codes
200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace.
GET /api/grafana/validation/ PARAMS
Parameters
Replace PARAMS with a string value.
Example
Status Codes
200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace.
Additional Resources
See the Ceph RESTful API chapter in the Red Hat Ceph Storage Developer Guide for more details.
A.9.
Storage cluster health The method reference for using the Ceph RESTful API health endpoint to display the storage cluster health details and status. GET /api/health/full Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/health/minimal Description Display the storage cluster's minimal health report. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. Additional Resources See the Ceph RESTful API chapter in the Red Hat Ceph Storage Developer Guide for more details. A.10. Host The method reference for using the Ceph RESTful API host endpoint to display host, also known as node, information. GET /api/host Description List the host specifications. Parameters Queries: sources - A string value of host sources. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. POST /api/host Example Status Codes 201 Created - Resource created. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. DELETE /api/host/ HOST_NAME Parameters Replace HOST_NAME with the name of the node. Status Codes 202 Accepted - Operation is still executing. Please check the task queue. 204 No Content - Resource deleted. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/host/ HOST_NAME Description Displays information on the given host. Parameters Replace HOST_NAME with the name of the node. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. PUT /api/host/ HOST_NAME Description Updates information for the given host. This method is only supported when the Ceph Orchestrator is enabled. Parameters Replace HOST_NAME with the name of the node. force - Force the host to enter maintenance mode. labels - A list of labels. maintenance - Enter or exit maintenance mode. update_labels - Updates the labels. Example Status Codes 200 OK - Okay. 
202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/host/ HOST_NAME /daemons Parameters Replace HOST_NAME with the name of the node. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/host/ HOST_NAME /devices Parameters Replace HOST_NAME with the name of the node. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. POST /api/host/ HOST_NAME /identify_device Description Identify a device by switching on the device's light for a specified number of seconds. Parameters Replace HOST_NAME with the name of the node. device - The device id, such as, /dev/dm-0 or ABC1234DEF567-1R1234_ABC8DE0Q . duration - The number of seconds the device's LED should flash. Example Status Codes 201 Created - Resource created. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/host/ HOST_NAME /inventory Description Display the inventory of the host. Parameters Replace HOST_NAME with the name of the node. Queries: refresh - A string value to trigger an asynchronous refresh. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/host/ HOST_NAME /smart Parameters Replace HOST_NAME with the name of the node. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. Additional Resources See the Ceph RESTful API chapter in the Red Hat Ceph Storage Developer Guide for more details. A.11. Logs The method reference for using the Ceph RESTful API logs endpoint to display log information. GET /api/logs/all Description View all the log configuration. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. 
Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. Additional Resources See the Ceph RESTful API chapter in the Red Hat Ceph Storage Developer Guide for more details. A.12. Ceph Manager modules The method reference for using the Ceph RESTful API mgr/module endpoint to manage the Ceph Manager modules. GET /api/mgr/module Description View the list of managed modules. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/mgr/module/ MODULE_NAME Description Retrieve the values of the persistent configuration settings. Parameters Replace MODULE_NAME with the Ceph Manager module name. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. PUT /api/mgr/module/ MODULE_NAME Description Set the values of the persistent configuration settings. Parameters Replace MODULE_NAME with the Ceph Manager module name. config - The values of the module options. Example Status Codes 200 OK - Okay. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. POST /api/mgr/module/ MODULE_NAME /disable Description Disable the given Ceph Manager module. Parameters Replace MODULE_NAME with the Ceph Manager module name. Status Codes 201 Created - Resource created. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. POST /api/mgr/module/ MODULE_NAME /enable Description Enable the given Ceph Manager module. Parameters Replace MODULE_NAME with the Ceph Manager module name. Status Codes 201 Created - Resource created. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/mgr/module/ MODULE_NAME /options Description View the options for the given Ceph Manager module. Parameters Replace MODULE_NAME with the Ceph Manager module name. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 
500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. Additional Resources See the Ceph RESTful API chapter in the Red Hat Ceph Storage Developer Guide for more details. A.13. Ceph Monitor The method reference for using the Ceph RESTful API monitor endpoint to display information on the Ceph Monitor. GET /api/monitor Description View Ceph Monitor details. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. Additional Resources See the Ceph RESTful API chapter in the Red Hat Ceph Storage Developer Guide for more details. A.14. Ceph OSD The method reference for using the Ceph RESTful API osd endpoint to manage the Ceph OSDs. GET /api/osd Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. POST /api/osd Example Status Codes 201 Created - Resource created. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/osd/flags Description View the Ceph OSD flags. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. PUT /api/osd/flags Description Sets the Ceph OSD flags for the entire storage cluster. Parameters The recovery_deletes , sortbitwise , and pglog_hardlimit flags can not be unset. The purged_snapshots flag can not be set. Important You must include these four flags for a successful operation. Example Status Codes 200 OK - Okay. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/osd/flags/individual Description View the individual Ceph OSD flags. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. PUT /api/osd/flags/individual Description Updates the noout , noin , nodown , and noup flags for an individual subset of Ceph OSDs. Example Status Codes 200 OK - Okay. 202 Accepted - Operation is still executing. 
Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/osd/safe_to_delete Parameters Queries: svc_ids - A required string of the Ceph OSD service identifier. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/osd/safe_to_destroy Description Check to see if the Ceph OSD is safe to destroy. Parameters Queries: ids - A required string of the Ceph OSD service identifier. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. DELETE /api/osd/ SVC_ID Parameters Replace SVC_ID with a string value for the Ceph OSD service identifier. Queries: preserve_id - A string value. force - A string value. Status Codes 202 Accepted - Operation is still executing. Please check the task queue. 204 No Content - Resource deleted. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/osd/ SVC_ID Description Returns collected data about a Ceph OSD. Parameters Replace SVC_ID with a string value for the Ceph OSD service identifier. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. PUT /api/osd/ SVC_ID Parameters Replace SVC_ID with a string value for the Ceph OSD service identifier. Example Status Codes 200 OK - Okay. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. POST /api/osd/ SVC_ID /destroy Description Marks Ceph OSD as being destroyed. The Ceph OSD must be marked down before being destroyed. This operation keeps the Ceph OSD identifier intact, but removes the Cephx keys, configuration key data, and lockbox keys. Warning This operation renders the data permanently unreadable. Parameters Replace SVC_ID with a string value for the Ceph OSD service identifier. Status Codes 201 Created - Resource created. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. 
Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/osd/ SVC_ID /devices Parameters Replace SVC_ID with a string value for the Ceph OSD service identifier. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/osd/ SVC_ID /histogram Description Returns the Ceph OSD histogram data. Parameters Replace SVC_ID with a string value for the Ceph OSD service identifier. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. PUT /api/osd/ SVC_ID /mark Description Marks a Ceph OSD out , in , down , and lost . Note A Ceph OSD must be marked down before marking it lost . Parameters Replace SVC_ID with a string value for the Ceph OSD service identifier. Example Status Codes 200 OK - Okay. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. POST /api/osd/ SVC_ID /purge Description Removes the Ceph OSD from the CRUSH map. Note The Ceph OSD must be marked down before removal. Parameters Replace SVC_ID with a string value for the Ceph OSD service identifier. Status Codes 201 Created - Resource created. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. POST /api/osd/ SVC_ID /reweight Description Temporarily reweights the Ceph OSD. When a Ceph OSD is marked out , the OSD's weight is set to 0 . When the Ceph OSD is marked back in , the OSD's weight is set to 1 . Parameters Replace SVC_ID with a string value for the Ceph OSD service identifier. Example Status Codes 201 Created - Resource created. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. POST /api/osd/ SVC_ID /scrub Parameters Replace SVC_ID with a string value for the Ceph OSD service identifier. Queries: deep - A boolean value, either true or false . Example Status Codes 201 Created - Resource created. 202 Accepted - Operation is still executing. 
Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/osd/ SVC_ID /smart Parameters Replace SVC_ID with a string value for the Ceph OSD service identifier. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. Additional Resources See the Ceph RESTful API chapter in the Red Hat Ceph Storage Developer Guide for more details. A.15. Ceph Object Gateway The method reference for using the Ceph RESTful API rgw endpoint to manage the Ceph Object Gateway. GET /api/rgw/status Description Display the Ceph Object Gateway status. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/rgw/daemon Description Display the Ceph Object Gateway daemons. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/rgw/daemon/ SVC_ID Parameters Replace SVC_ID with the service identifier as a string value. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/rgw/site Parameters Queries: query - A string value. daemon_name - The name of the daemon as a string value. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. Bucket Management GET /api/rgw/bucket Parameters Queries: stats - A boolean value for bucket statistics. daemon_name - The name of the daemon as a string value. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. POST /api/rgw/bucket Example Status Codes 201 Created - Resource created. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 
401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. DELETE /api/rgw/bucket/ BUCKET Parameters Replace BUCKET with the bucket name as a string value. Queries: purge_objects - A string value. daemon_name - The name of the daemon as a string value. Status Codes 202 Accepted - Operation is still executing. Please check the task queue. 204 No Content - Resource deleted. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/rgw/bucket/ BUCKET Parameters Replace BUCKET with the bucket name as a string value. Queries: daemon_name - The name of the daemon as a string value. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. PUT /api/rgw/bucket/ BUCKET Parameters Replace BUCKET with the bucket name as a string value. Example Status Codes 200 OK - Okay. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. User Management GET /api/rgw/user Description Display the Ceph Object Gateway users. Parameters Queries: daemon_name - The name of the daemon as a string value. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. POST /api/rgw/user Example Status Codes 201 Created - Resource created. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/rgw/user/get_emails Parameters Queries: daemon_name - The name of the daemon as a string value. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. DELETE /api/rgw/user/ UID Parameters Replace UID with the user identifier as a string. Queries: daemon_name - The name of the daemon as a string value. Status Codes 202 Accepted - Operation is still executing. Please check the task queue. 
204 No Content - Resource deleted. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/rgw/user/ UID Parameters Replace UID with the user identifier as a string. Queries: daemon_name - The name of the daemon as a string value. stats - A boolean value for user statistics. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. PUT /api/rgw/user/ UID Parameters Replace UID with the user identifier as a string. Example Status Codes 200 OK - Okay. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. DELETE /api/rgw/user/ UID /capability Parameters Replace UID with the user identifier as a string. Queries: daemon_name - The name of the daemon as a string value. type - Required. A string value. perm - Required. A string value. Status Codes 202 Accepted - Operation is still executing. Please check the task queue. 204 No Content - Resource deleted. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. POST /api/rgw/user/ UID /capability Parameters Replace UID with the user identifier as a string. Example Status Codes 201 Created - Resource created. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. DELETE /api/rgw/user/ UID /key Parameters Replace UID with the user identifier as a string. Queries: daemon_name - The name of the daemon as a string value. key_type - A string value. subuser - A string value. access_key - A string value. Status Codes 202 Accepted - Operation is still executing. Please check the task queue. 204 No Content - Resource deleted. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. POST /api/rgw/user/ UID /key Parameters Replace UID with the user identifier as a string. Example Status Codes 201 Created - Resource created. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. 
Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/rgw/user/ UID /quota Parameters Replace UID with the user identifier as a string. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. PUT /api/rgw/user/ UID /quota Parameters Replace UID with the user identifier as a string. Example Status Codes 200 OK - Okay. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. POST /api/rgw/user/ UID /subuser Parameters Replace UID with the user identifier as a string. Example Status Codes 201 Created - Resource created. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. DELETE /api/rgw/user/ UID /subuser/ SUBUSER Parameters Replace UID with the user identifier as a string. Replace SUBUSER with the sub user name as a string. Queries: purge_keys - Set to false to not purge the keys. This only works for S3 subusers. daemon_name - The name of the daemon as a string value. Status Codes 202 Accepted - Operation is still executing. Please check the task queue. 204 No Content - Resource deleted. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. Additional Resources See the Ceph RESTful API chapter in the Red Hat Ceph Storage Developer Guide for more details. A.16. REST APIs for manipulating a role In addition to the radosgw-admin role commands, you can use the REST APIs for manipulating a role. To invoke the REST admin APIs, create a user with admin caps. Example Create a role: Syntax Example Example response Get a role: Syntax Example Example response List a role: Syntax Example request Example response Update the assume role policy document: Syntax Example Update policy attached to a role: Syntax Example List permission policy names attached to a role: Syntax Example Get permission policy attached to a role: Syntax Example Delete policy attached to a role: Syntax Example Delete a role: Note You can delete a role only when it does not have any permission policy attached to it. Syntax Example Additional Resources See the Role management section in the Red Hat Ceph Storage Object Gateway Guide for details. A.17. 
NFS Ganesha The method reference for using the Ceph RESTful API nfs-ganesha endpoint to manage the Ceph NFS gateway. GET /api/nfs-ganesha/daemon Description View information on the NFS Ganesha daemons. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/nfs-ganesha/export Description View all of the NFS Ganesha exports. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. POST /api/nfs-ganesha/export Description Creates a new NFS Ganesha export. Example Status Codes 201 Created - Resource created. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. DELETE /api/nfs-ganesha/export/ CLUSTER_ID / EXPORT_ID Description Deletes a NFS Ganesha export. Parameters Replace CLUSTER_ID with the storage cluster identifier string. Replace EXPORT_ID with the export identifier as an integer. Queries: reload_daemons - A boolean value that triggers the reloading of the NFS Ganesha daemons configuration. Status Codes 202 Accepted - Operation is still executing. Please check the task queue. 204 No Content - Resource deleted. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/nfs-ganesha/export/ CLUSTER_ID / EXPORT_ID Description View NFS Ganesha export information. Parameters Replace CLUSTER_ID with the storage cluster identifier string. Replace EXPORT_ID with the export identifier as an integer. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. PUT /api/nfs-ganesha/export/ CLUSTER_ID / EXPORT_ID Description Update the NFS Ganesha export information. Parameters Replace CLUSTER_ID with the storage cluster identifier string. Replace EXPORT_ID with the export identifier as an integer. Example Status Codes 200 OK - Okay. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. 
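As a rough sketch of how these export calls are typically issued, the following request retrieves a single export. The host ceph-host:8443, the TOKEN placeholder (a bearer token obtained earlier from the Dashboard authentication endpoint), the cluster identifier nfs-cluster, and the export identifier 1 are assumptions for illustration only:

curl -k -X GET "https://ceph-host:8443/api/nfs-ganesha/export/nfs-cluster/1" \
     -H "Accept: application/vnd.ceph.api.v1.0+json" \
     -H "Authorization: Bearer TOKEN"

The same URL form with DELETE removes the export, and with PUT and a JSON body updates it, as described in the preceding entries.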
GET /api/nfs-ganesha/status Description View the status information for the NFS Ganesha management feature. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. Additional Resources See the Ceph RESTful API chapter in the Red Hat Ceph Storage Developer Guide for more details. See the Exporting the Namespace to NFS-Ganesha section in the Red Hat Ceph Storage Object Gateway Guide for more information. A.18. Ceph Orchestrator The method reference for using the Ceph RESTful API orchestrator endpoint to display the Ceph Orchestrator status. GET /api/orchestrator/status Description Display the Ceph Orchestrator status. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. Additional Resources See the Ceph RESTful API chapter in the Red Hat Ceph Storage Developer Guide for more details. A.19. Pools The method reference for using the Ceph RESTful API pool endpoint to manage the storage pools. GET /api/pool Description Display the pool list. Parameters Queries: attrs - A string value of pool attributes. stats - A boolean value for pool statistics. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. POST /api/pool Example Status Codes 201 Created - Resource created. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. DELETE /api/pool/ POOL_NAME Parameters Replace POOL_NAME with the name of the pool. Status Codes 202 Accepted - Operation is still executing. Please check the task queue. 204 No Content - Resource deleted. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/pool/ POOL_NAME Parameters Replace POOL_NAME with the name of the pool. Queries: attrs - A string value of pool attributes. stats - A boolean value for pool statistics. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. 
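A minimal sketch of fetching a single pool together with its statistics, using the stats query listed above. The host ceph-host:8443, the TOKEN placeholder (a bearer token obtained earlier from the Dashboard authentication endpoint), and the pool name testpool are assumptions for illustration only:

curl -k -X GET "https://ceph-host:8443/api/pool/testpool?stats=true" \
     -H "Accept: application/vnd.ceph.api.v1.0+json" \
     -H "Authorization: Bearer TOKEN"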
PUT /api/pool/ POOL_NAME Parameters Replace POOL_NAME with the name of the pool. Example Status Codes 200 OK - Okay. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/pool/ POOL_NAME /configuration Parameters Replace POOL_NAME with the name of the pool. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. Additional Resources See the Ceph RESTful API chapter in the Red Hat Ceph Storage Developer Guide for more details. A.20. Prometheus The method reference for using the Ceph RESTful API prometheus endpoint to manage Prometheus. GET /api/prometheus Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/prometheus/rules Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. POST /api/prometheus/silence Status Codes 201 Created - Resource created. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. DELETE /api/prometheus/silence/ S_ID Parameters Replace S_ID with a string value. Status Codes 202 Accepted - Operation is still executing. Please check the task queue. 204 No Content - Resource deleted. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/prometheus/silences Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/prometheus/notifications Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 
500 Internal Server Error - Unexpected error. Please check the response body for the stack trace.
Additional Resources
See the Ceph RESTful API chapter in the Red Hat Ceph Storage Developer Guide for more details.
A.21. RADOS block device
The method reference for using the Ceph RESTful API block endpoint to manage RADOS block devices (RBD). This reference includes all available RBD feature endpoints, such as:
RBD Namespace
RBD Snapshots
RBD Trash
RBD Mirroring
RBD Mirroring Summary
RBD Mirroring Pool Bootstrap
RBD Mirroring Pool Mode
RBD Mirroring Pool Peer
RBD Images
GET /api/block/image
Description
View the RBD images.
Parameters
Queries: pool_name - The pool name as a string.
Example
Status Codes
200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace.
POST /api/block/image
Example
Status Codes
201 Created - Resource created. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace.
GET /api/block/image/clone_format_version
Description
Returns the RBD clone format version.
Example
Status Codes
200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace.
GET /api/block/image/default_features
Example
Status Codes
200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace.
DELETE /api/block/image/ IMAGE_SPEC
Parameters
Replace IMAGE_SPEC with the image name as a string value.
Status Codes
202 Accepted - Operation is still executing. Please check the task queue. 204 No Content - Resource deleted. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace.
GET /api/block/image/ IMAGE_SPEC
Parameters
Replace IMAGE_SPEC with the image name as a string value.
Example
Status Codes
200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first.
403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. PUT /api/block/image/ IMAGE_SPEC Parameters Replace IMAGE_SPEC with the image name as a string value. Example Status Codes 200 OK - Okay. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. POST /api/block/image/ IMAGE_SPEC /copy Parameters Replace IMAGE_SPEC with the image name as a string value. Example Status Codes 201 Created - Resource created. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. POST /api/block/image/ IMAGE_SPEC /flatten Parameters Replace IMAGE_SPEC with the image name as a string value. Status Codes 201 Created - Resource created. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. POST /api/block/image/ IMAGE_SPEC /move_trash Description Move an image to the trash. Images actively in-use by clones can be moved to the trash, and deleted at a later time. Parameters Replace IMAGE_SPEC with the image name as a string value. Example Status Codes 201 Created - Resource created. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. RBD Mirroring GET /api/block/mirroring/site_name Description Display the RBD mirroring site name. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. PUT /api/block/mirroring/site_name Example Status Codes 200 OK - Okay. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. RBD Mirroring Pool Bootstrap POST /api/block/mirroring/pool/ POOL_NAME /bootstrap/peer Parameters Replace POOL_NAME with the name of the pool as a string. 
Example Status Codes 201 Created - Resource created. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. POST /api/block/mirroring/pool/ POOL_NAME /bootstrap/token Parameters Replace POOL_NAME with the name of the pool as a string. Status Codes 201 Created - Resource created. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. RBD Mirroring Pool Mode GET /api/block/mirroring/pool/ POOL_NAME Description Display the RBD mirroring summary. Parameters Replace POOL_NAME with the name of the pool as a string. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. PUT /api/block/mirroring/pool/ POOL_NAME Parameters Replace POOL_NAME with the name of the pool as a string. Example Status Codes 200 OK - Okay. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. RBD Mirroring Pool Peer GET /api/block/mirroring/pool/ POOL_NAME /peer Parameters Replace POOL_NAME with the name of the pool as a string. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. POST /api/block/mirroring/pool/ POOL_NAME /peer Parameters Replace POOL_NAME with the name of the pool as a string. Example Status Codes 201 Created - Resource created. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. DELETE /api/block/mirroring/pool/ POOL_NAME /peer/ PEER_UUID Parameters Replace POOL_NAME with the name of the pool as a string. Replace PEER_UUID with the UUID of the peer as a string. Status Codes 202 Accepted - Operation is still executing. Please check the task queue. 204 No Content - Resource deleted. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 
403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/block/mirroring/pool/ POOL_NAME /peer/ PEER_UUID Parameters Replace POOL_NAME with the name of the pool as a string. Replace PEER_UUID with the UUID of the peer as a string. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. PUT /api/block/mirroring/pool/ POOL_NAME /peer/ PEER_UUID Parameters Replace POOL_NAME with the name of the pool as a string. Replace PEER_UUID with the UUID of the peer as a string. Example Status Codes 200 OK - Okay. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. RBD Mirroring Summary GET /api/block/mirroring/summary Description Display the RBD mirroring summary. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. RBD Namespace GET /api/block/pool/ POOL_NAME /namespace Parameters Replace POOL_NAME with the name of the pool as a string. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. POST /api/block/pool/ POOL_NAME /namespace Parameters Replace POOL_NAME with the name of the pool as a string. Example Status Codes 201 Created - Resource created. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. DELETE /api/block/pool/ POOL_NAME /namespace/ NAMESPACE Parameters Replace POOL_NAME with the name of the pool as a string. Replace NAMESPACE with the namespace as a string. Status Codes 202 Accepted - Operation is still executing. Please check the task queue. 204 No Content - Resource deleted. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. RBD Snapshots POST /api/block/image/ IMAGE_SPEC /snap Parameters Replace IMAGE_SPEC with the image name as a string value. 
Example Status Codes 201 Created - Resource created. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. DELETE /api/block/image/ IMAGE_SPEC /snap/ SNAPSHOT_NAME Parameters Replace IMAGE_SPEC with the image name as a string value. Replace SNAPSHOT_NAME with the name of the snapshot as a string value. Status Codes 202 Accepted - Operation is still executing. Please check the task queue. 204 No Content - Resource deleted. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. PUT /api/block/image/ IMAGE_SPEC /snap/ SNAPSHOT_NAME Parameters Replace IMAGE_SPEC with the image name as a string value. Replace SNAPSHOT_NAME with the name of the snapshot as a string value. Example Status Codes 200 OK - Okay. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. POST /api/block/image/ IMAGE_SPEC /snap/ SNAPSHOT_NAME /clone Description Clones a snapshot to an image. Parameters Replace IMAGE_SPEC with the image name as a string value. Replace SNAPSHOT_NAME with the name of the snapshot as a string value. Example Status Codes 201 Created - Resource created. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. POST /api/block/image/ IMAGE_SPEC /snap/ SNAPSHOT_NAME /rollback Parameters Replace IMAGE_SPEC with the image name as a string value. Replace SNAPSHOT_NAME with the name of the snapshot as a string value. Status Codes 201 Created - Resource created. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. RBD Trash GET /api/block/image/trash Description Display all the RBD trash entries, or the RBD trash details by pool name. Parameters Queries: pool_name - The name of the pool as a string value. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. 
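As an illustration of how the trash endpoints above can be called from a shell, the following curl sketch lists the trash entries of a single pool. It is not taken from this guide: the host and port reuse the values from the authentication example elsewhere in this guide, the pool name rbd_pool is an assumption, and TOKEN stands for a session token previously obtained from the POST /api/auth endpoint and assumed to be passed as a bearer token.

curl -k -X GET "https://192.168.0.44:8443/api/block/image/trash?pool_name=rbd_pool" \
     -H 'Accept: application/vnd.ceph.api.v1.0+json' \
     -H "Authorization: Bearer TOKEN"

The same pattern, switched to -X POST with a JSON body, applies to the purge and restore calls documented next.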
POST /api/block/image/trash/purge Description Remove all the expired images from trash. Parameters Queries: pool_name - The name of the pool as a string value. Example Status Codes 201 Created - Resource created. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. DELETE /api/block/image/trash/ IMAGE_ID_SPEC Description Deletes an image from the trash. If the image deferment time has not expired, you cannot delete it unless you use force. An image that is actively in use by clones, or that has snapshots, cannot be deleted. Parameters Replace IMAGE_ID_SPEC with the image name as a string value. Queries: force - A boolean value to force the deletion of an image from trash. Status Codes 202 Accepted - Operation is still executing. Please check the task queue. 204 No Content - Resource deleted. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. POST /api/block/image/trash/ IMAGE_ID_SPEC /restore Description Restores an image from the trash. Parameters Replace IMAGE_ID_SPEC with the image name as a string value. Example Status Codes 201 Created - Resource created. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. Additional Resources See the Ceph RESTful API chapter in the Red Hat Ceph Storage Developer Guide for more details. A.22. Performance counters The method reference for using the Ceph RESTful API perf_counters endpoint to display the various Ceph performance counters. This reference includes all available performance counter endpoints, such as: Ceph Metadata Server (MDS) Ceph Manager Ceph Monitor Ceph OSD Ceph Object Gateway Ceph RADOS Block Device (RBD) Mirroring TCMU Runner GET /api/perf_counters Description Displays the performance counters. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. Ceph Metadata Server GET /api/perf_counters/mds/ SERVICE_ID Parameters Replace SERVICE_ID with the required service identifier as a string. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace.
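For a concrete picture of a performance counter query, the hedged curl sketch below fetches the counters of one MDS daemon; the host and media type again reuse the authentication example from this guide, while SERVICE_ID and TOKEN are placeholders that must be substituted, the bearer-token usage being an assumption based on the POST /api/auth endpoint.

curl -k -X GET "https://192.168.0.44:8443/api/perf_counters/mds/SERVICE_ID" \
     -H 'Accept: application/vnd.ceph.api.v1.0+json' \
     -H "Authorization: Bearer TOKEN"

The remaining perf_counters endpoints in this section differ only in the path segment that names the daemon type.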
Ceph Manager GET /api/perf_counters/mgr/ SERVICE_ID Parameters Replace SERVICE_ID with the required service identifier as a string. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. Ceph Monitor GET /api/perf_counters/mon/ SERVICE_ID Parameters Replace SERVICE_ID with the required service identifier as a string. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. Ceph OSD GET /api/perf_counters/osd/ SERVICE_ID Parameters Replace SERVICE_ID with the required service identifier as a string. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. Ceph RADOS Block Device (RBD) Mirroring GET /api/perf_counters/rbd-mirror/ SERVICE_ID Parameters Replace SERVICE_ID with the required service identifier as a string. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. Ceph Object Gateway GET /api/perf_counters/rgw/ SERVICE_ID Parameters Replace SERVICE_ID with the required service identifier as a string. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. TCMU Runner GET /api/perf_counters/tcmu-runner/ SERVICE_ID Parameters Replace SERVICE_ID with the required service identifier as a string. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. Additional Resources See the Ceph RESTful API chapter in the Red Hat Ceph Storage Developer Guide for more details. A.23. Roles The method reference for using the Ceph RESTful API role endpoint to manage the various user roles in Ceph. GET /api/role Description Display the role list. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. 
Please check the response body for the stack trace. POST /api/role Example Status Codes 201 Created - Resource created. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. DELETE /api/role/ NAME Parameters Replace NAME with the role name as a string. Status Codes 202 Accepted - Operation is still executing. Please check the task queue. 204 No Content - Resource deleted. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/role/ NAME Parameters Replace NAME with the role name as a string. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. PUT /api/role/ NAME Parameters Replace NAME with the role name as a string. Example Status Codes 200 OK - Okay. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. POST /api/role/ NAME /clone Parameters Replace NAME with the role name as a string. Example Status Codes 201 Created - Resource created. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. Additional Resources See the Ceph RESTful API chapter in the Red Hat Ceph Storage Developer Guide for more details. A.24. Services The method reference for using the Ceph RESTful API service endpoint to manage the various Ceph services. GET /api/service Parameters Queries: service_name - The name of the service as a string. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. POST /api/service Parameters service_spec - The service specification as a JSON file. service_name - The name of the service. Example Status Codes 201 Created - Resource created. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 
403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/service/known_types Description Display a list of known service types. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. DELETE /api/service/ SERVICE_NAME Parameters Replace SERVICE_NAME with the name of the service as a string. Status Codes 202 Accepted - Operation is still executing. Please check the task queue. 204 No Content - Resource deleted. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/service/ SERVICE_NAME Parameters Replace SERVICE_NAME with the name of the service as a string. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/service/ SERVICE_NAME /daemons Parameters Replace SERVICE_NAME with the name of the service as a string. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. Additional Resources See the Ceph RESTful API chapter in the Red Hat Ceph Storage Developer Guide for more details. A.25. Settings The method reference for using the Ceph RESTful API settings endpoint to manage the various Ceph settings. GET /api/settings Description Display the list of available options Parameters Queries: names - A comma-separated list of option names. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. PUT /api/settings Status Codes 200 OK - Okay. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. DELETE /api/settings/ NAME Parameters Replace NAME with the option name as a string. Status Codes 202 Accepted - Operation is still executing. Please check the task queue. 204 No Content - Resource deleted. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. 
Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/settings/ NAME Description Display the given option. Parameters Replace NAME with the option name as a string. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. PUT /api/settings/ NAME Parameters Replace NAME with the option name as a string. Example Status Codes 200 OK - Okay. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. Additional Resources See the Ceph RESTful API chapter in the Red Hat Ceph Storage Developer Guide for more details. A.26. Ceph task The method reference for using the Ceph RESTful API task endpoint to display Ceph tasks. GET /api/task Description Display Ceph tasks. Parameters Queries: name - The name of the task. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. Additional Resources See the Ceph RESTful API chapter in the Red Hat Ceph Storage Developer Guide for more details. A.27. Telemetry The method reference for using the Ceph RESTful API telemetry endpoint to manage data for the telemetry Ceph Manager module. PUT /api/telemetry Description Enables or disables the sending of collected data by the telemetry module. Parameters enable - A boolean value. license_name - A string value, such as, sharing-1-0 . Make sure the user is aware of and accepts the license for sharing telemetry data. Example Status Codes 200 OK - Okay. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/telemetry/report Description Display report data on Ceph and devices. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. Additional Resources See the Ceph RESTful API chapter in the Red Hat Ceph Storage Developer Guide for more details. See the Activating and deactivating telemetry chapter in the Red Hat Ceph Storage Dashboard Guide for details about managing with the Ceph dashboard. A.28. 
Ceph users The method reference for using the Ceph RESTful API user endpoint to display Ceph user details and to manage Ceph user passwords. GET /api/user Description Display a list of users. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. POST /api/user Example Status Codes 201 Created - Resource created. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. DELETE /api/user/ USER_NAME Parameters Replace USER_NAME with the name of the user as a string. Status Codes 202 Accepted - Operation is still executing. Please check the task queue. 204 No Content - Resource deleted. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. GET /api/user/ USER_NAME Parameters Replace USER_NAME with the name of the user as a string. Example Status Codes 200 OK - Okay. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. PUT /api/user/ USER_NAME Parameters Replace USER_NAME with the name of the user as a string. Example Status Codes 200 OK - Okay. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. POST /api/user/ USER_NAME /change_password Parameters Replace USER_NAME with the name of the user as a string. Example Status Codes 201 Created - Resource created. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. POST /api/user/validate_password Description Checks the password to see if it meets the password policy. Parameters password - The password to validate. username - Optional. The name of the user. old_password - Optional. The old password. Example Status Codes 201 Created - Resource created. 202 Accepted - Operation is still executing. Please check the task queue. 400 Bad Request - Operation exception. Please check the response body for details. 401 Unauthorized - Unauthenticated access. Please login first. 
403 Forbidden - Unauthorized access. Please check your permissions. 500 Internal Server Error - Unexpected error. Please check the response body for the stack trace. Additional Resources See the Ceph RESTful API chapter in the Red Hat Ceph Storage Developer Guide for more details. | [
"GET /api/summary HTTP/1.1 Host: example.com",
"curl -i -k --location -X POST 'https://192.168.0.44:8443/api/auth' -H 'Accept: application/vnd.ceph.api.v1.0+json' -H 'Content-Type: application/json' --data '{\"password\": \"admin@123\", \"username\": \"admin\"}'",
"POST /api/auth HTTP/1.1 Host: example.com Content-Type: application/json { \"password\": \" STRING \", \"username\": \" STRING \" }",
"POST /api/auth/check?token= STRING HTTP/1.1 Host: example.com Content-Type: application/json { \"token\": \" STRING \" }",
"GET /api/cephfs HTTP/1.1 Host: example.com",
"GET /api/cephfs/ FS_ID HTTP/1.1 Host: example.com",
"GET /api/cephfs/ FS_ID /clients HTTP/1.1 Host: example.com",
"GET /api/cephfs/ FS_ID /get_root_directory HTTP/1.1 Host: example.com",
"GET /api/cephfs/ FS_ID /ls_dir HTTP/1.1 Host: example.com",
"GET /api/cephfs/ FS_ID /mds_counters HTTP/1.1 Host: example.com",
"GET /api/cephfs/ FS_ID /quota?path= STRING HTTP/1.1 Host: example.com",
"PUT /api/cephfs/ FS_ID /quota HTTP/1.1 Host: example.com Content-Type: application/json { \"max_bytes\": \" STRING \", \"max_files\": \" STRING \", \"path\": \" STRING \" }",
"POST /api/cephfs/ FS_ID /snapshot HTTP/1.1 Host: example.com Content-Type: application/json { \"name\": \" STRING \", \"path\": \" STRING \" }",
"POST /api/cephfs/ FS_ID /tree HTTP/1.1 Host: example.com Content-Type: application/json { \"path\": \" STRING \" }",
"GET /api/cluster_conf HTTP/1.1 Host: example.com",
"POST /api/cluster_conf HTTP/1.1 Host: example.com Content-Type: application/json { \"name\": \" STRING \", \"value\": \" STRING \" }",
"PUT /api/cluster_conf HTTP/1.1 Host: example.com Content-Type: application/json { \"options\": \" STRING \" }",
"GET /api/cluster_conf/filter HTTP/1.1 Host: example.com",
"GET /api/cluster_conf/ NAME HTTP/1.1 Host: example.com",
"GET /api/crush_rule HTTP/1.1 Host: example.com",
"POST /api/crush_rule HTTP/1.1 Host: example.com Content-Type: application/json { \"device_class\": \" STRING \", \"failure_domain\": \" STRING \", \"name\": \" STRING \", \"root\": \" STRING \" }",
"GET /api/crush_rule/ NAME HTTP/1.1 Host: example.com",
"GET /api/erasure_code_profile HTTP/1.1 Host: example.com",
"POST /api/erasure_code_profile HTTP/1.1 Host: example.com Content-Type: application/json { \"name\": \" STRING \" }",
"GET /api/erasure_code_profile/ NAME HTTP/1.1 Host: example.com",
"GET /api/feature_toggles HTTP/1.1 Host: example.com",
"GET /api/grafana/url HTTP/1.1 Host: example.com",
"GET /api/grafana/validation/ PARAMS HTTP/1.1 Host: example.com",
"GET /api/health/full HTTP/1.1 Host: example.com",
"GET /api/health/minimal HTTP/1.1 Host: example.com",
"GET /api/host HTTP/1.1 Host: example.com",
"POST /api/host HTTP/1.1 Host: example.com Content-Type: application/json { \"hostname\": \" STRING \", \"status\": \" STRING \" }",
"GET /api/host/ HOST_NAME HTTP/1.1 Host: example.com",
"PUT /api/host/ HOST_NAME HTTP/1.1 Host: example.com Content-Type: application/json { \"force\": true, \"labels\": [ \" STRING \" ], \"maintenance\": true, \"update_labels\": true }",
"GET /api/host/ HOST_NAME /daemons HTTP/1.1 Host: example.com",
"GET /api/host/ HOST_NAME /devices HTTP/1.1 Host: example.com",
"POST /api/host/ HOST_NAME /identify_device HTTP/1.1 Host: example.com Content-Type: application/json { \"device\": \" STRING \", \"duration\": \" STRING \" }",
"GET /api/host/ HOST_NAME /inventory HTTP/1.1 Host: example.com",
"GET /api/host/ HOST_NAME /smart HTTP/1.1 Host: example.com",
"GET /api/logs/all HTTP/1.1 Host: example.com",
"GET /api/mgr/module HTTP/1.1 Host: example.com",
"GET /api/mgr/module/ MODULE_NAME HTTP/1.1 Host: example.com",
"PUT /api/mgr/module/ MODULE_NAME HTTP/1.1 Host: example.com Content-Type: application/json { \"config\": \" STRING \" }",
"GET /api/mgr/module/ MODULE_NAME /options HTTP/1.1 Host: example.com",
"GET /api/monitor HTTP/1.1 Host: example.com",
"GET /api/osd HTTP/1.1 Host: example.com",
"POST /api/osd HTTP/1.1 Host: example.com Content-Type: application/json { \"data\": \" STRING \", \"method\": \" STRING \", \"tracking_id\": \" STRING \" }",
"GET /api/osd/flags HTTP/1.1 Host: example.com",
"PUT /api/osd/flags HTTP/1.1 Host: example.com Content-Type: application/json { \"flags\": [ \" STRING \" ] }",
"GET /api/osd/flags/individual HTTP/1.1 Host: example.com",
"PUT /api/osd/flags/individual HTTP/1.1 Host: example.com Content-Type: application/json { \"flags\": { \"nodown\": true, \"noin\": true, \"noout\": true, \"noup\": true }, \"ids\": [ 1 ] }",
"GET /api/osd/safe_to_delete?svc_ids= STRING HTTP/1.1 Host: example.com",
"GET /api/osd/safe_to_destroy?ids= STRING HTTP/1.1 Host: example.com",
"GET /api/osd/ SVC_ID HTTP/1.1 Host: example.com",
"PUT /api/osd/ SVC_ID HTTP/1.1 Host: example.com Content-Type: application/json { \"device_class\": \" STRING \" }",
"GET /api/osd/ SVC_ID /devices HTTP/1.1 Host: example.com",
"GET /api/osd/ SVC_ID /histogram HTTP/1.1 Host: example.com",
"PUT /api/osd/ SVC_ID /mark HTTP/1.1 Host: example.com Content-Type: application/json { \"action\": \" STRING \" }",
"POST /api/osd/ SVC_ID /reweight HTTP/1.1 Host: example.com Content-Type: application/json { \"weight\": \" STRING \" }",
"POST /api/osd/ SVC_ID /scrub HTTP/1.1 Host: example.com Content-Type: application/json { \"deep\": true }",
"GET /api/osd/ SVC_ID /smart HTTP/1.1 Host: example.com",
"GET /api/rgw/status HTTP/1.1 Host: example.com",
"GET /api/rgw/daemon HTTP/1.1 Host: example.com",
"GET /api/rgw/daemon/ SVC_ID HTTP/1.1 Host: example.com",
"GET /api/rgw/site HTTP/1.1 Host: example.com",
"GET /api/rgw/bucket HTTP/1.1 Host: example.com",
"POST /api/rgw/bucket HTTP/1.1 Host: example.com Content-Type: application/json { \"bucket\": \" STRING \", \"daemon_name\": \" STRING \", \"lock_enabled\": \"false\", \"lock_mode\": \" STRING \", \"lock_retention_period_days\": \" STRING \", \"lock_retention_period_years\": \" STRING \", \"placement_target\": \" STRING \", \"uid\": \" STRING \", \"zonegroup\": \" STRING \" }",
"GET /api/rgw/bucket/ BUCKET HTTP/1.1 Host: example.com",
"PUT /api/rgw/bucket/ BUCKET HTTP/1.1 Host: example.com Content-Type: application/json { \"bucket_id\": \" STRING \", \"daemon_name\": \" STRING \", \"lock_mode\": \" STRING \", \"lock_retention_period_days\": \" STRING \", \"lock_retention_period_years\": \" STRING \", \"mfa_delete\": \" STRING \", \"mfa_token_pin\": \" STRING \", \"mfa_token_serial\": \" STRING \", \"uid\": \" STRING \", \"versioning_state\": \" STRING \" }",
"GET /api/rgw/user HTTP/1.1 Host: example.com",
"POST /api/rgw/user HTTP/1.1 Host: example.com Content-Type: application/json { \"access_key\": \" STRING \", \"daemon_name\": \" STRING \", \"display_name\": \" STRING \", \"email\": \" STRING \", \"generate_key\": \" STRING \", \"max_buckets\": \" STRING \", \"secret_key\": \" STRING \", \"suspended\": \" STRING \", \"uid\": \" STRING \" }",
"GET /api/rgw/user/get_emails HTTP/1.1 Host: example.com",
"GET /api/rgw/user/ UID HTTP/1.1 Host: example.com",
"PUT /api/rgw/user/ UID HTTP/1.1 Host: example.com Content-Type: application/json { \"daemon_name\": \" STRING \", \"display_name\": \" STRING \", \"email\": \" STRING \", \"max_buckets\": \" STRING \", \"suspended\": \" STRING \" }",
"POST /api/rgw/user/ UID /capability HTTP/1.1 Host: example.com Content-Type: application/json { \"daemon_name\": \" STRING \", \"perm\": \" STRING \", \"type\": \" STRING \" }",
"POST /api/rgw/user/ UID /key HTTP/1.1 Host: example.com Content-Type: application/json { \"access_key\": \" STRING \", \"daemon_name\": \" STRING \", \"generate_key\": \"true\", \"key_type\": \"s3\", \"secret_key\": \" STRING \", \"subuser\": \" STRING \" }",
"GET /api/rgw/user/ UID /quota HTTP/1.1 Host: example.com",
"PUT /api/rgw/user/ UID /quota HTTP/1.1 Host: example.com Content-Type: application/json { \"daemon_name\": \" STRING \", \"enabled\": \" STRING \", \"max_objects\": \" STRING \", \"max_size_kb\": 1, \"quota_type\": \" STRING \" }",
"POST /api/rgw/user/ UID /subuser HTTP/1.1 Host: example.com Content-Type: application/json { \"access\": \" STRING \", \"access_key\": \" STRING \", \"daemon_name\": \" STRING \", \"generate_secret\": \"true\", \"key_type\": \"s3\", \"secret_key\": \" STRING \", \"subuser\": \" STRING \" }",
"radosgw-admin --uid TESTER --display-name \"TestUser\" --access_key TESTER --secret test123 user create radosgw-admin caps add --uid=\"TESTER\" --caps=\"roles=*\"",
"POST \"<hostname>?Action=CreateRole&RoleName= ROLE_NAME &Path= PATH_TO_FILE &AssumeRolePolicyDocument= TRUST_RELATIONSHIP_POLICY_DOCUMENT \"",
"POST \"<hostname>?Action=CreateRole&RoleName=S3Access&Path=/application_abc/component_xyz/&AssumeRolePolicyDocument={\"Version\":\"2022-06-17\",\"Statement\":[{\"Effect\":\"Allow\",\"Principal\":{\"AWS\":[\"arn:aws:iam:::user/TESTER\"]},\"Action\":[\"sts:AssumeRole\"]}]}\"",
"<role> <id>8f41f4e0-7094-4dc0-ac20-074a881ccbc5</id> <name>S3Access</name> <path>/application_abc/component_xyz/</path> <arn>arn:aws:iam:::role/application_abc/component_xyz/S3Access</arn> <create_date>2022-06-23T07:43:42.811Z</create_date> <max_session_duration>3600</max_session_duration> <assume_role_policy_document>{\"Version\":\"2022-06-17\",\"Statement\":[{\"Effect\":\"Allow\",\"Principal\":{\"AWS\":[\"arn:aws:iam:::user/TESTER\"]},\"Action\":[\"sts:AssumeRole\"]}]}</assume_role_policy_document> </role>",
"POST \"<hostname>?Action=GetRole&RoleName= ROLE_NAME \"",
"POST \"<hostname>?Action=GetRole&RoleName=S3Access\"",
"<role> <id>8f41f4e0-7094-4dc0-ac20-074a881ccbc5</id> <name>S3Access</name> <path>/application_abc/component_xyz/</path> <arn>arn:aws:iam:::role/application_abc/component_xyz/S3Access</arn> <create_date>2022-06-23T07:43:42.811Z</create_date> <max_session_duration>3600</max_session_duration> <assume_role_policy_document>{\"Version\":\"2022-06-17\",\"Statement\":[{\"Effect\":\"Allow\",\"Principal\":{\"AWS\":[\"arn:aws:iam:::user/TESTER\"]},\"Action\":[\"sts:AssumeRole\"]}]}</assume_role_policy_document> </role>",
"POST \"<hostname>?Action=GetRole&RoleName= ROLE_NAME &PathPrefix= PATH_PREFIX \"",
"POST \"<hostname>?Action=ListRoles&RoleName=S3Access&PathPrefix=/application\"",
"<role> <id>8f41f4e0-7094-4dc0-ac20-074a881ccbc5</id> <name>S3Access</name> <path>/application_abc/component_xyz/</path> <arn>arn:aws:iam:::role/application_abc/component_xyz/S3Access</arn> <create_date>2022-06-23T07:43:42.811Z</create_date> <max_session_duration>3600</max_session_duration> <assume_role_policy_document>{\"Version\":\"2022-06-17\",\"Statement\":[{\"Effect\":\"Allow\",\"Principal\":{\"AWS\":[\"arn:aws:iam:::user/TESTER\"]},\"Action\":[\"sts:AssumeRole\"]}]}</assume_role_policy_document> </role>",
"POST \"<hostname>?Action=UpdateAssumeRolePolicy&RoleName= ROLE_NAME &PolicyDocument= TRUST_RELATIONSHIP_POLICY_DOCUMENT \"",
"POST \"<hostname>?Action=UpdateAssumeRolePolicy&RoleName=S3Access&PolicyDocument={\"Version\":\"2022-06-17\",\"Statement\":[{\"Effect\":\"Allow\",\"Principal\":{\"AWS\":[\"arn:aws:iam:::user/TESTER2\"]},\"Action\":[\"sts:AssumeRole\"]}]}\"",
"POST \"<hostname>?Action=PutRolePolicy&RoleName= ROLE_NAME &PolicyName= POLICY_NAME &PolicyDocument= TRUST_RELATIONSHIP_POLICY_DOCUMENT \"",
"POST \"<hostname>?Action=PutRolePolicy&RoleName=S3Access&PolicyName=Policy1&PolicyDocument={\"Version\":\"2022-06-17\",\"Statement\":[{\"Effect\":\"Allow\",\"Action\":[\"s3:CreateBucket\"],\"Resource\":\"arn:aws:s3:::example_bucket\"}]}\"",
"POST \"<hostname>?Action=ListRolePolicies&RoleName= ROLE_NAME \"",
"POST \"<hostname>?Action=ListRolePolicies&RoleName=S3Access\" <PolicyNames> <member>Policy1</member> </PolicyNames>",
"POST \"<hostname>?Action=GetRolePolicy&RoleName= ROLE_NAME &PolicyName= POLICY_NAME \"",
"POST \"<hostname>?Action=GetRolePolicy&RoleName=S3Access&PolicyName=Policy1\" <GetRolePolicyResult> <PolicyName>Policy1</PolicyName> <RoleName>S3Access</RoleName> <Permission_policy>{\"Version\":\"2022-06-17\",\"Statement\":[{\"Effect\":\"Allow\",\"Action\":[\"s3:CreateBucket\"],\"Resource\":\"arn:aws:s3:::example_bucket\"}]}</Permission_policy> </GetRolePolicyResult>",
"POST \"hostname>?Action=DeleteRolePolicy&RoleName= ROLE_NAME &PolicyName= POLICY_NAME \"",
"POST \"<hostname>?Action=DeleteRolePolicy&RoleName=S3Access&PolicyName=Policy1\"",
"POST \"<hostname>?Action=DeleteRole&RoleName= ROLE_NAME \"",
"POST \"<hostname>?Action=DeleteRole&RoleName=S3Access\"",
"GET /api/nfs-ganesha/daemon HTTP/1.1 Host: example.com",
"GET /api/nfs-ganesha/export HTTP/1.1 Host: example.com",
"POST /api/nfs-ganesha/export HTTP/1.1 Host: example.com Content-Type: application/json { \"access_type\": \" STRING \", \"clients\": [ { \"access_type\": \" STRING \", \"addresses\": [ \" STRING \" ], \"squash\": \" STRING \" } ], \"cluster_id\": \" STRING \", \"daemons\": [ \" STRING \" ], \"fsal\": { \"filesystem\": \" STRING \", \"name\": \" STRING \", \"rgw_user_id\": \" STRING \", \"sec_label_xattr\": \" STRING \", \"user_id\": \" STRING \" }, \"path\": \" STRING \", \"protocols\": [ 1 ], \"pseudo\": \" STRING \", \"reload_daemons\": true, \"security_label\": \" STRING \", \"squash\": \" STRING \", \"tag\": \" STRING \", \"transports\": [ \" STRING \" ] }",
"GET /api/nfs-ganesha/export/ CLUSTER_ID / EXPORT_ID HTTP/1.1 Host: example.com",
"PUT /api/nfs-ganesha/export/ CLUSTER_ID / EXPORT_ID HTTP/1.1 Host: example.com Content-Type: application/json { \"access_type\": \" STRING \", \"clients\": [ { \"access_type\": \" STRING \", \"addresses\": [ \" STRING \" ], \"squash\": \" STRING \" } ], \"daemons\": [ \" STRING \" ], \"fsal\": { \"filesystem\": \" STRING \", \"name\": \" STRING \", \"rgw_user_id\": \" STRING \", \"sec_label_xattr\": \" STRING \", \"user_id\": \" STRING \" }, \"path\": \" STRING \", \"protocols\": [ 1 ], \"pseudo\": \" STRING \", \"reload_daemons\": true, \"security_label\": \" STRING \", \"squash\": \" STRING \", \"tag\": \" STRING \", \"transports\": [ \" STRING \" ] }",
"GET /api/nfs-ganesha/status HTTP/1.1 Host: example.com",
"GET /api/orchestrator/status HTTP/1.1 Host: example.com",
"GET /api/pool HTTP/1.1 Host: example.com",
"POST /api/pool HTTP/1.1 Host: example.com Content-Type: application/json { \"application_metadata\": \" STRING \", \"configuration\": \" STRING \", \"erasure_code_profile\": \" STRING \", \"flags\": \" STRING \", \"pg_num\": 1, \"pool\": \" STRING \", \"pool_type\": \" STRING \", \"rule_name\": \" STRING \" }",
"GET /api/pool/ POOL_NAME HTTP/1.1 Host: example.com",
"PUT /api/pool/ POOL_NAME HTTP/1.1 Host: example.com Content-Type: application/json { \"application_metadata\": \" STRING \", \"configuration\": \" STRING \", \"flags\": \" STRING \" }",
"GET /api/pool/ POOL_NAME /configuration HTTP/1.1 Host: example.com",
"GET /api/prometheus/rules HTTP/1.1 Host: example.com",
"GET /api/prometheus/rules HTTP/1.1 Host: example.com",
"GET /api/prometheus/silences HTTP/1.1 Host: example.com",
"GET /api/prometheus/notifications HTTP/1.1 Host: example.com",
"GET /api/block/image HTTP/1.1 Host: example.com",
"POST /api/block/image HTTP/1.1 Host: example.com Content-Type: application/json { \"configuration\": \" STRING \", \"data_pool\": \" STRING \", \"features\": \" STRING \", \"name\": \" STRING \", \"namespace\": \" STRING \", \"obj_size\": 1, \"pool_name\": \" STRING \", \"size\": 1, \"stripe_count\": 1, \"stripe_unit\": \" STRING \" }",
"GET /api/block/image/clone_format_version HTTP/1.1 Host: example.com",
"GET /api/block/image/default_features HTTP/1.1 Host: example.com",
"GET /api/block/image/default_features HTTP/1.1 Host: example.com",
"GET /api/block/image/ IMAGE_SPEC HTTP/1.1 Host: example.com",
"PUT /api/block/image/ IMAGE_SPEC HTTP/1.1 Host: example.com Content-Type: application/json { \"configuration\": \" STRING \", \"features\": \" STRING \", \"name\": \" STRING \", \"size\": 1 }",
"POST /api/block/image/ IMAGE_SPEC /copy HTTP/1.1 Host: example.com Content-Type: application/json { \"configuration\": \" STRING \", \"data_pool\": \" STRING \", \"dest_image_name\": \" STRING \", \"dest_namespace\": \" STRING \", \"dest_pool_name\": \" STRING \", \"features\": \" STRING \", \"obj_size\": 1, \"snapshot_name\": \" STRING \", \"stripe_count\": 1, \"stripe_unit\": \" STRING \" }",
"POST /api/block/image/ IMAGE_SPEC /move_trash HTTP/1.1 Host: example.com Content-Type: application/json { \"delay\": 1 }",
"GET /api/block/mirroring/site_name HTTP/1.1 Host: example.com",
"PUT /api/block/mirroring/site_name HTTP/1.1 Host: example.com Content-Type: application/json { \"site_name\": \" STRING \" }",
"POST /api/block/mirroring/pool/ POOL_NAME /bootstrap/peer HTTP/1.1 Host: example.com Content-Type: application/json { \"direction\": \" STRING \", \"token\": \" STRING \" }",
"GET /api/block/mirroring/pool/ POOL_NAME HTTP/1.1 Host: example.com",
"PUT /api/block/mirroring/pool/ POOL_NAME HTTP/1.1 Host: example.com Content-Type: application/json { \"mirror_mode\": \" STRING \" }",
"GET /api/block/mirroring/pool/ POOL_NAME /peer HTTP/1.1 Host: example.com",
"POST /api/block/mirroring/pool/ POOL_NAME /peer HTTP/1.1 Host: example.com Content-Type: application/json { \"client_id\": \" STRING \", \"cluster_name\": \" STRING \", \"key\": \" STRING \", \"mon_host\": \" STRING \" }",
"GET /api/block/mirroring/pool/ POOL_NAME /peer/ PEER_UUID HTTP/1.1 Host: example.com",
"PUT /api/block/mirroring/pool/ POOL_NAME /peer/ PEER_UUID HTTP/1.1 Host: example.com Content-Type: application/json { \"client_id\": \" STRING \", \"cluster_name\": \" STRING \", \"key\": \" STRING \", \"mon_host\": \" STRING \" }",
"GET /api/block/mirroring/summary HTTP/1.1 Host: example.com",
"GET /api/block/pool/ POOL_NAME /namespace HTTP/1.1 Host: example.com",
"POST /api/block/pool/ POOL_NAME /namespace HTTP/1.1 Host: example.com Content-Type: application/json { \"namespace\": \" STRING \" }",
"POST /api/block/image/ IMAGE_SPEC /snap HTTP/1.1 Host: example.com Content-Type: application/json { \"snapshot_name\": \" STRING \" }",
"PUT /api/block/image/ IMAGE_SPEC /snap/ SNAPSHOT_NAME HTTP/1.1 Host: example.com Content-Type: application/json { \"is_protected\": true, \"new_snap_name\": \" STRING \" }",
"POST /api/block/image/ IMAGE_SPEC /snap/ SNAPSHOT_NAME /clone HTTP/1.1 Host: example.com Content-Type: application/json { \"child_image_name\": \" STRING \", \"child_namespace\": \" STRING \", \"child_pool_name\": \" STRING \", \"configuration\": \" STRING \", \"data_pool\": \" STRING \", \"features\": \" STRING \", \"obj_size\": 1, \"stripe_count\": 1, \"stripe_unit\": \" STRING \" }",
"GET /api/block/image/trash HTTP/1.1 Host: example.com",
"POST /api/block/image/trash/purge HTTP/1.1 Host: example.com Content-Type: application/json { \"pool_name\": \" STRING \" }",
"POST /api/block/image/trash/ IMAGE_ID_SPEC /restore HTTP/1.1 Host: example.com Content-Type: application/json { \"new_image_name\": \" STRING \" }",
"GET /api/perf_counters HTTP/1.1 Host: example.com",
"GET /api/perf_counters/mds/ SERVICE_ID HTTP/1.1 Host: example.com",
"GET /api/perf_counters/mgr/ SERVICE_ID HTTP/1.1 Host: example.com",
"GET /api/perf_counters/mon/ SERVICE_ID HTTP/1.1 Host: example.com",
"GET /api/perf_counters/osd/ SERVICE_ID HTTP/1.1 Host: example.com",
"GET /api/perf_counters/rbd-mirror/ SERVICE_ID HTTP/1.1 Host: example.com",
"GET /api/perf_counters/rgw/ SERVICE_ID HTTP/1.1 Host: example.com",
"GET /api/perf_counters/tcmu-runner/ SERVICE_ID HTTP/1.1 Host: example.com",
"GET /api/role HTTP/1.1 Host: example.com",
"POST /api/role HTTP/1.1 Host: example.com Content-Type: application/json { \"description\": \" STRING \", \"name\": \" STRING \", \"scopes_permissions\": \" STRING \" }",
"GET /api/role/ NAME HTTP/1.1 Host: example.com",
"PUT /api/role/ NAME HTTP/1.1 Host: example.com Content-Type: application/json { \"description\": \" STRING \", \"scopes_permissions\": \" STRING \" }",
"POST /api/role/ NAME /clone HTTP/1.1 Host: example.com Content-Type: application/json { \"new_name\": \" STRING \" }",
"GET /api/service HTTP/1.1 Host: example.com",
"POST /api/service HTTP/1.1 Host: example.com Content-Type: application/json { \"service_name\": \" STRING \", \"service_spec\": \" STRING \" }",
"GET /api/service/known_types HTTP/1.1 Host: example.com",
"GET /api/service/ SERVICE_NAME HTTP/1.1 Host: example.com",
"GET /api/service/ SERVICE_NAME /daemons HTTP/1.1 Host: example.com",
"GET /api/settings HTTP/1.1 Host: example.com",
"GET /api/settings/ NAME HTTP/1.1 Host: example.com",
"PUT /api/settings/ NAME HTTP/1.1 Host: example.com Content-Type: application/json { \"value\": \" STRING \" }",
"GET /api/task HTTP/1.1 Host: example.com",
"PUT /api/telemetry HTTP/1.1 Host: example.com Content-Type: application/json { \"enable\": true, \"license_name\": \" STRING \" }",
"GET /api/telemetry/report HTTP/1.1 Host: example.com",
"GET /api/user HTTP/1.1 Host: example.com",
"POST /api/user HTTP/1.1 Host: example.com Content-Type: application/json { \"email\": \" STRING \", \"enabled\": true, \"name\": \" STRING \", \"password\": \" STRING \", \"pwdExpirationDate\": \" STRING \", \"pwdUpdateRequired\": true, \"roles\": \" STRING \", \"username\": \" STRING \" }",
"GET /api/user/ USER_NAME HTTP/1.1 Host: example.com",
"PUT /api/user/ USER_NAME HTTP/1.1 Host: example.com Content-Type: application/json { \"email\": \" STRING \", \"enabled\": \" STRING \", \"name\": \" STRING \", \"password\": \" STRING \", \"pwdExpirationDate\": \" STRING \", \"pwdUpdateRequired\": true, \"roles\": \" STRING \" }",
"POST /api/user/ USER_NAME /change_password HTTP/1.1 Host: example.com Content-Type: application/json { \"new_password\": \" STRING \", \"old_password\": \" STRING \" }",
"POST /api/user/validate_password HTTP/1.1 Host: example.com Content-Type: application/json { \"old_password\": \" STRING \", \"password\": \" STRING \", \"username\": \" STRING \" }"
] | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html/developer_guide/the-ceph-restful-api-specifications |
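The endpoint templates above can be exercised with any HTTP client. The following curl sketch is illustrative only: it assumes the Ceph Dashboard listens on port 8443 and that a bearer token is first obtained from an authentication endpoint; the host, port, credentials, and Accept header shown here are assumptions, not values taken from this specification.

# Obtain an authentication token (endpoint and header are assumptions; adjust for your environment)
curl -k -X POST "https://example.com:8443/api/auth" \
     -H "Accept: application/vnd.ceph.api.v1.0+json" \
     -H "Content-Type: application/json" \
     -d '{"username": "admin", "password": "<password>"}'

# Call one of the documented endpoints, for example listing performance counters
curl -k -X GET "https://example.com:8443/api/perf_counters" \
     -H "Accept: application/vnd.ceph.api.v1.0+json" \
     -H "Authorization: Bearer <token-from-previous-response>"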
Chapter 1. Preparing to deploy OpenShift Data Foundation | Chapter 1. Preparing to deploy OpenShift Data Foundation Deploying OpenShift Data Foundation on OpenShift Container Platform using dynamic or local storage devices provides you with the option to create internal cluster resources. This will result in the internal provisioning of the base services, which helps to make additional storage classes available to applications. Before you begin the deployment of Red Hat OpenShift Data Foundation using dynamic or local storage, ensure that your resource requirements are met. See the Resource requirements section in the Planning guide. Optional: If you want to enable cluster-wide encryption using the external Key Management System (KMS) HashiCorp Vault, follow these steps: Ensure that you have a valid Red Hat OpenShift Data Foundation Advanced subscription. To know how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . When the Token authentication method is selected for encryption, refer to Enabling cluster-wide encryption with KMS using the Token authentication method . When the Kubernetes authentication method is selected for encryption, refer to Enabling cluster-wide encryption with KMS using the Kubernetes authentication method . Ensure that you are using signed certificates on your Vault servers. Optional: If you want to enable cluster-wide encryption using the external Key Management System (KMS) Thales CipherTrust Manager, you must first enable the Key Management Interoperability Protocol (KMIP) and use signed certificates on your server. Follow these steps: Create a KMIP client if one does not exist. From the user interface, select KMIP Client Profile Add Profile . Add the CipherTrust username to the Common Name field during profile creation. Create a token by navigating to KMIP Registration Token New Registration Token . Copy the token for the next step. To register the client, navigate to KMIP Registered Clients Add Client . Specify the Name . Paste the Registration Token from the previous step, then click Save . Download the Private Key and Client Certificate by clicking Save Private Key and Save Certificate respectively. To create a new KMIP interface, navigate to Admin Settings Interfaces Add Interface . Select KMIP Key Management Interoperability Protocol and click Next . Select a free Port . Select Network Interface as all . Select Interface Mode as TLS, verify client cert, user name taken from client cert, auth request is optional . (Optional) You can enable hard delete to delete both metadata and material when the key is deleted. It is disabled by default. Select the CA to be used, and click Save . To get the server CA certificate, click on the Action menu (...) on the right of the newly created interface, and click Download Certificate . Optional: If StorageClass encryption is to be enabled during deployment, create a key to act as the Key Encryption Key (KEK): Navigate to Keys Add Key . Enter Key Name . Set the Algorithm and Size to AES and 256 respectively. Enable Create a key in Pre-Active state and set the date and time for activation. Ensure that Encrypt and Decrypt are enabled under Key Usage . Copy the ID of the newly created Key to be used as the Unique Identifier during deployment. Minimum starting node requirements An OpenShift Data Foundation cluster will be deployed with minimum configuration when the standard deployment resource requirement is not met.
See Resource requirements section in Planning guide. Disaster recovery requirements [Technology Preview] Disaster Recovery features supported by Red Hat OpenShift Data Foundation require all of the following prerequisites to successfully implement a disaster recovery solution: A valid Red Hat OpenShift Data Foundation Advanced subscription A valid Red Hat Advanced Cluster Management for Kubernetes subscription To know how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . For detailed requirements, see Configuring OpenShift Data Foundation Disaster Recovery for OpenShift Workloads guide, and Requirements and recommendations section of the Install guide in Red Hat Advanced Cluster Management for Kubernetes documentation. For deploying using local storage devices, see requirements for installing OpenShift Data Foundation using local storage devices . These are not applicable for deployment using dynamic storage devices. 1.1. Requirements for installing OpenShift Data Foundation using local storage devices Node requirements The cluster must consist of at least three OpenShift Container Platform worker or infrastructure nodes with locally attached-storage devices on each of them. Each of the three selected nodes must have at least one raw block device available. OpenShift Data Foundation uses the one or more available raw block devices. The devices you use must be empty, the disks must not include Physical Volumes (PVs), Volume Groups (VGs), or Logical Volumes (LVs) remaining on the disk. For more information, see the Resource requirements section in the Planning guide . Disaster recovery requirements Disaster Recovery features supported by Red Hat OpenShift Data Foundation require all of the following prerequisites to successfully implement a disaster recovery solution: A valid Red Hat OpenShift Data Foundation Advanced subscription. A valid Red Hat Advanced Cluster Management (RHACM) for Kubernetes subscription. To know in detail how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . For detailed disaster recovery solution requirements, see Configuring OpenShift Data Foundation Disaster Recovery for OpenShift Workloads guide, and Requirements and recommendations section of the Install guide in Red Hat Advanced Cluster Management for Kubernetes documentation. Arbiter stretch cluster requirements In this case, a single cluster is stretched across two zones with a third zone as the location for the arbiter. This solution is currently intended for deployment in the OpenShift Container Platform on-premises and in the same data center. This solution is not recommended for deployments stretching over multiple data centers. Instead, consider Metro-DR as a first option for no data loss DR solution deployed over multiple data centers with low latency networks. To know in detail how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . Note You cannot enable Flexible scaling and Arbiter both at the same time as they have conflicting scaling logic. With Flexible scaling, you can add one node at a time to your OpenShift Data Foundation cluster. Whereas, in an Arbiter cluster, you need to add at least one node in each of the two data zones. 
Minimum starting node requirements An OpenShift Data Foundation cluster is deployed with a minimum configuration when the resource requirement for a standard deployment is not met. For more information, see the Resource requirements section in the Planning guide . | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/deploying_openshift_data_foundation_on_vmware_vsphere/preparing_to_deploy_openshift_data_foundation |
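As a quick pre-deployment check of the raw block devices described above, the following commands can be run on each selected node. The device name /dev/sdb is a placeholder, and wipefs --all is destructive, so confirm the target device before clearing it.

# List block devices and confirm the chosen disk carries no partitions, PVs, VGs, or LVs
lsblk
# Show any existing filesystem or RAID signatures on the candidate device (placeholder name)
wipefs /dev/sdb
# Only if the device is confirmed safe to reuse, clear leftover signatures
wipefs --all /dev/sdb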
Chapter 12. VolumeSnapshotClass [snapshot.storage.k8s.io/v1] | Chapter 12. VolumeSnapshotClass [snapshot.storage.k8s.io/v1] Description VolumeSnapshotClass specifies parameters that a underlying storage system uses when creating a volume snapshot. A specific VolumeSnapshotClass is used by specifying its name in a VolumeSnapshot object. VolumeSnapshotClasses are non-namespaced Type object Required deletionPolicy driver 12.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources deletionPolicy string deletionPolicy determines whether a VolumeSnapshotContent created through the VolumeSnapshotClass should be deleted when its bound VolumeSnapshot is deleted. Supported values are "Retain" and "Delete". "Retain" means that the VolumeSnapshotContent and its physical snapshot on underlying storage system are kept. "Delete" means that the VolumeSnapshotContent and its physical snapshot on underlying storage system are deleted. Required. driver string driver is the name of the storage driver that handles this VolumeSnapshotClass. Required. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata parameters object (string) parameters is a key-value map with storage driver specific parameters for creating snapshots. These values are opaque to Kubernetes. 12.2. API endpoints The following API endpoints are available: /apis/snapshot.storage.k8s.io/v1/volumesnapshotclasses DELETE : delete collection of VolumeSnapshotClass GET : list objects of kind VolumeSnapshotClass POST : create a VolumeSnapshotClass /apis/snapshot.storage.k8s.io/v1/volumesnapshotclasses/{name} DELETE : delete a VolumeSnapshotClass GET : read the specified VolumeSnapshotClass PATCH : partially update the specified VolumeSnapshotClass PUT : replace the specified VolumeSnapshotClass 12.2.1. /apis/snapshot.storage.k8s.io/v1/volumesnapshotclasses HTTP method DELETE Description delete collection of VolumeSnapshotClass Table 12.1. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind VolumeSnapshotClass Table 12.2. HTTP responses HTTP code Reponse body 200 - OK VolumeSnapshotClassList schema 401 - Unauthorized Empty HTTP method POST Description create a VolumeSnapshotClass Table 12.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 12.4. Body parameters Parameter Type Description body VolumeSnapshotClass schema Table 12.5. HTTP responses HTTP code Reponse body 200 - OK VolumeSnapshotClass schema 201 - Created VolumeSnapshotClass schema 202 - Accepted VolumeSnapshotClass schema 401 - Unauthorized Empty 12.2.2. /apis/snapshot.storage.k8s.io/v1/volumesnapshotclasses/{name} Table 12.6. Global path parameters Parameter Type Description name string name of the VolumeSnapshotClass HTTP method DELETE Description delete a VolumeSnapshotClass Table 12.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 12.8. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified VolumeSnapshotClass Table 12.9. HTTP responses HTTP code Reponse body 200 - OK VolumeSnapshotClass schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified VolumeSnapshotClass Table 12.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 12.11. HTTP responses HTTP code Reponse body 200 - OK VolumeSnapshotClass schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified VolumeSnapshotClass Table 12.12. 
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 12.13. Body parameters Parameter Type Description body VolumeSnapshotClass schema Table 12.14. HTTP responses HTTP code Reponse body 200 - OK VolumeSnapshotClass schema 201 - Created VolumeSnapshotClass schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/storage_apis/volumesnapshotclass-snapshot-storage-k8s-io-v1 |
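As an illustration of the schema above, a minimal VolumeSnapshotClass with the two required fields can also be created through the oc client instead of calling the REST endpoints directly. The driver value below is a placeholder and must match the CSI driver that is actually installed in the cluster.

oc apply -f - <<'EOF'
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: example-snapclass
driver: example.csi.vendor.com   # placeholder; use the name of your CSI driver
deletionPolicy: Delete
EOF

This request is equivalent to the POST /apis/snapshot.storage.k8s.io/v1/volumesnapshotclasses endpoint listed above.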
Chapter 3. Using libcgroup Tools | Chapter 3. Using libcgroup Tools The libcgroup package, which was the main tool for cgroup management in previous versions of Red Hat Enterprise Linux, is now deprecated. To avoid conflicts, do not use libcgroup tools for default resource controllers (listed in Available Controllers in Red Hat Enterprise Linux 7 ) that are now an exclusive domain of systemd . This leaves limited space for applying libcgroup tools; use them only when you need to manage controllers not currently supported by systemd , such as net_prio . The following sections describe how to use libcgroup tools in relevant scenarios without conflicting with the default system of hierarchy. Note In order to use libcgroup tools, first ensure the libcgroup and libcgroup-tools packages are installed on your system. To install them, run as root : Note The net_prio controller is not compiled in the kernel like the rest of the controllers; rather, it is a module that has to be loaded before attempting to mount it. To load this module, type as root : 3.1. Mounting a Hierarchy To use a kernel resource controller that is not mounted automatically, you have to create a hierarchy that will contain this controller. Add or detach the hierarchy by editing the mount section of the /etc/cgconfig.conf configuration file. This method makes the controller attachment persistent, which means your settings will be preserved after system reboot. As an alternative, use the mount command to create a transient mount only for the current session. Using the cgconfig Service The cgconfig service installed with the libcgroup-tools package provides a way to mount hierarchies for additional resource controllers. By default, this service is not started automatically. When you start cgconfig , it applies the settings from the /etc/cgconfig.conf configuration file. The configuration is therefore recreated from session to session and becomes persistent. Note that if you stop cgconfig , it unmounts all the hierarchies that it mounted. The default /etc/cgconfig.conf file installed with the libcgroup package does not contain any configuration settings, only information that systemd mounts the main resource controllers automatically. Entries of three types can be created in /etc/cgconfig.conf - mount , group , and template . Mount entries are used to create and mount hierarchies as virtual file systems, and attach controllers to those hierarchies. In Red Hat Enterprise Linux 7, default hierarchies are mounted automatically to the /sys/fs/cgroup/ directory; cgconfig is therefore used solely to attach non-default controllers. Mount entries are defined using the following syntax: Replace controller_name with the name of the kernel resource controller you wish to mount to the hierarchy. See Example 3.1, "Creating a mount entry" for an example. Example 3.1. Creating a mount entry To attach the net_prio controller to the default cgroup tree, add the following text to the /etc/cgconfig.conf configuration file: Then restart the cgconfig service to apply the setting: Group entries in /etc/cgconfig.conf can be used to set the parameters of resource controllers. See Section 3.5, "Setting Cgroup Parameters" for more information about group entries. Template entries in /etc/cgconfig.conf can be used to create a group definition applied to all processes. Using the mount Command Use the mount command to temporarily mount a hierarchy. To do so, first create a mount point in the /sys/fs/cgroup/ directory where systemd mounts the main resource controllers.
Type as root : Replace name with a name for the new mount destination; usually the name of the controller is used. Next, execute the mount command to mount the hierarchy and simultaneously attach one or more subsystems. Type as root : Replace controller_name with the name of the controller to specify both the device to be mounted as well as the destination folder. The -t cgroup parameter specifies the type of mount. Example 3.2. Using the mount command to attach controllers To mount a hierarchy for the net_prio controller with the mount command, first create the mount point: Then mount net_prio to the destination you created in the previous step: You can verify whether you attached the hierarchy correctly by listing all available hierarchies along with their current mount points using the lssubsys command (see the section called "Listing Controllers" ):
"~]# yum install libcgroup ~]# yum install libcgroup-tools",
"~]# modprobe netprio_cgroup",
"mount { controller_name = /sys/fs/cgroup/ controller_name ; ... }",
"mount { net_prio = /sys/fs/cgroup/net_prio; }",
"~]# systemctl restart cgconfig.service",
"~]# mkdir /sys/fs/cgroup/ name",
"~]# mount -t cgroup -o controller_name none /sys/fs/cgroup/ controller_name",
"~]# mkdir /sys/fs/cgroup/net_prio",
"~]# mount -t cgroup -o net_prio none /sys/fs/cgroup/net_prio",
"~]# lssubsys -am cpuset /sys/fs/cgroup/cpuset cpu,cpuacct /sys/fs/cgroup/cpu,cpuacct memory /sys/fs/cgroup/memory devices /sys/fs/cgroup/devices freezer /sys/fs/cgroup/freezer net_cls /sys/fs/cgroup/net_cls blkio /sys/fs/cgroup/blkio perf_event /sys/fs/cgroup/perf_event hugetlb /sys/fs/cgroup/hugetlb net_prio /sys/fs/cgroup/net_prio"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/resource_management_guide/chap-Using_libcgroup_Tools |
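Once the net_prio hierarchy is mounted as shown above, a child cgroup can be created in it and given a per-interface network priority. The group name prio_group and the eth0 interface below are examples only; use an interface that exists on your system.

~]# cgcreate -g net_prio:/prio_group
~]# echo "eth0 5" > /sys/fs/cgroup/net_prio/prio_group/net_prio.ifpriomap
~]# cat /sys/fs/cgroup/net_prio/prio_group/net_prio.ifpriomap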
Chapter 2. Eclipse Temurin 11.0.20.1 release notes | Chapter 2. Eclipse Temurin 11.0.20.1 release notes Eclipse Temurin does not contain structural changes from the upstream distribution of OpenJDK. Review the following release notes for an overview of the changes from the Eclipse Temurin 11.0.20.1 patch release. Note For all the other changes and security fixes, see OpenJDK 11.0.20.1 Released . Fixed Invalid CEN header error on valid .zip files OpenJDK 11.0.20 introduced additional validation checks on the ZIP64 fields of .zip files (JDK-8302483). However, these additional checks caused validation failures on some valid .zip files with the following error message: Invalid CEN header (invalid zip64 extra data field size) . To fix this issue, OpenJDK 11.0.20.1 supports zero-length headers and the additional padding that some ZIP64 creation tools produce. From OpenJDK 11.0.20 onward, you can disable these checks by setting the jdk.util.zip.disableZip64ExtraFieldValidation system property to true . See JDK-8313765 (JDK Bug System) Increased default value of jdk.jar.maxSignatureFileSize system property OpenJDK 11.0.20 introduced a jdk.jar.maxSignatureFileSize system property for configuring the maximum number of bytes that are allowed for the signature-related files in a Java archive (JAR) file ( JDK-8300596 ). By default, the jdk.jar.maxSignatureFileSize property was set to 8000000 bytes (8 MB), which was too small for some JAR files. OpenJDK 11.0.20.1 increases the default value of the jdk.jar.maxSignatureFileSize property to 16000000 bytes (16 MB). See JDK-8313216 (JDK Bug System) Fixed NullPointerException when handling null addresses In OpenJDK 11.0.20, when the serviceability agent encountered null addresses while generating thread dumps, the serviceability agent produced a NullPointerException . OpenJDK 11.0.20.1 handles null addresses appropriately. See JDK-8243210 (JDK Bug System) | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_eclipse_temurin_11.0.20/openjdk-temurin-11-0-20-1-release-notes_openjdk |
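If either workaround described above is needed, the system properties can be passed on the java command line; the application JAR name below is a placeholder.

# Disable the stricter ZIP64 extra-field validation introduced in OpenJDK 11.0.20
java -Djdk.util.zip.disableZip64ExtraFieldValidation=true -jar application.jar

# Raise the signature-file limit beyond the 16 MB default, for example to 32 MB
java -Djdk.jar.maxSignatureFileSize=33554432 -jar application.jar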
Chapter 7. Catalog exclusion by labels or expressions | Chapter 7. Catalog exclusion by labels or expressions You can exclude catalogs by using match expressions on metadata with the NotIn or DoesNotExist operators. The following CRs add an example.com/testing label to the unwanted-catalog-1 and unwanted-catalog-2 cluster catalogs: Example cluster catalog CR apiVersion: olm.operatorframework.io/v1 kind: ClusterCatalog metadata: name: unwanted-catalog-1 labels: example.com/testing: "true" spec: source: type: Image image: ref: quay.io/example/content-management-a:latest Example cluster catalog CR apiVersion: olm.operatorframework.io/v1 kind: ClusterCatalog metadata: name: unwanted-catalog-2 labels: example.com/testing: "true" spec: source: type: Image image: ref: quay.io/example/content-management-b:latest The following cluster extension CR excludes selection from the unwanted-catalog-1 catalog: Example cluster extension CR that excludes a specific catalog apiVersion: olm.operatorframework.io/v1 kind: ClusterExtension metadata: name: <example_extension> spec: namespace: <example_namespace> serviceAccount: name: <example_extension>-installer source: sourceType: Catalog catalog: packageName: <example_extension>-operator selector: matchExpressions: - key: olm.operatorframework.io/metadata.name operator: NotIn values: - unwanted-catalog-1 The following cluster extension CR selects from catalogs that do not have the example.com/testing label. As a result, both unwanted-catalog-1 and unwanted-catalog-2 are excluded from catalog selection. Example cluster extension CR that excludes catalogs with a specific label apiVersion: olm.operatorframework.io/v1 kind: ClusterExtension metadata: name: <example_extension> spec: namespace: <example_namespace> serviceAccount: name: <example_extension>-installer source: sourceType: Catalog catalog: packageName: <example_extension>-operator selector: matchExpressions: - key: example.com/testing operator: DoesNotExist | [
"apiVersion: olm.operatorframework.io/v1 kind: ClusterCatalog metadata: name: unwanted-catalog-1 labels: example.com/testing: \"true\" spec: source: type: Image image: ref: quay.io/example/content-management-a:latest",
"apiVersion: olm.operatorframework.io/v1 kind: ClusterCatalog metadata: name: unwanted-catalog-2 labels: example.com/testing: \"true\" spec: source: type: Image image: ref: quay.io/example/content-management-b:latest",
"apiVersion: olm.operatorframework.io/v1 kind: ClusterExtension metadata: name: <example_extension> spec: namespace: <example_namespace> serviceAccount: name: <example_extension>-installer source: sourceType: Catalog catalog: packageName: <example_extension>-operator selector: matchExpressions: - key: olm.operatorframework.io/metadata.name operator: NotIn values: - unwanted-catalog-1",
"apiVersion: olm.operatorframework.io/v1 kind: ClusterExtension metadata: name: <example_extension> spec: namespace: <example_namespace> serviceAccount: name: <example_extension>-installer source: sourceType: Catalog catalog: packageName: <example_extension>-operator selector: matchExpressions: - key: example.com/testing operator: DoesNotExist"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/extensions/olmv1-catalog-exclusion-by-labels-or-expressions_catalog-content-resolution |
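Assuming the oc client and the label used in the examples above, the effect of the selectors can be spot-checked by listing catalogs by label; the clustercatalog resource is only available on clusters where the OLM v1 CRDs are installed.

# Catalogs that carry the exclusion label
oc get clustercatalog -l example.com/testing=true

# Catalogs that would still be eligible under the DoesNotExist expression
oc get clustercatalog -l '!example.com/testing'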
Chapter 7. Kafka Streams configuration properties | Chapter 7. Kafka Streams configuration properties application.id Type: string Importance: high An identifier for the stream processing application. Must be unique within the Kafka cluster. It is used as 1) the default client-id prefix, 2) the group-id for membership management, 3) the changelog topic prefix. bootstrap.servers Type: list Importance: high A list of host/port pairs to use for establishing the initial connection to the Kafka cluster. The client will make use of all servers irrespective of which servers are specified here for bootstrapping-this list only impacts the initial hosts used to discover the full set of servers. This list should be in the form host1:port1,host2:port2,... . Since these servers are just used for the initial connection to discover the full cluster membership (which may change dynamically), this list need not contain the full set of servers (you may want more than one, though, in case a server is down). num.standby.replicas Type: int Default: 0 Importance: high The number of standby replicas for each task. state.dir Type: string Default: /tmp/kafka-streams Importance: high Directory location for state store. This path must be unique for each streams instance sharing the same underlying filesystem. Note that if not configured, then the default location will be different in each environment as it is computed using System.getProperty("java.io.tmpdir"). acceptable.recovery.lag Type: long Default: 10000 Valid Values: [0,... ] Importance: medium The maximum acceptable lag (number of offsets to catch up) for a client to be considered caught-up enough to receive an active task assignment. Upon assignment, it will still restore the rest of the changelog before processing. To avoid a pause in processing during rebalances, this config should correspond to a recovery time of well under a minute for a given workload. Must be at least 0. cache.max.bytes.buffering Type: long Default: 10485760 Valid Values: [0,... ] Importance: medium Maximum number of memory bytes to be used for buffering across all threads. client.id Type: string Default: "" Importance: medium An ID prefix string used for the client IDs of internal (main, restore, and global) consumers , producers, and admin clients with pattern <client.id>-[Global]StreamThread[-<threadSequenceNumber>]-<consumer|producer|restore-consumer|global-consumer> . default.deserialization.exception.handler Type: class Default: org.apache.kafka.streams.errors.LogAndFailExceptionHandler Importance: medium Exception handling class that implements the org.apache.kafka.streams.errors.DeserializationExceptionHandler interface. default.key.serde Type: class Default: null Importance: medium Default serializer / deserializer class for key that implements the org.apache.kafka.common.serialization.Serde interface. Note when windowed serde class is used, one needs to set the inner serde class that implements the org.apache.kafka.common.serialization.Serde interface via 'default.windowed.key.serde.inner' or 'default.windowed.value.serde.inner' as well. default.list.key.serde.inner Type: class Default: null Importance: medium Default inner class of list serde for key that implements the org.apache.kafka.common.serialization.Serde interface. This configuration will be read if and only if default.key.serde configuration is set to org.apache.kafka.common.serialization.Serdes.ListSerde . 
default.list.key.serde.type Type: class Default: null Importance: medium Default class for key that implements the java.util.List interface. This configuration will be read if and only if default.key.serde configuration is set to org.apache.kafka.common.serialization.Serdes.ListSerde Note when list serde class is used, one needs to set the inner serde class that implements the org.apache.kafka.common.serialization.Serde interface via 'default.list.key.serde.inner'. default.list.value.serde.inner Type: class Default: null Importance: medium Default inner class of list serde for value that implements the org.apache.kafka.common.serialization.Serde interface. This configuration will be read if and only if default.value.serde configuration is set to org.apache.kafka.common.serialization.Serdes.ListSerde . default.list.value.serde.type Type: class Default: null Importance: medium Default class for value that implements the java.util.List interface. This configuration will be read if and only if default.value.serde configuration is set to org.apache.kafka.common.serialization.Serdes.ListSerde Note when list serde class is used, one needs to set the inner serde class that implements the org.apache.kafka.common.serialization.Serde interface via 'default.list.value.serde.inner'. default.production.exception.handler Type: class Default: org.apache.kafka.streams.errors.DefaultProductionExceptionHandler Importance: medium Exception handling class that implements the org.apache.kafka.streams.errors.ProductionExceptionHandler interface. default.timestamp.extractor Type: class Default: org.apache.kafka.streams.processor.FailOnInvalidTimestamp Importance: medium Default timestamp extractor class that implements the org.apache.kafka.streams.processor.TimestampExtractor interface. default.value.serde Type: class Default: null Importance: medium Default serializer / deserializer class for value that implements the org.apache.kafka.common.serialization.Serde interface. Note when windowed serde class is used, one needs to set the inner serde class that implements the org.apache.kafka.common.serialization.Serde interface via 'default.windowed.key.serde.inner' or 'default.windowed.value.serde.inner' as well. max.task.idle.ms Type: long Default: 0 Importance: medium This config controls whether joins and merges may produce out-of-order results. The config value is the maximum amount of time in milliseconds a stream task will stay idle when it is fully caught up on some (but not all) input partitions to wait for producers to send additional records and avoid potential out-of-order record processing across multiple input streams. The default (zero) does not wait for producers to send more records, but it does wait to fetch data that is already present on the brokers. This default means that for records that are already present on the brokers, Streams will process them in timestamp order. Set to -1 to disable idling entirely and process any locally available data, even though doing so may produce out-of-order processing. max.warmup.replicas Type: int Default: 2 Valid Values: [1,... ] Importance: medium The maximum number of warmup replicas (extra standbys beyond the configured num.standbys) that can be assigned at once for the purpose of keeping the task available on one instance while it is warming up on another instance it has been reassigned to. Used to throttle how much extra broker traffic and cluster state can be used for high availability. 
Must be at least 1.Note that one warmup replica corresponds to one Stream Task. Furthermore, note that each warmup replica can only be promoted to an active task during a rebalance (normally during a so-called probing rebalance, which occur at a frequency specified by the probing.rebalance.interval.ms config). This means that the maximum rate at which active tasks can be migrated from one Kafka Streams Instance to another instance can be determined by ( max.warmup.replicas / probing.rebalance.interval.ms ). num.stream.threads Type: int Default: 1 Importance: medium The number of threads to execute stream processing. processing.guarantee Type: string Default: at_least_once Valid Values: [at_least_once, exactly_once, exactly_once_beta, exactly_once_v2] Importance: medium The processing guarantee that should be used. Possible values are at_least_once (default) and exactly_once_v2 (requires brokers version 2.5 or higher). Deprecated options are exactly_once (requires brokers version 0.11.0 or higher) and exactly_once_beta (requires brokers version 2.5 or higher). Note that exactly-once processing requires a cluster of at least three brokers by default what is the recommended setting for production; for development you can change this, by adjusting broker setting transaction.state.log.replication.factor and transaction.state.log.min.isr . rack.aware.assignment.non_overlap_cost Type: int Default: null Importance: medium Cost associated with moving tasks from existing assignment. This config and rack.aware.assignment.traffic_cost controls whether the optimization algorithm favors minimizing cross rack traffic or minimize the movement of tasks in existing assignment. If set a larger value org.apache.kafka.streams.processor.internals.assignment.RackAwareTaskAssignor will optimize to maintain the existing assignment. The default value is null which means it will use default non_overlap cost values in different assignors. rack.aware.assignment.strategy Type: string Default: none Valid Values: [none, min_traffic, balance_subtopology] Importance: medium The strategy we use for rack aware assignment. Rack aware assignment will take client.rack and racks of TopicPartition into account when assigning tasks to minimize cross rack traffic. Valid settings are : none (default), which will disable rack aware assignment; min_traffic , which will compute minimum cross rack traffic assignment; balance_subtopology , which will compute minimum cross rack traffic and try to balance the tasks of same subtopolgies across different clients. rack.aware.assignment.tags Type: list Default: "" Valid Values: List containing maximum of 5 elements Importance: medium List of client tag keys used to distribute standby replicas across Kafka Streams instances. When configured, Kafka Streams will make a best-effort to distribute the standby tasks over each client tag dimension. rack.aware.assignment.traffic_cost Type: int Default: null Importance: medium Cost associated with cross rack traffic. This config and rack.aware.assignment.non_overlap_cost controls whether the optimization algorithm favors minimizing cross rack traffic or minimize the movement of tasks in existing assignment. If set a larger value org.apache.kafka.streams.processor.internals.assignment.RackAwareTaskAssignor will optimize for minimizing cross rack traffic. The default value is null which means it will use default traffic cost values in different assignors. 
replication.factor Type: int Default: -1 Importance: medium The replication factor for change log topics and repartition topics created by the stream processing application. The default of -1 (meaning: use broker default replication factor) requires broker version 2.4 or newer. security.protocol Type: string Default: PLAINTEXT Valid Values: (case insensitive) [SASL_SSL, PLAINTEXT, SSL, SASL_PLAINTEXT] Importance: medium Protocol used to communicate with brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL. statestore.cache.max.bytes Type: long Default: 10485760 (10 mebibytes) Valid Values: [0,... ] Importance: medium Maximum number of memory bytes to be used for statestore cache across all threads. task.assignor.class Type: string Default: null Importance: medium A task assignor class or class name implementing the org.apache.kafka.streams.processor.assignment.TaskAssignor interface. Defaults to the HighAvailabilityTaskAssignor class. task.timeout.ms Type: long Default: 300000 (5 minutes) Valid Values: [0,... ] Importance: medium The maximum amount of time in milliseconds a task might stall due to internal errors and retries until an error is raised. For a timeout of 0ms, a task would raise an error for the first internal error. For any timeout larger than 0ms, a task will retry at least once before an error is raised. topology.optimization Type: string Default: none Valid Values: [all, none, reuse.ktable.source.topics, merge.repartition.topics, single.store.self.join] Importance: medium A configuration telling Kafka Streams if it should optimize the topology and what optimizations to apply. Acceptable values are: "NO_OPTIMIZATION", "OPTIMIZE", or a comma separated list of specific optimizations: ("REUSE_KTABLE_SOURCE_TOPICS", "MERGE_REPARTITION_TOPICS", "SINGLE_STORE_SELF_JOIN"). "NO_OPTIMIZATION" by default. application.server Type: string Default: "" Importance: low A host:port pair pointing to a user-defined endpoint that can be used for state store discovery and interactive queries on this KafkaStreams instance. auto.include.jmx.reporter Type: boolean Default: true Importance: low Deprecated. Whether to automatically include JmxReporter even if it's not listed in metric.reporters . This configuration will be removed in Kafka 4.0, users should instead include org.apache.kafka.common.metrics.JmxReporter in metric.reporters in order to enable the JmxReporter. buffered.records.per.partition Type: int Default: 1000 Importance: low Maximum number of records to buffer per partition. built.in.metrics.version Type: string Default: latest Valid Values: [latest] Importance: low Version of the built-in metrics to use. commit.interval.ms Type: long Default: 30000 (30 seconds) Valid Values: [0,... ] Importance: low The frequency in milliseconds with which to commit processing progress. For at-least-once processing, committing means to save the position (ie, offsets) of the processor. For exactly-once processing, it means to commit the transaction which includes to save the position and to make the committed data in the output topic visible to consumers with isolation level read_committed. (Note, if processing.guarantee is set to exactly_once_v2 , exactly_once ,the default value is 100 , otherwise the default value is 30000 . connections.max.idle.ms Type: long Default: 540000 (9 minutes) Importance: low Close idle connections after the number of milliseconds specified by this config. 
default.client.supplier Type: class Default: org.apache.kafka.streams.processor.internals.DefaultKafkaClientSupplier Importance: low Client supplier class that implements the org.apache.kafka.streams.KafkaClientSupplier interface. default.dsl.store Type: string Default: rocksDB Valid Values: [rocksDB, in_memory] Importance: low The default state store type used by DSL operators. dsl.store.suppliers.class Type: class Default: org.apache.kafka.streams.state.BuiltInDslStoreSuppliersUSDRocksDBDslStoreSuppliers Importance: low Defines which store implementations to plug in to DSL operators. Must implement the org.apache.kafka.streams.state.DslStoreSuppliers interface. enable.metrics.push Type: boolean Default: true Importance: low Whether to enable pushing of internal client metrics for (main, restore, and global) consumers, producers, and admin clients. The cluster must have a client metrics subscription which corresponds to a client. metadata.max.age.ms Type: long Default: 300000 (5 minutes) Valid Values: [0,... ] Importance: low The period of time in milliseconds after which we force a refresh of metadata even if we haven't seen any partition leadership changes to proactively discover any new brokers or partitions. metric.reporters Type: list Default: "" Importance: low A list of classes to use as metrics reporters. Implementing the org.apache.kafka.common.metrics.MetricsReporter interface allows plugging in classes that will be notified of new metric creation. The JmxReporter is always included to register JMX statistics. metrics.num.samples Type: int Default: 2 Valid Values: [1,... ] Importance: low The number of samples maintained to compute metrics. metrics.recording.level Type: string Default: INFO Valid Values: [INFO, DEBUG, TRACE] Importance: low The highest recording level for metrics. metrics.sample.window.ms Type: long Default: 30000 (30 seconds) Valid Values: [0,... ] Importance: low The window of time a metrics sample is computed over. poll.ms Type: long Default: 100 Importance: low The amount of time in milliseconds to block waiting for input. probing.rebalance.interval.ms Type: long Default: 600000 (10 minutes) Valid Values: [60000,... ] Importance: low The maximum time in milliseconds to wait before triggering a rebalance to probe for warmup replicas that have finished warming up and are ready to become active. Probing rebalances will continue to be triggered until the assignment is balanced. Must be at least 1 minute. receive.buffer.bytes Type: int Default: 32768 (32 kibibytes) Valid Values: [-1,... ] Importance: low The size of the TCP receive buffer (SO_RCVBUF) to use when reading data. If the value is -1, the OS default will be used. reconnect.backoff.max.ms Type: long Default: 1000 (1 second) Valid Values: [0,... ] Importance: low The maximum amount of time in milliseconds to wait when reconnecting to a broker that has repeatedly failed to connect. If provided, the backoff per host will increase exponentially for each consecutive connection failure, up to this maximum. After calculating the backoff increase, 20% random jitter is added to avoid connection storms. reconnect.backoff.ms Type: long Default: 50 Valid Values: [0,... ] Importance: low The base amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a tight loop. This backoff applies to all connection attempts by the client to a broker. 
This value is the initial backoff value and will increase exponentially for each consecutive connection failure, up to the reconnect.backoff.max.ms value. repartition.purge.interval.ms Type: long Default: 30000 (30 seconds) Valid Values: [0,... ] Importance: low The frequency in milliseconds with which to delete fully consumed records from repartition topics. Purging will occur after at least this value since the last purge, but may be delayed until later. (Note, unlike commit.interval.ms , the default for this value remains unchanged when processing.guarantee is set to exactly_once_v2 ). request.timeout.ms Type: int Default: 40000 (40 seconds) Valid Values: [0,... ] Importance: low The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses the client will resend the request if necessary or fail the request if retries are exhausted. retries Type: int Default: 0 Valid Values: [0,... ,2147483647] Importance: low Setting a value greater than zero will cause the client to resend any request that fails with a potentially transient error. It is recommended to set the value to either zero or MAX_VALUE and use corresponding timeout parameters to control how long a client should retry a request. retry.backoff.ms Type: long Default: 100 Valid Values: [0,... ] Importance: low The amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending requests in a tight loop under some failure scenarios. This value is the initial backoff value and will increase exponentially for each failed request, up to the retry.backoff.max.ms value. rocksdb.config.setter Type: class Default: null Importance: low A Rocks DB config setter class or class name that implements the org.apache.kafka.streams.state.RocksDBConfigSetter interface. send.buffer.bytes Type: int Default: 131072 (128 kibibytes) Valid Values: [-1,... ] Importance: low The size of the TCP send buffer (SO_SNDBUF) to use when sending data. If the value is -1, the OS default will be used. state.cleanup.delay.ms Type: long Default: 600000 (10 minutes) Importance: low The amount of time in milliseconds to wait before deleting state when a partition has migrated. Only state directories that have not been modified for at least state.cleanup.delay.ms will be removed. upgrade.from Type: string Default: null Valid Values: [null, 0.10.0, 0.10.1, 0.10.2, 0.11.0, 1.0, 1.1, 2.0, 2.1, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7, 2.8, 3.0, 3.1, 3.2, 3.3, 3.4, 3.5, 3.6, 3.7] Importance: low Allows upgrading in a backward compatible way. This is needed when upgrading from [0.10.0, 1.1] to 2.0+, or when upgrading from [2.0, 2.3] to 2.4+. When upgrading from 3.3 to a newer version it is not required to specify this config. Default is null . Accepted values are "0.10.0", "0.10.1", "0.10.2", "0.11.0", "1.0", "1.1", "2.0", "2.1", "2.2", "2.3", "2.4", "2.5", "2.6", "2.7", "2.8", "3.0", "3.1", "3.2", "3.3", "3.4", "3.5", "3.6", "3.7" (for upgrading from the corresponding old version). window.size.ms Type: long Default: null Importance: low Sets window size for the deserializer in order to calculate window end times. windowed.inner.class.serde Type: string Default: null Importance: low Default serializer / deserializer for the inner class of a windowed record. Must implement the org.apache.kafka.common.serialization.Serde interface. 
Note that setting this config in KafkaStreams application would result in an error as it is meant to be used only from Plain consumer client. windowstore.changelog.additional.retention.ms Type: long Default: 86400000 (1 day) Importance: low Added to a windows maintainMs to ensure data is not deleted from the log prematurely. Allows for clock drift. Default is 1 day. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/kafka_configuration_properties/kafka-streams-configuration-properties-str |
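To illustrate how a handful of these properties fit together, a minimal configuration file for a Streams application might look as follows; the application ID, bootstrap servers, and state directory are placeholders chosen for this sketch.

cat > streams.properties <<'EOF'
application.id=example-streams-app
bootstrap.servers=broker1:9092,broker2:9092
num.standby.replicas=1
state.dir=/var/lib/kafka-streams
processing.guarantee=exactly_once_v2
commit.interval.ms=100
EOF

Note that commit.interval.ms is set explicitly here only to show the property; as described above, its default already drops to 100 when processing.guarantee is exactly_once_v2.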
Part V. Deprecated Functionality | Part V. Deprecated Functionality This part provides an overview of functionality that has been deprecated in all minor releases up to Red Hat Enterprise Linux 7.4. Deprecated functionality continues to be supported until the end of life of Red Hat Enterprise Linux 7. Deprecated functionality will likely not be supported in future major releases of this product and is not recommended for new deployments. For the most recent list of deprecated functionality within a particular major release, refer to the latest version of release documentation. Deprecated hardware components are not recommended for new deployments on the current or future major releases. Hardware driver updates are limited to security and critical fixes only. Red Hat recommends replacing this hardware as soon as reasonably feasible. A package can be deprecated and not recommended for further use. Under certain circumstances, a package can be removed from a product. Product documentation then identifies more recent packages that offer functionality similar, identical, or more advanced to the one deprecated, and provides further recommendations. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.4_release_notes/part-Red_Hat_Enterprise_Linux-7.4_Release_Notes-Deprecated_Functionality |
2.4. Configuring Red Hat JBoss Data Grid for Authorization | 2.4. Configuring Red Hat JBoss Data Grid for Authorization Authorization is configured at two levels: the cache container (CacheManager), and at the single cache. CacheManager The following is an example configuration for authorization at the CacheManager level: Example 2.4. CacheManager Authorization (Declarative Configuration) Each cache container determines: whether to use authorization. a class which will map principals to a set of roles. a set of named roles and the permissions they represent. You can choose to use only a subset of the roles defined at the container level. Roles Roles may be applied on a cache-per-cache basis, using the roles defined at the cache-container level, as follows: Example 2.5. Defining Roles Important Any cache that is intended to require authentication must have a listing of roles defined; otherwise authentication is not enforced as the no-anonymous policy is defined by the cache's authorization. Programmatic CacheManager Authorization (Library Mode) The following example shows how to set up the same authorization parameters for Library mode using programmatic configuration: Example 2.6. CacheManager Authorization Programmatic Configuration Important The REST protocol is not supported for use with authorization, and any attempts to access a cache with authorization enabled will result in a SecurityException . Report a bug | [
"<cache-container name=\"local\" default-cache=\"default\"> <security> <authorization> <identity-role-mapper /> <role name=\"admin\" permissions=\"ALL\"/> <role name=\"reader\" permissions=\"READ\"/> <role name=\"writer\" permissions=\"WRITE\"/> <role name=\"supervisor\" permissions=\"ALL_READ ALL_WRITE\"/> </authorization> </security> </cache-container>",
"<local-cache name=\"secured\"> <security> <authorization roles=\"admin reader writer supervisor\"/> </security> </local-cache>",
"GlobalConfigurationBuilder global = new GlobalConfigurationBuilder(); global .security() .authorization() .principalRoleMapper(new IdentityRoleMapper()) .role(\"admin\") .permission(CachePermission.ALL) .role(\"supervisor\") .permission(CachePermission.EXEC) .permission(CachePermission.READ) .permission(CachePermission.WRITE) .role(\"reader\") .permission(CachePermission.READ); ConfigurationBuilder config = new ConfigurationBuilder(); config .security() .enable() .authorization() .role(\"admin\") .role(\"supervisor\") .role(\"reader\");"
] | https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/security_guide/Configuring_Red_Hat_JBoss_Data_Grid_for_Authorization |
Chapter 16. Upgrading to OpenShift Data Foundation | Chapter 16. Upgrading to OpenShift Data Foundation 16.1. Overview of the OpenShift Data Foundation update process OpenShift Container Storage, based on the open source Ceph technology, has expanded its scope and foundational role in a containerized, hybrid cloud environment since its introduction. It complements existing storage in addition to other data-related hardware and software, making them rapidly attachable, accessible, and scalable in a hybrid cloud environment. To better reflect these foundational and infrastructure distinctives, OpenShift Container Storage is now OpenShift Data Foundation . Important You can perform the upgrade process for OpenShift Data Foundation version 4.9 from OpenShift Container Storage version 4.8 only by installing the OpenShift Data Foundation operator from OpenShift Container Platform OperatorHub. In the future release, you can upgrade Red Hat OpenShift Data Foundation, either between minor releases like 4.9 and 4.x, or between batch updates like 4.9.0 and 4.9.1 by enabling automatic updates (if not done so during operator installation) or performing manual updates. You also need to upgrade the different parts of Red Hat OpenShift Data Foundation in the following order for both internal and external mode deployments: Update OpenShift Container Platform according to the Updating clusters documentation for OpenShift Container Platform. Update Red Hat OpenShift Data Foundation. To prepare a disconnected environment for updates , see Operators guide to using Operator Lifecycle Manager on restricted networks to be able to update Red Hat OpenShift Data Foundation as well as Local Storage Operator when in use. Update Red Hat OpenShift Container Storage operator version 4.8 to version 4.9 by installing the Red Hat OpenShift Data Foundation operator from the OperatorHub on OpenShift Container Platform web console. See Updating Red Hat OpenShift Container Storage 4.8 to Red Hat OpenShift Data Foundation 4.9 . Update Red Hat OpenShift Data Foundation from 4.9.x to 4.9.y . See Updating Red Hat OpenShift Data Foundation 4.9.x to 4.9.y . For updating external mode deployments , you must also perform the steps from section Updating the OpenShift Data Foundation external secret . If you use local storage: Update the Local Storage operator . See Checking for Local Storage Operator deployments if you are unsure. Perform post-update configuration changes for clusters backed by local storage. See Post-update configuration for clusters backed by local storage for details. Update considerations Review the following important considerations before you begin. Red Hat recommends using the same version of Red Hat OpenShift Container Platform with Red Hat OpenShift Data Foundation. See the Interoperability Matrix for more information about supported combinations of OpenShift Container Platform and Red Hat OpenShift Data Foundation. The Local Storage Operator is fully supported only when the Local Storage Operator version matches the Red Hat OpenShift Container Platform version. The flexible scaling feature is available only in new deployments of Red Hat OpenShift Data Foundation versions 4.7 and later. Storage clusters upgraded from a version to version 4.7 or later do not support flexible scaling. For more information, see Flexible scaling of OpenShift Container Storage cluster in the New features section of 4.7 Release Notes . 16.2. 
Updating Red Hat OpenShift Container Storage 4.8 to Red Hat OpenShift Data Foundation 4.9 This chapter helps you to upgrade between the z-stream release for all Red Hat OpenShift Data Foundation deployments (Internal, Internal-Attached and External). The upgrade process remains the same for all deployments. The Only difference is what gets upgraded and what's not. For Internal and Internal-attached deployments, upgrading OpenShift Container Storage upgrades all OpenShift Container Storage services including the backend Ceph Storage cluster. For External mode deployments, upgrading OpenShift Container Storage only upgrades the OpenShift Container Storage service while the backend Ceph storage cluster remains untouched and needs to be upgraded separately. We recommend upgrading RHCS along with OpenShift Container Storage in order to get new feature support, security fixes, and other bug fixes. Since we do not have a strong dependency on RHCS upgrade, you can upgrade the OpenShift Data Foundation operator first followed by RHCS upgrade or vice-versa. See solution to know more about Red Hat Ceph Storage releases. Important Upgrading to 4.9 directly from any version older than 4.8 is unsupported. Prerequisites Ensure that the OpenShift Container Platform cluster has been updated to the latest stable release of version 4.9.X, see Updating Clusters . Ensure that the OpenShift Container Storage cluster is healthy and data is resilient. Navigate to Storage Overview and check both Block and File and Object tabs for the green tick on the status card. Green tick indicates that the storage cluster , object service and data resiliency are all healthy. Ensure that all OpenShift Container Storage Pods, including the operator pods, are in Running state in the openshift-storage namespace. To view the state of the pods, on the OpenShift Web Console, click Workloads Pods . Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Ensure that you have sufficient time to complete the OpenShift Data Foundation update process, as the update time varies depending on the number of OSDs that run in the cluster. Procedure On the OpenShift Web Console, navigate to OperatorHub . Search for OpenShift Data Foundation using the Filter by keyword box and click on the OpenShift Data Foundation tile. Click Install . On the install Operator page, click Install . Wait for the Operator installation to complete. Note We recommend using all default settings. Changing it may result in unexpected behavior. Alter only if you are aware of its result. Verification steps Verify that the page displays Succeeded message along with the option to Create StorageSystem . Note For the upgraded clusters, since the storage system is automatically created, do not create it again. On the notification popup, click Refresh web console link to reflect the OpenShift Data Foundation changes in the OpenShift console. Verify the state of the pods on the OpenShift Web Console. Click Workloads Pods . Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Wait for all the pods in the openshift-storage namespace to restart and reach Running state. Verify that the OpenShift Data Foundation cluster is healthy and data is resilient. Navigate to Storage OpenShift Data foundation Storage Systems tab and then click on the storage system name. 
Check both Block and File and Object tabs for the green tick on the status card. Green tick indicates that the storage cluster, object service and data resiliency are all healthy. Important In case the console plugin option was not automatically enabled after you installed the OpenShift Data Foundation Operator, you need to enable it. For more information on how to enable the console plugin, see Enabling the Red Hat OpenShift Data Foundation console plugin . After updating external mode deployments, you must also update the external secret. For instructions, see Updating the OpenShift Data Foundation external secret . Additional Resources If you face any issues while updating OpenShift Data Foundation, see the Commonly required logs for troubleshooting section in the Troubleshooting guide . 16.3. Updating Red Hat OpenShift Data Foundation 4.9.x to 4.9.y This chapter helps you to upgrade between the z-stream release for all Red Hat OpenShift Data Foundation deployments (Internal, Internal-Attached and External). The upgrade process remains the same for all deployments. The Only difference is what gets upgraded and what's not. For Internal and Internal-attached deployments, upgrading OpenShift Container Storage upgrades all OpenShift Container Storage services including the backend Ceph Storage cluster. For External mode deployments, upgrading OpenShift Container Storage only upgrades the OpenShift Container Storage service while the backend Ceph storage cluster remains untouched and needs to be upgraded separately. Hence, we recommend upgrading RHCS along with OpenShift Container Storage in order to get new feature support, security fixes, and other bug fixes. Since we do not have a strong dependency on RHCS upgrade, you can upgrade the OpenShift Data Foundation operator first followed by RHCS upgrade or vice-versa. See solution to know more about Red Hat Ceph Storage releases. When a new z-stream release becomes available, the upgrade process triggers automatically if the update strategy was set to Automatic . If the update strategy is set to Manual then use the following procedure. Prerequisites Ensure that the OpenShift Container Platform cluster has been updated to the latest stable release of version 4.9.X, see Updating Clusters . Ensure that the OpenShift Data Foundation cluster is healthy and data is resilient. Navigate to Storage OpenShift Data Foundation Storage Systems tab and then click on the storage system name. Check for the green tick on the status card of Overview - Block and File and Object tabs. Green tick indicates that the storage cluster, object service and data resiliency is healthy. Ensure that all OpenShift Data Foundation Pods, including the operator pods, are in Running state in the openshift-storage namespace. To view the state of the pods, on the OpenShift Web Console, click Workloads Pods . Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Ensure that you have sufficient time to complete the OpenShift Data Foundation update process, as the update time varies depending on the number of OSDs that run in the cluster. Procedure On the OpenShift Web Console, navigate to Operators Installed Operators . Select openshift-storage project. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Click the OpenShift Data Foundation operator name. Click the Subscription tab. 
If the Upgrade Status shows requires approval , click the requires approval link. On the InstallPlan Details page, click Preview Install Plan . Review the install plan and click Approve . Wait for the Status to change from Unknown to Created . Verification steps Verify that the Version below the OpenShift Data Foundation name and the operator status is the latest version. Navigate to Operators Installed Operators and select the openshift-storage project. When the upgrade completes, the version updates to a new version number for OpenShift Data Foundation and the status changes to Succeeded with a green tick. Verify that the OpenShift Data Foundation cluster is healthy and data is resilient. Navigate to Storage OpenShift Data Foundation Storage Systems tab and then click on the storage system name. Check for the green tick on the status card of Overview - Block and File and Object tabs. Green tick indicates that the storage cluster, object service and data resiliency are all healthy. Important In case the console plugin option was not automatically enabled after you installed the OpenShift Data Foundation Operator, you need to enable it. For more information on how to enable the console plugin, see Enabling the Red Hat OpenShift Data Foundation console plugin . If verification steps fail, contact Red Hat Support . 16.4. Changing the update approval strategy To ensure that the storage system gets updated automatically when a new update is available in the same channel, we recommend keeping the update approval strategy set to Automatic . Changing the update approval strategy to Manual requires manual approval for each upgrade. Procedure Navigate to Operators Installed Operators . Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Click the OpenShift Data Foundation operator name. Go to the Subscription tab. Click the pencil icon to change the Update approval . Select the update approval strategy and click Save . Verification steps Verify that the Update approval shows the newly selected approval strategy below it. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/deploying_and_managing_openshift_data_foundation_using_red_hat_openstack_platform/upgrading-your-cluster_osp
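The same health and approval checks can be cross-checked from the command line with the oc client. The following is a minimal sketch, not part of the documented console procedure: it assumes you are logged in as a cluster administrator, and the subscription name odf-operator is a placeholder that you replace with the name reported in your own openshift-storage namespace.

oc get csv -n openshift-storage            # operator versions and their phase (expect Succeeded)
oc get pods -n openshift-storage           # all pods should reach Running or Completed
oc get subscription -n openshift-storage   # shows the subscription name, channel, and approval mode
oc patch subscription odf-operator -n openshift-storage --type merge -p '{"spec":{"installPlanApproval":"Automatic"}}'   # hypothetical example of switching the approval strategy from the CLI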
19.4. Configuring LDAP and Kerberos for Single Sign-on | 19.4. Configuring LDAP and Kerberos for Single Sign-on Single sign-on allows users to log in to the VM Portal or the Administration Portal without re-typing their passwords. Authentication credentials are obtained from the Kerberos server. To configure single sign-on to the Administration Portal and the VM Portal, you need to configure two extensions: ovirt-engine-extension-aaa-misc and ovirt-engine-extension-aaa-ldap ; and two Apache modules: mod_auth_gssapi and mod_session . You can configure single sign-on that does not involve Kerberos, however this is outside the scope of this documentation. Note If single sign-on to the VM Portal is enabled, single sign-on to virtual machines is not possible. With single sign-on to the VM Portal enabled, the VM Portal does not need to accept a password, so you cannot delegate the password to sign in to virtual machines. This example assumes the following: The existing Key Distribution Center (KDC) server uses the MIT version of Kerberos 5. You have administrative rights to the KDC server. The Kerberos client is installed on the Red Hat Virtualization Manager and user machines. The kadmin utility is used to create Kerberos service principals and keytab files. This procedure involves the following components: On the KDC server Create a service principal and a keytab file for the Apache service on the Red Hat Virtualization Manager. On the Red Hat Virtualization Manager Install the authentication and authorization extension packages and the Apache Kerberos authentication module. Configure the extension files. Configuring Kerberos for the Apache Service On the KDC server, use the kadmin utility to create a service principal for the Apache service on the Red Hat Virtualization Manager. The service principal is a reference ID to the KDC for the Apache service. Generate a keytab file for the Apache service. The keytab file stores the shared secret key. Note The engine-backup command includes the file /etc/httpd/http.keytab when backing up and restoring. If you use a different name for the keytab file, make sure you back up and restore it. Copy the keytab file from the KDC server to the Red Hat Virtualization Manager: Configuring Single Sign-on to the VM Portal or Administration Portal On the Red Hat Virtualization Manager, ensure that the ownership and permissions for the keytab are appropriate: Install the authentication extension package, LDAP extension package, and the mod_auth_gssapi and mod_session Apache modules: Copy the SSO configuration template file into the /etc/ovirt-engine directory. Template files are available for Active Directory ( ad-sso ) and other directory types ( simple-sso ). This example uses the simple SSO configuration template. Move ovirt-sso.conf into the Apache configuration directory. Note The engine-backup command includes the file /etc/httpd/conf.d/ovirt-sso.conf when backing up and restoring. If you use a different name for this file, make sure you back up and restore it. Review the authentication method file. You do not need to edit this file, as the realm is automatically fetched from the keytab file. Example 19.5. Example authentication method file Rename the configuration files to match the profile name you want visible to users on the Administration Portal and the VM Portal login pages: Edit the LDAP property configuration file by uncommenting an LDAP server type and updating the domain and passwords fields: Example 19.6. 
Example profile: LDAP server section To use TLS or SSL protocol to interact with the LDAP server, obtain the root CA certificate for the LDAP server and use it to create a public keystore file. Uncomment the following lines and specify the full path to the public keystore file and the password to access the file. Note For more information on creating a public keystore file, see Section D.2, "Setting Up Encrypted Communication between the Manager and an LDAP Server" . Example 19.7. Example profile: keystore section Review the authentication configuration file. The profile name visible to users on the Administration Portal and the VM Portal login pages is defined by ovirt.engine.aaa.authn.profile.name . The configuration profile location must match the LDAP configuration file location. All fields can be left as default. Example 19.8. Example authentication configuration file Review the authorization configuration file. The configuration profile location must match the LDAP configuration file location. All fields can be left as default. Example 19.9. Example authorization configuration file Review the authentication mapping configuration file. The configuration profile location must match the LDAP configuration file location. The configuration profile extension name must match the ovirt.engine.aaa.authn.mapping.plugin value in the authentication configuration file. All fields can be left as default. Example 19.10. Example authentication mapping configuration file Ensure that the ownership and permissions of the configuration files are appropriate: Restart the Apache service and the ovirt-engine service: | [
"kadmin kadmin> addprinc -randkey HTTP/fqdn-of-rhevm @ REALM.COM",
"kadmin> ktadd -k /tmp/http.keytab HTTP/fqdn-of-rhevm @ REALM.COM kadmin> quit",
"scp /tmp/http.keytab root@ rhevm.example.com :/etc/httpd",
"chown apache /etc/httpd/http.keytab chmod 400 /etc/httpd/http.keytab",
"yum install ovirt-engine-extension-aaa-misc ovirt-engine-extension-aaa-ldap mod_auth_gssapi mod_session",
"cp -r /usr/share/ovirt-engine-extension-aaa-ldap/examples/simple-sso/. /etc/ovirt-engine",
"mv /etc/ovirt-engine/aaa/ovirt-sso.conf /etc/httpd/conf.d",
"vi /etc/httpd/conf.d/ovirt-sso.conf",
"<LocationMatch ^/ovirt-engine/sso/(interactive-login-negotiate|oauth/token-http-auth)|^/ovirt-engine/api> <If \"req('Authorization') !~ /^(Bearer|Basic)/i\"> RewriteEngine on RewriteCond %{LA-U:REMOTE_USER} ^(.*)USD RewriteRule ^(.*)USD - [L,NS,P,E=REMOTE_USER:%1] RequestHeader set X-Remote-User %{REMOTE_USER}s AuthType GSSAPI AuthName \"Kerberos Login\" # Modify to match installation GssapiCredStore keytab:/etc/httpd/http.keytab GssapiUseSessions On Session On SessionCookieName ovirt_gssapi_session path=/private;httponly;secure; Require valid-user ErrorDocument 401 \"<html><meta http-equiv=\\\"refresh\\\" content=\\\"0; url=/ovirt-engine/sso/login-unauthorized\\\"/><body><a href=\\\"/ovirt-engine/sso/login-unauthorized\\\">Here</a></body></html>\" </If> </LocationMatch>",
"mv /etc/ovirt-engine/aaa/profile1.properties /etc/ovirt-engine/aaa/ example .properties",
"mv /etc/ovirt-engine/extensions.d/profile1-http-authn.properties /etc/ovirt-engine/extensions.d/ example -http-authn.properties",
"mv /etc/ovirt-engine/extensions.d/profile1-http-mapping.properties /etc/ovirt-engine/extensions.d/ example -http-mapping.properties",
"mv /etc/ovirt-engine/extensions.d/profile1-authz.properties /etc/ovirt-engine/extensions.d/ example -authz.properties",
"vi /etc/ovirt-engine/aaa/ example .properties",
"Select one include = <openldap.properties> #include = <389ds.properties> #include = <rhds.properties> #include = <ipa.properties> #include = <iplanet.properties> #include = <rfc2307-389ds.properties> #include = <rfc2307-rhds.properties> #include = <rfc2307-openldap.properties> #include = <rfc2307-edir.properties> #include = <rfc2307-generic.properties> Server # vars.server = ldap1.company.com Search user and its password. # vars.user = uid=search,cn=users,cn=accounts,dc=company,dc=com vars.password = 123456 pool.default.serverset.single.server = USD{global:vars.server} pool.default.auth.simple.bindDN = USD{global:vars.user} pool.default.auth.simple.password = USD{global:vars.password}",
"Create keystore, import certificate chain and uncomment if using ssl/tls. pool.default.ssl.startTLS = true pool.default.ssl.truststore.file = /full/path/to/myrootca.jks pool.default.ssl.truststore.password = password",
"vi /etc/ovirt-engine/extensions.d/ example -http-authn.properties",
"ovirt.engine.extension.name = example -http-authn ovirt.engine.extension.bindings.method = jbossmodule ovirt.engine.extension.binding.jbossmodule.module = org.ovirt.engine-extensions.aaa.misc ovirt.engine.extension.binding.jbossmodule.class = org.ovirt.engineextensions.aaa.misc.http.AuthnExtension ovirt.engine.extension.provides = org.ovirt.engine.api.extensions.aaa.Authn ovirt.engine.aaa.authn.profile.name = example -http ovirt.engine.aaa.authn.authz.plugin = example -authz ovirt.engine.aaa.authn.mapping.plugin = example -http-mapping config.artifact.name = HEADER config.artifact.arg = X-Remote-User",
"vi /etc/ovirt-engine/extensions.d/ example -authz.properties",
"ovirt.engine.extension.name = example -authz ovirt.engine.extension.bindings.method = jbossmodule ovirt.engine.extension.binding.jbossmodule.module = org.ovirt.engine-extensions.aaa.ldap ovirt.engine.extension.binding.jbossmodule.class = org.ovirt.engineextensions.aaa.ldap.AuthzExtension ovirt.engine.extension.provides = org.ovirt.engine.api.extensions.aaa.Authz config.profile.file.1 = ../aaa/ example .properties",
"vi /etc/ovirt-engine/extensions.d/ example -http-mapping.properties",
"ovirt.engine.extension.name = example -http-mapping ovirt.engine.extension.bindings.method = jbossmodule ovirt.engine.extension.binding.jbossmodule.module = org.ovirt.engine-extensions.aaa.misc ovirt.engine.extension.binding.jbossmodule.class = org.ovirt.engineextensions.aaa.misc.mapping.MappingExtension ovirt.engine.extension.provides = org.ovirt.engine.api.extensions.aaa.Mapping config.mapAuthRecord.type = regex config.mapAuthRecord.regex.mustMatch = true config.mapAuthRecord.regex.pattern = ^(?<user>.*?)((\\\\\\\\(?<at>@)(?<suffix>.*?)@.*)|(?<realm>@.*))USD config.mapAuthRecord.regex.replacement = USD{user}USD{at}USD{suffix}",
"chown ovirt:ovirt /etc/ovirt-engine/aaa/ example .properties",
"chown ovirt:ovirt /etc/ovirt-engine/extensions.d/ example -http-authn.properties",
"chown ovirt:ovirt /etc/ovirt-engine/extensions.d/ example -http-mapping.properties",
"chown ovirt:ovirt /etc/ovirt-engine/extensions.d/ example -authz.properties",
"chmod 600 /etc/ovirt-engine/aaa/ example .properties",
"chmod 640 /etc/ovirt-engine/extensions.d/ example -http-authn.properties",
"chmod 640 /etc/ovirt-engine/extensions.d/ example -http-mapping.properties",
"chmod 640 /etc/ovirt-engine/extensions.d/ example -authz.properties",
"systemctl restart httpd.service systemctl restart ovirt-engine.service"
] | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/administration_guide/Configuring_LDAP_and_Kerberos_for_Single_Sign-on |
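After restarting Apache and the ovirt-engine service as described above, the Kerberos negotiation can be exercised from a client shell. This is a rough verification sketch under the assumptions used in the example (realm REALM.COM, Manager FQDN rhevm.example.com) and is not part of the official procedure; the -k option is only appropriate for test setups with self-signed certificates.

kinit user@REALM.COM    # obtain a Kerberos ticket for a directory user
klist                   # confirm that the ticket-granting ticket is present
curl --negotiate -u : -k https://rhevm.example.com/ovirt-engine/api    # SPNEGO authentication against the API without typing a password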
Chapter 2. Configuring an AWS account | Chapter 2. Configuring an AWS account Before you can install OpenShift Container Platform, you must configure an Amazon Web Services (AWS) account. 2.1. Configuring Route 53 To install OpenShift Container Platform, the Amazon Web Services (AWS) account you use must have a dedicated public hosted zone in your Route 53 service. This zone must be authoritative for the domain. The Route 53 service provides cluster DNS resolution and name lookup for external connections to the cluster. Procedure Identify your domain, or subdomain, and registrar. You can transfer an existing domain and registrar or obtain a new one through AWS or another source. Note If you purchase a new domain through AWS, it takes time for the relevant DNS changes to propagate. For more information about purchasing domains through AWS, see Registering Domain Names Using Amazon Route 53 in the AWS documentation. If you are using an existing domain and registrar, migrate its DNS to AWS. See Making Amazon Route 53 the DNS Service for an Existing Domain in the AWS documentation. Create a public hosted zone for your domain or subdomain. See Creating a Public Hosted Zone in the AWS documentation. Use an appropriate root domain, such as openshiftcorp.com , or subdomain, such as clusters.openshiftcorp.com . Extract the new authoritative name servers from the hosted zone records. See Getting the Name Servers for a Public Hosted Zone in the AWS documentation. Update the registrar records for the AWS Route 53 name servers that your domain uses. For example, if you registered your domain to a Route 53 service in a different accounts, see the following topic in the AWS documentation: Adding or Changing Name Servers or Glue Records . If you are using a subdomain, add its delegation records to the parent domain. This gives Amazon Route 53 responsibility for the subdomain. Follow the delegation procedure outlined by the DNS provider of the parent domain. See Creating a subdomain that uses Amazon Route 53 as the DNS service without migrating the parent domain in the AWS documentation for an example high level procedure. 2.1.1. Ingress Operator endpoint configuration for AWS Route 53 If you install in either Amazon Web Services (AWS) GovCloud (US) US-West or US-East region, the Ingress Operator uses us-gov-west-1 region for Route53 and tagging API clients. The Ingress Operator uses https://tagging.us-gov-west-1.amazonaws.com as the tagging API endpoint if a tagging custom endpoint is configured that includes the string 'us-gov-east-1'. For more information on AWS GovCloud (US) endpoints, see the Service Endpoints in the AWS documentation about GovCloud (US). Important Private, disconnected installations are not supported for AWS GovCloud when you install in the us-gov-east-1 region. Example Route 53 configuration platform: aws: region: us-gov-west-1 serviceEndpoints: - name: ec2 url: https://ec2.us-gov-west-1.amazonaws.com - name: elasticloadbalancing url: https://elasticloadbalancing.us-gov-west-1.amazonaws.com - name: route53 url: https://route53.us-gov.amazonaws.com 1 - name: tagging url: https://tagging.us-gov-west-1.amazonaws.com 2 1 Route 53 defaults to https://route53.us-gov.amazonaws.com for both AWS GovCloud (US) regions. 2 Only the US-West region has endpoints for tagging. Omit this parameter if your cluster is in another region. 2.2. 
AWS account limits The OpenShift Container Platform cluster uses a number of Amazon Web Services (AWS) components, and the default Service Limits affect your ability to install OpenShift Container Platform clusters. If you use certain cluster configurations, deploy your cluster in certain AWS regions, or run multiple clusters from your account, you might need to request additional resources for your AWS account. The following table summarizes the AWS components whose limits can impact your ability to install and run OpenShift Container Platform clusters. Component Number of clusters available by default Default AWS limit Description Instance Limits Varies Varies By default, each cluster creates the following instances: One bootstrap machine, which is removed after installation Three control plane nodes Three worker nodes These instance type counts are within a new account's default limit. To deploy more worker nodes, enable autoscaling, deploy large workloads, or use a different instance type, review your account limits to ensure that your cluster can deploy the machines that you need. In most regions, the worker machines use an m6i.large instance and the bootstrap and control plane machines use m6i.xlarge instances. In some regions, including all regions that do not support these instance types, m5.large and m5.xlarge instances are used instead. Elastic IPs (EIPs) 0 to 1 5 EIPs per account To provision the cluster in a highly available configuration, the installation program creates a public and private subnet for each availability zone within a region . Each private subnet requires a NAT Gateway , and each NAT gateway requires a separate elastic IP . Review the AWS region map to determine how many availability zones are in each region. To take advantage of the default high availability, install the cluster in a region with at least three availability zones. To install a cluster in a region with more than five availability zones, you must increase the EIP limit. Important To use the us-east-1 region, you must increase the EIP limit for your account. Virtual Private Clouds (VPCs) 5 5 VPCs per region Each cluster creates its own VPC. Elastic Load Balancing (ELB/NLB) 3 20 per region By default, each cluster creates internal and external network load balancers for the master API server and a single Classic Load Balancer for the router. Deploying more Kubernetes Service objects with type LoadBalancer will create additional load balancers . NAT Gateways 5 5 per availability zone The cluster deploys one NAT gateway in each availability zone. Elastic Network Interfaces (ENIs) At least 12 350 per region The default installation creates 21 ENIs and an ENI for each availability zone in your region. For example, the us-east-1 region contains six availability zones, so a cluster that is deployed in that zone uses 27 ENIs. Review the AWS region map to determine how many availability zones are in each region. Additional ENIs are created for additional machines and ELB load balancers that are created by cluster usage and deployed workloads. VPC Gateway 20 20 per account Each cluster creates a single VPC Gateway for S3 access. S3 buckets 99 100 buckets per account Because the installation process creates a temporary bucket and the registry component in each cluster creates a bucket, you can create only 99 OpenShift Container Platform clusters per AWS account. Security Groups 250 2,500 per account Each cluster creates 10 distinct security groups. 2.3. 
Required AWS permissions for the IAM user Note Your IAM user must have the permission tag:GetResources in the region us-east-1 to delete the base cluster resources. As part of the AWS API requirement, the OpenShift Container Platform installation program performs various actions in this region. When you attach the AdministratorAccess policy to the IAM user that you create in Amazon Web Services (AWS), you grant that user all of the required permissions. To deploy all components of an OpenShift Container Platform cluster, the IAM user requires the following permissions: Example 2.1. Required EC2 permissions for installation ec2:AttachNetworkInterface ec2:AuthorizeSecurityGroupEgress ec2:AuthorizeSecurityGroupIngress ec2:CopyImage ec2:CreateNetworkInterface ec2:CreateSecurityGroup ec2:CreateTags ec2:CreateVolume ec2:DeleteSecurityGroup ec2:DeleteSnapshot ec2:DeleteTags ec2:DeregisterImage ec2:DescribeAccountAttributes ec2:DescribeAddresses ec2:DescribeAvailabilityZones ec2:DescribeDhcpOptions ec2:DescribeImages ec2:DescribeInstanceAttribute ec2:DescribeInstanceCreditSpecifications ec2:DescribeInstances ec2:DescribeInstanceTypes ec2:DescribeInternetGateways ec2:DescribeKeyPairs ec2:DescribeNatGateways ec2:DescribeNetworkAcls ec2:DescribeNetworkInterfaces ec2:DescribePrefixLists ec2:DescribePublicIpv4Pools (only required if publicIpv4Pool is specified in install-config.yaml ) ec2:DescribeRegions ec2:DescribeRouteTables ec2:DescribeSecurityGroupRules ec2:DescribeSecurityGroups ec2:DescribeSubnets ec2:DescribeTags ec2:DescribeVolumes ec2:DescribeVpcAttribute ec2:DescribeVpcClassicLink ec2:DescribeVpcClassicLinkDnsSupport ec2:DescribeVpcEndpoints ec2:DescribeVpcs ec2:DisassociateAddress (only required if publicIpv4Pool is specified in install-config.yaml ) ec2:GetEbsDefaultKmsKeyId ec2:ModifyInstanceAttribute ec2:ModifyNetworkInterfaceAttribute ec2:RevokeSecurityGroupEgress ec2:RevokeSecurityGroupIngress ec2:RunInstances ec2:TerminateInstances Example 2.2. Required permissions for creating network resources during installation ec2:AllocateAddress ec2:AssociateAddress ec2:AssociateDhcpOptions ec2:AssociateRouteTable ec2:AttachInternetGateway ec2:CreateDhcpOptions ec2:CreateInternetGateway ec2:CreateNatGateway ec2:CreateRoute ec2:CreateRouteTable ec2:CreateSubnet ec2:CreateVpc ec2:CreateVpcEndpoint ec2:ModifySubnetAttribute ec2:ModifyVpcAttribute Note If you use an existing Virtual Private Cloud (VPC), your account does not require these permissions for creating network resources. Example 2.3. 
Required Elastic Load Balancing permissions (ELB) for installation elasticloadbalancing:AddTags elasticloadbalancing:ApplySecurityGroupsToLoadBalancer elasticloadbalancing:AttachLoadBalancerToSubnets elasticloadbalancing:ConfigureHealthCheck elasticloadbalancing:CreateListener elasticloadbalancing:CreateLoadBalancer elasticloadbalancing:CreateLoadBalancerListeners elasticloadbalancing:CreateTargetGroup elasticloadbalancing:DeleteLoadBalancer elasticloadbalancing:DeregisterInstancesFromLoadBalancer elasticloadbalancing:DeregisterTargets elasticloadbalancing:DescribeInstanceHealth elasticloadbalancing:DescribeListeners elasticloadbalancing:DescribeLoadBalancerAttributes elasticloadbalancing:DescribeLoadBalancers elasticloadbalancing:DescribeTags elasticloadbalancing:DescribeTargetGroupAttributes elasticloadbalancing:DescribeTargetHealth elasticloadbalancing:ModifyLoadBalancerAttributes elasticloadbalancing:ModifyTargetGroup elasticloadbalancing:ModifyTargetGroupAttributes elasticloadbalancing:RegisterInstancesWithLoadBalancer elasticloadbalancing:RegisterTargets elasticloadbalancing:SetLoadBalancerPoliciesOfListener elasticloadbalancing:SetSecurityGroups Important OpenShift Container Platform uses both the ELB and ELBv2 API services to provision load balancers. The permission list shows permissions required by both services. A known issue exists in the AWS web console where both services use the same elasticloadbalancing action prefix but do not recognize the same actions. You can ignore the warnings about the service not recognizing certain elasticloadbalancing actions. Example 2.4. Required IAM permissions for installation iam:AddRoleToInstanceProfile iam:CreateInstanceProfile iam:CreateRole iam:DeleteInstanceProfile iam:DeleteRole iam:DeleteRolePolicy iam:GetInstanceProfile iam:GetRole iam:GetRolePolicy iam:GetUser iam:ListInstanceProfilesForRole iam:ListRoles iam:ListUsers iam:PassRole iam:PutRolePolicy iam:RemoveRoleFromInstanceProfile iam:SimulatePrincipalPolicy iam:TagInstanceProfile iam:TagRole Note If you specify an existing IAM role in the install-config.yaml file, the following IAM permissions are not required: iam:CreateRole , iam:DeleteRole , iam:DeleteRolePolicy , and iam:PutRolePolicy . If you have not created a load balancer in your AWS account, the IAM user also requires the iam:CreateServiceLinkedRole permission. Example 2.5. Required Route 53 permissions for installation route53:ChangeResourceRecordSets route53:ChangeTagsForResource route53:CreateHostedZone route53:DeleteHostedZone route53:GetChange route53:GetHostedZone route53:ListHostedZones route53:ListHostedZonesByName route53:ListResourceRecordSets route53:ListTagsForResource route53:UpdateHostedZoneComment Example 2.6. Required Amazon Simple Storage Service (S3) permissions for installation s3:CreateBucket s3:DeleteBucket s3:GetAccelerateConfiguration s3:GetBucketAcl s3:GetBucketCors s3:GetBucketLocation s3:GetBucketLogging s3:GetBucketObjectLockConfiguration s3:GetBucketPolicy s3:GetBucketRequestPayment s3:GetBucketTagging s3:GetBucketVersioning s3:GetBucketWebsite s3:GetEncryptionConfiguration s3:GetLifecycleConfiguration s3:GetReplicationConfiguration s3:ListBucket s3:PutBucketAcl s3:PutBucketPolicy s3:PutBucketTagging s3:PutEncryptionConfiguration Example 2.7. S3 permissions that cluster Operators require s3:DeleteObject s3:GetObject s3:GetObjectAcl s3:GetObjectTagging s3:GetObjectVersion s3:PutObject s3:PutObjectAcl s3:PutObjectTagging Example 2.8. 
Required permissions to delete base cluster resources autoscaling:DescribeAutoScalingGroups ec2:DeleteNetworkInterface ec2:DeletePlacementGroup ec2:DeleteVolume elasticloadbalancing:DeleteTargetGroup elasticloadbalancing:DescribeTargetGroups iam:DeleteAccessKey iam:DeleteUser iam:DeleteUserPolicy iam:ListAttachedRolePolicies iam:ListInstanceProfiles iam:ListRolePolicies iam:ListUserPolicies s3:DeleteObject s3:ListBucketVersions tag:GetResources Example 2.9. Required permissions to delete network resources ec2:DeleteDhcpOptions ec2:DeleteInternetGateway ec2:DeleteNatGateway ec2:DeleteRoute ec2:DeleteRouteTable ec2:DeleteSubnet ec2:DeleteVpc ec2:DeleteVpcEndpoints ec2:DetachInternetGateway ec2:DisassociateRouteTable ec2:ReleaseAddress ec2:ReplaceRouteTableAssociation Note If you use an existing VPC, your account does not require these permissions to delete network resources. Instead, your account only requires the tag:UntagResources permission to delete network resources. Example 2.10. Optional permissions for installing a cluster with a custom Key Management Service (KMS) key kms:CreateGrant kms:Decrypt kms:DescribeKey kms:Encrypt kms:GenerateDataKey kms:GenerateDataKeyWithoutPlainText kms:ListGrants kms:RevokeGrant Example 2.11. Required permissions to delete a cluster with shared instance roles iam:UntagRole Example 2.12. Additional IAM and S3 permissions that are required to create manifests iam:GetUserPolicy iam:ListAccessKeys iam:PutUserPolicy iam:TagUser s3:AbortMultipartUpload s3:GetBucketPublicAccessBlock s3:ListBucket s3:ListBucketMultipartUploads s3:PutBucketPublicAccessBlock s3:PutLifecycleConfiguration Note If you are managing your cloud provider credentials with mint mode, the IAM user also requires the iam:CreateAccessKey and iam:CreateUser permissions. Example 2.13. Optional permissions for instance and quota checks for installation ec2:DescribeInstanceTypeOfferings servicequotas:ListAWSDefaultServiceQuotas Example 2.14. Optional permissions for the cluster owner account when installing a cluster on a shared VPC sts:AssumeRole Example 2.15. Required permissions for enabling Bring your own public IPv4 addresses (BYOIP) feature for installation ec2:DescribePublicIpv4Pools ec2:DisassociateAddress 2.4. Creating an IAM user Each Amazon Web Services (AWS) account contains a root user account that is based on the email address you used to create the account. This is a highly-privileged account, and it is recommended to use it for only initial account and billing configuration, creating an initial set of users, and securing the account. Before you install OpenShift Container Platform, create a secondary IAM administrative user. As you complete the Creating an IAM User in Your AWS Account procedure in the AWS documentation, set the following options: Procedure Specify the IAM user name and select Programmatic access . Attach the AdministratorAccess policy to ensure that the account has sufficient permission to create the cluster. This policy provides the cluster with the ability to grant credentials to each OpenShift Container Platform component. The cluster grants the components only the credentials that they require. Note While it is possible to create a policy that grants the all of the required AWS permissions and attach it to the user, this is not the preferred option. The cluster will not have the ability to grant additional credentials to individual components, so the same credentials are used by all components. Optional: Add metadata to the user by attaching tags. 
Confirm that the user name that you specified is granted the AdministratorAccess policy. Record the access key ID and secret access key values. You must use these values when you configure your local machine to run the installation program. Important You cannot use a temporary session token that you generated while using a multi-factor authentication device to authenticate to AWS when you deploy a cluster. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use key-based, long-term credentials. 2.5. IAM Policies and AWS authentication By default, the installation program creates instance profiles for the bootstrap, control plane, and compute instances with the necessary permissions for the cluster to operate. Note To enable pulling images from the Amazon Elastic Container Registry (ECR) as a postinstallation task in a single-node OpenShift cluster, you must add the AmazonEC2ContainerRegistryReadOnly policy to the IAM role associated with the cluster's control plane role. However, you can create your own IAM roles and specify them as part of the installation process. You might need to specify your own roles to deploy the cluster or to manage the cluster after installation. For example: Your organization's security policies require that you use a more restrictive set of permissions to install the cluster. After the installation, the cluster is configured with an Operator that requires access to additional services. If you choose to specify your own IAM roles, you can take the following steps: Begin with the default policies and adapt as required. For more information, see "Default permissions for IAM instance profiles". To create a policy template that is based on the cluster's activity, see "Using AWS IAM Analyzer to create policy templates". 2.5.1. Default permissions for IAM instance profiles By default, the installation program creates IAM instance profiles for the bootstrap, control plane and worker instances with the necessary permissions for the cluster to operate. The following lists specify the default permissions for control plane and compute machines: Example 2.16. 
Default IAM role permissions for control plane instance profiles ec2:AttachVolume ec2:AuthorizeSecurityGroupIngress ec2:CreateSecurityGroup ec2:CreateTags ec2:CreateVolume ec2:DeleteSecurityGroup ec2:DeleteVolume ec2:Describe* ec2:DetachVolume ec2:ModifyInstanceAttribute ec2:ModifyVolume ec2:RevokeSecurityGroupIngress elasticloadbalancing:AddTags elasticloadbalancing:AttachLoadBalancerToSubnets elasticloadbalancing:ApplySecurityGroupsToLoadBalancer elasticloadbalancing:CreateListener elasticloadbalancing:CreateLoadBalancer elasticloadbalancing:CreateLoadBalancerPolicy elasticloadbalancing:CreateLoadBalancerListeners elasticloadbalancing:CreateTargetGroup elasticloadbalancing:ConfigureHealthCheck elasticloadbalancing:DeleteListener elasticloadbalancing:DeleteLoadBalancer elasticloadbalancing:DeleteLoadBalancerListeners elasticloadbalancing:DeleteTargetGroup elasticloadbalancing:DeregisterInstancesFromLoadBalancer elasticloadbalancing:DeregisterTargets elasticloadbalancing:Describe* elasticloadbalancing:DetachLoadBalancerFromSubnets elasticloadbalancing:ModifyListener elasticloadbalancing:ModifyLoadBalancerAttributes elasticloadbalancing:ModifyTargetGroup elasticloadbalancing:ModifyTargetGroupAttributes elasticloadbalancing:RegisterInstancesWithLoadBalancer elasticloadbalancing:RegisterTargets elasticloadbalancing:SetLoadBalancerPoliciesForBackendServer elasticloadbalancing:SetLoadBalancerPoliciesOfListener kms:DescribeKey Example 2.17. Default IAM role permissions for compute instance profiles ec2:DescribeInstances ec2:DescribeRegions 2.5.2. Specifying an existing IAM role Instead of allowing the installation program to create IAM instance profiles with the default permissions, you can use the install-config.yaml file to specify an existing IAM role for control plane and compute instances. Prerequisites You have an existing install-config.yaml file. Procedure Update compute.platform.aws.iamRole with an existing role for the compute machines. Sample install-config.yaml file with an IAM role for compute instances compute: - hyperthreading: Enabled name: worker platform: aws: iamRole: ExampleRole Update controlPlane.platform.aws.iamRole with an existing role for the control plane machines. Sample install-config.yaml file with an IAM role for control plane instances controlPlane: hyperthreading: Enabled name: master platform: aws: iamRole: ExampleRole Save the file and reference it when installing the OpenShift Container Platform cluster. Note To change or update an IAM account after the cluster has been installed, see RHOCP 4 AWS cloud-credentials access key is expired (Red Hat Knowledgebase). Additional resources Deploying the cluster 2.5.3. Using AWS IAM Analyzer to create policy templates The minimal set of permissions that the control plane and compute instance profiles require depends on how the cluster is configured for its daily operation. One way to determine which permissions the cluster instances require is to use the AWS Identity and Access Management Access Analyzer (IAM Access Analyzer) to create a policy template: A policy template contains the permissions the cluster has used over a specified period of time. You can then use the template to create policies with fine-grained permissions. Procedure The overall process could be: Ensure that CloudTrail is enabled. CloudTrail records all of the actions and events in your AWS account, including the API calls that are required to create a policy template. For more information, see the AWS documentation for working with CloudTrail . 
Create an instance profile for control plane instances and an instance profile for compute instances. Be sure to assign each role a permissive policy, such as PowerUserAccess. For more information, see the AWS documentation for creating instance profile roles . Install the cluster in a development environment and configure it as required. Be sure to deploy all of the applications that the cluster will host in a production environment. Test the cluster thoroughly. Testing the cluster ensures that all of the required API calls are logged. Use the IAM Access Analyzer to create a policy template for each instance profile. For more information, see the AWS documentation for generating policies based on the CloudTrail logs . Create and add a fine-grained policy to each instance profile. Remove the permissive policy from each instance profile. Deploy a production cluster using the existing instance profiles with the new policies. Note You can add IAM Conditions to your policy to make it more restrictive and compliant with your organization's security requirements. 2.6. Supported AWS Marketplace regions Installing an OpenShift Container Platform cluster using an AWS Marketplace image is available to customers who purchase the offer in North America. While the offer must be purchased in North America, you can deploy the cluster to any of the following supported partitions: Public GovCloud Note Deploying an OpenShift Container Platform cluster using an AWS Marketplace image is not supported for the AWS secret regions or China regions. 2.7. Supported AWS regions You can deploy an OpenShift Container Platform cluster to the following regions. Note Your IAM user must have the permission tag:GetResources in the region us-east-1 to delete the base cluster resources. As part of the AWS API requirement, the OpenShift Container Platform installation program performs various actions in this region. 2.7.1. AWS public regions The following AWS public regions are supported: af-south-1 (Cape Town) ap-east-1 (Hong Kong) ap-northeast-1 (Tokyo) ap-northeast-2 (Seoul) ap-northeast-3 (Osaka) ap-south-1 (Mumbai) ap-south-2 (Hyderabad) ap-southeast-1 (Singapore) ap-southeast-2 (Sydney) ap-southeast-3 (Jakarta) ap-southeast-4 (Melbourne) ca-central-1 (Central) ca-west-1 (Calgary) eu-central-1 (Frankfurt) eu-central-2 (Zurich) eu-north-1 (Stockholm) eu-south-1 (Milan) eu-south-2 (Spain) eu-west-1 (Ireland) eu-west-2 (London) eu-west-3 (Paris) il-central-1 (Tel Aviv) me-central-1 (UAE) me-south-1 (Bahrain) sa-east-1 (Sao Paulo) us-east-1 (N. Virginia) us-east-2 (Ohio) us-west-1 (N. California) us-west-2 (Oregon) 2.7.2. AWS GovCloud regions The following AWS GovCloud regions are supported: us-gov-west-1 us-gov-east-1 2.7.3. AWS SC2S and C2S secret regions The following AWS secret regions are supported: us-isob-east-1 Secret Commercial Cloud Services (SC2S) us-iso-east-1 Commercial Cloud Services (C2S) 2.7.4. AWS China regions The following AWS China regions are supported: cn-north-1 (Beijing) cn-northwest-1 (Ningxia) 2.8. Next steps Install an OpenShift Container Platform cluster: Quickly install a cluster with default options on installer-provisioned infrastructure Install a cluster with cloud customizations on installer-provisioned infrastructure Install a cluster with network customizations on installer-provisioned infrastructure Installing a cluster on user-provisioned infrastructure in AWS by using CloudFormation templates | [
"platform: aws: region: us-gov-west-1 serviceEndpoints: - name: ec2 url: https://ec2.us-gov-west-1.amazonaws.com - name: elasticloadbalancing url: https://elasticloadbalancing.us-gov-west-1.amazonaws.com - name: route53 url: https://route53.us-gov.amazonaws.com 1 - name: tagging url: https://tagging.us-gov-west-1.amazonaws.com 2",
"compute: - hyperthreading: Enabled name: worker platform: aws: iamRole: ExampleRole",
"controlPlane: hyperthreading: Enabled name: master platform: aws: iamRole: ExampleRole"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/installing_on_aws/installing-aws-account |
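The IAM user described in section 2.4 can also be created non-interactively. The sketch below uses the AWS CLI and is an illustration rather than part of the installation procedure; the user name ocp-installer is a placeholder, and your account policies may require additional tagging or approval steps.

aws iam create-user --user-name ocp-installer
aws iam attach-user-policy --user-name ocp-installer --policy-arn arn:aws:iam::aws:policy/AdministratorAccess
aws iam create-access-key --user-name ocp-installer    # record the AccessKeyId and SecretAccessKey from the output for use with the installation program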
probe::ioscheduler_trace.elv_issue_request | probe::ioscheduler_trace.elv_issue_request Name probe::ioscheduler_trace.elv_issue_request - Fires when a request is scheduled. Synopsis ioscheduler_trace.elv_issue_request Values rq_flags Request flags. disk_minor Disk minor number of request. disk_major Disk major number of request. elevator_name The type of I/O elevator currently enabled. rq Address of request. name Name of the probe point Description Fires when a request is scheduled. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-ioscheduler-trace-elv-issue-request
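As a hypothetical illustration of how this probe point and its values might be used, the following one-line SystemTap invocation prints a message each time a request is issued; it assumes the systemtap package and the kernel debuginfo packages matching the running kernel are installed.

stap -e 'probe ioscheduler_trace.elv_issue_request { printf("%s: elevator=%s rq=%p dev=%d:%d flags=%d\n", name, elevator_name, rq, disk_major, disk_minor, rq_flags) }'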
Chapter 11. Uninstalling a cluster on RHOSP from your own infrastructure | Chapter 11. Uninstalling a cluster on RHOSP from your own infrastructure You can remove a cluster that you deployed to Red Hat OpenStack Platform (RHOSP) on user-provisioned infrastructure. 11.1. Downloading playbook dependencies The Ansible playbooks that simplify the removal process on user-provisioned infrastructure require several Python modules. On the machine where you will run the process, add the modules' repositories and then download them. Note These instructions assume that you are using Red Hat Enterprise Linux (RHEL) 8. Prerequisites Python 3 is installed on your machine. Procedure On a command line, add the repositories: Register with Red Hat Subscription Manager: USD sudo subscription-manager register # If not done already Pull the latest subscription data: USD sudo subscription-manager attach --pool=USDYOUR_POOLID # If not done already Disable the current repositories: USD sudo subscription-manager repos --disable=* # If not done already Add the required repositories: USD sudo subscription-manager repos \ --enable=rhel-8-for-x86_64-baseos-rpms \ --enable=openstack-16-tools-for-rhel-8-x86_64-rpms \ --enable=ansible-2.9-for-rhel-8-x86_64-rpms \ --enable=rhel-8-for-x86_64-appstream-rpms Install the modules: USD sudo yum install python3-openstackclient ansible python3-openstacksdk Ensure that the python command points to python3 : USD sudo alternatives --set python /usr/bin/python3 11.2. Removing a cluster from RHOSP that uses your own infrastructure You can remove an OpenShift Container Platform cluster on Red Hat OpenStack Platform (RHOSP) that uses your own infrastructure. To complete the removal process quickly, run several Ansible playbooks. Prerequisites Python 3 is installed on your machine. You downloaded the modules in "Downloading playbook dependencies." You have the playbooks that you used to install the cluster. You modified the playbooks that are prefixed with down- to reflect any changes that you made to their corresponding installation playbooks. For example, changes to the bootstrap.yaml file are reflected in the down-bootstrap.yaml file. All of the playbooks are in a common directory. Procedure On a command line, run the playbooks that you downloaded: USD ansible-playbook -i inventory.yaml \ down-bootstrap.yaml \ down-control-plane.yaml \ down-compute-nodes.yaml \ down-load-balancers.yaml \ down-network.yaml \ down-security-groups.yaml Remove any DNS record changes you made for the OpenShift Container Platform installation. OpenShift Container Platform is removed from your infrastructure. | [
"sudo subscription-manager register # If not done already",
"sudo subscription-manager attach --pool=USDYOUR_POOLID # If not done already",
"sudo subscription-manager repos --disable=* # If not done already",
"sudo subscription-manager repos --enable=rhel-8-for-x86_64-baseos-rpms --enable=openstack-16-tools-for-rhel-8-x86_64-rpms --enable=ansible-2.9-for-rhel-8-x86_64-rpms --enable=rhel-8-for-x86_64-appstream-rpms",
"sudo yum install python3-openstackclient ansible python3-openstacksdk",
"sudo alternatives --set python /usr/bin/python3",
"ansible-playbook -i inventory.yaml down-bootstrap.yaml down-control-plane.yaml down-compute-nodes.yaml down-load-balancers.yaml down-network.yaml down-security-groups.yaml"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/installing_on_openstack/uninstalling-openstack-user |
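After the playbooks complete, you can optionally confirm from the RHOSP side that the cluster resources are gone. This is a hedged sketch rather than part of the documented procedure; it assumes the OpenStack client installed earlier and a sourced RC file for the project that hosted the cluster.

openstack server list          # no control plane or compute instances for the cluster should remain
openstack network list         # the cluster network should be absent
openstack security group list  # the cluster security groups should be absent
openstack floating ip list     # release any floating IPs that are no longer needed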
Chapter 8. Configuring Language and Installation Source | Chapter 8. Configuring Language and Installation Source Before the graphical installation program starts, you need to configure the language and installation source. 8.1. The Text Mode Installation Program User Interface Important We recommend that you install Red Hat Enterprise Linux using the graphical interface. If you are installing Red Hat Enterprise Linux on a system that lacks a graphical display, consider performing the installation over a VNC connection - see Chapter 31, Installing Through VNC . If anaconda detects that you are installing in text mode on a system where installation over a VNC connection might be possible, anaconda asks you to verify your decision to install in text mode even though your options during installation are limited. If your system has a graphical display, but graphical installation fails, try booting with the xdriver=vesa option - refer to Chapter 28, Boot Options Both the loader and later anaconda use a screen-based interface that includes most of the on-screen widgets commonly found on graphical user interfaces. Figure 8.1, "Installation Program Widgets as seen in URL Setup " , and Figure 8.2, "Installation Program Widgets as seen in Choose a Language " , illustrate widgets that appear on screens during the installation process. Note Not every language supported in graphical installation mode is also supported in text mode. Specifically, languages written with a character set other than the Latin or Cyrillic alphabets are not available in text mode. If you choose a language written with a character set that is not supported in text mode, the installation program will present you with the English versions of the screens. Figure 8.1. Installation Program Widgets as seen in URL Setup Figure 8.2. Installation Program Widgets as seen in Choose a Language The widgets include: Window - Windows (usually referred to as dialogs in this manual) appear on your screen throughout the installation process. At times, one window may overlay another; in these cases, you can only interact with the window on top. When you are finished in that window, it disappears, allowing you to continue working in the window underneath. Checkbox - Checkboxes allow you to select or deselect a feature. The box displays either an asterisk (selected) or a space (unselected). When the cursor is within a checkbox, press Space to select or deselect a feature. Text Input - Text input lines are regions where you can enter information required by the installation program. When the cursor rests on a text input line, you may enter and/or edit information on that line. Text Widget - Text widgets are regions of the screen for the display of text. At times, text widgets may also contain other widgets, such as checkboxes. If a text widget contains more information than can be displayed in the space reserved for it, a scroll bar appears; if you position the cursor within the text widget, you can then use the Up and Down arrow keys to scroll through all the information available. Your current position is shown on the scroll bar by a # character, which moves up and down the scroll bar as you scroll. Scroll Bar - Scroll bars appear on the side or bottom of a window to control which part of a list or document is currently in the window's frame. The scroll bar makes it easy to move to any part of a file. Button Widget - Button widgets are the primary method of interacting with the installation program. 
You progress through the windows of the installation program by navigating these buttons, using the Tab and Enter keys. Buttons can be selected when they are highlighted. Cursor - Although not a widget, the cursor is used to select (and interact with) a particular widget. As the cursor is moved from widget to widget, it may cause the widget to change color, or the cursor itself may only appear positioned in or to the widget. In Figure 8.1, "Installation Program Widgets as seen in URL Setup " , the cursor is positioned on the Enable HTTP proxy checkbox. Figure 8.2, "Installation Program Widgets as seen in Choose a Language " , shows the cursor on the OK button. 8.1.1. Using the Keyboard to Navigate Navigation through the installation dialogs is performed through a simple set of keystrokes. To move the cursor, use the Left , Right , Up , and Down arrow keys. Use Tab , and Shift - Tab to cycle forward or backward through each widget on the screen. Along the bottom, most screens display a summary of available cursor positioning keys. To "press" a button, position the cursor over the button (using Tab , for example) and press Space or Enter . To select an item from a list of items, move the cursor to the item you wish to select and press Enter . To select an item with a checkbox, move the cursor to the checkbox and press Space to select an item. To deselect, press Space a second time. Pressing F12 accepts the current values and proceeds to the dialog; it is equivalent to pressing the OK button. Warning Unless a dialog box is waiting for your input, do not press any keys during the installation process (doing so may result in unpredictable behavior). | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/ch-Installation_Phase_2-x86 |
8.226. virt-manager | 8.226. virt-manager 8.226.1. RHBA-2013:1646 - virt-manager bug fix update Updated virt-manager packages that fix several bugs are now available for Red Hat Enterprise Linux 6. Virtual Machine Manager (virt-manager) is a graphical tool for administering virtual machines for KVM, Xen, and QEMU. The virt-manager utility can start, stop, add or remove virtualized devices, connect to a graphical or serial console, and see resource usage statistics for existing virtualized guests on local or remote machines. It uses the libvirt API (Application Programming Interface). Bug Fixes BZ# 820303 Previously, when calling the libvirt utility, virt-manager omitted an address (in form "bus:device") when identical USB devices (in form "vendorid:productid") were attached, and thus the wrong devices were attached to the guest. With this update, the user specifies both the "bus:device" and "vendorid:productid" information to select the correct device. Now, the device specified in the XML or selected in the virt-manager GUI is correctly attached to the guest. BZ# 869206 Previously, changing a device type or model did not reset the guest address that the device should be reachable at. Consequently, the guest could not start after changing a watchdog from i6300esb to ib700. This bug has been fixed and the guest can now be started as expected. BZ# 869474 When selecting a bridge network created by the libvirt utility, virt-manager could not display the details and configuration of the network. Moreover, the following error was returned: Error selecting network: 'NoneType' object has no attribute 'split' With this update, virt-manager displays the details and configuration of networks created by libvirt as expected, and the error is no longer returned. BZ# 873142 Previously, the "create a new virtual machine" virt-manager dialog contained a typographical mistake in the unit of "Storage", showing "Gb" instead of "GB". The typo has been fixed. BZ# 907399 Due to a wrong attribute always set to "no", errors occurred after changing SElinux from the static option to dynamic on virt-manager. A patch has been provided to fix this bug. With this update, no error messages are returned and SElinux now changes from the static to dynamic option successfully. BZ# 981628 If the "Toolbar" check-box was unchecked from the VM configuration in virt-manager, any new VM failed to start installation and the 'Begin Installation' button disappeared. A patch has been applied to fix this bug, and the 'Begin Installation' button no longer disappears from the GUI. BZ# 985184 Previously, the ram attribute supported only the qxl guest driver type. Consequently, errors were shown when changing a video device from qxl to other models. With this update, the "ram" element is deleted automatically when the model is changed, and the guest works as expected. BZ# 990507 Prior to this update, using virt-manager to connect a physical CD-ROM or an ISO CD-ROM image occasionally did not work in KDE. Also, the "Choose Media" dialog box to select the image or physical device did not show up. A patch has been provided to fix this bug and the "Choose Media" dialog window now shows up when the "Connect" button is pressed. All virt-manager users are advised to upgrade to these updated packages, which fix these bugs. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/virt-manager
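To illustrate the behavior fixed by BZ#820303, the following is a hypothetical libvirt host device definition that combines the "vendorid:productid" and "bus:device" identifiers so that one of several identical USB devices can be selected unambiguously. The IDs and numbers are placeholders, not values from the erratum, and such a file would typically be attached with a command like: virsh attach-device guest1 usb-device.xml --live

<hostdev mode='subsystem' type='usb'>
  <source>
    <vendor id='0x0781'/>
    <product id='0x5567'/>
    <address bus='1' device='3'/>
  </source>
</hostdev>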
Chapter 5. Next steps | Chapter 5. Next steps After completing the tutorial, consider the following next steps: Explore the tutorial further. Use the MySQL command line client to add, modify, and remove rows in the database tables, and see the effect on the topics. Keep in mind that you cannot remove a row that is referenced by a foreign key. Plan a Debezium deployment. You can install Debezium in OpenShift or on Red Hat Enterprise Linux. For more information, see the following: Installing Debezium on OpenShift Installing Debezium on RHEL Revised on 2024-10-09 02:25:11 UTC | null | https://docs.redhat.com/en/documentation/red_hat_build_of_debezium/2.7.3/html/getting_started_with_debezium/next-steps
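For example, a single UPDATE issued through the MySQL command line client is enough to see a change event appear on the corresponding Kafka topic. The statements below are a sketch that assumes the tutorial's sample inventory database and its customers table; adjust the connection details, user, and names to match your environment.

mysql -u <user> -p inventory
UPDATE customers SET first_name = 'Anne Marie' WHERE id = 1004;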
Chapter 7. File Systems | Chapter 7. File Systems Read this chapter for an overview of the file systems supported for use with Red Hat Enterprise Linux, and how to optimize their performance. 7.1. Tuning Considerations for File Systems There are several tuning considerations common to all file systems: formatting and mount options selected on your system, and actions available to applications that can improve their performance on a given system. 7.1.1. Formatting Options File system block size Block size can be selected at mkfs time. The range of valid sizes depends on the system: the upper limit is the maximum page size of the host system, while the lower limit depends on the file system used. The default block size is appropriate for most use cases. If you expect to create many files smaller than the default block size, you can set a smaller block size to minimize the amount of space wasted on disk. Note, however, that setting a smaller block size may limit the maximum size of the file system, and can cause additional runtime overhead, particularly for files greater than the selected block size. File system geometry If your system uses striped storage such as RAID5, you can improve performance by aligning data and metadata with the underlying storage geometry at mkfs time. For software RAID (LVM or MD) and some enterprise hardware storage, this information is queried and set automatically, but in many cases the administrator must specify this geometry manually with mkfs at the command line. Refer to the Storage Administration Guide for further information about creating and maintaining these file systems. External journals Metadata-intensive workloads mean that the log section of a journaling file system (such as ext4 and XFS) is updated extremely frequently. To minimize seek time from file system to journal, you can place the journal on dedicated storage. Note, however, that placing the journal on external storage that is slower than the primary file system can nullify any potential advantage associated with using external storage. Warning Ensure that your external journal is reliable. The loss of an external journal device will cause file system corruption. External journals are created at mkfs time, with journal devices being specified at mount time. Refer to the mke2fs(8) , mkfs.xfs(8) , and mount(8) man pages for further information. 7.1.2. Mount Options Barriers A write barrier is a kernel mechanism used to ensure that file system metadata is correctly written and ordered on persistent storage, even when storage devices with volatile write caches lose power. File systems with write barriers enabled also ensure that any data transmitted via fsync() persists across a power outage. Red Hat Enterprise Linux enables barriers by default on all hardware that supports them. However, enabling write barriers slows some applications significantly; specifically, applications that use fsync() heavily, or create and delete many small files. For storage with no volatile write cache, or in the rare case where file system inconsistencies and data loss after a power loss is acceptable, barriers can be disabled by using the nobarrier mount option. For further information, refer to the Storage Administration Guide . Access Time (noatime) Historically, when a file is read, the access time ( atime ) for that file must be updated in the inode metadata, which involves additional write I/O. 
If accurate atime metadata is not required, mount the file system with the noatime option to eliminate these metadata updates. In most cases, however, atime is not a large overhead due to the default relative atime (or relatime ) behavior in the Red Hat Enterprise Linux 6 kernel. The relatime behavior only updates atime if the atime is older than the modification time ( mtime ) or status change time ( ctime ). Note Enabling the noatime option also enables nodiratime behavior; there is no need to set both noatime and nodiratime . Increased read-ahead support Read-ahead speeds up file access by pre-fetching data and loading it into the page cache so that it can be available earlier in memory instead of from disk. Some workloads, such as those involving heavy streaming of sequential I/O, benefit from high read-ahead values. The tuned tool and the use of LVM striping elevate the read-ahead value, but this is not always sufficient for some workloads. Additionally, Red Hat Enterprise Linux is not always able to set an appropriate read-ahead value based on what it can detect of your file system. For example, if a powerful storage array presents itself to Red Hat Enterprise Linux as a single powerful LUN, the operating system will not treat it as a powerful LUN array, and therefore will not by default make full use of the read-ahead advantages potentially available to the storage. Use the blockdev command to view and edit the read-ahead value. To view the current read-ahead value for a particular block device, run: To modify the read-ahead value for that block device, run the following command. N represents the number of 512-byte sectors. Note that the value selected with the blockdev command will not persist between boots. We recommend creating a run level init.d script to set this value during boot. 7.1.3. File system maintenance Discard unused blocks Batch discard and online discard operations are features of mounted file systems that discard blocks which are not in use by the file system. These operations are useful for both solid-state drives and thinly-provisioned storage. Batch discard operations are run explicitly by the user with the fstrim command. This command discards all unused blocks in a file system that match the user's criteria. Both operation types are supported for use with the XFS and ext4 file systems in Red Hat Enterprise Linux 6.2 and later as long as the block device underlying the file system supports physical discard operations. Physical discard operations are supported if the value of /sys/block/ device /queue/discard_max_bytes is not zero. Online discard operations are specified at mount time with the -o discard option (either in /etc/fstab or as part of the mount command), and run in realtime without user intervention. Online discard operations only discard blocks that are transitioning from used to free. Online discard operations are supported on ext4 file systems in Red Hat Enterprise Linux 6.2 and later, and on XFS file systems in Red Hat Enterprise Linux 6.4 and later. Red Hat recommends batch discard operations unless the system's workload is such that batch discard is not feasible, or online discard operations are necessary to maintain performance. 7.1.4. Application Considerations Pre-allocation The ext4, XFS, and GFS2 file systems support efficient space pre-allocation via the fallocate(2) glibc call. In cases where files may otherwise become badly fragmented due to write patterns, leading to poor read performance, space preallocation can be a useful technique. 
Pre-allocation marks disk space as if it has been allocated to a file, without writing any data into that space. Until real data is written to a pre-allocated block, read operations will return zeroes. | [
"blockdev -getra device",
"blockdev -setra N device"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/performance_tuning_guide/main-fs |
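The File Systems section above notes that a read-ahead value set with blockdev does not persist between boots and recommends setting it from a boot-time script. The following is a minimal sketch of that approach; the device (/dev/sdb) and read-ahead value (8192 sectors) are placeholder assumptions, so adjust both for your storage and workload and confirm that /etc/rc.local is executable on your system.

#!/bin/bash
# Sketch: inspect and persist a block device read-ahead value.
# Assumptions: /dev/sdb and 8192 (512-byte sectors, i.e. 4 MiB) are placeholders.

DEVICE=/dev/sdb
READAHEAD=8192

# Show the current read-ahead value in 512-byte sectors.
blockdev --getra "$DEVICE"

# Apply the new value to the running system (not persistent across reboots).
blockdev --setra "$READAHEAD" "$DEVICE"

# Persist it by appending the same command to /etc/rc.local, which runs at boot.
grep -q "blockdev --setra $READAHEAD $DEVICE" /etc/rc.local || \
    echo "blockdev --setra $READAHEAD $DEVICE" >> /etc/rc.local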
6.7.2. Configuring a Single Storage-Based Fence Device for a Node | 6.7.2. Configuring a Single Storage-Based Fence Device for a Node When using non-power fencing methods (that is, SAN/storage fencing) to fence a node, you must configure unfencing for the fence device. This ensures that a fenced node is not re-enabled until the node has been rebooted. When you configure a device that requires unfencing, the cluster must first be stopped and the full configuration including devices and unfencing must be added before the cluster is started. When you configure unfencing for a node, you specify a device that mirrors the corresponding fence device you have configured for the node with the notable addition of the explicit action of on or enable . For more information about unfencing a node, see the fence_node (8) man page. Use the following procedure to configure a node with a single storage-based fence device that uses a fence device named sanswitch1 , which uses the fence_sanbox2 fencing agent. Add a fence method for the node, providing a name for the fence method. For example, to configure a fence method named SAN for the node node-01.example.com in the configuration file on the cluster node node-01.example.com , execute the following command: Add a fence instance for the method. You must specify the fence device to use for the node, the node this instance applies to, the name of the method, and any options for this method that are specific to this node: For example, to configure a fence instance in the configuration file on the cluster node node-01.example.com that uses the SAN switch power port 11 on the fence device named sanswitch1 to fence cluster node node-01.example.com using the method named SAN , execute the following command: To configure unfencing for the storage-based fence device on this node, execute the following command: You will need to add a fence method for each node in the cluster. The following commands configure a fence method for each node with the method name SAN . The device for the fence method specifies sanswitch as the device name, which is a device previously configured with the --addfencedev option, as described in Section 6.5, "Configuring Fence Devices" . Each node is configured with a unique SAN physical port number: The port number for node-01.example.com is 11 , the port number for node-02.example.com is 12 , and the port number for node-03.example.com is 13 . Example 6.3, " cluster.conf After Adding Storage-Based Fence Methods " shows a cluster.conf configuration file after you have added fencing methods, fencing instances, and unfencing to each node in the cluster. Example 6.3. cluster.conf After Adding Storage-Based Fence Methods Note that when you have finished configuring all of the components of your cluster, you will need to sync the cluster configuration file to all of the nodes, as described in Section 6.15, "Propagating the Configuration File to the Cluster Nodes" . | [
"ccs -h host --addmethod method node",
"ccs -h node01.example.com --addmethod SAN node01.example.com",
"ccs -h host --addfenceinst fencedevicename node method [ options ]",
"ccs -h node01.example.com --addfenceinst sanswitch1 node01.example.com SAN port=11",
"ccs -h host --addunfence fencedevicename node action=on|off",
"ccs -h node01.example.com --addmethod SAN node01.example.com ccs -h node01.example.com --addmethod SAN node02.example.com ccs -h node01.example.com --addmethod SAN node03.example.com ccs -h node01.example.com --addfenceinst sanswitch1 node01.example.com SAN port=11 ccs -h node01.example.com --addfenceinst sanswitch1 node02.example.com SAN port=12 ccs -h node01.example.com --addfenceinst sanswitch1 node03.example.com SAN port=13 ccs -h node01.example.com --addunfence sanswitch1 node01.example.com port=11 action=on ccs -h node01.example.com --addunfence sanswitch1 node02.example.com port=12 action=on ccs -h node01.example.com --addunfence sanswitch1 node03.example.com port=13 action=on",
"<cluster name=\"mycluster\" config_version=\"3\"> <clusternodes> <clusternode name=\"node-01.example.com\" nodeid=\"1\"> <fence> <method name=\"SAN\"> <device name=\"sanswitch1\" port=\"11\"/> </method> </fence> <unfence> <device name=\"sanswitch1\" port=\"11\" action=\"on\"/> </unfence> </clusternode> <clusternode name=\"node-02.example.com\" nodeid=\"2\"> <fence> <method name=\"SAN\"> <device name=\"sanswitch1\" port=\"12\"/> </method> </fence> <unfence> <device name=\"sanswitch1\" port=\"12\" action=\"on\"/> </unfence> </clusternode> <clusternode name=\"node-03.example.com\" nodeid=\"3\"> <fence> <method name=\"SAN\"> <device name=\"sanswitch1\" port=\"13\"/> </method> </fence> <unfence> <device name=\"sanswitch1\" port=\"13\" action=\"on\"/> </unfence> </clusternode> </clusternodes> <fencedevices> <fencedevice agent=\"fence_sanbox2\" ipaddr=\"san_ip_example\" login=\"login_example\" name=\"sanswitch1\" passwd=\"password_example\"/> </fencedevices> <rm> </rm> </cluster>"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s2-single-storagefence-config-ccs-CA |
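As a follow-up to the storage-based fencing example above, the configuration still has to be validated and propagated to the other cluster nodes, as described in the propagation section referenced in the text. The commands below are a sketch of that step, assuming node01.example.com holds the edited configuration; the optional fence_node -U call is only an illustration of manual unfencing as described in the fence_node(8) man page.

# Validate the locally edited cluster.conf before distributing it.
ccs_config_validate

# Propagate the configuration to all cluster nodes and activate it.
ccs -h node01.example.com --sync --activate

# Optional: manually unfence one node to confirm its unfence section works.
fence_node -U node-01.example.com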
Using the AMQ Core Protocol JMS Client | Using the AMQ Core Protocol JMS Client Red Hat AMQ 2021.Q1 For Use with AMQ Clients 2.9 | null | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q1/html/using_the_amq_core_protocol_jms_client/index |
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/21/html/release_notes_for_red_hat_build_of_openjdk_21.0.4/making-open-source-more-inclusive |
Chapter 145. KafkaRebalanceSpec schema reference | Chapter 145. KafkaRebalanceSpec schema reference Used in: KafkaRebalance Property Property type Description mode string (one of [remove-brokers, full, add-brokers]) Mode to run the rebalancing. The supported modes are full , add-brokers , remove-brokers . If not specified, the full mode is used by default. full mode runs the rebalancing across all the brokers in the cluster. add-brokers mode can be used after scaling up the cluster to move some replicas to the newly added brokers. remove-brokers mode can be used before scaling down the cluster to move replicas out of the brokers to be removed. brokers integer array The list of newly added brokers in case of scaling up or the ones to be removed in case of scaling down to use for rebalancing. This list can be used only with rebalancing mode add-brokers and remove-brokers . It is ignored with full mode. goals string array A list of goals, ordered by decreasing priority, to use for generating and executing the rebalance proposal. The supported goals are available at https://github.com/linkedin/cruise-control#goals . If an empty goals list is provided, the goals declared in the default.goals Cruise Control configuration parameter are used. skipHardGoalCheck boolean Whether to allow the hard goals specified in the Kafka CR to be skipped in optimization proposal generation. This can be useful when some of those hard goals are preventing a balance solution being found. Default is false. rebalanceDisk boolean Enables intra-broker disk balancing, which balances disk space utilization between disks on the same broker. Only applies to Kafka deployments that use JBOD storage with multiple disks. When enabled, inter-broker balancing is disabled. Default is false. excludedTopics string A regular expression where any matching topics will be excluded from the calculation of optimization proposals. This expression will be parsed by the java.util.regex.Pattern class; for more information on the supported format consult the documentation for that class. concurrentPartitionMovementsPerBroker integer The upper bound of ongoing partition replica movements going into/out of each broker. Default is 5. concurrentIntraBrokerPartitionMovements integer The upper bound of ongoing partition replica movements between disks within each broker. Default is 2. concurrentLeaderMovements integer The upper bound of ongoing partition leadership movements. Default is 1000. replicationThrottle integer The upper bound, in bytes per second, on the bandwidth used to move replicas. There is no limit by default. replicaMovementStrategies string array A list of strategy class names used to determine the execution order for the replica movements in the generated optimization proposal. By default BaseReplicaMovementStrategy is used, which will execute the replica movements in the order that they were generated. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-kafkarebalancespec-reference
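To make the schema above concrete, the following is a sketch of a KafkaRebalance resource that uses the add-brokers mode after a scale-up. The cluster name (my-cluster), namespace (kafka), broker IDs, and goal list are assumptions for illustration only, and the apiVersion shown must match your Streams for Apache Kafka release.

# Sketch: apply an add-brokers rebalance; adjust names, IDs, and goals for your cluster.
oc apply -n kafka -f - <<'EOF'
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaRebalance
metadata:
  name: add-brokers-rebalance
  labels:
    strimzi.io/cluster: my-cluster    # ties the rebalance to the Kafka resource
spec:
  mode: add-brokers                   # only move replicas onto the new brokers
  brokers: [3, 4]                     # IDs of the newly added brokers
  goals:
    - RackAwareGoal
    - DiskCapacityGoal
    - CpuCapacityGoal
  skipHardGoalCheck: false
  concurrentPartitionMovementsPerBroker: 5
EOF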
Preface | Preface Open Java Development Kit (OpenJDK) is a free and open source implementation of the Java Platform, Standard Edition (Java SE). The Red Hat build of OpenJDK is available in four versions: 8u, 11u, 17u, and 21u. Packages for the Red Hat build of OpenJDK are made available on Red Hat Enterprise Linux and Microsoft Windows and shipped as a JDK and JRE in the Red Hat Ecosystem Catalog. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/release_notes_for_red_hat_build_of_openjdk_17.0.10/pr01 |
Chapter 1. AWS DynamoDB Sink | Chapter 1. AWS DynamoDB Sink Send data to AWS DynamoDB service. The sent data will insert/update/delete an item on the given AWS DynamoDB table. Access Key/Secret Key are the basic method for authenticating to the AWS DynamoDB service. These parameters are optional, because the Kamelet also provides the following option 'useDefaultCredentialsProvider'. When using a default Credentials Provider the AWS DynamoDB client will load the credentials through this provider and won't use the static credential. This is the reason for not having access key and secret key as mandatory parameters for this Kamelet. This Kamelet expects a JSON field as body. The mapping between the JSON fields and table attribute values is done by key, so if you have the input as follows: {"username":"oscerd", "city":"Rome"} The Kamelet will insert/update an item in the given AWS DynamoDB table and set the attributes 'username' and 'city' respectively. Please note that the JSON object must include the primary key values that define the item. 1.1. Configuration Options The following table summarizes the configuration options available for the aws-ddb-sink Kamelet: Property Name Description Type Default Example region * AWS Region The AWS region to connect to string "eu-west-1" table * Table Name of the DynamoDB table to look at string accessKey Access Key The access key obtained from AWS string operation Operation The operation to perform (one of PutItem, UpdateItem, DeleteItem) string "PutItem" "PutItem" overrideEndpoint Endpoint Overwrite Set the need for overiding the endpoint URI. This option needs to be used in combination with uriEndpointOverride setting. boolean false secretKey Secret Key The secret key obtained from AWS string uriEndpointOverride Overwrite Endpoint URI Set the overriding endpoint URI. This option needs to be used in combination with overrideEndpoint option. string useDefaultCredentialsProvider Default Credentials Provider Set whether the DynamoDB client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in. boolean false writeCapacity Write Capacity The provisioned throughput to reserved for writing resources to your table integer 1 Note Fields marked with an asterisk (*) are mandatory. 1.2. Dependencies At runtime, the aws-ddb-sink Kamelet relies upon the presence of the following dependencies: mvn:org.apache.camel.kamelets:camel-kamelets-utils:1.8.0 camel:core camel:jackson camel:aws2-ddb camel:kamelet 1.3. Usage This section describes how you can use the aws-ddb-sink . 1.3.1. Knative Sink You can use the aws-ddb-sink Kamelet as a Knative sink by binding it to a Knative object. aws-ddb-sink-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: aws-ddb-sink-binding spec: source: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: aws-ddb-sink properties: region: "eu-west-1" table: "The Table" 1.3.1.1. Prerequisite Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 1.3.1.2. Procedure for using the cluster CLI Save the aws-ddb-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command: 1.3.1.3. 
Procedure for using the Kamel CLI Configure and run the sink by using the following command: This command creates the KameletBinding in the current namespace on the cluster. 1.3.2. Kafka Sink You can use the aws-ddb-sink Kamelet as a Kafka sink by binding it to a Kafka topic. aws-ddb-sink-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: aws-ddb-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: aws-ddb-sink properties: region: "eu-west-1" table: "The Table" 1.3.2.1. Prerequisites Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 1.3.2.2. Procedure for using the cluster CLI Save the aws-ddb-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command: 1.3.2.3. Procedure for using the Kamel CLI Configure and run the sink by using the following command: This command creates the KameletBinding in the current namespace on the cluster. 1.4. Kamelet source file https://github.com/openshift-integration/kamelet-catalog/aws-ddb-sink.kamelet.yaml | [
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: aws-ddb-sink-binding spec: source: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: aws-ddb-sink properties: region: \"eu-west-1\" table: \"The Table\"",
"apply -f aws-ddb-sink-binding.yaml",
"kamel bind channel:mychannel aws-ddb-sink -p \"sink.region=eu-west-1\" -p \"sink.table=The Table\"",
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: aws-ddb-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: aws-ddb-sink properties: region: \"eu-west-1\" table: \"The Table\"",
"apply -f aws-ddb-sink-binding.yaml",
"kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic aws-ddb-sink -p \"sink.region=eu-west-1\" -p \"sink.table=The Table\""
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.5/html/kamelets_reference/aws-ddb-sink |
1.2. Red Hat Virtualization Host | 1.2. Red Hat Virtualization Host A Red Hat Virtualization environment has one or more hosts attached to it. A host is a server that provides the physical hardware that virtual machines make use of. Red Hat Virtualization Host (RHVH) runs an optimized operating system installed using a special, customized installation media specifically for creating virtualization hosts. Red Hat Enterprise Linux hosts are servers running a standard Red Hat Enterprise Linux operating system that has been configured after installation to permit use as a host. Both methods of host installation result in hosts that interact with the rest of the virtualized environment in the same way, and so, will both be referred to as hosts. Figure 1.2. Host Architecture Kernel-based Virtual Machine (KVM) The Kernel-based Virtual Machine (KVM) is a loadable kernel module that provides full virtualization through the use of the Intel VT or AMD-V hardware extensions. Though KVM itself runs in kernel space, the guests running upon it run as individual QEMU processes in user space. KVM allows a host to make its physical hardware available to virtual machines. QEMU QEMU is a multi-platform emulator used to provide full system emulation. QEMU emulates a full system, for example a PC, including one or more processors, and peripherals. QEMU can be used to launch different operating systems or to debug system code. QEMU, working in conjunction with KVM and a processor with appropriate virtualization extensions, provides full hardware assisted virtualization. Red Hat Virtualization Manager Host Agent, VDSM In Red Hat Virtualization, VDSM initiates actions on virtual machines and storage. It also facilitates inter-host communication. VDSM monitors host resources such as memory, storage, and networking. Additionally, VDSM manages tasks such as virtual machine creation, statistics accumulation, and log collection. A VDSM instance runs on each host and receives management commands from the Red Hat Virtualization Manager using the re-configurable port 54321 . VDSM-REG VDSM uses VDSM-REG to register each host with the Red Hat Virtualization Manager. VDSM-REG supplies information about itself and its host using port 80 or port 443 . libvirt Libvirt facilitates the management of virtual machines and their associated virtual devices. When Red Hat Virtualization Manager initiates virtual machine life-cycle commands (start, stop, reboot), VDSM invokes libvirt on the relevant host machines to execute them. Storage Pool Manager, SPM The Storage Pool Manager (SPM) is a role assigned to one host in a data center. The SPM host has sole authority to make all storage domain structure metadata changes for the data center. This includes creation, deletion, and manipulation of virtual disks, snapshots, and templates. It also includes allocation of storage for sparse block devices on a Storage Area Network (SAN). The role of SPM can be migrated to any host in a data center. As a result, all hosts in a data center must have access to all the storage domains defined in the data center. Red Hat Virtualization Manager ensures that the SPM is always available. In case of storage connectivity errors, the Manager re-assigns the SPM role to another host. Guest Operating System Guest operating systems do not need to be modified to be installed on virtual machines in a Red Hat Virtualization environment. The guest operating system, and any applications on the guest, are unaware of the virtualized environment and run normally.
Red Hat provides enhanced device drivers that allow faster and more efficient access to virtualized devices. You can also install the Red Hat Virtualization Guest Agent on guests, which provides enhanced guest information to the management console. | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/technical_reference/red_hat_virtualization_host |
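The component overview above can be checked from the command line on a host. The following sketch assumes a RHEL 7-based RHV 4.3 host with root access; the service names and VDSM port reflect the defaults described in the text and may differ if they were reconfigured.

# VDSM host agent and libvirt should both be running.
systemctl status vdsmd libvirtd --no-pager

# VDSM listens on the re-configurable management port 54321 by default.
ss -tlnp | grep 54321

# The KVM kernel module (kvm_intel or kvm_amd) should be loaded.
lsmod | grep kvm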
3.5. Configure the Gluster Storage Cluster | 3.5. Configure the Gluster Storage Cluster Configure these instances to form a trusted storage pool (cluster). Note If you are using Red Hat Enterprise Linux 7 machines, log in to the Azure portal and reset the password for the VMs and also restart the VMs. On Red Hat Enterprise Linux 6 machines, password reset is not required. Log into each node using the keys or with password. For example, Register each node to Red Hat Network using the subscription-manager command, and attach the relevant Red Hat Storage subscriptions. For information on subscribing to the Red Hat Gluster Storage 3.5 channels, see the Installing Red Hat Gluster Storage chapter in the Red Hat Gluster Storage 3.5 Installation Guide . Update each node to ensure the latest enhancements and patches are in place. Follow the instructions in the Adding Servers to the Trusted Storage Pool chapter in the Red Hat Gluster Storage Administration Guide to create the trusted storage pool. | [
"ssh -i [path-to-key-pem] [admin-name@public-ip-address] or ssh [admin-name@public-ip-address]",
"ssh -i /root/.azure/ssh/rhgs72-key.pem [email protected] or ssh [email protected]",
"yum update"
] | https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/deployment_guide_for_public_cloud/chap-documentation-deployment_guide_for_public_cloud-azure-setting_up_rhgs_azure-configuring_cluster |
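As a sketch of the final step above (forming the trusted storage pool), the commands below are run from one instance after all nodes are registered and updated. The hostnames rhgs-node2 and rhgs-node3 are placeholders; use names or addresses that resolve between your Azure VMs, and see the referenced Administration Guide chapter for the authoritative procedure.

# Run from the first node: probe each of the remaining nodes.
gluster peer probe rhgs-node2
gluster peer probe rhgs-node3

# Verify that every peer shows as connected before creating volumes.
gluster peer status
gluster pool list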
17.17. Applying QoS to Your Virtual Network | 17.17. Applying QoS to Your Virtual Network Quality of Service (QoS) refers to the resource control systems that guarantee an optimal experience for all users on a network, making sure that there is no delay, jitter, or packet loss. QoS can be application specific or user / group specific. See Section 23.17.8.14, "Quality of service (QoS)" for more information. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-virtual_networking-applying_qos_to_your_virtual_network
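As an illustration of network-level QoS, the sketch below adds a bandwidth element to a libvirt virtual network definition. The network name (default) and the average/peak/burst values are placeholder assumptions; consult the referenced QoS section for the exact semantics and units of each attribute.

# Inspect the current network definition.
virsh net-dumpxml default

# Edit the network and add a <bandwidth> element inside <network>, for example:
#   <bandwidth>
#     <inbound average='1000' peak='5000' burst='1024'/>
#     <outbound average='128' peak='256' burst='256'/>
#   </bandwidth>
virsh net-edit default

# Restart the network so the QoS settings take effect.
virsh net-destroy default && virsh net-start default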
Chapter 1. Uninstalling Red Hat OpenShift GitOps | Chapter 1. Uninstalling Red Hat OpenShift GitOps Uninstalling the Red Hat OpenShift GitOps Operator is a two-step process: Delete the Argo CD instances that were added under the default namespace of the Red Hat OpenShift GitOps Operator. Uninstall the Red Hat OpenShift GitOps Operator. Uninstalling only the Operator will not remove the Argo CD instances created. 1.1. Deleting the Argo CD instances Delete the Argo CD instances added to the namespace of the GitOps Operator. Procedure In the Terminal type the following command: USD oc delete gitopsservice cluster -n openshift-gitops Note You cannot delete an Argo CD cluster from the web console UI. After the command runs successfully all the Argo CD instances will be deleted from the openshift-gitops namespace. Delete any other Argo CD instances from other namespaces using the same command: USD oc delete gitopsservice cluster -n <namespace> 1.2. Uninstalling the GitOps Operator You can uninstall Red Hat OpenShift GitOps Operator from the OperatorHub by using the web console. Procedure From the Operators OperatorHub page, use the Filter by keyword box to search for Red Hat OpenShift GitOps Operator tile. Click the Red Hat OpenShift GitOps Operator tile. The Operator tile indicates it is installed. In the Red Hat OpenShift GitOps Operator descriptor page, click Uninstall . Additional resources You can learn more about uninstalling Operators on OpenShift Container Platform in the Deleting Operators from a cluster section. | [
"oc delete gitopsservice cluster -n openshift-gitops",
"oc delete gitopsservice cluster -n <namespace>"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_gitops/1.12/html/removing_gitops/uninstalling-openshift-gitops |
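A sketch of how the two uninstallation steps can be verified from the CLI follows. It assumes the default openshift-gitops namespace was used; the exact resources present depend on your installation, so treat the commands as illustrative checks rather than a required procedure.

# No GitopsService or Argo CD instances should remain after the first step.
oc get gitopsservice -n openshift-gitops
oc get argocd --all-namespaces

# After uninstalling the Operator, its Subscription and CSV should be gone.
oc get subscriptions.operators.coreos.com --all-namespaces | grep -i gitops
oc get csv --all-namespaces | grep -i gitops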
Backup and restore | Backup and restore OpenShift Container Platform 4.16 Backing up and restoring your OpenShift Container Platform cluster Red Hat OpenShift Documentation Team | [
"oc -n openshift-kube-apiserver-operator get secret kube-apiserver-to-kubelet-signer -o jsonpath='{.metadata.annotations.auth\\.openshift\\.io/certificate-not-after}'",
"2022-08-05T14:37:50Zuser@user:~ USD 1",
"for node in USD(oc get nodes -o jsonpath='{.items[*].metadata.name}'); do echo USD{node} ; oc adm cordon USD{node} ; done",
"ci-ln-mgdnf4b-72292-n547t-master-0 node/ci-ln-mgdnf4b-72292-n547t-master-0 cordoned ci-ln-mgdnf4b-72292-n547t-master-1 node/ci-ln-mgdnf4b-72292-n547t-master-1 cordoned ci-ln-mgdnf4b-72292-n547t-master-2 node/ci-ln-mgdnf4b-72292-n547t-master-2 cordoned ci-ln-mgdnf4b-72292-n547t-worker-a-s7ntl node/ci-ln-mgdnf4b-72292-n547t-worker-a-s7ntl cordoned ci-ln-mgdnf4b-72292-n547t-worker-b-cmc9k node/ci-ln-mgdnf4b-72292-n547t-worker-b-cmc9k cordoned ci-ln-mgdnf4b-72292-n547t-worker-c-vcmtn node/ci-ln-mgdnf4b-72292-n547t-worker-c-vcmtn cordoned",
"for node in USD(oc get nodes -l node-role.kubernetes.io/worker -o jsonpath='{.items[*].metadata.name}'); do echo USD{node} ; oc adm drain USD{node} --delete-emptydir-data --ignore-daemonsets=true --timeout=15s --force ; done",
"for node in USD(oc get nodes -o jsonpath='{.items[*].metadata.name}'); do oc debug node/USD{node} -- chroot /host shutdown -h 1; done 1",
"Starting pod/ip-10-0-130-169us-east-2computeinternal-debug To use host binaries, run `chroot /host` Shutdown scheduled for Mon 2021-09-13 09:36:17 UTC, use 'shutdown -c' to cancel. Removing debug pod Starting pod/ip-10-0-150-116us-east-2computeinternal-debug To use host binaries, run `chroot /host` Shutdown scheduled for Mon 2021-09-13 09:36:29 UTC, use 'shutdown -c' to cancel.",
"oc get nodes -l node-role.kubernetes.io/master",
"NAME STATUS ROLES AGE VERSION ip-10-0-168-251.ec2.internal Ready control-plane,master 75m v1.29.4 ip-10-0-170-223.ec2.internal Ready control-plane,master 75m v1.29.4 ip-10-0-211-16.ec2.internal Ready control-plane,master 75m v1.29.4",
"oc get csr",
"oc describe csr <csr_name> 1",
"oc adm certificate approve <csr_name>",
"oc get nodes -l node-role.kubernetes.io/worker",
"NAME STATUS ROLES AGE VERSION ip-10-0-179-95.ec2.internal Ready worker 64m v1.29.4 ip-10-0-182-134.ec2.internal Ready worker 64m v1.29.4 ip-10-0-250-100.ec2.internal Ready worker 64m v1.29.4",
"oc get csr",
"oc describe csr <csr_name> 1",
"oc adm certificate approve <csr_name>",
"for node in USD(oc get nodes -o jsonpath='{.items[*].metadata.name}'); do echo USD{node} ; oc adm uncordon USD{node} ; done",
"oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.16.0 True False False 59m cloud-credential 4.16.0 True False False 85m cluster-autoscaler 4.16.0 True False False 73m config-operator 4.16.0 True False False 73m console 4.16.0 True False False 62m csi-snapshot-controller 4.16.0 True False False 66m dns 4.16.0 True False False 76m etcd 4.16.0 True False False 76m",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION ip-10-0-168-251.ec2.internal Ready control-plane,master 82m v1.29.4 ip-10-0-170-223.ec2.internal Ready control-plane.master 82m v1.29.4 ip-10-0-179-95.ec2.internal Ready worker 70m v1.29.4 ip-10-0-182-134.ec2.internal Ready worker 70m v1.29.4 ip-10-0-211-16.ec2.internal Ready control-plane,master 82m v1.29.4 ip-10-0-250-100.ec2.internal Ready worker 69m v1.29.4",
"Requests specifying Server Side Encryption with Customer provided keys must provide the client calculated MD5 of the secret key.",
"found a podvolumebackup with status \"InProgress\" during the server starting, mark it as \"Failed\".",
"data path restore failed: Failed to run kopia restore: Unable to load snapshot : snapshot not found",
"The generated label name is too long.",
"velero restore create <RESTORE_NAME> --from-backup <BACKUP_NAME> --exclude-resources=deployment.apps",
"velero restore create <RESTORE_NAME> --from-backup <BACKUP_NAME> --include-resources=deployment.apps",
"oc get dpa -n openshift-adp -o yaml > dpa.orig.backup",
"oc get all -n openshift-adp",
"NAME READY STATUS RESTARTS AGE pod/oadp-operator-controller-manager-67d9494d47-6l8z8 2/2 Running 0 2m8s pod/restic-9cq4q 1/1 Running 0 94s pod/restic-m4lts 1/1 Running 0 94s pod/restic-pv4kr 1/1 Running 0 95s pod/velero-588db7f655-n842v 1/1 Running 0 95s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/oadp-operator-controller-manager-metrics-service ClusterIP 172.30.70.140 <none> 8443/TCP 2m8s NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/restic 3 3 3 3 3 <none> 96s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/oadp-operator-controller-manager 1/1 1 1 2m9s deployment.apps/velero 1/1 1 1 96s NAME DESIRED CURRENT READY AGE replicaset.apps/oadp-operator-controller-manager-67d9494d47 1 1 1 2m9s replicaset.apps/velero-588db7f655 1 1 1 96s",
"oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'",
"{\"conditions\":[{\"lastTransitionTime\":\"2023-10-27T01:23:57Z\",\"message\":\"Reconcile complete\",\"reason\":\"Complete\",\"status\":\"True\",\"type\":\"Reconciled\"}]}",
"oc get backupstoragelocations.velero.io -n openshift-adp",
"NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: dpa-sample spec: configuration: velero: defaultPlugins: - openshift - aws - azure - gcp",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: dpa-sample spec: configuration: velero: defaultPlugins: - openshift - azure - gcp customPlugins: - name: custom-plugin-example image: quay.io/example-repo/custom-velero-plugin",
"024-02-27T10:46:50.028951744Z time=\"2024-02-27T10:46:50Z\" level=error msg=\"Error backing up item\" backup=openshift-adp/<backup name> error=\"error executing custom action (groupResource=imagestreams.image.openshift.io, namespace=<BSL Name>, name=postgres): rpc error: code = Aborted desc = plugin panicked: runtime error: index out of range with length 1, stack trace: goroutine 94...",
"oc label backupstoragelocations.velero.io <bsl_name> app.kubernetes.io/component=bsl",
"oc -n openshift-adp get secret/oadp-<bsl_name>-<bsl_provider>-registry-secret -o json | jq -r '.data'",
"apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: test-obc 1 namespace: openshift-adp spec: storageClassName: openshift-storage.noobaa.io generateBucketName: test-backup-bucket 2",
"oc create -f <obc_file_name> 1",
"oc extract --to=- cm/test-obc 1",
"BUCKET_NAME backup-c20...41fd BUCKET_PORT 443 BUCKET_REGION BUCKET_SUBREGION BUCKET_HOST s3.openshift-storage.svc",
"oc extract --to=- secret/test-obc",
"AWS_ACCESS_KEY_ID ebYR....xLNMc AWS_SECRET_ACCESS_KEY YXf...+NaCkdyC3QPym",
"oc get route s3 -n openshift-storage",
"[default] aws_access_key_id=<AWS_ACCESS_KEY_ID> aws_secret_access_key=<AWS_SECRET_ACCESS_KEY>",
"oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=cloud-credentials",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: oadp-backup namespace: openshift-adp spec: configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - aws - openshift - csi defaultSnapshotMoveData: true 1 backupLocations: - velero: config: profile: \"default\" region: noobaa s3Url: https://s3.openshift-storage.svc 2 s3ForcePathStyle: \"true\" insecureSkipTLSVerify: \"true\" provider: aws default: true credential: key: cloud name: cloud-credentials objectStorage: bucket: <bucket_name> 3 prefix: oadp",
"oc apply -f <dpa_filename>",
"oc get dpa -o yaml",
"apiVersion: v1 items: - apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: namespace: openshift-adp #...# spec: backupLocations: - velero: config: #...# status: conditions: - lastTransitionTime: \"20....9:54:02Z\" message: Reconcile complete reason: Complete status: \"True\" type: Reconciled kind: List metadata: resourceVersion: \"\"",
"oc get backupstoragelocations.velero.io -n openshift-adp",
"NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 3s 15s true",
"apiVersion: velero.io/v1 kind: Backup metadata: name: test-backup namespace: openshift-adp spec: includedNamespaces: - <application_namespace> 1",
"oc apply -f <backup_cr_filename>",
"oc describe backup test-backup -n openshift-adp",
"Name: test-backup Namespace: openshift-adp ....# Status: Backup Item Operations Attempted: 1 Backup Item Operations Completed: 1 Completion Timestamp: 2024-09-25T10:17:01Z Expiration: 2024-10-25T10:16:31Z Format Version: 1.1.0 Hook Status: Phase: Completed Progress: Items Backed Up: 34 Total Items: 34 Start Timestamp: 2024-09-25T10:16:31Z Version: 1 Events: <none>",
"apiVersion: velero.io/v1 kind: Restore metadata: name: test-restore 1 namespace: openshift-adp spec: backupName: <backup_name> 2 restorePVs: true namespaceMapping: <application_namespace>: test-restore-application 3",
"oc apply -f <restore_cr_filename>",
"oc describe restores.velero.io <restore_name> -n openshift-adp",
"oc project test-restore-application",
"oc get pvc,svc,deployment,secret,configmap",
"NAME STATUS VOLUME persistentvolumeclaim/mysql Bound pvc-9b3583db-...-14b86 NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/mysql ClusterIP 172....157 <none> 3306/TCP 2m56s service/todolist ClusterIP 172.....15 <none> 8000/TCP 2m56s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/mysql 0/1 1 0 2m55s NAME TYPE DATA AGE secret/builder-dockercfg-6bfmd kubernetes.io/dockercfg 1 2m57s secret/default-dockercfg-hz9kz kubernetes.io/dockercfg 1 2m57s secret/deployer-dockercfg-86cvd kubernetes.io/dockercfg 1 2m57s secret/mysql-persistent-sa-dockercfg-rgp9b kubernetes.io/dockercfg 1 2m57s NAME DATA AGE configmap/kube-root-ca.crt 1 2m57s configmap/openshift-service-ca.crt 1 2m57s",
"apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: test-obc 1 namespace: openshift-adp spec: storageClassName: openshift-storage.noobaa.io generateBucketName: test-backup-bucket 2",
"oc create -f <obc_file_name>",
"oc extract --to=- cm/test-obc 1",
"BUCKET_NAME backup-c20...41fd BUCKET_PORT 443 BUCKET_REGION BUCKET_SUBREGION BUCKET_HOST s3.openshift-storage.svc",
"oc extract --to=- secret/test-obc",
"AWS_ACCESS_KEY_ID ebYR....xLNMc AWS_SECRET_ACCESS_KEY YXf...+NaCkdyC3QPym",
"[default] aws_access_key_id=<AWS_ACCESS_KEY_ID> aws_secret_access_key=<AWS_SECRET_ACCESS_KEY>",
"oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=cloud-credentials",
"oc get cm/openshift-service-ca.crt -o jsonpath='{.data.service-ca\\.crt}' | base64 -w0; echo",
"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0 ....gpwOHMwaG9CRmk5a3....FLS0tLS0K",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: oadp-backup namespace: openshift-adp spec: configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - aws - openshift - csi defaultSnapshotMoveData: true backupLocations: - velero: config: profile: \"default\" region: noobaa s3Url: https://s3.openshift-storage.svc s3ForcePathStyle: \"true\" insecureSkipTLSVerify: \"false\" 1 provider: aws default: true credential: key: cloud name: cloud-credentials objectStorage: bucket: <bucket_name> 2 prefix: oadp caCert: <ca_cert> 3",
"oc apply -f <dpa_filename>",
"oc get dpa -o yaml",
"apiVersion: v1 items: - apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: namespace: openshift-adp #...# spec: backupLocations: - velero: config: #...# status: conditions: - lastTransitionTime: \"20....9:54:02Z\" message: Reconcile complete reason: Complete status: \"True\" type: Reconciled kind: List metadata: resourceVersion: \"\"",
"oc get backupstoragelocations.velero.io -n openshift-adp",
"NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 3s 15s true",
"apiVersion: velero.io/v1 kind: Backup metadata: name: test-backup namespace: openshift-adp spec: includedNamespaces: - <application_namespace> 1",
"oc apply -f <backup_cr_filename>",
"oc describe backup test-backup -n openshift-adp",
"Name: test-backup Namespace: openshift-adp ....# Status: Backup Item Operations Attempted: 1 Backup Item Operations Completed: 1 Completion Timestamp: 2024-09-25T10:17:01Z Expiration: 2024-10-25T10:16:31Z Format Version: 1.1.0 Hook Status: Phase: Completed Progress: Items Backed Up: 34 Total Items: 34 Start Timestamp: 2024-09-25T10:16:31Z Version: 1 Events: <none>",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: oadp-backup namespace: openshift-adp spec: configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - legacy-aws 1 - openshift - csi defaultSnapshotMoveData: true backupLocations: - velero: config: profile: \"default\" region: noobaa s3Url: https://s3.openshift-storage.svc s3ForcePathStyle: \"true\" insecureSkipTLSVerify: \"true\" provider: aws default: true credential: key: cloud name: cloud-credentials objectStorage: bucket: <bucket_name> 2 prefix: oadp",
"oc apply -f <dpa_filename>",
"oc get dpa -o yaml",
"apiVersion: v1 items: - apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: namespace: openshift-adp #...# spec: backupLocations: - velero: config: #...# status: conditions: - lastTransitionTime: \"20....9:54:02Z\" message: Reconcile complete reason: Complete status: \"True\" type: Reconciled kind: List metadata: resourceVersion: \"\"",
"oc get backupstoragelocations.velero.io -n openshift-adp",
"NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 3s 15s true",
"apiVersion: velero.io/v1 kind: Backup metadata: name: test-backup namespace: openshift-adp spec: includedNamespaces: - <application_namespace> 1",
"oc apply -f <backup_cr_filename>",
"oc describe backups.velero.io test-backup -n openshift-adp",
"Name: test-backup Namespace: openshift-adp ....# Status: Backup Item Operations Attempted: 1 Backup Item Operations Completed: 1 Completion Timestamp: 2024-09-25T10:17:01Z Expiration: 2024-10-25T10:16:31Z Format Version: 1.1.0 Hook Status: Phase: Completed Progress: Items Backed Up: 34 Total Items: 34 Start Timestamp: 2024-09-25T10:16:31Z Version: 1 Events: <none>",
"resources: mds: limits: cpu: \"3\" memory: 128Gi requests: cpu: \"3\" memory: 8Gi",
"BUCKET=<your_bucket>",
"REGION=<your_region>",
"aws s3api create-bucket --bucket USDBUCKET --region USDREGION --create-bucket-configuration LocationConstraint=USDREGION 1",
"aws iam create-user --user-name velero 1",
"cat > velero-policy.json <<EOF { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"ec2:DescribeVolumes\", \"ec2:DescribeSnapshots\", \"ec2:CreateTags\", \"ec2:CreateVolume\", \"ec2:CreateSnapshot\", \"ec2:DeleteSnapshot\" ], \"Resource\": \"*\" }, { \"Effect\": \"Allow\", \"Action\": [ \"s3:GetObject\", \"s3:DeleteObject\", \"s3:PutObject\", \"s3:AbortMultipartUpload\", \"s3:ListMultipartUploadParts\" ], \"Resource\": [ \"arn:aws:s3:::USD{BUCKET}/*\" ] }, { \"Effect\": \"Allow\", \"Action\": [ \"s3:ListBucket\", \"s3:GetBucketLocation\", \"s3:ListBucketMultipartUploads\" ], \"Resource\": [ \"arn:aws:s3:::USD{BUCKET}\" ] } ] } EOF",
"aws iam put-user-policy --user-name velero --policy-name velero --policy-document file://velero-policy.json",
"aws iam create-access-key --user-name velero",
"{ \"AccessKey\": { \"UserName\": \"velero\", \"Status\": \"Active\", \"CreateDate\": \"2017-07-31T22:24:41.576Z\", \"SecretAccessKey\": <AWS_SECRET_ACCESS_KEY>, \"AccessKeyId\": <AWS_ACCESS_KEY_ID> } }",
"cat << EOF > ./credentials-velero [default] aws_access_key_id=<AWS_ACCESS_KEY_ID> aws_secret_access_key=<AWS_SECRET_ACCESS_KEY> EOF",
"oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero",
"[backupStorage] aws_access_key_id=<AWS_ACCESS_KEY_ID> aws_secret_access_key=<AWS_SECRET_ACCESS_KEY> [volumeSnapshot] aws_access_key_id=<AWS_ACCESS_KEY_ID> aws_secret_access_key=<AWS_SECRET_ACCESS_KEY>",
"oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero 1",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket_name> prefix: <prefix> config: region: us-east-1 profile: \"backupStorage\" credential: key: cloud name: cloud-credentials snapshotLocations: - velero: provider: aws config: region: us-west-2 profile: \"volumeSnapshot\"",
"apiVersion: oadp.openshift.io/v1alpha1 kind: BackupStorageLocation metadata: name: default namespace: openshift-adp spec: provider: aws 1 objectStorage: bucket: <bucket_name> 2 prefix: <bucket_prefix> 3 credential: 4 key: cloud 5 name: cloud-credentials 6 config: region: <bucket_region> 7 s3ForcePathStyle: \"true\" 8 s3Url: <s3_url> 9 publicUrl: <public_s3_url> 10 serverSideEncryption: AES256 11 kmsKeyId: \"50..c-4da1-419f-a16e-ei...49f\" 12 customerKeyEncryptionFile: \"/credentials/customer-key\" 13 signatureVersion: \"1\" 14 profile: \"default\" 15 insecureSkipTLSVerify: \"true\" 16 enableSharedConfig: \"true\" 17 tagging: \"\" 18 checksumAlgorithm: \"CRC32\" 19",
"snapshotLocations: - velero: config: profile: default region: <region> provider: aws",
"dd if=/dev/urandom bs=1 count=32 > sse.key",
"cat sse.key | base64 > sse_encoded.key",
"ln -s sse_encoded.key customer-key",
"oc create secret generic cloud-credentials --namespace openshift-adp --from-file cloud=<path>/openshift_aws_credentials,customer-key=<path>/sse_encoded.key",
"apiVersion: v1 data: cloud: W2Rfa2V5X2lkPSJBS0lBVkJRWUIyRkQ0TlFHRFFPQiIKYXdzX3NlY3JldF9hY2Nlc3Nfa2V5P<snip>rUE1mNWVSbTN5K2FpeWhUTUQyQk1WZHBOIgo= customer-key: v+<snip>TFIiq6aaXPbj8dhos= kind: Secret",
"spec: backupLocations: - velero: config: customerKeyEncryptionFile: /credentials/customer-key profile: default",
"echo \"encrypt me please\" > test.txt",
"aws s3api put-object --bucket <bucket> --key test.txt --body test.txt --sse-customer-key fileb://sse.key --sse-customer-algorithm AES256",
"s3cmd get s3://<bucket>/test.txt test.txt",
"aws s3api get-object --bucket <bucket> --key test.txt --sse-customer-key fileb://sse.key --sse-customer-algorithm AES256 downloaded.txt",
"cat downloaded.txt",
"encrypt me please",
"aws s3api get-object --bucket <bucket> --key velero/backups/mysql-persistent-customerkeyencryptionfile4/mysql-persistent-customerkeyencryptionfile4.tar.gz --sse-customer-key fileb://sse.key --sse-customer-algorithm AES256 --debug velero_download.tar.gz",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: configuration: velero: podConfig: nodeSelector: <node_selector> 1 resourceAllocations: 2 limits: cpu: \"1\" memory: 1024Mi requests: cpu: 200m memory: 256Mi",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket> prefix: <prefix> caCert: <base64_encoded_cert_string> 1 config: insecureSkipTLSVerify: \"false\" 2",
"alias velero='oc -n openshift-adp exec deployment/velero -c velero -it -- ./velero'",
"velero version Client: Version: v1.12.1-OADP Git commit: - Server: Version: v1.12.1-OADP",
"CA_CERT=USD(oc -n openshift-adp get dataprotectionapplications.oadp.openshift.io <dpa-name> -o jsonpath='{.spec.backupLocations[0].velero.objectStorage.caCert}') [[ -n USDCA_CERT ]] && echo \"USDCA_CERT\" | base64 -d | oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c \"cat > /tmp/your-cacert.txt\" || echo \"DPA BSL has no caCert\"",
"velero describe backup <backup_name> --details --cacert /tmp/<your_cacert>.txt",
"velero backup logs <backup_name> --cacert /tmp/<your_cacert.txt>",
"oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c \"ls /tmp/your-cacert.txt\" /tmp/your-cacert.txt",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp 1 spec: configuration: velero: defaultPlugins: - openshift 2 - aws resourceTimeout: 10m 3 nodeAgent: 4 enable: true 5 uploaderType: kopia 6 podConfig: nodeSelector: <node_selector> 7 backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket_name> 8 prefix: <prefix> 9 config: region: <region> profile: \"default\" s3ForcePathStyle: \"true\" 10 s3Url: <s3_url> 11 credential: key: cloud name: cloud-credentials 12 snapshotLocations: 13 - name: default velero: provider: aws config: region: <region> 14 profile: \"default\" credential: key: cloud name: cloud-credentials 15",
"oc get all -n openshift-adp",
"NAME READY STATUS RESTARTS AGE pod/oadp-operator-controller-manager-67d9494d47-6l8z8 2/2 Running 0 2m8s pod/node-agent-9cq4q 1/1 Running 0 94s pod/node-agent-m4lts 1/1 Running 0 94s pod/node-agent-pv4kr 1/1 Running 0 95s pod/velero-588db7f655-n842v 1/1 Running 0 95s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/oadp-operator-controller-manager-metrics-service ClusterIP 172.30.70.140 <none> 8443/TCP 2m8s service/openshift-adp-velero-metrics-svc ClusterIP 172.30.10.0 <none> 8085/TCP 8h NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/node-agent 3 3 3 3 3 <none> 96s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/oadp-operator-controller-manager 1/1 1 1 2m9s deployment.apps/velero 1/1 1 1 96s NAME DESIRED CURRENT READY AGE replicaset.apps/oadp-operator-controller-manager-67d9494d47 1 1 1 2m9s replicaset.apps/velero-588db7f655 1 1 1 96s",
"oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'",
"{\"conditions\":[{\"lastTransitionTime\":\"2023-10-27T01:23:57Z\",\"message\":\"Reconcile complete\",\"reason\":\"Complete\",\"status\":\"True\",\"type\":\"Reconciled\"}]}",
"oc get backupstoragelocations.velero.io -n openshift-adp",
"NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true",
"oc label node/<node_name> node-role.kubernetes.io/nodeAgent=\"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/nodeAgent: \"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/infra: \"\" node-role.kubernetes.io/worker: \"\"",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: checksumAlgorithm: \"\" 1 insecureSkipTLSVerify: \"true\" profile: \"default\" region: <bucket_region> s3ForcePathStyle: \"true\" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: velero: defaultPlugins: - openshift - aws - csi",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: \"true\" profile: \"default\" region: <bucket_region> s3ForcePathStyle: \"true\" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: restic velero: client-burst: 500 1 client-qps: 300 2 defaultPlugins: - openshift - aws - kubevirt",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: \"true\" profile: \"default\" region: <bucket_region> s3ForcePathStyle: \"true\" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - openshift - aws - kubevirt - csi imagePullPolicy: Never 1",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication # backupLocations: - name: aws 1 velero: provider: aws default: true 2 objectStorage: bucket: <bucket_name> 3 prefix: <prefix> 4 config: region: <region_name> 5 profile: \"default\" credential: key: cloud name: cloud-credentials 6 - name: odf 7 velero: provider: aws default: false objectStorage: bucket: <bucket_name> prefix: <prefix> config: profile: \"default\" region: <region_name> s3Url: <url> 8 insecureSkipTLSVerify: \"true\" s3ForcePathStyle: \"true\" credential: key: cloud name: <custom_secret_name_odf> 9 #",
"apiVersion: velero.io/v1 kind: Backup spec: includedNamespaces: - <namespace> 1 storageLocation: <backup_storage_location> 2 defaultVolumesToFsBackup: true",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication spec: configuration: velero: defaultPlugins: - openshift - csi 1",
"configuration: nodeAgent: enable: false 1 uploaderType: kopia",
"configuration: nodeAgent: enable: true 1 uploaderType: kopia",
"ibmcloud plugin install cos -f",
"BUCKET=<bucket_name>",
"REGION=<bucket_region> 1",
"ibmcloud resource group-create <resource_group_name>",
"ibmcloud target -g <resource_group_name>",
"ibmcloud target",
"API endpoint: https://cloud.ibm.com Region: User: test-user Account: Test Account (fb6......e95) <-> 2...122 Resource group: Default",
"RESOURCE_GROUP=<resource_group> 1",
"ibmcloud resource service-instance-create <service_instance_name> \\ 1 <service_name> \\ 2 <service_plan> \\ 3 <region_name> 4",
"ibmcloud resource service-instance-create test-service-instance cloud-object-storage \\ 1 standard global -d premium-global-deployment 2",
"SERVICE_INSTANCE_ID=USD(ibmcloud resource service-instance test-service-instance --output json | jq -r '.[0].id')",
"ibmcloud cos bucket-create \\// --bucket USDBUCKET \\// --ibm-service-instance-id USDSERVICE_INSTANCE_ID \\// --region USDREGION",
"ibmcloud resource service-key-create test-key Writer --instance-name test-service-instance --parameters {\\\"HMAC\\\":true}",
"cat > credentials-velero << __EOF__ [default] aws_access_key_id=USD(ibmcloud resource service-key test-key -o json | jq -r '.[0].credentials.cos_hmac_keys.access_key_id') aws_secret_access_key=USD(ibmcloud resource service-key test-key -o json | jq -r '.[0].credentials.cos_hmac_keys.secret_access_key') __EOF__",
"oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero",
"oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero",
"oc create secret generic <custom_secret> -n openshift-adp --from-file cloud=credentials-velero",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: backupLocations: - velero: provider: <provider> default: true credential: key: cloud name: <custom_secret> 1 objectStorage: bucket: <bucket_name> prefix: <prefix>",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: namespace: openshift-adp name: <dpa_name> spec: configuration: velero: defaultPlugins: - openshift - aws - csi backupLocations: - velero: provider: aws 1 default: true objectStorage: bucket: <bucket_name> 2 prefix: velero config: insecureSkipTLSVerify: 'true' profile: default region: <region_name> 3 s3ForcePathStyle: 'true' s3Url: <s3_url> 4 credential: key: cloud name: cloud-credentials 5",
"oc get all -n openshift-adp",
"NAME READY STATUS RESTARTS AGE pod/oadp-operator-controller-manager-67d9494d47-6l8z8 2/2 Running 0 2m8s pod/node-agent-9cq4q 1/1 Running 0 94s pod/node-agent-m4lts 1/1 Running 0 94s pod/node-agent-pv4kr 1/1 Running 0 95s pod/velero-588db7f655-n842v 1/1 Running 0 95s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/oadp-operator-controller-manager-metrics-service ClusterIP 172.30.70.140 <none> 8443/TCP 2m8s service/openshift-adp-velero-metrics-svc ClusterIP 172.30.10.0 <none> 8085/TCP 8h NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/node-agent 3 3 3 3 3 <none> 96s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/oadp-operator-controller-manager 1/1 1 1 2m9s deployment.apps/velero 1/1 1 1 96s NAME DESIRED CURRENT READY AGE replicaset.apps/oadp-operator-controller-manager-67d9494d47 1 1 1 2m9s replicaset.apps/velero-588db7f655 1 1 1 96s",
"oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'",
"{\"conditions\":[{\"lastTransitionTime\":\"2023-10-27T01:23:57Z\",\"message\":\"Reconcile complete\",\"reason\":\"Complete\",\"status\":\"True\",\"type\":\"Reconciled\"}]}",
"oc get backupstoragelocations.velero.io -n openshift-adp",
"NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: configuration: velero: podConfig: nodeSelector: <node_selector> 1 resourceAllocations: 2 limits: cpu: \"1\" memory: 1024Mi requests: cpu: 200m memory: 256Mi",
"oc label node/<node_name> node-role.kubernetes.io/nodeAgent=\"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/nodeAgent: \"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/infra: \"\" node-role.kubernetes.io/worker: \"\"",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: \"true\" profile: \"default\" region: <bucket_region> s3ForcePathStyle: \"true\" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: restic velero: client-burst: 500 1 client-qps: 300 2 defaultPlugins: - openshift - aws - kubevirt",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: \"true\" profile: \"default\" region: <bucket_region> s3ForcePathStyle: \"true\" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - openshift - aws - kubevirt - csi imagePullPolicy: Never 1",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication # backupLocations: - name: aws 1 velero: provider: aws default: true 2 objectStorage: bucket: <bucket_name> 3 prefix: <prefix> 4 config: region: <region_name> 5 profile: \"default\" credential: key: cloud name: cloud-credentials 6 - name: odf 7 velero: provider: aws default: false objectStorage: bucket: <bucket_name> prefix: <prefix> config: profile: \"default\" region: <region_name> s3Url: <url> 8 insecureSkipTLSVerify: \"true\" s3ForcePathStyle: \"true\" credential: key: cloud name: <custom_secret_name_odf> 9 #",
"apiVersion: velero.io/v1 kind: Backup spec: includedNamespaces: - <namespace> 1 storageLocation: <backup_storage_location> 2 defaultVolumesToFsBackup: true",
"configuration: nodeAgent: enable: false 1 uploaderType: kopia",
"configuration: nodeAgent: enable: true 1 uploaderType: kopia",
"oc create secret generic cloud-credentials-azure -n openshift-adp --from-file cloud=credentials-velero",
"oc create secret generic cloud-credentials-azure -n openshift-adp --from-file cloud=credentials-velero",
"oc create secret generic <custom_secret> -n openshift-adp --from-file cloud=credentials-velero",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: backupLocations: - velero: config: resourceGroup: <azure_resource_group> storageAccount: <azure_storage_account_id> subscriptionId: <azure_subscription_id> storageAccountKeyEnvVar: AZURE_STORAGE_ACCOUNT_ACCESS_KEY credential: key: cloud name: <custom_secret> 1 provider: azure default: true objectStorage: bucket: <bucket_name> prefix: <prefix> snapshotLocations: - velero: config: resourceGroup: <azure_resource_group> subscriptionId: <azure_subscription_id> incremental: \"true\" provider: azure",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: configuration: velero: podConfig: nodeSelector: <node_selector> 1 resourceAllocations: 2 limits: cpu: \"1\" memory: 1024Mi requests: cpu: 200m memory: 256Mi",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket> prefix: <prefix> caCert: <base64_encoded_cert_string> 1 config: insecureSkipTLSVerify: \"false\" 2",
"alias velero='oc -n openshift-adp exec deployment/velero -c velero -it -- ./velero'",
"velero version Client: Version: v1.12.1-OADP Git commit: - Server: Version: v1.12.1-OADP",
"CA_CERT=USD(oc -n openshift-adp get dataprotectionapplications.oadp.openshift.io <dpa-name> -o jsonpath='{.spec.backupLocations[0].velero.objectStorage.caCert}') [[ -n USDCA_CERT ]] && echo \"USDCA_CERT\" | base64 -d | oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c \"cat > /tmp/your-cacert.txt\" || echo \"DPA BSL has no caCert\"",
"velero describe backup <backup_name> --details --cacert /tmp/<your_cacert>.txt",
"velero backup logs <backup_name> --cacert /tmp/<your_cacert.txt>",
"oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c \"ls /tmp/your-cacert.txt\" /tmp/your-cacert.txt",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp 1 spec: configuration: velero: defaultPlugins: - azure - openshift 2 resourceTimeout: 10m 3 nodeAgent: 4 enable: true 5 uploaderType: kopia 6 podConfig: nodeSelector: <node_selector> 7 backupLocations: - velero: config: resourceGroup: <azure_resource_group> 8 storageAccount: <azure_storage_account_id> 9 subscriptionId: <azure_subscription_id> 10 storageAccountKeyEnvVar: AZURE_STORAGE_ACCOUNT_ACCESS_KEY credential: key: cloud name: cloud-credentials-azure 11 provider: azure default: true objectStorage: bucket: <bucket_name> 12 prefix: <prefix> 13 snapshotLocations: 14 - velero: config: resourceGroup: <azure_resource_group> subscriptionId: <azure_subscription_id> incremental: \"true\" name: default provider: azure credential: key: cloud name: cloud-credentials-azure 15",
"oc get all -n openshift-adp",
"NAME READY STATUS RESTARTS AGE pod/oadp-operator-controller-manager-67d9494d47-6l8z8 2/2 Running 0 2m8s pod/node-agent-9cq4q 1/1 Running 0 94s pod/node-agent-m4lts 1/1 Running 0 94s pod/node-agent-pv4kr 1/1 Running 0 95s pod/velero-588db7f655-n842v 1/1 Running 0 95s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/oadp-operator-controller-manager-metrics-service ClusterIP 172.30.70.140 <none> 8443/TCP 2m8s service/openshift-adp-velero-metrics-svc ClusterIP 172.30.10.0 <none> 8085/TCP 8h NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/node-agent 3 3 3 3 3 <none> 96s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/oadp-operator-controller-manager 1/1 1 1 2m9s deployment.apps/velero 1/1 1 1 96s NAME DESIRED CURRENT READY AGE replicaset.apps/oadp-operator-controller-manager-67d9494d47 1 1 1 2m9s replicaset.apps/velero-588db7f655 1 1 1 96s",
"oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'",
"{\"conditions\":[{\"lastTransitionTime\":\"2023-10-27T01:23:57Z\",\"message\":\"Reconcile complete\",\"reason\":\"Complete\",\"status\":\"True\",\"type\":\"Reconciled\"}]}",
"oc get backupstoragelocations.velero.io -n openshift-adp",
"NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: \"true\" profile: \"default\" region: <bucket_region> s3ForcePathStyle: \"true\" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: restic velero: client-burst: 500 1 client-qps: 300 2 defaultPlugins: - openshift - aws - kubevirt",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: \"true\" profile: \"default\" region: <bucket_region> s3ForcePathStyle: \"true\" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - openshift - aws - kubevirt - csi imagePullPolicy: Never 1",
"oc label node/<node_name> node-role.kubernetes.io/nodeAgent=\"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/nodeAgent: \"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/infra: \"\" node-role.kubernetes.io/worker: \"\"",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication spec: configuration: velero: defaultPlugins: - openshift - csi 1",
"configuration: nodeAgent: enable: false 1 uploaderType: kopia",
"configuration: nodeAgent: enable: true 1 uploaderType: kopia",
"gcloud auth login",
"BUCKET=<bucket> 1",
"gsutil mb gs://USDBUCKET/",
"PROJECT_ID=USD(gcloud config get-value project)",
"gcloud iam service-accounts create velero --display-name \"Velero service account\"",
"gcloud iam service-accounts list",
"SERVICE_ACCOUNT_EMAIL=USD(gcloud iam service-accounts list --filter=\"displayName:Velero service account\" --format 'value(email)')",
"ROLE_PERMISSIONS=( compute.disks.get compute.disks.create compute.disks.createSnapshot compute.snapshots.get compute.snapshots.create compute.snapshots.useReadOnly compute.snapshots.delete compute.zones.get storage.objects.create storage.objects.delete storage.objects.get storage.objects.list iam.serviceAccounts.signBlob )",
"gcloud iam roles create velero.server --project USDPROJECT_ID --title \"Velero Server\" --permissions \"USD(IFS=\",\"; echo \"USD{ROLE_PERMISSIONS[*]}\")\"",
"gcloud projects add-iam-policy-binding USDPROJECT_ID --member serviceAccount:USDSERVICE_ACCOUNT_EMAIL --role projects/USDPROJECT_ID/roles/velero.server",
"gsutil iam ch serviceAccount:USDSERVICE_ACCOUNT_EMAIL:objectAdmin gs://USD{BUCKET}",
"gcloud iam service-accounts keys create credentials-velero --iam-account USDSERVICE_ACCOUNT_EMAIL",
"oc create secret generic cloud-credentials-gcp -n openshift-adp --from-file cloud=credentials-velero",
"oc create secret generic cloud-credentials-gcp -n openshift-adp --from-file cloud=credentials-velero",
"oc create secret generic <custom_secret> -n openshift-adp --from-file cloud=credentials-velero",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: backupLocations: - velero: provider: gcp default: true credential: key: cloud name: <custom_secret> 1 objectStorage: bucket: <bucket_name> prefix: <prefix> snapshotLocations: - velero: provider: gcp default: true config: project: <project> snapshotLocation: us-west1",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: configuration: velero: podConfig: nodeSelector: <node_selector> 1 resourceAllocations: 2 limits: cpu: \"1\" memory: 1024Mi requests: cpu: 200m memory: 256Mi",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket> prefix: <prefix> caCert: <base64_encoded_cert_string> 1 config: insecureSkipTLSVerify: \"false\" 2",
"alias velero='oc -n openshift-adp exec deployment/velero -c velero -it -- ./velero'",
"velero version Client: Version: v1.12.1-OADP Git commit: - Server: Version: v1.12.1-OADP",
"CA_CERT=USD(oc -n openshift-adp get dataprotectionapplications.oadp.openshift.io <dpa-name> -o jsonpath='{.spec.backupLocations[0].velero.objectStorage.caCert}') [[ -n USDCA_CERT ]] && echo \"USDCA_CERT\" | base64 -d | oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c \"cat > /tmp/your-cacert.txt\" || echo \"DPA BSL has no caCert\"",
"velero describe backup <backup_name> --details --cacert /tmp/<your_cacert>.txt",
"velero backup logs <backup_name> --cacert /tmp/<your_cacert.txt>",
"oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c \"ls /tmp/your-cacert.txt\" /tmp/your-cacert.txt",
"mkdir -p oadp-credrequest",
"echo 'apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: oadp-operator-credentials namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: GCPProviderSpec permissions: - compute.disks.get - compute.disks.create - compute.disks.createSnapshot - compute.snapshots.get - compute.snapshots.create - compute.snapshots.useReadOnly - compute.snapshots.delete - compute.zones.get - storage.objects.create - storage.objects.delete - storage.objects.get - storage.objects.list - iam.serviceAccounts.signBlob skipServiceCheck: true secretRef: name: cloud-credentials-gcp namespace: <OPERATOR_INSTALL_NS> serviceAccountNames: - velero ' > oadp-credrequest/credrequest.yaml",
"ccoctl gcp create-service-accounts --name=<name> --project=<gcp_project_id> --credentials-requests-dir=oadp-credrequest --workload-identity-pool=<pool_id> --workload-identity-provider=<provider_id>",
"oc create namespace <OPERATOR_INSTALL_NS>",
"oc apply -f manifests/openshift-adp-cloud-credentials-gcp-credentials.yaml",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: <OPERATOR_INSTALL_NS> 1 spec: configuration: velero: defaultPlugins: - gcp - openshift 2 resourceTimeout: 10m 3 nodeAgent: 4 enable: true 5 uploaderType: kopia 6 podConfig: nodeSelector: <node_selector> 7 backupLocations: - velero: provider: gcp default: true credential: key: cloud 8 name: cloud-credentials-gcp 9 objectStorage: bucket: <bucket_name> 10 prefix: <prefix> 11 snapshotLocations: 12 - velero: provider: gcp default: true config: project: <project> snapshotLocation: us-west1 13 credential: key: cloud name: cloud-credentials-gcp 14 backupImages: true 15",
"oc get all -n openshift-adp",
"NAME READY STATUS RESTARTS AGE pod/oadp-operator-controller-manager-67d9494d47-6l8z8 2/2 Running 0 2m8s pod/node-agent-9cq4q 1/1 Running 0 94s pod/node-agent-m4lts 1/1 Running 0 94s pod/node-agent-pv4kr 1/1 Running 0 95s pod/velero-588db7f655-n842v 1/1 Running 0 95s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/oadp-operator-controller-manager-metrics-service ClusterIP 172.30.70.140 <none> 8443/TCP 2m8s service/openshift-adp-velero-metrics-svc ClusterIP 172.30.10.0 <none> 8085/TCP 8h NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/node-agent 3 3 3 3 3 <none> 96s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/oadp-operator-controller-manager 1/1 1 1 2m9s deployment.apps/velero 1/1 1 1 96s NAME DESIRED CURRENT READY AGE replicaset.apps/oadp-operator-controller-manager-67d9494d47 1 1 1 2m9s replicaset.apps/velero-588db7f655 1 1 1 96s",
"oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'",
"{\"conditions\":[{\"lastTransitionTime\":\"2023-10-27T01:23:57Z\",\"message\":\"Reconcile complete\",\"reason\":\"Complete\",\"status\":\"True\",\"type\":\"Reconciled\"}]}",
"oc get backupstoragelocations.velero.io -n openshift-adp",
"NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: \"true\" profile: \"default\" region: <bucket_region> s3ForcePathStyle: \"true\" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: restic velero: client-burst: 500 1 client-qps: 300 2 defaultPlugins: - openshift - aws - kubevirt",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: \"true\" profile: \"default\" region: <bucket_region> s3ForcePathStyle: \"true\" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - openshift - aws - kubevirt - csi imagePullPolicy: Never 1",
"oc label node/<node_name> node-role.kubernetes.io/nodeAgent=\"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/nodeAgent: \"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/infra: \"\" node-role.kubernetes.io/worker: \"\"",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication spec: configuration: velero: defaultPlugins: - openshift - csi 1",
"configuration: nodeAgent: enable: false 1 uploaderType: kopia",
"configuration: nodeAgent: enable: true 1 uploaderType: kopia",
"cat << EOF > ./credentials-velero [default] aws_access_key_id=<AWS_ACCESS_KEY_ID> aws_secret_access_key=<AWS_SECRET_ACCESS_KEY> EOF",
"oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero",
"oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero",
"oc create secret generic <custom_secret> -n openshift-adp --from-file cloud=credentials-velero",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: backupLocations: - velero: config: profile: \"default\" region: <region_name> 1 s3Url: <url> insecureSkipTLSVerify: \"true\" s3ForcePathStyle: \"true\" provider: aws default: true credential: key: cloud name: <custom_secret> 2 objectStorage: bucket: <bucket_name> prefix: <prefix>",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: configuration: velero: podConfig: nodeSelector: <node_selector> 1 resourceAllocations: 2 limits: cpu: \"1\" memory: 1024Mi requests: cpu: 200m memory: 256Mi",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket> prefix: <prefix> caCert: <base64_encoded_cert_string> 1 config: insecureSkipTLSVerify: \"false\" 2",
"alias velero='oc -n openshift-adp exec deployment/velero -c velero -it -- ./velero'",
"velero version Client: Version: v1.12.1-OADP Git commit: - Server: Version: v1.12.1-OADP",
"CA_CERT=USD(oc -n openshift-adp get dataprotectionapplications.oadp.openshift.io <dpa-name> -o jsonpath='{.spec.backupLocations[0].velero.objectStorage.caCert}') [[ -n USDCA_CERT ]] && echo \"USDCA_CERT\" | base64 -d | oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c \"cat > /tmp/your-cacert.txt\" || echo \"DPA BSL has no caCert\"",
"velero describe backup <backup_name> --details --cacert /tmp/<your_cacert>.txt",
"velero backup logs <backup_name> --cacert /tmp/<your_cacert.txt>",
"oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c \"ls /tmp/your-cacert.txt\" /tmp/your-cacert.txt",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp 1 spec: configuration: velero: defaultPlugins: - aws 2 - openshift 3 resourceTimeout: 10m 4 nodeAgent: 5 enable: true 6 uploaderType: kopia 7 podConfig: nodeSelector: <node_selector> 8 backupLocations: - velero: config: profile: \"default\" region: <region_name> 9 s3Url: <url> 10 insecureSkipTLSVerify: \"true\" s3ForcePathStyle: \"true\" provider: aws default: true credential: key: cloud name: cloud-credentials 11 objectStorage: bucket: <bucket_name> 12 prefix: <prefix> 13",
"oc get all -n openshift-adp",
"NAME READY STATUS RESTARTS AGE pod/oadp-operator-controller-manager-67d9494d47-6l8z8 2/2 Running 0 2m8s pod/node-agent-9cq4q 1/1 Running 0 94s pod/node-agent-m4lts 1/1 Running 0 94s pod/node-agent-pv4kr 1/1 Running 0 95s pod/velero-588db7f655-n842v 1/1 Running 0 95s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/oadp-operator-controller-manager-metrics-service ClusterIP 172.30.70.140 <none> 8443/TCP 2m8s service/openshift-adp-velero-metrics-svc ClusterIP 172.30.10.0 <none> 8085/TCP 8h NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/node-agent 3 3 3 3 3 <none> 96s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/oadp-operator-controller-manager 1/1 1 1 2m9s deployment.apps/velero 1/1 1 1 96s NAME DESIRED CURRENT READY AGE replicaset.apps/oadp-operator-controller-manager-67d9494d47 1 1 1 2m9s replicaset.apps/velero-588db7f655 1 1 1 96s",
"oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'",
"{\"conditions\":[{\"lastTransitionTime\":\"2023-10-27T01:23:57Z\",\"message\":\"Reconcile complete\",\"reason\":\"Complete\",\"status\":\"True\",\"type\":\"Reconciled\"}]}",
"oc get backupstoragelocations.velero.io -n openshift-adp",
"NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: \"true\" profile: \"default\" region: <bucket_region> s3ForcePathStyle: \"true\" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: restic velero: client-burst: 500 1 client-qps: 300 2 defaultPlugins: - openshift - aws - kubevirt",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: \"true\" profile: \"default\" region: <bucket_region> s3ForcePathStyle: \"true\" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - openshift - aws - kubevirt - csi imagePullPolicy: Never 1",
"oc label node/<node_name> node-role.kubernetes.io/nodeAgent=\"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/nodeAgent: \"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/infra: \"\" node-role.kubernetes.io/worker: \"\"",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication spec: configuration: velero: defaultPlugins: - openshift - csi 1",
"configuration: nodeAgent: enable: false 1 uploaderType: kopia",
"configuration: nodeAgent: enable: true 1 uploaderType: kopia",
"oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero",
"oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero",
"oc create secret generic <custom_secret> -n openshift-adp --from-file cloud=credentials-velero",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: backupLocations: - velero: provider: <provider> default: true credential: key: cloud name: <custom_secret> 1 objectStorage: bucket: <bucket_name> prefix: <prefix>",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: configuration: velero: podConfig: nodeSelector: <node_selector> 1 resourceAllocations: 2 limits: cpu: \"1\" memory: 1024Mi requests: cpu: 200m memory: 256Mi",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket> prefix: <prefix> caCert: <base64_encoded_cert_string> 1 config: insecureSkipTLSVerify: \"false\" 2",
"alias velero='oc -n openshift-adp exec deployment/velero -c velero -it -- ./velero'",
"velero version Client: Version: v1.12.1-OADP Git commit: - Server: Version: v1.12.1-OADP",
"CA_CERT=USD(oc -n openshift-adp get dataprotectionapplications.oadp.openshift.io <dpa-name> -o jsonpath='{.spec.backupLocations[0].velero.objectStorage.caCert}') [[ -n USDCA_CERT ]] && echo \"USDCA_CERT\" | base64 -d | oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c \"cat > /tmp/your-cacert.txt\" || echo \"DPA BSL has no caCert\"",
"velero describe backup <backup_name> --details --cacert /tmp/<your_cacert>.txt",
"velero backup logs <backup_name> --cacert /tmp/<your_cacert.txt>",
"oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c \"ls /tmp/your-cacert.txt\" /tmp/your-cacert.txt",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp 1 spec: configuration: velero: defaultPlugins: - aws 2 - kubevirt 3 - csi 4 - openshift 5 resourceTimeout: 10m 6 nodeAgent: 7 enable: true 8 uploaderType: kopia 9 podConfig: nodeSelector: <node_selector> 10 backupLocations: - velero: provider: gcp 11 default: true credential: key: cloud name: <default_secret> 12 objectStorage: bucket: <bucket_name> 13 prefix: <prefix> 14",
"oc get all -n openshift-adp",
"NAME READY STATUS RESTARTS AGE pod/oadp-operator-controller-manager-67d9494d47-6l8z8 2/2 Running 0 2m8s pod/node-agent-9cq4q 1/1 Running 0 94s pod/node-agent-m4lts 1/1 Running 0 94s pod/node-agent-pv4kr 1/1 Running 0 95s pod/velero-588db7f655-n842v 1/1 Running 0 95s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/oadp-operator-controller-manager-metrics-service ClusterIP 172.30.70.140 <none> 8443/TCP 2m8s service/openshift-adp-velero-metrics-svc ClusterIP 172.30.10.0 <none> 8085/TCP 8h NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/node-agent 3 3 3 3 3 <none> 96s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/oadp-operator-controller-manager 1/1 1 1 2m9s deployment.apps/velero 1/1 1 1 96s NAME DESIRED CURRENT READY AGE replicaset.apps/oadp-operator-controller-manager-67d9494d47 1 1 1 2m9s replicaset.apps/velero-588db7f655 1 1 1 96s",
"oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'",
"{\"conditions\":[{\"lastTransitionTime\":\"2023-10-27T01:23:57Z\",\"message\":\"Reconcile complete\",\"reason\":\"Complete\",\"status\":\"True\",\"type\":\"Reconciled\"}]}",
"oc get backupstoragelocations.velero.io -n openshift-adp",
"NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: \"true\" profile: \"default\" region: <bucket_region> s3ForcePathStyle: \"true\" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: restic velero: client-burst: 500 1 client-qps: 300 2 defaultPlugins: - openshift - aws - kubevirt",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: \"true\" profile: \"default\" region: <bucket_region> s3ForcePathStyle: \"true\" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - openshift - aws - kubevirt - csi imagePullPolicy: Never 1",
"oc label node/<node_name> node-role.kubernetes.io/nodeAgent=\"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/nodeAgent: \"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/infra: \"\" node-role.kubernetes.io/worker: \"\"",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication spec: configuration: velero: defaultPlugins: - openshift - csi 1",
"configuration: nodeAgent: enable: false 1 uploaderType: kopia",
"configuration: nodeAgent: enable: true 1 uploaderType: kopia",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp 1 spec: configuration: velero: defaultPlugins: - kubevirt 2 - gcp 3 - csi 4 - openshift 5 resourceTimeout: 10m 6 nodeAgent: 7 enable: true 8 uploaderType: kopia 9 podConfig: nodeSelector: <node_selector> 10 backupLocations: - velero: provider: gcp 11 default: true credential: key: cloud name: <default_secret> 12 objectStorage: bucket: <bucket_name> 13 prefix: <prefix> 14",
"oc get all -n openshift-adp",
"NAME READY STATUS RESTARTS AGE pod/oadp-operator-controller-manager-67d9494d47-6l8z8 2/2 Running 0 2m8s pod/node-agent-9cq4q 1/1 Running 0 94s pod/node-agent-m4lts 1/1 Running 0 94s pod/node-agent-pv4kr 1/1 Running 0 95s pod/velero-588db7f655-n842v 1/1 Running 0 95s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/oadp-operator-controller-manager-metrics-service ClusterIP 172.30.70.140 <none> 8443/TCP 2m8s service/openshift-adp-velero-metrics-svc ClusterIP 172.30.10.0 <none> 8085/TCP 8h NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/node-agent 3 3 3 3 3 <none> 96s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/oadp-operator-controller-manager 1/1 1 1 2m9s deployment.apps/velero 1/1 1 1 96s NAME DESIRED CURRENT READY AGE replicaset.apps/oadp-operator-controller-manager-67d9494d47 1 1 1 2m9s replicaset.apps/velero-588db7f655 1 1 1 96s",
"oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'",
"{\"conditions\":[{\"lastTransitionTime\":\"2023-10-27T01:23:57Z\",\"message\":\"Reconcile complete\",\"reason\":\"Complete\",\"status\":\"True\",\"type\":\"Reconciled\"}]}",
"oc get backupstoragelocations.velero.io -n openshift-adp",
"NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true",
"apiVersion: velero.io/v1 kind: Backup metadata: name: vmbackupsingle namespace: openshift-adp spec: snapshotMoveData: true includedNamespaces: - <vm_namespace> 1 labelSelector: matchLabels: app: <vm_app_name> 2 storageLocation: <backup_storage_location_name> 3",
"oc apply -f <backup_cr_file_name> 1",
"apiVersion: velero.io/v1 kind: Restore metadata: name: vmrestoresingle namespace: openshift-adp spec: backupName: vmbackupsingle 1 restorePVs: true",
"oc apply -f <restore_cr_file_name> 1",
"oc label vm <vm_name> app=<vm_name> -n openshift-adp",
"apiVersion: velero.io/v1 kind: Restore metadata: name: singlevmrestore namespace: openshift-adp spec: backupName: multiplevmbackup restorePVs: true LabelSelectors: - matchLabels: kubevirt.io/created-by: <datavolume_uid> 1 - matchLabels: app: <vm_name> 2",
"oc apply -f <restore_cr_file_name> 1",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: \"true\" profile: \"default\" region: <bucket_region> s3ForcePathStyle: \"true\" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: restic velero: client-burst: 500 1 client-qps: 300 2 defaultPlugins: - openshift - aws - kubevirt",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: \"true\" profile: \"default\" region: <bucket_region> s3ForcePathStyle: \"true\" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - openshift - aws - kubevirt - csi imagePullPolicy: Never 1",
"oc label node/<node_name> node-role.kubernetes.io/nodeAgent=\"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/nodeAgent: \"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/infra: \"\" node-role.kubernetes.io/worker: \"\"",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication # backupLocations: - name: aws 1 velero: provider: aws default: true 2 objectStorage: bucket: <bucket_name> 3 prefix: <prefix> 4 config: region: <region_name> 5 profile: \"default\" credential: key: cloud name: cloud-credentials 6 - name: odf 7 velero: provider: aws default: false objectStorage: bucket: <bucket_name> prefix: <prefix> config: profile: \"default\" region: <region_name> s3Url: <url> 8 insecureSkipTLSVerify: \"true\" s3ForcePathStyle: \"true\" credential: key: cloud name: <custom_secret_name_odf> 9 #",
"apiVersion: velero.io/v1 kind: Backup spec: includedNamespaces: - <namespace> 1 storageLocation: <backup_storage_location> 2 defaultVolumesToFsBackup: true",
"oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=<aws_credentials_file_name> 1",
"oc create secret generic mcg-secret -n openshift-adp --from-file cloud=<MCG_credentials_file_name> 1",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: two-bsl-dpa namespace: openshift-adp spec: backupLocations: - name: aws velero: config: profile: default region: <region_name> 1 credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> 2 prefix: velero provider: aws - name: mcg velero: config: insecureSkipTLSVerify: \"true\" profile: noobaa region: <region_name> 3 s3ForcePathStyle: \"true\" s3Url: <s3_url> 4 credential: key: cloud name: mcg-secret 5 objectStorage: bucket: <bucket_name_mcg> 6 prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - openshift - aws",
"oc create -f <dpa_file_name> 1",
"oc get dpa -o yaml",
"oc get bsl",
"NAME PHASE LAST VALIDATED AGE DEFAULT aws Available 5s 3m28s true mcg Available 5s 3m28s",
"apiVersion: velero.io/v1 kind: Backup metadata: name: test-backup1 namespace: openshift-adp spec: includedNamespaces: - <mysql_namespace> 1 defaultVolumesToFsBackup: true",
"oc apply -f <backup_file_name> 1",
"oc get backups.velero.io <backup_name> -o yaml 1",
"apiVersion: velero.io/v1 kind: Backup metadata: name: test-backup1 namespace: openshift-adp spec: includedNamespaces: - <mysql_namespace> 1 storageLocation: mcg 2 defaultVolumesToFsBackup: true",
"oc apply -f <backup_file_name> 1",
"oc get backups.velero.io <backup_name> -o yaml 1",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication # snapshotLocations: - velero: config: profile: default region: <region> 1 credential: key: cloud name: cloud-credentials provider: aws - velero: config: profile: default region: <region> credential: key: cloud name: <custom_credential> 2 provider: aws #",
"velero backup create <backup-name> --snapshot-volumes false 1",
"velero describe backup <backup_name> --details 1",
"velero restore create --from-backup <backup-name> 1",
"velero describe restore <restore_name> --details 1",
"oc get backupstoragelocations.velero.io -n openshift-adp",
"NAMESPACE NAME PHASE LAST VALIDATED AGE DEFAULT openshift-adp velero-sample-1 Available 11s 31m",
"apiVersion: velero.io/v1 kind: Backup metadata: name: <backup> labels: velero.io/storage-location: default namespace: openshift-adp spec: hooks: {} includedNamespaces: - <namespace> 1 includedResources: [] 2 excludedResources: [] 3 storageLocation: <velero-sample-1> 4 ttl: 720h0m0s 5 labelSelector: 6 matchLabels: app: <label_1> app: <label_2> app: <label_3> orLabelSelectors: 7 - matchLabels: app: <label_1> app: <label_2> app: <label_3>",
"oc get backups.velero.io -n openshift-adp <backup> -o jsonpath='{.status.phase}'",
"apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshotClass metadata: name: <volume_snapshot_class_name> labels: velero.io/csi-volumesnapshot-class: \"true\" 1 annotations: snapshot.storage.kubernetes.io/is-default-class: true 2 driver: <csi_driver> deletionPolicy: <deletion_policy_type> 3",
"apiVersion: velero.io/v1 kind: Backup metadata: name: <backup> labels: velero.io/storage-location: default namespace: openshift-adp spec: defaultVolumesToFsBackup: true 1",
"apiVersion: velero.io/v1 kind: Backup metadata: name: <backup> namespace: openshift-adp spec: hooks: resources: - name: <hook_name> includedNamespaces: - <namespace> 1 excludedNamespaces: 2 - <namespace> includedResources: [] - pods 3 excludedResources: [] 4 labelSelector: 5 matchLabels: app: velero component: server pre: 6 - exec: container: <container> 7 command: - /bin/uname 8 - -a onError: Fail 9 timeout: 30s 10 post: 11",
"oc get backupStorageLocations -n openshift-adp",
"NAMESPACE NAME PHASE LAST VALIDATED AGE DEFAULT openshift-adp velero-sample-1 Available 11s 31m",
"cat << EOF | oc apply -f - apiVersion: velero.io/v1 kind: Schedule metadata: name: <schedule> namespace: openshift-adp spec: schedule: 0 7 * * * 1 template: hooks: {} includedNamespaces: - <namespace> 2 storageLocation: <velero-sample-1> 3 defaultVolumesToFsBackup: true 4 ttl: 720h0m0s 5 EOF",
"schedule: \"*/10 * * * *\"",
"oc get schedule -n openshift-adp <schedule> -o jsonpath='{.status.phase}'",
"apiVersion: velero.io/v1 kind: DeleteBackupRequest metadata: name: deletebackuprequest namespace: openshift-adp spec: backupName: <backup_name> 1",
"oc apply -f <deletebackuprequest_cr_filename>",
"velero backup delete <backup_name> -n openshift-adp 1",
"pod/repo-maintain-job-173...2527-2nbls 0/1 Completed 0 168m pod/repo-maintain-job-173....536-fl9tm 0/1 Completed 0 108m pod/repo-maintain-job-173...2545-55ggx 0/1 Completed 0 48m",
"not due for full maintenance cycle until 2024-00-00 18:29:4",
"oc get backuprepositories.velero.io -n openshift-adp",
"oc delete backuprepository <backup_repository_name> -n openshift-adp 1",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: dpa-sample spec: configuration: nodeAgent: enable: true uploaderType: kopia",
"velero backup create <backup-name> --snapshot-volumes false 1",
"velero describe backup <backup_name> --details 1",
"velero restore create --from-backup <backup-name> 1",
"velero describe restore <restore_name> --details 1",
"apiVersion: velero.io/v1 kind: Restore metadata: name: <restore> namespace: openshift-adp spec: backupName: <backup> 1 includedResources: [] 2 excludedResources: - nodes - events - events.events.k8s.io - backups.velero.io - restores.velero.io - resticrepositories.velero.io restorePVs: true 3",
"oc get restores.velero.io -n openshift-adp <restore> -o jsonpath='{.status.phase}'",
"oc get all -n <namespace> 1",
"bash dc-restic-post-restore.sh -> dc-post-restore.sh",
"#!/bin/bash set -e if sha256sum exists, use it to check the integrity of the file if command -v sha256sum >/dev/null 2>&1; then CHECKSUM_CMD=\"sha256sum\" else CHECKSUM_CMD=\"shasum -a 256\" fi label_name () { if [ \"USD{#1}\" -le \"63\" ]; then echo USD1 return fi sha=USD(echo -n USD1|USDCHECKSUM_CMD) echo \"USD{1:0:57}USD{sha:0:6}\" } if [[ USD# -ne 1 ]]; then echo \"usage: USD{BASH_SOURCE} restore-name\" exit 1 fi echo \"restore: USD1\" label=USD(label_name USD1) echo \"label: USDlabel\" echo Deleting disconnected restore pods delete pods --all-namespaces -l oadp.openshift.io/disconnected-from-dc=USDlabel for dc in USD(oc get dc --all-namespaces -l oadp.openshift.io/replicas-modified=USDlabel -o jsonpath='{range .items[*]}{.metadata.namespace}{\",\"}{.metadata.name}{\",\"}{.metadata.annotations.oadp\\.openshift\\.io/original-replicas}{\",\"}{.metadata.annotations.oadp\\.openshift\\.io/original-paused}{\"\\n\"}') do IFS=',' read -ra dc_arr <<< \"USDdc\" if [ USD{#dc_arr[0]} -gt 0 ]; then echo Found deployment USD{dc_arr[0]}/USD{dc_arr[1]}, setting replicas: USD{dc_arr[2]}, paused: USD{dc_arr[3]} cat <<EOF | oc patch dc -n USD{dc_arr[0]} USD{dc_arr[1]} --patch-file /dev/stdin spec: replicas: USD{dc_arr[2]} paused: USD{dc_arr[3]} EOF fi done",
"apiVersion: velero.io/v1 kind: Restore metadata: name: <restore> namespace: openshift-adp spec: hooks: resources: - name: <hook_name> includedNamespaces: - <namespace> 1 excludedNamespaces: - <namespace> includedResources: - pods 2 excludedResources: [] labelSelector: 3 matchLabels: app: velero component: server postHooks: - init: initContainers: - name: restore-hook-init image: alpine:latest volumeMounts: - mountPath: /restores/pvc1-vm name: pvc1-vm command: - /bin/ash - -c timeout: 4 - exec: container: <container> 5 command: - /bin/bash 6 - -c - \"psql < /backup/backup.sql\" waitTimeout: 5m 7 execTimeout: 1m 8 onError: Continue 9",
"velero restore create <RESTORE_NAME> --from-backup <BACKUP_NAME> --exclude-resources=deployment.apps",
"velero restore create <RESTORE_NAME> --from-backup <BACKUP_NAME> --include-resources=deployment.apps",
"export CLUSTER_NAME=my-cluster 1 export ROSA_CLUSTER_ID=USD(rosa describe cluster -c USD{CLUSTER_NAME} --output json | jq -r .id) export REGION=USD(rosa describe cluster -c USD{CLUSTER_NAME} --output json | jq -r .region.id) export OIDC_ENDPOINT=USD(oc get authentication.config.openshift.io cluster -o jsonpath='{.spec.serviceAccountIssuer}' | sed 's|^https://||') export AWS_ACCOUNT_ID=USD(aws sts get-caller-identity --query Account --output text) export CLUSTER_VERSION=USD(rosa describe cluster -c USD{CLUSTER_NAME} -o json | jq -r .version.raw_id | cut -f -2 -d '.') export ROLE_NAME=\"USD{CLUSTER_NAME}-openshift-oadp-aws-cloud-credentials\" export SCRATCH=\"/tmp/USD{CLUSTER_NAME}/oadp\" mkdir -p USD{SCRATCH} echo \"Cluster ID: USD{ROSA_CLUSTER_ID}, Region: USD{REGION}, OIDC Endpoint: USD{OIDC_ENDPOINT}, AWS Account ID: USD{AWS_ACCOUNT_ID}\"",
"POLICY_ARN=USD(aws iam list-policies --query \"Policies[?PolicyName=='RosaOadpVer1'].{ARN:Arn}\" --output text) 1",
"if [[ -z \"USD{POLICY_ARN}\" ]]; then cat << EOF > USD{SCRATCH}/policy.json 1 { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"s3:CreateBucket\", \"s3:DeleteBucket\", \"s3:PutBucketTagging\", \"s3:GetBucketTagging\", \"s3:PutEncryptionConfiguration\", \"s3:GetEncryptionConfiguration\", \"s3:PutLifecycleConfiguration\", \"s3:GetLifecycleConfiguration\", \"s3:GetBucketLocation\", \"s3:ListBucket\", \"s3:GetObject\", \"s3:PutObject\", \"s3:DeleteObject\", \"s3:ListBucketMultipartUploads\", \"s3:AbortMultipartUploads\", \"s3:ListMultipartUploadParts\", \"s3:DescribeSnapshots\", \"ec2:DescribeVolumes\", \"ec2:DescribeVolumeAttribute\", \"ec2:DescribeVolumesModifications\", \"ec2:DescribeVolumeStatus\", \"ec2:CreateTags\", \"ec2:CreateVolume\", \"ec2:CreateSnapshot\", \"ec2:DeleteSnapshot\" ], \"Resource\": \"*\" } ]} EOF POLICY_ARN=USD(aws iam create-policy --policy-name \"RosaOadpVer1\" --policy-document file:///USD{SCRATCH}/policy.json --query Policy.Arn --tags Key=rosa_openshift_version,Value=USD{CLUSTER_VERSION} Key=rosa_role_prefix,Value=ManagedOpenShift Key=operator_namespace,Value=openshift-oadp Key=operator_name,Value=openshift-oadp --output text) fi",
"echo USD{POLICY_ARN}",
"cat <<EOF > USD{SCRATCH}/trust-policy.json { \"Version\":2012-10-17\", \"Statement\": [{ \"Effect\": \"Allow\", \"Principal\": { \"Federated\": \"arn:aws:iam::USD{AWS_ACCOUNT_ID}:oidc-provider/USD{OIDC_ENDPOINT}\" }, \"Action\": \"sts:AssumeRoleWithWebIdentity\", \"Condition\": { \"StringEquals\": { \"USD{OIDC_ENDPOINT}:sub\": [ \"system:serviceaccount:openshift-adp:openshift-adp-controller-manager\", \"system:serviceaccount:openshift-adp:velero\"] } } }] } EOF",
"ROLE_ARN=USD(aws iam create-role --role-name \"USD{ROLE_NAME}\" --assume-role-policy-document file://USD{SCRATCH}/trust-policy.json --tags Key=rosa_cluster_id,Value=USD{ROSA_CLUSTER_ID} Key=rosa_openshift_version,Value=USD{CLUSTER_VERSION} Key=rosa_role_prefix,Value=ManagedOpenShift Key=operator_namespace,Value=openshift-adp Key=operator_name,Value=openshift-oadp --query Role.Arn --output text)",
"echo USD{ROLE_ARN}",
"aws iam attach-role-policy --role-name \"USD{ROLE_NAME}\" --policy-arn USD{POLICY_ARN}",
"cat <<EOF > USD{SCRATCH}/credentials [default] role_arn = USD{ROLE_ARN} web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token region = <aws_region> 1 EOF",
"oc create namespace openshift-adp",
"oc -n openshift-adp create secret generic cloud-credentials --from-file=USD{SCRATCH}/credentials",
"cat << EOF | oc create -f - apiVersion: oadp.openshift.io/v1alpha1 kind: CloudStorage metadata: name: USD{CLUSTER_NAME}-oadp namespace: openshift-adp spec: creationSecret: key: credentials name: cloud-credentials enableSharedConfig: true name: USD{CLUSTER_NAME}-oadp provider: aws region: USDREGION EOF",
"oc get pvc -n <namespace>",
"NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE applog Bound pvc-351791ae-b6ab-4e8b-88a4-30f73caf5ef8 1Gi RWO gp3-csi 4d19h mysql Bound pvc-16b8e009-a20a-4379-accc-bc81fedd0621 1Gi RWO gp3-csi 4d19h",
"oc get storageclass",
"NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE gp2 kubernetes.io/aws-ebs Delete WaitForFirstConsumer true 4d21h gp2-csi ebs.csi.aws.com Delete WaitForFirstConsumer true 4d21h gp3 ebs.csi.aws.com Delete WaitForFirstConsumer true 4d21h gp3-csi (default) ebs.csi.aws.com Delete WaitForFirstConsumer true 4d21h",
"cat << EOF | oc create -f - apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: USD{CLUSTER_NAME}-dpa namespace: openshift-adp spec: backupImages: true 1 features: dataMover: enable: false backupLocations: - bucket: cloudStorageRef: name: USD{CLUSTER_NAME}-oadp credential: key: credentials name: cloud-credentials prefix: velero default: true config: region: USD{REGION} configuration: velero: defaultPlugins: - openshift - aws - csi nodeAgent: 2 enable: false uploaderType: kopia 3 EOF",
"cat << EOF | oc create -f - apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: USD{CLUSTER_NAME}-dpa namespace: openshift-adp spec: backupImages: true 1 features: dataMover: enable: false backupLocations: - bucket: cloudStorageRef: name: USD{CLUSTER_NAME}-oadp credential: key: credentials name: cloud-credentials prefix: velero default: true config: region: USD{REGION} configuration: velero: defaultPlugins: - openshift - aws nodeAgent: 2 enable: false uploaderType: restic snapshotLocations: - velero: config: credentialsFile: /tmp/credentials/openshift-adp/cloud-credentials-credentials 3 enableSharedConfig: \"true\" 4 profile: default 5 region: USD{REGION} 6 provider: aws EOF",
"nodeAgent: enable: false uploaderType: restic",
"restic: enable: false",
"oc get sub -o yaml redhat-oadp-operator",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: annotations: creationTimestamp: \"2025-01-15T07:18:31Z\" generation: 1 labels: operators.coreos.com/redhat-oadp-operator.openshift-adp: \"\" name: redhat-oadp-operator namespace: openshift-adp resourceVersion: \"77363\" uid: 5ba00906-5ad2-4476-ae7b-ffa90986283d spec: channel: stable-1.4 config: env: - name: ROLEARN value: arn:aws:iam::11111111:role/wrong-role-arn 1 installPlanApproval: Manual name: redhat-oadp-operator source: prestage-operators sourceNamespace: openshift-marketplace startingCSV: oadp-operator.v1.4.2",
"oc patch subscription redhat-oadp-operator -p '{\"spec\": {\"config\": {\"env\": [{\"name\": \"ROLEARN\", \"value\": \"<role_arn>\"}]}}}' --type='merge'",
"oc get secret cloud-credentials -o jsonpath='{.data.credentials}' | base64 -d",
"[default] sts_regional_endpoints = regional role_arn = arn:aws:iam::160.....6956:role/oadprosa.....8wlf web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-rosa-dpa namespace: openshift-adp spec: backupLocations: - bucket: config: region: us-east-1 cloudStorageRef: name: <cloud_storage> 1 credential: name: cloud-credentials key: credentials prefix: velero default: true configuration: velero: defaultPlugins: - aws - openshift",
"oc create -f <dpa_manifest_file>",
"oc get dpa -n openshift-adp -o yaml",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication status: conditions: - lastTransitionTime: \"2023-07-31T04:48:12Z\" message: Reconcile complete reason: Complete status: \"True\" type: Reconciled",
"oc get backupstoragelocations.velero.io -n openshift-adp",
"NAME PHASE LAST VALIDATED AGE DEFAULT ts-dpa-1 Available 3s 6s true",
"oc create namespace hello-world",
"oc new-app -n hello-world --image=docker.io/openshift/hello-openshift",
"oc expose service/hello-openshift -n hello-world",
"curl `oc get route/hello-openshift -n hello-world -o jsonpath='{.spec.host}'`",
"Hello OpenShift!",
"cat << EOF | oc create -f - apiVersion: velero.io/v1 kind: Backup metadata: name: hello-world namespace: openshift-adp spec: includedNamespaces: - hello-world storageLocation: USD{CLUSTER_NAME}-dpa-1 ttl: 720h0m0s EOF",
"watch \"oc -n openshift-adp get backup hello-world -o json | jq .status\"",
"{ \"completionTimestamp\": \"2022-09-07T22:20:44Z\", \"expiration\": \"2022-10-07T22:20:22Z\", \"formatVersion\": \"1.1.0\", \"phase\": \"Completed\", \"progress\": { \"itemsBackedUp\": 58, \"totalItems\": 58 }, \"startTimestamp\": \"2022-09-07T22:20:22Z\", \"version\": 1 }",
"oc delete ns hello-world",
"cat << EOF | oc create -f - apiVersion: velero.io/v1 kind: Restore metadata: name: hello-world namespace: openshift-adp spec: backupName: hello-world EOF",
"watch \"oc -n openshift-adp get restore hello-world -o json | jq .status\"",
"{ \"completionTimestamp\": \"2022-09-07T22:25:47Z\", \"phase\": \"Completed\", \"progress\": { \"itemsRestored\": 38, \"totalItems\": 38 }, \"startTimestamp\": \"2022-09-07T22:25:28Z\", \"warnings\": 9 }",
"oc -n hello-world get pods",
"NAME READY STATUS RESTARTS AGE hello-openshift-9f885f7c6-kdjpj 1/1 Running 0 90s",
"curl `oc get route/hello-openshift -n hello-world -o jsonpath='{.spec.host}'`",
"Hello OpenShift!",
"oc delete ns hello-world",
"oc -n openshift-adp delete dpa USD{CLUSTER_NAME}-dpa",
"oc -n openshift-adp delete cloudstorage USD{CLUSTER_NAME}-oadp",
"oc -n openshift-adp patch cloudstorage USD{CLUSTER_NAME}-oadp -p '{\"metadata\":{\"finalizers\":null}}' --type=merge",
"oc -n openshift-adp delete subscription oadp-operator",
"oc delete ns openshift-adp",
"oc delete backups.velero.io hello-world",
"velero backup delete hello-world",
"for CRD in `oc get crds | grep velero | awk '{print USD1}'`; do oc delete crd USDCRD; done",
"aws s3 rm s3://USD{CLUSTER_NAME}-oadp --recursive",
"aws s3api delete-bucket --bucket USD{CLUSTER_NAME}-oadp",
"aws iam detach-role-policy --role-name \"USD{ROLE_NAME}\" --policy-arn \"USD{POLICY_ARN}\"",
"aws iam delete-role --role-name \"USD{ROLE_NAME}\"",
"export CLUSTER_NAME= <AWS_cluster_name> 1",
"export CLUSTER_VERSION=USD(oc get clusterversion version -o jsonpath='{.status.desired.version}{\"\\n\"}') export AWS_CLUSTER_ID=USD(oc get clusterversion version -o jsonpath='{.spec.clusterID}{\"\\n\"}') export OIDC_ENDPOINT=USD(oc get authentication.config.openshift.io cluster -o jsonpath='{.spec.serviceAccountIssuer}' | sed 's|^https://||') export REGION=USD(oc get infrastructures cluster -o jsonpath='{.status.platformStatus.aws.region}' --allow-missing-template-keys=false || echo us-east-2) export AWS_ACCOUNT_ID=USD(aws sts get-caller-identity --query Account --output text) export ROLE_NAME=\"USD{CLUSTER_NAME}-openshift-oadp-aws-cloud-credentials\"",
"export SCRATCH=\"/tmp/USD{CLUSTER_NAME}/oadp\" mkdir -p USD{SCRATCH}",
"echo \"Cluster ID: USD{AWS_CLUSTER_ID}, Region: USD{REGION}, OIDC Endpoint: USD{OIDC_ENDPOINT}, AWS Account ID: USD{AWS_ACCOUNT_ID}\"",
"export POLICY_NAME=\"OadpVer1\" 1",
"POLICY_ARN=USD(aws iam list-policies --query \"Policies[?PolicyName=='USDPOLICY_NAME'].{ARN:Arn}\" --output text)",
"if [[ -z \"USD{POLICY_ARN}\" ]]; then cat << EOF > USD{SCRATCH}/policy.json { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"s3:CreateBucket\", \"s3:DeleteBucket\", \"s3:PutBucketTagging\", \"s3:GetBucketTagging\", \"s3:PutEncryptionConfiguration\", \"s3:GetEncryptionConfiguration\", \"s3:PutLifecycleConfiguration\", \"s3:GetLifecycleConfiguration\", \"s3:GetBucketLocation\", \"s3:ListBucket\", \"s3:GetObject\", \"s3:PutObject\", \"s3:DeleteObject\", \"s3:ListBucketMultipartUploads\", \"s3:AbortMultipartUpload\", \"s3:ListMultipartUploadParts\", \"ec2:DescribeSnapshots\", \"ec2:DescribeVolumes\", \"ec2:DescribeVolumeAttribute\", \"ec2:DescribeVolumesModifications\", \"ec2:DescribeVolumeStatus\", \"ec2:CreateTags\", \"ec2:CreateVolume\", \"ec2:CreateSnapshot\", \"ec2:DeleteSnapshot\" ], \"Resource\": \"*\" } ]} EOF POLICY_ARN=USD(aws iam create-policy --policy-name USDPOLICY_NAME --policy-document file:///USD{SCRATCH}/policy.json --query Policy.Arn --tags Key=openshift_version,Value=USD{CLUSTER_VERSION} Key=operator_namespace,Value=openshift-adp Key=operator_name,Value=oadp --output text) 1 fi",
"echo USD{POLICY_ARN}",
"cat <<EOF > USD{SCRATCH}/trust-policy.json { \"Version\": \"2012-10-17\", \"Statement\": [{ \"Effect\": \"Allow\", \"Principal\": { \"Federated\": \"arn:aws:iam::USD{AWS_ACCOUNT_ID}:oidc-provider/USD{OIDC_ENDPOINT}\" }, \"Action\": \"sts:AssumeRoleWithWebIdentity\", \"Condition\": { \"StringEquals\": { \"USD{OIDC_ENDPOINT}:sub\": [ \"system:serviceaccount:openshift-adp:openshift-adp-controller-manager\", \"system:serviceaccount:openshift-adp:velero\"] } } }] } EOF",
"ROLE_ARN=USD(aws iam create-role --role-name \"USD{ROLE_NAME}\" --assume-role-policy-document file://USD{SCRATCH}/trust-policy.json --tags Key=cluster_id,Value=USD{AWS_CLUSTER_ID} Key=openshift_version,Value=USD{CLUSTER_VERSION} Key=operator_namespace,Value=openshift-adp Key=operator_name,Value=oadp --query Role.Arn --output text)",
"echo USD{ROLE_ARN}",
"aws iam attach-role-policy --role-name \"USD{ROLE_NAME}\" --policy-arn USD{POLICY_ARN}",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: configuration: velero: podConfig: nodeSelector: <node_selector> 1 resourceAllocations: 2 limits: cpu: \"1\" memory: 1024Mi requests: cpu: 200m memory: 256Mi",
"cat <<EOF > USD{SCRATCH}/credentials [default] role_arn = USD{ROLE_ARN} web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token EOF",
"oc create namespace openshift-adp",
"oc -n openshift-adp create secret generic cloud-credentials --from-file=USD{SCRATCH}/credentials",
"cat << EOF | oc create -f - apiVersion: oadp.openshift.io/v1alpha1 kind: CloudStorage metadata: name: USD{CLUSTER_NAME}-oadp namespace: openshift-adp spec: creationSecret: key: credentials name: cloud-credentials enableSharedConfig: true name: USD{CLUSTER_NAME}-oadp provider: aws region: USDREGION EOF",
"oc get pvc -n <namespace>",
"NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE applog Bound pvc-351791ae-b6ab-4e8b-88a4-30f73caf5ef8 1Gi RWO gp3-csi 4d19h mysql Bound pvc-16b8e009-a20a-4379-accc-bc81fedd0621 1Gi RWO gp3-csi 4d19h",
"oc get storageclass",
"NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE gp2 kubernetes.io/aws-ebs Delete WaitForFirstConsumer true 4d21h gp2-csi ebs.csi.aws.com Delete WaitForFirstConsumer true 4d21h gp3 ebs.csi.aws.com Delete WaitForFirstConsumer true 4d21h gp3-csi (default) ebs.csi.aws.com Delete WaitForFirstConsumer true 4d21h",
"cat << EOF | oc create -f - apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: USD{CLUSTER_NAME}-dpa namespace: openshift-adp spec: backupImages: true 1 features: dataMover: enable: false backupLocations: - bucket: cloudStorageRef: name: USD{CLUSTER_NAME}-oadp credential: key: credentials name: cloud-credentials prefix: velero default: true config: region: USD{REGION} configuration: velero: defaultPlugins: - openshift - aws - csi restic: enable: false EOF",
"cat << EOF | oc create -f - apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: USD{CLUSTER_NAME}-dpa namespace: openshift-adp spec: backupImages: true 1 features: dataMover: enable: false backupLocations: - bucket: cloudStorageRef: name: USD{CLUSTER_NAME}-oadp credential: key: credentials name: cloud-credentials prefix: velero default: true config: region: USD{REGION} configuration: velero: defaultPlugins: - openshift - aws nodeAgent: 2 enable: false uploaderType: restic snapshotLocations: - velero: config: credentialsFile: /tmp/credentials/openshift-adp/cloud-credentials-credentials 3 enableSharedConfig: \"true\" 4 profile: default 5 region: USD{REGION} 6 provider: aws EOF",
"nodeAgent: enable: false uploaderType: restic",
"restic: enable: false",
"oc create namespace hello-world",
"oc new-app -n hello-world --image=docker.io/openshift/hello-openshift",
"oc expose service/hello-openshift -n hello-world",
"curl `oc get route/hello-openshift -n hello-world -o jsonpath='{.spec.host}'`",
"Hello OpenShift!",
"cat << EOF | oc create -f - apiVersion: velero.io/v1 kind: Backup metadata: name: hello-world namespace: openshift-adp spec: includedNamespaces: - hello-world storageLocation: USD{CLUSTER_NAME}-dpa-1 ttl: 720h0m0s EOF",
"watch \"oc -n openshift-adp get backup hello-world -o json | jq .status\"",
"{ \"completionTimestamp\": \"2022-09-07T22:20:44Z\", \"expiration\": \"2022-10-07T22:20:22Z\", \"formatVersion\": \"1.1.0\", \"phase\": \"Completed\", \"progress\": { \"itemsBackedUp\": 58, \"totalItems\": 58 }, \"startTimestamp\": \"2022-09-07T22:20:22Z\", \"version\": 1 }",
"oc delete ns hello-world",
"cat << EOF | oc create -f - apiVersion: velero.io/v1 kind: Restore metadata: name: hello-world namespace: openshift-adp spec: backupName: hello-world EOF",
"watch \"oc -n openshift-adp get restore hello-world -o json | jq .status\"",
"{ \"completionTimestamp\": \"2022-09-07T22:25:47Z\", \"phase\": \"Completed\", \"progress\": { \"itemsRestored\": 38, \"totalItems\": 38 }, \"startTimestamp\": \"2022-09-07T22:25:28Z\", \"warnings\": 9 }",
"oc -n hello-world get pods",
"NAME READY STATUS RESTARTS AGE hello-openshift-9f885f7c6-kdjpj 1/1 Running 0 90s",
"curl `oc get route/hello-openshift -n hello-world -o jsonpath='{.spec.host}'`",
"Hello OpenShift!",
"oc delete ns hello-world",
"oc -n openshift-adp delete dpa USD{CLUSTER_NAME}-dpa",
"oc -n openshift-adp delete cloudstorage USD{CLUSTER_NAME}-oadp",
"oc -n openshift-adp patch cloudstorage USD{CLUSTER_NAME}-oadp -p '{\"metadata\":{\"finalizers\":null}}' --type=merge",
"oc -n openshift-adp delete subscription oadp-operator",
"oc delete ns openshift-adp",
"oc delete backups.velero.io hello-world",
"velero backup delete hello-world",
"for CRD in `oc get crds | grep velero | awk '{print USD1}'`; do oc delete crd USDCRD; done",
"aws s3 rm s3://USD{CLUSTER_NAME}-oadp --recursive",
"aws s3api delete-bucket --bucket USD{CLUSTER_NAME}-oadp",
"aws iam detach-role-policy --role-name \"USD{ROLE_NAME}\" --policy-arn \"USD{POLICY_ARN}\"",
"aws iam delete-role --role-name \"USD{ROLE_NAME}\"",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: dpa_sample namespace: openshift-adp spec: configuration: velero: defaultPlugins: - openshift - aws - csi resourceTimeout: 10m nodeAgent: enable: true uploaderType: kopia backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket_name> 1 prefix: <prefix> 2 config: region: <region> 3 profile: \"default\" s3ForcePathStyle: \"true\" s3Url: <s3_url> 4 credential: key: cloud name: cloud-credentials",
"oc create -f dpa.yaml",
"apiVersion: velero.io/v1 kind: Backup metadata: name: operator-install-backup namespace: openshift-adp spec: csiSnapshotTimeout: 10m0s defaultVolumesToFsBackup: false includedNamespaces: - threescale 1 includedResources: - operatorgroups - subscriptions - namespaces itemOperationTimeout: 1h0m0s snapshotMoveData: false ttl: 720h0m0s",
"oc create -f backup.yaml",
"apiVersion: velero.io/v1 kind: Backup metadata: name: operator-resources-secrets namespace: openshift-adp spec: csiSnapshotTimeout: 10m0s defaultVolumesToFsBackup: false includedNamespaces: - threescale includedResources: - secrets itemOperationTimeout: 1h0m0s labelSelector: matchLabels: app: 3scale-api-management snapshotMoveData: false snapshotVolumes: false ttl: 720h0m0s",
"oc create -f backup-secret.yaml",
"apiVersion: velero.io/v1 kind: Backup metadata: name: operator-resources-apim namespace: openshift-adp spec: csiSnapshotTimeout: 10m0s defaultVolumesToFsBackup: false includedNamespaces: - threescale includedResources: - apimanagers itemOperationTimeout: 1h0m0s snapshotMoveData: false snapshotVolumes: false storageLocation: ts-dpa-1 ttl: 720h0m0s volumeSnapshotLocations: - ts-dpa-1",
"oc create -f backup-apimanager.yaml",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: example-claim namespace: threescale spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi storageClassName: gp3-csi volumeMode: Filesystem",
"oc create -f ts_pvc.yml",
"oc edit deployment system-mysql -n threescale",
"volumeMounts: - name: example-claim mountPath: /var/lib/mysqldump/data - name: mysql-storage mountPath: /var/lib/mysql/data - name: mysql-extra-conf mountPath: /etc/my-extra.d - name: mysql-main-conf mountPath: /etc/my-extra serviceAccount: amp volumes: - name: example-claim persistentVolumeClaim: claimName: example-claim 1",
"apiVersion: velero.io/v1 kind: Backup metadata: name: mysql-backup namespace: openshift-adp spec: csiSnapshotTimeout: 10m0s defaultVolumesToFsBackup: true hooks: resources: - name: dumpdb pre: - exec: command: - /bin/sh - -c - mysqldump -u USDMYSQL_USER --password=USDMYSQL_PASSWORD system --no-tablespaces > /var/lib/mysqldump/data/dump.sql 1 container: system-mysql onError: Fail timeout: 5m includedNamespaces: 2 - threescale includedResources: - deployment - pods - replicationControllers - persistentvolumeclaims - persistentvolumes itemOperationTimeout: 1h0m0s labelSelector: matchLabels: app: 3scale-api-management threescale_component_element: mysql snapshotMoveData: false ttl: 720h0m0s",
"oc create -f mysql.yaml",
"oc get backups.velero.io mysql-backup",
"NAME STATUS CREATED NAMESPACE POD VOLUME UPLOADER TYPE STORAGE LOCATION AGE mysql-backup-4g7qn Completed 30s threescale system-mysql-2-9pr44 example-claim kopia ts-dpa-1 30s mysql-backup-smh85 Completed 23s threescale system-mysql-2-9pr44 mysql-storage kopia ts-dpa-1 30s",
"oc edit deployment backend-redis -n threescale",
"annotations: post.hook.backup.velero.io/command: >- [\"/bin/bash\", \"-c\", \"redis-cli CONFIG SET auto-aof-rewrite-percentage 100\"] pre.hook.backup.velero.io/command: >- [\"/bin/bash\", \"-c\", \"redis-cli CONFIG SET auto-aof-rewrite-percentage 0\"]",
"apiVersion: velero.io/v1 kind: Backup metadata: name: redis-backup namespace: openshift-adp spec: csiSnapshotTimeout: 10m0s defaultVolumesToFsBackup: true includedNamespaces: - threescale includedResources: - deployment - pods - replicationcontrollers - persistentvolumes - persistentvolumeclaims itemOperationTimeout: 1h0m0s labelSelector: matchLabels: app: 3scale-api-management threescale_component: backend threescale_component_element: redis snapshotMoveData: false snapshotVolumes: false ttl: 720h0m0s",
"oc get backups.velero.io redis-backup -o yaml",
"oc get backups.velero.io",
"oc delete project threescale",
"\"threescale\" project deleted successfully",
"apiVersion: velero.io/v1 kind: Restore metadata: name: operator-installation-restore namespace: openshift-adp spec: backupName: operator-install-backup excludedResources: - nodes - events - events.events.k8s.io - backups.velero.io - restores.velero.io - resticrepositories.velero.io - csinodes.storage.k8s.io - volumeattachments.storage.k8s.io - backuprepositories.velero.io itemOperationTimeout: 4h0m0s",
"oc create -f restore.yaml",
"oc apply -f - <<EOF --- apiVersion: v1 kind: Secret metadata: name: s3-credentials namespace: threescale stringData: AWS_ACCESS_KEY_ID: <ID_123456> 1 AWS_SECRET_ACCESS_KEY: <ID_98765544> 2 AWS_BUCKET: <mybucket.example.com> 3 AWS_REGION: <us-east-1> 4 type: Opaque EOF",
"oc scale deployment threescale-operator-controller-manager-v2 --replicas=0 -n threescale",
"apiVersion: velero.io/v1 kind: Restore metadata: name: operator-resources-secrets namespace: openshift-adp spec: backupName: operator-resources-secrets excludedResources: - nodes - events - events.events.k8s.io - backups.velero.io - restores.velero.io - resticrepositories.velero.io - csinodes.storage.k8s.io - volumeattachments.storage.k8s.io - backuprepositories.velero.io itemOperationTimeout: 4h0m0s",
"oc create -f restore-secrets.yaml",
"apiVersion: velero.io/v1 kind: Restore metadata: name: operator-resources-apim namespace: openshift-adp spec: backupName: operator-resources-apim excludedResources: 1 - nodes - events - events.events.k8s.io - backups.velero.io - restores.velero.io - resticrepositories.velero.io - csinodes.storage.k8s.io - volumeattachments.storage.k8s.io - backuprepositories.velero.io itemOperationTimeout: 4h0m0s",
"oc create -f restore-apimanager.yaml",
"oc scale deployment threescale-operator-controller-manager-v2 --replicas=1 -n threescale",
"oc scale deployment threescale-operator-controller-manager-v2 --replicas=0 -n threescale",
"deployment.apps/threescale-operator-controller-manager-v2 scaled",
"vi ./scaledowndeployment.sh",
"for deployment in apicast-production apicast-staging backend-cron backend-listener backend-redis backend-worker system-app system-memcache system-mysql system-redis system-searchd system-sidekiq zync zync-database zync-que; do oc scale deployment/USDdeployment --replicas=0 -n threescale done",
"./scaledowndeployment.sh",
"deployment.apps.openshift.io/apicast-production scaled deployment.apps.openshift.io/apicast-staging scaled deployment.apps.openshift.io/backend-cron scaled deployment.apps.openshift.io/backend-listener scaled deployment.apps.openshift.io/backend-redis scaled deployment.apps.openshift.io/backend-worker scaled deployment.apps.openshift.io/system-app scaled deployment.apps.openshift.io/system-memcache scaled deployment.apps.openshift.io/system-mysql scaled deployment.apps.openshift.io/system-redis scaled deployment.apps.openshift.io/system-searchd scaled deployment.apps.openshift.io/system-sidekiq scaled deployment.apps.openshift.io/zync scaled deployment.apps.openshift.io/zync-database scaled deployment.apps.openshift.io/zync-que scaled",
"oc delete deployment system-mysql -n threescale",
"Warning: apps.openshift.io/v1 deployment is deprecated in v4.14+, unavailable in v4.10000+ deployment.apps.openshift.io \"system-mysql\" deleted",
"apiVersion: velero.io/v1 kind: Restore metadata: name: restore-mysql namespace: openshift-adp spec: backupName: mysql-backup excludedResources: - nodes - events - events.events.k8s.io - backups.velero.io - restores.velero.io - csinodes.storage.k8s.io - volumeattachments.storage.k8s.io - backuprepositories.velero.io - resticrepositories.velero.io hooks: resources: - name: restoreDB postHooks: - exec: command: - /bin/sh - '-c' - > sleep 30 mysql -h 127.0.0.1 -D system -u root --password=USDMYSQL_ROOT_PASSWORD < /var/lib/mysqldump/data/dump.sql 1 container: system-mysql execTimeout: 80s onError: Fail waitTimeout: 5m itemOperationTimeout: 1h0m0s restorePVs: true",
"oc create -f restore-mysql.yaml",
"oc get podvolumerestores.velero.io -n openshift-adp",
"NAME NAMESPACE POD UPLOADER TYPE VOLUME STATUS TOTALBYTES BYTESDONE AGE restore-mysql-rbzvm threescale system-mysql-2-kjkhl kopia mysql-storage Completed 771879108 771879108 40m restore-mysql-z7x7l threescale system-mysql-2-kjkhl kopia example-claim Completed 380415 380415 40m",
"oc get pvc -n threescale",
"NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE backend-redis-storage Bound pvc-3dca410d-3b9f-49d4-aebf-75f47152e09d 1Gi RWO gp3-csi <unset> 68m example-claim Bound pvc-cbaa49b0-06cd-4b1a-9e90-0ef755c67a54 1Gi RWO gp3-csi <unset> 57m mysql-storage Bound pvc-4549649f-b9ad-44f7-8f67-dd6b9dbb3896 1Gi RWO gp3-csi <unset> 68m system-redis-storage Bound pvc-04dadafd-8a3e-4d00-8381-6041800a24fc 1Gi RWO gp3-csi <unset> 68m system-searchd Bound pvc-afbf606c-d4a8-4041-8ec6-54c5baf1a3b9 1Gi RWO gp3-csi <unset> 68m",
"oc delete deployment backend-redis -n threescale",
"Warning: apps.openshift.io/v1 deployment is deprecated in v4.14+, unavailable in v4.10000+ deployment.apps.openshift.io \"backend-redis\" deleted",
"apiVersion: velero.io/v1 kind: Restore metadata: name: restore-backend namespace: openshift-adp spec: backupName: redis-backup excludedResources: - nodes - events - events.events.k8s.io - backups.velero.io - restores.velero.io - resticrepositories.velero.io - csinodes.storage.k8s.io - volumeattachments.storage.k8s.io - backuprepositories.velero.io itemOperationTimeout: 1h0m0s restorePVs: true",
"oc create -f restore-backend.yaml",
"oc get podvolumerestores.velero.io -n openshift-adp",
"NAME NAMESPACE POD UPLOADER TYPE VOLUME STATUS TOTALBYTES BYTESDONE AGE restore-backend-jmrwx threescale backend-redis-1-bsfmv kopia backend-redis-storage Completed 76123 76123 21m",
"oc scale deployment threescale-operator-controller-manager-v2 --replicas=1 -n threescale",
"oc get deployment -n threescale",
"./scaledeployment.sh",
"oc get routes -n threescale",
"NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD backend backend-3scale.apps.custom-cluster-name.openshift.com backend-listener http edge/Allow None zync-3scale-api-b4l4d api-3scale-apicast-production.apps.custom-cluster-name.openshift.com apicast-production gateway edge/Redirect None zync-3scale-api-b6sns api-3scale-apicast-staging.apps.custom-cluster-name.openshift.com apicast-staging gateway edge/Redirect None zync-3scale-master-7sc4j master.apps.custom-cluster-name.openshift.com system-master http edge/Redirect None zync-3scale-provider-7r2nm 3scale-admin.apps.custom-cluster-name.openshift.com system-provider http edge/Redirect None zync-3scale-provider-mjxlb 3scale.apps.custom-cluster-name.openshift.com system-developer http edge/Redirect None",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: dpa-sample spec: configuration: nodeAgent: enable: true 1 uploaderType: kopia 2 velero: defaultPlugins: - openshift - aws - csi 3 defaultSnapshotMoveData: true defaultVolumesToFSBackup: 4 featureFlags: - EnableCSI",
"kind: Backup apiVersion: velero.io/v1 metadata: name: backup namespace: openshift-adp spec: csiSnapshotTimeout: 10m0s defaultVolumesToFsBackup: 1 includedNamespaces: - mysql-persistent itemOperationTimeout: 4h0m0s snapshotMoveData: true 2 storageLocation: default ttl: 720h0m0s 3 volumeSnapshotLocations: - dpa-sample-1",
"Error: relabel failed /var/lib/kubelet/pods/3ac..34/volumes/ kubernetes.io~csi/pvc-684..12c/mount: lsetxattr /var/lib/kubelet/ pods/3ac..34/volumes/kubernetes.io~csi/pvc-68..2c/mount/data-xfs-103: no space left on device",
"oc create -f backup.yaml",
"oc get datauploads -A",
"NAMESPACE NAME STATUS STARTED BYTES DONE TOTAL BYTES STORAGE LOCATION AGE NODE openshift-adp backup-test-1-sw76b Completed 9m47s 108104082 108104082 dpa-sample-1 9m47s ip-10-0-150-57.us-west-2.compute.internal openshift-adp mongo-block-7dtpf Completed 14m 1073741824 1073741824 dpa-sample-1 14m ip-10-0-150-57.us-west-2.compute.internal",
"oc get datauploads <dataupload_name> -o yaml",
"apiVersion: velero.io/v2alpha1 kind: DataUpload metadata: name: backup-test-1-sw76b namespace: openshift-adp spec: backupStorageLocation: dpa-sample-1 csiSnapshot: snapshotClass: \"\" storageClass: gp3-csi volumeSnapshot: velero-mysql-fq8sl operationTimeout: 10m0s snapshotType: CSI sourceNamespace: mysql-persistent sourcePVC: mysql status: completionTimestamp: \"2023-11-02T16:57:02Z\" node: ip-10-0-150-57.us-west-2.compute.internal path: /host_pods/15116bac-cc01-4d9b-8ee7-609c3bef6bde/volumes/kubernetes.io~csi/pvc-eead8167-556b-461a-b3ec-441749e291c4/mount phase: Completed 1 progress: bytesDone: 108104082 totalBytes: 108104082 snapshotID: 8da1c5febf25225f4577ada2aeb9f899 startTimestamp: \"2023-11-02T16:56:22Z\"",
"apiVersion: velero.io/v1 kind: Restore metadata: name: restore namespace: openshift-adp spec: backupName: <backup>",
"oc create -f restore.yaml",
"oc get datadownloads -A",
"NAMESPACE NAME STATUS STARTED BYTES DONE TOTAL BYTES STORAGE LOCATION AGE NODE openshift-adp restore-test-1-sk7lg Completed 7m11s 108104082 108104082 dpa-sample-1 7m11s ip-10-0-150-57.us-west-2.compute.internal",
"oc get datadownloads <datadownload_name> -o yaml",
"apiVersion: velero.io/v2alpha1 kind: DataDownload metadata: name: restore-test-1-sk7lg namespace: openshift-adp spec: backupStorageLocation: dpa-sample-1 operationTimeout: 10m0s snapshotID: 8da1c5febf25225f4577ada2aeb9f899 sourceNamespace: mysql-persistent targetVolume: namespace: mysql-persistent pv: \"\" pvc: mysql status: completionTimestamp: \"2023-11-02T17:01:24Z\" node: ip-10-0-150-57.us-west-2.compute.internal phase: Completed 1 progress: bytesDone: 108104082 totalBytes: 108104082 startTimestamp: \"2023-11-02T17:00:52Z\"",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication # configuration: nodeAgent: enable: true 1 uploaderType: kopia 2 velero: defaultPlugins: - openshift - aws - csi 3 defaultSnapshotMoveData: true podConfig: env: - name: KOPIA_HASHING_ALGORITHM value: <hashing_algorithm_name> 4 - name: KOPIA_ENCRYPTION_ALGORITHM value: <encryption_algorithm_name> 5 - name: KOPIA_SPLITTER_ALGORITHM value: <splitter_algorithm_name> 6",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_name> 1 namespace: openshift-adp spec: backupLocations: - name: aws velero: config: profile: default region: <region_name> 2 credential: key: cloud name: cloud-credentials 3 default: true objectStorage: bucket: <bucket_name> 4 prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - openshift - aws - csi 5 defaultSnapshotMoveData: true podConfig: env: - name: KOPIA_HASHING_ALGORITHM value: BLAKE3-256 6 - name: KOPIA_ENCRYPTION_ALGORITHM value: CHACHA20-POLY1305-HMAC-SHA256 7 - name: KOPIA_SPLITTER_ALGORITHM value: DYNAMIC-8M-RABINKARP 8",
"oc create -f <dpa_file_name> 1",
"oc get dpa -o yaml",
"apiVersion: velero.io/v1 kind: Backup metadata: name: test-backup namespace: openshift-adp spec: includedNamespaces: - <application_namespace> 1 defaultVolumesToFsBackup: true",
"oc apply -f <backup_file_name> 1",
"oc get backups.velero.io <backup_name> -o yaml 1",
"kopia repository connect s3 --bucket=<bucket_name> \\ 1 --prefix=velero/kopia/<application_namespace> \\ 2 --password=static-passw0rd \\ 3 --access-key=\"<aws_s3_access_key>\" \\ 4 --secret-access-key=\"<aws_s3_secret_access_key>\" \\ 5",
"kopia repository status",
"Config file: /../.config/kopia/repository.config Description: Repository in S3: s3.amazonaws.com <bucket_name> Storage type: s3 Storage capacity: unbounded Storage config: { \"bucket\": <bucket_name>, \"prefix\": \"velero/kopia/<application_namespace>/\", \"endpoint\": \"s3.amazonaws.com\", \"accessKeyID\": <access_key>, \"secretAccessKey\": \"****************************************\", \"sessionToken\": \"\" } Unique ID: 58....aeb0 Hash: BLAKE3-256 Encryption: CHACHA20-POLY1305-HMAC-SHA256 Splitter: DYNAMIC-8M-RABINKARP Format version: 3",
"apiVersion: v1 kind: Pod metadata: name: oadp-mustgather-pod labels: purpose: user-interaction spec: containers: - name: oadp-mustgather-container image: registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.3 command: [\"sleep\"] args: [\"infinity\"]",
"oc apply -f <pod_config_file_name> 1",
"oc describe pod/oadp-mustgather-pod | grep scc",
"openshift.io/scc: anyuid",
"oc -n openshift-adp rsh pod/oadp-mustgather-pod",
"sh-5.1# kopia repository connect s3 --bucket=<bucket_name> \\ 1 --prefix=velero/kopia/<application_namespace> \\ 2 --password=static-passw0rd \\ 3 --access-key=\"<access_key>\" \\ 4 --secret-access-key=\"<secret_access_key>\" \\ 5 --endpoint=<bucket_endpoint> \\ 6",
"sh-5.1# kopia benchmark hashing",
"Benchmarking hash 'BLAKE2B-256' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'BLAKE2B-256-128' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'BLAKE2S-128' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'BLAKE2S-256' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'BLAKE3-256' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'BLAKE3-256-128' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'HMAC-SHA224' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'HMAC-SHA256' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'HMAC-SHA256-128' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'HMAC-SHA3-224' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'HMAC-SHA3-256' (100 x 1048576 bytes, parallelism 1) Hash Throughput ----------------------------------------------------------------- 0. BLAKE3-256 15.3 GB / second 1. BLAKE3-256-128 15.2 GB / second 2. HMAC-SHA256-128 6.4 GB / second 3. HMAC-SHA256 6.4 GB / second 4. HMAC-SHA224 6.4 GB / second 5. BLAKE2B-256-128 4.2 GB / second 6. BLAKE2B-256 4.1 GB / second 7. BLAKE2S-256 2.9 GB / second 8. BLAKE2S-128 2.9 GB / second 9. HMAC-SHA3-224 1.6 GB / second 10. HMAC-SHA3-256 1.5 GB / second ----------------------------------------------------------------- Fastest option for this machine is: --block-hash=BLAKE3-256",
"sh-5.1# kopia benchmark encryption",
"Benchmarking encryption 'AES256-GCM-HMAC-SHA256'... (1000 x 1048576 bytes, parallelism 1) Benchmarking encryption 'CHACHA20-POLY1305-HMAC-SHA256'... (1000 x 1048576 bytes, parallelism 1) Encryption Throughput ----------------------------------------------------------------- 0. AES256-GCM-HMAC-SHA256 2.2 GB / second 1. CHACHA20-POLY1305-HMAC-SHA256 1.8 GB / second ----------------------------------------------------------------- Fastest option for this machine is: --encryption=AES256-GCM-HMAC-SHA256",
"sh-5.1# kopia benchmark splitter",
"splitting 16 blocks of 32MiB each, parallelism 1 DYNAMIC 747.6 MB/s count:107 min:9467 10th:2277562 25th:2971794 50th:4747177 75th:7603998 90th:8388608 max:8388608 DYNAMIC-128K-BUZHASH 718.5 MB/s count:3183 min:3076 10th:80896 25th:104312 50th:157621 75th:249115 90th:262144 max:262144 DYNAMIC-128K-RABINKARP 164.4 MB/s count:3160 min:9667 10th:80098 25th:106626 50th:162269 75th:250655 90th:262144 max:262144 FIXED-512K 102.9 TB/s count:1024 min:524288 10th:524288 25th:524288 50th:524288 75th:524288 90th:524288 max:524288 FIXED-8M 566.3 TB/s count:64 min:8388608 10th:8388608 25th:8388608 50th:8388608 75th:8388608 90th:8388608 max:8388608 ----------------------------------------------------------------- 0. FIXED-8M 566.3 TB/s count:64 min:8388608 10th:8388608 25th:8388608 50th:8388608 75th:8388608 90th:8388608 max:8388608 1. FIXED-4M 425.8 TB/s count:128 min:4194304 10th:4194304 25th:4194304 50th:4194304 75th:4194304 90th:4194304 max:4194304 # 22. DYNAMIC-128K-RABINKARP 164.4 MB/s count:3160 min:9667 10th:80098 25th:106626 50th:162269 75th:250655 90th:262144 max:262144",
"alias velero='oc -n openshift-adp exec deployment/velero -c velero -it -- ./velero'",
"oc describe <velero_cr> <cr_name>",
"oc logs pod/<velero>",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: velero-sample spec: configuration: velero: logLevel: warning",
"oc -n openshift-adp exec deployment/velero -c velero -- ./velero <backup_restore_cr> <command> <cr_name>",
"oc -n openshift-adp exec deployment/velero -c velero -- ./velero backup describe 0e44ae00-5dc3-11eb-9ca8-df7e5254778b-2d8ql",
"oc -n openshift-adp exec deployment/velero -c velero -- ./velero --help",
"oc -n openshift-adp exec deployment/velero -c velero -- ./velero <backup_restore_cr> describe <cr_name>",
"oc -n openshift-adp exec deployment/velero -c velero -- ./velero backup describe 0e44ae00-5dc3-11eb-9ca8-df7e5254778b-2d8ql",
"oc -n openshift-adp exec deployment/velero -c velero -- ./velero <backup_restore_cr> logs <cr_name>",
"oc -n openshift-adp exec deployment/velero -c velero -- ./velero restore logs ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication configuration: velero: podConfig: resourceAllocations: 1 requests: cpu: 200m memory: 256Mi",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication configuration: restic: podConfig: resourceAllocations: 1 requests: cpu: 1000m memory: 16Gi",
"requests: cpu: 500m memory: 128Mi",
"Velero: pod volume restore failed: data path restore failed: Failed to run kopia restore: Failed to copy snapshot data to the target: restore error: copy file: error creating file: open /host_pods/b4d...6/volumes/kubernetes.io~nfs/pvc-53...4e5/userdata/base/13493/2681: no such file or directory",
"apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: nfs-client provisioner: k8s-sigs.io/nfs-subdir-external-provisioner parameters: pathPattern: \"USD{.PVC.namespace}/USD{.PVC.annotations.nfs.io/storage-path}\" 1 onDelete: delete",
"velero restore <restore_name> --from-backup=<backup_name> --include-resources service.serving.knavtive.dev",
"oc get mutatingwebhookconfigurations",
"024-02-27T10:46:50.028951744Z time=\"2024-02-27T10:46:50Z\" level=error msg=\"Error backing up item\" backup=openshift-adp/<backup name> error=\"error executing custom action (groupResource=imagestreams.image.openshift.io, namespace=<BSL Name>, name=postgres): rpc error: code = Aborted desc = plugin panicked: runtime error: index out of range with length 1, stack trace: goroutine 94...",
"oc label backupstoragelocations.velero.io <bsl_name> app.kubernetes.io/component=bsl",
"oc -n openshift-adp get secret/oadp-<bsl_name>-<bsl_provider>-registry-secret -o json | jq -r '.data'",
"[default] 1 aws_access_key_id=AKIAIOSFODNN7EXAMPLE 2 aws_secret_access_key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
"oc get backupstoragelocations.velero.io -A",
"velero backup-location get -n <OADP_Operator_namespace>",
"oc get backupstoragelocations.velero.io -n <namespace> -o yaml",
"apiVersion: v1 items: - apiVersion: velero.io/v1 kind: BackupStorageLocation metadata: creationTimestamp: \"2023-11-03T19:49:04Z\" generation: 9703 name: example-dpa-1 namespace: openshift-adp-operator ownerReferences: - apiVersion: oadp.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: DataProtectionApplication name: example-dpa uid: 0beeeaff-0287-4f32-bcb1-2e3c921b6e82 resourceVersion: \"24273698\" uid: ba37cd15-cf17-4f7d-bf03-8af8655cea83 spec: config: enableSharedConfig: \"true\" region: us-west-2 credential: key: credentials name: cloud-credentials default: true objectStorage: bucket: example-oadp-operator prefix: example provider: aws status: lastValidationTime: \"2023-11-10T22:06:46Z\" message: \"BackupStorageLocation \\\"example-dpa-1\\\" is unavailable: rpc error: code = Unknown desc = WebIdentityErr: failed to retrieve credentials\\ncaused by: AccessDenied: Not authorized to perform sts:AssumeRoleWithWebIdentity\\n\\tstatus code: 403, request id: d3f2e099-70a0-467b-997e-ff62345e3b54\" phase: Unavailable kind: List metadata: resourceVersion: \"\"",
"level=error msg=\"Error backing up item\" backup=velero/monitoring error=\"timed out waiting for all PodVolumeBackups to complete\"",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_name> spec: configuration: nodeAgent: enable: true uploaderType: restic timeout: 1h",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_name> spec: configuration: velero: resourceTimeout: 10m",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_name> spec: features: dataMover: timeout: 10m",
"apiVersion: velero.io/v1 kind: Backup metadata: name: <backup_name> spec: csiSnapshotTimeout: 10m",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_name> spec: configuration: velero: defaultItemOperationTimeout: 1h",
"apiVersion: velero.io/v1 kind: Restore metadata: name: <restore_name> spec: itemOperationTimeout: 1h",
"apiVersion: velero.io/v1 kind: Backup metadata: name: <backup_name> spec: itemOperationTimeout: 1h",
"oc -n {namespace} exec deployment/velero -c velero -- ./velero backup describe <backup>",
"oc delete backups.velero.io <backup> -n openshift-adp",
"velero backup describe <backup-name> --details",
"time=\"2023-02-17T16:33:13Z\" level=error msg=\"Error backing up item\" backup=openshift-adp/user1-backup-check5 error=\"error executing custom action (groupResource=persistentvolumeclaims, namespace=busy1, name=pvc1-user1): rpc error: code = Unknown desc = failed to get volumesnapshotclass for storageclass ocs-storagecluster-ceph-rbd: failed to get volumesnapshotclass for provisioner openshift-storage.rbd.csi.ceph.com, ensure that the desired volumesnapshot class has the velero.io/csi-volumesnapshot-class label\" logSource=\"/remote-source/velero/app/pkg/backup/backup.go:417\" name=busybox-79799557b5-vprq",
"oc delete backups.velero.io <backup> -n openshift-adp",
"oc label volumesnapshotclass/<snapclass_name> velero.io/csi-volumesnapshot-class=true",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication spec: configuration: nodeAgent: enable: true uploaderType: restic supplementalGroups: - <group_id> 1",
"oc delete resticrepository openshift-adp <name_of_the_restic_repository>",
"time=\"2021-12-29T18:29:14Z\" level=info msg=\"1 errors encountered backup up item\" backup=velero/backup65 logSource=\"pkg/backup/backup.go:431\" name=mysql-7d99fc949-qbkds time=\"2021-12-29T18:29:14Z\" level=error msg=\"Error backing up item\" backup=velero/backup65 error=\"pod volume backup failed: error running restic backup, stderr=Fatal: unable to open config file: Stat: The specified key does not exist.\\nIs there a repository at the following location?\\ns3:http://minio-minio.apps.mayap-oadp- veleo-1234.qe.devcluster.openshift.com/mayapvelerooadp2/velero1/ restic/ mysql-persistent \\n: exit status 1\" error.file=\"/remote-source/ src/github.com/vmware-tanzu/velero/pkg/restic/backupper.go:184\" error.function=\"github.com/vmware-tanzu/velero/ pkg/restic.(*backupper).BackupPodVolumes\" logSource=\"pkg/backup/backup.go:435\" name=mysql-7d99fc949-qbkds",
"\\\"level=error\\\" in line#2273: time=\\\"2023-06-12T06:50:04Z\\\" level=error msg=\\\"error restoring mysql-869f9f44f6-tp5lv: pods\\\\ \"mysql-869f9f44f6-tp5lv\\\\\\\" is forbidden: violates PodSecurity\\\\ \"restricted:v1.24\\\\\\\": privil eged (container \\\\\\\"mysql\\\\ \" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers \\\\ \"restic-wait\\\\\\\", \\\\\\\"mysql\\\\\\\" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers \\\\ \"restic-wait\\\\\\\", \\\\\\\"mysql\\\\\\\" must set securityContext.capabilities.drop=[\\\\\\\"ALL\\\\\\\"]), seccompProfile (pod or containers \\\\ \"restic-wait\\\\\\\", \\\\\\\"mysql\\\\\\\" must set securityContext.seccompProfile.type to \\\\ \"RuntimeDefault\\\\\\\" or \\\\\\\"Localhost\\\\\\\")\\\" logSource=\\\"/remote-source/velero/app/pkg/restore/restore.go:1388\\\" restore=openshift-adp/todolist-backup-0780518c-08ed-11ee-805c-0a580a80e92c\\n velero container contains \\\"level=error\\\" in line#2447: time=\\\"2023-06-12T06:50:05Z\\\" level=error msg=\\\"Namespace todolist-mariadb, resource restore error: error restoring pods/todolist-mariadb/mysql-869f9f44f6-tp5lv: pods \\\\ \"mysql-869f9f44f6-tp5lv\\\\\\\" is forbidden: violates PodSecurity \\\\\\\"restricted:v1.24\\\\\\\": privileged (container \\\\ \"mysql\\\\\\\" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers \\\\ \"restic-wait\\\\\\\",\\\\\\\"mysql\\\\\\\" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers \\\\ \"restic-wait\\\\\\\", \\\\\\\"mysql\\\\\\\" must set securityContext.capabilities.drop=[\\\\\\\"ALL\\\\\\\"]), seccompProfile (pod or containers \\\\ \"restic-wait\\\\\\\", \\\\\\\"mysql\\\\\\\" must set securityContext.seccompProfile.type to \\\\ \"RuntimeDefault\\\\\\\" or \\\\\\\"Localhost\\\\\\\")\\\" logSource=\\\"/remote-source/velero/app/pkg/controller/restore_controller.go:510\\\" restore=openshift-adp/todolist-backup-0780518c-08ed-11ee-805c-0a580a80e92c\\n]\",",
"oc get dpa -o yaml",
"configuration: restic: enable: true velero: args: restore-resource-priorities: 'securitycontextconstraints,customresourcedefinitions,namespaces,storageclasses,volumesnapshotclass.snapshot.storage.k8s.io,volumesnapshotcontents.snapshot.storage.k8s.io,volumesnapshots.snapshot.storage.k8s.io,datauploads.velero.io,persistentvolumes,persistentvolumeclaims,serviceaccounts,secrets,configmaps,limitranges,pods,replicasets.apps,clusterclasses.cluster.x-k8s.io,endpoints,services,-,clusterbootstraps.run.tanzu.vmware.com,clusters.cluster.x-k8s.io,clusterresourcesets.addons.cluster.x-k8s.io' 1 defaultPlugins: - gcp - openshift",
"oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.4",
"oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.4 -- /usr/bin/gather_<time>_essential 1",
"oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.4 -- /usr/bin/gather_with_timeout <timeout> 1",
"oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.4 -- /usr/bin/gather_metrics_dump",
"oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.4 -- /usr/bin/gather_without_tls <true/false>",
"oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.4 -- skip_tls=true /usr/bin/gather_with_timeout <timeout_value_in_seconds>",
"oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.4 -- /usr/bin/gather_without_tls true",
"oc edit configmap cluster-monitoring-config -n openshift-monitoring",
"apiVersion: v1 data: config.yaml: | enableUserWorkload: true 1 kind: ConfigMap metadata:",
"oc get pods -n openshift-user-workload-monitoring",
"NAME READY STATUS RESTARTS AGE prometheus-operator-6844b4b99c-b57j9 2/2 Running 0 43s prometheus-user-workload-0 5/5 Running 0 32s prometheus-user-workload-1 5/5 Running 0 32s thanos-ruler-user-workload-0 3/3 Running 0 32s thanos-ruler-user-workload-1 3/3 Running 0 32s",
"oc get configmap user-workload-monitoring-config -n openshift-user-workload-monitoring",
"Error from server (NotFound): configmaps \"user-workload-monitoring-config\" not found",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: |",
"oc apply -f 2_configure_user_workload_monitoring.yaml configmap/user-workload-monitoring-config created",
"oc get svc -n openshift-adp -l app.kubernetes.io/name=velero",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE openshift-adp-velero-metrics-svc ClusterIP 172.30.38.244 <none> 8085/TCP 1h",
"apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: labels: app: oadp-service-monitor name: oadp-service-monitor namespace: openshift-adp spec: endpoints: - interval: 30s path: /metrics targetPort: 8085 scheme: http selector: matchLabels: app.kubernetes.io/name: \"velero\"",
"oc apply -f 3_create_oadp_service_monitor.yaml",
"servicemonitor.monitoring.coreos.com/oadp-service-monitor created",
"apiVersion: monitoring.coreos.com/v1 kind: PrometheusRule metadata: name: sample-oadp-alert namespace: openshift-adp spec: groups: - name: sample-oadp-backup-alert rules: - alert: OADPBackupFailing annotations: description: 'OADP had {{USDvalue | humanize}} backup failures over the last 2 hours.' summary: OADP has issues creating backups expr: | increase(velero_backup_failure_total{job=\"openshift-adp-velero-metrics-svc\"}[2h]) > 0 for: 5m labels: severity: warning",
"oc apply -f 4_create_oadp_alert_rule.yaml",
"prometheusrule.monitoring.coreos.com/sample-oadp-alert created",
"oc label node/<node_name> node-role.kubernetes.io/nodeAgent=\"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/nodeAgent: \"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/infra: \"\" node-role.kubernetes.io/worker: \"\"",
"oc api-resources",
"apiVersion: oadp.openshift.io/vialpha1 kind: DataProtectionApplication spec: configuration: velero: featureFlags: - EnableAPIGroupVersions",
"oc -n <your_pod_namespace> annotate pod/<your_pod_name> backup.velero.io/backup-volumes=<your_volume_name_1>, \\ <your_volume_name_2>>,...,<your_volume_name_n>",
"oc -n <your_pod_namespace> annotate pod/<your_pod_name> backup.velero.io/backup-volumes-excludes=<your_volume_name_1>, \\ <your_volume_name_2>>,...,<your_volume_name_n>",
"velero backup create <backup_name> --default-volumes-to-fs-backup <any_other_options>",
"cat change-storageclass.yaml",
"apiVersion: v1 kind: ConfigMap metadata: name: change-storage-class-config namespace: openshift-adp labels: velero.io/plugin-config: \"\" velero.io/change-storage-class: RestoreItemAction data: standard-csi: ssd-csi",
"oc create -f change-storage-class-config",
"oc debug --as-root node/<node_name>",
"sh-4.4# chroot /host",
"export HTTP_PROXY=http://<your_proxy.example.com>:8080",
"export HTTPS_PROXY=https://<your_proxy.example.com>:8080",
"export NO_PROXY=<example.com>",
"sh-4.4# /usr/local/bin/cluster-backup.sh /home/core/assets/backup",
"found latest kube-apiserver: /etc/kubernetes/static-pod-resources/kube-apiserver-pod-6 found latest kube-controller-manager: /etc/kubernetes/static-pod-resources/kube-controller-manager-pod-7 found latest kube-scheduler: /etc/kubernetes/static-pod-resources/kube-scheduler-pod-6 found latest etcd: /etc/kubernetes/static-pod-resources/etcd-pod-3 ede95fe6b88b87ba86a03c15e669fb4aa5bf0991c180d3c6895ce72eaade54a1 etcdctl version: 3.4.14 API version: 3.4 {\"level\":\"info\",\"ts\":1624647639.0188997,\"caller\":\"snapshot/v3_snapshot.go:119\",\"msg\":\"created temporary db file\",\"path\":\"/home/core/assets/backup/snapshot_2021-06-25_190035.db.part\"} {\"level\":\"info\",\"ts\":\"2021-06-25T19:00:39.030Z\",\"caller\":\"clientv3/maintenance.go:200\",\"msg\":\"opened snapshot stream; downloading\"} {\"level\":\"info\",\"ts\":1624647639.0301006,\"caller\":\"snapshot/v3_snapshot.go:127\",\"msg\":\"fetching snapshot\",\"endpoint\":\"https://10.0.0.5:2379\"} {\"level\":\"info\",\"ts\":\"2021-06-25T19:00:40.215Z\",\"caller\":\"clientv3/maintenance.go:208\",\"msg\":\"completed snapshot read; closing\"} {\"level\":\"info\",\"ts\":1624647640.6032252,\"caller\":\"snapshot/v3_snapshot.go:142\",\"msg\":\"fetched snapshot\",\"endpoint\":\"https://10.0.0.5:2379\",\"size\":\"114 MB\",\"took\":1.584090459} {\"level\":\"info\",\"ts\":1624647640.6047094,\"caller\":\"snapshot/v3_snapshot.go:152\",\"msg\":\"saved\",\"path\":\"/home/core/assets/backup/snapshot_2021-06-25_190035.db\"} Snapshot saved at /home/core/assets/backup/snapshot_2021-06-25_190035.db {\"hash\":3866667823,\"revision\":31407,\"totalKey\":12828,\"totalSize\":114446336} snapshot db and kube resources are successfully saved to /home/core/assets/backup",
"apiVersion: config.openshift.io/v1 kind: FeatureGate metadata: name: cluster spec: featureSet: TechPreviewNoUpgrade",
"oc apply -f enable-tech-preview-no-upgrade.yaml",
"oc get crd | grep backup",
"backups.config.openshift.io 2023-10-25T13:32:43Z etcdbackups.operator.openshift.io 2023-10-25T13:32:04Z",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: etcd-backup-pvc namespace: openshift-etcd spec: accessModes: - ReadWriteOnce resources: requests: storage: 200Gi 1 volumeMode: Filesystem",
"oc apply -f etcd-backup-pvc.yaml",
"oc get pvc",
"NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE etcd-backup-pvc Bound 51s",
"apiVersion: operator.openshift.io/v1alpha1 kind: EtcdBackup metadata: name: etcd-single-backup namespace: openshift-etcd spec: pvcName: etcd-backup-pvc 1",
"oc apply -f etcd-single-backup.yaml",
"apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: etcd-backup-local-storage provisioner: kubernetes.io/no-provisioner volumeBindingMode: Immediate",
"oc apply -f etcd-backup-local-storage.yaml",
"apiVersion: v1 kind: PersistentVolume metadata: name: etcd-backup-pv-fs spec: capacity: storage: 100Gi 1 volumeMode: Filesystem accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Retain storageClassName: etcd-backup-local-storage local: path: /mnt nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - <example_master_node> 2",
"oc get pv",
"NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE etcd-backup-pv-fs 100Gi RWO Retain Available etcd-backup-local-storage 10s",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: etcd-backup-pvc namespace: openshift-etcd spec: accessModes: - ReadWriteOnce volumeMode: Filesystem resources: requests: storage: 10Gi 1",
"oc apply -f etcd-backup-pvc.yaml",
"apiVersion: operator.openshift.io/v1alpha1 kind: EtcdBackup metadata: name: etcd-single-backup namespace: openshift-etcd spec: pvcName: etcd-backup-pvc 1",
"oc apply -f etcd-single-backup.yaml",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: etcd-backup-pvc namespace: openshift-etcd spec: accessModes: - ReadWriteOnce resources: requests: storage: 200Gi 1 volumeMode: Filesystem storageClassName: etcd-backup-local-storage",
"oc apply -f etcd-backup-pvc.yaml",
"oc get pvc",
"NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE etcd-backup-pvc Bound 51s",
"apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: etcd-backup-local-storage provisioner: kubernetes.io/no-provisioner volumeBindingMode: Immediate",
"oc apply -f etcd-backup-local-storage.yaml",
"apiVersion: v1 kind: PersistentVolume metadata: name: etcd-backup-pv-fs spec: capacity: storage: 100Gi 1 volumeMode: Filesystem accessModes: - ReadWriteMany persistentVolumeReclaimPolicy: Delete storageClassName: etcd-backup-local-storage local: path: /mnt/ nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - <example_master_node> 2",
"oc get nodes",
"oc get pv",
"NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE etcd-backup-pv-fs 100Gi RWX Delete Available etcd-backup-local-storage 10s",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: etcd-backup-pvc spec: accessModes: - ReadWriteMany volumeMode: Filesystem resources: requests: storage: 10Gi 1 storageClassName: etcd-backup-local-storage",
"oc apply -f etcd-backup-pvc.yaml",
"apiVersion: config.openshift.io/v1alpha1 kind: Backup metadata: name: etcd-recurring-backup spec: etcd: schedule: \"20 4 * * *\" 1 timeZone: \"UTC\" pvcName: etcd-backup-pvc",
"spec: etcd: retentionPolicy: retentionType: RetentionNumber 1 retentionNumber: maxNumberOfBackups: 5 2",
"spec: etcd: retentionPolicy: retentionType: RetentionSize retentionSize: maxSizeOfBackupsGb: 20 1",
"oc create -f etcd-recurring-backup.yaml",
"oc get cronjob -n openshift-etcd",
"oc get etcd -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"EtcdMembersAvailable\")]}{.message}{\"\\n\"}'",
"2 of 3 members are available, ip-10-0-131-183.ec2.internal is unhealthy",
"oc get machines -A -ojsonpath='{range .items[*]}{@.status.nodeRef.name}{\"\\t\"}{@.status.providerStatus.instanceState}{\"\\n\"}' | grep -v running",
"ip-10-0-131-183.ec2.internal stopped 1",
"oc get nodes -o jsonpath='{range .items[*]}{\"\\n\"}{.metadata.name}{\"\\t\"}{range .spec.taints[*]}{.key}{\" \"}' | grep unreachable",
"ip-10-0-131-183.ec2.internal node-role.kubernetes.io/master node.kubernetes.io/unreachable node.kubernetes.io/unreachable 1",
"oc get nodes -l node-role.kubernetes.io/master | grep \"NotReady\"",
"ip-10-0-131-183.ec2.internal NotReady master 122m v1.29.4 1",
"oc get nodes -l node-role.kubernetes.io/master",
"NAME STATUS ROLES AGE VERSION ip-10-0-131-183.ec2.internal Ready master 6h13m v1.29.4 ip-10-0-164-97.ec2.internal Ready master 6h13m v1.29.4 ip-10-0-154-204.ec2.internal Ready master 6h13m v1.29.4",
"oc -n openshift-etcd get pods -l k8s-app=etcd",
"etcd-ip-10-0-131-183.ec2.internal 2/3 Error 7 6h9m 1 etcd-ip-10-0-164-97.ec2.internal 3/3 Running 0 6h6m etcd-ip-10-0-154-204.ec2.internal 3/3 Running 0 6h6m",
"oc -n openshift-etcd get pods -l k8s-app=etcd",
"etcd-ip-10-0-131-183.ec2.internal 3/3 Running 0 123m etcd-ip-10-0-164-97.ec2.internal 3/3 Running 0 123m etcd-ip-10-0-154-204.ec2.internal 3/3 Running 0 124m",
"oc rsh -n openshift-etcd etcd-ip-10-0-154-204.ec2.internal",
"sh-4.2# etcdctl member list -w table",
"+------------------+---------+------------------------------+---------------------------+---------------------------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | +------------------+---------+------------------------------+---------------------------+---------------------------+ | 6fc1e7c9db35841d | started | ip-10-0-131-183.ec2.internal | https://10.0.131.183:2380 | https://10.0.131.183:2379 | | 757b6793e2408b6c | started | ip-10-0-164-97.ec2.internal | https://10.0.164.97:2380 | https://10.0.164.97:2379 | | ca8c2990a0aa29d1 | started | ip-10-0-154-204.ec2.internal | https://10.0.154.204:2380 | https://10.0.154.204:2379 | +------------------+---------+------------------------------+---------------------------+---------------------------+",
"sh-4.2# etcdctl member remove 6fc1e7c9db35841d",
"Member 6fc1e7c9db35841d removed from cluster ead669ce1fbfb346",
"sh-4.2# etcdctl member list -w table",
"+------------------+---------+------------------------------+---------------------------+---------------------------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | +------------------+---------+------------------------------+---------------------------+---------------------------+ | 757b6793e2408b6c | started | ip-10-0-164-97.ec2.internal | https://10.0.164.97:2380 | https://10.0.164.97:2379 | | ca8c2990a0aa29d1 | started | ip-10-0-154-204.ec2.internal | https://10.0.154.204:2380 | https://10.0.154.204:2379 | +------------------+---------+------------------------------+---------------------------+---------------------------+",
"oc patch etcd/cluster --type=merge -p '{\"spec\": {\"unsupportedConfigOverrides\": {\"useUnsupportedUnsafeNonHANonProductionUnstableEtcd\": true}}}'",
"oc delete node <node_name>",
"oc delete node ip-10-0-131-183.ec2.internal",
"oc get secrets -n openshift-etcd | grep ip-10-0-131-183.ec2.internal 1",
"etcd-peer-ip-10-0-131-183.ec2.internal kubernetes.io/tls 2 47m etcd-serving-ip-10-0-131-183.ec2.internal kubernetes.io/tls 2 47m etcd-serving-metrics-ip-10-0-131-183.ec2.internal kubernetes.io/tls 2 47m",
"oc delete secret -n openshift-etcd etcd-peer-ip-10-0-131-183.ec2.internal",
"oc delete secret -n openshift-etcd etcd-serving-ip-10-0-131-183.ec2.internal",
"oc delete secret -n openshift-etcd etcd-serving-metrics-ip-10-0-131-183.ec2.internal",
"oc get machines -n openshift-machine-api -o wide",
"NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE clustername-8qw5l-master-0 Running m4.xlarge us-east-1 us-east-1a 3h37m ip-10-0-131-183.ec2.internal aws:///us-east-1a/i-0ec2782f8287dfb7e stopped 1 clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-154-204.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-164-97.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running",
"oc delete machine -n openshift-machine-api clustername-8qw5l-master-0 1",
"oc get machines -n openshift-machine-api -o wide",
"NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-154-204.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-164-97.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running clustername-8qw5l-master-3 Provisioning m4.xlarge us-east-1 us-east-1a 85s ip-10-0-133-53.ec2.internal aws:///us-east-1a/i-015b0888fe17bc2c8 running 1 clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running",
"oc patch etcd/cluster --type=merge -p '{\"spec\": {\"unsupportedConfigOverrides\": null}}'",
"oc get etcd/cluster -oyaml",
"EtcdCertSignerControllerDegraded: [Operation cannot be fulfilled on secrets \"etcd-peer-sno-0\": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on secrets \"etcd-serving-sno-0\": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on secrets \"etcd-serving-metrics-sno-0\": the object has been modified; please apply your changes to the latest version and try again]",
"oc -n openshift-etcd get pods -l k8s-app=etcd",
"etcd-ip-10-0-133-53.ec2.internal 3/3 Running 0 7m49s etcd-ip-10-0-164-97.ec2.internal 3/3 Running 0 123m etcd-ip-10-0-154-204.ec2.internal 3/3 Running 0 124m",
"oc patch etcd cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date --rfc-3339=ns )\"'\"}}' --type=merge 1",
"oc rsh -n openshift-etcd etcd-ip-10-0-154-204.ec2.internal",
"sh-4.2# etcdctl member list -w table",
"+------------------+---------+------------------------------+---------------------------+---------------------------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | +------------------+---------+------------------------------+---------------------------+---------------------------+ | 5eb0d6b8ca24730c | started | ip-10-0-133-53.ec2.internal | https://10.0.133.53:2380 | https://10.0.133.53:2379 | | 757b6793e2408b6c | started | ip-10-0-164-97.ec2.internal | https://10.0.164.97:2380 | https://10.0.164.97:2379 | | ca8c2990a0aa29d1 | started | ip-10-0-154-204.ec2.internal | https://10.0.154.204:2380 | https://10.0.154.204:2379 | +------------------+---------+------------------------------+---------------------------+---------------------------+",
"oc debug node/ip-10-0-131-183.ec2.internal 1",
"sh-4.2# chroot /host",
"sh-4.2# mkdir /var/lib/etcd-backup",
"sh-4.2# mv /etc/kubernetes/manifests/etcd-pod.yaml /var/lib/etcd-backup/",
"sh-4.2# mv /var/lib/etcd/ /tmp",
"oc -n openshift-etcd get pods -l k8s-app=etcd",
"etcd-ip-10-0-131-183.ec2.internal 2/3 Error 7 6h9m etcd-ip-10-0-164-97.ec2.internal 3/3 Running 0 6h6m etcd-ip-10-0-154-204.ec2.internal 3/3 Running 0 6h6m",
"oc rsh -n openshift-etcd etcd-ip-10-0-154-204.ec2.internal",
"sh-4.2# etcdctl member list -w table",
"+------------------+---------+------------------------------+---------------------------+---------------------------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | +------------------+---------+------------------------------+---------------------------+---------------------------+ | 62bcf33650a7170a | started | ip-10-0-131-183.ec2.internal | https://10.0.131.183:2380 | https://10.0.131.183:2379 | | b78e2856655bc2eb | started | ip-10-0-164-97.ec2.internal | https://10.0.164.97:2380 | https://10.0.164.97:2379 | | d022e10b498760d5 | started | ip-10-0-154-204.ec2.internal | https://10.0.154.204:2380 | https://10.0.154.204:2379 | +------------------+---------+------------------------------+---------------------------+---------------------------+",
"sh-4.2# etcdctl member remove 62bcf33650a7170a",
"Member 62bcf33650a7170a removed from cluster ead669ce1fbfb346",
"sh-4.2# etcdctl member list -w table",
"+------------------+---------+------------------------------+---------------------------+---------------------------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | +------------------+---------+------------------------------+---------------------------+---------------------------+ | b78e2856655bc2eb | started | ip-10-0-164-97.ec2.internal | https://10.0.164.97:2380 | https://10.0.164.97:2379 | | d022e10b498760d5 | started | ip-10-0-154-204.ec2.internal | https://10.0.154.204:2380 | https://10.0.154.204:2379 | +------------------+---------+------------------------------+---------------------------+---------------------------+",
"oc patch etcd/cluster --type=merge -p '{\"spec\": {\"unsupportedConfigOverrides\": {\"useUnsupportedUnsafeNonHANonProductionUnstableEtcd\": true}}}'",
"oc get secrets -n openshift-etcd | grep ip-10-0-131-183.ec2.internal 1",
"etcd-peer-ip-10-0-131-183.ec2.internal kubernetes.io/tls 2 47m etcd-serving-ip-10-0-131-183.ec2.internal kubernetes.io/tls 2 47m etcd-serving-metrics-ip-10-0-131-183.ec2.internal kubernetes.io/tls 2 47m",
"oc delete secret -n openshift-etcd etcd-peer-ip-10-0-131-183.ec2.internal",
"oc delete secret -n openshift-etcd etcd-serving-ip-10-0-131-183.ec2.internal",
"oc delete secret -n openshift-etcd etcd-serving-metrics-ip-10-0-131-183.ec2.internal",
"oc patch etcd cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"single-master-recovery-'\"USD( date --rfc-3339=ns )\"'\"}}' --type=merge 1",
"oc patch etcd/cluster --type=merge -p '{\"spec\": {\"unsupportedConfigOverrides\": null}}'",
"oc get etcd/cluster -oyaml",
"EtcdCertSignerControllerDegraded: [Operation cannot be fulfilled on secrets \"etcd-peer-sno-0\": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on secrets \"etcd-serving-sno-0\": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on secrets \"etcd-serving-metrics-sno-0\": the object has been modified; please apply your changes to the latest version and try again]",
"oc rsh -n openshift-etcd etcd-ip-10-0-154-204.ec2.internal",
"sh-4.2# etcdctl endpoint health",
"https://10.0.131.183:2379 is healthy: successfully committed proposal: took = 16.671434ms https://10.0.154.204:2379 is healthy: successfully committed proposal: took = 16.698331ms https://10.0.164.97:2379 is healthy: successfully committed proposal: took = 16.621645ms",
"oc -n openshift-etcd get pods -l k8s-app=etcd -o wide",
"etcd-openshift-control-plane-0 5/5 Running 11 3h56m 192.168.10.9 openshift-control-plane-0 <none> <none> etcd-openshift-control-plane-1 5/5 Running 0 3h54m 192.168.10.10 openshift-control-plane-1 <none> <none> etcd-openshift-control-plane-2 5/5 Running 0 3h58m 192.168.10.11 openshift-control-plane-2 <none> <none>",
"oc rsh -n openshift-etcd etcd-openshift-control-plane-0",
"sh-4.2# etcdctl member list -w table",
"+------------------+---------+--------------------+---------------------------+---------------------------+---------------------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | IS LEARNER | +------------------+---------+--------------------+---------------------------+---------------------------+---------------------+ | 7a8197040a5126c8 | started | openshift-control-plane-2 | https://192.168.10.11:2380/ | https://192.168.10.11:2379/ | false | | 8d5abe9669a39192 | started | openshift-control-plane-1 | https://192.168.10.10:2380/ | https://192.168.10.10:2379/ | false | | cc3830a72fc357f9 | started | openshift-control-plane-0 | https://192.168.10.9:2380/ | https://192.168.10.9:2379/ | false | +------------------+---------+--------------------+---------------------------+---------------------------+---------------------+",
"sh-4.2# etcdctl member remove 7a8197040a5126c8",
"Member 7a8197040a5126c8 removed from cluster b23536c33f2cdd1b",
"sh-4.2# etcdctl member list -w table",
"+------------------+---------+--------------------+---------------------------+---------------------------+-------------------------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | IS LEARNER | +------------------+---------+--------------------+---------------------------+---------------------------+-------------------------+ | cc3830a72fc357f9 | started | openshift-control-plane-2 | https://192.168.10.11:2380/ | https://192.168.10.11:2379/ | false | | 8d5abe9669a39192 | started | openshift-control-plane-1 | https://192.168.10.10:2380/ | https://192.168.10.10:2379/ | false | +------------------+---------+--------------------+---------------------------+---------------------------+-------------------------+",
"oc patch etcd/cluster --type=merge -p '{\"spec\": {\"unsupportedConfigOverrides\": {\"useUnsupportedUnsafeNonHANonProductionUnstableEtcd\": true}}}'",
"oc get secrets -n openshift-etcd | grep openshift-control-plane-2",
"etcd-peer-openshift-control-plane-2 kubernetes.io/tls 2 134m etcd-serving-metrics-openshift-control-plane-2 kubernetes.io/tls 2 134m etcd-serving-openshift-control-plane-2 kubernetes.io/tls 2 134m",
"oc delete secret etcd-peer-openshift-control-plane-2 -n openshift-etcd secret \"etcd-peer-openshift-control-plane-2\" deleted",
"oc delete secret etcd-serving-metrics-openshift-control-plane-2 -n openshift-etcd secret \"etcd-serving-metrics-openshift-control-plane-2\" deleted",
"oc delete secret etcd-serving-openshift-control-plane-2 -n openshift-etcd secret \"etcd-serving-openshift-control-plane-2\" deleted",
"oc get machines -n openshift-machine-api -o wide",
"NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE examplecluster-control-plane-0 Running 3h11m openshift-control-plane-0 baremetalhost:///openshift-machine-api/openshift-control-plane-0/da1ebe11-3ff2-41c5-b099-0aa41222964e externally provisioned 1 examplecluster-control-plane-1 Running 3h11m openshift-control-plane-1 baremetalhost:///openshift-machine-api/openshift-control-plane-1/d9f9acbc-329c-475e-8d81-03b20280a3e1 externally provisioned examplecluster-control-plane-2 Running 3h11m openshift-control-plane-2 baremetalhost:///openshift-machine-api/openshift-control-plane-2/3354bdac-61d8-410f-be5b-6a395b056135 externally provisioned examplecluster-compute-0 Running 165m openshift-compute-0 baremetalhost:///openshift-machine-api/openshift-compute-0/3d685b81-7410-4bb3-80ec-13a31858241f provisioned examplecluster-compute-1 Running 165m openshift-compute-1 baremetalhost:///openshift-machine-api/openshift-compute-1/0fdae6eb-2066-4241-91dc-e7ea72ab13b9 provisioned",
"oc get clusteroperator baremetal",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE baremetal 4.16.0 True False False 3d15h",
"oc delete bmh openshift-control-plane-2 -n openshift-machine-api",
"baremetalhost.metal3.io \"openshift-control-plane-2\" deleted",
"oc delete machine -n openshift-machine-api examplecluster-control-plane-2",
"oc edit machine -n openshift-machine-api examplecluster-control-plane-2",
"finalizers: - machine.machine.openshift.io",
"machine.machine.openshift.io/examplecluster-control-plane-2 edited",
"oc get machines -n openshift-machine-api -o wide",
"NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE examplecluster-control-plane-0 Running 3h11m openshift-control-plane-0 baremetalhost:///openshift-machine-api/openshift-control-plane-0/da1ebe11-3ff2-41c5-b099-0aa41222964e externally provisioned examplecluster-control-plane-1 Running 3h11m openshift-control-plane-1 baremetalhost:///openshift-machine-api/openshift-control-plane-1/d9f9acbc-329c-475e-8d81-03b20280a3e1 externally provisioned examplecluster-compute-0 Running 165m openshift-compute-0 baremetalhost:///openshift-machine-api/openshift-compute-0/3d685b81-7410-4bb3-80ec-13a31858241f provisioned examplecluster-compute-1 Running 165m openshift-compute-1 baremetalhost:///openshift-machine-api/openshift-compute-1/0fdae6eb-2066-4241-91dc-e7ea72ab13b9 provisioned",
"oc get nodes NAME STATUS ROLES AGE VERSION openshift-control-plane-0 Ready master 3h24m v1.29.4 openshift-control-plane-1 Ready master 3h24m v1.29.4 openshift-compute-0 Ready worker 176m v1.29.4 openshift-compute-1 Ready worker 176m v1.29.4",
"cat <<EOF | oc apply -f - apiVersion: v1 kind: Secret metadata: name: openshift-control-plane-2-bmc-secret namespace: openshift-machine-api data: password: <password> username: <username> type: Opaque --- apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: openshift-control-plane-2 namespace: openshift-machine-api spec: automatedCleaningMode: disabled bmc: address: redfish://10.46.61.18:443/redfish/v1/Systems/1 credentialsName: openshift-control-plane-2-bmc-secret disableCertificateVerification: true bootMACAddress: 48:df:37:b0:8a:a0 bootMode: UEFI externallyProvisioned: false online: true rootDeviceHints: deviceName: /dev/disk/by-id/scsi-<serial_number> userData: name: master-user-data-managed namespace: openshift-machine-api EOF",
"oc get bmh -n openshift-machine-api NAME STATE CONSUMER ONLINE ERROR AGE openshift-control-plane-0 externally provisioned examplecluster-control-plane-0 true 4h48m openshift-control-plane-1 externally provisioned examplecluster-control-plane-1 true 4h48m openshift-control-plane-2 available examplecluster-control-plane-3 true 47m openshift-compute-0 provisioned examplecluster-compute-0 true 4h48m openshift-compute-1 provisioned examplecluster-compute-1 true 4h48m",
"oc get machines -n openshift-machine-api -o wide",
"NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE examplecluster-control-plane-0 Running 3h11m openshift-control-plane-0 baremetalhost:///openshift-machine-api/openshift-control-plane-0/da1ebe11-3ff2-41c5-b099-0aa41222964e externally provisioned 1 examplecluster-control-plane-1 Running 3h11m openshift-control-plane-1 baremetalhost:///openshift-machine-api/openshift-control-plane-1/d9f9acbc-329c-475e-8d81-03b20280a3e1 externally provisioned examplecluster-control-plane-2 Running 3h11m openshift-control-plane-2 baremetalhost:///openshift-machine-api/openshift-control-plane-2/3354bdac-61d8-410f-be5b-6a395b056135 externally provisioned examplecluster-compute-0 Running 165m openshift-compute-0 baremetalhost:///openshift-machine-api/openshift-compute-0/3d685b81-7410-4bb3-80ec-13a31858241f provisioned examplecluster-compute-1 Running 165m openshift-compute-1 baremetalhost:///openshift-machine-api/openshift-compute-1/0fdae6eb-2066-4241-91dc-e7ea72ab13b9 provisioned",
"oc get bmh -n openshift-machine-api",
"oc get bmh -n openshift-machine-api NAME STATE CONSUMER ONLINE ERROR AGE openshift-control-plane-0 externally provisioned examplecluster-control-plane-0 true 4h48m openshift-control-plane-1 externally provisioned examplecluster-control-plane-1 true 4h48m openshift-control-plane-2 provisioned examplecluster-control-plane-3 true 47m openshift-compute-0 provisioned examplecluster-compute-0 true 4h48m openshift-compute-1 provisioned examplecluster-compute-1 true 4h48m",
"oc get nodes",
"oc get nodes NAME STATUS ROLES AGE VERSION openshift-control-plane-0 Ready master 4h26m v1.29.4 openshift-control-plane-1 Ready master 4h26m v1.29.4 openshift-control-plane-2 Ready master 12m v1.29.4 openshift-compute-0 Ready worker 3h58m v1.29.4 openshift-compute-1 Ready worker 3h58m v1.29.4",
"oc patch etcd/cluster --type=merge -p '{\"spec\": {\"unsupportedConfigOverrides\": null}}'",
"oc get etcd/cluster -oyaml",
"EtcdCertSignerControllerDegraded: [Operation cannot be fulfilled on secrets \"etcd-peer-sno-0\": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on secrets \"etcd-serving-sno-0\": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on secrets \"etcd-serving-metrics-sno-0\": the object has been modified; please apply your changes to the latest version and try again]",
"oc -n openshift-etcd get pods -l k8s-app=etcd",
"etcd-openshift-control-plane-0 5/5 Running 0 105m etcd-openshift-control-plane-1 5/5 Running 0 107m etcd-openshift-control-plane-2 5/5 Running 0 103m",
"oc patch etcd cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date --rfc-3339=ns )\"'\"}}' --type=merge 1",
"oc rsh -n openshift-etcd etcd-openshift-control-plane-0",
"sh-4.2# etcdctl member list -w table",
"+------------------+---------+--------------------+---------------------------+---------------------------+-----------------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | IS LEARNER | +------------------+---------+--------------------+---------------------------+---------------------------+-----------------+ | 7a8197040a5126c8 | started | openshift-control-plane-2 | https://192.168.10.11:2380 | https://192.168.10.11:2379 | false | | 8d5abe9669a39192 | started | openshift-control-plane-1 | https://192.168.10.10:2380 | https://192.168.10.10:2379 | false | | cc3830a72fc357f9 | started | openshift-control-plane-0 | https://192.168.10.9:2380 | https://192.168.10.9:2379 | false | +------------------+---------+--------------------+---------------------------+---------------------------+-----------------+",
"etcdctl endpoint health --cluster",
"https://192.168.10.10:2379 is healthy: successfully committed proposal: took = 8.973065ms https://192.168.10.9:2379 is healthy: successfully committed proposal: took = 11.559829ms https://192.168.10.11:2379 is healthy: successfully committed proposal: took = 11.665203ms",
"oc get etcd -o=jsonpath='{range.items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'",
"AllNodesAtLatestRevision",
"sudo mv -v /etc/kubernetes/manifests/etcd-pod.yaml /tmp",
"sudo crictl ps | grep etcd | egrep -v \"operator|etcd-guard\"",
"sudo mv -v /etc/kubernetes/manifests/kube-apiserver-pod.yaml /tmp",
"sudo crictl ps | grep kube-apiserver | egrep -v \"operator|guard\"",
"sudo mv -v /etc/kubernetes/manifests/kube-controller-manager-pod.yaml /tmp",
"sudo crictl ps | grep kube-controller-manager | egrep -v \"operator|guard\"",
"sudo mv -v /etc/kubernetes/manifests/kube-scheduler-pod.yaml /tmp",
"sudo crictl ps | grep kube-scheduler | egrep -v \"operator|guard\"",
"sudo mv -v /var/lib/etcd/ /tmp",
"sudo mv -v /etc/kubernetes/manifests/keepalived.yaml /tmp",
"sudo crictl ps --name keepalived",
"ip -o address | egrep '<api_vip>|<ingress_vip>'",
"sudo ip address del <reported_vip> dev <reported_vip_device>",
"ip -o address | grep <api_vip>",
"sudo -E /usr/local/bin/cluster-restore.sh /home/core/assets/backup",
"...stopping kube-scheduler-pod.yaml ...stopping kube-controller-manager-pod.yaml ...stopping etcd-pod.yaml ...stopping kube-apiserver-pod.yaml Waiting for container etcd to stop .complete Waiting for container etcdctl to stop .............................complete Waiting for container etcd-metrics to stop complete Waiting for container kube-controller-manager to stop complete Waiting for container kube-apiserver to stop ..........................................................................................complete Waiting for container kube-scheduler to stop complete Moving etcd data-dir /var/lib/etcd/member to /var/lib/etcd-backup starting restore-etcd static pod starting kube-apiserver-pod.yaml static-pod-resources/kube-apiserver-pod-7/kube-apiserver-pod.yaml starting kube-controller-manager-pod.yaml static-pod-resources/kube-controller-manager-pod-7/kube-controller-manager-pod.yaml starting kube-scheduler-pod.yaml static-pod-resources/kube-scheduler-pod-8/kube-scheduler-pod.yaml",
"oc get nodes -w",
"NAME STATUS ROLES AGE VERSION host-172-25-75-28 Ready master 3d20h v1.29.4 host-172-25-75-38 Ready infra,worker 3d20h v1.29.4 host-172-25-75-40 Ready master 3d20h v1.29.4 host-172-25-75-65 Ready master 3d20h v1.29.4 host-172-25-75-74 Ready infra,worker 3d20h v1.29.4 host-172-25-75-79 Ready worker 3d20h v1.29.4 host-172-25-75-86 Ready worker 3d20h v1.29.4 host-172-25-75-98 Ready infra,worker 3d20h v1.29.4",
"ssh -i <ssh-key-path> core@<master-hostname>",
"sh-4.4# pwd /var/lib/kubelet/pki sh-4.4# ls kubelet-client-2022-04-28-11-24-09.pem kubelet-server-2022-04-28-11-24-15.pem kubelet-client-current.pem kubelet-server-current.pem",
"sudo systemctl restart kubelet.service",
"oc get csr",
"NAME AGE SIGNERNAME REQUESTOR CONDITION csr-2s94x 8m3s kubernetes.io/kubelet-serving system:node:<node_name> Pending 1 csr-4bd6t 8m3s kubernetes.io/kubelet-serving system:node:<node_name> Pending 2 csr-4hl85 13m kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending 3 csr-zhhhp 3m8s kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending 4",
"oc describe csr <csr_name> 1",
"oc adm certificate approve <csr_name>",
"oc adm certificate approve <csr_name>",
"sudo crictl ps | grep etcd | egrep -v \"operator|etcd-guard\"",
"3ad41b7908e32 36f86e2eeaaffe662df0d21041eb22b8198e0e58abeeae8c743c3e6e977e8009 About a minute ago Running etcd 0 7c05f8af362f0",
"oc -n openshift-etcd get pods -l k8s-app=etcd",
"NAME READY STATUS RESTARTS AGE etcd-ip-10-0-143-125.ec2.internal 1/1 Running 1 2m47s",
"oc -n openshift-ovn-kubernetes delete pod -l app=ovnkube-control-plane",
"oc -n openshift-ovn-kubernetes get pod -l app=ovnkube-control-plane",
"sudo rm -f /var/lib/ovn-ic/etc/*.db",
"sudo systemctl restart ovs-vswitchd ovsdb-server",
"oc -n openshift-ovn-kubernetes delete pod -l app=ovnkube-node --field-selector=spec.nodeName==<node>",
"oc get po -n openshift-ovn-kubernetes",
"oc delete node <node>",
"ssh -i <ssh-key-path> core@<node>",
"sudo mv /var/lib/kubelet/pki/* /tmp",
"sudo systemctl restart kubelet.service",
"oc get csr",
"NAME AGE SIGNERNAME REQUESTOR CONDITION csr-<uuid> 8m3s kubernetes.io/kubelet-serving system:node:<node_name> Pending",
"adm certificate approve csr-<uuid>",
"oc get nodes",
"oc -n openshift-ovn-kubernetes get pod -l app=ovnkube-node --field-selector=spec.nodeName==<node>",
"oc get machines -n openshift-machine-api -o wide",
"NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE clustername-8qw5l-master-0 Running m4.xlarge us-east-1 us-east-1a 3h37m ip-10-0-131-183.ec2.internal aws:///us-east-1a/i-0ec2782f8287dfb7e stopped 1 clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-143-125.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-154-194.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running",
"oc delete machine -n openshift-machine-api clustername-8qw5l-master-0 1",
"oc get machines -n openshift-machine-api -o wide",
"NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-143-125.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-154-194.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running clustername-8qw5l-master-3 Provisioning m4.xlarge us-east-1 us-east-1a 85s ip-10-0-173-171.ec2.internal aws:///us-east-1a/i-015b0888fe17bc2c8 running 1 clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running",
"oc patch etcd/cluster --type=merge -p '{\"spec\": {\"unsupportedConfigOverrides\": {\"useUnsupportedUnsafeNonHANonProductionUnstableEtcd\": true}}}'",
"export KUBECONFIG=/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs/localhost-recovery.kubeconfig",
"oc patch etcd cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date --rfc-3339=ns )\"'\"}}' --type=merge 1",
"oc patch etcd/cluster --type=merge -p '{\"spec\": {\"unsupportedConfigOverrides\": null}}'",
"oc get etcd/cluster -oyaml",
"oc get etcd -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'",
"AllNodesAtLatestRevision 3 nodes are at revision 7 1",
"oc patch kubeapiserver cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date --rfc-3339=ns )\"'\"}}' --type=merge",
"oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'",
"AllNodesAtLatestRevision 3 nodes are at revision 7 1",
"oc patch kubecontrollermanager cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date --rfc-3339=ns )\"'\"}}' --type=merge",
"oc get kubecontrollermanager -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'",
"AllNodesAtLatestRevision 3 nodes are at revision 7 1",
"oc patch kubescheduler cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date --rfc-3339=ns )\"'\"}}' --type=merge",
"oc get kubescheduler -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'",
"AllNodesAtLatestRevision 3 nodes are at revision 7 1",
"oc adm wait-for-stable-cluster",
"oc -n openshift-etcd get pods -l k8s-app=etcd",
"etcd-ip-10-0-143-125.ec2.internal 2/2 Running 0 9h etcd-ip-10-0-154-194.ec2.internal 2/2 Running 0 9h etcd-ip-10-0-173-171.ec2.internal 2/2 Running 0 9h",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig",
"oc whoami",
"oc get csr",
"NAME AGE SIGNERNAME REQUESTOR CONDITION csr-2s94x 8m3s kubernetes.io/kubelet-serving system:node:<node_name> Pending 1 csr-4bd6t 8m3s kubernetes.io/kubelet-serving system:node:<node_name> Pending csr-4hl85 13m kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending 2 csr-zhhhp 3m8s kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc describe csr <csr_name> 1",
"oc adm certificate approve <csr_name>",
"oc adm certificate approve <csr_name>"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html-single/backup_and_restore/index |
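The restore procedure above approves pending certificate signing requests one at a time with oc adm certificate approve <csr_name>. When many kubelet CSRs are pending after a restore, the following loop approves them in a single pass. This is a convenience sketch rather than part of the documented procedure; it assumes the oc client is already authenticated with cluster-admin privileges.

```bash
# Approve every CSR that has no status yet (that is, every request still in Pending state).
oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' \
  | xargs --no-run-if-empty oc adm certificate approve

# Re-run the listing to confirm nothing is left pending.
oc get csr
```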
5.3.13. Backing Up Volume Group Metadata | 5.3.13. Backing Up Volume Group Metadata Metadata backups and archives are automatically created on every volume group and logical volume configuration change unless disabled in the lvm.conf file. By default, the metadata backup is stored in the /etc/lvm/backup file and the metadata archives are stored in the /etc/lvm/archives file. You can manually back up the metadata to the /etc/lvm/backup file with the vgcfgbackup command. The vgcfgrestore command restores the metadata of a volume group from the archive to all the physical volumes in the volume group. For an example of using the vgcfgrestore command to recover physical volume metadata, see Section 7.4, "Recovering Physical Volume Metadata". | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/vg_backup
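A minimal sketch of the backup and restore round trip described above is shown below. The volume group name myvg and the archive file name are illustrative assumptions; real archive names are generated by LVM under the archive directory.

```bash
# Manually back up the metadata of volume group "myvg" (written to /etc/lvm/backup by default).
vgcfgbackup myvg

# List the archived metadata versions that are available for the volume group.
vgcfgrestore --list myvg

# Restore the volume group metadata from a chosen archive file onto its physical volumes.
vgcfgrestore --file /etc/lvm/archive/myvg_00001-1234567890.vg myvg
```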
Chapter 20. consistency | Chapter 20. consistency This chapter describes the commands under the consistency command. 20.1. consistency group add volume Add volume(s) to consistency group Usage: Table 20.1. Positional arguments Value Summary <consistency-group> Consistency group to contain <volume> (name or id) <volume> Volume(s) to add to <consistency-group> (name or id) (repeat option to add multiple volumes) Table 20.2. Command arguments Value Summary -h, --help Show this help message and exit 20.2. consistency group create Create new consistency group. Usage: Table 20.3. Positional arguments Value Summary <name> Name of new consistency group (default to none) Table 20.4. Command arguments Value Summary -h, --help Show this help message and exit --volume-type <volume-type> Volume type of this consistency group (name or id) --consistency-group-source <consistency-group> Existing consistency group (name or id) --consistency-group-snapshot <consistency-group-snapshot> Existing consistency group snapshot (name or id) --description <description> Description of this consistency group --availability-zone <availability-zone> Availability zone for this consistency group (not available if creating consistency group from source) Table 20.5. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 20.6. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 20.7. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 20.8. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 20.3. consistency group delete Delete consistency group(s). Usage: Table 20.9. Positional arguments Value Summary <consistency-group> Consistency group(s) to delete (name or id) Table 20.10. Command arguments Value Summary -h, --help Show this help message and exit --force Allow delete in state other than error or available 20.4. consistency group list List consistency groups. Usage: Table 20.11. Command arguments Value Summary -h, --help Show this help message and exit --all-projects Show details for all projects. admin only. (defaults to False) --long List additional fields in output Table 20.12. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 20.13. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 20.14. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 20.15. 
Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 20.5. consistency group remove volume Remove volume(s) from consistency group Usage: Table 20.16. Positional arguments Value Summary <consistency-group> Consistency group containing <volume> (name or id) <volume> Volume(s) to remove from <consistency-group> (name or ID) (repeat option to remove multiple volumes) Table 20.17. Command arguments Value Summary -h, --help Show this help message and exit 20.6. consistency group set Set consistency group properties Usage: Table 20.18. Positional arguments Value Summary <consistency-group> Consistency group to modify (name or id) Table 20.19. Command arguments Value Summary -h, --help Show this help message and exit --name <name> New consistency group name --description <description> New consistency group description 20.7. consistency group show Display consistency group details. Usage: Table 20.20. Positional arguments Value Summary <consistency-group> Consistency group to display (name or id) Table 20.21. Command arguments Value Summary -h, --help Show this help message and exit Table 20.22. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 20.23. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 20.24. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 20.25. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 20.8. consistency group snapshot create Create new consistency group snapshot. Usage: Table 20.26. Positional arguments Value Summary <snapshot-name> Name of new consistency group snapshot (default to None) Table 20.27. Command arguments Value Summary -h, --help Show this help message and exit --consistency-group <consistency-group> Consistency group to snapshot (name or id) (default to be the same as <snapshot-name>) --description <description> Description of this consistency group snapshot Table 20.28. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 20.29. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 20.30. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 20.31. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. 
--fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 20.9. consistency group snapshot delete Delete consistency group snapshot(s). Usage: Table 20.32. Positional arguments Value Summary <consistency-group-snapshot> Consistency group snapshot(s) to delete (name or id) Table 20.33. Command arguments Value Summary -h, --help Show this help message and exit 20.10. consistency group snapshot list List consistency group snapshots. Usage: Table 20.34. Command arguments Value Summary -h, --help Show this help message and exit --all-projects Show detail for all projects (admin only) (defaults to False) --long List additional fields in output --status <status> Filters results by a status ("available", "error", "creating", "deleting" or "error_deleting") --consistency-group <consistency-group> Filters results by a consistency group (name or id) Table 20.35. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 20.36. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 20.37. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 20.38. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 20.11. consistency group snapshot show Display consistency group snapshot details Usage: Table 20.39. Positional arguments Value Summary <consistency-group-snapshot> Consistency group snapshot to display (name or id) Table 20.40. Command arguments Value Summary -h, --help Show this help message and exit Table 20.41. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 20.42. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 20.43. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 20.44. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. | [
"openstack consistency group add volume [-h] <consistency-group> <volume> [<volume> ...]",
"openstack consistency group create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] (--volume-type <volume-type> | --consistency-group-source <consistency-group> | --consistency-group-snapshot <consistency-group-snapshot>) [--description <description>] [--availability-zone <availability-zone>] [<name>]",
"openstack consistency group delete [-h] [--force] <consistency-group> [<consistency-group> ...]",
"openstack consistency group list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--all-projects] [--long]",
"openstack consistency group remove volume [-h] <consistency-group> <volume> [<volume> ...]",
"openstack consistency group set [-h] [--name <name>] [--description <description>] <consistency-group>",
"openstack consistency group show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <consistency-group>",
"openstack consistency group snapshot create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--consistency-group <consistency-group>] [--description <description>] [<snapshot-name>]",
"openstack consistency group snapshot delete [-h] <consistency-group-snapshot> [<consistency-group-snapshot> ...]",
"openstack consistency group snapshot list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--all-projects] [--long] [--status <status>] [--consistency-group <consistency-group>]",
"openstack consistency group snapshot show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <consistency-group-snapshot>"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/command_line_interface_reference/consistency |
Product Guide | Product Guide Red Hat Virtualization 4.3 Introduction to Red Hat Virtualization 4.3 Red Hat Virtualization Documentation Team Red Hat Customer Content Services [email protected] Abstract This document provides an introduction to Red Hat Virtualization. | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/product_guide/index |
3.2.3.2.2. Is Symmetric Multiprocessing Right for You? | 3.2.3.2.2. Is Symmetric Multiprocessing Right for You? Symmetric multiprocessing (also known as SMP) makes it possible for a computer system to have more than one CPU sharing all system resources. This means that, unlike a uniprocessor system, an SMP system may actually have more than one process running at the same time. At first glance, this seems like any system administrator's dream. First and foremost, SMP makes it possible to increase a system's CPU power even if CPUs with faster clock speeds are not available -- just by adding another CPU. However, this flexibility comes with some caveats. The first caveat is that not all systems are capable of SMP operation. Your system must have a motherboard designed to support multiple processors. If it does not, a motherboard upgrade (at the least) would be required. The second caveat is that SMP increases system overhead. This makes sense if you stop to think about it; with more CPUs to schedule work for, the operating system requires more CPU cycles for overhead. Another aspect to this is that with multiple CPUs, there can be more contention for system resources. Because of these factors, upgrading a dual-processor system to a quad-processor unit does not result in a 100% increase in available CPU power. In fact, depending on the actual hardware, the workload, and the processor architecture, it is possible to reach a point where the addition of another processor could actually reduce system performance. Another point to keep in mind is that SMP does not help workloads consisting of one monolithic application with a single stream of execution. In other words, if a large compute-bound simulation program runs as one process and without threads, it will not run any faster on an SMP system than on a single-processor machine. In fact, it may even run somewhat slower, due to the increased overhead SMP brings. For these reasons, many system administrators feel that when it comes to CPU power, single stream processing power is the way to go. It provides the most CPU power with the fewest restrictions on its use. While this discussion seems to indicate that SMP is never a good idea, there are circumstances in which it makes sense. For example, environments running multiple highly compute-bound applications are good candidates for SMP. The reason for this is that applications that do nothing but compute for long periods of time keep contention between active processes (and therefore, the operating system overhead) to a minimum, while the processes themselves keep every CPU busy. One other thing to keep in mind about SMP is that the performance of an SMP system tends to degrade more gracefully as the system load increases. This does make SMP systems popular in server and multi-user environments, as the ever-changing process mix can impact the system-wide load less on a multi-processor machine. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/introduction_to_system_administration/s4-bandwidth-processing-improve-capacity-smp |
Appendix D. Configuring a Host for PCI Passthrough | Appendix D. Configuring a Host for PCI Passthrough Note This is one in a series of topics that show how to set up and configure SR-IOV on Red Hat Virtualization. For more information, see Setting Up and Configuring SR-IOV Enabling PCI passthrough allows a virtual machine to use a host device as if the device were directly attached to the virtual machine. To enable the PCI passthrough function, you must enable virtualization extensions and the IOMMU function. The following procedure requires you to reboot the host. If the host is attached to the Manager already, ensure you place the host into maintenance mode first. Prerequisites Ensure that the host hardware meets the requirements for PCI device passthrough and assignment. See PCI Device Requirements for more information. Configuring a Host for PCI Passthrough Enable the virtualization extension and IOMMU extension in the BIOS. See Enabling Intel VT-x and AMD-V virtualization hardware extensions in BIOS in the Red Hat Enterprise Linux Virtualization Deployment and Administration Guide for more information. Enable the IOMMU flag in the kernel by selecting the Hostdev Passthrough & SR-IOV check box when adding the host to the Manager or by editing the grub configuration file manually. To enable the IOMMU flag from the Administration Portal, see Adding Standard Hosts to the Red Hat Virtualization Manager and Kernel Settings Explained . To edit the grub configuration file manually, see Enabling IOMMU Manually . For GPU passthrough, you need to run additional configuration steps on both the host and the guest system. See GPU device passthrough: Assigning a host GPU to a single virtual machine in Setting up an NVIDIA GPU for a virtual machine in Red Hat Virtualization for more information. Enabling IOMMU Manually Enable IOMMU by editing the grub configuration file. Note If you are using IBM POWER8 hardware, skip this step as IOMMU is enabled by default. For Intel, boot the machine, and append intel_iommu=on to the end of the GRUB_CMDLINE_LINUX line in the grub configuration file. For AMD, boot the machine, and append amd_iommu=on to the end of the GRUB_CMDLINE_LINUX line in the grub configuration file. # vi /etc/default/grub ... GRUB_CMDLINE_LINUX="nofb splash=quiet console=tty0 ... amd_iommu=on ... Note If intel_iommu=on or an AMD IOMMU is detected, you can try adding iommu=pt . The pt option only enables IOMMU for devices used in passthrough and provides better host performance. However, the option might not be supported on all hardware. Revert to the option if the pt option doesn't work for your host. If the passthrough fails because the hardware does not support interrupt remapping, you can consider enabling the allow_unsafe_interrupts option if the virtual machines are trusted. The allow_unsafe_interrupts is not enabled by default because enabling it potentially exposes the host to MSI attacks from virtual machines. To enable the option: Refresh the grub.cfg file and reboot the host for these changes to take effect: # grub2-mkconfig -o /boot/grub2/grub.cfg # reboot | [
"vi /etc/default/grub GRUB_CMDLINE_LINUX=\"nofb splash=quiet console=tty0 ... intel_iommu=on",
"vi /etc/default/grub ... GRUB_CMDLINE_LINUX=\"nofb splash=quiet console=tty0 ... amd_iommu=on ...",
"vi /etc/modprobe.d options vfio_iommu_type1 allow_unsafe_interrupts=1",
"grub2-mkconfig -o /boot/grub2/grub.cfg",
"reboot"
] | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/installing_red_hat_virtualization_as_a_self-hosted_engine_using_the_command_line/configuring_a_host_for_pci_passthrough_migrate_dwh_db |
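After rebooting with the IOMMU flag set, it can be worth confirming that the kernel actually enabled the IOMMU before assigning devices. The checks below are a sketch and not part of the documented procedure; exact log messages vary by kernel version and hardware. Note also that the file name vfio.conf used for the allow_unsafe_interrupts option above is an arbitrary choice, since modprobe reads any .conf file under /etc/modprobe.d/.

```bash
# Confirm the kernel command line carries the IOMMU flag added to the grub configuration.
grep -E 'intel_iommu=on|amd_iommu=on' /proc/cmdline

# Look for IOMMU (Intel DMAR / AMD-Vi) initialization messages in the kernel log.
dmesg | grep -iE 'iommu|dmar|amd-vi'

# A populated iommu_groups tree indicates that IOMMU groups were created for passthrough.
ls /sys/kernel/iommu_groups/
```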
Chapter 12. Recovering from disaster | Chapter 12. Recovering from disaster This chapter explains how to restore your cluster to a working state after a disk or server failure. You must have configured disaster recovery options previously in order to use this chapter. See Configuring backup and recovery options for details. 12.1. Manually restoring data from a backup volume This section covers how to restore data from a remote backup volume to a freshly installed replacement deployment of Red Hat Hyperconverged Infrastructure for Virtualization. To do this, you must: Install and configure a replacement deployment according to the instructions in Deploying Red Hat Hyperconverged Infrastructure for Virtualization . 12.1.1. Restoring a volume from a geo-replicated backup Install and configure a replacement Hyperconverged Infrastructure deployment For instructions, refer to Deploying Red Hat Hyperconverged Infrastructure for Virtualization : https://access.redhat.com/documentation/en-us/red_hat_hyperconverged_infrastructure_for_virtualization/1.8/html/deploying_red_hat_hyperconverged_infrastructure_for_virtualization/ . Disable read-only mode on the backup volume Geo-replicated volumes are set to read-only after each sync to ensure that data is not modified. Red Hat Virtualization needs write permissions in order to import the volume as a storage domain. Run the following command to disable read-only mode on the backup volume. Import the backup of the storage domain From the new Hyperconverged Infrastructure deployment, in the Administration Portal: Click Storage Domains . Click Import Domain . The Import Pre-Configured Domain window opens. In the Storage Type field, specify GlusterFS . In the Name field, specify a name for the new volume that will be created from the backup volume. In the Path field, specify the path to the backup volume. Click OK . The following warning appears, with any active data centers listed below: Check the Approve operation checkbox and click OK . Determine a list of virtual machines to import Determine the imported domain's identifier by running the following command: For example: Determine the list of unregistered disks by running the following command: For example: Perform a partial import of each virtual machine to the storage domain Determine cluster identifier The following command returns the cluster identifier. For example: Import the virtual machines The following command imports a virtual machine without requiring all disks to be available in the storage domain. For example: For further information, see the Red Hat Virtualization REST API Guide : https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/html/rest_api_guide/ . Migrate the partially imported disks to the new storage domain In the Administration Portal, click Storage Disks , and Click the Move Disk option. Move the imported disks from the synced volume to the replacement cluster's storage domain. For further information, see the Red Hat Virtualization Administration Guide . Attach the restored disks to the new virtual machines Follow the instructions in the Red Hat Virtualization Virtual Machine Management Guide to attach the replacement disks to each virtual machine. 12.2. Failing over to a secondary cluster This section covers how to fail over from your primary cluster to a remote secondary cluster in the event of server failure. Configure failover to a remote cluster . Verify that the mapping file for the source and target clusters remains accurate. 
Disable read-only mode on the backup volume Geo-replicated volumes are set to read-only after each sync to ensure that data is not modified. Red Hat Virtualization needs write permissions in order to import the volume as a storage domain. Run the following command to disable read-only mode on the backup volume. Important Make sure to stop the remote sync schedule at the primary site, before executing the failover playbook. If the Red Hat Virtualization Manager Administration portal at the primary site is reachable, then remove the scheduled remote data sync using the following steps: Login as admin to the Red Hat Virtualization Manager Administration Portal Select Storage Select Domains . Select the storage domain configured with remote data sync Select Remote Data Sync Setup tab Edit choose the Recurrence as none. Run the failover playbook with the fail_over tag. 12.3. Failing back to a primary cluster This section covers how to fail back from your secondary cluster to the primary cluster after you have corrected the cause of a server failure. Prepare the primary cluster for failback by running the cleanup playbook with the clean_engine tag. Verify that the mapping file for the source and target clusters remains accurate. Execute failback by running the failback playbook with the fail_back tag. 12.4. Stopping a geo-replication session using RHV Manager Stop a geo-replication session when you want to prevent data being replicated from an active source volume to a passive target volume via geo-replication. Verify that data is not currently being synchronized Click the Tasks icon at the top right of the Manager, and review the Tasks page. Ensure that there are no ongoing tasks related to Data Synchronization. If data synchronization tasks are present, wait until they are complete. Stop the geo-replication session Click Storage Volumes . Click the name of the volume that you want to prevent geo-replicating. Click the Geo-replication subtab. Select the session that you want to stop, then click Stop . 12.5. Turning off scheduled backups by deleting the geo-replication schedule You can stop scheduled backups via geo-replication by deleting the geo-replication schedule. Log in to the Administration Portal on any source node. Click Storage Domains . Click the name of the storage domain that you want to back up. Click the Remote Data Sync Setup subtab. Click Setup . The Setup Remote Data Synchronization window opens. In the Recurrence field, select a recurrence interval type of NONE and click OK . (Optional) Remove the geo-replication session Run the following command from the geo-replication master node: You can also run this command with the reset-sync-time parameter. For further information about this parameter and deleting a geo-replication session, see Deleting a Geo-replication Session in the Red Hat Gluster Storage 3.5 Administration Guide . | [
"gluster volume set <backup-vol> features.read-only off",
"This operation might be unrecoverable and destructive! Storage Domain(s) are already attached to a Data Center. Approving this operation might cause data corruption if both Data Centers are active.",
"curl -v -k -X GET -u \"admin@internal:password\" -H \"Accept: application/xml\" https://USDENGINE_FQDN/ovirt-engine/api/storagedomains/",
"curl -v -k -X GET -u \"[email protected]:mybadpassword\" -H \"Accept: application/xml\" https://10.0.2.1/ovirt-engine/api/storagedomains/",
"curl -v -k -X GET -u \"admin@internal:password\" -H \"Accept: application/xml\" \"https://USDENGINE_FQDN/ovirt-engine/api/storagedomains/DOMAIN_ID/vms;unregistered\"",
"curl -v -k -X GET -u \"[email protected]:mybadpassword\" -H \"Accept: application/xml\" \"https://10.0.2.1/ovirt-engine/api/storagedomains/5e1a37cf-933d-424c-8e3d-eb9e40b690a7/vms;unregistered\"",
"curl -v -k -X GET -u \"admin@internal:password\" -H \"Accept: application/xml\" https://USDENGINE_FQDN/ovirt-engine/api/clusters/",
"curl -v -k -X GET -u \"admin@example:mybadpassword\" -H \"Accept: application/xml\" https://10.0.2.1/ovirt-engine/api/clusters/",
"curl -v -k -u 'admin@internal:password' -H \"Content-type: application/xml\" -d '<action> <cluster id=\"CLUSTER_ID\"></cluster> <allow_partial_import>true</allow_partial_import> </action>' \"https://ENGINE_FQDN/ovirt-engine/api/storagedomains/DOMAIN_ID/vms/VM_ID/register\"",
"curl -v -k -u '[email protected]:mybadpassword' -H \"Content-type: application/xml\" -d '<action> <cluster id=\"bf5a9e9e-5b52-4b0d-aeba-4ee4493f1072\"></cluster> <allow_partial_import>true</allow_partial_import> </action>' \"https://10.0.2.1/ovirt-engine/api/storagedomains/8d21980a-a50b-45e9-9f32-cd8d2424882e/e164f8c6-769a-4cbd-ac2a-ef322c2c5f30/register\"",
"gluster volume set <backup-vol> features.read-only off",
"ansible-playbook dr-rhv-failover.yml --tags=\"fail_over\"",
"ansible-playbook dr-cleanup.yml --tags=\"clean_engine\"",
"ansible-playbook dr-cleanup.yml --tags=\"fail_back\"",
"gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL delete"
] | https://docs.redhat.com/en/documentation/red_hat_hyperconverged_infrastructure_for_virtualization/1.8/html/maintaining_red_hat_hyperconverged_infrastructure_for_virtualization/maint-task-disaster-recovery |
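In addition to the Administration Portal steps above, the state of a geo-replication session can be checked and stopped from the command line on the primary site before a failover or before deleting the session. This is a hedged sketch; MASTER_VOL, SLAVE_HOST, and SLAVE_VOL are the same placeholders used in the delete command above.

```bash
# Show the health and last-synced status of all geo-replication sessions on this node.
gluster volume geo-replication status

# Check a specific session, then stop it so no further syncs run during failover.
gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL status
gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL stop
```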
Chapter 2. Using the Software Development Kit | Chapter 2. Using the Software Development Kit This chapter defines the modules and classes of the Ruby Software Development Kit and describes their usage. 2.1. Classes The OvirtSDK4 module contains the following software development kit classes: Connection The Connection class is the mechanism for connecting to the server and obtaining the reference to the root of the services tree. See Section 3.1, "Connecting to the Red Hat Virtualization Manager" for details. Types The Type classes implement the types supported by the API. For example, the Vm class is the implementation of the virtual machine type. The classes are data containers and do not contain any logic. You will be working with instances of types. Instances of these classes are used as parameters and return values of service methods. The conversion to or from the underlying representation is handled transparently by the software development kit. Services The Service classes implement the services supported by the API. For example, the VmsService class is the implementation of the service that manages the collection of virtual machines in the system. Instances of these classes are automatically created by the SDK when a service is referenced. For example, a new instance of the VmsService class is created automatically by the SDK when you call the vms_service method of the SystemService class: vms_service = connection.system_service.vms_service Warning Do not create instances of these classes manually. The constructor parameters and methods may change in the future. Error The Error class is the base exception class that the software development kit raises when it reports an error. Certain specific error classes extend the base error class: AuthError - Authentication or authorization failure ConnectionError - Server name cannot be resolved or server is unreachable NotFoundError - Requested object does not exist TimeoutError - Operation time-out Other Classes Other classes (for example, HTTP client classes, readers, and writers) are used for HTTP communication and for XML parsing and rendering. Using these classes is not recommended, because they comprise internal implementation details that may change in the future. Their backwards-compatibility cannot be relied upon. | [
"vms_service = connection.system_service.vms_service"
] | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/ruby_sdk_guide/chap-using_the_software_development_kit |
Deploying RHEL 9 on Microsoft Azure | Deploying RHEL 9 on Microsoft Azure Red Hat Enterprise Linux 9 Obtaining RHEL system images and creating RHEL instances on Azure Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/deploying_rhel_9_on_microsoft_azure/index |
Chapter 4. Troubleshooting the Block Storage backup service | Chapter 4. Troubleshooting the Block Storage backup service You can diagnose many issues by verifying that the Block Storage services are running correctly and then by examining the log files for error messages. 4.1. Verifying the Block Storage backup service deployment After a deployment or when troubleshooting issues, it is important to verify that the necessary Block Storage services are running correctly on their hosts. Ensure that the Block Storage backup service is running on every Controller node, like the Block Storage scheduler service. After you verify that the necessary Block Storage services are running correctly, then you must verify that the Block Storage backup service is deployed successfully. Procedure Run the openstack volume service list command: Verify that the State entry of every service is up . If not, examine the relevant log files. For more information about the location of these log files, see Block Storage (cinder) Log Files in Managing overcloud observability . Verify that the Block Storage backup service is deployed successfully, by backing up any Block Storage volume and ensuring that the backup succeeds. For more information, see Troubleshooting backups . Additional resources Examining the Block Storage backup service log file 4.2. Troubleshooting backups The Block Storage backup service performs static checks when receiving a request to back up a Block Storage (cinder) volume. If these checks fail then you will immediately be notified: Check for an invalid volume reference ( missing ). Check if the volume is in-use or attached to an instance. The in-use case requires you to use the --force option to perform a backup. For more information, see Creating a backup of an in-use volume . When you use the --force volume backup option, you create a crash-consistent, but not an application-consistent, backup because the volume is not quiesced before performing the backup. Therefore, the data is intact but the backup does not have an awareness of which applications were running when the backup was performed. When these checks succeed: the Block Storage backup service accepts the request to backup this volume, the CLI backup command returns immediately, and the volume is backed up in the background. Therefore the CLI backup command returns even if the backup fails. You can use the openstack volume backup list command to verify that the volume backup is successful, when the Status of the backup entry is available . If a backup fails, examine the Block Storage backup service log file for error messages to discover the cause. For more information, see Examining the Block Storage backup service log file . 4.3. Examining the Block Storage backup service log file When a backup or restore does not succeed, you can examine the Block Storage backup service log file for error messages that can help you to determine the reason. Procedure Find the Block Storage backup service log file on the Controller node where the backup service is running. This log file is located in the following path: /var/log/containers/cinder/cinder-backup.log . 4.4. Volume backup workflow The following diagram and explanation describe the steps that occur when the user requests the cinder API to backup a Block Storage (cinder) volume. Figure 4.1. Creating a backup of a Block Storage volume The user issues a request to the cinder API, which is a REST API, to back up a Block Storage volume. 
The cinder API receives the request from HAProxy and validates the request, the user credentials, and other information. The cinder API creates the backup record in the SQL database. The cinder API makes an asynchronous RPC call to the cinder-backup service via AMQP to back up the volume. The cinder API returns the current backup record, with an ID, to the API caller. An RPC create message arrives on one of the backup services. The cinder-backup service performs a synchronous RPC call to get_backup_device . The cinder-volume service ensures that the correct device is returned to the caller. Normally, it is the same volume, but if the volume is in use, the service returns a temporary cloned volume or a temporary snapshot, depending on the configuration. The cinder-backup service issues another synchronous RPC to cinder-volume to expose the source device. The cinder-volume service exports and maps the source device (volume or snapshot) and returns the appropriate connection information. The cinder-backup service attaches the source device by using the connection information. The cinder-backup service calls the backup back end driver, with the device already attached, which begins the data transfer to the backup repository. The source device is detached from the Backup host. The cinder-backup service issues a synchronous RPC to cinder-volume to disconnect the source device. The cinder-volume service unmaps and removes the export for the device. If a temporary volume or temporary snapshot was created, cinder-backup calls cinder-volume to remove it. The cinder-volume service removes the temporary volume. When the backup is completed, the backup record is updated in the database. 4.5. Volume restore workflow The following diagram and explanation describe the steps that occur when the user requests the cinder API to restore a Block Storage service (cinder) backup. Figure 4.2. Restoring a Block Storage backup The user issues a request to the cinder API, which is a REST API, to restore a Block Storage backup. The cinder API receives the request from HAProxy and validates the request, the user credentials, and other information. If the request does not contain an existing volume as the destination, the cinder API makes an asynchronous RPC call to create a new volume and polls the status of the volume until it becomes available. The cinder-scheduler selects a volume service and makes the RPC call to create the volume. The selected cinder-volume service creates the volume. When the cinder API detects that the volume is available, the backup record is created in the database. The cinder API makes an asynchronous RPC call to the backup service via AMQP to restore the backup. The cinder API returns the current volume ID, backup ID, and volume name to the API caller. An RPC create message arrives on one of the backup services. The cinder-backup service performs a synchronous RPC call to cinder-volume to expose the volume. The cinder-volume service exports and maps the volume returning the appropriate connection information. The cinder-backup service attaches the volume by using the connection information. The cinder-backup service calls the back end driver with the volume already attached, which begins the data restoration to the volume. The volume is detached from the backup host. The cinder-backup service issues a synchronous RPC to cinder-volume to disconnect the volume. The cinder-volume service unmaps and removes the export for the volume. 
When the volume is restored, the backup record is updated in the database. | [
"openstack volume service list +------------------+-------------------------+------+---------+-------+----------------------------+ | Binary | Host | Zone | Status | State | Updated At | +------------------+-------------------------+------+---------+-------+----------------------------+ | cinder-scheduler | controller-0 | nova | enabled | up | 2023-06-21T13:07:42.000000 | | cinder-scheduler | controller-1 | nova | enabled | up | 2023-06-21T13:07:42.000000 | | cinder-scheduler | controller-2 | nova | enabled | up | 2023-06-21T13:07:42.000000 | | cinder-backup | controller-0 | nova | enabled | up | 2023-06-21T13:07:46.000000 | | cinder-backup | controller-1 | nova | enabled | up | 2023-06-21T13:07:46.000000 | | cinder-backup | controller-2 | nova | enabled | up | 2023-06-21T13:07:46.000000 | | cinder-volume | hostgroup@tripleo_iscsi | nova | enabled | up | 2023-06-21T13:07:47.000000 | +------------------+-------------------------+------+---------+-------+----------------------------+"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/backing_up_block_storage_volumes/assembly_backup-troubleshooting |
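The troubleshooting flow above amounts to checking service state, watching the backup status, and reading the backup log. A compact sketch of that loop is shown below; the volume and backup names are illustrative, the openstack commands assume a shell with valid credentials loaded, and the grep runs on the Controller node that hosts the backup service.

```bash
# Confirm that cinder-backup and cinder-scheduler report State "up" on the Controller nodes.
openstack volume service list

# Back up an in-use volume (crash-consistent because --force skips quiescing), then watch its status.
openstack volume backup create --force --name demo-backup demo-volume
openstack volume backup list

# If the backup ends in an error state, search the backup service log for the failure reason.
sudo grep -i error /var/log/containers/cinder/cinder-backup.log | tail -n 20
```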
Chapter 4. Updating Capsule Server | Chapter 4. Updating Capsule Server Update Capsule Servers to the next minor version. Procedure Synchronize the satellite-capsule-6.15-for-rhel-8-x86_64-rpms repository in the Satellite Server. Publish and promote a new version of the content view with which the Capsule is registered. Ensure that the Satellite Maintenance repository is enabled: Check the available versions to confirm that the next minor version is listed: Use the health check option to determine if the system is ready for upgrade: Review the results and address any highlighted error conditions before performing the upgrade. Because of the lengthy update time, use a utility such as tmux to suspend and reattach a communication session. You can then check the upgrade progress without staying connected to the command shell continuously. If you lose connection to the command shell where the upgrade command is running, you can see the logged messages in the /var/log/foreman-installer/capsule.log file to check whether the process completed successfully. Perform the update: Determine whether the system needs a reboot: If the output indicates that a reboot is required, reboot the system: | [
"subscription-manager repos --enable satellite-maintenance-6.15-for-rhel-8-x86_64-rpms",
"satellite-maintain upgrade list-versions",
"satellite-maintain upgrade check --target-version 6.15.z",
"satellite-maintain upgrade run --target-version 6.15.z",
"dnf needs-restarting --reboothint",
"reboot"
] | https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/updating_red_hat_satellite/updating-smart-proxy_updating |
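Because the update can take a long time, a practical pattern is to run it inside tmux, as recommended above; this is an informal sketch and the session name is an arbitrary choice:

tmux new-session -s capsule-update
satellite-maintain upgrade run --target-version 6.15.z
# Detach with Ctrl+b d; reattach later with: tmux attach -t capsule-update
tail -f /var/log/foreman-installer/capsule.log

If the connection drops, reattaching to the tmux session or tailing the capsule.log file shows whether the update completed successfully.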
Chapter 3. Installing the Migration Toolkit for Containers | Chapter 3. Installing the Migration Toolkit for Containers You can install the Migration Toolkit for Containers (MTC) on OpenShift Container Platform 4. Note To install MTC on OpenShift Container Platform 3, see Installing the legacy Migration Toolkit for Containers Operator on OpenShift Container Platform 3 . By default, the MTC web console and the Migration Controller pod run on the target cluster. You can configure the Migration Controller custom resource manifest to run the MTC web console and the Migration Controller pod on a remote cluster . After you have installed MTC, you must configure an object storage to use as a replication repository. To uninstall MTC, see Uninstalling MTC and deleting resources . 3.1. Compatibility guidelines You must install the Migration Toolkit for Containers (MTC) Operator that is compatible with your OpenShift Container Platform version. Definitions legacy platform OpenShift Container Platform 4.5 and earlier. modern platform OpenShift Container Platform 4.6 and later. legacy operator The MTC Operator designed for legacy platforms. modern operator The MTC Operator designed for modern platforms. control cluster The cluster that runs the MTC controller and GUI. remote cluster A source or destination cluster for a migration that runs Velero. The Control Cluster communicates with Remote clusters via the Velero API to drive migrations. You must use the compatible MTC version for migrating your OpenShift Container Platform clusters. For the migration to succeed both your source cluster and the destination cluster must use the same version of MTC. MTC 1.7 supports migrations from OpenShift Container Platform 3.11 to 4.9. MTC 1.8 only supports migrations from OpenShift Container Platform 4.10 and later. Table 3.1. MTC compatibility: Migrating from a legacy or a modern platform Details OpenShift Container Platform 3.11 OpenShift Container Platform 4.0 to 4.5 OpenShift Container Platform 4.6 to 4.9 OpenShift Container Platform 4.10 or later Stable MTC version MTC v.1.7. z MTC v.1.7. z MTC v.1.7. z MTC v.1.8. z Installation Legacy MTC v.1.7. z operator: Install manually with the operator.yml file. [ IMPORTANT ] This cluster cannot be the control cluster. Install with OLM, release channel release-v1.7 Install with OLM, release channel release-v1.8 Edge cases exist in which network restrictions prevent modern clusters from connecting to other clusters involved in the migration. For example, when migrating from an OpenShift Container Platform 3.11 cluster on premises to a modern OpenShift Container Platform cluster in the cloud, where the modern cluster cannot connect to the OpenShift Container Platform 3.11 cluster. With MTC v.1.7. z , if one of the remote clusters is unable to communicate with the control cluster because of network restrictions, use the crane tunnel-api command. With the stable MTC release, although you should always designate the most modern cluster as the control cluster, in this specific case it is possible to designate the legacy cluster as the control cluster and push workloads to the remote cluster. 3.2. Installing the legacy Migration Toolkit for Containers Operator on OpenShift Container Platform 4.2 to 4.5 You can install the legacy Migration Toolkit for Containers Operator manually on OpenShift Container Platform versions 4.2 to 4.5. Prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. You must have access to registry.redhat.io . 
You must have podman installed. Procedure Log in to registry.redhat.io with your Red Hat Customer Portal credentials: USD podman login registry.redhat.io Download the operator.yml file by entering the following command: podman cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/operator.yml ./ Download the controller.yml file by entering the following command: podman cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/controller.yml ./ Log in to your OpenShift Container Platform source cluster. Verify that the cluster can authenticate with registry.redhat.io : USD oc run test --image registry.redhat.io/ubi9 --command sleep infinity Create the Migration Toolkit for Containers Operator object: USD oc create -f operator.yml Example output namespace/openshift-migration created rolebinding.rbac.authorization.k8s.io/system:deployers created serviceaccount/migration-operator created customresourcedefinition.apiextensions.k8s.io/migrationcontrollers.migration.openshift.io created role.rbac.authorization.k8s.io/migration-operator created rolebinding.rbac.authorization.k8s.io/migration-operator created clusterrolebinding.rbac.authorization.k8s.io/migration-operator created deployment.apps/migration-operator created Error from server (AlreadyExists): error when creating "./operator.yml": rolebindings.rbac.authorization.k8s.io "system:image-builders" already exists 1 Error from server (AlreadyExists): error when creating "./operator.yml": rolebindings.rbac.authorization.k8s.io "system:image-pullers" already exists 1 You can ignore Error from server (AlreadyExists) messages. They are caused by the Migration Toolkit for Containers Operator creating resources for earlier versions of OpenShift Container Platform 4 that are provided in later releases. Create the MigrationController object: USD oc create -f controller.yml Verify that the MTC pods are running: USD oc get pods -n openshift-migration 3.3. Installing the Migration Toolkit for Containers Operator on OpenShift Container Platform 4.18 You install the Migration Toolkit for Containers Operator on OpenShift Container Platform 4.18 by using the Operator Lifecycle Manager. Prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. Procedure In the OpenShift Container Platform web console, click Operators OperatorHub . Use the Filter by keyword field to find the Migration Toolkit for Containers Operator . Select the Migration Toolkit for Containers Operator and click Install . Click Install . On the Installed Operators page, the Migration Toolkit for Containers Operator appears in the openshift-migration project with the status Succeeded . Click Migration Toolkit for Containers Operator . Under Provided APIs , locate the Migration Controller tile, and click Create Instance . Click Create . Click Workloads Pods to verify that the MTC pods are running. 3.4. Proxy configuration For OpenShift Container Platform 4.1 and earlier versions, you must configure proxies in the MigrationController custom resource (CR) manifest after you install the Migration Toolkit for Containers Operator because these versions do not support a cluster-wide proxy object. For OpenShift Container Platform 4.2 to 4.18, the MTC inherits the cluster-wide proxy settings. You can change the proxy parameters if you want to override the cluster-wide proxy settings. 3.4.1. Direct volume migration Direct Volume Migration (DVM) was introduced in MTC 1.4.2. 
DVM supports only one proxy. The source cluster cannot access the route of the target cluster if the target cluster is also behind a proxy. If you want to perform a DVM from a source cluster behind a proxy, you must configure a TCP proxy that works at the transport layer and forwards the SSL connections transparently without decrypting and re-encrypting them with their own SSL certificates. A Stunnel proxy is an example of such a proxy. 3.4.1.1. TCP proxy setup for DVM You can set up a direct connection between the source and the target cluster through a TCP proxy and configure the stunnel_tcp_proxy variable in the MigrationController CR to use the proxy: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] stunnel_tcp_proxy: http://username:password@ip:port Direct volume migration (DVM) supports only basic authentication for the proxy. Moreover, DVM works only from behind proxies that can tunnel a TCP connection transparently. HTTP/HTTPS proxies in man-in-the-middle mode do not work. The existing cluster-wide proxies might not support this behavior. As a result, the proxy settings for DVM are intentionally kept different from the usual proxy configuration in MTC. 3.4.1.2. Why use a TCP proxy instead of an HTTP/HTTPS proxy? You can enable DVM by running Rsync between the source and the target cluster over an OpenShift route. Traffic is encrypted using Stunnel, a TCP proxy. The Stunnel running on the source cluster initiates a TLS connection with the target Stunnel and transfers data over an encrypted channel. Cluster-wide HTTP/HTTPS proxies in OpenShift are usually configured in man-in-the-middle mode where they negotiate their own TLS session with the outside servers. However, this does not work with Stunnel. Stunnel requires that its TLS session be untouched by the proxy, essentially making the proxy a transparent tunnel which simply forwards the TCP connection as-is. Therefore, you must use a TCP proxy. 3.4.1.3. Known issue Migration fails with error Upgrade request required The migration Controller uses the SPDY protocol to execute commands within remote pods. If the remote cluster is behind a proxy or a firewall that does not support the SPDY protocol, the migration controller fails to execute remote commands. The migration fails with the error message Upgrade request required . Workaround: Use a proxy that supports the SPDY protocol. In addition to supporting the SPDY protocol, the proxy or firewall also must pass the Upgrade HTTP header to the API server. The client uses this header to open a websocket connection with the API server. If the Upgrade header is blocked by the proxy or firewall, the migration fails with the error message Upgrade request required . Workaround: Ensure that the proxy forwards the Upgrade header. 3.4.2. Tuning network policies for migrations OpenShift supports restricting traffic to or from pods using NetworkPolicy or EgressFirewalls based on the network plugin used by the cluster. If any of the source namespaces involved in a migration use such mechanisms to restrict network traffic to pods, the restrictions might inadvertently stop traffic to Rsync pods during migration. Rsync pods running on both the source and the target clusters must connect to each other over an OpenShift Route. Existing NetworkPolicy or EgressNetworkPolicy objects can be configured to automatically exempt Rsync pods from these traffic restrictions. 3.4.2.1. 
NetworkPolicy configuration 3.4.2.1.1. Egress traffic from Rsync pods You can use the unique labels of Rsync pods to allow egress traffic to pass from them if the NetworkPolicy configuration in the source or destination namespaces blocks this type of traffic. The following policy allows all egress traffic from Rsync pods in the namespace: apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer egress: - {} policyTypes: - Egress 3.4.2.1.2. Ingress traffic to Rsync pods apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer ingress: - {} policyTypes: - Ingress 3.4.2.2. EgressNetworkPolicy configuration The EgressNetworkPolicy object or Egress Firewalls are OpenShift constructs designed to block egress traffic leaving the cluster. Unlike the NetworkPolicy object, the Egress Firewall works at a project level because it applies to all pods in the namespace. Therefore, the unique labels of Rsync pods do not exempt only Rsync pods from the restrictions. However, you can add the CIDR ranges of the source or target cluster to the Allow rule of the policy so that a direct connection can be setup between two clusters. Based on which cluster the Egress Firewall is present in, you can add the CIDR range of the other cluster to allow egress traffic between the two: apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: test-egress-policy namespace: <namespace> spec: egress: - to: cidrSelector: <cidr_of_source_or_target_cluster> type: Deny 3.4.2.3. Choosing alternate endpoints for data transfer By default, DVM uses an OpenShift Container Platform route as an endpoint to transfer PV data to destination clusters. You can choose another type of supported endpoint, if cluster topologies allow. For each cluster, you can configure an endpoint by setting the rsync_endpoint_type variable on the appropriate destination cluster in your MigrationController CR: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] rsync_endpoint_type: [NodePort|ClusterIP|Route] 3.4.2.4. Configuring supplemental groups for Rsync pods When your PVCs use a shared storage, you can configure the access to that storage by adding supplemental groups to Rsync pod definitions in order for the pods to allow access: Table 3.2. Supplementary groups for Rsync pods Variable Type Default Description src_supplemental_groups string Not set Comma-separated list of supplemental groups for source Rsync pods target_supplemental_groups string Not set Comma-separated list of supplemental groups for target Rsync pods Example usage The MigrationController CR can be updated to set values for these supplemental groups: spec: src_supplemental_groups: "1000,2000" target_supplemental_groups: "2000,3000" 3.4.3. Configuring proxies Prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. Procedure Get the MigrationController CR manifest: USD oc get migrationcontroller <migration_controller> -n openshift-migration Update the proxy parameters: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: <migration_controller> namespace: openshift-migration ... 
spec: stunnel_tcp_proxy: http://<username>:<password>@<ip>:<port> 1 noProxy: example.com 2 1 Stunnel proxy URL for direct volume migration. 2 Comma-separated list of destination domain names, domains, IP addresses, or other network CIDRs to exclude proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass proxy for all destinations. If you scale up workers that are not included in the network defined by the networking.machineNetwork[].cidr field from the installation configuration, you must add them to this list to prevent connection issues. This field is ignored if neither the httpProxy nor the httpsProxy field is set. Save the manifest as migration-controller.yaml . Apply the updated manifest: USD oc replace -f migration-controller.yaml -n openshift-migration For more information, see Configuring the cluster-wide proxy . 3.4.4. Running Rsync as either root or non-root OpenShift Container Platform environments have the PodSecurityAdmission controller enabled by default. This controller requires cluster administrators to enforce Pod Security Standards by means of namespace labels. All workloads in the cluster are expected to run one of the following Pod Security Standard levels: Privileged , Baseline or Restricted . Every cluster has its own default policy set. To guarantee successful data transfer in all environments, Migration Toolkit for Containers (MTC) 1.7.5 introduced changes in Rsync pods, including running Rsync pods as non-root user by default. This ensures that data transfer is possible even for workloads that do not necessarily require higher privileges. This change was made because it is best to run workloads with the lowest level of privileges possible. 3.4.4.1. Manually overriding default non-root operation for data transfer Although running Rsync pods as non-root user works in most cases, data transfer might fail when you run workloads as root user on the source side. MTC provides two ways to manually override default non-root operation for data transfer: Configure all migrations to run an Rsync pod as root on the destination cluster for all migrations. Run an Rsync pod as root on the destination cluster per migration. In both cases, you must set the following labels on the source side of any namespaces that are running workloads with higher privileges before migration: enforce , audit , and warn. To learn more about Pod Security Admission and setting values for labels, see Controlling pod security admission synchronization . 3.4.4.2. Configuring the MigrationController CR as root or non-root for all migrations By default, Rsync runs as non-root. On the destination cluster, you can configure the MigrationController CR to run Rsync as root. Procedure Configure the MigrationController CR as follows: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] migration_rsync_privileged: true This configuration will apply to all future migrations. 3.4.4.3. 
Configuring the MigMigration CR as root or non-root per migration On the destination cluster, you can configure the MigMigration CR to run Rsync as root or non-root, with the following non-root options: As a specific user ID (UID) As a specific group ID (GID) Procedure To run Rsync as root, configure the MigMigration CR according to this example: apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: name: migration-controller namespace: openshift-migration spec: [...] runAsRoot: true To run Rsync as a specific User ID (UID) or as a specific Group ID (GID), configure the MigMigration CR according to this example: apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: name: migration-controller namespace: openshift-migration spec: [...] runAsUser: 10010001 runAsGroup: 3 3.5. Configuring a replication repository You must configure an object storage to use as a replication repository. The Migration Toolkit for Containers (MTC) copies data from the source cluster to the replication repository, and then from the replication repository to the target cluster. MTC supports the file system and snapshot data copy methods for migrating data from the source cluster to the target cluster. Select a method that is suited for your environment and is supported by your storage provider. MTC supports the following storage providers: Multicloud Object Gateway Amazon Web Services S3 Google Cloud Platform Microsoft Azure Blob Generic S3 object storage, for example, Minio or Ceph S3 3.5.1. Prerequisites All clusters must have uninterrupted network access to the replication repository. If you use a proxy server with an internally hosted replication repository, you must ensure that the proxy allows access to the replication repository. 3.5.2. Retrieving Multicloud Object Gateway credentials You must retrieve the Multicloud Object Gateway (MCG) credentials and S3 endpoint, which you need both to configure MCG as a replication repository for the Migration Toolkit for Containers (MTC) and to create a Secret custom resource (CR) for MTC. Note Although the MCG Operator is deprecated , the MCG plugin is still available for OpenShift Data Foundation. To download the plugin, browse to Download Red Hat OpenShift Data Foundation and download the appropriate MCG plugin for your operating system. Prerequisites You must deploy OpenShift Data Foundation by using the appropriate Red Hat OpenShift Data Foundation deployment guide . Procedure Obtain the S3 endpoint, AWS_ACCESS_KEY_ID , and AWS_SECRET_ACCESS_KEY by running the describe command on the NooBaa custom resource. You use these credentials to add MCG as a replication repository. 3.5.3. Configuring Amazon Web Services You configure Amazon Web Services (AWS) S3 object storage as a replication repository for the Migration Toolkit for Containers (MTC) . Prerequisites You must have the AWS CLI installed. The AWS S3 storage bucket must be accessible to the source and target clusters. If you are using the snapshot copy method: You must have access to EC2 Elastic Block Storage (EBS). The source and target clusters must be in the same region. The source and target clusters must have the same storage class. The storage class must be compatible with snapshots.
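A quick, informal way to check the storage class and snapshot prerequisites on each cluster is shown below; it assumes CSI storage with the volume snapshot CRDs installed and is not part of the documented procedure:

oc get storageclass
oc get volumesnapshotclass

If no VolumeSnapshotClass is listed for the storage class you plan to use, the snapshot copy method is unlikely to work and the file system copy method is the safer choice.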
Procedure Set the BUCKET variable: USD BUCKET=<your_bucket> Set the REGION variable: USD REGION=<your_region> Create an AWS S3 bucket: USD aws s3api create-bucket \ --bucket USDBUCKET \ --region USDREGION \ --create-bucket-configuration LocationConstraint=USDREGION 1 1 us-east-1 does not support a LocationConstraint . If your region is us-east-1 , omit --create-bucket-configuration LocationConstraint=USDREGION . Create an IAM user: USD aws iam create-user --user-name velero 1 1 If you want to use Velero to back up multiple clusters with multiple S3 buckets, create a unique user name for each cluster. Create a velero-policy.json file: USD cat > velero-policy.json <<EOF { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "ec2:DescribeVolumes", "ec2:DescribeSnapshots", "ec2:CreateTags", "ec2:CreateVolume", "ec2:CreateSnapshot", "ec2:DeleteSnapshot" ], "Resource": "*" }, { "Effect": "Allow", "Action": [ "s3:GetObject", "s3:DeleteObject", "s3:PutObject", "s3:AbortMultipartUpload", "s3:ListMultipartUploadParts" ], "Resource": [ "arn:aws:s3:::USD{BUCKET}/*" ] }, { "Effect": "Allow", "Action": [ "s3:ListBucket", "s3:GetBucketLocation", "s3:ListBucketMultipartUploads" ], "Resource": [ "arn:aws:s3:::USD{BUCKET}" ] } ] } EOF Attach the policies to give the velero user the minimum necessary permissions: USD aws iam put-user-policy \ --user-name velero \ --policy-name velero \ --policy-document file://velero-policy.json Create an access key for the velero user: USD aws iam create-access-key --user-name velero Example output { "AccessKey": { "UserName": "velero", "Status": "Active", "CreateDate": "2017-07-31T22:24:41.576Z", "SecretAccessKey": <AWS_SECRET_ACCESS_KEY>, "AccessKeyId": <AWS_ACCESS_KEY_ID> } } Record the AWS_SECRET_ACCESS_KEY and the AWS_ACCESS_KEY_ID . You use the credentials to add AWS as a replication repository. 3.5.4. Configuring Google Cloud Platform You configure a Google Cloud Platform (GCP) storage bucket as a replication repository for the Migration Toolkit for Containers (MTC). Prerequisites You must have the gcloud and gsutil CLI tools installed. See the Google cloud documentation for details. The GCP storage bucket must be accessible to the source and target clusters. If you are using the snapshot copy method: The source and target clusters must be in the same region. The source and target clusters must have the same storage class. The storage class must be compatible with snapshots. Procedure Log in to GCP: USD gcloud auth login Set the BUCKET variable: USD BUCKET=<bucket> 1 1 Specify your bucket name. 
Create the storage bucket: USD gsutil mb gs://USDBUCKET/ Set the PROJECT_ID variable to your active project: USD PROJECT_ID=USD(gcloud config get-value project) Create a service account: USD gcloud iam service-accounts create velero \ --display-name "Velero service account" List your service accounts: USD gcloud iam service-accounts list Set the SERVICE_ACCOUNT_EMAIL variable to match its email value: USD SERVICE_ACCOUNT_EMAIL=USD(gcloud iam service-accounts list \ --filter="displayName:Velero service account" \ --format 'value(email)') Attach the policies to give the velero user the minimum necessary permissions: USD ROLE_PERMISSIONS=( compute.disks.get compute.disks.create compute.disks.createSnapshot compute.snapshots.get compute.snapshots.create compute.snapshots.useReadOnly compute.snapshots.delete compute.zones.get storage.objects.create storage.objects.delete storage.objects.get storage.objects.list iam.serviceAccounts.signBlob ) Create the velero.server custom role: USD gcloud iam roles create velero.server \ --project USDPROJECT_ID \ --title "Velero Server" \ --permissions "USD(IFS=","; echo "USD{ROLE_PERMISSIONS[*]}")" Add IAM policy binding to the project: USD gcloud projects add-iam-policy-binding USDPROJECT_ID \ --member serviceAccount:USDSERVICE_ACCOUNT_EMAIL \ --role projects/USDPROJECT_ID/roles/velero.server Update the IAM service account: USD gsutil iam ch serviceAccount:USDSERVICE_ACCOUNT_EMAIL:objectAdmin gs://USD{BUCKET} Save the IAM service account keys to the credentials-velero file in the current directory: USD gcloud iam service-accounts keys create credentials-velero \ --iam-account USDSERVICE_ACCOUNT_EMAIL You use the credentials-velero file to add GCP as a replication repository. 3.5.5. Configuring Microsoft Azure You configure a Microsoft Azure Blob storage container as a replication repository for the Migration Toolkit for Containers (MTC). Prerequisites You must have the Azure CLI installed. The Azure Blob storage container must be accessible to the source and target clusters. If you are using the snapshot copy method: The source and target clusters must be in the same region. The source and target clusters must have the same storage class. The storage class must be compatible with snapshots. Procedure Log in to Azure: USD az login Set the AZURE_RESOURCE_GROUP variable: USD AZURE_RESOURCE_GROUP=Velero_Backups Create an Azure resource group: USD az group create -n USDAZURE_RESOURCE_GROUP --location CentralUS 1 1 Specify your location. 
Set the AZURE_STORAGE_ACCOUNT_ID variable: USD AZURE_STORAGE_ACCOUNT_ID="veleroUSD(uuidgen | cut -d '-' -f5 | tr '[A-Z]' '[a-z]')" Create an Azure storage account: USD az storage account create \ --name USDAZURE_STORAGE_ACCOUNT_ID \ --resource-group USDAZURE_RESOURCE_GROUP \ --sku Standard_GRS \ --encryption-services blob \ --https-only true \ --kind BlobStorage \ --access-tier Hot Set the BLOB_CONTAINER variable: USD BLOB_CONTAINER=velero Create an Azure Blob storage container: USD az storage container create \ -n USDBLOB_CONTAINER \ --public-access off \ --account-name USDAZURE_STORAGE_ACCOUNT_ID Create a service principal and credentials for velero : USD AZURE_SUBSCRIPTION_ID=`az account list --query '[?isDefault].id' -o tsv` AZURE_TENANT_ID=`az account list --query '[?isDefault].tenantId' -o tsv` Create a service principal with the Contributor role, assigning a specific --role and --scopes : USD AZURE_CLIENT_SECRET=`az ad sp create-for-rbac --name "velero" \ --role "Contributor" \ --query 'password' -o tsv \ --scopes /subscriptions/USDAZURE_SUBSCRIPTION_ID/resourceGroups/USDAZURE_RESOURCE_GROUP` The CLI generates a password for you. Ensure you capture the password. After creating the service principal, obtain the client id. USD AZURE_CLIENT_ID=`az ad app credential list --id <your_app_id>` Note For this to be successful, you must know your Azure application ID. Save the service principal credentials in the credentials-velero file: USD cat << EOF > ./credentials-velero AZURE_SUBSCRIPTION_ID=USD{AZURE_SUBSCRIPTION_ID} AZURE_TENANT_ID=USD{AZURE_TENANT_ID} AZURE_CLIENT_ID=USD{AZURE_CLIENT_ID} AZURE_CLIENT_SECRET=USD{AZURE_CLIENT_SECRET} AZURE_RESOURCE_GROUP=USD{AZURE_RESOURCE_GROUP} AZURE_CLOUD_NAME=AzurePublicCloud EOF You use the credentials-velero file to add Azure as a replication repository. 3.5.6. Additional resources MTC workflow About data copy methods Adding a replication repository to the MTC web console 3.6. Uninstalling MTC and deleting resources You can uninstall the Migration Toolkit for Containers (MTC) and delete its resources to clean up the cluster. Note Deleting the velero CRDs removes Velero from the cluster. Prerequisites You must be logged in as a user with cluster-admin privileges. Procedure Delete the MigrationController custom resource (CR) on all clusters: USD oc delete migrationcontroller <migration_controller> Uninstall the Migration Toolkit for Containers Operator on OpenShift Container Platform 4 by using the Operator Lifecycle Manager. Delete cluster-scoped resources on all clusters by running the following commands: migration custom resource definitions (CRDs): USD oc delete USD(oc get crds -o name | grep 'migration.openshift.io') velero CRDs: USD oc delete USD(oc get crds -o name | grep 'velero') migration cluster roles: USD oc delete USD(oc get clusterroles -o name | grep 'migration.openshift.io') migration-operator cluster role: USD oc delete clusterrole migration-operator velero cluster roles: USD oc delete USD(oc get clusterroles -o name | grep 'velero') migration cluster role bindings: USD oc delete USD(oc get clusterrolebindings -o name | grep 'migration.openshift.io') migration-operator cluster role bindings: USD oc delete clusterrolebindings migration-operator velero cluster role bindings: USD oc delete USD(oc get clusterrolebindings -o name | grep 'velero') | [
"podman login registry.redhat.io",
"cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/operator.yml ./",
"cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/controller.yml ./",
"oc run test --image registry.redhat.io/ubi9 --command sleep infinity",
"oc create -f operator.yml",
"namespace/openshift-migration created rolebinding.rbac.authorization.k8s.io/system:deployers created serviceaccount/migration-operator created customresourcedefinition.apiextensions.k8s.io/migrationcontrollers.migration.openshift.io created role.rbac.authorization.k8s.io/migration-operator created rolebinding.rbac.authorization.k8s.io/migration-operator created clusterrolebinding.rbac.authorization.k8s.io/migration-operator created deployment.apps/migration-operator created Error from server (AlreadyExists): error when creating \"./operator.yml\": rolebindings.rbac.authorization.k8s.io \"system:image-builders\" already exists 1 Error from server (AlreadyExists): error when creating \"./operator.yml\": rolebindings.rbac.authorization.k8s.io \"system:image-pullers\" already exists",
"oc create -f controller.yml",
"oc get pods -n openshift-migration",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] stunnel_tcp_proxy: http://username:password@ip:port",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer egress: - {} policyTypes: - Egress",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer ingress: - {} policyTypes: - Ingress",
"apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: test-egress-policy namespace: <namespace> spec: egress: - to: cidrSelector: <cidr_of_source_or_target_cluster> type: Deny",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] rsync_endpoint_type: [NodePort|ClusterIP|Route]",
"spec: src_supplemental_groups: \"1000,2000\" target_supplemental_groups: \"2000,3000\"",
"oc get migrationcontroller <migration_controller> -n openshift-migration",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: <migration_controller> namespace: openshift-migration spec: stunnel_tcp_proxy: http://<username>:<password>@<ip>:<port> 1 noProxy: example.com 2",
"oc replace -f migration-controller.yaml -n openshift-migration",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] migration_rsync_privileged: true",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: name: migration-controller namespace: openshift-migration spec: [...] runAsRoot: true",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: name: migration-controller namespace: openshift-migration spec: [...] runAsUser: 10010001 runAsGroup: 3",
"BUCKET=<your_bucket>",
"REGION=<your_region>",
"aws s3api create-bucket --bucket USDBUCKET --region USDREGION --create-bucket-configuration LocationConstraint=USDREGION 1",
"aws iam create-user --user-name velero 1",
"cat > velero-policy.json <<EOF { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"ec2:DescribeVolumes\", \"ec2:DescribeSnapshots\", \"ec2:CreateTags\", \"ec2:CreateVolume\", \"ec2:CreateSnapshot\", \"ec2:DeleteSnapshot\" ], \"Resource\": \"*\" }, { \"Effect\": \"Allow\", \"Action\": [ \"s3:GetObject\", \"s3:DeleteObject\", \"s3:PutObject\", \"s3:AbortMultipartUpload\", \"s3:ListMultipartUploadParts\" ], \"Resource\": [ \"arn:aws:s3:::USD{BUCKET}/*\" ] }, { \"Effect\": \"Allow\", \"Action\": [ \"s3:ListBucket\", \"s3:GetBucketLocation\", \"s3:ListBucketMultipartUploads\" ], \"Resource\": [ \"arn:aws:s3:::USD{BUCKET}\" ] } ] } EOF",
"aws iam put-user-policy --user-name velero --policy-name velero --policy-document file://velero-policy.json",
"aws iam create-access-key --user-name velero",
"{ \"AccessKey\": { \"UserName\": \"velero\", \"Status\": \"Active\", \"CreateDate\": \"2017-07-31T22:24:41.576Z\", \"SecretAccessKey\": <AWS_SECRET_ACCESS_KEY>, \"AccessKeyId\": <AWS_ACCESS_KEY_ID> } }",
"gcloud auth login",
"BUCKET=<bucket> 1",
"gsutil mb gs://USDBUCKET/",
"PROJECT_ID=USD(gcloud config get-value project)",
"gcloud iam service-accounts create velero --display-name \"Velero service account\"",
"gcloud iam service-accounts list",
"SERVICE_ACCOUNT_EMAIL=USD(gcloud iam service-accounts list --filter=\"displayName:Velero service account\" --format 'value(email)')",
"ROLE_PERMISSIONS=( compute.disks.get compute.disks.create compute.disks.createSnapshot compute.snapshots.get compute.snapshots.create compute.snapshots.useReadOnly compute.snapshots.delete compute.zones.get storage.objects.create storage.objects.delete storage.objects.get storage.objects.list iam.serviceAccounts.signBlob )",
"gcloud iam roles create velero.server --project USDPROJECT_ID --title \"Velero Server\" --permissions \"USD(IFS=\",\"; echo \"USD{ROLE_PERMISSIONS[*]}\")\"",
"gcloud projects add-iam-policy-binding USDPROJECT_ID --member serviceAccount:USDSERVICE_ACCOUNT_EMAIL --role projects/USDPROJECT_ID/roles/velero.server",
"gsutil iam ch serviceAccount:USDSERVICE_ACCOUNT_EMAIL:objectAdmin gs://USD{BUCKET}",
"gcloud iam service-accounts keys create credentials-velero --iam-account USDSERVICE_ACCOUNT_EMAIL",
"az login",
"AZURE_RESOURCE_GROUP=Velero_Backups",
"az group create -n USDAZURE_RESOURCE_GROUP --location CentralUS 1",
"AZURE_STORAGE_ACCOUNT_ID=\"veleroUSD(uuidgen | cut -d '-' -f5 | tr '[A-Z]' '[a-z]')\"",
"az storage account create --name USDAZURE_STORAGE_ACCOUNT_ID --resource-group USDAZURE_RESOURCE_GROUP --sku Standard_GRS --encryption-services blob --https-only true --kind BlobStorage --access-tier Hot",
"BLOB_CONTAINER=velero",
"az storage container create -n USDBLOB_CONTAINER --public-access off --account-name USDAZURE_STORAGE_ACCOUNT_ID",
"AZURE_SUBSCRIPTION_ID=`az account list --query '[?isDefault].id' -o tsv` AZURE_TENANT_ID=`az account list --query '[?isDefault].tenantId' -o tsv`",
"AZURE_CLIENT_SECRET=`az ad sp create-for-rbac --name \"velero\" --role \"Contributor\" --query 'password' -o tsv --scopes /subscriptions/USDAZURE_SUBSCRIPTION_ID/resourceGroups/USDAZURE_RESOURCE_GROUP`",
"AZURE_CLIENT_ID=`az ad app credential list --id <your_app_id>`",
"cat << EOF > ./credentials-velero AZURE_SUBSCRIPTION_ID=USD{AZURE_SUBSCRIPTION_ID} AZURE_TENANT_ID=USD{AZURE_TENANT_ID} AZURE_CLIENT_ID=USD{AZURE_CLIENT_ID} AZURE_CLIENT_SECRET=USD{AZURE_CLIENT_SECRET} AZURE_RESOURCE_GROUP=USD{AZURE_RESOURCE_GROUP} AZURE_CLOUD_NAME=AzurePublicCloud EOF",
"oc delete migrationcontroller <migration_controller>",
"oc delete USD(oc get crds -o name | grep 'migration.openshift.io')",
"oc delete USD(oc get crds -o name | grep 'velero')",
"oc delete USD(oc get clusterroles -o name | grep 'migration.openshift.io')",
"oc delete clusterrole migration-operator",
"oc delete USD(oc get clusterroles -o name | grep 'velero')",
"oc delete USD(oc get clusterrolebindings -o name | grep 'migration.openshift.io')",
"oc delete clusterrolebindings migration-operator",
"oc delete USD(oc get clusterrolebindings -o name | grep 'velero')"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/migration_toolkit_for_containers/installing-mtc |
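After removing the resources listed above, an informal verification (not part of the documented procedure) is to search for any leftover cluster-scoped objects:

oc get crds | grep -E 'migration.openshift.io|velero'
oc get clusterroles,clusterrolebindings | grep -E 'migration|velero'

Both commands should return no output once the uninstallation is complete.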
Installing on any platform | Installing on any platform OpenShift Container Platform 4.17 Installing OpenShift Container Platform on any platform Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/installing_on_any_platform/index |
Chapter 7. Using the Kafka Bridge with 3scale | Chapter 7. Using the Kafka Bridge with 3scale You can deploy and integrate Red Hat 3scale API Management with the AMQ Streams Kafka Bridge. Using an existing 3scale deployment? If you already have 3scale deployed to OpenShift and you wish to use it with the Kafka Bridge, ensure that you have the setup described in Deploying 3Scale for the Kafka Bridge . 7.1. 3scale API management With a plain deployment of the Kafka Bridge, there is no provision for authentication or authorization, and no support for a TLS encrypted connection to external clients. 3scale can secure the Kafka Bridge with TLS, and provide authentication and authorization. Integration with 3scale also means that additional features like metrics, rate limiting and billing are available. With 3scale, you can use different types of authentication for requests from external clients wishing to access AMQ Streams. 3scale supports the following types of authentication: Standard API Keys Single randomized strings or hashes acting as an identifier and a secret token. Application Identifier and Key pairs Immutable identifier and mutable secret key strings. OpenID Connect Protocol for delegated authentication. 7.1.1. Kafka Bridge service discovery 3scale is integrated using service discovery, which requires that 3scale is deployed to the same OpenShift cluster as AMQ Streams and the Kafka Bridge. Your AMQ Streams Cluster Operator deployment must have the following environment variables set: STRIMZI_CUSTOM_KAFKA_BRIDGE_SERVICE_LABELS STRIMZI_CUSTOM_KAFKA_BRIDGE_SERVICE_ANNOTATIONS When the Kafka Bridge is deployed, the service that exposes the REST interface of the Kafka Bridge uses the annotations and labels for discovery by 3scale. A discovery.3scale.net=true label is used by 3scale to find a service. Annotations provide information about the service. You can check your configuration in the OpenShift console by navigating to Services for the Kafka Bridge instance. Under Annotations you will see the endpoint to the OpenAPI specification for the Kafka Bridge. 7.1.2. 3scale APIcast gateway policies 3scale is used in conjunction with 3scale APIcast, an API gateway deployed with 3scale that provides a single point of entry for the Kafka Bridge. APIcast policies provide a mechanism to customize how the gateway operates. 3scale provides a set of standard policies for gateway configuration. You can also create your own policies. For more information on APIcast policies, see the Red Hat 3scale documentation . APIcast policies for the Kafka Bridge A sample policy configuration for 3scale integration with the Kafka Bridge is provided with the policies_config.json file, which defines: Anonymous access Header modification Routing URL rewriting Gateway policies are enabled or disabled through this file. You can use this sample as a starting point for defining your own policies. Anonymous access The anonymous access policy exposes a service without authentication, providing default credentials (for anonymous access) when a HTTP client does not provide them. The policy is not mandatory and can be disabled or removed if authentication is always needed. Header modification The header modification policy allows existing HTTP headers to be modified, or new headers added to requests or responses passing through the gateway. For 3scale integration, the policy adds headers to every request passing through the gateway from a HTTP client to the Kafka Bridge. 
When the Kafka Bridge receives a request to create a new consumer, it returns a JSON payload containing a base_uri field with the URI that the consumer must use for all the subsequent requests. For example: { "instance_id": "consumer-1", "base_uri":"http://my-bridge:8080/consumers/my-group/instances/consumer1" } When using APIcast, clients send all subsequent requests to the gateway and not to the Kafka Bridge directly. So the URI requires the gateway hostname, not the address of the Kafka Bridge behind the gateway. Using header modification policies, headers are added to requests from the HTTP client so that the Kafka Bridge uses the gateway hostname. For example, by applying a Forwarded: host=my-gateway:80;proto=http header, the Kafka Bridge delivers the following to the consumer. { "instance_id": "consumer-1", "base_uri":"http://my-gateway:80/consumers/my-group/instances/consumer1" } An X-Forwarded-Path header carries the original path contained in a request from the client to the gateway. This header is strictly related to the routing policy applied when a gateway supports more than one Kafka Bridge instance. Routing A routing policy is applied when there is more than one Kafka Bridge instance. Requests must be sent to the same Kafka Bridge instance where the consumer was initially created, so a request must specify a route for the gateway to forward a request to the appropriate Kafka Bridge instance. A routing policy names each bridge instance, and routing is performed using the name. You specify the name in the KafkaBridge custom resource when you deploy the Kafka Bridge. For example, each request (using X-Forwarded-Path ) from a consumer to: http://my-gateway:80/my-bridge-1/consumers/my-group/instances/consumer1 is forwarded to: http://my-bridge-1-bridge-service:8080/consumers/my-group/instances/consumer1 The URL rewriting policy removes the bridge name, as it is not used when forwarding the request from the gateway to the Kafka Bridge. URL rewriting The URL rewriting policy ensures that a request to a specific Kafka Bridge instance from a client does not contain the bridge name when forwarding the request from the gateway to the Kafka Bridge. The bridge name is not used in the endpoints exposed by the bridge. 7.1.3. 3scale APIcast for TLS validation You can set up APIcast for TLS validation, which requires a self-managed deployment of APIcast using a template. The apicast service is exposed as a route. You can also apply a TLS policy to the Kafka Bridge API. 7.2. Deploying 3scale for the Kafka Bridge In order to use 3scale with the Kafka Bridge, you first deploy it and then configure it to discover the Kafka Bridge API. You will also use 3scale APIcast and 3scale toolbox. APIcast is provided by 3scale as an NGINX-based API gateway for HTTP clients to connect to the Kafka Bridge API service. 3scale toolbox is a configuration tool that is used to import the OpenAPI specification for the Kafka Bridge service to 3scale. In this scenario, you run AMQ Streams, Kafka, the Kafka Bridge, and 3scale/APIcast in the same OpenShift cluster. Note If you already have 3scale deployed in the same cluster as the Kafka Bridge, you can skip the deployment steps and use your current deployment. Prerequisites An understanding of 3scale AMQ Streams and Kafka is running The Kafka Bridge is deployed For the 3scale deployment: Check the Red Hat 3scale API Management supported configurations . Installation requires a user with cluster-admin role, such as system:admin .
You need access to the JSON files describing the: Kafka Bridge OpenAPI specification ( openapiv2.json ) Header modification and routing policies for the Kafka Bridge ( policies_config.json ) Find the JSON files on GitHub . For more information, see the Red Hat 3scale documentation . Procedure Deploy 3scale API Management to the OpenShift cluster. Create a new project or use an existing project. oc new-project my-project \ --description=" description " --display-name=" display_name " Deploy 3scale. The Red Hat 3scale documentation describes how to deploy 3scale on OpenShift using a template or operator. Whichever approach you use, make sure that you set the WILDCARD_DOMAIN parameter to the domain of your OpenShift cluster. Make a note of the URLs and credentials presented for accessing the 3scale Admin Portal. Grant authorization for 3scale to discover the Kafka Bridge service: oc adm policy add-cluster-role-to-user view system:serviceaccount: my-project :amp Verify that 3scale was successfully deployed to the OpenShift cluster from the OpenShift console or CLI. For example: oc get deployment 3scale-operator Set up 3scale toolbox. Use the information provided in the Red Hat 3scale documentation to install 3scale toolbox. Set environment variables to be able to interact with 3scale: export REMOTE_NAME=strimzi-kafka-bridge 1 export SYSTEM_NAME=strimzi_http_bridge_for_apache_kafka 2 export TENANT=strimzi-kafka-bridge-admin 3 export PORTAL_ENDPOINT=USDTENANT.3scale.net 4 export TOKEN= 3scale access token 5 1 REMOTE_NAME is the name assigned to the remote address of the 3scale Admin Portal. 2 SYSTEM_NAME is the name of the 3scale service/API created by importing the OpenAPI specification through the 3scale toolbox. 3 TENANT is the tenant name of the 3scale Admin Portal (that is, https://USDTENANT.3scale.net ). 4 PORTAL_ENDPOINT is the endpoint running the 3scale Admin Portal. 5 TOKEN is the access token provided by the 3scale Admin Portal for interaction through the 3scale toolbox or HTTP requests. Configure the remote web address of the 3scale toolbox: 3scale remote add USDREMOTE_NAME https://USDTOKEN@USDPORTAL_ENDPOINT/ Now the endpoint address of the 3scale Admin Portal does not need to be specified every time you run the toolbox. Check that your Cluster Operator deployment has the labels and annotations properties required for the Kafka Bridge service to be discovered by 3scale. #... env: - name: STRIMZI_CUSTOM_KAFKA_BRIDGE_SERVICE_LABELS value: | discovery.3scale.net=true - name: STRIMZI_CUSTOM_KAFKA_BRIDGE_SERVICE_ANNOTATIONS value: | discovery.3scale.net/scheme=http discovery.3scale.net/port=8080 discovery.3scale.net/path=/ discovery.3scale.net/description-path=/openapi #... If not, add the properties through the OpenShift console or try redeploying the Cluster Operator and the Kafka Bridge . Discover the Kafka Bridge API service through 3scale. Log in to the 3scale Admin Portal using the credentials provided when 3scale was deployed. From APIs on the Admin Portal Dashboard, click Create Product . Click Import from OpenShift . Choose the Kafka Bridge service. Click Create Product . You may need to refresh the page to see the Kafka Bridge service. Now you need to import the configuration for the service. You do this from an editor, but keep the portal open to check the imports are successful.
Edit the Host field in the OpenAPI specification (JSON file) to use the base URL of the Kafka Bridge service: For example: "host": "my-bridge-bridge-service.my-project.svc.cluster.local:8080" Check that the host URL includes the correct: Kafka Bridge name ( my-bridge ) Project name ( my-project ) Port for the Kafka Bridge ( 8080 ) Import the updated OpenAPI specification using the 3scale toolbox: 3scale import openapi -k -d USDREMOTE_NAME openapiv2.json -t myproject-my-bridge-bridge-service Import the header modification and routing policies for the service (JSON file). Locate the ID for the service you created in 3scale. Here we use the jq utility: export SERVICE_ID=USD(curl -k -s -X GET "https://USDPORTAL_ENDPOINT/admin/api/services.json?access_token=USDTOKEN" | jq ".services[] | select(.service.system_name | contains(\"USDSYSTEM_NAME\")) | .service.id") You need the ID when importing the policies. Import the policies: curl -k -X PUT "https://USDPORTAL_ENDPOINT/admin/api/services/USDSERVICE_ID/proxy/policies.json" --data "access_token=USDTOKEN" --data-urlencode policies_config@policies_config.json From the 3scale Admin Portal, navigate to Integration Configuration to check that the endpoints and policies for the Kafka Bridge service have loaded. Navigate to Applications Create Application Plan to create an application plan. Navigate to Audience Developer Applications Create Application to create an application. The application is required in order to obtain a user key for authentication. (Production environment step) To make the API available to the production gateway, promote the configuration: 3scale proxy-config promote USDREMOTE_NAME USDSERVICE_ID Use an API testing tool to verify that you can access the Kafka Bridge through the APIcast gateway using a call to create a consumer, and the user key created for the application. For example: https://my-project-my-bridge-bridge-service-3scale-apicast-staging.example.com:443/consumers/my-group?user_key=3dfc188650101010ecd7fdc56098ce95 If a payload is returned from the Kafka Bridge, the consumer was created successfully. { "instance_id": "consumer1", "base_uri": "https://my-project-my-bridge-bridge-service-3scale-apicast-staging.example.com:443/consumers/my-group/instances/consumer1" } The base URI is the address that the client will use in subsequent requests. | [
"{ \"instance_id\": \"consumer-1\", \"base_uri\":\"http://my-bridge:8080/consumers/my-group/instances/consumer1\" }",
"{ \"instance_id\": \"consumer-1\", \"base_uri\":\"http://my-gateway:80/consumers/my-group/instances/consumer1\" }",
"new-project my-project --description=\" description \" --display-name=\" display_name \"",
"adm policy add-cluster-role-to-user view system:serviceaccount: my-project :amp",
"get deployment 3scale-operator",
"export REMOTE_NAME=strimzi-kafka-bridge 1 export SYSTEM_NAME=strimzi_http_bridge_for_apache_kafka 2 export TENANT=strimzi-kafka-bridge-admin 3 export PORTAL_ENDPOINT=USDTENANT.3scale.net 4 export TOKEN= 3scale access token 5",
"3scale remote add USDREMOTE_NAME https://USDTOKEN@USDPORTAL_ENDPOINT/",
"# env: - name: STRIMZI_CUSTOM_KAFKA_BRIDGE_SERVICE_LABELS value: | discovery.3scale.net=true - name: STRIMZI_CUSTOM_KAFKA_BRIDGE_SERVICE_ANNOTATIONS value: | discovery.3scale.net/scheme=http discovery.3scale.net/port=8080 discovery.3scale.net/path=/ discovery.3scale.net/description-path=/openapi #",
"\"host\": \"my-bridge-bridge-service.my-project.svc.cluster.local:8080\"",
"3scale import openapi -k -d USDREMOTE_NAME openapiv2.json -t myproject-my-bridge-bridge-service",
"export SERVICE_ID=USD(curl -k -s -X GET \"https://USDPORTAL_ENDPOINT/admin/api/services.json?access_token=USDTOKEN\" | jq \".services[] | select(.service.system_name | contains(\\\"USDSYSTEM_NAME\\\")) | .service.id\")",
"curl -k -X PUT \"https://USDPORTAL_ENDPOINT/admin/api/services/USDSERVICE_ID/proxy/policies.json\" --data \"access_token=USDTOKEN\" --data-urlencode policies_config@policies_config.json",
"3scale proxy-config promote USDREMOTE_NAME USDSERVICE_ID",
"https//my-project-my-bridge-bridge-service-3scale-apicast-staging.example.com:443/consumers/my-group?user_key=3dfc188650101010ecd7fdc56098ce95",
"{ \"instance_id\": \"consumer1\", \"base uri\": \"https//my-project-my-bridge-bridge-service-3scale-apicast-staging.example.com:443/consumers/my-group/instances/consumer1\" }"
] | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.1/html/configuring_amq_streams_on_openshift/kafka-bridge-3-scale-str |
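After the consumer is created through the gateway, all subsequent Kafka Bridge calls use the returned base URI and the same user key. The following is a hedged sketch using the standard Kafka Bridge consumer endpoints; the gateway host, topic name, and user key are placeholders:

curl -X POST 'https://my-gateway:443/consumers/my-group/instances/consumer1/subscription?user_key=<user_key>' -H 'Content-Type: application/vnd.kafka.v2+json' -d '{"topics":["my-topic"]}'

curl 'https://my-gateway:443/consumers/my-group/instances/consumer1/records?user_key=<user_key>' -H 'Accept: application/vnd.kafka.json.v2+json'

The first request subscribes the consumer to a topic; the second polls for records, with APIcast routing both requests to the Kafka Bridge instance that owns the consumer.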
Chapter 3. Installing a cluster on Nutanix in a restricted network | Chapter 3. Installing a cluster on Nutanix in a restricted network In OpenShift Container Platform 4.14, you can install a cluster on Nutanix infrastructure in a restricted network by creating an internal mirror of the installation release content. 3.1. Prerequisites You have reviewed details about the OpenShift Container Platform installation and update processes. The installation program requires access to port 9440 on Prism Central and Prism Element. You verified that port 9440 is accessible. If you use a firewall, you have met these prerequisites: You confirmed that port 9440 is accessible. Control plane nodes must be able to reach Prism Central and Prism Element on port 9440 for the installation to succeed. You configured the firewall to grant access to the sites that OpenShift Container Platform requires. This includes the use of Telemetry. If your Nutanix environment is using the default self-signed SSL/TLS certificate, replace it with a certificate that is signed by a CA. The installation program requires a valid CA-signed certificate to access to the Prism Central API. For more information about replacing the self-signed certificate, see the Nutanix AOS Security Guide . If your Nutanix environment uses an internal CA to issue certificates, you must configure a cluster-wide proxy as part of the installation process. For more information, see Configuring a custom PKI . Important Use 2048-bit certificates. The installation fails if you use 4096-bit certificates with Prism Central 2022.x. You have a container image registry, such as Red Hat Quay. If you do not already have a registry, you can create a mirror registry using mirror registry for Red Hat OpenShift . You have used the oc-mirror OpenShift CLI (oc) plugin to mirror all of the required OpenShift Container Platform content and other images, including the Nutanix CSI Operator, to your mirror registry. Important Because the installation media is on the mirror host, you can use that computer to complete all installation steps. 3.2. About installations in restricted networks In OpenShift Container Platform 4.14, you can perform an installation that does not require an active connection to the internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster. If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like Amazon Web Service's Route 53 DNS and IAM services, require internet access. Depending on your network, you might require less internet access for an installation on bare metal hardware, Nutanix, or on VMware vSphere. To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift image registry and contains the installation media. You can create this registry on a mirror host, which can access both the internet and your closed network, or by using other methods that meet your restrictions. 3.2.1. Additional limits Clusters in restricted networks have the following additional limitations and restrictions: The ClusterVersion status includes an Unable to retrieve available updates error. By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags. 3.3. 
Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 3.4. Adding Nutanix root CA certificates to your system trust Because the installation program requires access to the Prism Central API, you must add your Nutanix trusted root CA certificates to your system trust before you install an OpenShift Container Platform cluster. Procedure From the Prism Central web console, download the Nutanix root CA certificates. 
Extract the compressed file that contains the Nutanix root CA certificates. Add the files for your operating system to the system trust. For example, on a Fedora operating system, run the following command: # cp certs/lin/* /etc/pki/ca-trust/source/anchors Update your system trust. For example, on a Fedora operating system, run the following command: # update-ca-trust extract 3.5. Downloading the RHCOS cluster image Prism Central requires access to the Red Hat Enterprise Linux CoreOS (RHCOS) image to install the cluster. You can use the installation program to locate and download the RHCOS image and make it available through an internal HTTP server or Nutanix Objects. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. For a restricted network installation, these files are on your mirror host. Procedure Change to the directory that contains the installation program and run the following command: USD ./openshift-install coreos print-stream-json Use the output of the command to find the location of the Nutanix image, and click the link to download it. Example output "nutanix": { "release": "411.86.202210041459-0", "formats": { "qcow2": { "disk": { "location": "https://rhcos.mirror.openshift.com/art/storage/releases/rhcos-4.11/411.86.202210041459-0/x86_64/rhcos-411.86.202210041459-0-nutanix.x86_64.qcow2", "sha256": "42e227cac6f11ac37ee8a2f9528bb3665146566890577fd55f9b950949e5a54b" Make the image available through an internal HTTP server or Nutanix Objects. Note the location of the downloaded image. You update the platform section in the installation configuration file ( install-config.yaml ) with the image's location before deploying the cluster. Snippet of an install-config.yaml file that specifies the RHCOS image platform: nutanix: clusterOSImage: http://example.com/images/rhcos-411.86.202210041459-0-nutanix.x86_64.qcow2 3.6. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Nutanix. Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. For a restricted network installation, these files are on your mirror host. You have the imageContentSourcePolicy.yaml file that was created when you mirrored your registry. You have the location of the Red Hat Enterprise Linux CoreOS (RHCOS) image you download. You have obtained the contents of the certificate for your mirror registry. You have retrieved a Red Hat Enterprise Linux CoreOS (RHCOS) image and uploaded it to an accessible location. You have verified that you have met the Nutanix networking requirements. For more information, see "Preparing to install on Nutanix". Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. 
However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Note Always delete the ~/.powervs directory to avoid reusing a stale configuration. Run the following command: USD rm -rf ~/.powervs At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select nutanix as the platform to target. Enter the Prism Central domain name or IP address. Enter the port that is used to log into Prism Central. Enter the credentials that are used to log into Prism Central. The installation program connects to Prism Central. Select the Prism Element that will manage the OpenShift Container Platform cluster. Select the network subnet to use. Enter the virtual IP address that you configured for control plane API access. Enter the virtual IP address that you configured for cluster ingress. Enter the base domain. This base domain must be the same one that you configured in the DNS records. Enter a descriptive name for your cluster. The cluster name you enter must match the cluster name you specified when configuring the DNS records. In the install-config.yaml file, set the value of platform.nutanix.clusterOSImage to the image location or name. For example: platform: nutanix: clusterOSImage: http://mirror.example.com/images/rhcos-47.83.202103221318-0-nutanix.x86_64.qcow2 Edit the install-config.yaml file to give the additional information that is required for an installation in a restricted network. Update the pullSecret value to contain the authentication information for your registry: pullSecret: '{"auths":{"<mirror_host_name>:5000": {"auth": "<credentials>","email": "[email protected]"}}}' For <mirror_host_name> , specify the registry domain name that you specified in the certificate for your mirror registry, and for <credentials> , specify the base64-encoded user name and password for your mirror registry. Add the additionalTrustBundle parameter and value. additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- The value must be the contents of the certificate file that you used for your mirror registry. The certificate file can be an existing, trusted certificate authority, or the self-signed certificate that you generated for the mirror registry. Add the image content resources, which resemble the following YAML excerpt: imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release For these values, use the imageContentSourcePolicy.yaml file that was created when you mirrored the registry. Optional: Update one or more of the default configuration parameters in the install.config.yaml file to customize the installation. For more information about the parameters, see "Installation configuration parameters". Note If you are installing a three-node cluster, be sure to set the compute.replicas parameter to 0 . This ensures that cluster's control planes are schedulable. For more information, see "Installing a three-node cluster on {platform}". 
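For illustration, a three-node configuration of this kind changes only the compute stanza of the install-config.yaml file. The following fragment reuses the field layout from the sample file later in this chapter and leaves every other setting untouched; treat it as a sketch, not a complete file:
compute:
- hyperthreading: Enabled
  name: worker
  replicas: 0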
Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources Installation configuration parameters for Nutanix 3.6.1. Sample customized install-config.yaml file for Nutanix You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 3 platform: nutanix: 4 cpus: 2 coresPerSocket: 2 memoryMiB: 8196 osDisk: diskSizeGiB: 120 categories: 5 - key: <category_key_name> value: <category_value> controlPlane: 6 hyperthreading: Enabled 7 name: master replicas: 3 platform: nutanix: 8 cpus: 4 coresPerSocket: 2 memoryMiB: 16384 osDisk: diskSizeGiB: 120 categories: 9 - key: <category_key_name> value: <category_value> metadata: creationTimestamp: null name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 11 serviceNetwork: - 172.30.0.0/16 platform: nutanix: apiVIP: 10.40.142.7 12 ingressVIP: 10.40.142.8 13 defaultMachinePlatform: bootType: Legacy categories: 14 - key: <category_key_name> value: <category_value> project: 15 type: name name: <project_name> prismCentral: endpoint: address: your.prismcentral.domainname 16 port: 9440 17 password: <password> 18 username: <username> 19 prismElements: - endpoint: address: your.prismelement.domainname port: 9440 uuid: 0005b0f1-8f43-a0f2-02b7-3cecef193712 subnetUUIDs: - c7938dc6-7659-453e-a688-e26020c68e43 clusterOSImage: http://example.com/images/rhcos-47.83.202103221318-0-nutanix.x86_64.qcow2 20 credentialsMode: Manual publish: External pullSecret: '{"auths":{"<local_registry>": {"auth": "<credentials>","email": "[email protected]"}}}' 21 fips: false 22 sshKey: ssh-ed25519 AAAA... 23 additionalTrustBundle: | 24 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 25 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev 1 10 12 13 16 17 18 19 Required. The installation program prompts you for this value. 2 6 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Although both sections currently define a single machine pool, it is possible that future versions of OpenShift Container Platform will support defining multiple compute pools during installation. Only one control plane pool is used. 3 7 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . 
If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. 4 8 Optional: Provide additional configuration for the machine pool parameters for the compute and control plane machines. 5 9 14 Optional: Provide one or more pairs of a prism category key and a prism category value. These category key-value pairs must exist in Prism Central. You can provide separate categories to compute machines, control plane machines, or all machines. 11 The cluster network plugin to install. The supported values are OVNKubernetes and OpenShiftSDN . The default value is OVNKubernetes . 15 Optional: Specify a project with which VMs are associated. Specify either name or uuid for the project type, and then provide the corresponding UUID or project name. You can associate projects to compute machines, control plane machines, or all machines. 20 Optional: By default, the installation program downloads and installs the Red Hat Enterprise Linux CoreOS (RHCOS) image. If Prism Central does not have internet access, you can override the default behavior by hosting the RHCOS image on any HTTP server or Nutanix Objects and pointing the installation program to the image. 21 For <local_registry> , specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example registry.example.com or registry.example.com:5000 . For <credentials> , specify the base64-encoded user name and password for your mirror registry. 22 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 23 Optional: You can provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 24 Provide the contents of the certificate file that you used for your mirror registry. 25 Provide these values from the metadata.name: release-0 section of the imageContentSourcePolicy.yaml file that was created when you mirrored the registry. 3.6.2. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. 
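As a concrete illustration of the last prerequisite: in a restricted-network Nutanix deployment, hosts that the installer and cluster must reach directly, such as an internal mirror registry or the Prism Central endpoint, are often added to spec.noProxy . Whether these particular hosts need to bypass your proxy is an assumption that depends on your network; the value below is only a sketch that reuses placeholders from earlier in this chapter:
proxy:
  noProxy: <mirror_host_name>,your.prismcentral.domainname,.example.com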
Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 3.7. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.14. Download and install the new version of oc . 
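To check whether the oc client currently on your PATH predates 4.14, you can print only the client version before downloading a newer binary. This is a minimal check, and the exact output format can differ between releases:
oc version --client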
Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.14 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 3.8. Configuring IAM for Nutanix Installing the cluster requires that the Cloud Credential Operator (CCO) operate in manual mode. While the installation program configures the CCO for manual mode, you must specify the identity and access management secrets. Prerequisites You have configured the ccoctl binary. You have an install-config.yaml file. Procedure Create a YAML file that contains the credentials data in the following format: Credentials data format credentials: - type: basic_auth 1 data: prismCentral: 2 username: <username_for_prism_central> password: <password_for_prism_central> prismElements: 3 - name: <name_of_prism_element> username: <username_for_prism_element> password: <password_for_prism_element> 1 Specify the authentication type. Only basic authentication is supported. 2 Specify the Prism Central credentials. 3 Optional: Specify the Prism Element credentials. 
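If you save this file at the default location that the ccoctl utility checks (noted later in this procedure), you can omit the --credentials-source-filepath option in the later step. The following shell sketch assumes that default location and keeps the placeholder values; the chmod step is a general precaution rather than a documented requirement:
mkdir -p ~/.nutanix
cat > ~/.nutanix/credentials <<'EOF'
credentials:
- type: basic_auth
  data:
    prismCentral:
      username: <username_for_prism_central>
      password: <password_for_prism_central>
EOF
chmod 600 ~/.nutanix/credentials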
Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: annotations: include.release.openshift.io/self-managed-high-availability: "true" labels: controller-tools.k8s.io: "1.0" name: openshift-machine-api-nutanix namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: NutanixProviderSpec secretRef: name: nutanix-credentials namespace: openshift-machine-api Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl nutanix create-shared-secrets \ --credentials-requests-dir=<path_to_credentials_requests_directory> \ 1 --output-dir=<ccoctl_output_dir> \ 2 --credentials-source-filepath=<path_to_credentials_file> 3 1 Specify the path to the directory that contains the files for the component CredentialsRequests objects. 2 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. 3 Optional: Specify the directory that contains the credentials data YAML file. By default, ccoctl expects this file to be in <home_directory>/.nutanix/credentials . Edit the install-config.yaml configuration file so that the credentialsMode parameter is set to Manual . Example install-config.yaml configuration file apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 ... 1 Add this line to set the credentialsMode parameter to Manual . Create the installation manifests by running the following command: USD openshift-install create manifests --dir <installation_directory> 1 1 Specify the path to the directory that contains the install-config.yaml file for your cluster. Copy the generated credential files to the target manifests directory by running the following command: USD cp <ccoctl_output_dir>/manifests/*credentials.yaml ./<installation_directory>/manifests Verification Ensure that the appropriate secrets exist in the manifests directory. USD ls ./<installation_directory>/manifests Example output cluster-config.yaml cluster-dns-02-config.yml cluster-infrastructure-02-config.yml cluster-ingress-02-config.yml cluster-network-01-crd.yml cluster-network-02-config.yml cluster-proxy-01-config.yaml cluster-scheduler-02-config.yml cvo-overrides.yaml kube-cloud-config.yaml kube-system-configmap-root-ca.yaml machine-config-server-tls-secret.yaml openshift-config-secret-pull-secret.yaml openshift-cloud-controller-manager-nutanix-credentials-credentials.yaml openshift-machine-api-nutanix-credentials-credentials.yaml 3.9. 
Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 3.10. Post installation Complete the following steps to complete the configuration of your cluster. 3.10.1. Disabling the default OperatorHub catalog sources Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator. Procedure Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object: USD oc patch OperatorHub cluster --type json \ -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]' Tip Alternatively, you can use the web console to manage catalog sources. 
From the Administration → Cluster Settings → Configuration → OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources. 3.10.2. Installing the policy resources into the cluster Mirroring the OpenShift Container Platform content using the oc-mirror OpenShift CLI (oc) plugin creates resources, which include catalogSource-certified-operator-index.yaml and imageContentSourcePolicy.yaml . The ImageContentSourcePolicy resource associates the mirror registry with the source registry and redirects image pull requests from the online registries to the mirror registry. The CatalogSource resource is used by Operator Lifecycle Manager (OLM) to retrieve information about the available Operators in the mirror registry, which lets users discover and install Operators. After you install the cluster, you must install these resources into the cluster. Prerequisites You have mirrored the image set to the registry mirror in the disconnected environment. You have access to the cluster as a user with the cluster-admin role. Procedure Log in to the OpenShift CLI as a user with the cluster-admin role. Apply the YAML files from the results directory to the cluster: USD oc apply -f ./oc-mirror-workspace/results-<id>/ Verification Verify that the ImageContentSourcePolicy resources were successfully installed: USD oc get imagecontentsourcepolicy Verify that the CatalogSource resources were successfully installed: USD oc get catalogsource --all-namespaces 3.10.3. Configuring the default storage container After you install the cluster, you must install the Nutanix CSI Operator and configure the default storage container for the cluster. For more information, see the Nutanix documentation for installing the CSI Operator and configuring registry storage . 3.11. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.14, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. 3.12. Additional resources About remote health monitoring 3.13. Next steps If necessary, see Opt out of remote health reporting If necessary, see Registering your disconnected cluster Customize your cluster | [
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"cp certs/lin/* /etc/pki/ca-trust/source/anchors",
"update-ca-trust extract",
"./openshift-install coreos print-stream-json",
"\"nutanix\": { \"release\": \"411.86.202210041459-0\", \"formats\": { \"qcow2\": { \"disk\": { \"location\": \"https://rhcos.mirror.openshift.com/art/storage/releases/rhcos-4.11/411.86.202210041459-0/x86_64/rhcos-411.86.202210041459-0-nutanix.x86_64.qcow2\", \"sha256\": \"42e227cac6f11ac37ee8a2f9528bb3665146566890577fd55f9b950949e5a54b\"",
"platform: nutanix: clusterOSImage: http://example.com/images/rhcos-411.86.202210041459-0-nutanix.x86_64.qcow2",
"./openshift-install create install-config --dir <installation_directory> 1",
"rm -rf ~/.powervs",
"platform: nutanix: clusterOSImage: http://mirror.example.com/images/rhcos-47.83.202103221318-0-nutanix.x86_64.qcow2",
"pullSecret: '{\"auths\":{\"<mirror_host_name>:5000\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}'",
"additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----",
"imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 3 platform: nutanix: 4 cpus: 2 coresPerSocket: 2 memoryMiB: 8196 osDisk: diskSizeGiB: 120 categories: 5 - key: <category_key_name> value: <category_value> controlPlane: 6 hyperthreading: Enabled 7 name: master replicas: 3 platform: nutanix: 8 cpus: 4 coresPerSocket: 2 memoryMiB: 16384 osDisk: diskSizeGiB: 120 categories: 9 - key: <category_key_name> value: <category_value> metadata: creationTimestamp: null name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 11 serviceNetwork: - 172.30.0.0/16 platform: nutanix: apiVIP: 10.40.142.7 12 ingressVIP: 10.40.142.8 13 defaultMachinePlatform: bootType: Legacy categories: 14 - key: <category_key_name> value: <category_value> project: 15 type: name name: <project_name> prismCentral: endpoint: address: your.prismcentral.domainname 16 port: 9440 17 password: <password> 18 username: <username> 19 prismElements: - endpoint: address: your.prismelement.domainname port: 9440 uuid: 0005b0f1-8f43-a0f2-02b7-3cecef193712 subnetUUIDs: - c7938dc6-7659-453e-a688-e26020c68e43 clusterOSImage: http://example.com/images/rhcos-47.83.202103221318-0-nutanix.x86_64.qcow2 20 credentialsMode: Manual publish: External pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 21 fips: false 22 sshKey: ssh-ed25519 AAAA... 23 additionalTrustBundle: | 24 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 25 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"credentials: - type: basic_auth 1 data: prismCentral: 2 username: <username_for_prism_central> password: <password_for_prism_central> prismElements: 3 - name: <name_of_prism_element> username: <username_for_prism_element> password: <password_for_prism_element>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: annotations: include.release.openshift.io/self-managed-high-availability: \"true\" labels: controller-tools.k8s.io: \"1.0\" name: openshift-machine-api-nutanix namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: NutanixProviderSpec secretRef: name: nutanix-credentials namespace: openshift-machine-api",
"ccoctl nutanix create-shared-secrets --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 1 --output-dir=<ccoctl_output_dir> \\ 2 --credentials-source-filepath=<path_to_credentials_file> 3",
"apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1",
"openshift-install create manifests --dir <installation_directory> 1",
"cp <ccoctl_output_dir>/manifests/*credentials.yaml ./<installation_directory>/manifests",
"ls ./<installation_directory>/manifests",
"cluster-config.yaml cluster-dns-02-config.yml cluster-infrastructure-02-config.yml cluster-ingress-02-config.yml cluster-network-01-crd.yml cluster-network-02-config.yml cluster-proxy-01-config.yaml cluster-scheduler-02-config.yml cvo-overrides.yaml kube-cloud-config.yaml kube-system-configmap-root-ca.yaml machine-config-server-tls-secret.yaml openshift-config-secret-pull-secret.yaml openshift-cloud-controller-manager-nutanix-credentials-credentials.yaml openshift-machine-api-nutanix-credentials-credentials.yaml",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'",
"oc apply -f ./oc-mirror-workspace/results-<id>/",
"oc get imagecontentsourcepolicy",
"oc get catalogsource --all-namespaces"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/installing_on_nutanix/installing-restricted-networks-nutanix-installer-provisioned |
Chapter 8. Next steps | Chapter 8. Next steps The following sections might be useful after deploying a proof of concept version of Red Hat Quay. Many of these procedures can be used on a proof of concept deployment, offering insights into Red Hat Quay's features. Using Red Hat Quay . The content in this guide explains the following concepts: Adding users and repositories Using image tags Building Dockerfiles with build workers Setting up build triggers Adding notifications for repository events and more Managing Red Hat Quay . The content in this guide explains the following concepts: Using SSL/TLS Configuring action log storage Configuring Clair security scanner Repository mirroring IPv6 and dual-stack deployments Configuring OIDC for Red Hat Quay Geo-replication and more | null | https://docs.redhat.com/en/documentation/red_hat_quay/3.13/html/proof_of_concept_-_deploying_red_hat_quay/poc-next-steps
Chapter 9. Known issues | Chapter 9. Known issues This part describes known issues in Red Hat Enterprise Linux 8.10. 9.1. Installer and image creation During RHEL installation on IBM Z, udev does not assign predictable interface names to RoCE cards enumerated by FID If you start a RHEL 8.7 or later installation with the net.naming-scheme=rhel-8.7 kernel command-line option, the udev device manager on the RHEL installation media ignores this setting for RoCE cards enumerated by the function identifier (FID). As a consequence, udev assigns unpredictable interface names to these devices. There is no workaround during the installation, but you can configure the feature after the installation. For further details, see Determining a predictable RoCE device name on the IBM Z platform . (JIRA:RHEL-11397) Installation fails on IBM Power 10 systems with LPAR and secure boot enabled RHEL installer is not integrated with static key secure boot on IBM Power 10 systems. Consequently, when logical partition (LPAR) is enabled with the secure boot option, the installation fails with the error, Unable to proceed with RHEL-x.x Installation . To work around this problem, install RHEL without enabling secure boot. After booting the system: Copy the signed kernel into the PReP partition using the dd command. Restart the system and enable secure boot. Once the firmware verifies the boot loader and the kernel, the system boots up successfully. For more information, see https://www.ibm.com/support/pages/node/6528884 Bugzilla:2025814 [1] Unexpected SELinux policies on systems where Anaconda is running as an application When Anaconda is running as an application on an already installed system (for example to perform another installation to an image file using the -image anaconda option), the system is not prohibited to modify the SELinux types and attributes during installation. As a consequence, certain elements of SELinux policy might change on the system where Anaconda is running. To work around this problem, do not run Anaconda on the production system. Instead, run Anaconda in a temporary virtual machine to keep the SELinux policy unchanged on a production system. Running anaconda as part of the system installation process such as installing from boot.iso or dvd.iso is not affected by this issue. Bugzilla:2050140 The auth and authconfig Kickstart commands require the AppStream repository The authselect-compat package is required by the auth and authconfig Kickstart commands during installation. Without this package, the installation fails if auth or authconfig are used. However, by design, the authselect-compat package is only available in the AppStream repository. To work around this problem, verify that the BaseOS and AppStream repositories are available to the installation program or use the authselect Kickstart command during installation. Bugzilla:1640697 [1] The reboot --kexec and inst.kexec commands do not provide a predictable system state Performing a RHEL installation with the reboot --kexec Kickstart command or the inst.kexec kernel boot parameters do not provide the same predictable system state as a full reboot. As a consequence, switching to the installed system without rebooting can produce unpredictable results. Note that the kexec feature is deprecated and will be removed in a future release of Red Hat Enterprise Linux. 
Bugzilla:1697896 [1] The USB CD-ROM drive is not available as an installation source in Anaconda Installation fails when the USB CD-ROM drive is the source for it and the Kickstart ignoredisk --only-use= command is specified. In this case, Anaconda cannot find and use this source disk. To work around this problem, use the harddrive --partition=sdX --dir=/ command to install from USB CD-ROM drive. As a result, the installation does not fail. Jira:RHEL-4707 Network access is not enabled by default in the installation program Several installation features require network access, for example, registration of a system using the Content Delivery Network (CDN), NTP server support, and network installation sources. However, network access is not enabled by default, and as a result, these features cannot be used until network access is enabled. To work around this problem, add ip=dhcp to boot options to enable network access when the installation starts. Optionally, passing a Kickstart file or a repository located on the network using boot options also resolves the problem. As a result, the network-based installation features can be used. Bugzilla:1757877 [1] Hard drive partitioned installations with iso9660 filesystem fails You cannot install RHEL on systems where the hard drive is partitioned with the iso9660 filesystem. This is due to the updated installation code that is set to ignore any hard disk containing a iso9660 file system partition. This happens even when RHEL is installed without using a DVD. To workaround this problem, add the following script in the Kickstart file to format the disc before the installation starts. Note: Before performing the workaround, backup the data available on the disk. The wipefs command formats all the existing data from the disk. As a result, installations work as expected without any errors. Jira:RHEL-4711 IBM Power systems with HASH MMU mode fail to boot with memory allocation failures IBM Power Systems with HASH memory allocation unit (MMU) mode support kdump up to a maximum of 192 cores. Consequently, the system fails to boot with memory allocation failures if kdump is enabled on more than 192 cores. This limitation is due to RMA memory allocations during early boot in HASH MMU mode. To work around this problem, use the Radix MMU mode with fadump enabled instead of using kdump . Bugzilla:2028361 [1] RHEL for Edge installer image fails to create mount points when installing an rpm-ostree payload When deploying rpm-ostree payloads, used for example in a RHEL for Edge installer image, the installer does not properly create some mount points for custom partitions. As a consequence, the installation is aborted with the following error: To work around this issue: Use an automatic partitioning scheme and do not add any mount points manually. Manually assign mount points only inside /var directory. For example, /var/ my-mount-point ), and the following standard directories: / , /boot , /var . As a result, the installation process finishes successfully. Jira:RHEL-4744 Images built with the stig profile remediation fails to boot with FIPS error FIPS mode is not supported by RHEL image builder. When using RHEL image builder customized with the xccdf_org.ssgproject.content_profile_stig profile remediation, the system fails to boot with the following error: Enabling the FIPS policy manually after the system image installation with the fips-mode-setup --enable command does not work, because the /boot directory is on a different partition. 
System boots successfully if FIPS is disabled. Currently, there is no workaround available. Note You can manually enable FIPS after installing the image by using the fips-mode-setup --enable command. Jira:RHEL-4649 9.2. Security OpenSC might not detect CardOS V5.3 card objects correctly The OpenSC toolkit does not correctly read cache from different PKCS #15 file offsets used in some CardOS V5.3 cards. Consequently, OpenSC might not be able to list card objects and prevent using them from different applications. To work around the problem, turn off file caching by setting the use_file_caching = false option in the /etc/opensc.conf file. Jira:RHEL-4077 sshd -T provides inaccurate information about Ciphers, MACs and KeX algorithms The output of the sshd -T command does not contain the system-wide crypto policy configuration or other options that could come from an environment file in /etc/sysconfig/sshd and that are applied as arguments on the sshd command. This occurs because the upstream OpenSSH project did not support the Include directive to support Red-Hat-provided cryptographic defaults in RHEL 8. Crypto policies are applied as command-line arguments to the sshd executable in the sshd.service unit during the service's start by using an EnvironmentFile . To work around the problem, use the source command with the environment file and pass the crypto policy as an argument to the sshd command, as in sshd -T USDCRYPTO_POLICY . For additional information, see Ciphers, MACs or KeX algorithms differ from sshd -T to what is provided by current crypto policy level . As a result, the output from sshd -T matches the currently configured crypto policy. Bugzilla:2044354 [1] RHV hypervisor might not work correctly when hardening the system during installation When installing Red Hat Virtualization Hypervisor (RHV-H) and applying the Red Hat Enterprise Linux 8 STIG profile, OSCAP Anaconda Add-on might harden the system as RHEL instead of RVH-H and remove essential packages for RHV-H. Consequently, the RHV hypervisor might not work. To work around the problem, install the RHV-H system without applying any profile hardening, and after the installation is complete, apply the profile by using OpenSCAP. As a result, the RHV hypervisor works correctly. Jira:RHEL-1826 CVE OVAL feeds are now only in the compressed format, and data streams are not in the SCAP 1.3 standard Red Hat provides CVE OVAL feeds in the bzip2-compressed format and are no longer available in the XML file format. Because referencing compressed content is not standardized in the Security Content Automation Protocol (SCAP) 1.3 specification, third-party SCAP scanners can have problems scanning rules that use the feed. Bugzilla:2028428 Certain Rsyslog priority strings do not work correctly Support for the GnuTLS priority string for imtcp that allows fine-grained control over encryption is not complete. Consequently, the following priority strings do not work properly in the Rsyslog remote logging application: To work around this problem, use only correctly working priority strings: As a result, current configurations must be limited to the strings that work correctly. Bugzilla:1679512 Server with GUI and Workstation installations are not possible with CIS Server profiles The CIS Server Level 1 and Level 2 security profiles are not compatible with the Server with GUI and Workstation software selections. As a consequence, a RHEL 8 installation with the Server with GUI software selection and CIS Server profiles is not possible. 
An attempted installation using the CIS Server Level 1 or Level 2 profiles and either of these software selections will generate the error message: If you need to align systems with the Server with GUI or Workstation software selections according to CIS benchmarks, use the CIS Workstation Level 1 or Level 2 profiles instead. Bugzilla:1843932 Remediating service-related rules during Kickstart installations might fail During a Kickstart installation, the OpenSCAP utility sometimes incorrectly shows that a service enable or disable state remediation is not needed. Consequently, OpenSCAP might set the services on the installed system to a noncompliant state. As a workaround, you can scan and remediate the system after the Kickstart installation. This will fix the service-related issues. Bugzilla:1834716 Kickstart uses org_fedora_oscap instead of com_redhat_oscap in RHEL 8 The Kickstart references the Open Security Content Automation Protocol (OSCAP) Anaconda add-on as org_fedora_oscap instead of com_redhat_oscap , which might cause confusion. This is necessary to keep compatibility with Red Hat Enterprise Linux 7. Bugzilla:1665082 [1] libvirt overrides xccdf_org.ssgproject.content_rule_sysctl_net_ipv4_conf_all_forwarding The libvirt virtualization framework enables IPv4 forwarding whenever a virtual network with a forward mode of route or nat is started. This overrides the configuration by the xccdf_org.ssgproject.content_rule_sysctl_net_ipv4_conf_all_forwarding rule, and subsequent compliance scans report the fail result when assessing this rule. Apply one of these scenarios to work around the problem: Uninstall the libvirt packages if your scenario does not require them. Change the forwarding mode of virtual networks created by libvirt . Remove the xccdf_org.ssgproject.content_rule_sysctl_net_ipv4_conf_all_forwarding rule by tailoring your profile. Bugzilla:2118758 The fapolicyd utility incorrectly allows executing changed files Correctly, the IMA hash of a file should update after any change to the file, and fapolicyd should prevent execution of the changed file. However, this does not happen due to differences in IMA policy setup and in file hashing by the evctml utility. As a result, the IMA hash is not updated in the extended attribute of a changed file. Consequently, fapolicyd incorrectly allows the execution of the changed file. Jira:RHEL-520 [1] The semanage fcontext command reorders local modifications The semanage fcontext -l -C command lists local file context modifications stored in the file_contexts.local file. The restorecon utility processes the entries in the file_contexts.local from the most recent entry to the oldest. However, semanage fcontext -l -C lists the entries in a different order. This mismatch between processing order and listing order might cause problems when managing SELinux rules. Jira:RHEL-24461 [1] OpenSSL in FIPS mode accepts only specific D-H parameters In FIPS mode, TLS clients that use OpenSSL return a bad dh value error and cancel TLS connections to servers that use manually generated parameters. This is because OpenSSL, when configured to work in compliance with FIPS 140-2, works only with Diffie-Hellman parameters compliant to NIST SP 800-56A rev3 Appendix D (groups 14, 15, 16, 17, and 18 defined in RFC 3526 and with groups defined in RFC 7919). Also, servers that use OpenSSL ignore all other parameters and instead select known parameters of similar size. To work around this problem, use only the compliant groups. 
Bugzilla:1810911 [1] crypto-policies incorrectly allow Camellia ciphers The RHEL 8 system-wide cryptographic policies should disable Camellia ciphers in all policy levels, as stated in the product documentation. However, the Kerberos protocol enables the ciphers by default. To work around the problem, apply the NO-CAMELLIA subpolicy: In the command, replace DEFAULT with the cryptographic level name if you have switched from DEFAULT previously. As a result, Camellia ciphers are correctly disallowed across all applications that use system-wide crypto policies only when you disable them through the workaround. Bugzilla:1919155 Smart-card provisioning process through OpenSC pkcs15-init does not work properly The file_caching option is enabled in the default OpenSC configuration, and the file caching functionality does not handle some commands from the pkcs15-init tool properly. Consequently, the smart-card provisioning process through OpenSC fails. To work around the problem, add the following snippet to the /etc/opensc.conf file: The smart-card provisioning through pkcs15-init only works if you apply the previously described workaround. Bugzilla:1947025 Connections to servers with SHA-1 signatures do not work with GnuTLS SHA-1 signatures in certificates are rejected by the GnuTLS secure communications library as insecure. Consequently, applications that use GnuTLS as a TLS backend cannot establish a TLS connection to peers that offer such certificates. This behavior is inconsistent with other system cryptographic libraries. To work around this problem, upgrade the server to use certificates signed with SHA-256 or stronger hash, or switch to the LEGACY policy. Bugzilla:1628553 [1] libselinux-python is available only through its module The libselinux-python package contains only Python 2 bindings for developing SELinux applications and it is used for backward compatibility. For this reason, libselinux-python is no longer available in the default RHEL 8 repositories through the yum install libselinux-python command. To work around this problem, enable both the libselinux-python and python27 modules, and install the libselinux-python package and its dependencies with the following commands: Alternatively, install libselinux-python using its install profile with a single command: As a result, you can install libselinux-python using the given module. Bugzilla:1666328 [1] udica processes UBI 8 containers only when started with --env container=podman The Red Hat Universal Base Image 8 (UBI 8) containers set the container environment variable to the oci value instead of the podman value. This prevents the udica tool from analyzing a container JavaScript Object Notation (JSON) file. To work around this problem, start a UBI 8 container using a podman command with the --env container=podman parameter. As a result, udica can generate an SELinux policy for a UBI 8 container only when you use the described workaround. Bugzilla:1763210 Negative effects of the default logging setup on performance The default logging environment setup might consume 4 GB of memory or even more and adjustments of rate-limit values are complex when systemd-journald is running with rsyslog . See the Negative effects of the RHEL default logging setup on performance and their mitigations Knowledgebase article for more information. 
Jira:RHELPLAN-10431 [1] SELINUX=disabled in /etc/selinux/config does not work properly Disabling SELinux using the SELINUX=disabled option in the /etc/selinux/config results in a process in which the kernel boots with SELinux enabled and switches to disabled mode later in the boot process. This might cause memory leaks. To work around this problem, disable SELinux by adding the selinux=0 parameter to the kernel command line as described in the Changing SELinux modes at boot time section of the Using SELinux title if your scenario really requires to completely disable SELinux. Jira:RHELPLAN-34199 [1] IKE over TCP connections do not work on custom TCP ports The tcp-remoteport Libreswan configuration option does not work properly. Consequently, an IKE over TCP connection cannot be established when a scenario requires specifying a non-default TCP port. Bugzilla:1989050 scap-security-guide cannot configure termination of idle sessions Even though the sshd_set_idle_timeout rule still exists in the data stream, the former method for idle session timeout of configuring sshd is no longer available. Therefore, the rule is marked as not applicable and cannot harden anything. Other methods for configuring idle session termination, such as systemd (Logind), are also not available. As a consequence, scap-security-guide cannot configure the system to reliably disconnect idle sessions after a certain amount of time. You can work around this problem in one of the following ways, which might fulfill the security requirement: Configuring the accounts_tmout rule. However, this variable could be overridden by using the exec command. Configuring the configure_tmux_lock_after_time and configure_bashrc_exec_tmux rules. This requires installing the tmux package. Upgrading to RHEL 8.7 or later where the systemd feature is already implemented together with the proper SCAP rule. Jira:RHEL-1804 The OSCAP Anaconda add-on does not fetch tailored profiles in the graphical installation The OSCAP Anaconda add-on does not provide an option to select or deselect tailoring of security profiles in the RHEL graphical installation. Starting from RHEL 8.8, the add-on does not take tailoring into account by default when installing from archives or RPM packages. Consequently, the installation displays the following error message instead of fetching an OSCAP tailored profile: To work around this problem, you must specify paths in the %addon org_fedora_oscap section of your Kickstart file, for example: As a result, you can use the graphical installation for OSCAP tailored profiles only with the corresponding Kickstart specifications. Jira:RHEL-1810 OpenSCAP memory-consumption problems On systems with limited memory, the OpenSCAP scanner might stop prematurely or it might not generate the results files. To work around this problem, you can customize the scanning profile to deselect rules that involve recursion over the entire / file system: rpm_verify_hashes rpm_verify_permissions rpm_verify_ownership file_permissions_unauthorized_world_writable no_files_unowned_by_user dir_perms_world_writable_system_owned file_permissions_unauthorized_suid file_permissions_unauthorized_sgid file_permissions_ungroupowned dir_perms_world_writable_sticky_bits For more details and more workarounds, see the related Knowledgebase article . Bugzilla:2161499 Rebuilding the rpm database assigns incorrect SELinux labeling Rebuilding the rpm database with the rpmdb --rebuilddb command assigns incorrect SELinux labels to the rpm database files. 
As a consequence, some services that use the rpm database might not work correctly. To work around this problem after rebuilding the database, relabel the database by using the restorecon -Rv /var/lib/rpm command. Bugzilla:2166153 ANSSI BP28 HP SCAP rules for Audit are incorrectly used on the 64-bit ARM architecture The ANSSI BP28 High profile in the SCAP Security Guide (SSG) contains the following security content automation protocol (SCAP) rules that configure the Linux Audit subsystem but are invalid on the 64-bit ARM architecture: audit_rules_unsuccessful_file_modification_creat audit_rules_unsuccessful_file_modification_open audit_rules_file_deletion_events_rename audit_rules_file_deletion_events_rmdir audit_rules_file_deletion_events_unlink audit_rules_dac_modification_chmod audit_rules_dac_modification_chown audit_rules_dac_modification_lchown If you configure your RHEL system running on a 64-bit ARM machine by using this profile, the Audit daemon does not start due to the use of invalid system calls. To work around the problem, either use profile tailoring to remove the previously mentioned rules from the data stream or remove the -S <syscall> snippets by editing files in the /etc/audit/rules.d directory. The files must not contain the following system calls: creat open rename rmdir unlink chmod chown lchown As a result of any of the two described workarounds, the Audit daemon can start even after you use the ANSSI BP28 High profile on a 64-bit ARM system. Jira:RHEL-1897 9.3. RHEL for Edge composer-cli fails to build RHEL for Edge images when nodejs or npm is included Currently, while using RHEL image builder, you cannot customize your RHEL 8 Edge images with the nodejs and npm packages, because it is not possible to build a RHEL for Edge image with the nodejs package. The NPM package manager expects its configuration in the {prefix}/etc/npmrc directory and the npm RPM packages a symlink at the /usr/etc/npmrc directory pointing to /etc/npmrc . To work around this problem, install the nodejs and npm packages after building your RHEL for Edge system. Jira:RHELDOCS-17126 [1] 9.4. Subscription management syspurpose addons have no effect on the subscription-manager attach --auto output In Red Hat Enterprise Linux 8, four attributes of the syspurpose command-line tool have been added: role , usage , service_level_agreement and addons . Currently, only role , usage and service_level_agreement affect the output of running the subscription-manager attach --auto command. Users who attempt to set values to the addons argument will not observe any effect on the subscriptions that are auto-attached. Bugzilla:1687900 9.5. Software management YUM functionalities or plug-ins might log messages even if a logging service is not available Certain YUM functionalities or plug-ins might log messages to standard output or standard error when a logging service is not available. The level of the log message indicates where the message is logged: Information messages are logged to standard output. Error and debugging messages are logged to standard error. As a consequence, when scripting YUM options, unwanted log messages on standard output or standard error can affect the functionality of the script. To work around this issue, suppress the log messages from standard output and standard error by using the yum -q command. This suppresses log messages but not command results that are expected on standard output. 
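A minimal sketch of the yum -q workaround in a script; the package name is only an example:
#!/bin/bash
# -q suppresses the informational and error log messages that yum might otherwise
# print when no logging service is available; expected command results still
# appear on standard output.
if yum -q -y install httpd; then
    echo "httpd installed"
else
    echo "installation failed" >&2
fi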
Jira:RHELPLAN-50409 [1] cr_compress_file_with_stat() can cause a memory leak The createrepo_c C library has the API cr_compress_file_with_stat() function. This function is declared with char **dst as a second parameter. Depending on its other parameters, cr_compress_file_with_stat() either uses dst as an input parameter, or uses it to return an allocated string. This unpredictable behavior can cause a memory leak, because it does not inform the user when to free dst contents. To work around this problem, a new API cr_compress_file_with_stat_v2 function has been added, which uses the dst parameter only as an input. It is declared as char *dst . This prevents memory leak. Note that the cr_compress_file_with_stat_v2 function is temporary and will be present only in RHEL 8. Later, cr_compress_file_with_stat() will be fixed instead. Bugzilla:1973588 [1] YUM transactions reported as successful when a scriptlet fails Since RPM version 4.6, post-install scriptlets are allowed to fail without being fatal to the transaction. This behavior propagates up to YUM as well. This results in scriptlets which might occasionally fail while the overall package transaction reports as successful. There is no workaround available at the moment. Note that this is expected behavior that remains consistent between RPM and YUM. Any issues in scriptlets should be addressed at the package level. Bugzilla:1986657 9.6. Shells and command-line tools ipmitool is incompatible with certain server platforms The ipmitool utility serves for monitoring, configuring, and managing devices that support the Intelligent Platform Management Interface (IPMI). The current version of ipmitool uses Cipher Suite 17 by default instead of the Cipher Suite 3. Consequently, ipmitool fails to communicate with certain bare metal nodes that announced support for Cipher Suite 17 during negotiation, but do not actually support this cipher suite. As a result, ipmitool stops with the no matching cipher suite error message. For more details, see the related Knowledgebase article . To solve this problem, update your baseboard management controller (BMC) firmware to use the Cipher Suite 17. Optionally, if the BMC firmware update is not available, you can work around this problem by forcing ipmitool to use a certain cipher suite. When invoking a managing task with ipmitool , add the -C option to the ipmitool command together with the number of the cipher suite you want to use. See the following example: Jira:RHEL-6846 ReaR fails to re-create a volume group when you do not use clean disks for restoring ReaR fails to perform recovery when you want to restore to disks that contain existing data. To work around this problem, wipe the disks manually before restoring to them if they have been previously used. To wipe the disks in the rescue environment, use one of the following commands before running the rear recover command: The dd command to overwrite the disks. The wipefs command with the -a flag to erase all available metadata. See the following example of wiping metadata from the /dev/sda disk: This command wipes the metadata from the partitions on /dev/sda first, and then the partition table itself. Bugzilla:1925531 The ReaR rescue image on UEFI systems with Secure Boot enabled fails to boot with the default settings ReaR image creation by using the rear mkrescue or rear mkbackup command fails with the following message: The missing files are part of the grub2-efi-x64-modules package. 
If you install this package, the rescue image is created successfully without any errors. When the UEFI Secure Boot is enabled, the rescue image is not bootable because it uses a boot loader that is not signed. To work around this problem, add the following variables to the /etc/rear/local.conf or /etc/rear/site.conf ReaR configuration file): With the suggested workaround, the image can be produced successfully even on systems without the grub2-efi-x64-modules package, and it is bootable on systems with Secure Boot enabled. In addition, during the system recovery, the bootloader of the recovered system is set to the EFI shim bootloader. For more information about UEFI , Secure Boot , and shim bootloader , see the UEFI: what happens when booting the system Knowledge Base article. Jira:RHELDOCS-18064 [1] coreutils might report misleading EPERM error codes GNU Core Utilities ( coreutils ) started using the statx() system call. If a seccomp filter returns an EPERM error code for unknown system calls, coreutils might consequently report misleading EPERM error codes because EPERM cannot be distinguished from the actual Operation not permitted error returned by a working statx() syscall. To work around this problem, update the seccomp filter to either permit the statx() syscall, or to return an ENOSYS error code for syscalls it does not know. Bugzilla:2030661 The %vmeff metric from the sysstat package displays incorrect values The sysstat package provides the %vmeff metric to measure the page reclaim efficiency. The values of the %vmeff column returned by the sar -B command are incorrect because sysstat does not parse all relevant /proc/vmstat values provided by later kernel versions. To work around this problem, you can calculate the %vmeff value manually from the /proc/vmstat file. For details, see Why the sar(1) tool reports %vmeff values beyond 100 % in RHEL 8 and RHEL 9? Jira:RHEL-12008 The %util and svctm columns produced by sar and iostat utilities are invalid When you collect system usage statistics by using the sar or iostat utilities on a system with kernel version 4.18.0-55.el8 or later, the %util and svctm columns produced by sar or iostat might contain invalid data. Jira:RHEL-23074 [1] 9.7. Infrastructure services Postfix TLS fingerprint algorithm in the FIPS mode needs to be changed to SHA-256 By default in RHEL 8, postfix uses MD5 fingerprints with the TLS for backward compatibility. But in the FIPS mode, the MD5 hashing function is not available, which might cause TLS to incorrectly function in the default postfix configuration. To work around this problem, the hashing function needs to be changed to SHA-256 in the postfix configuration file. For more details, see the related Knowledgebase article Fix postfix TLS in the FIPS mode by switching to SHA-256 instead of MD5 . Bugzilla:1711885 The brltty package is not multilib compatible It is not possible to have both 32-bit and 64-bit versions of the brltty package installed. You can either install the 32-bit ( brltty.i686 ) or the 64-bit ( brltty.x86_64 ) version of the package. The 64-bit version is recommended. Bugzilla:2008197 9.8. Networking Outdated third-party modules which use the negative_advice() function can crash the kernel The core networking operation negative_advice() calls the inline dst_negative_advice() and __dst_negative_advice() functions. The kernel in RHEL 8.10 patched a security issue (CVE-2024-36971) in these inline functions. 
If a third-party module was compiled before the fix, this module might call negative_advice() incorrectly. Consequently, the third-party module can crash the kernel. To solve this problem, use an updated module that correctly calls the negative_advice() function. Jira:RHELDOCS-18748 RoCE interfaces lose their IP settings due to an unexpected change of the network interface name The RDMA over Converged Ethernet (RoCE) interfaces lose their IP settings due to an unexpected change of the network interface name if both conditions are met: User upgrades from a RHEL 8.6 system or earlier. The RoCE card is enumerated by UID. To work around this problem: Create the /etc/systemd/network/98-rhel87-s390x.link file with the following content: Reboot the system for the changes to take effect. Upgrade to RHEL 8.7 or newer. Note that RoCE interfaces that are enumerated by function ID (FID) and are non-unique, will still use unpredictable interface names unless you set the net.naming-scheme=rhel-8.7 kernel parameter. In this case, the RoCE interfaces will switch to predictable names with the ens prefix. Jira:RHEL-11398 [1] Systems with the IPv6_rpfilter option enabled experience low network throughput Systems with the IPv6_rpfilter option enabled in the firewalld.conf file currently experience suboptimal performance and low network throughput in high traffic scenarios, such as 100 Gbps links. To work around the problem, disable the IPv6_rpfilter option. To do so, add the following line in the /etc/firewalld/firewalld.conf file. As a result, the system performs better, but also has reduced security. Bugzilla:1871860 [1] 9.9. Kernel The kernel ACPI driver reports it has no access to a PCIe ECAM memory region The Advanced Configuration and Power Interface (ACPI) table provided by firmware does not define a memory region on the PCI bus in the Current Resource Settings (_CRS) method for the PCI bus device. Consequently, the following warning message occurs during the system boot: However, the kernel is still able to access the 0x30000000-0x31ffffff memory region, and can assign that memory region to the PCI Enhanced Configuration Access Mechanism (ECAM) properly. You can verify that PCI ECAM works correctly by accessing the PCIe configuration space over the 256 byte offset with the following output: As a result, you can ignore the warning message. For more information about the problem, see the "Firmware Bug: ECAM area mem 0x30000000-0x31ffffff not reserved in ACPI namespace" appears during system boot solution. Bugzilla:1868526 [1] The tuned-adm profile powersave command causes the system to become unresponsive Executing the tuned-adm profile powersave command leads to an unresponsive state of the Penguin Valkyrie 2000 2-socket systems with the older Thunderx (CN88xx) processors. Consequently, reboot the system to resume working. To work around this problem, avoid using the powersave profile if your system matches the mentioned specifications. Bugzilla:1609288 [1] The HP NMI watchdog does not always generate a crash dump In certain cases, the hpwdt driver for the HP NMI watchdog is not able to claim a non-maskable interrupt (NMI) generated by the HPE watchdog timer because the NMI was instead consumed by the perfmon driver. The missing NMI is initiated by one of two conditions: The Generate NMI button on the Integrated Lights-Out (iLO) server management software. This button is triggered by a user. The hpwdt watchdog. The expiration by default sends an NMI to the server. 
Both sequences typically occur when the system is unresponsive. Under normal circumstances, the NMI handler for both these situations calls the kernel panic() function and, if configured, the kdump service generates a vmcore file. Because of the missing NMI, however, kernel panic() is not called and vmcore is not collected. In the first case (1.), if the system was unresponsive, it remains so. To work around this scenario, use the virtual Power button to reset or power cycle the server. In the second case (2.), the missing NMI is followed 9 seconds later by a reset from the Automated System Recovery (ASR). The HPE Gen9 Server line experiences this problem in single-digit percentages. The Gen10 line experiences it at an even lower frequency. Bugzilla:1602962 [1] Reloading an identical crash extension might cause segmentation faults When you load a copy of an already loaded crash extension file, it might trigger a segmentation fault. Currently, the crash utility detects if an original file has been loaded. Consequently, due to two identical files co-existing in the crash utility, a namespace collision occurs, which triggers the crash utility to cause a segmentation fault. You can work around the problem by loading the crash extension file only once. As a result, segmentation faults no longer occur in the described scenario. Bugzilla:1906482 Connections fail when attaching a virtual function to a virtual machine Pensando network cards that use the ionic device driver silently accept VLAN tag configuration requests and attempt configuring network connections while attaching network virtual functions (VF) to a virtual machine (VM). Such network connections fail because this feature is not yet supported by the card's firmware. Bugzilla:1930576 [1] The OPEN MPI library might trigger run-time failures with default PML In the OPEN Message Passing Interface (OPEN MPI) implementation 4.0.x series, Unified Communication X (UCX) is the default point-to-point communicator (PML). The later versions of the OPEN MPI 4.0.x series deprecated the openib Byte Transfer Layer (BTL). However, when OPEN MPI runs over a homogeneous cluster (same hardware and software configuration), UCX still uses the openib BTL for MPI one-sided operations. As a consequence, this might trigger execution errors. To work around this problem: Run the mpirun command using the following parameters, where: The -mca btl openib parameter disables the openib BTL. The -mca pml ucx parameter configures OPEN MPI to use the ucx PML. The -x UCX_NET_DEVICES= parameter restricts UCX to the specified devices. When OPEN MPI runs over a heterogeneous cluster (different hardware and software configuration), it uses UCX as the default PML. As a consequence, this might cause the OPEN MPI jobs to run with erratic performance, unresponsive behavior, or crash failures. To work around this problem, set the UCX priority by running the mpirun command with the following parameters: As a result, the OPEN MPI library is able to choose an alternative available transport layer over UCX. Bugzilla:1866402 [1] vmcore capture fails after memory hot-plug or unplug operation After performing the memory hot-plug or hot-unplug operation, the event comes after updating the device tree which contains memory layout information. As a result, the makedumpfile utility tries to access a non-existent physical address. The problem appears if all of the following conditions are met: A little-endian variant of IBM Power System runs RHEL 8. The kdump or fadump service is enabled on the system.
Consequently, the capture kernel fails to save vmcore if a kernel crash is triggered after the memory hot-plug or hot-unplug operation. To work around this problem, restart the kdump service after hot-plug or hot-unplug: As a result, vmcore is successfully saved in the described scenario. Bugzilla:1793389 [1] Using irqpoll causes vmcore generation failure An existing problem with the nvme driver on 64-bit ARM systems that run on the Amazon Web Services Graviton 1 processor causes vmcore generation to fail when you provide the irqpoll kernel command line parameter to the first kernel. Consequently, no vmcore file is dumped in the /var/crash/ directory upon a kernel crash. To work around this problem: Append irqpoll to the KDUMP_COMMANDLINE_REMOVE variable in the /etc/sysconfig/kdump file. Remove irqpoll from the KDUMP_COMMANDLINE_APPEND variable in the /etc/sysconfig/kdump file. Restart the kdump service: As a result, the first kernel boots correctly and the vmcore file is expected to be captured upon the kernel crash. Note that the Amazon Web Services Graviton 2 and Amazon Web Services Graviton 3 processors do not require you to manually remove the irqpoll parameter in the /etc/sysconfig/kdump file. The kdump service can use a significant amount of crash kernel memory to dump the vmcore file. Ensure that the capture kernel has sufficient memory available for the kdump service. For related information on this known issue, see The irqpoll kernel command line parameter might cause vmcore generation failure article. Bugzilla:1654962 [1] Hardware certification of the real-time kernel on systems with large core-counts might require passing the skew_tick=1 boot parameter Large or moderate-sized systems with numerous sockets and large core-counts can experience latency spikes due to lock contentions on xtime_lock, which is used in the timekeeping system. As a consequence, latency spikes and delays in hardware certifications might occur on multiprocessing systems. As a workaround, you can offset the timer tick per CPU to start at a different time by adding the skew_tick=1 boot parameter. To avoid lock conflicts, enable skew_tick=1: Enable the skew_tick=1 parameter with grubby. Reboot for changes to take effect. Verify the new settings by displaying the kernel parameters you pass during boot. Note that enabling skew_tick=1 causes a significant increase in power consumption and, therefore, it must be enabled only if you are running latency-sensitive real-time workloads. Jira:RHEL-9318 [1] Debug kernel fails to boot in crash capture environment on RHEL 8 Due to the memory-intensive nature of the debug kernel, a problem occurs when the debug kernel is in use and a kernel panic is triggered. As a consequence, the debug kernel is not able to boot as the capture kernel and a stack trace is generated instead. To work around this problem, increase the crash kernel memory as required. As a result, the debug kernel boots successfully in the crash capture environment. Bugzilla:1659609 [1] Allocating crash kernel memory fails at boot time On some Ampere Altra systems, allocating the crash kernel memory during boot fails when the 32-bit region is disabled in BIOS settings. Consequently, the kdump service fails to start. This is caused by memory fragmentation in the region below 4 GB with no fragment being large enough to contain the crash kernel memory. To work around this problem, enable the 32-bit memory region in BIOS as follows: Open the BIOS settings on your system. Open the Chipset menu.
Under Memory Configuration, enable the Slave 32-bit option. As a result, crash kernel memory allocation within the 32-bit region succeeds and the kdump service works as expected. Bugzilla:1940674 [1] The QAT manager leaves no spare device for LKCF The Intel(R) QuickAssist Technology (QAT) manager (qatmgr) is a user space process, which by default uses all QAT devices in the system. As a consequence, there are no QAT devices left for the Linux Kernel Cryptographic Framework (LKCF). There is no need to work around this situation, as this behavior is expected and a majority of users will use acceleration from the user space. Bugzilla:1920086 [1] Solarflare NICs fail to create the maximum number of virtual functions (VFs) The Solarflare NICs fail to create a maximum number of VFs due to insufficient resources. You can check the maximum number of VFs that a PCIe device can create in the /sys/bus/pci/devices/PCI_ID/sriov_totalvfs file. To work around this problem, you can lower either the number of VFs or the VF MSI interrupt value, either from the Solarflare Boot Manager on startup, or by using the Solarflare sfboot utility. The default VF MSI interrupt value is 8. To adjust the VF MSI interrupt value, use sfboot. Note: Adjusting the VF MSI interrupt value affects the VF performance. For more information about parameters to be adjusted accordingly, see the Solarflare Server Adapter user guide. Bugzilla:1971506 [1] Using page_poison=1 can cause a kernel crash When using page_poison=1 as the kernel parameter on firmware with a faulty EFI implementation, the operating system can cause the kernel to crash. By default, this option is disabled and it is not recommended to enable it, especially in production systems. Bugzilla:2050411 [1] The iwl7260-firmware breaks Wi-Fi on Intel Wi-Fi 6 AX200, AX210, and Lenovo ThinkPad P1 Gen 4 After updating the iwl7260-firmware or iwl7260-wifi driver to the version provided by RHEL 8.7 and later, the hardware gets into an incorrect internal state and reports its state incorrectly. Consequently, Intel Wi-Fi 6 cards might not work and display the error message: An unconfirmed workaround is to power the system off and back on again. Do not reboot. Bugzilla:2106341 [1] Secure boot on IBM Power Systems does not support migration Currently, on IBM Power Systems, a logical partition (LPAR) does not boot after successful physical volume (PV) migration. As a result, any type of automated migration with secure boot enabled on a partition fails. Bugzilla:2126777 [1] weak-modules from kmod fails to work with module inter-dependencies The weak-modules script provided by the kmod package determines which modules are kABI-compatible with installed kernels. However, while checking modules' kernel compatibility, weak-modules processes module symbol dependencies from the higher to the lower release of the kernel for which they were built. As a consequence, modules with inter-dependencies built against different kernel releases might be interpreted as non-compatible, and therefore the weak-modules script fails to work in this scenario. To work around the problem, build or put the extra modules against the latest stock kernel before you install the new kernel. Bugzilla:2103605 [1] kdump in Ampere Altra servers enters the OOM state The firmware in Ampere Altra and Altra Max servers currently causes the kernel to allocate too many event, interrupt, and command queues, which consumes too much memory. As a consequence, the kdump kernel enters the Out of memory (OOM) state.
To work around this problem, reserve extra memory for kdump by increasing the value of the crashkernel= kernel option to 640M . Bugzilla:2111855 [1] 9.10. File systems and storage LVM mirror devices that store a LUKS volume sometimes become unresponsive Mirrored LVM devices with a segment type of mirror that store a LUKS volume might become unresponsive under certain conditions. The unresponsive devices reject all I/O operations. To work around the issue, Red Hat recommends that you use LVM RAID 1 devices with a segment type of raid1 instead of mirror if you need to stack LUKS volumes on top of resilient software-defined storage. The raid1 segment type is the default RAID configuration type and replaces mirror as the recommended solution. To convert mirror devices to raid1 , see Converting a mirrored LVM device to a RAID1 device . Bugzilla:1730502 [1] The /boot file system cannot be placed on LVM You cannot place the /boot file system on an LVM logical volume. This limitation exists for the following reasons: On EFI systems, the EFI System Partition conventionally serves as the /boot file system. The uEFI standard requires a specific GPT partition type and a specific file system type for this partition. RHEL 8 uses the Boot Loader Specification (BLS) for system boot entries. This specification requires that the /boot file system is readable by the platform firmware. On EFI systems, the platform firmware can read only the /boot configuration defined by the uEFI standard. The support for LVM logical volumes in the GRUB 2 boot loader is incomplete. Red Hat does not plan to improve the support because the number of use cases for the feature is decreasing due to standards such as uEFI and BLS. Red Hat does not plan to support /boot on LVM. Instead, Red Hat provides tools for managing system snapshots and rollback that do not need the /boot file system to be placed on an LVM logical volume. Bugzilla:1496229 [1] LVM no longer allows creating volume groups with mixed block sizes LVM utilities such as vgcreate or vgextend no longer allow you to create volume groups (VGs) where the physical volumes (PVs) have different logical block sizes. LVM has adopted this change because file systems fail to mount if you extend the underlying logical volume (LV) with a PV of a different block size. To re-enable creating VGs with mixed block sizes, set the allow_mixed_block_sizes=1 option in the lvm.conf file. Bugzilla:1768536 Limitations of LVM writecache The writecache LVM caching method has the following limitations, which are not present in the cache method: You cannot name a writecache logical volume when using pvmove commands. You cannot use logical volumes with writecache in combination with thin pools or VDO. The following limitation also applies to the cache method: You cannot resize a logical volume while cache or writecache is attached to it. Jira:RHELPLAN-27987 [1] , Bugzilla:1798631 , Bugzilla:1808012 System panics after enabling the IOMMU Enabling the Input-Output Memory Management Unit (IOMMU) on the kernel command line by setting the intel_iommu parameter to on results in system panic with general protection fault for the 0x6b6b6b6b6b6b6b6b: 0000 non-canonical address. To work around this problem, ensure that intel_iommu is set to off . Jira:RHEL-1765 [1] Device-mapper multipath is not supported when using NVMe/TCP driver. The use of device-mapper multipath on top of NVMe/TCP devices can cause reduced performance and error handling. 
To avoid this problem, use native NVMe multipath instead of DM multipath tools. For RHEL 8, you can add the option nvme_core.multipath=Y to the kernel command line. Bugzilla:2022359 [1] The blk-availability systemd service deactivates complex device stacks In systemd , the default block deactivation code does not always handle complex stacks of virtual block devices correctly. In some configurations, virtual devices might not be removed during the shutdown, which causes error messages to be logged. To work around this problem, deactivate complex block device stacks by executing the following command: As a result, complex virtual device stacks are correctly deactivated during shutdown and do not produce error messages. Bugzilla:2011699 [1] XFS quota warnings are triggered too often Using the quota timer results in quota warnings triggering too often, which causes soft quotas to be enforced faster than they should. To work around this problem, do not use soft quotas, which will prevent triggering warnings. As a result, the amount of warning messages will not enforce soft quota limit anymore, respecting the configured timeout. Bugzilla:2059262 [1] 9.11. Dynamic programming languages, web and database servers Git fails to clone or fetch from repositories with potentially unsafe ownership To prevent remote code execution and mitigate CVE-2024-32004 , stricter ownership checks have been introduced in Git for cloning local repositories. Since the update introduced in the RHSA-2024:4084 advisory, Git treats local repositories with potentially unsafe ownership as dubious. As a consequence, if you attempt to clone from a repository locally hosted through git-daemon and you are not the owner of the repository, Git returns a security alert about dubious ownership and fails to clone or fetch from the repository. To work around this problem, explicitly mark the repository as safe by executing the following command: Jira:RHELDOCS-18435 [1] Creating virtual Python 3.11 environments fails when using the virtualenv utility The virtualenv utility in RHEL 8, provided by the python3-virtualenv package, is not compatible with Python 3.11. An attempt to create a virtual environment by using virtualenv will fail with the following error message: To create Python 3.11 virtual environments, use the python3.11 -m venv command instead, which uses the venv module from the standard library. Bugzilla:2165702 python3.11-lxml does not provide the lxml.isoschematron submodule The python3.11-lxml package is distributed without the lxml.isoschematron submodule because it is not under an open source license. The submodule implements ISO Schematron support. As an alternative, pre-ISO-Schematron validation is available in the lxml.etree.Schematron class. The remaining content of the python3.11-lxml package is unaffected. Bugzilla:2157673 PAM plug-in version 1.0 does not work in MariaDB MariaDB 10.3 provides the Pluggable Authentication Modules (PAM) plug-in version 1.0. MariaDB 10.5 provides the plug-in versions 1.0 and 2.0, version 2.0 is the default. The MariaDB PAM plug-in version 1.0 does not work in RHEL 8. To work around this problem, use the PAM plug-in version 2.0 provided by the mariadb:10.5 module stream. Bugzilla:1942330 Symbol conflicts between OpenLDAP libraries might cause crashes in httpd When both the libldap and libldap_r libraries provided by OpenLDAP are loaded and used within a single process, symbol conflicts between these libraries might occur. 
Consequently, Apache httpd child processes using the PHP ldap extension might end unexpectedly if the mod_security or mod_auth_openidc modules are also loaded by the httpd configuration. Since the RHEL 8.3 update to the Apache Portable Runtime (APR) library, you can work around the problem by setting the APR_DEEPBIND environment variable, which enables the use of the RTLD_DEEPBIND dynamic linker option when loading httpd modules. When the APR_DEEPBIND environment variable is enabled, crashes no longer occur in httpd configurations that load conflicting libraries. Bugzilla:1819607 [1] getpwnam() might fail when called by a 32-bit application When a user of NIS uses a 32-bit application that calls the getpwnam() function, the call fails if the nss_nis.i686 package is missing. To work around this problem, manually install the missing package by using the yum install nss_nis.i686 command. Bugzilla:1803161 9.12. Identity Management Actions required when running Samba as a print server and updating from RHEL 8.4 and earlier With this update, the samba package no longer creates the /var/spool/samba/ directory. If you use Samba as a print server and use /var/spool/samba/ in the [printers] share to spool print jobs, SELinux prevents Samba users from creating files in this directory. Consequently, print jobs fail and the auditd service logs a denied message in /var/log/audit/audit.log . To avoid this problem after updating your system from 8.4 and earlier: Search the [printers] share in the /etc/samba/smb.conf file. If the share definition contains path = /var/spool/samba/ , update the setting and set the path parameter to /var/tmp/ . Restart the smbd service: If you newly installed Samba on RHEL 8.5 or later, no action is required. The default /etc/samba/smb.conf file provided by the samba-common package in this case already uses the /var/tmp/ directory to spool print jobs. Bugzilla:2009213 [1] Using the cert-fix utility with the --agent-uid pkidbuser option breaks Certificate System Using the cert-fix utility with the --agent-uid pkidbuser option corrupts the LDAP configuration of Certificate System. As a consequence, Certificate System might become unstable and manual steps are required to recover the system. Bugzilla:1729215 FIPS mode does not support using a shared secret to establish a cross-forest trust Establishing a cross-forest trust using a shared secret fails in FIPS mode because NTLMSSP authentication is not FIPS-compliant. To work around this problem, authenticate with an Active Directory (AD) administrative account when establishing a trust between an IdM domain with FIPS mode enabled and an AD domain. Jira:RHEL-4847 Downgrading authselect after the rebase to version 1.2.2 breaks system authentication The authselect package has been rebased to the latest upstream version 1.2.2 . Downgrading authselect is not supported and breaks system authentication for all users, including root . If you downgraded the authselect package to 1.2.1 or earlier, perform the following steps to work around this problem: At the GRUB boot screen, select Red Hat Enterprise Linux with the version of the kernel that you want to boot and press e to edit the entry. Type single as a separate word at the end of the line that starts with linux and press Ctrl+X to start the boot process. Upon booting in single-user mode, enter the root password. 
Restore authselect configuration using the following command: Bugzilla:1892761 IdM to AD cross-realm TGS requests fail The Privilege Attribute Certificate (PAC) information in IdM Kerberos tickets is now signed with AES SHA-2 HMAC encryption, which is not supported by Active Directory (AD). Consequently, IdM to AD cross-realm TGS requests, that is, two-way trust setups, are failing with the following error: Jira:RHEL-4910 Potential risk when using the default value for ldap_id_use_start_tls option When using ldap:// without TLS for identity lookups, it can pose a risk for an attack vector. Particularly a man-in-the-middle (MITM) attack which could allow an attacker to impersonate a user by altering, for example, the UID or GID of an object returned in an LDAP search. Currently, the SSSD configuration option to enforce TLS, ldap_id_use_start_tls , defaults to false . Ensure that your setup operates in a trusted environment and decide if it is safe to use unencrypted communication for id_provider = ldap . Note id_provider = ad and id_provider = ipa are not affected as they use encrypted connections protected by SASL and GSSAPI. If it is not safe to use unencrypted communication, enforce TLS by setting the ldap_id_use_start_tls option to true in the /etc/sssd/sssd.conf file. The default behavior is planned to be changed in a future release of RHEL. Jira:RHELPLAN-155168 [1] pki-core-debuginfo update from RHEL 8.6 to RHEL 8.7 or later fails Updating the pki-core-debuginfo package from RHEL 8.6 to RHEL 8.7 or later fails. To work around this problem, run the following commands: yum remove pki-core-debuginfo yum update -y yum install pki-core-debuginfo yum install idm-pki-symkey-debuginfo idm-pki-tools-debuginfo Jira:RHEL-13125 [1] Migrated IdM users might be unable to log in due to mismatching domain SIDs If you have used the ipa migrate-ds script to migrate users from one IdM deployment to another, those users might have problems using IdM services because their previously existing Security Identifiers (SIDs) do not have the domain SID of the current IdM environment. For example, those users can retrieve a Kerberos ticket with the kinit utility, but they cannot log in. To work around this problem, see the following Knowledgebase article: Migrated IdM users unable to log in due to mismatching domain SIDs . Jira:RHELPLAN-109613 [1] IdM in FIPS mode does not support using the NTLMSSP protocol to establish a two-way cross-forest trust Establishing a two-way cross-forest trust between Active Directory (AD) and Identity Management (IdM) with FIPS mode enabled fails because the New Technology LAN Manager Security Support Provider (NTLMSSP) authentication is not FIPS-compliant. IdM in FIPS mode does not accept the RC4 NTLM hash that the AD domain controller uses when attempting to authenticate. Jira:RHEL-4898 Incorrect warning when setting expiration dates for a Kerberos principal If you set a password expiration date for a Kerberos principal, the current timestamp is compared to the expiration timestamp using a 32-bit signed integer variable. If the expiration date is more than 68 years in the future, it causes an integer variable overflow resulting in the following warning message being displayed: You can ignore this message, the password will expire correctly at the configured date and time. 
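For the Kerberos expiration warning above, a hedged example of setting a password expiration date on a principal with the MIT kadmin utility; the principal name and date are placeholders, and IdM environments usually manage expiration through password policies instead:
# kadmin.local -q "modify_principal -pwexpire '2030-01-01' user@EXAMPLE.COM"
Dates within the 32-bit range expire as configured; only dates more than 68 years in the future trigger the misleading warning.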
Bugzilla:2125318 Slow enumeration of a large number of entries in the NIS maps on RHEL 8 When you install the nis_nss package on RHEL 8, the /etc/default/NSS configuration file is missing because the file is no longer provided by the glibc-common package. As a consequence, enumeration of a large number of entries in the NIS maps on RHEL 8 takes significantly longer than on RHEL 7 because every request is processed individually by default and not in batches. To work around this problem, create the /etc/default/nss file with the following content and make sure to set the SETENT_BATCH_READ variable to TRUE : Jira:RHEL-34075 [1] Smartcard authentication might require configuration update after introducing the new option local_auth_policy After updating to RHEL 8.10, Smartcard authentication might fail due to changes introduced by the local_auth_policy option. When local_auth_policy is set to its default value, match , SSSD restricts offline authentication methods to those available online. As a result, if Smartcard authentication is not provided by the configured backend, for example when using auth_provider = ldap , it will not be available to users. To work around this issue, explicitly enable Smartcard authentication method by adding local_auth_policy = enable:smartcard to the domain section of the sssd.conf file, then restart SSSD. Jira:RHELDOCS-18777 SSSD retrieves incomplete list of members if the group size exceeds 1500 members During the integration of SSSD with Active Directory, SSSD retrieves incomplete group member lists when the group size exceeds 1500 members. This issue occurs because Active Directory's MaxValRange policy, which restricts the number of members retrievable in a single query, is set to 1500 by default. To work around this problem, change the MaxValRange setting in Active Directory to accommodate larger group sizes. Jira:RHELDOCS-19603 9.13. Desktop Disabling flatpak repositories from Software Repositories is not possible Currently, it is not possible to disable or remove flatpak repositories in the Software Repositories tool in the GNOME Software utility. Bugzilla:1668760 Generation 2 RHEL 8 virtual machines sometimes fail to boot on Hyper-V Server 2016 hosts When using RHEL 8 as the guest operating system on a virtual machine (VM) running on a Microsoft Hyper-V Server 2016 host, the VM in some cases fails to boot and returns to the GRUB boot menu. In addition, the following error is logged in the Hyper-V event log: This error occurs due to a UEFI firmware bug on the Hyper-V host. To work around this problem, use Hyper-V Server 2019 or later as the host. Bugzilla:1583445 [1] Drag-and-drop does not work between desktop and applications Due to a bug in the gnome-shell-extensions package, the drag-and-drop functionality does not currently work between desktop and applications. Support for this feature will be added back in a future release. Bugzilla:1717947 WebKitGTK fails to display web pages on IBM Z The WebKitGTK web browser engine fails when trying to display web pages on the IBM Z architecture. The web page remains blank and the WebKitGTK process stops unexpectedly. As a consequence, you cannot use certain features of applications that use WebKitGTK to display web pages, such as the following: The Evolution mail client The GNOME Online Accounts settings The GNOME Help application Jira:RHEL-4158 9.14. 
Graphics infrastructures The radeon driver fails to reset hardware correctly The radeon kernel driver currently does not reset hardware in the kexec context correctly. Instead, radeon falls over, which causes the rest of the kdump service to fail. To work around this problem, disable radeon in kdump by adding the following line to the /etc/kdump.conf file: Restart the system and kdump . After starting kdump , the force_rebuild 1 line might be removed from the configuration file. Note that in this scenario, no graphics is available during the dump process, but kdump works correctly. Bugzilla:1694705 [1] Multiple HDR displays on a single MST topology might not power on On systems using NVIDIA Turing GPUs with the nouveau driver, using a DisplayPort hub (such as a laptop dock) with multiple monitors which support HDR plugged into it might result in failure to turn on. This is due to the system erroneously thinking there is not enough bandwidth on the hub to support all of the displays. Bugzilla:1812577 [1] GUI in ESXi might crash due to low video memory The graphical user interface (GUI) on RHEL virtual machines (VMs) in the VMware ESXi 7.0.1 hypervisor with vCenter Server 7.0.1 requires a certain amount of video memory. If you connect multiple consoles or high-resolution monitors to the VM, the GUI requires at least 16 MB of video memory. If you start the GUI with less video memory, the GUI might end unexpectedly. To work around the problem, configure the hypervisor to assign at least 16 MB of video memory to the VM. As a result, the GUI on the VM no longer crashes. If you encounter this issue, Red Hat recommends that you report it to VMware. See also the following VMware article: VMs with high resolution VM console might experience a crash on ESXi 7.0.1 (83194) . Bugzilla:1910358 [1] VNC Viewer displays wrong colors with the 16-bit color depth on IBM Z The VNC Viewer application displays wrong colors when you connect to a VNC session on an IBM Z server with the 16-bit color depth. To work around the problem, set the 24-bit color depth on the VNC server. With the Xvnc server, replace the -depth 16 option with -depth 24 in the Xvnc configuration. As a result, VNC clients display the correct colors but use more network bandwidth with the server. Bugzilla:1886147 Unable to run graphical applications using sudo command When trying to run graphical applications as a user with elevated privileges, the application fails to open with an error message. The failure happens because Xwayland is restricted by the Xauthority file to use regular user credentials for authentication. To work around this problem, use the sudo -E command to run graphical applications as a root user. Bugzilla:1673073 Hardware acceleration is not supported on ARM Built-in graphics drivers do not support hardware acceleration or the Vulkan API on the 64-bit ARM architecture. To enable hardware acceleration or Vulkan on ARM, install the proprietary Nvidia driver. Jira:RHELPLAN-57914 [1] 9.15. Red Hat Enterprise Linux system roles Using the RHEL system role with Ansible 2.9 can display a warning about using dnf with the command module Since RHEL 8.8, the RHEL system roles no longer use the warn parameter in with the dnf module because this parameter was removed in Ansible Core 2.14. 
However, if you use the latest rhel-system-roles package with Ansible 2.9 and a role installs a package, one of the following warnings can be displayed: If you want to hide these warnings, add the command_warnings = False setting to the [defaults] section of the ansible.cfg file. However, note that this setting disables all warnings in Ansible. Jira:RHELDOCS-17954 Unable to manage localhost by using the localhost hostname in the playbook or inventory With the inclusion of the ansible-core 2.13 package in RHEL, if you are running Ansible on the same host that you manage, you cannot do it by using the localhost hostname in your playbook or inventory. This happens because ansible-core 2.13 uses the python38 module, and many of the libraries are missing, for example, blivet for the storage role and gobject for the network role. To work around this problem, if you are already using the localhost hostname in your playbook or inventory, add a connection by using ansible_connection=local, or create an inventory file that lists localhost with the ansible_connection=local option. With that, you are able to manage resources on localhost. For more details, see the article RHEL system roles playbooks fail when run on localhost. Bugzilla:2041997 The rhc system role fails on already registered systems when rhc_auth contains activation keys Executing playbook files on already registered systems fails if activation keys are specified for the rhc_auth parameter. To work around this issue, do not specify activation keys when executing the playbook file on the already registered system. Bugzilla:2186908 Configuring the imuxsock input basics type causes a problem Configuring the "imuxsock" input basics type through the logging RHEL system role and the use_imuxsock option causes a problem in the resulting configuration on the managed nodes. This role sets the name parameter; however, the "imuxsock" input type does not support the name parameter. As a result, the rsyslog logging utility prints the parameter 'name' not known - typo in config file? error. Jira:RHELDOCS-18326 For RHEL 9 UEFI managed nodes the bootloader_password variable of the bootloader RHEL system role does not work Previously, the bootloader_password variable incorrectly placed the password information in the /boot/efi/EFI/redhat/user.cfg file. The proper location was the /boot/grub2/user.cfg file. Consequently, when you rebooted the managed node to modify any boot loader entry, GRUB2 did not prompt you for a password. To work around this problem, you can manually move the user.cfg file from the incorrect /boot/efi/EFI/redhat/ directory to the correct /boot/grub2/ directory to achieve the expected behavior. Jira:RHEL-45711 9.16. Virtualization Using a large number of queues might cause Windows virtual machines to fail Windows virtual machines (VMs) might fail when the virtual Trusted Platform Module (vTPM) device is enabled and the multi-queue virtio-net feature is configured to use more than 250 queues. This problem is caused by a limitation in the vTPM device. The vTPM device has a hard-coded limit on the maximum number of opened file descriptors. Since multiple file descriptors are opened for every new queue, the internal vTPM limit can be exceeded, causing the VM to fail. To work around this problem, choose one of the following two options: Keep the vTPM device enabled, but use fewer than 250 queues. Disable the vTPM device to use more than 250 queues.
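For the vTPM queue limit above, the virtio-net queue count is set in the interface definition of the VM's libvirt XML; a sketch of keeping it below the limit, assuming you edit the domain with virsh edit (the network name and queue value are illustrative):
<interface type='network'>
  <source network='default'/>
  <model type='virtio'/>
  <driver name='vhost' queues='128'/>
</interface>
With the vTPM device enabled, keep the queues value under 250; alternatively, remove the <tpm> device from the domain XML if you need more queues.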
Jira:RHEL-13336 [1] The Milan VM CPU type is sometimes not available on AMD Milan systems On certain AMD Milan systems, the Enhanced REP MOVSB ( erms ) and Fast Short REP MOVSB ( fsrm ) feature flags are disabled in the BIOS by default. Consequently, the Milan CPU type might not be available on these systems. In addition, VM live migration between Milan hosts with different feature flag settings might fail. To work around these problems, manually turn on erms and fsrm in the BIOS of your host. Bugzilla:2077770 [1] SMT CPU topology is not detected by VMs when using host passthrough mode on AMD EPYC When a virtual machine (VM) boots with the CPU host passthrough mode on an AMD EPYC host, the TOPOEXT CPU feature flag is not present. Consequently, the VM is not able to detect a virtual CPU topology with multiple threads per core. To work around this problem, boot the VM with the EPYC CPU model instead of host passthrough. Bugzilla:1740002 Attaching LUN devices to virtual machines using virtio-blk does not work The q35 machine type does not support transitional virtio 1.0 devices, and RHEL 8 therefore lacks support for features that were deprecated in virtio 1.0. In particular, it is not possible on a RHEL 8 host to send SCSI commands from virtio-blk devices. As a consequence, attaching a physical disk as a LUN device to a virtual machine fails when using the virtio-blk controller. Note that physical disks can still be passed through to the guest operating system, but they should be configured with the device='disk' option rather than device='lun' . Bugzilla:1777138 [1] Virtual machines sometimes fail to start when using many virtio-blk disks Adding a large number of virtio-blk devices to a virtual machine (VM) might exhaust the number of interrupt vectors available in the platform. If this occurs, the VM's guest OS fails to boot, and displays a dracut-initqueue[392]: Warning: Could not boot error. Bugzilla:1719687 Virtual machines with iommu_platform=on fail to start on IBM POWER RHEL 8 currently does not support the iommu_platform=on parameter for virtual machines (VMs) on IBM POWER system. As a consequence, starting a VM with this parameter on IBM POWER hardware results in the VM becoming unresponsive during the boot process. Bugzilla:1910848 IBM POWER hosts now work correctly when using the ibmvfc driver When running RHEL 8 on a PowerVM logical partition (LPAR), a variety of errors could previously occur due to problems with the ibmvfc driver. As a consequence, a kernel panic triggered on the host under certain circumstances, such as: Using the Live Partition Mobility (LPM) feature Resetting a host adapter Using SCSI error handling (SCSI EH) functions With this update, the handling of ibmvfc has been fixed, and the described kernel panics no longer occur. Bugzilla:1961722 [1] Using perf kvm record on IBM POWER Systems can cause the VM to crash When using a RHEL 8 host on the little-endian variant of IBM POWER hardware, using the perf kvm record command to collect trace event samples for a KVM virtual machine (VM) in some cases results in the VM becoming unresponsive. This situation occurs when: The perf utility is used by an unprivileged user, and the -p option is used to identify the VM - for example perf kvm record -e trace_cycles -p 12345 . The VM was started using the virsh shell. To work around this problem, use the perf kvm utility with the -i option to monitor VMs that were created using the virsh shell. 
For example: Note that when using the -i option, child tasks do not inherit counters, and threads will therefore not be monitored. Bugzilla:1924016 [1] Windows Server 2016 virtual machines with Hyper-V enabled fail to boot when using certain CPU models Currently, it is not possible to boot a virtual machine (VM) that uses Windows Server 2016 as the guest operating system, has the Hyper-V role enabled, and uses one of the following CPU models: EPYC-IBPB EPYC To work around this problem, use the EPYC-v3 CPU model, or manually enable the xsaves CPU flag for the VM. Bugzilla:1942888 [1] Migrating a POWER9 guest from a RHEL 7-ALT host to RHEL 8 fails Currently, migrating a POWER9 virtual machine from a RHEL 7-ALT host system to RHEL 8 becomes unresponsive with a Migration status: active status. To work around this problem, disable Transparent Huge Pages (THP) on the RHEL 7-ALT host, which enables the migration to complete successfully. Bugzilla:1741436 [1] Using virt-customize sometimes causes guestfs-firstboot to fail After modifying a virtual machine (VM) disk image using the virt-customize utility, the guestfs-firstboot service in some cases fails due to incorrect SELinux permissions. This causes a variety of problems during VM startup, such as failing user creation or system registration. To avoid this problem, use the virt-customize command with the --selinux-relabel option. Bugzilla:1554735 Deleting a forward interface from a macvtap virtual network resets all connection counts of this network Currently, deleting a forward interface from a macvtap virtual network with multiple forward interfaces also resets the connection status of the other forward interfaces of the network. As a consequence, the connection information in the live network XML is incorrect. Note, however, that this does not affect the functionality of the virtual network. To work around the issue, restart the libvirtd service on your host. Bugzilla:1332758 Virtual machines with SLOF fail to boot in netcat interfaces When using a netcat ( nc ) interface to access the console of a virtual machine (VM) that is currently waiting at the Slimline Open Firmware (SLOF) prompt, the user input is ignored and VM stays unresponsive. To work around this problem, use the nc -C option when connecting to the VM, or use a telnet interface instead. Bugzilla:1974622 [1] Attaching mediated devices to virtual machines in virt-manager in some cases fails The virt-manager application is currently able to detect mediated devices, but cannot recognize whether the device is active. As a consequence, attempting to attach an inactive mediated device to a running virtual machine (VM) using virt-manager fails. Similarly, attempting to create a new VM that uses an inactive mediated device fails with a device not found error. To work around this issue, use the virsh nodedev-start or mdevctl start commands to activate the mediated device before using it in virt-manager . Bugzilla:2026985 RHEL 9 virtual machines fail to boot in POWER8 compatibility mode Currently, booting a virtual machine (VM) that runs RHEL 9 as its guest operating system fails if the VM also uses CPU configuration similar to the following: To work around this problem, do not use POWER8 compatibility mode in RHEL 9 VMs. In addition, note that running RHEL 9 VMs is not possible on POWER8 hosts. 
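For the POWER8 compatibility mode issue above, the problematic CPU configuration is typically expressed in the domain XML along these lines; this is a hedged sketch of the usual libvirt form for POWER compatibility modes, not the exact snippet from this note:
<cpu mode='host-model'>
  <model>power8</model>
</cpu>
Removing the power8 model element, or using a POWER9 definition, avoids the boot failure for RHEL 9 guests.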
Bugzilla:2035158 SUID and SGID are not cleared automatically on virtiofs When you run the virtiofsd service with the killpriv_v2 feature, your system might not automatically clear the SUID and SGID permissions after performing some file-system operations. Consequently, not clearing the permissions might cause a potential security threat. To work around this issue, disable the killpriv_v2 feature by entering the following command: Bugzilla:1966475 [1] Restarting the OVS service on a host might block network connectivity on its running VMs When the Open vSwitch (OVS) service restarts or crashes on a host, virtual machines (VMs) that are running on this host cannot recover the state of the networking device. As a consequence, VMs might be completely unable to receive packets. This problem only affects systems that use the packed virtqueue format in their virtio networking stack. To work around this problem, use the packed=off parameter in the virtio networking device definition to disable packed virtqueue. With packed virtqueue disabled, the state of the networking device can, in some situations, be recovered from RAM. Bugzilla:1792683 nodedev-dumpxml does not list attributes correctly for certain mediated devices Currently, the nodedev-dumpxml does not list attributes correctly for mediated devices that were created using the nodedev-create command. To work around this problem, use the nodedev-define and nodedev-start commands instead. Bugzilla:2143160 Starting a VM with an NVIDIA A16 GPU sometimes causes the host GPU to stop working Currently, if you start a VM that uses an NVIDIA A16 GPU passthrough device, the NVIDIA A16 GPU physical device on the host system in some cases stops working. To work around the problem, reboot the hypervisor and set the reset_method for the GPU device to bus : For details, see the Red Hat Knowledgebase . Jira:RHEL-2451 [1] Windows Server 2019 virtual machines crash on boot if using more than 128 cores per CPU Virtual machines (VMs) that use a Windows Server 2019 guest operating system currently fail to boot when they are configured to use more than 128 cores for a single virtual CPU (vCPU). Instead of booting, the VM displays a stop error on a blue screen. To work around this issue, use fewer than 128 core per vCPU. Jira:RHELDOCS-18863 [1] 9.17. RHEL in cloud environments Setting static IP in a RHEL virtual machine on a VMware host does not work Currently, when using RHEL as a guest operating system of a virtual machine (VM) on a VMware host, the DatasourceOVF function does not work correctly. As a consequence, if you use the cloud-init utility to set the VM's network to static IP and then reboot the VM, the VM's network will be changed to DHCP. To work around this issue, see the VMware Knowledge Base . Jira:RHEL-12122 kdump sometimes does not start on Azure and Hyper-V On RHEL 8 guest operating systems hosted on the Microsoft Azure or Hyper-V hypervisors, starting the kdump kernel in some cases fails when post-exec notifiers are enabled. To work around this problem, disable crash kexec post notifiers: Bugzilla:1865745 [1] The SCSI host address sometimes changes when booting a Hyper-V VM with multiple guest disks Currently, when booting a RHEL 8 virtual machine (VM) on the Hyper-V hypervisor, the host portion of the Host, Bus, Target, Lun (HBTL) SCSI address in some cases changes. As a consequence, automated tasks set up with the HBTL SCSI identification or device node in the VM do not work consistently. 
This occurs if the VM has more than one disk or if the disks have different sizes. To work around the problem, modify your kickstart files, using one of the following methods: Method 1: Use persistent identifiers for SCSI devices. You can use for example the following powershell script to determine the specific device identifiers: You can use this script on the hyper-v host, for example as follows: Afterwards, the disk values can be used in the kickstart file, for example as follows: As these values are specific for each virtual disk, the configuration needs to be done for each VM instance. It might, therefore, be useful to use the %include syntax to place the disk information into a separate file. Method 2: Set up device selection by size. A kickstart file that configures disk selection based on size must include lines similar to the following: Bugzilla:1906870 [1] RHEL instances on Azure fail to boot if provisioned by cloud-init and configured with an NFSv3 mount entry Currently, booting a RHEL virtual machine (VM) on the Microsoft Azure cloud platform fails if the VM was provisioned by the cloud-init tool and the guest operating system of the VM has an NFSv3 mount entry in the /etc/fstab file. Bugzilla:2081114 [1] 9.18. Supportability The getattachment command fails to download multiple attachments at once The redhat-support-tool command offers the getattachment subcommand for downloading attachments. However, getattachment is currently only able to download a single attachment and fails to download multiple attachments. As a workaround, you can download multiple attachments one by one by passing the case number and UUID for each attachment in the getattachment subcommand. Bugzilla:2064575 redhat-support-tool does not work with the FUTURE crypto policy Because a cryptographic key used by a certificate on the Customer Portal API does not meet the requirements by the FUTURE system-wide cryptographic policy, the redhat-support-tool utility does not work with this policy level at the moment. To work around this problem, use the DEFAULT crypto policy while connecting to the Customer Portal API. Jira:RHEL-2345 Timeout when running sos report on IBM Power Systems, Little Endian When running the sos report command on IBM Power Systems, Little Endian with hundreds or thousands of CPUs, the processor plugin reaches its default timeout of 300 seconds when collecting huge content of the /sys/devices/system/cpu directory. As a workaround, increase the plugin's timeout accordingly: For one-time setting, run: For a permanent change, edit the [plugin_options] section of the /etc/sos/sos.conf file: The example value is set to 1800. The particular timeout value highly depends on a specific system. To set the plugin's timeout appropriately, you can first estimate the time needed to collect the one plugin with no timeout by running the following command: Bugzilla:2011413 [1] 9.19. Containers Running systemd within an older container image does not work Running systemd within an older container image, for example, centos:7 , does not work: To work around this problem, use the following commands: Jira:RHELPLAN-96940 [1] | [
"%pre wipefs -a /dev/sda %end",
"The command 'mount --bind /mnt/sysimage/data /mnt/sysroot/data' exited with the code 32.",
"Warning: /boot//.vmlinuz-<kernel version>.x86_64.hmac does not exist FATAL: FIPS integrity test failed Refusing to continue",
"NONE:+VERS-ALL:-VERS-TLS1.3:+MAC-ALL:+DHE-RSA:+AES-256-GCM:+SIGN-RSA-SHA384:+COMP-ALL:+GROUP-ALL",
"NONE:+VERS-ALL:-VERS-TLS1.3:+MAC-ALL:+ECDHE-RSA:+AES-128-CBC:+SIGN-RSA-SHA1:+COMP-ALL:+GROUP-ALL",
"package xorg-x11-server-common has been added to the list of excluded packages, but it can't be removed from the current software selection without breaking the installation.",
"update-crypto-policies --set DEFAULT:NO-CAMELLIA",
"app pkcs15-init { framework pkcs15 { use_file_caching = false; } }",
"yum module enable libselinux-python yum install libselinux-python",
"yum module install libselinux-python:2.8/common",
"There was an unexpected problem with the supplied content.",
"xccdf-path = /usr/share/xml/scap/sc_tailoring/ds-combined.xml tailoring-path = /usr/share/xml/scap/sc_tailoring/tailoring-xccdf.xml",
"ipmitool -I lanplus -H myserver.example.com -P mypass -C 3 chassis power status",
"wipefs -a /dev/sda[1-9] /dev/sda",
"grub2-mkstandalone might fail to make a bootable EFI image of GRUB2 (no /usr/*/grub*/x86_64-efi/moddep.lst file) (...) grub2-mkstandalone: error: /usr/lib/grub/x86_64-efi/modinfo.sh doesn't exist. Please specify --target or --directory.",
"UEFI_BOOTLOADER=/boot/efi/EFI/redhat/grubx64.efi SECURE_BOOT_BOOTLOADER=/boot/efi/EFI/redhat/shimx64.efi",
"[Match] Architecture=s390x KernelCommandLine=!net.naming-scheme=rhel-8.7 [Link] NamePolicy=kernel database slot path AlternativeNamesPolicy=database slot path MACAddressPolicy=persistent",
"IPv6_rpfilter=no",
"[ 2.817152] acpi PNP0A08:00: [Firmware Bug]: ECAM area [mem 0x30000000-0x31ffffff] not reserved in ACPI namespace [ 2.827911] acpi PNP0A08:00: ECAM at [mem 0x30000000-0x31ffffff] for [bus 00-1f]",
"03:00.0 Non-Volatile memory controller: Sandisk Corp WD Black 2018/PC SN720 NVMe SSD (prog-if 02 [NVM Express]) Capabilities: [900 v1] L1 PM Substates L1SubCap: PCI-PM_L1.2- PCI-PM_L1.1- ASPM_L1.2+ ASPM_L1.1- L1_PM_Substates+ PortCommonModeRestoreTime=255us PortTPowerOnTime=10us L1SubCtl1: PCI-PM_L1.2- PCI-PM_L1.1- ASPM_L1.2- ASPM_L1.1- T_CommonMode=0us LTR1.2_Threshold=0ns L1SubCtl2: T_PwrOn=10us",
"-mca btl openib -mca pml ucx -x UCX_NET_DEVICES=mlx5_ib0",
"-mca pml_ucx_priority 5",
"systemctl restart kdump.service",
"KDUMP_COMMANDLINE_REMOVE=\"hugepages hugepagesz slub_debug quiet log_buf_len swiotlb\"",
"KDUMP_COMMANDLINE_APPEND=\"irqpoll nr_cpus=1 reset_devices cgroup_disable=memory udev.children-max=2 panic=10 swiotlb=noforce novmcoredd\"",
"systemctl restart kdump",
"grubby --update-kernel=ALL --args=\"skew_tick=1\"",
"cat /proc/cmdline",
"sfboot vf-msix-limit=2",
"kernel: iwlwifi 0000:09:00.0: Failed to start RT ucode: -110 kernel: iwlwifi 0000:09:00.0: WRT: Collecting data: ini trigger 13 fired (delay=0ms) kernel: iwlwifi 0000:09:00.0: Failed to run INIT ucode: -110",
"systemctl enable --now blk-availability.service",
"git config --global --add safe.directory /path/to/repository",
"virtualenv -p python3.11 venv3.11 Running virtualenv with interpreter /usr/bin/python3.11 ERROR: Virtual environments created by virtualenv < 20 are not compatible with Python 3.11. ERROR: Use `python3.11 -m venv` instead.",
"systemctl restart smbd",
"authselect select sssd --force",
"Generic error (see e-text) while getting credentials for <service principal>",
"Warning: Your password will expire in less than one hour on [expiration date]",
"/etc/default/nss This file can theoretically contain a bunch of customization variables for Name Service Switch in the GNU C library. For now there are only four variables: # NETID_AUTHORITATIVE If set to TRUE, the initgroups() function will accept the information from the netid.byname NIS map as authoritative. This can speed up the function significantly if the group.byname map is large. The content of the netid.byname map is used AS IS. The system administrator has to make sure it is correctly generated. #NETID_AUTHORITATIVE=TRUE # SERVICES_AUTHORITATIVE If set to TRUE, the getservbyname{,_r}() function will assume services.byservicename NIS map exists and is authoritative, particularly that it contains both keys with /proto and without /proto for both primary service names and service aliases. The system administrator has to make sure it is correctly generated. #SERVICES_AUTHORITATIVE=TRUE # SETENT_BATCH_READ If set to TRUE, various setXXent() functions will read the entire database at once and then hand out the requests one by one from memory with every getXXent() call. Otherwise each getXXent() call might result into a network communication with the server to get the next entry. SETENT_BATCH_READ=TRUE # ADJUNCT_AS_SHADOW If set to TRUE, the passwd routines in the NIS NSS module will not use the passwd.adjunct.byname tables to fill in the password data in the passwd structure. This is a security problem if the NIS server cannot be trusted to send the passwd.adjuct table only to privileged clients. Instead the passwd.adjunct.byname table is used to synthesize the shadow.byname table if it does not exist. #ADJUNCT_AS_SHADOW=TRUE",
"The guest operating system reported that it failed with the following error code: 0x1E",
"dracut_args --omit-drivers \"radeon\" force_rebuild 1",
"[WARNING]: Consider using the dnf module rather than running 'dnf'. If you need to use command because dnf is insufficient you can add 'warn: false' to this command task or set 'command_warnings=False' in ansible.cfg to get rid of this message.",
"[WARNING]: Consider using the yum, dnf or zypper module rather than running 'rpm'. If you need to use command because yum, dnf or zypper is insufficient you can add 'warn: false' to this command task or set 'command_warnings=False' in ansible.cfg to get rid of this message.",
"perf kvm record -e trace_imc/trace_cycles/ -p <guest pid> -i",
"<cpu mode=\"host-model\"> <model>power8</model> </cpu>",
"virtiofsd -o no_killpriv_v2",
"echo bus > /sys/bus/pci/devices/<DEVICE-PCI-ADDRESS>/reset_method cat /sys/bus/pci/devices/<DEVICE-PCI-ADDRESS>/reset_method bus",
"echo N > /sys/module/kernel/parameters/crash_kexec_post_notifiers",
"Output what the /dev/disk/by-id/<value> for the specified hyper-v virtual disk. Takes a single parameter which is the virtual disk file. Note: kickstart syntax works with and without the /dev/ prefix. param ( [Parameter(Mandatory=USDtrue)][string]USDvirtualdisk ) USDwhat = Get-VHD -Path USDvirtualdisk USDpart = USDwhat.DiskIdentifier.ToLower().split('-') USDp = USDpart[0] USDs0 = USDp[6] + USDp[7] + USDp[4] + USDp[5] + USDp[2] + USDp[3] + USDp[0] + USDp[1] USDp = USDpart[1] USDs1 = USDp[2] + USDp[3] + USDp[0] + USDp[1] [string]::format(\"/dev/disk/by-id/wwn-0x60022480{0}{1}{2}\", USDs0, USDs1, USDpart[4])",
"PS C:\\Users\\Public\\Documents\\Hyper-V\\Virtual hard disks> .\\by-id.ps1 .\\Testing_8\\disk_3_8.vhdx /dev/disk/by-id/wwn-0x60022480e00bc367d7fd902e8bf0d3b4 PS C:\\Users\\Public\\Documents\\Hyper-V\\Virtual hard disks> .\\by-id.ps1 .\\Testing_8\\disk_3_9.vhdx /dev/disk/by-id/wwn-0x600224807270e09717645b1890f8a9a2",
"part / --fstype=xfs --grow --asprimary --size=8192 --ondisk=/dev/disk/by-id/wwn-0x600224807270e09717645b1890f8a9a2 part /home --fstype=\"xfs\" --grow --ondisk=/dev/disk/by-id/wwn-0x60022480e00bc367d7fd902e8bf0d3b4",
"Disk partitioning information is supplied in a file to kick start %include /tmp/disks Partition information is created during install using the %pre section %pre --interpreter /bin/bash --log /tmp/ks_pre.log # Dump whole SCSI/IDE disks out sorted from smallest to largest ouputting # just the name disks=(`lsblk -n -o NAME -l -b -x SIZE -d -I 8,3`) || exit 1 # We are assuming we have 3 disks which will be used # and we will create some variables to represent d0=USD{disks[0]} d1=USD{disks[1]} d2=USD{disks[2]} echo \"part /home --fstype=\"xfs\" --ondisk=USDd2 --grow\" >> /tmp/disks echo \"part swap --fstype=\"swap\" --ondisk=USDd0 --size=4096\" >> /tmp/disks echo \"part / --fstype=\"xfs\" --ondisk=USDd1 --grow\" >> /tmp/disks echo \"part /boot --fstype=\"xfs\" --ondisk=USDd1 --size=1024\" >> /tmp/disks %end",
"sos report -k processor.timeout=1800",
"Specify any plugin options and their values here. These options take the form plugin_name.option_name = value #rpm.rpmva = off processor.timeout = 1800",
"time sos report -o processor -k processor.timeout=0 --batch --build",
"podman run --rm -ti centos:7 /usr/lib/systemd/systemd Storing signatures Failed to mount cgroup at /sys/fs/cgroup/systemd: Operation not permitted [!!!!!!] Failed to mount API filesystems, freezing.",
"mkdir /sys/fs/cgroup/systemd mount none -t cgroup -o none,name=systemd /sys/fs/cgroup/systemd podman run --runtime /usr/bin/crun --annotation=run.oci.systemd.force_cgroup_v1=/sys/fs/cgroup --rm -ti centos:7 /usr/lib/systemd/systemd"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/8.10_release_notes/known-issues |
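A few of the workarounds above are described only in prose; the sketches below illustrate them and are not taken from the release notes themselves. First, the guestfs-firstboot workaround: when modifying a disk image with virt-customize, append --selinux-relabel so the image is relabeled and first-boot scripts run with the correct SELinux contexts. The image path and package name are placeholders.
# Hypothetical image and package; the relevant part is --selinux-relabel
virt-customize -a /var/lib/libvirt/images/rhel8-guest.qcow2 \
  --install cloud-init \
  --selinux-relabel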
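Next, the virt-manager mediated-device workaround: activate the mediated device before attaching it. A minimal sketch, assuming the device is already defined; <UUID> and <mdev-device-name> are placeholders.
# Activate a defined mediated device by UUID
mdevctl list --defined
mdevctl start -u <UUID>
# Or activate it through libvirt's node-device API
virsh nodedev-list --cap mdev --inactive
virsh nodedev-start <mdev-device-name>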
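Finally, the OVS-restart workaround: disable the packed virtqueue format on the VM's virtio NIC. The excerpt below is a sketch of the relevant attribute in the domain XML (edit it with virsh edit); the source network is an assumption, and only packed='off' matters here.
<interface type='network'>
  <source network='default'/>
  <model type='virtio'/>
  <!-- disable packed virtqueues so the device state can be recovered from RAM -->
  <driver packed='off'/>
</interface>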
Chapter 1. About Red Hat OpenShift GitOps | Chapter 1. About Red Hat OpenShift GitOps Red Hat OpenShift GitOps is an Operator that uses Argo CD as the declarative GitOps engine. It enables GitOps workflows across multicluster OpenShift and Kubernetes infrastructure. Using Red Hat OpenShift GitOps, administrators can consistently configure and deploy Kubernetes-based infrastructure and applications across clusters and development lifecycles. Red Hat OpenShift GitOps is based on the open source project Argo CD and provides a similar set of features to what the upstream offers, with additional automation, integration into Red Hat OpenShift Container Platform, and the benefits of Red Hat's enterprise support, quality assurance, and focus on enterprise security. Note Because Red Hat OpenShift GitOps releases on a different cadence from OpenShift Container Platform, the Red Hat OpenShift GitOps documentation is now available as a separate documentation set at Red Hat OpenShift GitOps . | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/gitops/about-redhat-openshift-gitops
31.2. Configuring Host-based Access Control in an IdM Domain | 31.2. Configuring Host-based Access Control in an IdM Domain To configure your domain for host-based access control: Create HBAC rules Test the new HBAC rules Disable the default allow_all HBAC rule Important Do not disable the allow_all rule before creating custom HBAC rules. If you do this, no users will be able to access any hosts. 31.2.1. Creating HBAC Rules To create an HBAC rule, you can use: the IdM web UI (see the section called "Web UI: Creating an HBAC Rule" ) the command line (see the section called "Command Line: Creating HBAC Rules" ) For examples, see the section called "Examples of HBAC Rules" . Note IdM stores the primary group of a user as a numerical value of the gidNumber attribute instead of a link to an IdM group object. For this reason, an HBAC rule can only reference a user's supplementary groups and not its primary group. Web UI: Creating an HBAC Rule Select Policy Host-Based Access Control HBAC Rules . Click Add to start adding a new rule. Enter a name for the rule, and click Add and Edit to go directly to the HBAC rule configuration page. In the Who area, specify the target users. To apply the HBAC rule to specified users or groups only, select Specified Users and Groups . Then click Add to add the users or groups. To apply the HBAC rule to all users, select Anyone . Figure 31.2. Specifying a Target User for an HBAC Rule In the Accessing area, specify the target hosts: To apply the HBAC rule to specified hosts or groups only, select Specified Hosts and Groups . Then click Add to add the hosts or groups. To apply the HBAC rule to all hosts, select Any Host . In the Via Service area, specify the target HBAC services: To apply the HBAC rule to specified services or groups only, select Specified Services and Groups . Then click Add to add the services or groups. To apply the HBAC rule to all services, select Any Service . Note Only the most common services and service groups are configured for HBAC rules by default. To display the list of services that are currently available, select Policy Host-Based Access Control HBAC Services . To display the list of service groups that are currently available, select Policy Host-Based Access Control HBAC Service Groups . To add more services and service groups, see Section 31.3, "Adding HBAC Service Entries for Custom HBAC Services" and Section 31.4, "Adding HBAC Service Groups" . Changing certain settings on the HBAC rule configuration page highlights the Save button at the top of the page. If this happens, click the button to confirm the changes. Command Line: Creating HBAC Rules Use the ipa hbacrule-add command to add the rule. Specify the target users. To apply the HBAC rule to specified users or groups only, use the ipa hbacrule-add-user command. For example, to add a group: To add multiple users or groups, use the --users and --groups options: To apply the HBAC rule to all users, use the ipa hbacrule-mod command and specify the all user category: Note If the HBAC rule is associated with individual users or groups, ipa hbacrule-mod --usercat=all fails. In this situation, remove the users and groups using the ipa hbacrule-remove-user command. For details, run ipa hbacrule-remove-user with the --help option. Specify the target hosts. To apply the HBAC rule to specified hosts or groups only, use the ipa hbacrule-add-host command. 
For example, to add a single host: To add multiple hosts or groups, use the --hosts and --hostgroups options: To apply the HBAC rule to all hosts, use the ipa hbacrule-mod command and specify the all host category: Note If the HBAC rule is associated with individual hosts or groups, ipa hbacrule-mod --hostcat=all fails. In this situation, remove the hosts and groups using the ipa hbacrule-remove-host command. For details, run ipa hbacrule-remove-host with the --help option. Specify the target HBAC services. To apply the HBAC rule to specified services or groups only, use the ipa hbacrule-add-service command. For example, to add a single service: To add multiple services or groups, you can use the --hbacsvcs and --hbacsvcgroups options: Note Only the most common services and service groups are configured for HBAC rules. To add more, see Section 31.3, "Adding HBAC Service Entries for Custom HBAC Services" and Section 31.4, "Adding HBAC Service Groups" . To apply the HBAC rule to all services, use the ipa hbacrule-mod command and specify the all service category: Note If the HBAC rule is associated with individual services or groups, ipa hbacrule-mod --servicecat=all fails. In this situation, remove the services and groups using the ipa hbacrule-remove-service command. For details, run ipa hbacrule-remove-service with the --help option. Optional. Verify that the HBAC rule has been added correctly. Use the ipa hbacrule-find command to verify that the HBAC rule has been added to IdM. Use the ipa hbacrule-show command to verify the properties of the HBAC rule. For details, run the commands with the --help option. Examples of HBAC Rules Example 31.1. Granting a Single User Access to All Hosts Using Any Service To allow the admin user to access all systems in the domain using any service, create a new HBAC rule and set: the user to admin the host to Any host (in the web UI), or use --hostcat=all with ipa hbacrule-add (when adding the rule) or ipa hbacrule-mod the service to Any service (in the web UI), or use --servicecat=all with ipa hbacrule-add (when adding the rule) or ipa hbacrule-mod Example 31.2. Ensuring That Only Specific Services Can Be Used to Access a Host To make sure that all users must use sudo -related services to access the host named host.example.com , create a new HBAC rule and set: the user to Anyone (in the web UI), or use --usercat=all with ipa hbacrule-add (when adding the rule) or ipa hbacrule-mod the host to host.example.com the HBAC service group to Sudo , which is a default group for sudo and related services 31.2.2. Testing HBAC Rules IdM enables you to test your HBAC configuration in various situations using simulated scenarios. By performing these simulated test runs, you can discover misconfiguration problems or security risks before deploying HBAC rules in production. Important Always test custom HBAC rules before you start using them in production. Note that IdM does not test the effect of HBAC rules on trusted Active Directory (AD) users. Because AD data is not stored in the IdM LDAP directory, IdM cannot resolve group membership of AD users when simulating HBAC scenarios. To test an HBAC rule, you can use: the IdM web UI (see the section called "Web UI: Testing an HBAC Rule" ) the command line (see the section called "Command Line: Testing an HBAC Rule" ) Web UI: Testing an HBAC Rule Select Policy Host-Based Access Control HBAC Test . On the Who screen: Specify the user under whose identity you want to perform the test, and click . Figure 31.3. 
Specifying the Target User for an HBAC Test On the Accessing screen: Specify the host that the user will attempt to access, and click . On the Via Service screen: Specify the service that the user will attempt to use, and click . On the Rules screen: Select the HBAC rules you want to test, and click . If you do not select any rule, all rules will be tested. Select Include Enabled to run the test on all rules whose status is Enabled . Select Include Disabled to run the test on all rules whose status is Disabled . To view and change the status of HBAC rules, select Policy Host-Based Access Control HBAC Rules . Important If the test runs on multiple rules, it will pass successfully if at least one of the selected rules allows access. On the Run Test screen: Click Run Test . Figure 31.4. Running an HBAC Test Review the test results: If you see ACCESS DENIED , the user was not granted access in the test. If you see ACCESS GRANTED , the user was able to access the host successfully. Figure 31.5. Reviewing HBAC Test Results By default, IdM lists all the tested HBAC rules when displaying the test results. Select Matched to display the rules that allowed successful access. Select Unmatched to display the rules that prevented access. Command Line: Testing an HBAC Rule Use the ipa hbactest command and specify at least: the user under whose identity you want to perform the test the host that the user will attempt to access the service that the user will attempt to use For example, when specifying these values interactively: By default, IdM runs the test on all HBAC rules whose status is enabled . To specify different HBAC rules: Use the --rules option to define one or more HBAC rules. Use the --disabled option to test all HBAC rules whose status is disabled . To see the current status of HBAC rules, run the ipa hbacrule-find command. Example 31.3. Testing an HBAC Rule from the Command Line In the following test, an HBAC rule named rule2 prevented user1 from accessing example.com using the sudo service: Example 31.4. Testing Multiple HBAC Rules from the Command Line When testing multiple HBAC rules, the test passes if at least one rule allows the user successful access. In the output: Matched rules list the rules that allowed successful access. Not matched rules list the rules that prevented access. 31.2.3. Disabling HBAC Rules Disabling an HBAC rule deactivates the rule, but does not delete it. If you disable an HBAC rule, you can re-enable it later. Note For example, disabling HBAC rules is useful after you configure custom HBAC rules for the first time. To ensure that your new configuration is not overridden by the default allow_all HBAC rule, you must disable allow_all . To disable an HBAC rule, you can use: the IdM web UI (see the section called "Web UI: Disabling an HBAC Rule" ) the command line (see the section called "Command Line: Disabling an HBAC Rule" ) Web UI: Disabling an HBAC Rule Select Policy Host-Based Access Control HBAC Rules . Select the HBAC rule you want to disable, and click Disable . Figure 31.6. Disabling the allow_all HBAC Rule Command Line: Disabling an HBAC Rule Use the ipa hbacrule-disable command. For example, to disable the allow_all rule: | [
"ipa hbacrule-add Rule name: rule_name --------------------------- Added HBAC rule \"rule_name\" --------------------------- Rule name: rule_name Enabled: TRUE",
"ipa hbacrule-add-user Rule name: rule_name [member user]: [member group]: group_name Rule name: rule_name Enabled: TRUE User Groups: group_name ------------------------- Number of members added 1 -------------------------",
"ipa hbacrule-add-user rule_name --users= user1 --users= user2 --users= user3 Rule name: rule_name Enabled: TRUE Users: user1, user2, user3 ------------------------- Number of members added 3 -------------------------",
"ipa hbacrule-mod rule_name --usercat=all ------------------------------ Modified HBAC rule \"rule_name\" ------------------------------ Rule name: rule_name User category: all Enabled: TRUE",
"ipa hbacrule-add-host Rule name: rule_name [member host]: host.example.com [member host group]: Rule name: rule_name Enabled: TRUE Hosts: host.example.com ------------------------- Number of members added 1 -------------------------",
"ipa hbacrule-add-host rule_name --hosts= host1 --hosts= host2 --hosts= host3 Rule name: rule_name Enabled: TRUE Hosts: host1, host2, host3 ------------------------- Number of members added 3 -------------------------",
"ipa hbacrule-mod rule_name --hostcat=all ------------------------------ Modified HBAC rule \"rule_name\" ------------------------------ Rule name: rule_name Host category: all Enabled: TRUE",
"ipa hbacrule-add-service Rule name: rule_name [member HBAC service]: ftp [member HBAC service group]: Rule name: rule_name Enabled: TRUE Services: ftp ------------------------- Number of members added 1 -------------------------",
"ipa hbacrule-add-service rule_name --hbacsvcs= su --hbacsvcs= sudo Rule name: rule_name Enabled: TRUE Services: su, sudo ------------------------- Number of members added 2 -------------------------",
"ipa hbacrule-mod rule_name --servicecat=all ------------------------------ Modified HBAC rule \"rule_name\" ------------------------------ Rule name: rule_name Service category: all Enabled: TRUE",
"ipa hbactest User name: user1 Target host: example.com Service: sudo --------------------- Access granted: False --------------------- Not matched rules: rule1",
"ipa hbactest --user= user1 --host= example.com --service= sudo --rules= rule1 --------------------- Access granted: False --------------------- Not matched rules: rule1",
"ipa hbactest --user= user1 --host= example.com --service= sudo --rules= rule1 --rules= rule2 -------------------- Access granted: True -------------------- Matched rules: rule2 Not matched rules: rule1",
"ipa hbacrule-disable allow_all ------------------------------ Disabled HBAC rule \"allow_all\" ------------------------------"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/hbac-configure-domain |
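Putting the commands above together, a minimal end-to-end sketch: create a rule for a hypothetical sysadmins group on host.example.com, simulate it, and only then disable allow_all. The rule, group, and host names are illustrative, not values from the section.
kinit admin
# Create the rule and attach users, hosts, and services
ipa hbacrule-add sysadmin_ssh
ipa hbacrule-add-user sysadmin_ssh --groups=sysadmins
ipa hbacrule-add-host sysadmin_ssh --hosts=host.example.com
ipa hbacrule-add-service sysadmin_ssh --hbacsvcs=sshd
# Simulate access before relying on the rule in production
ipa hbactest --user=admin --host=host.example.com --service=sshd --rules=sysadmin_ssh
# Disable the default rule only after the custom rules are verified
ipa hbacrule-disable allow_all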
Preface | Preface Red Hat OpenShift Data Foundation supports deployment on existing Red Hat OpenShift Service on AWS with hosted control planes. Note Only internal OpenShift Data Foundation clusters are supported on AWS. See Planning your deployment and Preparing to deploy OpenShift Data Foundation for more information about deployment requirements. To deploy OpenShift Data Foundation, start with the requirements in Preparing to deploy OpenShift Data Foundation chapter and then follow the deployment process in Deploying using dynamic storage devices . | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/deploying_openshift_data_foundation_using_red_hat_openshift_service_on_aws_with_hosted_control_planes/preface-rosahcp |
function::usymline | function::usymline Name function::usymline - Return the line number of an address. Synopsis Arguments addr The address to translate. Description Returns the (approximate) line number of the given address, if known. If the line number cannot be found, the hex string representation of the address will be returned. | [
"usymline:string(addr:long)"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-usymline |
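A hedged usage sketch, not part of the reference entry itself: printing the source line for a user-space address from a process probe. It assumes debuginfo for the probed binary is installed and that uaddr() returns the user-space address at the probe point; the target binary is a placeholder.
# Print the (approximate) source line when main() in /usr/bin/ls is entered
stap -e 'probe process("/usr/bin/ls").function("main") {
  printf("entered at %s\n", usymline(uaddr()))
}' -c /usr/bin/ls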
13.2. Adding and Removing User or Host Groups | 13.2. Adding and Removing User or Host Groups To add a group, you can use: The web UI (see the section called "Web UI: Adding a User or Host Group" ) The command line (see the section called "Command Line: Adding a User or Host Group" ) IdM enables specifying a custom GID when creating a user group. If you do this, be careful to avoid ID conflicts. See Section 14.6, "Ensuring That ID Values Are Unique" . If you do not specify a custom GID, IdM automatically assigns a GID from the available ID range. To remove a group, you can use: The web UI (see the section called "Web UI: Removing a User or Host Group" ) The command line (see the section called "Command Line: Removing a User or Host Group" ) Note that removing a group does not delete the group members from IdM. Web UI: Adding a User or Host Group Click Identity Groups , and select User Groups or Host Groups in the left sidebar. Click Add to start adding the group. Fill out the information about the group. For details on user group types, see Section 13.1.4, "User Group Types in IdM" . Click Add to confirm. Command Line: Adding a User or Host Group Log in as the administrator: To add a user group, use the ipa group-add command. To add a host group, use the ipa hostgroup-add command. By default, ipa group-add adds a POSIX user group. To specify a different group type, add options to ipa group-add : --nonposix to create a non-POSIX group --external to create an external group For details on group types, see Section 13.1.4, "User Group Types in IdM" . Web UI: Removing a User or Host Group Click Identity Groups and select User Groups or Host Groups in the left sidebar. Select the group to remove, and click Delete . Command Line: Removing a User or Host Group Log in as the administrator: To delete a user group, use the ipa group-del group_name command. To delete a host group, use the ipa hostgroup-del group_name command. | [
"kinit admin",
"ipa group-add group_name ----------------------- Added group \"group_name\" ------------------------",
"kinit admin",
"ipa group-del group_name -------------------------- Deleted group \"group_name\" --------------------------"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/user-groups-add |
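A short sketch of the command-line variants described above; the group names, GID, and descriptions are placeholders, and a custom GID must not collide with an existing ID range.
kinit admin
# POSIX user group with an explicit GID
ipa group-add developers --gid=1234500001 --desc="Developer group"
# Non-POSIX and external variants
ipa group-add app_admins --nonposix
ipa group-add ad_users_external --external
# Host group, and removing a user group afterwards
ipa hostgroup-add web_servers --desc="Web servers"
ipa group-del developers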
10.5.32. TypesConfig | 10.5.32. TypesConfig TypesConfig names the file which sets the default list of MIME type mappings (file name extensions to content types). The default TypesConfig file is /etc/mime.types . Instead of editing /etc/mime.types , the recommended way to add MIME type mappings is to use the AddType directive. For more information about AddType , refer to Section 10.5.55, " AddType " . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-apache-typesconfig |
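For illustration only (the extensions and types below are examples, not part of the original section): an AddType line in httpd.conf maps an additional file name extension to a MIME type without touching /etc/mime.types.
# httpd.conf
AddType application/x-tar .tgz
AddType text/html .shtml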
function::print_backtrace | function::print_backtrace Name function::print_backtrace - Print kernel stack back trace Synopsis Arguments None Description This function is equivalent to print_stack( backtrace ), except that deeper stack nesting may be supported. See print_ubacktrace for user-space backtrace. The function does not return a value. | [
"print_backtrace()"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-print-backtrace |
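A hedged one-liner showing typical use from a kernel probe; vfs_read is chosen only as a commonly available probe point and is an assumption, not part of the reference entry.
# Print a kernel backtrace the first time vfs_read() fires, then exit
stap -e 'probe kernel.function("vfs_read") { print_backtrace(); exit() }'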
Chapter 2. CSIDriver [storage.k8s.io/v1] | Chapter 2. CSIDriver [storage.k8s.io/v1] Description CSIDriver captures information about a Container Storage Interface (CSI) volume driver deployed on the cluster. Kubernetes attach detach controller uses this object to determine whether attach is required. Kubelet uses this object to determine whether pod information needs to be passed on mount. CSIDriver objects are non-namespaced. Type object Required spec 2.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object metadata. metadata.Name indicates the name of the CSI driver that this object refers to; it MUST be the same name returned by the CSI GetPluginName() call for that driver. The driver name must be 63 characters or less, beginning and ending with an alphanumeric character ([a-z0-9A-Z]) with dashes (-), dots (.), and alphanumerics between. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object CSIDriverSpec is the specification of a CSIDriver. 2.1.1. .spec Description CSIDriverSpec is the specification of a CSIDriver. Type object Property Type Description attachRequired boolean attachRequired indicates this CSI volume driver requires an attach operation (because it implements the CSI ControllerPublishVolume() method), and that the Kubernetes attach detach controller should call the attach volume interface which checks the volumeattachment status and waits until the volume is attached before proceeding to mounting. The CSI external-attacher coordinates with CSI volume driver and updates the volumeattachment status when the attach operation is complete. If the CSIDriverRegistry feature gate is enabled and the value is specified to false, the attach operation will be skipped. Otherwise the attach operation will be called. This field is immutable. fsGroupPolicy string Defines if the underlying volume supports changing ownership and permission of the volume before being mounted. Refer to the specific FSGroupPolicy values for additional details. This field is immutable. Defaults to ReadWriteOnceWithFSType, which will examine each volume to determine if Kubernetes should modify ownership and permissions of the volume. With the default policy the defined fsGroup will only be applied if a fstype is defined and the volume's access mode contains ReadWriteOnce. podInfoOnMount boolean If set to true, podInfoOnMount indicates this CSI volume driver requires additional pod information (like podName, podUID, etc.) during mount operations. If set to false, pod information will not be passed on mount. Default is false. The CSI driver specifies podInfoOnMount as part of driver deployment. If true, Kubelet will pass pod information as VolumeContext in the CSI NodePublishVolume() calls. The CSI driver is responsible for parsing and validating the information passed in as VolumeContext. 
The following VolumeContext will be passed if podInfoOnMount is set to true. This list might grow, but the prefix will be used. "csi.storage.k8s.io/pod.name": pod.Name "csi.storage.k8s.io/pod.namespace": pod.Namespace "csi.storage.k8s.io/pod.uid": string(pod.UID) "csi.storage.k8s.io/ephemeral": "true" if the volume is an ephemeral inline volume defined by a CSIVolumeSource, otherwise "false" "csi.storage.k8s.io/ephemeral" is a new feature in Kubernetes 1.16. It is only required for drivers which support both the "Persistent" and "Ephemeral" VolumeLifecycleMode. Other drivers can leave pod info disabled and/or ignore this field. As Kubernetes 1.15 doesn't support this field, drivers can only support one mode when deployed on such a cluster and the deployment determines which mode that is, for example via a command line parameter of the driver. This field is immutable. requiresRepublish boolean RequiresRepublish indicates the CSI driver wants NodePublishVolume being periodically called to reflect any possible change in the mounted volume. This field defaults to false. Note: After a successful initial NodePublishVolume call, subsequent calls to NodePublishVolume should only update the contents of the volume. New mount points will not be seen by a running container. seLinuxMount boolean SELinuxMount specifies if the CSI driver supports "-o context" mount option. When "true", the CSI driver must ensure that all volumes provided by this CSI driver can be mounted separately with different -o context options. This is typical for storage backends that provide volumes as filesystems on block devices or as independent shared volumes. Kubernetes will call NodeStage / NodePublish with "-o context=xyz" mount option when mounting a ReadWriteOncePod volume used in Pod that has explicitly set SELinux context. In the future, it may be expanded to other volume AccessModes. In any case, Kubernetes will ensure that the volume is mounted only with a single SELinux context. When "false", Kubernetes won't pass any special SELinux mount options to the driver. This is typical for volumes that represent subdirectories of a bigger shared filesystem. Default is "false". storageCapacity boolean If set to true, storageCapacity indicates that the CSI volume driver wants pod scheduling to consider the storage capacity that the driver deployment will report by creating CSIStorageCapacity objects with capacity information. The check can be enabled immediately when deploying a driver. In that case, provisioning new volumes with late binding will pause until the driver deployment has published some suitable CSIStorageCapacity object. Alternatively, the driver can be deployed with the field unset or false and it can be flipped later when storage capacity information has been published. This field was immutable in Kubernetes ⇐ 1.22 and now is mutable. tokenRequests array TokenRequests indicates the CSI driver needs pods' service account tokens it is mounting volume for to do necessary authentication. Kubelet will pass the tokens in VolumeContext in the CSI NodePublishVolume calls. The CSI driver should parse and validate the following VolumeContext: "csi.storage.k8s.io/serviceAccount.tokens": { "<audience>": { "token": <token>, "expirationTimestamp": <expiration timestamp in RFC3339>, }, ... } Note: Audience in each TokenRequest should be different and at most one token is empty string. To receive a new token after expiry, RequiresRepublish can be used to trigger NodePublishVolume periodically.
tokenRequests[] object TokenRequest contains parameters of a service account token. volumeLifecycleModes array (string) volumeLifecycleModes defines what kind of volumes this CSI volume driver supports. The default if the list is empty is "Persistent", which is the usage defined by the CSI specification and implemented in Kubernetes via the usual PV/PVC mechanism. The other mode is "Ephemeral". In this mode, volumes are defined inline inside the pod spec with CSIVolumeSource and their lifecycle is tied to the lifecycle of that pod. A driver has to be aware of this because it is only going to get a NodePublishVolume call for such a volume. For more information about implementing this mode, see https://kubernetes-csi.github.io/docs/ephemeral-local-volumes.html A driver can support one or more of these modes and more modes may be added in the future. This field is beta. This field is immutable. 2.1.2. .spec.tokenRequests Description TokenRequests indicates the CSI driver needs pods' service account tokens it is mounting volume for to do necessary authentication. Kubelet will pass the tokens in VolumeContext in the CSI NodePublishVolume calls. The CSI driver should parse and validate the following VolumeContext: "csi.storage.k8s.io/serviceAccount.tokens": { "<audience>": { "token": <token>, "expirationTimestamp": <expiration timestamp in RFC3339>, }, ... } Note: Audience in each TokenRequest should be different and at most one token is empty string. To receive a new token after expiry, RequiresRepublish can be used to trigger NodePublishVolume periodically. Type array 2.1.3. .spec.tokenRequests[] Description TokenRequest contains parameters of a service account token. Type object Required audience Property Type Description audience string Audience is the intended audience of the token in "TokenRequestSpec". It will default to the audiences of kube apiserver. expirationSeconds integer ExpirationSeconds is the duration of validity of the token in "TokenRequestSpec". It has the same default value of "ExpirationSeconds" in "TokenRequestSpec". 2.2. API endpoints The following API endpoints are available: /apis/storage.k8s.io/v1/csidrivers DELETE : delete collection of CSIDriver GET : list or watch objects of kind CSIDriver POST : create a CSIDriver /apis/storage.k8s.io/v1/watch/csidrivers GET : watch individual changes to a list of CSIDriver. deprecated: use the 'watch' parameter with a list operation instead. /apis/storage.k8s.io/v1/csidrivers/{name} DELETE : delete a CSIDriver GET : read the specified CSIDriver PATCH : partially update the specified CSIDriver PUT : replace the specified CSIDriver /apis/storage.k8s.io/v1/watch/csidrivers/{name} GET : watch changes to an object of kind CSIDriver. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 2.2.1. /apis/storage.k8s.io/v1/csidrivers Table 2.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of CSIDriver Table 2.2. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. 
If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. 
The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 2.3. Body parameters Parameter Type Description body DeleteOptions schema Table 2.4. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind CSIDriver Table 2.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. 
Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 2.6. HTTP responses HTTP code Reponse body 200 - OK CSIDriverList schema 401 - Unauthorized Empty HTTP method POST Description create a CSIDriver Table 2.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.8. Body parameters Parameter Type Description body CSIDriver schema Table 2.9. HTTP responses HTTP code Reponse body 200 - OK CSIDriver schema 201 - Created CSIDriver schema 202 - Accepted CSIDriver schema 401 - Unauthorized Empty 2.2.2. /apis/storage.k8s.io/v1/watch/csidrivers Table 2.10. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. 
This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of CSIDriver. deprecated: use the 'watch' parameter with a list operation instead. Table 2.11. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 2.2.3. /apis/storage.k8s.io/v1/csidrivers/{name} Table 2.12. Global path parameters Parameter Type Description name string name of the CSIDriver Table 2.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a CSIDriver Table 2.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 2.15. Body parameters Parameter Type Description body DeleteOptions schema Table 2.16. HTTP responses HTTP code Reponse body 200 - OK CSIDriver schema 202 - Accepted CSIDriver schema 401 - Unauthorized Empty HTTP method GET Description read the specified CSIDriver Table 2.17. 
HTTP responses HTTP code Reponse body 200 - OK CSIDriver schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified CSIDriver Table 2.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 2.19. Body parameters Parameter Type Description body Patch schema Table 2.20. HTTP responses HTTP code Reponse body 200 - OK CSIDriver schema 201 - Created CSIDriver schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified CSIDriver Table 2.21. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. 
This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.22. Body parameters Parameter Type Description body CSIDriver schema Table 2.23. HTTP responses HTTP code Reponse body 200 - OK CSIDriver schema 201 - Created CSIDriver schema 401 - Unauthorized Empty 2.2.4. /apis/storage.k8s.io/v1/watch/csidrivers/{name} Table 2.24. Global path parameters Parameter Type Description name string name of the CSIDriver Table 2.25. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. 
Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind CSIDriver. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 2.26. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/storage_apis/csidriver-storage-k8s-io-v1 |
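The endpoint tables above can be easier to digest with a concrete request flow. The commands below are an illustrative sketch, not part of the API reference: they page through CSIDriver objects with limit and continue, then start a watch from a known resourceVersion, using oc get --raw against the list endpoint. The chunk size of 2 and the <CONTINUE_TOKEN> and <RESOURCE_VERSION> placeholders are assumptions made for the example.
# List CSIDriver objects in chunks of two; copy metadata.continue from the
# response into the next request to fetch the following chunk.
oc get --raw '/apis/storage.k8s.io/v1/csidrivers?limit=2'
oc get --raw '/apis/storage.k8s.io/v1/csidrivers?limit=2&continue=<CONTINUE_TOKEN>'
# Stream add, update, and remove notifications starting from a known resourceVersion.
oc get --raw '/apis/storage.k8s.io/v1/csidrivers?watch=true&resourceVersion=<RESOURCE_VERSION>'
As the deprecation notes above indicate, the watch=true query parameter on the list endpoint is preferred over the dedicated /apis/storage.k8s.io/v1/watch/csidrivers path.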
Chapter 1. Preparing to deploy OpenShift Data Foundation | Chapter 1. Preparing to deploy OpenShift Data Foundation Deploying OpenShift Data Foundation on OpenShift Container Platform using dynamic storage devices provides you with the option to create internal cluster resources. This will result in the internal provisioning of the base services, which helps to make additional storage classes available to applications. Before you begin the deployment of Red Hat OpenShift Data Foundation, follow these steps: Optional: If you want to enable cluster-wide encryption using an external Key Management System (KMS): Ensure that a policy with a token exists and the key value backend path in Vault is enabled. See Enabling key value backend path and policy in Vault . Ensure that you are using signed certificates on your Vault servers. Minimum starting node requirements [Technology Preview] An OpenShift Data Foundation cluster will be deployed with minimum configuration when the standard deployment resource requirement is not met. See Resource requirements section in Planning guide. Regional-DR requirements [Developer Preview] Disaster Recovery features supported by Red Hat OpenShift Data Foundation require all of the following prerequisites in order to successfully implement a Disaster Recovery solution: A valid Red Hat OpenShift Data Foundation Advanced entitlement A valid Red Hat Advanced Cluster Management for Kubernetes subscription To know how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . For detailed requirements, see Regional-DR requirements and RHACM requirements . 1.1. Enabling key value backend path and policy in Vault Prerequisites Administrator access to Vault. Carefully choose a unique path name as the backend path that follows the naming convention, since it cannot be changed later. Procedure Enable the Key/Value (KV) backend path in Vault. For Vault KV secret engine API, version 1: For Vault KV secret engine API, version 2: Create a policy to restrict users to perform a write or delete operation on the secret using the following commands. Create a token matching the above policy. | [
"vault secrets enable -path=odf kv",
"vault secrets enable -path=odf kv-v2",
"echo ' path \"odf/*\" { capabilities = [\"create\", \"read\", \"update\", \"delete\", \"list\"] } path \"sys/mounts\" { capabilities = [\"read\"] }'| vault policy write odf -",
"vault token create -policy=odf -format json"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/deploying_and_managing_openshift_data_foundation_using_red_hat_openstack_platform/preparing_to_deploy_openshift_data_foundation |
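After running the commands above, it can help to confirm that the backend path, policy, and token look the way OpenShift Data Foundation expects before continuing with the deployment. The following verification steps are a hedged sketch: they assume the odf path and policy names used in this example and rely only on standard Vault CLI commands, and <GENERATED_TOKEN> is a placeholder for the token created in the last step.
# Confirm that the KV secrets engine is mounted at the expected backend path
vault secrets list -detailed | grep odf
# Review the policy that the token is bound to
vault policy read odf
# Inspect the token and the policies attached to it
vault token lookup <GENERATED_TOKEN>
If the mount, policy, or token does not match what you configured, repeat the corresponding step before supplying the token to the KMS configuration.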
Chapter 1. Introduction and planning an Instance HA deployment | Chapter 1. Introduction and planning an Instance HA deployment High availability for Compute instances (Instance HA) is a tool that you can use to evacuate instances from a failed Compute node and re-create the instances on a different Compute node. Instance HA works with shared storage or local storage environments, which means that evacuated instances maintain the same network configuration, such as static IP addresses and floating IP addresses. The re-created instances also maintain the same characteristics inside the new Compute node. 1.1. How Instance HA works When a Compute node fails, the overcloud fencing agent fences the node, and then the Instance HA agents evacuate instances from the failed Compute node to a different Compute node. The following events occur when a Compute node fails and triggers Instance HA: At the time of failure, the IPMI agent performs first-layer fencing, which includes physically resetting the node to ensure that it shuts down, preventing data corruption or multiple identical instances on the overcloud. When the node is offline, it is considered fenced. After the physical IPMI fencing, the fence-nova agent automatically performs second-layer fencing and marks the fenced node with the "evacuate=yes" cluster per-node attribute by running the following command: FAILEDHOST is the name of the failed Compute node. The nova-evacuate agent continually runs in the background and periodically checks the cluster for nodes with the "evacuate=yes" attribute. When nova-evacuate detects that the fenced node contains this attribute, the agent starts evacuating the node. The evacuation process is similar to the manual instance evacuation process that you can perform at any time. When the failed node restarts after the IPMI reset, the nova-compute process on that node also starts automatically. Because the node was previously fenced, it does not run any new instances until Pacemaker un-fences the node. When Pacemaker detects that the Compute node is online, it starts the compute-unfence-trigger resource agent on the node, which releases the node so that it can run instances again. Additional resources Evacuating an instance 1.2. Planning your Instance HA deployment Before you deploy Instance HA, review the resource names for compliance and configure your storage and networking based on your environment. Compute node host names and Pacemaker remote resource names must comply with the W3C naming conventions. For more information, see Declaring Namespaces and Names and Tokens in the W3C documentation. Typically, Instance HA requires that you configure shared storage for disk images of instances. Therefore, if you attempt to use the no-shared-storage option, you might receive an InvalidSharedStorage error during evacuation, and the instances will not start on another Compute node. However, if all your instances are configured to boot from an OpenStack Block Storage ( cinder ) volume, you do not need to configure shared storage for the disk image of the instances, and you can evacuate all instances using the no-shared-storage option. During evacuation, if your instances are configured to boot from a Block Storage volume, any evacuated instances boot from the same volume on another Compute node. Therefore, the evacuated instances immediately restart their jobs because the OS image and the application data are stored on the OpenStack Block Storage volume.
If you deploy Instance HA in a Spine-Leaf environment, you must define a single internal_api network for the Controller and Compute nodes. You can then define a subnet for each leaf. For more information about configuring Spine-Leaf networks, see Creating a roles data file in the Spine Leaf Networking guide. In Red Hat OpenStack Platform 13 and later, you use director to upgrade Instance HA as a part of the overcloud upgrade. For more information about upgrading the overcloud, see the Keeping Red Hat OpenStack Platform Updated guide. Disabling Instance HA with the director after installation is not supported. For a workaround to manually remove Instance HA components from your deployment, see the article How can I remove Instance HA components from the controller nodes? . Important This workaround is not verified for production environments. You must verify the procedure in a test environment before you implement it in a production environment. 1.3. Instance HA resource agents Instance HA uses the fence_compute , NovaEvacuate , and compute-unfence-trigger resource agents to evacuate and re-create instances if a Compute node fails. Agent name Name inside cluster Role fence_compute fence-nova Marks a Compute node for evacuation when the node becomes unavailable. NovaEvacuate nova-evacuate Evacuates instances from failed nodes. This agent runs on one of the Controller nodes. Dummy compute-unfence-trigger Releases a fenced node and enables the node to run instances again. | [
"attrd_updater -n evacuate -A name=\"evacuate\" host=\" FAILEDHOST \" value=\"yes\""
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/high_availability_for_compute_instances/assembly_introduction-and-planning-an-instance-ha-deployment_rhosp |
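To make the fencing and evacuation flow described above more tangible, the following commands are an illustrative sketch of how an operator might inspect the Instance HA pieces from a Controller node. The Compute node name overcloud-novacompute-0 is an assumption used only for this example; substitute the name reported by your cluster.
# Confirm that fence-nova, nova-evacuate, and compute-unfence-trigger are
# present and running in the Pacemaker cluster
pcs status --full
# Query the per-node evacuate attribute that fence-nova sets on a failed node
attrd_updater --query --name evacuate --node overcloud-novacompute-0
The query reports a value of yes while the node is marked for evacuation, matching the "evacuate=yes" attribute described in section 1.1.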
Chapter 23. Boot Options | Chapter 23. Boot Options The Red Hat Enterprise Linux installation system includes a range of boot options for administrators, which modify the default behavior of the installation program by enabling (or disabling) certain functions. To use boot options, append them to the boot command line, as described in Section 23.1, "Configuring the Installation System at the Boot Menu" . Multiple options added to the boot line need to be separated by a single space. There are two basic types of options described in this chapter: Options presented as ending with an "equals" sign ( = ) require a value to be specified - they cannot be used on their own. For example, the inst.vncpassword= option must also contain a value (in this case, a password). The correct form is therefore inst.vncpassword= password . On its own, without a password specified, the option is invalid. Options presented without the " = " sign do not accept any values or parameters. For example, the rd.live.check option forces Anaconda to verify the installation media before starting the installation; if this option is present, the check will be performed, and if it is not present, the check will be skipped. 23.1. Configuring the Installation System at the Boot Menu Note The exact way to specify custom boot options is different on every system architecture. For architecture-specific instructions about editing boot options, see: Section 7.2, "The Boot Menu" for 64-bit AMD, Intel and ARM systems Section 12.1, "The Boot Menu" for IBM Power Systems servers Chapter 21, Parameter and Configuration Files on IBM Z for IBM Z There are several different ways to edit boot options at the boot menu (that is, the menu which appears after you boot the installation media): The boot: prompt, accessed by pressing the Esc key anywhere in the boot menu. When using this prompt, the first option must always specify the installation program image file to be loaded. In most cases, the image can be specified using the linux keyword. After that, additional options can be specified as needed. Pressing the Tab key at this prompt will display help in the form of usable commands where applicable. To start the installation with your options, press the Enter key. To return from the boot: prompt to the boot menu, restart the computer and boot from the installation media again. The > prompt on BIOS-based AMD64 and Intel 64 systems, accessed by highlighting an entry in the boot menu and pressing the Tab key. Unlike the boot: prompt, this prompt allows you to edit a predefined set of boot options. For example, if you highlight the entry labeled Test this media & install Red Hat Enterprise Linux 7.5 , a full set of options used by this menu entry will be displayed on the prompt, allowing you to add your own options. Pressing Enter will start the installation using the options you specified. To cancel editing and return to the boot menu, press the Esc key at any time. The GRUB2 menu on UEFI-based 64-bit AMD, Intel and ARM systems. If your system uses UEFI, you can edit boot options by highlighting an entry and pressing the e key. When you finish editing, press F10 or Ctrl + X to start the installation using the options you specified. In addition to the options described in this chapter, the boot prompt also accepts dracut kernel options. A list of these options is available as the dracut.cmdline(7) man page. Note Boot options specific to the installation program always start with inst. in this guide. 
Currently, this prefix is optional, for example, resolution=1024x768 will work exactly the same as inst.resolution=1024x768 . However, it is expected that the inst. prefix will be mandatory in future releases. Specifying the Installation Source inst.repo= Specifies the installation source - that is, a location where the installation program can find the images and packages it requires. For example: The target must be either: an installable tree, which is a directory structure containing the installation program's images, packages and repodata as well as a valid .treeinfo file a DVD (a physical disk present in the system's DVD drive) an ISO image of the full Red Hat Enterprise Linux installation DVD, placed on a hard drive or a network location accessible from the installation system (requires specifying NFS Server as the installation source) This option allows for the configuration of different installation methods using different formats. The syntax is described in the table below. Table 23.1. Installation Sources Installation source Option format Any CD/DVD drive inst.repo=cdrom Specific CD/DVD drive inst.repo=cdrom: device Hard Drive inst.repo=hd: device :/ path HMC inst.repo=hmc HTTP Server inst.repo=http:// host / path HTTPS Server inst.repo=https:// host / path FTP Server inst.repo=ftp:// username : password @ host / path NFS Server inst.repo=nfs:[ options :] server :/ path [a] [a] This option uses NFS protocol version 3 by default. To use a different version, add nfsvers= X to options , replacing X with the version number that you want to use. Note In releases of Red Hat Enterprise Linux, there were separate options for an installable tree accessible by NFS (the nfs option) and an ISO image located on an NFS source (the nfsiso option). In Red Hat Enterprise Linux 7, the installation program can automatically detect whether the source is an installable tree or a directory containing an ISO image, and the nfsiso option is deprecated. Disk device names can be set using the following formats: Kernel device name, for example /dev/sda1 or sdb2 File system label, for example LABEL=Flash or LABEL=RHEL7 File system UUID, for example UUID=8176c7bf-04ff-403a-a832-9557f94e61db Non-alphanumeric characters must be represented as \x NN , where NN is the hexadecimal representation of the character. For example, \x20 is a white space (" "). inst.stage2= Specifies the location of the installation program runtime image to be loaded. The syntax is the same as in Specifying the Installation Source . This option expects a path to a directory containing a valid .treeinfo file; the location of the runtime image will be read from this file if found. If a .treeinfo file is not available, Anaconda will try to load the image from LiveOS/squashfs.img . Use the option multiple times to specify multiple HTTP, HTTPS or FTP sources. Note By default, the inst.stage2= boot option is used on the installation media and set to a specific label (for example, inst.stage2=hd:LABEL=RHEL7\x20Server.x86_64 ). If you modify the default label of the file system containing the runtime image, or if you use a customized procedure to boot the installation system, you must ensure this option is set to the correct value. inst.dd= If you need to perform a driver update during the installation, use the inst.dd= option. It can be used multiple times. The location of a driver RPM package can be specified using any of the formats detailed in Specifying the Installation Source . 
With the exception of the inst.dd=cdrom option, the device name must always be specified. For example: Using this option without any parameters (only as inst.dd ) will prompt the installation program to ask you for a driver update disk with an interactive menu. Driver disks can also be loaded from a hard disk drive or a similar device instead of being loaded over the network or from initrd . Follow this procedure: Load the driver disk on a hard disk drive, a USB or any similar device. Set the label, for example, DD , to this device. Start the installation with: as the boot argument. Replace DD with a specific label and replace dd.rpm with a specific name. Use anything supported by the inst.repo command instead of LABEL to specify your hard disk drive. For more information about driver updates during the installation, see Chapter 6, Updating Drivers During Installation on AMD64 and Intel 64 Systems for AMD64 and Intel 64 systems and Chapter 11, Updating Drivers During Installation on IBM Power Systems for IBM Power Systems servers. Kickstart Boot Options inst.ks= Gives the location of a Kickstart file to be used to automate the installation. Locations can be specified using any of the formats valid for inst.repo . See Specifying the Installation Source for details. Use the option multiple times to specify multiple HTTP, HTTPS and FTP sources. If multiple HTTP, HTTPS and FTP locations are specified, the locations are tried sequentially until one succeeds: If you only specify a device and not a path, the installation program will look for the Kickstart file in /ks.cfg on the specified device. If you use this option without specifying a device, the installation program will use the following: In the above example, -server is the DHCP -server option or the IP address of the DHCP server itself, and filename is the DHCP filename option, or /kickstart/ . If the given file name ends with the / character, ip -kickstart is appended. For example: Table 23.2. Default Kickstart File Location DHCP server address Client address Kickstart file location 192.168.122.1 192.168.122.100 192.168.122.1 : /kickstart/192.168.122.100-kickstart Additionally, starting with Red Hat Enterprise Linux 7.2, the installer will attempt to load a Kickstart file named ks.cfg from a volume with a label of OEMDRV if present. If your Kickstart file is in this location, you do not need to use the inst.ks= boot option at all. inst.ks.sendmac Adds headers to outgoing HTTP requests with the MAC addresses of all network interfaces. For example: This can be useful when using inst.ks=http to provision systems. inst.ks.sendsn Adds a header to outgoing HTTP requests. This header will contain the system's serial number, read from /sys/class/dmi/id/product_serial . The header has the following syntax: Console, Environment and Display Options console= This kernel option specifies a device to be used as the primary console. For example, to use a console on the first serial port, use console=ttyS0 . This option should be used along with the inst.text option. You can use this option multiple times. In that case, the boot message will be displayed on all specified consoles, but only the last one will be used by the installation program afterwards. For example, if you specify console=ttyS0 console=ttyS1 , the installation program will use ttyS1 . noshell Disables access to the root shell during the installation. 
This is useful with automated (Kickstart) installations - if you use this option, a user can watch the installation progress, but they cannot interfere with it by accessing the root shell by pressing Ctrl + Alt + F2 . inst.lang= Sets the language to be used during the installation. Language codes are the same as the ones used in the lang Kickstart command as described in Section 27.3.1, "Kickstart Commands and Options" . On systems where the system-config-language package is installed, a list of valid values can also be found in /usr/share/system-config-language/locale-list . inst.geoloc= Configures geolocation usage in the installation program. Geolocation is used to preset the language and time zone, and uses the following syntax: inst.geoloc= value The value parameter can be any of the following: Table 23.3. Valid Values for the inst.geoloc Option Disable geolocation inst.geoloc=0 Use the Fedora GeoIP API inst.geoloc=provider_fedora_geoip Use the Hostip.info GeoIP API inst.geoloc=provider_hostip If this option is not specified, Anaconda will use provider_fedora_geoip . inst.keymap= Specifies the keyboard layout to be used by the installation program. Layout codes are the same as the ones used in the keyboard Kickstart command as described in Section 27.3.1, "Kickstart Commands and Options" . inst.text Forces the installation program to run in text mode instead of graphical mode. The text user interface is limited, for example, it does not allow you to modify the partition layout or set up LVM. When installing a system on a machine with a limited graphical capabilities, it is recommended to use VNC as described in Enabling Remote Access . inst.cmdline Forces the installation program to run in command line mode. This mode does not allow any interaction, all options must be specified in a Kickstart file or on the command line. inst.graphical Forces the installation program to run in graphical mode. This mode is the default. inst.resolution= Specifies the screen resolution in graphical mode. The format is N x M , where N is the screen width and M is the screen height (in pixels). The lowest supported resolution is 800x600 . inst.headless Specifies that the machine being installed onto does not have any display hardware. In other words, this option prevents the installation program from trying to detect a screen. inst.xdriver= Specifies the name of the X driver to be used both during the installation and on the installed system. inst.usefbx Tells the installation program to use the frame buffer X driver instead of a hardware-specific driver. This option is equivalent to inst.xdriver=fbdev . modprobe.blacklist= Blacklists (completely disables) one or more drivers. Drivers (mods) disabled using this option will be prevented from loading when the installation starts, and after the installation finishes, the installed system will keep these settings. The blacklisted drivers can then be found in the /etc/modprobe.d/ directory. Use a comma-separated list to disable multiple drivers. For example: inst.sshd=0 By default, sshd is only automatically started on IBM Z, and on other architectures, sshd is not started unless the inst.sshd option is used. This option prevents sshd from starting automatically on IBM Z. inst.sshd Starts the sshd service during the installation, which allows you to connect to the system during the installation using SSH and monitor its progress. 
For more information on SSH, see the ssh(1) man page and the corresponding chapter in the Red Hat Enterprise Linux 7 System Administrator's Guide . By default, sshd is only automatically started on IBM Z, and on other architectures, sshd is not started unless the inst.sshd option is used. Note During the installation, the root account has no password by default. You can set a root password to be used during the installation with the sshpw Kickstart command as described in Section 27.3.1, "Kickstart Commands and Options" . inst.kdump_addon= Enables or disables the Kdump configuration screen (add-on) in the installer. This screen is enabled by default; use inst.kdump_addon=off to disable it. Note that disabling the add-on will disable the Kdump screens in both the graphical and text-based interface as well as the %addon com_redhat_kdump Kickstart command. Network Boot Options Initial network initialization is handled by dracut . This section only lists some of the more commonly used options; for a complete list, see the dracut.cmdline(7) man page. Additional information on networking is also available in Red Hat Enterprise Linux 7 Networking Guide . ip= Configures one or more network interfaces. To configure multiple interfaces, you can use the ip option multiple times - once for each interface. If multiple interfaces are configured, you must also use the option rd.neednet=1 , and you must specify a primary boot interface using the bootdev option, described below. Alternatively, you can use the ip option once, and then use Kickstart to set up further interfaces. This option accepts several different formats. The most common are described in Table 23.4, "Network Interface Configuration Formats" . Table 23.4. Network Interface Configuration Formats Configuration Method Option format Automatic configuration of any interface ip= method Automatic configuration of a specific interface ip= interface : method Static configuration ip= ip :: gateway : netmask : hostname : interface :none Automatic configuration of a specific interface with an override [a] ip= ip :: gateway : netmask : hostname : interface : method : mtu [a] Brings up the specified interface using the specified method of automatic configuration, such as dhcp , but overrides the automatically obtained IP address, gateway, netmask, host name or other specified parameter. All parameters are optional; only specify the ones you want to override and automatically obtained values will be used for the others. The method parameter can be any the following: Table 23.5. Automatic Interface Configuration Methods Automatic configuration method Value DHCP dhcp IPv6 DHCP dhcp6 IPv6 automatic configuration auto6 iBFT (iSCSI Boot Firmware Table) ibft Note If you use a boot option which requires network access, such as inst.ks=http:// host / path , without specifying the ip option, the installation program will use ip=dhcp . Important To connect automatically to an iSCSI target, a network device for accessing the target needs to be activated. The recommended way to do so is to use ip=ibft boot option. In the above tables, the ip parameter specifies the client's IP address. IPv6 addresses can be specified by putting them in square brackets, for example, [2001:DB8::1] . The gateway parameter is the default gateway. IPv6 addresses are accepted here as well. The netmask parameter is the netmask to be used. This can either be a full netmask for IPv4 (for example 255.255.255.0 ) or a prefix for IPv6 (for example 64 ). 
The hostname parameter is the host name of the client system. This parameter is optional. nameserver= Specifies the address of the name server. This option can be used multiple times. rd.neednet= You must use the option rd.neednet=1 if you use more than one ip option. Alternatively, to set up multiple network interfaces you can use the ip once, and then set up further interfaces using Kickstart. bootdev= Specifies the boot interface. This option is mandatory if you use more than one ip option. ifname= Assigns a given interface name to a network device with a given MAC address. Can be used multiple times. The syntax is ifname= interface : MAC . For example: Note Using the ifname= option is the only supported way to set custom network interface names during installation. inst.dhcpclass= Specifies the DHCP vendor class identifier. The dhcpd service will see this value as vendor-class-identifier . The default value is anaconda-USD(uname -srm) . inst.waitfornet= Using the inst.waitfornet= SECONDS boot option causes the installation system to wait for network connectivity before installation. The value given in the SECONDS argument specifies maximum amount of time to wait for network connectivity before timing out and continuing the installation process even if network connectivity is not present. vlan= Sets up a Virtual LAN (VLAN) device on a specified interface with a given name. The syntax is vlan= name : interface . For example: The above will set up a VLAN device named vlan5 on the em1 interface. The name can take the following forms: Table 23.6. VLAN Device Naming Conventions Naming scheme Example VLAN_PLUS_VID vlan0005 VLAN_PLUS_VID_NO_PAD vlan5 DEV_PLUS_VID em1.0005 . DEV_PLUS_VID_NO_PAD em1.5 . bond= Set up a bonding device with the following syntax: bond= name [: slaves ][: options ] . Replace name with the bonding device name, slaves with a comma-separated list of physical (ethernet) interfaces, and options with a comma-separated list of bonding options. For example: For a list of available options, execute the modinfo bonding command. Using this option without any parameters will assume bond=bond0:eth0,eth1:mode=balance-rr . team= Set up a team device with the following syntax: team= master : slaves . Replace master with the name of the master team device and slaves with a comma-separated list of physical (ethernet) devices to be used as slaves in the team device. For example: Advanced Installation Options inst.kexec If this option is specified, the installer will use the kexec system call at the end of the installation, instead of performing a reboot. This loads the new system immediately, and bypasses the hardware initialization normally performed by the BIOS or firmware. Important Due to the complexities involved with booting systems using kexec , it cannot be explicitly tested and guaranteed to function in every situation. When kexec is used, device registers (which would normally be cleared during a full system reboot) might stay filled with data, which could potentially create issues for some device drivers. inst.gpt Force the installation program to install partition information into a GUID Partition Table (GPT) instead of a Master Boot Record (MBR). This option is meaningless on UEFI-based systems, unless they are in BIOS compatibility mode. Normally, BIOS-based systems and UEFI-based systems in BIOS compatibility mode will attempt to use the MBR schema for storing partitioning information, unless the disk is 2 32 sectors in size or larger. 
Most commonly, disk sectors are 512 bytes in size, meaning that this is usually equivalent to 2 TiB. Using this option will change this behavior, allowing a GPT to be written to disks smaller than this. See Section 8.14.1.1, "MBR and GPT Considerations" for more information about GPT and MBR, and Section A.1.4, "GUID Partition Table (GPT)" for more general information about GPT, MBR and disk partitioning in general. inst.multilib Configure the system for multilib packages (that is, to allow installing 32-bit packages on a 64-bit AMD64 or Intel 64 system) and install packages specified in this section as such. Normally, on an AMD64 or Intel 64 system, only packages for this architecture (marked as x86_64 ) and packages for all architectures (marked as noarch would be installed. When you use this option, packages for 32-bit AMD or Intel systems (marked as i686 ) will be automatically installed as well if available. This only applies to packages directly specified in the %packages section. If a package is only installed as a dependency, only the exact specified dependency will be installed. For example, if you are installing package bash which depends on package glibc , the former will be installed in multiple variants, while the latter will only be installed in variants specifically required. selinux=0 By default, SELinux operates in permissive mode in the installer, and in enforcing mode in the installed system. This option disables the use of SELinux in the installer and the installed system entirely. Note The selinux=0 and inst.selinux=0 options are not the same. The selinux=0 option disables the use of SELinux in the installer and the installed system, whereas inst.selinux=0 disables SELinux only in the installer. By default, SELinux is set to operate in permissive mode in the installer, so disabling it has little effect. inst.nosave= This option, introduced in Red Hat Enterprise Linux 7.3, controls which Kickstart files and installation logs are saved to the installed system. It can be especially useful to disable saving such data when performing OEM operating system installations, or when generating images using sensitive resources (such as internal repository URLs), as these resources might otherwise be mentioned in kickstart files, or in logs on the image, or both. Possible values for this option are: input_ks - disables saving of the input Kickstart file (if any). output_ks - disables saving of the output Kickstart file generated by Anaconda. all_ks - disables saving of both input and output Kickstart files. logs - disables saving of all installation logs. all - disables saving of all Kickstart files and all installation logs. Multiple values can be combined as a comma separated list, for example: input_ks,logs inst.zram This option controls the usage of zRAM swap during the installation. It creates a compressed block device inside the system RAM and uses it for swap space instead of the hard drive. This allows the installer to essentially increase the amount of memory available, which makes the installation faster on systems with low memory. By default, swap on zRAM is enabled on systems with 2 GiB or less RAM, and disabled on systems with more than 2 GiB of memory. You can use this option to change this behavior - on a system with more than 2 GiB RAM, use inst.zram=1 to enable it, and on systems with 2 GiB or less memory, use inst.zram=0 to disable this feature. Enabling Remote Access The following options are necessary to configure Anaconda for remote graphical installation. 
See Chapter 25, Using VNC for more details. inst.vnc Specifies that the installation program's graphical interface should be run in a VNC session. If you specify this option, you will need to connect to the system using a VNC client application to be able to interact with the installation program. VNC sharing is enabled, so multiple clients can connect to the system at the same time. Note A system installed using VNC will start in text mode by default. inst.vncpassword= Sets a password on the VNC server used by the installation program. Any VNC client attempting to connecting to the system will have to provide the correct password to gain access. For example, inst.vncpassword= testpwd will set the password to testpwd . The VNC password must be between 6 and 8 characters long. Note If you specify an invalid password (one that is too short or too long), you will be prompted to specify a new one by a message from the installation program: inst.vncconnect= Connect to a listening VNC client at a specified host and port once the installation starts. The correct syntax is inst.vncconnect= host : port , where host is the address to the VNC client's host, and port specifies which port to use. The port parameter is optional, if you do not specify one, the installation program will use 5900 . Debugging and Troubleshooting inst.updates= Specifies the location of the updates.img file to be applied to the installation program runtime. The syntax is the same as in the inst.repo option - see Table 23.1, "Installation Sources" for details. In all formats, if you do not specify a file name but only a directory, the installation program will look for a file named updates.img . inst.loglevel= Specifies the minimum level for messages to be logged on a terminal. This only concerns terminal logging; log files will always contain messages of all levels. Possible values for this option from the lowest to highest level are: debug , info , warning , error and critical . The default value is info , which means that by default, the logging terminal will display messages ranging from info to critical . inst.syslog= Once the installation starts, this option sends log messages to the syslog process on the specified host. The remote syslog process must be configured to accept incoming connections. For information on how to configure a syslog service to accept incoming connections, see the Red Hat Enterprise Linux 7 System Administrator's Guide . inst.virtiolog= Specifies a virtio port (a character device at /dev/virtio-ports/ name ) to be used for forwarding logs. The default value is org.fedoraproject.anaconda.log.0 ; if this port is present, it will be used. rd.live.ram If this option is specified, the stage 2 image will be copied into RAM. When the stage2 image on NFS repository is used, this option may make the installation proceed smoothly, since the installation is sometimes affected by reconfiguring network in an environment built upon the stage 2 image on NFS. Note that using this option when the stage 2 image is on an NFS server will increase the minimum required memory by the size of the image - roughly 500 MiB. inst.nokill A debugging option that prevents anaconda from and rebooting when a fatal error occurs or at the end of the installation process. This allows you to capture installation logs which would be lost upon reboot. 23.1.1. Deprecated and Removed Boot Options Deprecated Boot Options Options in this list are deprecated . 
They will still work, but there are other options which offer the same functionality. Using deprecated options is not recommended and they are expected to be removed in future releases. Note Note that as Section 23.1, "Configuring the Installation System at the Boot Menu" describes, options specific to the installation program now use the inst. prefix. For example, the vnc= option is considered deprecated and replaced by the inst.vnc= option. These changes are not listed here. method= Configured the installation method. Use the inst.repo= option instead. repo=nfsiso: server :/ path In NFS installations, specified that the target is an ISO image located on an NFS server instead of an installable tree. The difference is now detected automatically, which means this option is the same as inst.repo=nfs: server :/ path . dns= Configured the Domain Name Server (DNS). Use the nameserver= option instead. netmask= , gateway= , hostname= , ip= , ipv6= These options have been consolidated under the ip= option. ksdevice= Select network device to be used at early stage of installation. Different values have been replaced with different options; see the table below. Table 23.7. Automatic Interface Configuration Methods Value Current behavior Not present Activation of all devices is attempted using dhcp , unless the desired device and configuration is specified by the ip= option or the BOOTIF option. ksdevice=link Similar to the above, with the difference that network will always be activated in the initramfs, whether it is needed or not. The supported rd.neednet dracut option should be used to achieve the same result. ksdevice=bootif Ignored (the BOOTIF= option is used by default when specified) ksdevice=ibft Replaced with the ip=ibft dracut option ksdevice= MAC Replaced with BOOTIF= MAC ksdevice= device Replaced by specifying the device name using the ip= dracut option. blacklist= Used to disable specified drivers. This is now handled by the modprobe.blacklist= option. nofirewire= Disabled support for the FireWire interface. You can disable the FireWire driver ( firewire_ohci ) by using the modprobe.blacklist= option instead: nicdelay= Used to indicate the delay after which the network was considered active; the system waited until either the gateway was successfully pinged, or until the amount of seconds specified in this parameter passed. In RHEL 7, network devices are configured and activated during the early stage of installation by the dracut modules which ensure that the gateway is accessible before proceeding. For more information about dracut , see the dracut.cmdline(7) man page. linksleep= Used to configure how long anaconda should wait for a link on a device before activating it. This functionality is now available in the dracut modules where specific rd.net.timeout.* options can be configured to handle issues caused by slow network device initialization. For more information about dracut , see the dracut.cmdline(7) man page. Removed Boot Options The following options are removed. They were present in releases of Red Hat Enterprise Linux, but they cannot be used anymore. askmethod , asknetwork The installation program's initramfs is now completely non-interactive, which means that these options are not available anymore. Instead, use the inst.repo= to specify the installation method and ip= to configure network settings. serial This option forced Anaconda to use the /dev/ttyS0 console as the output. Use the console=/dev/ttyS0 (or similar) instead. 
updates= Specified the location of updates for the installation program. Use the inst.updates= option instead. essid= , wepkey= , wpakey= Configured wireless network access. Network configuration is now being handled by dracut , which does not support wireless networking, rendering these options useless. ethtool= Used in the past to configure additional low-level network settings. All network settings are now handled by the ip= option. gdb Allowed you to debug the loader. Use rd.debug instead. mediacheck Verified the installation media before starting the installation. Replaced with the rd.live.check option. ks=floppy Specified a 3.5 inch diskette as the Kickstart file source. These drives are not supported anymore. display= Configured a remote display. Replaced with the inst.vnc option. utf8 Added UTF8 support when installing in text mode. UTF8 support now works automatically. noipv6 Used to disable IPv6 support in the installation program. IPv6 is now built into the kernel so the driver cannot be blacklisted; however, it is possible to disable IPv6 using the ipv6.disable dracut option. upgradeany Upgrades are done in a different way in Red Hat Enterprise Linux 7. For more information about upgrading your system, see Chapter 29, Upgrading Your Current System . vlanid= Used to configure Virtual LAN (802.1q tag) devices. Use the vlan= dracut option instead. | [
"inst.repo=cdrom",
"inst.stage2=host1/install.img inst.stage2=host2/install.img inst.stage2=host3/install.img",
"inst.dd=/dev/sdb1",
"inst.dd=hd: LABEL = DD :/dd.rpm",
"inst.ks=host1/ directory /ks.cfg inst.ks=host2/ directory /ks.cfg inst.ks=host3/ directory /ks.cfg",
"inst.ks=nfs: next-server :/ filename",
"X-RHN-Provisioning-MAC-0: eth0 01:23:45:67:89:ab",
"X-System-Serial-Number: R8VA23D",
"modprobe.blacklist=ahci,firewire_ohci",
"ifname=eth0:01:23:45:67:89:ab",
"vlan=vlan5:em1",
"bond=bond0:em1,em2:mode=active-backup,tx_queues=32,downdelay=5000",
"team=team0:em1,em2",
"VNC password must be six to eight characters long. Please enter a new one, or leave blank for no password. Password:",
"modprobe.blacklist=firewire_ohci"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/installation_guide/chap-anaconda-boot-options |
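To illustrate how the options described above combine in practice, the following line is a hypothetical example of starting a remote, Kickstart-driven installation with static networking from the boot: prompt. The IP addresses, host name, interface name, URLs, and VNC password are placeholders, not values taken from this guide.
linux inst.repo=http://192.0.2.10/rhel7/ inst.ks=http://192.0.2.10/ks/server.cfg ip=192.0.2.50::192.0.2.1:255.255.255.0:server01:ens3:none nameserver=192.0.2.1 inst.vnc inst.vncpassword=changeme
Here inst.repo= and inst.ks= both require network access, so the static ip= and nameserver= settings are applied early by dracut; inst.vnc then runs the graphical installer in a VNC session protected by the eight-character inst.vncpassword= value, as described under Enabling Remote Access.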
Chapter 1. Introduction | Chapter 1. Introduction 1.1. About this release This release of Red Hat OpenStack Platform (RHOSP) is based on the OpenStack "Wallaby" release. It includes additional features, known issues, and resolved issues specific to RHOSP. Only changes specific to RHOSP are included in this document. The release notes for the OpenStack "Wallaby" release itself are available at the following location: https://releases.openstack.org/wallaby/index.html . RHOSP uses components from other Red Hat products. For specific information pertaining to the support of these components, see https://access.redhat.com/site/support/policy/updates/openstack/platform/ . To evaluate RHOSP, sign up at http://www.redhat.com/openstack/ . Note The Red Hat Enterprise Linux High Availability Add-On is available for RHOSP use cases. For more details about the add-on, see http://www.redhat.com/products/enterprise-linux-add-ons/high-availability/ . For details about the package versions to use in combination with RHOSP, see https://access.redhat.com/site/solutions/509783 . 1.2. Requirements This version of Red Hat OpenStack Platform (RHOSP) runs on the most recent fully supported release of Red Hat Enterprise Linux 9.2 Extended Update Support (EUS). The dashboard for this release supports the latest stable versions of the following web browsers: Mozilla Firefox Mozilla Firefox ESR Google Chrome Note Before you deploy RHOSP, familiarize yourself with the recommended deployment methods. See Installing and Managing Red Hat OpenStack Platform . 1.3. Deployment limits For a list of deployment limits for Red Hat OpenStack Platform (RHOSP), see Deployment Limits for Red Hat OpenStack Platform . 1.4. Database size management For recommended practices on maintaining the size of the MariaDB databases in your Red Hat OpenStack Platform (RHOSP) environment, see Database Size Management for Red Hat Enterprise Linux OpenStack Platform . 1.5. Certified guest operating systems For a list of the certified guest operating systems in Red Hat OpenStack Platform (RHOSP), see Certified Guest Operating Systems in Red Hat OpenStack Platform and Red Hat Enterprise Virtualization . 1.6. Product certification catalog For a list of the Red Hat Official Product Certification Catalog, see Product Certification Catalog . 1.7. Compute drivers This release of Red Hat OpenStack Platform (RHOSP) is supported only with the libvirt driver (using KVM as the hypervisor on Compute nodes). This release of RHOSP runs with Bare Metal Provisioning. Bare Metal Provisioning has been fully supported since the release of RHOSP 7 (Kilo). You can use Bare Metal Provisioning to provision bare-metal machines by using common technologies such as PXE and IPMI, to cover a wide range of hardware while supporting pluggable drivers to allow the addition of vendor-specific functionality. Red Hat does not provide support for other Compute virtualization drivers such as the deprecated VMware "direct-to-ESX" hypervisor or non-KVM libvirt hypervisors. 1.8. Content Delivery Network (CDN) repositories This section describes the repositories required to deploy Red Hat OpenStack Platform (RHOSP) 17.1. You can install RHOSP 17.1 through the Content Delivery Network (CDN) by using subscription-manager . For more information, see Planning your undercloud . Warning Some packages in the RHOSP software repositories conflict with packages provided by the Extra Packages for Enterprise Linux (EPEL) software repositories. 
The use of RHOSP on systems with the EPEL software repositories enabled is unsupported. 1.8.1. Undercloud repositories You run Red Hat OpenStack Platform (RHOSP) 17.1 on Red Hat Enterprise Linux (RHEL) 9.2. Note If you synchronize repositories with Red Hat Satellite, you can enable specific versions of the Red Hat Enterprise Linux repositories. However, the repository remains the same despite the version you choose. For example, you can enable the 9.2 version of the BaseOS repository, but the repository name is still rhel-9-for-x86_64-baseos-eus-rpms despite the specific version you choose. Warning Any repositories except the ones specified here are not supported. Unless recommended, do not enable any other products or repositories except the ones listed in the following tables or else you might encounter package dependency issues. Do not enable Extra Packages for Enterprise Linux (EPEL). Core repositories The following table lists core repositories for installing the undercloud. Name Repository Description of requirement Red Hat Enterprise Linux 9 for x86_64 - BaseOS (RPMs) Extended Update Support (EUS) rhel-9-for-x86_64-baseos-eus-rpms Base operating system repository for x86_64 systems. Red Hat Enterprise Linux 9 for x86_64 - AppStream (RPMs) rhel-9-for-x86_64-appstream-eus-rpms Contains Red Hat OpenStack Platform dependencies. Red Hat Enterprise Linux 9 for x86_64 - High Availability (RPMs) Extended Update Support (EUS) rhel-9-for-x86_64-highavailability-eus-rpms High availability tools for Red Hat Enterprise Linux. Used for Controller node high availability. Red Hat OpenStack Platform for RHEL 9 (RPMs) openstack-17.1-for-rhel-9-x86_64-rpms Core Red Hat OpenStack Platform repository, which contains packages for Red Hat OpenStack Platform director. Red Hat Fast Datapath for RHEL 9 (RPMS) fast-datapath-for-rhel-9-x86_64-rpms Provides Open vSwitch (OVS) packages for OpenStack Platform. 1.8.2. Overcloud repositories You run Red Hat OpenStack Platform (RHOSP) 17.1 on Red Hat Enterprise Linux (RHEL) 9.2. Note If you synchronize repositories with Red Hat Satellite, you can enable specific versions of the Red Hat Enterprise Linux repositories. However, the repository remains the same despite the version you choose. For example, you can enable the 9.2 version of the BaseOS repository, but the repository name is still rhel-9-for-x86_64-baseos-eus-rpms despite the specific version you choose. Warning Any repositories except the ones specified here are not supported. Unless recommended, do not enable any other products or repositories except the ones listed in the following tables or else you might encounter package dependency issues. Do not enable Extra Packages for Enterprise Linux (EPEL). Controller node repositories The following table lists core repositories for Controller nodes in the overcloud. Name Repository Description of requirement Red Hat Enterprise Linux 9 for x86_64 - BaseOS (RPMs) Extended Update Support (EUS) rhel-9-for-x86_64-baseos-eus-rpms Base operating system repository for x86_64 systems. Red Hat Enterprise Linux 9 for x86_64 - AppStream (RPMs) rhel-9-for-x86_64-appstream-eus-rpms Contains Red Hat OpenStack Platform dependencies. Red Hat Enterprise Linux 9 for x86_64 - High Availability (RPMs) Extended Update Support (EUS) rhel-9-for-x86_64-highavailability-eus-rpms High availability tools for Red Hat Enterprise Linux. Red Hat OpenStack Platform for RHEL 9 x86_64 (RPMs) openstack-17.1-for-rhel-9-x86_64-rpms Core Red Hat OpenStack Platform repository. 
Red Hat Fast Datapath for RHEL 9 (RPMS) fast-datapath-for-rhel-9-x86_64-rpms Provides Open vSwitch (OVS) packages for OpenStack Platform. Red Hat Ceph Storage Tools 6 for RHEL 9 x86_64 (RPMs) rhceph-6-tools-for-rhel-9-x86_64-rpms Tools for Red Hat Ceph Storage 6 for Red Hat Enterprise Linux 9. Compute and ComputeHCI node repositories The following table lists core repositories for Compute and ComputeHCI nodes in the overcloud. Name Repository Description of requirement Red Hat Enterprise Linux 9 for x86_64 - BaseOS (RPMs) Extended Update Support (EUS) rhel-9-for-x86_64-baseos-eus-rpms Base operating system repository for x86_64 systems. Red Hat Enterprise Linux 9 for x86_64 - AppStream (RPMs) rhel-9-for-x86_64-appstream-eus-rpms Contains Red Hat OpenStack Platform dependencies. Red Hat Enterprise Linux 9 for x86_64 - High Availability (RPMs) Extended Update Support (EUS) rhel-9-for-x86_64-highavailability-eus-rpms High availability tools for Red Hat Enterprise Linux. Red Hat OpenStack Platform for RHEL 9 x86_64 (RPMs) openstack-17.1-for-rhel-9-x86_64-rpms Core Red Hat OpenStack Platform repository. Red Hat Fast Datapath for RHEL 9 (RPMS) fast-datapath-for-rhel-9-x86_64-rpms Provides Open vSwitch (OVS) packages for OpenStack Platform. Red Hat Ceph Storage Tools 6 for RHEL 9 x86_64 (RPMs) rhceph-6-tools-for-rhel-9-x86_64-rpms Tools for Red Hat Ceph Storage 6 for Red Hat Enterprise Linux 9. Ceph Storage node repositories The following table lists Ceph Storage related repositories for the overcloud. Name Repository Description of requirement Red Hat Enterprise Linux 9 for x86_64 - BaseOS (RPMs) rhel-9-for-x86_64-baseos-rpms Base operating system repository for x86_64 systems. Red Hat Enterprise Linux 9 for x86_64 - AppStream (RPMs) rhel-9-for-x86_64-appstream-rpms Contains Red Hat OpenStack Platform dependencies. Red Hat OpenStack Platform Deployment Tools for RHEL 9 x86_64 (RPMs) openstack-17.1-deployment-tools-for-rhel-9-x86_64-rpms Packages to help director configure Ceph Storage nodes. This repository is included with standalone Ceph Storage subscriptions. If you use a combined OpenStack Platform and Ceph Storage subscription, use the openstack-17.1-for-rhel-9-x86_64-rpms repository. Red Hat OpenStack Platform for RHEL 9 x86_64 (RPMs) openstack-17.1-for-rhel-9-x86_64-rpms Packages to help director configure Ceph Storage nodes. This repository is included with combined Red Hat OpenStack Platform and Red Hat Ceph Storage subscriptions. If you use a standalone Red Hat Ceph Storage subscription, use the openstack-17.1-deployment-tools-for-rhel-9-x86_64-rpms repository. Red Hat Ceph Storage Tools 6 for RHEL 9 x86_64 (RPMs) rhceph-6-tools-for-rhel-9-x86_64-rpms Provides tools for nodes to communicate with the Ceph Storage cluster. Red Hat Fast Datapath for RHEL 9 (RPMS) fast-datapath-for-rhel-9-x86_64-rpms Provides Open vSwitch (OVS) packages for OpenStack Platform. If you are using OVS on Ceph Storage nodes, add this repository to the network interface configuration (NIC) templates. 1.9. Product support The resources available for product support include the following: Customer Portal The Red Hat Customer Portal offers a wide range of resources to help guide you through planning, deploying, and maintaining your Red Hat OpenStack Platform (RHOSP) deployment. You can access the following facilities through the Customer Portal: Product documentation Knowledge base articles and solutions Technical briefs Support case management Access the Customer Portal at https://access.redhat.com/ . 
Mailing Lists You can join the rhsa-announce public mailing list to receive notification of security fixes for RHOSP and other Red Hat products. Subscribe at https://www.redhat.com/mailman/listinfo/rhsa-announce . 1.10. Unsupported features The following features are not supported in Red Hat OpenStack Platform (RHOSP): Custom policies, which includes modification of policy.json or policy.yaml files either manually or through any Policies heat parameters. Do not modify the default policies unless the documentation contains explicit instructions to do so. Containers are not available for the following packages, therefore they are not supported in RHOSP: nova-serialproxy nova-spicehtml5proxy File injection of personality files to inject user data into virtual machine instances. Instead, cloud users can pass data to their instances by using the --user-data option to run a script during instance boot, or set instance metadata by using the --property option when launching an instance. For more information, see Creating a customized instance . Persistent memory for instances (vPMEM). You can create persistent memory namespaces only on Compute nodes that have NVDIMM hardware. Red Hat has removed support for persistent memory from RHOSP 17+ in response to the announcement by the Intel Corporation on July 28, 2022 that they are discontinuing investment in their Intel(R) OptaneTM business: Intel(R) OptaneTM Business Update: What Does This Mean for Warranty and Support Virtualized control planes. If you require support for any of these features, contact the Red Hat Customer Experience and Engagement team to discuss a support exception, if applicable, or other options. | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/release_notes/chap-introduction |
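As a quick reference for the repository tables in section 1.8.1, the following is a minimal sketch of enabling the undercloud repositories with subscription-manager. It assumes the host is already registered and attached to a Red Hat OpenStack Platform subscription; the authoritative procedure is in Planning your undercloud.
# Lock the host to RHEL 9.2 EUS content before enabling the RHOSP 17.1 repositories
sudo subscription-manager release --set=9.2
# Enable only the undercloud repositories listed in section 1.8.1
sudo subscription-manager repos \
  --enable=rhel-9-for-x86_64-baseos-eus-rpms \
  --enable=rhel-9-for-x86_64-appstream-eus-rpms \
  --enable=rhel-9-for-x86_64-highavailability-eus-rpms \
  --enable=openstack-17.1-for-rhel-9-x86_64-rpms \
  --enable=fast-datapath-for-rhel-9-x86_64-rpms
# Confirm that no additional repositories, such as EPEL, are enabled
sudo subscription-manager repos --list-enabled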
Chapter 3. Configuring the Collector | Chapter 3. Configuring the Collector 3.1. Configuring the Collector The Red Hat build of OpenTelemetry Operator uses a custom resource definition (CRD) file that defines the architecture and configuration settings to be used when creating and deploying the Red Hat build of OpenTelemetry resources. You can install the default configuration or modify the file. 3.1.1. OpenTelemetry Collector configuration options The OpenTelemetry Collector consists of five types of components that access telemetry data: Receivers Processors Exporters Connectors Extensions You can define multiple instances of components in a custom resource YAML file. When configured, these components must be enabled through pipelines defined in the spec.config.service section of the YAML file. As a best practice, only enable the components that you need. Example of the OpenTelemetry Collector custom resource file apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: cluster-collector namespace: tracing-system spec: mode: deployment observability: metrics: enableMetrics: true config: receivers: otlp: protocols: grpc: {} http: {} processors: {} exporters: otlp: endpoint: otel-collector-headless.tracing-system.svc:4317 tls: ca_file: "/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt" prometheus: endpoint: 0.0.0.0:8889 resource_to_telemetry_conversion: enabled: true # by default resource attributes are dropped service: 1 pipelines: traces: receivers: [otlp] processors: [] exporters: [otlp] metrics: receivers: [otlp] processors: [] exporters: [prometheus] 1 If a component is configured but not defined in the service section, the component is not enabled. Table 3.1. Parameters used by the Operator to define the OpenTelemetry Collector Parameter Description Values Default A receiver is how data gets into the Collector. By default, no receivers are configured. There must be at least one enabled receiver for a configuration to be considered valid. Receivers are enabled by being added to a pipeline. otlp , jaeger , prometheus , zipkin , kafka , opencensus None Processors run through the received data before it is exported. By default, no processors are enabled. batch , memory_limiter , resourcedetection , attributes , span , k8sattributes , filter , routing None An exporter sends data to one or more back ends or destinations. By default, no exporters are configured. There must be at least one enabled exporter for a configuration to be considered valid. Exporters are enabled by being added to a pipeline. Exporters might be used with their default settings, but many require configuration to specify at least the destination and security settings. otlp , otlphttp , debug , prometheus , kafka None Connectors join pairs of pipelines by consuming data as end-of-pipeline exporters and emitting data as start-of-pipeline receivers. Connectors can be used to summarize, replicate, or route consumed data. spanmetrics None Optional components for tasks that do not involve processing telemetry data. bearertokenauth , oauth2client , jaegerremotesampling , pprof , health_check , memory_ballast , zpages None Components are enabled by adding them to a pipeline under services.pipeline . You enable receivers for tracing by adding them under service.pipelines.traces . None You enable processors for tracing by adding them under service.pipelines.traces . None You enable exporters for tracing by adding them under service.pipelines.traces . 
None You enable receivers for metrics by adding them under service.pipelines.metrics . None You enable processors for metircs by adding them under service.pipelines.metrics . None You enable exporters for metrics by adding them under service.pipelines.metrics . None 3.1.2. Creating the required RBAC resources automatically Some Collector components require configuring the RBAC resources. Procedure Add the following permissions to the opentelemetry-operator-controller-manage service account so that the Red Hat build of OpenTelemetry Operator can create them automatically: apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: generate-processors-rbac rules: - apiGroups: - rbac.authorization.k8s.io resources: - clusterrolebindings - clusterroles verbs: - create - delete - get - list - patch - update - watch --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: generate-processors-rbac roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: generate-processors-rbac subjects: - kind: ServiceAccount name: opentelemetry-operator-controller-manager namespace: openshift-opentelemetry-operator 3.2. Receivers Receivers get data into the Collector. A receiver can be push or pull based. Generally, a receiver accepts data in a specified format, translates it into the internal format, and passes it to processors and exporters defined in the applicable pipelines. By default, no receivers are configured. One or more receivers must be configured. Receivers may support one or more data sources. Currently, the following General Availability and Technology Preview receivers are available for the Red Hat build of OpenTelemetry: OTLP Receiver Jaeger Receiver Host Metrics Receiver Kubernetes Objects Receiver Kubelet Stats Receiver Prometheus Receiver OTLP JSON File Receiver Zipkin Receiver Kafka Receiver Kubernetes Cluster Receiver OpenCensus Receiver Filelog Receiver Journald Receiver Kubernetes Events Receiver 3.2.1. OTLP Receiver The OTLP Receiver ingests traces, metrics, and logs by using the OpenTelemetry Protocol (OTLP). The OTLP Receiver ingests traces and metrics using the OpenTelemetry protocol (OTLP). OpenTelemetry Collector custom resource with an enabled OTLP Receiver # ... config: receivers: otlp: protocols: grpc: endpoint: 0.0.0.0:4317 1 tls: 2 ca_file: ca.pem cert_file: cert.pem key_file: key.pem client_ca_file: client.pem 3 reload_interval: 1h 4 http: endpoint: 0.0.0.0:4318 5 tls: {} 6 service: pipelines: traces: receivers: [otlp] metrics: receivers: [otlp] # ... 1 The OTLP gRPC endpoint. If omitted, the default 0.0.0.0:4317 is used. 2 The server-side TLS configuration. Defines paths to TLS certificates. If omitted, the TLS is disabled. 3 The path to the TLS certificate at which the server verifies a client certificate. This sets the value of ClientCAs and ClientAuth to RequireAndVerifyClientCert in the TLSConfig . For more information, see the Config of the Golang TLS package . 4 Specifies the time interval at which the certificate is reloaded. If the value is not set, the certificate is never reloaded. The reload_interval field accepts a string containing valid units of time such as ns , us (or ms ), ms , s , m , h . 5 The OTLP HTTP endpoint. The default value is 0.0.0.0:4318 . 6 The server-side TLS configuration. For more information, see the grpc protocol configuration section. 3.2.2. Jaeger Receiver The Jaeger Receiver ingests traces in the Jaeger formats. 
OpenTelemetry Collector custom resource with an enabled Jaeger Receiver # ... config: receivers: jaeger: protocols: grpc: endpoint: 0.0.0.0:14250 1 thrift_http: endpoint: 0.0.0.0:14268 2 thrift_compact: endpoint: 0.0.0.0:6831 3 thrift_binary: endpoint: 0.0.0.0:6832 4 tls: {} 5 service: pipelines: traces: receivers: [jaeger] # ... 1 The Jaeger gRPC endpoint. If omitted, the default 0.0.0.0:14250 is used. 2 The Jaeger Thrift HTTP endpoint. If omitted, the default 0.0.0.0:14268 is used. 3 The Jaeger Thrift Compact endpoint. If omitted, the default 0.0.0.0:6831 is used. 4 The Jaeger Thrift Binary endpoint. If omitted, the default 0.0.0.0:6832 is used. 5 The server-side TLS configuration. See the OTLP Receiver configuration section for more details. 3.2.3. Host Metrics Receiver The Host Metrics Receiver ingests metrics in the OTLP format. OpenTelemetry Collector custom resource with an enabled Host Metrics Receiver apiVersion: v1 kind: ServiceAccount metadata: name: otel-hostfs-daemonset namespace: <namespace> # ... --- apiVersion: security.openshift.io/v1 kind: SecurityContextConstraints allowHostDirVolumePlugin: true allowHostIPC: false allowHostNetwork: false allowHostPID: true allowHostPorts: false allowPrivilegeEscalation: true allowPrivilegedContainer: true allowedCapabilities: null defaultAddCapabilities: - SYS_ADMIN fsGroup: type: RunAsAny groups: [] metadata: name: otel-hostmetrics readOnlyRootFilesystem: true runAsUser: type: RunAsAny seLinuxContext: type: RunAsAny supplementalGroups: type: RunAsAny users: - system:serviceaccount:<namespace>:otel-hostfs-daemonset volumes: - configMap - emptyDir - hostPath - projected # ... --- apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: <namespace> spec: serviceAccount: otel-hostfs-daemonset mode: daemonset volumeMounts: - mountPath: /hostfs name: host readOnly: true volumes: - hostPath: path: / name: host config: receivers: hostmetrics: collection_interval: 10s 1 initial_delay: 1s 2 root_path: / 3 scrapers: 4 cpu: {} memory: {} disk: {} service: pipelines: metrics: receivers: [hostmetrics] # ... 1 Sets the time interval for host metrics collection. If omitted, the default value is 1m . 2 Sets the initial time delay for host metrics collection. If omitted, the default value is 1s . 3 Configures the root_path so that the Host Metrics Receiver knows where the root filesystem is. If running multiple instances of the Host Metrics Receiver, set the same root_path value for each instance. 4 Lists the enabled host metrics scrapers. Available scrapers are cpu , disk , load , filesystem , memory , network , paging , processes , and process . 3.2.4. Kubernetes Objects Receiver The Kubernetes Objects Receiver pulls or watches objects to be collected from the Kubernetes API server. This receiver watches primarily Kubernetes events, but it can collect any type of Kubernetes objects. This receiver gathers telemetry for the cluster as a whole, so only one instance of this receiver suffices for collecting all the data. Important The Kubernetes Objects Receiver is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. 
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with an enabled Kubernetes Objects Receiver apiVersion: v1 kind: ServiceAccount metadata: name: otel-k8sobj namespace: <namespace> # ... --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-k8sobj namespace: <namespace> rules: - apiGroups: - "" resources: - events - pods verbs: - get - list - watch - apiGroups: - "events.k8s.io" resources: - events verbs: - watch - list # ... --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-k8sobj subjects: - kind: ServiceAccount name: otel-k8sobj namespace: <namespace> roleRef: kind: ClusterRole name: otel-k8sobj apiGroup: rbac.authorization.k8s.io # ... --- apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel-k8s-obj namespace: <namespace> spec: serviceAccount: otel-k8sobj mode: deployment config: receivers: k8sobjects: auth_type: serviceAccount objects: - name: pods 1 mode: pull 2 interval: 30s 3 label_selector: 4 field_selector: 5 namespaces: [<namespace>,...] 6 - name: events mode: watch exporters: debug: service: pipelines: logs: receivers: [k8sobjects] exporters: [debug] # ... 1 The Resource name that this receiver observes: for example, pods , deployments , or events . 2 The observation mode that this receiver uses: pull or watch . 3 Only applicable to the pull mode. The request interval for pulling an object. If omitted, the default value is 1h . 4 The label selector to define targets. 5 The field selector to filter targets. 6 The list of namespaces to collect events from. If omitted, the default value is all . 3.2.5. Kubelet Stats Receiver The Kubelet Stats Receiver extracts metrics related to nodes, pods, containers, and volumes from the kubelet's API server. These metrics are then channeled through the metrics-processing pipeline for additional analysis. OpenTelemetry Collector custom resource with an enabled Kubelet Stats Receiver # ... config: receivers: kubeletstats: collection_interval: 20s auth_type: "serviceAccount" endpoint: "https://USD{env:K8S_NODE_NAME}:10250" insecure_skip_verify: true service: pipelines: metrics: receivers: [kubeletstats] env: - name: K8S_NODE_NAME 1 valueFrom: fieldRef: fieldPath: spec.nodeName # ... 1 Sets the K8S_NODE_NAME to authenticate to the API. The Kubelet Stats Receiver requires additional permissions for the service account used for running the OpenTelemetry Collector. Permissions required by the service account apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector rules: - apiGroups: [''] resources: ['nodes/stats'] verbs: ['get', 'watch', 'list'] - apiGroups: [""] resources: ["nodes/proxy"] 1 verbs: ["get"] # ... 1 The permissions required when using the extra_metadata_labels or request_utilization or limit_utilization metrics. 3.2.6. Prometheus Receiver The Prometheus Receiver scrapes the metrics endpoints. Important The Prometheus Receiver is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. 
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with an enabled Prometheus Receiver # ... config: receivers: prometheus: config: scrape_configs: 1 - job_name: 'my-app' 2 scrape_interval: 5s 3 static_configs: - targets: ['my-app.example.svc.cluster.local:8888'] 4 service: pipelines: metrics: receivers: [prometheus] # ... 1 Scrapes configurations using the Prometheus format. 2 The Prometheus job name. 3 The lnterval for scraping the metrics data. Accepts time units. The default value is 1m . 4 The targets at which the metrics are exposed. This example scrapes the metrics from a my-app application in the example project. 3.2.7. OTLP JSON File Receiver The OTLP JSON File Receiver extracts pipeline information from files containing data in the ProtoJSON format and conforming to the OpenTelemetry Protocol specification. The receiver watches a specified directory for changes such as created or modified files to process. Important The OTLP JSON File Receiver is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with the enabled OTLP JSON File Receiver # ... config: otlpjsonfile: include: - "/var/log/*.log" 1 exclude: - "/var/log/test.log" 2 # ... 1 The list of file path glob patterns to watch. 2 The list of file path glob patterns to ignore. 3.2.8. Zipkin Receiver The Zipkin Receiver ingests traces in the Zipkin v1 and v2 formats. OpenTelemetry Collector custom resource with the enabled Zipkin Receiver # ... config: receivers: zipkin: endpoint: 0.0.0.0:9411 1 tls: {} 2 service: pipelines: traces: receivers: [zipkin] # ... 1 The Zipkin HTTP endpoint. If omitted, the default 0.0.0.0:9411 is used. 2 The server-side TLS configuration. See the OTLP Receiver configuration section for more details. 3.2.9. Kafka Receiver The Kafka Receiver receives traces, metrics, and logs from Kafka in the OTLP format. Important The Kafka Receiver is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with the enabled Kafka Receiver # ... config: receivers: kafka: brokers: ["localhost:9092"] 1 protocol_version: 2.0.0 2 topic: otlp_spans 3 auth: plain_text: 4 username: example password: example tls: 5 ca_file: ca.pem cert_file: cert.pem key_file: key.pem insecure: false 6 server_name_override: kafka.example.corp 7 service: pipelines: traces: receivers: [kafka] # ... 1 The list of Kafka brokers. The default is localhost:9092 . 2 The Kafka protocol version. For example, 2.0.0 . 
This is a required field. 3 The name of the Kafka topic to read from. The default is otlp_spans . 4 The plain text authentication configuration. If omitted, plain text authentication is disabled. 5 The client-side TLS configuration. Defines paths to the TLS certificates. If omitted, TLS authentication is disabled. 6 Disables verifying the server's certificate chain and host name. The default is false . 7 ServerName indicates the name of the server requested by the client to support virtual hosting. 3.2.10. Kubernetes Cluster Receiver The Kubernetes Cluster Receiver gathers cluster metrics and entity events from the Kubernetes API server. It uses the Kubernetes API to receive information about updates. Authentication for this receiver is only supported through service accounts. Important The Kubernetes Cluster Receiver is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with the enabled Kubernetes Cluster Receiver # ... config: receivers: k8s_cluster: distribution: openshift collection_interval: 10s exporters: debug: {} service: pipelines: metrics: receivers: [k8s_cluster] exporters: [debug] logs/entity_events: receivers: [k8s_cluster] exporters: [debug] # ... This receiver requires a configured service account, RBAC rules for the cluster role, and the cluster role binding that binds the RBAC with the service account. ServiceAccount object apiVersion: v1 kind: ServiceAccount metadata: labels: app: otelcontribcol name: otelcontribcol # ... RBAC rules for the ClusterRole object apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otelcontribcol labels: app: otelcontribcol rules: - apiGroups: - quota.openshift.io resources: - clusterresourcequotas verbs: - get - list - watch - apiGroups: - "" resources: - events - namespaces - namespaces/status - nodes - nodes/spec - pods - pods/status - replicationcontrollers - replicationcontrollers/status - resourcequotas - services verbs: - get - list - watch - apiGroups: - apps resources: - daemonsets - deployments - replicasets - statefulsets verbs: - get - list - watch - apiGroups: - extensions resources: - daemonsets - deployments - replicasets verbs: - get - list - watch - apiGroups: - batch resources: - jobs - cronjobs verbs: - get - list - watch - apiGroups: - autoscaling resources: - horizontalpodautoscalers verbs: - get - list - watch # ... ClusterRoleBinding object apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otelcontribcol labels: app: otelcontribcol roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: otelcontribcol subjects: - kind: ServiceAccount name: otelcontribcol namespace: default # ... 3.2.11. OpenCensus Receiver The OpenCensus Receiver provides backwards compatibility with the OpenCensus project for easier migration of instrumented codebases. It receives metrics and traces in the OpenCensus format via gRPC or HTTP and Json. OpenTelemetry Collector custom resource with the enabled OpenCensus Receiver # ... 
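# The receiver below accepts OpenCensus traces and metrics over gRPC or HTTP/JSON;
# cors_allowed_origins limits which browser origins may call the HTTP/JSON endpoint.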
config: receivers: opencensus: endpoint: 0.0.0.0:9411 1 tls: 2 cors_allowed_origins: 3 - https://*.<example>.com service: pipelines: traces: receivers: [opencensus] # ... 1 The OpenCensus endpoint. If omitted, the default is 0.0.0.0:55678 . 2 The server-side TLS configuration. See the OTLP Receiver configuration section for more details. 3 You can also use the HTTP JSON endpoint to optionally configure CORS, which is enabled by specifying a list of allowed CORS origins in this field. Wildcards with * are accepted under the cors_allowed_origins . To match any origin, enter only * . 3.2.12. Filelog Receiver The Filelog Receiver tails and parses logs from files. Important The Filelog Receiver is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with the enabled Filelog Receiver that tails a text file # ... config: receivers: filelog: include: [ /simple.log ] 1 operators: 2 - type: regex_parser regex: '^(?P<time>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) (?P<sev>[A-Z]*) (?P<msg>.*)USD' timestamp: parse_from: attributes.time layout: '%Y-%m-%d %H:%M:%S' severity: parse_from: attributes.sev # ... 1 A list of file glob patterns that match the file paths to be read. 2 An array of Operators. Each Operator performs a simple task such as parsing a timestamp or JSON. To process logs into a desired format, chain the Operators together. 3.2.13. Journald Receiver The Journald Receiver parses journald events from the systemd journal and sends them as logs. Important The Journald Receiver is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with the enabled Journald Receiver apiVersion: v1 kind: Namespace metadata: name: otel-journald labels: security.openshift.io/scc.podSecurityLabelSync: "false" pod-security.kubernetes.io/enforce: "privileged" pod-security.kubernetes.io/audit: "privileged" pod-security.kubernetes.io/warn: "privileged" # ... --- apiVersion: v1 kind: ServiceAccount metadata: name: privileged-sa namespace: otel-journald # ... --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-journald-binding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:openshift:scc:privileged subjects: - kind: ServiceAccount name: privileged-sa namespace: otel-journald # ... 
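# The OpenTelemetryCollector below runs as a daemonset under the privileged-sa
# service account bound above, so that the collector pod on each node can mount
# and read the host journal from /var/log/journal.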
--- apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel-journald-logs namespace: otel-journald spec: mode: daemonset serviceAccount: privileged-sa securityContext: allowPrivilegeEscalation: false capabilities: drop: - CHOWN - DAC_OVERRIDE - FOWNER - FSETID - KILL - NET_BIND_SERVICE - SETGID - SETPCAP - SETUID readOnlyRootFilesystem: true seLinuxOptions: type: spc_t seccompProfile: type: RuntimeDefault config: receivers: journald: files: /var/log/journal/*/* priority: info 1 units: 2 - kubelet - crio - init.scope - dnsmasq all: true 3 retry_on_failure: enabled: true 4 initial_interval: 1s 5 max_interval: 30s 6 max_elapsed_time: 5m 7 processors: exporters: debug: {} service: pipelines: logs: receivers: [journald] exporters: [debug] volumeMounts: - name: journal-logs mountPath: /var/log/journal/ readOnly: true volumes: - name: journal-logs hostPath: path: /var/log/journal tolerations: - key: node-role.kubernetes.io/master operator: Exists effect: NoSchedule # ... 1 Filters output by message priorities or priority ranges. The default value is info . 2 Lists the units to read entries from. If empty, entries are read from all units. 3 Includes very long logs and logs with unprintable characters. The default value is false . 4 If set to true , the receiver pauses reading a file and attempts to resend the current batch of logs when encountering an error from downstream components. The default value is false . 5 The time interval to wait after the first failure before retrying. The default value is 1s . The units are ms , s , m , h . 6 The upper bound for the retry backoff interval. When this value is reached, the time interval between consecutive retry attempts remains constant at this value. The default value is 30s . The supported units are ms , s , m , h . 7 The maximum time interval, including retry attempts, for attempting to send a logs batch to a downstream consumer. When this value is reached, the data are discarded. If the set value is 0 , retrying never stops. The default value is 5m . The supported units are ms , s , m , h . 3.2.14. Kubernetes Events Receiver The Kubernetes Events Receiver collects events from the Kubernetes API server. The collected events are converted into logs. Important The Kubernetes Events Receiver is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 
OpenShift Container Platform permissions required for the Kubernetes Events Receiver apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector labels: app: otel-collector rules: - apiGroups: - "" resources: - events - namespaces - namespaces/status - nodes - nodes/spec - pods - pods/status - replicationcontrollers - replicationcontrollers/status - resourcequotas - services verbs: - get - list - watch - apiGroups: - apps resources: - daemonsets - deployments - replicasets - statefulsets verbs: - get - list - watch - apiGroups: - extensions resources: - daemonsets - deployments - replicasets verbs: - get - list - watch - apiGroups: - batch resources: - jobs - cronjobs verbs: - get - list - watch - apiGroups: - autoscaling resources: - horizontalpodautoscalers verbs: - get - list - watch # ... OpenTelemetry Collector custom resource with the enabled Kubernetes Event Receiver # ... serviceAccount: otel-collector 1 config: receivers: k8s_events: namespaces: [project1, project2] 2 service: pipelines: logs: receivers: [k8s_events] # ... 1 The service account of the Collector that has the required ClusterRole otel-collector RBAC. 2 The list of namespaces to collect events from. The default value is empty, which means that all namespaces are collected. 3.2.15. Additional resources OpenTelemetry Protocol (OTLP) documentation 3.3. Processors Processors process the data between it is received and exported. Processors are optional. By default, no processors are enabled. Processors must be enabled for every data source. Not all processors support all data sources. Depending on the data source, multiple processors might be enabled. Note that the order of processors matters. Currently, the following General Availability and Technology Preview processors are available for the Red Hat build of OpenTelemetry: Batch Processor Memory Limiter Processor Resource Detection Processor Attributes Processor Resource Processor Span Processor Kubernetes Attributes Processor Filter Processor Routing Processor Cumulative-to-Delta Processor Group-by-Attributes Processor Transform Processor 3.3.1. Batch Processor The Batch Processor batches traces and metrics to reduce the number of outgoing connections needed to transfer the telemetry information. Example of the OpenTelemetry Collector custom resource when using the Batch Processor # ... config: processors: batch: timeout: 5s send_batch_max_size: 10000 service: pipelines: traces: processors: [batch] metrics: processors: [batch] # ... Table 3.2. Parameters used by the Batch Processor Parameter Description Default timeout Sends the batch after a specific time duration and irrespective of the batch size. 200ms send_batch_size Sends the batch of telemetry data after the specified number of spans or metrics. 8192 send_batch_max_size The maximum allowable size of the batch. Must be equal or greater than the send_batch_size . 0 metadata_keys When activated, a batcher instance is created for each unique set of values found in the client.Metadata . [] metadata_cardinality_limit When the metadata_keys are populated, this configuration restricts the number of distinct metadata key-value combinations processed throughout the duration of the process. 1000 3.3.2. Memory Limiter Processor The Memory Limiter Processor periodically checks the Collector's memory usage and pauses data processing when the soft memory limit is reached. This processor supports traces, metrics, and logs. 
The preceding component, which is typically a receiver, is expected to retry sending the same data and may apply a backpressure to the incoming data. When memory usage exceeds the hard limit, the Memory Limiter Processor forces garbage collection to run. Example of the OpenTelemetry Collector custom resource when using the Memory Limiter Processor # ... config: processors: memory_limiter: check_interval: 1s limit_mib: 4000 spike_limit_mib: 800 service: pipelines: traces: processors: [batch] metrics: processors: [batch] # ... Table 3.3. Parameters used by the Memory Limiter Processor Parameter Description Default check_interval Time between memory usage measurements. The optimal value is 1s . For spiky traffic patterns, you can decrease the check_interval or increase the spike_limit_mib . 0s limit_mib The hard limit, which is the maximum amount of memory in MiB allocated on the heap. Typically, the total memory usage of the OpenTelemetry Collector is about 50 MiB greater than this value. 0 spike_limit_mib Spike limit, which is the maximum expected spike of memory usage in MiB. The optimal value is approximately 20% of limit_mib . To calculate the soft limit, subtract the spike_limit_mib from the limit_mib . 20% of limit_mib limit_percentage Same as the limit_mib but expressed as a percentage of the total available memory. The limit_mib setting takes precedence over this setting. 0 spike_limit_percentage Same as the spike_limit_mib but expressed as a percentage of the total available memory. Intended to be used with the limit_percentage setting. 0 3.3.3. Resource Detection Processor The Resource Detection Processor identifies host resource details in alignment with OpenTelemetry's resource semantic standards. Using the detected information, this processor can add or replace the resource values in telemetry data. This processor supports traces and metrics. You can use this processor with multiple detectors such as the Docket metadata detector or the OTEL_RESOURCE_ATTRIBUTES environment variable detector. Important The Resource Detection Processor is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenShift Container Platform permissions required for the Resource Detection Processor kind: ClusterRole metadata: name: otel-collector rules: - apiGroups: ["config.openshift.io"] resources: ["infrastructures", "infrastructures/status"] verbs: ["get", "watch", "list"] # ... OpenTelemetry Collector using the Resource Detection Processor # ... config: processors: resourcedetection: detectors: [openshift] override: true service: pipelines: traces: processors: [resourcedetection] metrics: processors: [resourcedetection] # ... OpenTelemetry Collector using the Resource Detection Processor with an environment variable detector # ... config: processors: resourcedetection/env: detectors: [env] 1 timeout: 2s override: false # ... 1 Specifies which detector to use. In this example, the environment detector is specified. 3.3.4. Attributes Processor The Attributes Processor can modify attributes of a span, log, or metric. 
You can configure this processor to filter and match input data and include or exclude such data for specific actions. Important The Attributes Processor is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . This processor operates on a list of actions, executing them in the order specified in the configuration. The following actions are supported: Insert Inserts a new attribute into the input data when the specified key does not already exist. Update Updates an attribute in the input data if the key already exists. Upsert Combines the insert and update actions: Inserts a new attribute if the key does not exist yet. Updates the attribute if the key already exists. Delete Removes an attribute from the input data. Hash Hashes an existing attribute value as SHA1. Extract Extracts values by using a regular expression rule from the input key to the target keys defined in the rule. If a target key already exists, it is overridden similarly to the Span Processor's to_attributes setting with the existing attribute as the source. Convert Converts an existing attribute to a specified type. OpenTelemetry Collector using the Attributes Processor # ... config: processors: attributes/example: actions: - key: db.table action: delete - key: redacted_span value: true action: upsert - key: copy_key from_attribute: key_original action: update - key: account_id value: 2245 action: insert - key: account_password action: delete - key: account_email action: hash - key: http.status_code action: convert converted_type: int # ... 3.3.5. Resource Processor The Resource Processor applies changes to the resource attributes. This processor supports traces, metrics, and logs. Important The Resource Processor is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector using the Resource Detection Processor # ... config: processors: attributes: - key: cloud.availability_zone value: "zone-1" action: upsert - key: k8s.cluster.name from_attribute: k8s-cluster action: insert - key: redundant-attribute action: delete # ... Attributes represent the actions that are applied to the resource attributes, such as delete the attribute, insert the attribute, or upsert the attribute. 3.3.6. Span Processor The Span Processor modifies the span name based on its attributes or extracts the span attributes from the span name. This processor can also change the span status and include or exclude spans. This processor supports traces. Span renaming requires specifying attributes for the new name by using the from_attributes configuration. 
Important The Span Processor is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector using the Span Processor for renaming a span # ... config: processors: span: name: from_attributes: [<key1>, <key2>, ...] 1 separator: <value> 2 # ... 1 Defines the keys to form the new span name. 2 An optional separator. You can use this processor to extract attributes from the span name. OpenTelemetry Collector using the Span Processor for extracting attributes from a span name # ... config: processors: span/to_attributes: name: to_attributes: rules: - ^\/api\/v1\/document\/(?P<documentId>.*)\/updateUSD 1 # ... 1 This rule defines how the extraction is to be executed. You can define more rules: for example, in this case, if the regular expression matches the name, a documentID attibute is created. In this example, if the input span name is /api/v1/document/12345678/update , this results in the /api/v1/document/{documentId}/update output span name, and a new "documentId"="12345678" attribute is added to the span. You can have the span status modified. OpenTelemetry Collector using the Span Processor for status change # ... config: processors: span/set_status: status: code: Error description: "<error_description>" # ... 3.3.7. Kubernetes Attributes Processor The Kubernetes Attributes Processor enables automatic configuration of spans, metrics, and log resource attributes by using the Kubernetes metadata. This processor supports traces, metrics, and logs. This processor automatically identifies the Kubernetes resources, extracts the metadata from them, and incorporates this extracted metadata as resource attributes into relevant spans, metrics, and logs. It utilizes the Kubernetes API to discover all pods operating within a cluster, maintaining records of their IP addresses, pod UIDs, and other relevant metadata. Minimum OpenShift Container Platform permissions required for the Kubernetes Attributes Processor kind: ClusterRole metadata: name: otel-collector rules: - apiGroups: [''] resources: ['pods', 'namespaces'] verbs: ['get', 'watch', 'list'] # ... OpenTelemetry Collector using the Kubernetes Attributes Processor # ... config: processors: k8sattributes: filter: node_from_env_var: KUBE_NODE_NAME # ... 3.3.8. Filter Processor The Filter Processor leverages the OpenTelemetry Transformation Language to establish criteria for discarding telemetry data. If any of these conditions are satisfied, the telemetry data are discarded. You can combine the conditions by using the logical OR operator. This processor supports traces, metrics, and logs. Important The Filter Processor is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. 
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with an enabled OTLP Exporter # ... config: processors: filter/ottl: error_mode: ignore 1 traces: span: - 'attributes["container.name"] == "app_container_1"' 2 - 'resource.attributes["host.name"] == "localhost"' 3 # ... 1 Defines the error mode. When set to ignore , ignores errors returned by conditions. When set to propagate , returns the error up the pipeline. An error causes the payload to be dropped from the Collector. 2 Filters the spans that have the container.name == app_container_1 attribute. 3 Filters the spans that have the host.name == localhost resource attribute. 3.3.9. Routing Processor The Routing Processor routes logs, metrics, or traces to specific exporters. This processor can read a header from an incoming gRPC or plain HTTP request or read a resource attribute, and then direct the trace information to relevant exporters according to the read value. Important The Routing Processor is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with an enabled OTLP Exporter # ... config: processors: routing: from_attribute: X-Tenant 1 default_exporters: 2 - jaeger table: 3 - value: acme exporters: [jaeger/acme] exporters: jaeger: endpoint: localhost:14250 jaeger/acme: endpoint: localhost:24250 # ... 1 The HTTP header name for the lookup value when performing the route. 2 The default exporter when the attribute value is not present in the table in the section. 3 The table that defines which values are to be routed to which exporters. Optionally, you can create an attribute_source configuration, which defines where to look for the attribute that you specify in the from_attribute field. The supported values are context for searching the context including the HTTP headers, and resource for searching the resource attributes. 3.3.10. Cumulative-to-Delta Processor The Cumulative-to-Delta Processor processor converts monotonic, cumulative-sum, and histogram metrics to monotonic delta metrics. You can filter metrics by using the include: or exclude: fields and specifying the strict or regexp metric name matching. This processor does not convert non-monotonic sums and exponential histograms. Important The Cumulative-to-Delta Processor is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Example of an OpenTelemetry Collector custom resource with an enabled Cumulative-to-Delta Processor # ... 
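# Only metrics that match the include list (strict name matching) and do not match
# the exclude list (regular expression matching) are converted from cumulative to
# delta; when a metric matches both lists, the exclude filter takes precedence.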
config: processors: cumulativetodelta: include: 1 match_type: strict 2 metrics: 3 - <metric_1_name> - <metric_2_name> exclude: 4 match_type: regexp metrics: - "<regular_expression_for_metric_names>" # ... 1 Optional: Configures which metrics to include. When omitted, all metrics, except for those listed in the exclude field, are converted to delta metrics. 2 Defines a value provided in the metrics field as a strict exact match or regexp regular expression. 3 Lists the metric names, which are exact matches or matches for regular expressions, of the metrics to be converted to delta metrics. If a metric matches both the include and exclude filters, the exclude filter takes precedence. 4 Optional: Configures which metrics to exclude. When omitted, no metrics are excluded from conversion to delta metrics. 3.3.11. Group-by-Attributes Processor The Group-by-Attributes Processor groups all spans, log records, and metric datapoints that share the same attributes by reassigning them to a Resource that matches those attributes. Important The Group-by-Attributes Processor is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . At minimum, configuring this processor involves specifying an array of attribute keys to be used to group spans, log records, or metric datapoints together, as in the following example: # ... config: processors: groupbyattrs: keys: 1 - <key1> 2 - <key2> # ... 1 Specifies attribute keys to group by. 2 If a processed span, log record, or metric datapoint contains at least one of the specified attribute keys, it is reassigned to a Resource that shares the same attribute values; and if no such Resource exists, a new one is created. If none of the specified attribute keys is present in the processed span, log record, or metric datapoint, then it remains associated with its current Resource. Multiple instances of the same Resource are consolidated. 3.3.12. Transform Processor The Transform Processor enables modification of telemetry data according to specified rules and in the OpenTelemetry Transformation Language (OTTL) . For each signal type, the processor processes a series of conditions and statements associated with a specific OTTL Context type and then executes them in sequence on incoming telemetry data as specified in the configuration. Each condition and statement can access and modify telemetry data by using various functions, allowing conditions to dictate if a function is to be executed. All statements are written in the OTTL. You can configure multiple context statements for different signals, traces, metrics, and logs. The value of the context type specifies which OTTL Context the processor must use when interpreting the associated statements. Important The Transform Processor is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. 
These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Configuration summary # ... config: processors: transform: error_mode: ignore 1 <trace|metric|log>_statements: 2 - context: <string> 3 conditions: 4 - <string> - <string> statements: 5 - <string> - <string> - <string> - context: <string> statements: - <string> - <string> - <string> # ... 1 Optional: See the following table "Values for the optional error_mode field". 2 Indicates a signal to be transformed. 3 See the following table "Values for the context field". 4 Optional: Conditions for performing a transformation. 5 Statements for performing a transformation. Configuration example # ... config: transform: error_mode: ignore trace_statements: 1 - context: resource statements: - keep_keys(attributes, ["service.name", "service.namespace", "cloud.region", "process.command_line"]) 2 - replace_pattern(attributes["process.command_line"], "password\\=[^\\s]*(\\s?)", "password=***") 3 - limit(attributes, 100, []) - truncate_all(attributes, 4096) - context: span 4 statements: - set(status.code, 1) where attributes["http.path"] == "/health" - set(name, attributes["http.route"]) - replace_match(attributes["http.target"], "/user/*/list/*", "/user/{userId}/list/{listId}") - limit(attributes, 100, []) - truncate_all(attributes, 4096) # ... 1 Transforms a trace signal. 2 Keeps only the listed keys on the resource attributes. 3 Replaces string characters in password fields of the process.command_line attribute with asterisks. 4 Performs transformations at the span level. Table 3.4. Values for the context field Signal Statement Valid Contexts trace_statements resource , scope , span , spanevent metric_statements resource , scope , metric , datapoint log_statements resource , scope , log Table 3.5. Values for the optional error_mode field Value Description ignore Ignores and logs errors returned by statements and then continues to the next statement. silent Ignores and does not log errors returned by statements and then continues to the next statement. propagate Returns errors up the pipeline and drops the payload. Implicit default. 3.3.13. Additional resources OpenTelemetry Protocol (OTLP) documentation 3.4. Exporters Exporters send data to one or more back ends or destinations. An exporter can be push or pull based. By default, no exporters are configured. One or more exporters must be configured. Exporters can support one or more data sources. Exporters might be used with their default settings, but many exporters require configuration to specify at least the destination and security settings. Currently, the following General Availability and Technology Preview exporters are available for the Red Hat build of OpenTelemetry: OTLP Exporter OTLP HTTP Exporter Debug Exporter Load Balancing Exporter Prometheus Exporter Prometheus Remote Write Exporter Kafka Exporter AWS CloudWatch Logs Exporter AWS EMF Exporter AWS X-Ray Exporter File Exporter 3.4.1. OTLP Exporter The OTLP gRPC Exporter exports traces and metrics by using the OpenTelemetry protocol (OTLP). OpenTelemetry Collector custom resource with the enabled OTLP Exporter # ...
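# Note: tempo-ingester:4317 is an example in-cluster endpoint, and ca.pem, cert.pem,
# and key.pem are example certificate paths; the referenced files must be available on
# the Collector's filesystem (for example, mounted from a Secret or ConfigMap).
# Replace these values with your own endpoint and TLS material.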
config: exporters: otlp: endpoint: tempo-ingester:4317 1 tls: 2 ca_file: ca.pem cert_file: cert.pem key_file: key.pem insecure: false 3 insecure_skip_verify: false 4 reload_interval: 1h 5 server_name_override: <name> 6 headers: 7 X-Scope-OrgID: "dev" service: pipelines: traces: exporters: [otlp] metrics: exporters: [otlp] # ... 1 The OTLP gRPC endpoint. If the https:// scheme is used, then client transport security is enabled and overrides the insecure setting in the tls section. 2 The client-side TLS configuration. Defines paths to TLS certificates. 3 Disables client transport security when set to true . The default value is false . 4 Skips verifying the certificate when set to true . The default value is false . 5 Specifies the time interval at which the certificate is reloaded. If the value is not set, the certificate is never reloaded. The reload_interval accepts a string containing valid units of time such as ns , us (or µs ), ms , s , m , h . 6 Overrides the virtual host name of authority such as the authority header field in requests. You can use this for testing. 7 Headers are sent for every request performed during an established connection. 3.4.2. OTLP HTTP Exporter The OTLP HTTP Exporter exports traces and metrics by using the OpenTelemetry protocol (OTLP). OpenTelemetry Collector custom resource with the enabled OTLP HTTP Exporter # ... config: exporters: otlphttp: endpoint: http://tempo-ingester:4318 1 tls: 2 headers: 3 X-Scope-OrgID: "dev" disable_keep_alives: false 4 service: pipelines: traces: exporters: [otlphttp] metrics: exporters: [otlphttp] # ... 1 The OTLP HTTP endpoint. If the https:// scheme is used, then client transport security is enabled and overrides the insecure setting in the tls section. 2 The client-side TLS configuration. Defines paths to TLS certificates. 3 Headers are sent in every HTTP request. 4 If true , disables HTTP keep-alives. The connection to the server is then used for only a single HTTP request. 3.4.3. Debug Exporter The Debug Exporter prints traces and metrics to the standard output. OpenTelemetry Collector custom resource with the enabled Debug Exporter # ... config: exporters: debug: verbosity: detailed 1 sampling_initial: 5 2 sampling_thereafter: 200 3 use_internal_logger: true 4 service: pipelines: traces: exporters: [debug] metrics: exporters: [debug] # ... 1 Verbosity of the debug export: detailed , normal , or basic . When set to detailed , pipeline data are verbosely logged. Defaults to normal . 2 Initial number of messages logged per second. The default value is 2 messages per second. 3 Sampling rate after the initial number of messages, the value in sampling_initial , has been logged. Sampling is disabled by default because the default value is 1 ; values greater than 1 enable sampling. For more information, see the page for the sampler function in the zapcore package on the Go Project's website. 4 When set to true , enables output from the Collector's internal logger for the exporter. 3.4.4. Load Balancing Exporter The Load Balancing Exporter consistently exports spans, metrics, and logs according to the routing_key configuration. Important The Load Balancing Exporter is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production.
These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with the enabled Load Balancing Exporter # ... config: exporters: loadbalancing: routing_key: "service" 1 protocol: otlp: 2 timeout: 1s resolver: 3 static: 4 hostnames: - backend-1:4317 - backend-2:4317 dns: 5 hostname: otelcol-headless.observability.svc.cluster.local k8s: 6 service: lb-svc.kube-public ports: - 15317 - 16317 # ... 1 The routing_key: service exports spans for the same service name to the same Collector instance to provide accurate aggregation. The routing_key: traceID exports spans based on their traceID . The implicit default is traceID based routing. 2 The OTLP is the only supported load-balancing protocol. All options of the OTLP exporter are supported. 3 You can configure only one resolver. 4 The static resolver distributes the load across the listed endpoints. 5 You can use the DNS resolver only with a Kubernetes headless service. 6 The Kubernetes resolver is recommended. 3.4.5. Prometheus Exporter The Prometheus Exporter exports metrics in the Prometheus or OpenMetrics formats. Important The Prometheus Exporter is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with the enabled Prometheus Exporter # ... ports: - name: promexporter 1 port: 8889 protocol: TCP config: exporters: prometheus: endpoint: 0.0.0.0:8889 2 tls: 3 ca_file: ca.pem cert_file: cert.pem key_file: key.pem namespace: prefix 4 const_labels: 5 label1: value1 enable_open_metrics: true 6 resource_to_telemetry_conversion: 7 enabled: true metric_expiration: 180m 8 add_metric_suffixes: false 9 service: pipelines: metrics: exporters: [prometheus] # ... 1 Exposes the Prometheus port from the Collector pod and service. You can enable scraping of metrics by Prometheus by using the port name in ServiceMonitor or PodMonitor custom resource. 2 The network endpoint where the metrics are exposed. 3 The server-side TLS configuration. Defines paths to TLS certificates. 4 If set, exports metrics under the provided value. No default. 5 Key-value pair labels that are applied for every exported metric. No default. 6 If true , metrics are exported using the OpenMetrics format. Exemplars are only exported in the OpenMetrics format and only for histogram and monotonic sum metrics such as counter . Disabled by default. 7 If enabled is true , all the resource attributes are converted to metric labels by default. Disabled by default. 8 Defines how long metrics are exposed without updates. The default is 5m . 9 Adds the metrics types and units suffixes. Must be disabled if the monitor tab in Jaeger console is enabled. The default is true . 3.4.6. Prometheus Remote Write Exporter The Prometheus Remote Write Exporter exports metrics to compatible back ends. 
Important The Prometheus Remote Write Exporter is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with the enabled Prometheus Remote Write Exporter # ... config: exporters: prometheusremotewrite: endpoint: "https://my-prometheus:7900/api/v1/push" 1 tls: 2 ca_file: ca.pem cert_file: cert.pem key_file: key.pem target_info: true 3 export_created_metric: true 4 max_batch_size_bytes: 3000000 5 service: pipelines: metrics: exporters: [prometheusremotewrite] # ... 1 Endpoint for sending the metrics. 2 Server-side TLS configuration. Defines paths to TLS certificates. 3 When set to true , creates a target_info metric for each resource metric. 4 When set to true , exports a _created metric for the Summary, Histogram, and Monotonic Sum metric points. 5 Maximum size of the batch of samples that is sent to the remote write endpoint. Exceeding this value results in batch splitting. The default value is 3000000 , which is approximately 2.861 megabytes. Warning This exporter drops non-cumulative monotonic, histogram, and summary OTLP metrics. You must enable the --web.enable-remote-write-receiver feature flag on the remote Prometheus instance. Without it, pushing the metrics to the instance using this exporter fails. 3.4.7. Kafka Exporter The Kafka Exporter exports logs, metrics, and traces to Kafka. This exporter uses a synchronous producer that blocks and does not batch messages. You must use it with batch and queued retry processors for higher throughput and resiliency. Important The Kafka Exporter is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with the enabled Kafka Exporter # ... config: exporters: kafka: brokers: ["localhost:9092"] 1 protocol_version: 2.0.0 2 topic: otlp_spans 3 auth: plain_text: 4 username: example password: example tls: 5 ca_file: ca.pem cert_file: cert.pem key_file: key.pem insecure: false 6 server_name_override: kafka.example.corp 7 service: pipelines: traces: exporters: [kafka] # ... 1 The list of Kafka brokers. The default is localhost:9092 . 2 The Kafka protocol version. For example, 2.0.0 . This is a required field. 3 The name of the Kafka topic to export to. The following are the defaults: otlp_spans for traces, otlp_metrics for metrics, otlp_logs for logs. 4 The plain text authentication configuration. If omitted, plain text authentication is disabled. 5 The client-side TLS configuration. Defines paths to the TLS certificates. If omitted, TLS authentication is disabled. 6 Disables verifying the server's certificate chain and host name. The default is false .
7 ServerName indicates the name of the server requested by the client to support virtual hosting. 3.4.8. AWS CloudWatch Logs Exporter The AWS CloudWatch Logs Exporter sends logs data to the Amazon CloudWatch Logs service and signs requests by using the AWS SDK for Go and the default credential provider chain. Important The AWS CloudWatch Logs Exporter is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with the enabled AWS CloudWatch Logs Exporter # ... config: exporters: awscloudwatchlogs: log_group_name: "<group_name_of_amazon_cloudwatch_logs>" 1 log_stream_name: "<log_stream_of_amazon_cloudwatch_logs>" 2 region: <aws_region_of_log_stream> 3 endpoint: <service_endpoint_of_amazon_cloudwatch_logs> 4 log_retention: <supported_value_in_days> 5 # ... 1 Required. If the log group does not exist yet, it is automatically created. 2 Required. If the log stream does not exist yet, it is automatically created. 3 Optional. If the AWS region is not already set in the default credential chain, you must specify it. 4 Optional. You can override the default Amazon CloudWatch Logs service endpoint to which the requests are forwarded. For the list of service endpoints by region, see Amazon CloudWatch Logs endpoints and quotas (AWS General Reference). 5 Optional. With this parameter, you can set the log retention policy for new Amazon CloudWatch log groups. If this parameter is omitted or set to 0 , the logs never expire by default. Supported values for retention in days are 1 , 3 , 5 , 7 , 14 , 30 , 60 , 90 , 120 , 150 , 180 , 365 , 400 , 545 , 731 , 1827 , 2192 , 2557 , 2922 , 3288 , or 3653 . Additional resources What is Amazon CloudWatch Logs? (Amazon CloudWatch Logs User Guide) Specifying Credentials (AWS SDK for Go Developer Guide) Amazon CloudWatch Logs endpoints and quotas (AWS General Reference) 3.4.9. AWS EMF Exporter The AWS EMF Exporter converts the following OpenTelemetry metrics datapoints to the AWS CloudWatch Embedded Metric Format (EMF): Int64DataPoints DoubleDataPoints SummaryDataPoints The EMF metrics are then sent directly to the Amazon CloudWatch Logs service by using the PutLogEvents API. One of the benefits of using this exporter is the possibility to view logs and metrics in the Amazon CloudWatch console at https://console.aws.amazon.com/cloudwatch/ . Important The AWS EMF Exporter is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with the enabled AWS EMF Exporter # ... 
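# Note: values in angle brackets are placeholders to replace. No AWS credentials are set
# in this snippet; the exporter relies on credentials resolved through the AWS SDK's
# default credential provider chain that is available to the Collector pod.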
config: exporters: awsemf: log_group_name: "<group_name_of_amazon_cloudwatch_logs>" 1 log_stream_name: "<log_stream_of_amazon_cloudwatch_logs>" 2 resource_to_telemetry_conversion: 3 enabled: true region: <region> 4 endpoint: <endpoint> 5 log_retention: <supported_value_in_days> 6 namespace: <custom_namespace> 7 # ... 1 Customized log group name. 2 Customized log stream name. 3 Optional. Converts resource attributes to telemetry attributes such as metric labels. Disabled by default. 4 The AWS region of the log stream. If a region is not already set in the default credential provider chain, you must specify the region. 5 Optional. You can override the default Amazon CloudWatch Logs service endpoint to which the requests are forwarded. For the list of service endpoints by region, see Amazon CloudWatch Logs endpoints and quotas (AWS General Reference). 6 Optional. With this parameter, you can set the log retention policy for new Amazon CloudWatch log groups. If this parameter is omitted or set to 0 , the logs never expire by default. Supported values for retention in days are 1 , 3 , 5 , 7 , 14 , 30 , 60 , 90 , 120 , 150 , 180 , 365 , 400 , 545 , 731 , 1827 , 2192 , 2557 , 2922 , 3288 , or 3653 . 7 Optional. A custom namespace for the Amazon CloudWatch metrics. Log group name The log_group_name parameter allows you to customize the log group name and supports the default /metrics/default value or the following placeholders: /aws/metrics/{ClusterName} This placeholder is used to search for the ClusterName or aws.ecs.cluster.name resource attribute in the metrics data and replace it with the actual cluster name. {NodeName} This placeholder is used to search for the NodeName or k8s.node.name resource attribute. {TaskId} This placeholder is used to search for the TaskId or aws.ecs.task.id resource attribute. If no resource attribute is found in the resource attribute map, the placeholder is replaced by the undefined value. Log stream name The log_stream_name parameter allows you to customize the log stream name and supports the default otel-stream value or the following placeholders: {ClusterName} This placeholder is used to search for the ClusterName or aws.ecs.cluster.name resource attribute. {ContainerInstanceId} This placeholder is used to search for the ContainerInstanceId or aws.ecs.container.instance.id resource attribute. This resource attribute is valid only for the AWS ECS EC2 launch type. {NodeName} This placeholder is used to search for the NodeName or k8s.node.name resource attribute. {TaskDefinitionFamily} This placeholder is used to search for the TaskDefinitionFamily or aws.ecs.task.family resource attribute. {TaskId} This placeholder is used to search for the TaskId or aws.ecs.task.id resource attribute in the metrics data and replace it with the actual task ID. If no resource attribute is found in the resource attribute map, the placeholder is replaced by the undefined value. Additional resources Specification: Embedded metric format (Amazon CloudWatch User Guide) PutLogEvents (Amazon CloudWatch Logs API Reference) Amazon CloudWatch Logs endpoints and quotas (AWS General Reference) 3.4.10. AWS X-Ray Exporter The AWS X-Ray Exporter converts OpenTelemetry spans to AWS X-Ray Segment Documents and then sends them directly to the AWS X-Ray service. The AWS X-Ray Exporter uses the PutTraceSegments API and signs requests by using the AWS SDK for Go and the default credential provider chain. Important The AWS X-Ray Exporter is a Technology Preview feature only. 
Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with the enabled AWS X-Ray Exporter # ... config: exporters: awsxray: region: "<region>" 1 endpoint: <endpoint> 2 resource_arn: "<aws_resource_arn>" 3 role_arn: "<iam_role>" 4 indexed_attributes: [ "<indexed_attr_0>", "<indexed_attr_1>" ] 5 aws_log_groups: ["<group1>", "<group2>"] 6 request_timeout_seconds: 120 7 # ... 1 The destination region for the X-Ray segments sent to the AWS X-Ray service. For example, eu-west-1 . 2 Optional. You can override the default Amazon CloudWatch Logs service endpoint to which the requests are forwarded. For the list of service endpoints by region, see Amazon CloudWatch Logs endpoints and quotas (AWS General Reference). 3 The Amazon Resource Name (ARN) of the AWS resource that is running the Collector. 4 The AWS Identity and Access Management (IAM) role for uploading the X-Ray segments to a different account. 5 The list of attribute names to be converted to X-Ray annotations. 6 The list of log group names for Amazon CloudWatch Logs. 7 Time duration in seconds before timing out a request. If omitted, the default value is 30 . Additional resources What is AWS X-Ray? (AWS X-Ray Developer Guide) AWS SDK for Go API Reference (AWS Documentation) Specifying Credentials (AWS SDK for Go Developer Guide) IAM roles (AWS Identity and Access Management User Guide) 3.4.11. File Exporter The File Exporter writes telemetry data to files in persistent storage and supports file operations such as rotation, compression, and writing to multiple files. With this exporter, you can also use a resource attribute to control file naming. The only required setting is path , which specifies the destination path for telemetry files in the persistent-volume file system. Important The File Exporter is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with the enabled File Exporter # ... config: | exporters: file: path: /data/metrics.json 1 rotation: 2 max_megabytes: 10 3 max_days: 3 4 max_backups: 3 5 localtime: true 6 format: proto 7 compression: zstd 8 flush_interval: 5 9 # ... 1 The file-system path where the data is to be written. There is no default. 2 File rotation is an optional feature of this exporter. By default, telemetry data is exported to a single file. Add the rotation setting to enable file rotation. 3 The max_megabytes setting is the maximum size a file is allowed to reach until it is rotated. The default is 100 . 4 The max_days setting is for how many days a file is to be retained, counting from the timestamp in the file name. There is no default. 
5 The max_backups setting is for retaining several older files. The default is 100 . 6 The localtime setting specifies whether the timestamp, which is appended to the file name in front of any extension when the file is rotated, uses the local time. The default is Coordinated Universal Time (UTC). 7 The format for encoding the telemetry data before writing it to a file. The default format is json . The proto format is also supported. 8 File compression is optional and not set by default. This setting defines the compression algorithm for the data that is exported to a file. Currently, only the zstd compression algorithm is supported. There is no default. 9 The time interval between flushes. A value without a unit is set in nanoseconds. This setting is ignored when file rotation is enabled through the rotation settings. 3.4.12. Additional resources OpenTelemetry Protocol (OTLP) documentation 3.5. Connectors A connector connects two pipelines. It consumes data as an exporter at the end of one pipeline and emits data as a receiver at the start of another pipeline. It can consume and emit data of the same or a different data type. It can generate and emit data to summarize the consumed data, or it can merely replicate or route data. Currently, the following General Availability and Technology Preview connectors are available for the Red Hat build of OpenTelemetry: Count Connector Routing Connector Forward Connector Spanmetrics Connector 3.5.1. Count Connector The Count Connector counts trace spans, trace span events, metrics, metric data points, and log records in exporter pipelines. Important The Count Connector is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . The following are the default metric names: trace.span.count trace.span.event.count metric.count metric.datapoint.count log.record.count You can also expose custom metric names. OpenTelemetry Collector custom resource (CR) with an enabled Count Connector # ... config: receivers: otlp: protocols: grpc: endpoint: 0.0.0.0:4317 exporters: prometheus: endpoint: 0.0.0.0:8889 connectors: count: {} service: pipelines: 1 traces/in: receivers: [otlp] exporters: [count] 2 metrics/out: receivers: [count] 3 exporters: [prometheus] # ... 1 It is important to correctly configure the Count Connector as an exporter or receiver in the pipeline and to export the generated metrics to the correct exporter. 2 The Count Connector is configured to receive spans as an exporter. 3 The Count Connector is configured to emit generated metrics as a receiver. Tip If the Count Connector is not generating the expected metrics, you can check whether the OpenTelemetry Collector is receiving the expected spans, metrics, and logs, and whether the telemetry data flows through the Count Connector as expected. You can also use the Debug Exporter to inspect the incoming telemetry data. The Count Connector can count telemetry data according to defined conditions and expose those data as metrics when configured by using such fields as spans , spanevents , metrics , datapoints , or logs . See the example.
Example OpenTelemetry Collector CR for the Count Connector to count spans by conditions # ... config: connectors: count: spans: 1 <custom_metric_name>: 2 description: "<custom_metric_description>" conditions: - 'attributes["env"] == "dev"' - 'name == "devevent"' # ... 1 In this example, the exposed metric counts spans with the specified conditions. 2 You can specify a custom metric name such as cluster.prod.event.count . Tip Write conditions correctly and follow the required syntax for attribute matching or telemetry field conditions. Improperly defined conditions are the most likely sources of errors. The Count Connector can count telemetry data according to defined attributes when configured by using such fields as spans , spanevents , metrics , datapoints , or logs . See the example. The attribute keys are injected into the telemetry data. You must define a value for the default_value field for missing attributes. Example OpenTelemetry Collector CR for the Count Connector to count logs by attributes # ... config: connectors: count: logs: 1 <custom_metric_name>: 2 description: "<custom_metric_description>" attributes: - key: env default_value: unknown 3 # ... 1 Specifies attributes for logs. 2 You can specify a custom metric name such as my.log.count . 3 Defines a default value when the attribute is not set. 3.5.2. Routing Connector The Routing Connector routes logs, metrics, and traces to specified pipelines according to resource attributes and their routing conditions, which are written as OpenTelemetry Transformation Language (OTTL) statements. Important The Routing Connector is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with an enabled Routing Connector # ... config: connectors: routing: table: 1 - statement: route() where attributes["X-Tenant"] == "dev" 2 pipelines: [traces/dev] 3 - statement: route() where attributes["X-Tenant"] == "prod" pipelines: [traces/prod] default_pipelines: [traces/dev] 4 error_mode: ignore 5 match_once: false 6 service: pipelines: traces/in: receivers: [otlp] exporters: [routing] traces/dev: receivers: [routing] exporters: [otlp/dev] traces/prod: receivers: [routing] exporters: [otlp/prod] # ... 1 Connector routing table. 2 Routing conditions written as OTTL statements. 3 Destination pipelines for routing the matching telemetry data. 4 Destination pipelines for routing the telemetry data for which no routing condition is satisfied. 5 Error-handling mode: The propagate value is for logging an error and dropping the payload. The ignore value is for ignoring the condition and attempting to match with the next one. The silent value is the same as ignore but without logging the error. The default is propagate . 6 When set to true , the payload is routed only to the first pipeline whose routing condition is met. The default is false . 3.5.3. Forward Connector The Forward Connector merges two pipelines of the same type. Important The Forward Connector is a Technology Preview feature only.
Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with an enabled Forward Connector # ... config: receivers: otlp: protocols: grpc: jaeger: protocols: grpc: processors: batch: exporters: otlp: endpoint: tempo-simplest-distributor:4317 tls: insecure: true connectors: forward: {} service: pipelines: traces/regiona: receivers: [otlp] processors: [] exporters: [forward] traces/regionb: receivers: [jaeger] processors: [] exporters: [forward] traces: receivers: [forward] processors: [batch] exporters: [otlp] # ... 3.5.4. Spanmetrics Connector The Spanmetrics Connector aggregates Request, Error, and Duration (R.E.D) OpenTelemetry metrics from span data. OpenTelemetry Collector custom resource with an enabled Spanmetrics Connector # ... config: connectors: spanmetrics: metrics_flush_interval: 15s 1 service: pipelines: traces: exporters: [spanmetrics] metrics: receivers: [spanmetrics] # ... 1 Defines the flush interval of the generated metrics. Defaults to 15s . 3.5.5. Additional resources OpenTelemetry Protocol (OTLP) documentation 3.6. Extensions Extensions add capabilities to the Collector. For example, authentication can be added to the receivers and exporters automatically. Currently, the following General Availability and Technology Preview extensions are available for the Red Hat build of OpenTelemetry: BearerTokenAuth Extension OAuth2Client Extension File Storage Extension OIDC Auth Extension Jaeger Remote Sampling Extension Performance Profiler Extension Health Check Extension zPages Extension 3.6.1. BearerTokenAuth Extension The BearerTokenAuth Extension is an authenticator for receivers and exporters that are based on the HTTP and the gRPC protocol. You can use the OpenTelemetry Collector custom resource to configure client authentication and server authentication for the BearerTokenAuth Extension on the receiver and exporter side. This extension supports traces, metrics, and logs. OpenTelemetry Collector custom resource with client and server authentication configured for the BearerTokenAuth Extension # ... config: extensions: bearertokenauth: scheme: "Bearer" 1 token: "<token>" 2 filename: "<token_file>" 3 receivers: otlp: protocols: http: auth: authenticator: bearertokenauth 4 exporters: otlp: auth: authenticator: bearertokenauth 5 service: extensions: [bearertokenauth] pipelines: traces: receivers: [otlp] exporters: [otlp] # ... 1 You can configure the BearerTokenAuth Extension to send a custom scheme . The default is Bearer . 2 You can add the BearerTokenAuth Extension token as metadata to identify a message. 3 Path to a file that contains an authorization token that is transmitted with every message. 4 You can assign the authenticator configuration to an OTLP Receiver. 5 You can assign the authenticator configuration to an OTLP Exporter. 3.6.2. OAuth2Client Extension The OAuth2Client Extension is an authenticator for exporters that are based on the HTTP and the gRPC protocol. Client authentication for the OAuth2Client Extension is configured in a separate section in the OpenTelemetry Collector custom resource. 
This extension supports traces, metrics, and logs. Important The OAuth2Client Extension is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with client authentication configured for the OAuth2Client Extension # ... config: extensions: oauth2client: client_id: <client_id> 1 client_secret: <client_secret> 2 endpoint_params: 3 audience: <audience> token_url: https://example.com/oauth2/default/v1/token 4 scopes: ["api.metrics"] 5 # tls settings for the token client tls: 6 insecure: true 7 ca_file: /var/lib/mycert.pem 8 cert_file: <cert_file> 9 key_file: <key_file> 10 timeout: 2s 11 receivers: otlp: protocols: http: {} exporters: otlp: auth: authenticator: oauth2client 12 service: extensions: [oauth2client] pipelines: traces: receivers: [otlp] exporters: [otlp] # ... 1 Client identifier, which is provided by the identity provider. 2 Confidential key used to authenticate the client to the identity provider. 3 Further metadata, in the key-value pair format, which is transferred during authentication. For example, audience specifies the intended audience for the access token, indicating the recipient of the token. 4 The URL of the OAuth2 token endpoint, where the Collector requests access tokens. 5 The scopes define the specific permissions or access levels requested by the client. 6 The Transport Layer Security (TLS) settings for the token client, which is used to establish a secure connection when requesting tokens. 7 When set to true , configures the Collector to use an insecure or non-verified TLS connection to call the configured token endpoint. 8 The path to a Certificate Authority (CA) file that is used to verify the server's certificate during the TLS handshake. 9 The path to the client certificate file that the client must use to authenticate itself to the OAuth2 server if required. 10 The path to the client's private key file that is used with the client certificate if needed for authentication. 11 Sets a timeout for the token client's request. 12 You can assign the authenticator configuration to an OTLP exporter. 3.6.3. File Storage Extension The File Storage Extension supports traces, metrics, and logs. This extension can persist the state to the local file system. This extension persists the sending queue for the OpenTelemetry Protocol (OTLP) exporters that are based on the HTTP and the gRPC protocols. This extension requires the read and write access to a directory. This extension can use a default directory, but the default directory must already exist. Important The File Storage Extension is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. 
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with a configured File Storage Extension that persists an OTLP sending queue # ... config: extensions: file_storage/all_settings: directory: /var/lib/otelcol/mydir 1 timeout: 1s 2 compaction: on_start: true 3 directory: /tmp/ 4 max_transaction_size: 65_536 5 fsync: false 6 exporters: otlp: sending_queue: storage: file_storage/all_settings 7 service: extensions: [file_storage/all_settings] 8 pipelines: traces: receivers: [otlp] exporters: [otlp] # ... 1 Specifies the directory in which the telemetry data is stored. 2 Specifies the timeout time interval for opening the stored files. 3 Starts compaction when the Collector starts. If omitted, the default is false . 4 Specifies the directory in which the compactor stores the telemetry data. 5 Defines the maximum size of the compaction transaction. To ignore the transaction size, set to zero. If omitted, the default is 65536 bytes. 6 When set, forces the database to perform an fsync call after each write operation. This helps to ensure database integrity if there is an interruption to the database process, but at the cost of performance. 7 Buffers the OTLP Exporter data on the local file system. 8 Starts the File Storage Extension by the Collector. 3.6.4. OIDC Auth Extension The OIDC Auth Extension authenticates incoming requests to receivers by using the OpenID Connect (OIDC) protocol. It validates the ID token in the authorization header against the issuer and updates the authentication context of the incoming request. Important The OIDC Auth Extension is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with the configured OIDC Auth Extension # ... config: extensions: oidc: attribute: authorization 1 issuer_url: https://example.com/auth/realms/opentelemetry 2 issuer_ca_path: /var/run/tls/issuer.pem 3 audience: otel-collector 4 username_claim: email 5 receivers: otlp: protocols: grpc: auth: authenticator: oidc exporters: debug: {} service: extensions: [oidc] pipelines: traces: receivers: [otlp] exporters: [debug] # ... 1 The name of the header that contains the ID token. The default name is authorization . 2 The base URL of the OIDC provider. 3 Optional: The path to the issuer's CA certificate. 4 The audience for the token. 5 The name of the claim that contains the username. The default name is sub . 3.6.5. Jaeger Remote Sampling Extension The Jaeger Remote Sampling Extension enables serving sampling strategies after Jaeger's remote sampling API. You can configure this extension to proxy requests to a backing remote sampling server such as a Jaeger collector down the pipeline or to a static JSON file from the local file system. Important The Jaeger Remote Sampling Extension is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. 
Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with a configured Jaeger Remote Sampling Extension # ... config: extensions: jaegerremotesampling: source: reload_interval: 30s 1 remote: endpoint: jaeger-collector:14250 2 file: /etc/otelcol/sampling_strategies.json 3 receivers: otlp: protocols: http: {} exporters: debug: {} service: extensions: [jaegerremotesampling] pipelines: traces: receivers: [otlp] exporters: [debug] # ... 1 The time interval at which the sampling configuration is updated. 2 The endpoint for reaching the Jaeger remote sampling strategy provider. 3 The path to a local file that contains a sampling strategy configuration in the JSON format. Example of a Jaeger Remote Sampling strategy file { "service_strategies": [ { "service": "foo", "type": "probabilistic", "param": 0.8, "operation_strategies": [ { "operation": "op1", "type": "probabilistic", "param": 0.2 }, { "operation": "op2", "type": "probabilistic", "param": 0.4 } ] }, { "service": "bar", "type": "ratelimiting", "param": 5 } ], "default_strategy": { "type": "probabilistic", "param": 0.5, "operation_strategies": [ { "operation": "/health", "type": "probabilistic", "param": 0.0 }, { "operation": "/metrics", "type": "probabilistic", "param": 0.0 } ] } } 3.6.6. Performance Profiler Extension The Performance Profiler Extension enables the Go net/http/pprof endpoint. Developers use this extension to collect performance profiles and investigate issues with the service. Important The Performance Profiler Extension is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with the configured Performance Profiler Extension # ... config: extensions: pprof: endpoint: localhost:1777 1 block_profile_fraction: 0 2 mutex_profile_fraction: 0 3 save_to_file: test.pprof 4 receivers: otlp: protocols: http: {} exporters: debug: {} service: extensions: [pprof] pipelines: traces: receivers: [otlp] exporters: [debug] # ... 1 The endpoint at which this extension listens. Use localhost: to make it available only locally or ":" to make it available on all network interfaces. The default value is localhost:1777 . 2 Sets a fraction of blocking events to be profiled. To disable profiling, set this to 0 or a negative integer. See the documentation for the runtime package. The default value is 0 . 3 Set a fraction of mutex contention events to be profiled. To disable profiling, set this to 0 or a negative integer. See the documentation for the runtime package. The default value is 0 . 4 The name of the file in which the CPU profile is to be saved. Profiling starts when the Collector starts. Profiling is saved to the file when the Collector is terminated. 3.6.7. 
Health Check Extension The Health Check Extension provides an HTTP URL for checking the status of the OpenTelemetry Collector. You can use this extension as a liveness and readiness probe on OpenShift. Important The Health Check Extension is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with the configured Health Check Extension # ... config: extensions: health_check: endpoint: "0.0.0.0:13133" 1 tls: 2 ca_file: "/path/to/ca.crt" cert_file: "/path/to/cert.crt" key_file: "/path/to/key.key" path: "/health/status" 3 check_collector_pipeline: 4 enabled: true 5 interval: "5m" 6 exporter_failure_threshold: 5 7 receivers: otlp: protocols: http: {} exporters: debug: {} service: extensions: [health_check] pipelines: traces: receivers: [otlp] exporters: [debug] # ... 1 The target IP address for publishing the health check status. The default is 0.0.0.0:13133 . 2 The TLS server-side configuration. Defines paths to TLS certificates. If omitted, the TLS is disabled. 3 The path for the health check server. The default is / . 4 Settings for the Collector pipeline health check. 5 Enables the Collector pipeline health check. The default is false . 6 The time interval for checking the number of failures. The default is 5m . 7 The threshold of multiple failures until which a container is still marked as healthy. The default is 5 . 3.6.8. zPages Extension The zPages Extension provides an HTTP endpoint that serves live data for debugging instrumented components in real time. You can use this extension for in-process diagnostics and insights into traces and metrics without relying on an external backend. With this extension, you can monitor and troubleshoot the behavior of the OpenTelemetry Collector and related components by watching the diagnostic information at the provided endpoint. Important The zPages Extension is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenTelemetry Collector custom resource with the configured zPages Extension # ... config: extensions: zpages: endpoint: "localhost:55679" 1 receivers: otlp: protocols: http: {} exporters: debug: {} service: extensions: [zpages] pipelines: traces: receivers: [otlp] exporters: [debug] # ... 1 Specifies the HTTP endpoint for serving the zPages extension. The default is localhost:55679 . Important Accessing the HTTP endpoint requires port-forwarding because the Red Hat build of OpenTelemetry Operator does not expose this route. 
You can enable port-forwarding by running the following oc command: USD oc port-forward pod/USD(oc get pod -l app.kubernetes.io/name=instance-collector -o=jsonpath='{.items[0].metadata.name}') 55679 The Collector provides the following zPages for diagnostics: ServiceZ Shows an overview of the Collector services and links to the following zPages: PipelineZ , ExtensionZ , and FeatureZ . This page also displays information about the build version and runtime. An example of this page's URL is http://localhost:55679/debug/servicez . PipelineZ Shows detailed information about the active pipelines in the Collector. This page displays the pipeline type, whether data are modified, and the associated receivers, processors, and exporters for each pipeline. An example of this page's URL is http://localhost:55679/debug/pipelinez . ExtensionZ Shows the currently active extensions in the Collector. An example of this page's URL is http://localhost:55679/debug/extensionz . FeatureZ Shows the feature gates enabled in the Collector along with their status and description. An example of this page's URL is http://localhost:55679/debug/featurez . TraceZ Shows spans categorized by latency. Available time ranges include 0 µs, 10 µs, 100 µs, 1 ms, 10 ms, 100 ms, 1 s, 10 s, 1 m. This page also allows for quick inspection of error samples. An example of this page's URL is http://localhost:55679/debug/tracez . 3.6.9. Additional resources OpenTelemetry Protocol (OTLP) documentation 3.7. Target Allocator The Target Allocator is an optional component of the OpenTelemetry Operator that shards scrape targets across the deployed fleet of OpenTelemetry Collector instances. The Target Allocator integrates with the Prometheus PodMonitor and ServiceMonitor custom resources (CR). When the Target Allocator is enabled, the OpenTelemetry Operator adds the http_sd_config field to the enabled prometheus receiver that connects to the Target Allocator service. Important The Target Allocator is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Example OpenTelemetryCollector CR with the enabled Target Allocator apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: observability spec: mode: statefulset 1 targetAllocator: enabled: true 2 serviceAccount: 3 prometheusCR: enabled: true 4 scrapeInterval: 10s serviceMonitorSelector: 5 name: app1 podMonitorSelector: 6 name: app2 config: receivers: prometheus: 7 config: scrape_configs: [] processors: exporters: debug: {} service: pipelines: metrics: receivers: [prometheus] processors: [] exporters: [debug] # ... 1 When the Target Allocator is enabled, the deployment mode must be set to statefulset . 2 Enables the Target Allocator. Defaults to false . 3 The service account name of the Target Allocator deployment. The service account needs to have RBAC to get the ServiceMonitor , PodMonitor custom resources, and other objects from the cluster to properly set labels on scraped metrics. The default service name is <collector_name>-targetallocator .
4 Enables integration with the Prometheus PodMonitor and ServiceMonitor custom resources. 5 Label selector for the Prometheus ServiceMonitor custom resources. When left empty, enables all service monitors. 6 Label selector for the Prometheus PodMonitor custom resources. When left empty, enables all pod monitors. 7 Prometheus receiver with the minimal, empty scrape_configs: [] configuration option. The Target Allocator deployment uses the Kubernetes API to get relevant objects from the cluster, so it requires a custom RBAC configuration. RBAC configuration for the Target Allocator service account apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-targetallocator rules: - apiGroups: [""] resources: - services - pods - namespaces verbs: ["get", "list", "watch"] - apiGroups: ["monitoring.coreos.com"] resources: - servicemonitors - podmonitors - scrapeconfigs - probes verbs: ["get", "list", "watch"] - apiGroups: ["discovery.k8s.io"] resources: - endpointslices verbs: ["get", "list", "watch"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-targetallocator roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: otel-targetallocator subjects: - kind: ServiceAccount name: otel-targetallocator 1 namespace: observability 2 # ... 1 The name of the Target Allocator service account. 2 The namespace of the Target Allocator service account. | [
"apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: cluster-collector namespace: tracing-system spec: mode: deployment observability: metrics: enableMetrics: true config: receivers: otlp: protocols: grpc: {} http: {} processors: {} exporters: otlp: endpoint: otel-collector-headless.tracing-system.svc:4317 tls: ca_file: \"/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt\" prometheus: endpoint: 0.0.0.0:8889 resource_to_telemetry_conversion: enabled: true # by default resource attributes are dropped service: 1 pipelines: traces: receivers: [otlp] processors: [] exporters: [otlp] metrics: receivers: [otlp] processors: [] exporters: [prometheus]",
"receivers:",
"processors:",
"exporters:",
"connectors:",
"extensions:",
"service: pipelines:",
"service: pipelines: traces: receivers:",
"service: pipelines: traces: processors:",
"service: pipelines: traces: exporters:",
"service: pipelines: metrics: receivers:",
"service: pipelines: metrics: processors:",
"service: pipelines: metrics: exporters:",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: generate-processors-rbac rules: - apiGroups: - rbac.authorization.k8s.io resources: - clusterrolebindings - clusterroles verbs: - create - delete - get - list - patch - update - watch --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: generate-processors-rbac roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: generate-processors-rbac subjects: - kind: ServiceAccount name: opentelemetry-operator-controller-manager namespace: openshift-opentelemetry-operator",
"config: receivers: otlp: protocols: grpc: endpoint: 0.0.0.0:4317 1 tls: 2 ca_file: ca.pem cert_file: cert.pem key_file: key.pem client_ca_file: client.pem 3 reload_interval: 1h 4 http: endpoint: 0.0.0.0:4318 5 tls: {} 6 service: pipelines: traces: receivers: [otlp] metrics: receivers: [otlp]",
"config: receivers: jaeger: protocols: grpc: endpoint: 0.0.0.0:14250 1 thrift_http: endpoint: 0.0.0.0:14268 2 thrift_compact: endpoint: 0.0.0.0:6831 3 thrift_binary: endpoint: 0.0.0.0:6832 4 tls: {} 5 service: pipelines: traces: receivers: [jaeger]",
"apiVersion: v1 kind: ServiceAccount metadata: name: otel-hostfs-daemonset namespace: <namespace> --- apiVersion: security.openshift.io/v1 kind: SecurityContextConstraints allowHostDirVolumePlugin: true allowHostIPC: false allowHostNetwork: false allowHostPID: true allowHostPorts: false allowPrivilegeEscalation: true allowPrivilegedContainer: true allowedCapabilities: null defaultAddCapabilities: - SYS_ADMIN fsGroup: type: RunAsAny groups: [] metadata: name: otel-hostmetrics readOnlyRootFilesystem: true runAsUser: type: RunAsAny seLinuxContext: type: RunAsAny supplementalGroups: type: RunAsAny users: - system:serviceaccount:<namespace>:otel-hostfs-daemonset volumes: - configMap - emptyDir - hostPath - projected --- apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: <namespace> spec: serviceAccount: otel-hostfs-daemonset mode: daemonset volumeMounts: - mountPath: /hostfs name: host readOnly: true volumes: - hostPath: path: / name: host config: receivers: hostmetrics: collection_interval: 10s 1 initial_delay: 1s 2 root_path: / 3 scrapers: 4 cpu: {} memory: {} disk: {} service: pipelines: metrics: receivers: [hostmetrics]",
"apiVersion: v1 kind: ServiceAccount metadata: name: otel-k8sobj namespace: <namespace> --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-k8sobj namespace: <namespace> rules: - apiGroups: - \"\" resources: - events - pods verbs: - get - list - watch - apiGroups: - \"events.k8s.io\" resources: - events verbs: - watch - list --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-k8sobj subjects: - kind: ServiceAccount name: otel-k8sobj namespace: <namespace> roleRef: kind: ClusterRole name: otel-k8sobj apiGroup: rbac.authorization.k8s.io --- apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel-k8s-obj namespace: <namespace> spec: serviceAccount: otel-k8sobj mode: deployment config: receivers: k8sobjects: auth_type: serviceAccount objects: - name: pods 1 mode: pull 2 interval: 30s 3 label_selector: 4 field_selector: 5 namespaces: [<namespace>,...] 6 - name: events mode: watch exporters: debug: service: pipelines: logs: receivers: [k8sobjects] exporters: [debug]",
"config: receivers: kubeletstats: collection_interval: 20s auth_type: \"serviceAccount\" endpoint: \"https://USD{env:K8S_NODE_NAME}:10250\" insecure_skip_verify: true service: pipelines: metrics: receivers: [kubeletstats] env: - name: K8S_NODE_NAME 1 valueFrom: fieldRef: fieldPath: spec.nodeName",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector rules: - apiGroups: [''] resources: ['nodes/stats'] verbs: ['get', 'watch', 'list'] - apiGroups: [\"\"] resources: [\"nodes/proxy\"] 1 verbs: [\"get\"]",
"config: receivers: prometheus: config: scrape_configs: 1 - job_name: 'my-app' 2 scrape_interval: 5s 3 static_configs: - targets: ['my-app.example.svc.cluster.local:8888'] 4 service: pipelines: metrics: receivers: [prometheus]",
"config: otlpjsonfile: include: - \"/var/log/*.log\" 1 exclude: - \"/var/log/test.log\" 2",
"config: receivers: zipkin: endpoint: 0.0.0.0:9411 1 tls: {} 2 service: pipelines: traces: receivers: [zipkin]",
"config: receivers: kafka: brokers: [\"localhost:9092\"] 1 protocol_version: 2.0.0 2 topic: otlp_spans 3 auth: plain_text: 4 username: example password: example tls: 5 ca_file: ca.pem cert_file: cert.pem key_file: key.pem insecure: false 6 server_name_override: kafka.example.corp 7 service: pipelines: traces: receivers: [kafka]",
"config: receivers: k8s_cluster: distribution: openshift collection_interval: 10s exporters: debug: {} service: pipelines: metrics: receivers: [k8s_cluster] exporters: [debug] logs/entity_events: receivers: [k8s_cluster] exporters: [debug]",
"apiVersion: v1 kind: ServiceAccount metadata: labels: app: otelcontribcol name: otelcontribcol",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otelcontribcol labels: app: otelcontribcol rules: - apiGroups: - quota.openshift.io resources: - clusterresourcequotas verbs: - get - list - watch - apiGroups: - \"\" resources: - events - namespaces - namespaces/status - nodes - nodes/spec - pods - pods/status - replicationcontrollers - replicationcontrollers/status - resourcequotas - services verbs: - get - list - watch - apiGroups: - apps resources: - daemonsets - deployments - replicasets - statefulsets verbs: - get - list - watch - apiGroups: - extensions resources: - daemonsets - deployments - replicasets verbs: - get - list - watch - apiGroups: - batch resources: - jobs - cronjobs verbs: - get - list - watch - apiGroups: - autoscaling resources: - horizontalpodautoscalers verbs: - get - list - watch",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otelcontribcol labels: app: otelcontribcol roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: otelcontribcol subjects: - kind: ServiceAccount name: otelcontribcol namespace: default",
"config: receivers: opencensus: endpoint: 0.0.0.0:9411 1 tls: 2 cors_allowed_origins: 3 - https://*.<example>.com service: pipelines: traces: receivers: [opencensus]",
"config: receivers: filelog: include: [ /simple.log ] 1 operators: 2 - type: regex_parser regex: '^(?P<time>\\d{4}-\\d{2}-\\d{2} \\d{2}:\\d{2}:\\d{2}) (?P<sev>[A-Z]*) (?P<msg>.*)USD' timestamp: parse_from: attributes.time layout: '%Y-%m-%d %H:%M:%S' severity: parse_from: attributes.sev",
"apiVersion: v1 kind: Namespace metadata: name: otel-journald labels: security.openshift.io/scc.podSecurityLabelSync: \"false\" pod-security.kubernetes.io/enforce: \"privileged\" pod-security.kubernetes.io/audit: \"privileged\" pod-security.kubernetes.io/warn: \"privileged\" --- apiVersion: v1 kind: ServiceAccount metadata: name: privileged-sa namespace: otel-journald --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-journald-binding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:openshift:scc:privileged subjects: - kind: ServiceAccount name: privileged-sa namespace: otel-journald --- apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel-journald-logs namespace: otel-journald spec: mode: daemonset serviceAccount: privileged-sa securityContext: allowPrivilegeEscalation: false capabilities: drop: - CHOWN - DAC_OVERRIDE - FOWNER - FSETID - KILL - NET_BIND_SERVICE - SETGID - SETPCAP - SETUID readOnlyRootFilesystem: true seLinuxOptions: type: spc_t seccompProfile: type: RuntimeDefault config: receivers: journald: files: /var/log/journal/*/* priority: info 1 units: 2 - kubelet - crio - init.scope - dnsmasq all: true 3 retry_on_failure: enabled: true 4 initial_interval: 1s 5 max_interval: 30s 6 max_elapsed_time: 5m 7 processors: exporters: debug: {} service: pipelines: logs: receivers: [journald] exporters: [debug] volumeMounts: - name: journal-logs mountPath: /var/log/journal/ readOnly: true volumes: - name: journal-logs hostPath: path: /var/log/journal tolerations: - key: node-role.kubernetes.io/master operator: Exists effect: NoSchedule",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector labels: app: otel-collector rules: - apiGroups: - \"\" resources: - events - namespaces - namespaces/status - nodes - nodes/spec - pods - pods/status - replicationcontrollers - replicationcontrollers/status - resourcequotas - services verbs: - get - list - watch - apiGroups: - apps resources: - daemonsets - deployments - replicasets - statefulsets verbs: - get - list - watch - apiGroups: - extensions resources: - daemonsets - deployments - replicasets verbs: - get - list - watch - apiGroups: - batch resources: - jobs - cronjobs verbs: - get - list - watch - apiGroups: - autoscaling resources: - horizontalpodautoscalers verbs: - get - list - watch",
"serviceAccount: otel-collector 1 config: receivers: k8s_events: namespaces: [project1, project2] 2 service: pipelines: logs: receivers: [k8s_events]",
"config: processors: batch: timeout: 5s send_batch_max_size: 10000 service: pipelines: traces: processors: [batch] metrics: processors: [batch]",
"config: processors: memory_limiter: check_interval: 1s limit_mib: 4000 spike_limit_mib: 800 service: pipelines: traces: processors: [batch] metrics: processors: [batch]",
"kind: ClusterRole metadata: name: otel-collector rules: - apiGroups: [\"config.openshift.io\"] resources: [\"infrastructures\", \"infrastructures/status\"] verbs: [\"get\", \"watch\", \"list\"]",
"config: processors: resourcedetection: detectors: [openshift] override: true service: pipelines: traces: processors: [resourcedetection] metrics: processors: [resourcedetection]",
"config: processors: resourcedetection/env: detectors: [env] 1 timeout: 2s override: false",
"config: processors: attributes/example: actions: - key: db.table action: delete - key: redacted_span value: true action: upsert - key: copy_key from_attribute: key_original action: update - key: account_id value: 2245 action: insert - key: account_password action: delete - key: account_email action: hash - key: http.status_code action: convert converted_type: int",
"config: processors: attributes: - key: cloud.availability_zone value: \"zone-1\" action: upsert - key: k8s.cluster.name from_attribute: k8s-cluster action: insert - key: redundant-attribute action: delete",
"config: processors: span: name: from_attributes: [<key1>, <key2>, ...] 1 separator: <value> 2",
"config: processors: span/to_attributes: name: to_attributes: rules: - ^\\/api\\/v1\\/document\\/(?P<documentId>.*)\\/updateUSD 1",
"config: processors: span/set_status: status: code: Error description: \"<error_description>\"",
"kind: ClusterRole metadata: name: otel-collector rules: - apiGroups: [''] resources: ['pods', 'namespaces'] verbs: ['get', 'watch', 'list']",
"config: processors: k8sattributes: filter: node_from_env_var: KUBE_NODE_NAME",
"config: processors: filter/ottl: error_mode: ignore 1 traces: span: - 'attributes[\"container.name\"] == \"app_container_1\"' 2 - 'resource.attributes[\"host.name\"] == \"localhost\"' 3",
"config: processors: routing: from_attribute: X-Tenant 1 default_exporters: 2 - jaeger table: 3 - value: acme exporters: [jaeger/acme] exporters: jaeger: endpoint: localhost:14250 jaeger/acme: endpoint: localhost:24250",
"config: processors: cumulativetodelta: include: 1 match_type: strict 2 metrics: 3 - <metric_1_name> - <metric_2_name> exclude: 4 match_type: regexp metrics: - \"<regular_expression_for_metric_names>\"",
"config: processors: groupbyattrs: keys: 1 - <key1> 2 - <key2>",
"config: processors: transform: error_mode: ignore 1 <trace|metric|log>_statements: 2 - context: <string> 3 conditions: 4 - <string> - <string> statements: 5 - <string> - <string> - <string> - context: <string> statements: - <string> - <string> - <string>",
"config: transform: error_mode: ignore trace_statements: 1 - context: resource statements: - keep_keys(attributes, [\"service.name\", \"service.namespace\", \"cloud.region\", \"process.command_line\"]) 2 - replace_pattern(attributes[\"process.command_line\"], \"password\\\\=[^\\\\s]*(\\\\s?)\", \"password=***\") 3 - limit(attributes, 100, []) - truncate_all(attributes, 4096) - context: span 4 statements: - set(status.code, 1) where attributes[\"http.path\"] == \"/health\" - set(name, attributes[\"http.route\"]) - replace_match(attributes[\"http.target\"], \"/user/*/list/*\", \"/user/{userId}/list/{listId}\") - limit(attributes, 100, []) - truncate_all(attributes, 4096)",
"config: exporters: otlp: endpoint: tempo-ingester:4317 1 tls: 2 ca_file: ca.pem cert_file: cert.pem key_file: key.pem insecure: false 3 insecure_skip_verify: false # 4 reload_interval: 1h 5 server_name_override: <name> 6 headers: 7 X-Scope-OrgID: \"dev\" service: pipelines: traces: exporters: [otlp] metrics: exporters: [otlp]",
"config: exporters: otlphttp: endpoint: http://tempo-ingester:4318 1 tls: 2 headers: 3 X-Scope-OrgID: \"dev\" disable_keep_alives: false 4 service: pipelines: traces: exporters: [otlphttp] metrics: exporters: [otlphttp]",
"config: exporters: debug: verbosity: detailed 1 sampling_initial: 5 2 sampling_thereafter: 200 3 use_internal_logger: true 4 service: pipelines: traces: exporters: [debug] metrics: exporters: [debug]",
"config: exporters: loadbalancing: routing_key: \"service\" 1 protocol: otlp: 2 timeout: 1s resolver: 3 static: 4 hostnames: - backend-1:4317 - backend-2:4317 dns: 5 hostname: otelcol-headless.observability.svc.cluster.local k8s: 6 service: lb-svc.kube-public ports: - 15317 - 16317",
"ports: - name: promexporter 1 port: 8889 protocol: TCP config: exporters: prometheus: endpoint: 0.0.0.0:8889 2 tls: 3 ca_file: ca.pem cert_file: cert.pem key_file: key.pem namespace: prefix 4 const_labels: 5 label1: value1 enable_open_metrics: true 6 resource_to_telemetry_conversion: 7 enabled: true metric_expiration: 180m 8 add_metric_suffixes: false 9 service: pipelines: metrics: exporters: [prometheus]",
"config: exporters: prometheusremotewrite: endpoint: \"https://my-prometheus:7900/api/v1/push\" 1 tls: 2 ca_file: ca.pem cert_file: cert.pem key_file: key.pem target_info: true 3 export_created_metric: true 4 max_batch_size_bytes: 3000000 5 service: pipelines: metrics: exporters: [prometheusremotewrite]",
"config: exporters: kafka: brokers: [\"localhost:9092\"] 1 protocol_version: 2.0.0 2 topic: otlp_spans 3 auth: plain_text: 4 username: example password: example tls: 5 ca_file: ca.pem cert_file: cert.pem key_file: key.pem insecure: false 6 server_name_override: kafka.example.corp 7 service: pipelines: traces: exporters: [kafka]",
"config: exporters: awscloudwatchlogs: log_group_name: \"<group_name_of_amazon_cloudwatch_logs>\" 1 log_stream_name: \"<log_stream_of_amazon_cloudwatch_logs>\" 2 region: <aws_region_of_log_stream> 3 endpoint: <service_endpoint_of_amazon_cloudwatch_logs> 4 log_retention: <supported_value_in_days> 5",
"config: exporters: awsemf: log_group_name: \"<group_name_of_amazon_cloudwatch_logs>\" 1 log_stream_name: \"<log_stream_of_amazon_cloudwatch_logs>\" 2 resource_to_telemetry_conversion: 3 enabled: true region: <region> 4 endpoint: <endpoint> 5 log_retention: <supported_value_in_days> 6 namespace: <custom_namespace> 7",
"config: exporters: awsxray: region: \"<region>\" 1 endpoint: <endpoint> 2 resource_arn: \"<aws_resource_arn>\" 3 role_arn: \"<iam_role>\" 4 indexed_attributes: [ \"<indexed_attr_0>\", \"<indexed_attr_1>\" ] 5 aws_log_groups: [\"<group1>\", \"<group2>\"] 6 request_timeout_seconds: 120 7",
"config: | exporters: file: path: /data/metrics.json 1 rotation: 2 max_megabytes: 10 3 max_days: 3 4 max_backups: 3 5 localtime: true 6 format: proto 7 compression: zstd 8 flush_interval: 5 9",
"config: receivers: otlp: protocols: grpc: endpoint: 0.0.0.0:4317 exporters: prometheus: endpoint: 0.0.0.0:8889 connectors: count: {} service: pipelines: 1 traces/in: receivers: [otlp] exporters: [count] 2 metrics/out: receivers: [count] 3 exporters: [prometheus]",
"config: connectors: count: spans: 1 <custom_metric_name>: 2 description: \"<custom_metric_description>\" conditions: - 'attributes[\"env\"] == \"dev\"' - 'name == \"devevent\"'",
"config: connectors: count: logs: 1 <custom_metric_name>: 2 description: \"<custom_metric_description>\" attributes: - key: env default_value: unknown 3",
"config: connectors: routing: table: 1 - statement: route() where attributes[\"X-Tenant\"] == \"dev\" 2 pipelines: [traces/dev] 3 - statement: route() where attributes[\"X-Tenant\"] == \"prod\" pipelines: [traces/prod] default_pipelines: [traces/dev] 4 error_mode: ignore 5 match_once: false 6 service: pipelines: traces/in: receivers: [otlp] exporters: [routing] traces/dev: receivers: [routing] exporters: [otlp/dev] traces/prod: receivers: [routing] exporters: [otlp/prod]",
"config: receivers: otlp: protocols: grpc: jaeger: protocols: grpc: processors: batch: exporters: otlp: endpoint: tempo-simplest-distributor:4317 tls: insecure: true connectors: forward: {} service: pipelines: traces/regiona: receivers: [otlp] processors: [] exporters: [forward] traces/regionb: receivers: [jaeger] processors: [] exporters: [forward] traces: receivers: [forward] processors: [batch] exporters: [otlp]",
"config: connectors: spanmetrics: metrics_flush_interval: 15s 1 service: pipelines: traces: exporters: [spanmetrics] metrics: receivers: [spanmetrics]",
"config: extensions: bearertokenauth: scheme: \"Bearer\" 1 token: \"<token>\" 2 filename: \"<token_file>\" 3 receivers: otlp: protocols: http: auth: authenticator: bearertokenauth 4 exporters: otlp: auth: authenticator: bearertokenauth 5 service: extensions: [bearertokenauth] pipelines: traces: receivers: [otlp] exporters: [otlp]",
"config: extensions: oauth2client: client_id: <client_id> 1 client_secret: <client_secret> 2 endpoint_params: 3 audience: <audience> token_url: https://example.com/oauth2/default/v1/token 4 scopes: [\"api.metrics\"] 5 # tls settings for the token client tls: 6 insecure: true 7 ca_file: /var/lib/mycert.pem 8 cert_file: <cert_file> 9 key_file: <key_file> 10 timeout: 2s 11 receivers: otlp: protocols: http: {} exporters: otlp: auth: authenticator: oauth2client 12 service: extensions: [oauth2client] pipelines: traces: receivers: [otlp] exporters: [otlp]",
"config: extensions: file_storage/all_settings: directory: /var/lib/otelcol/mydir 1 timeout: 1s 2 compaction: on_start: true 3 directory: /tmp/ 4 max_transaction_size: 65_536 5 fsync: false 6 exporters: otlp: sending_queue: storage: file_storage/all_settings 7 service: extensions: [file_storage/all_settings] 8 pipelines: traces: receivers: [otlp] exporters: [otlp]",
"config: extensions: oidc: attribute: authorization 1 issuer_url: https://example.com/auth/realms/opentelemetry 2 issuer_ca_path: /var/run/tls/issuer.pem 3 audience: otel-collector 4 username_claim: email 5 receivers: otlp: protocols: grpc: auth: authenticator: oidc exporters: debug: {} service: extensions: [oidc] pipelines: traces: receivers: [otlp] exporters: [debug]",
"config: extensions: jaegerremotesampling: source: reload_interval: 30s 1 remote: endpoint: jaeger-collector:14250 2 file: /etc/otelcol/sampling_strategies.json 3 receivers: otlp: protocols: http: {} exporters: debug: {} service: extensions: [jaegerremotesampling] pipelines: traces: receivers: [otlp] exporters: [debug]",
"{ \"service_strategies\": [ { \"service\": \"foo\", \"type\": \"probabilistic\", \"param\": 0.8, \"operation_strategies\": [ { \"operation\": \"op1\", \"type\": \"probabilistic\", \"param\": 0.2 }, { \"operation\": \"op2\", \"type\": \"probabilistic\", \"param\": 0.4 } ] }, { \"service\": \"bar\", \"type\": \"ratelimiting\", \"param\": 5 } ], \"default_strategy\": { \"type\": \"probabilistic\", \"param\": 0.5, \"operation_strategies\": [ { \"operation\": \"/health\", \"type\": \"probabilistic\", \"param\": 0.0 }, { \"operation\": \"/metrics\", \"type\": \"probabilistic\", \"param\": 0.0 } ] } }",
"config: extensions: pprof: endpoint: localhost:1777 1 block_profile_fraction: 0 2 mutex_profile_fraction: 0 3 save_to_file: test.pprof 4 receivers: otlp: protocols: http: {} exporters: debug: {} service: extensions: [pprof] pipelines: traces: receivers: [otlp] exporters: [debug]",
"config: extensions: health_check: endpoint: \"0.0.0.0:13133\" 1 tls: 2 ca_file: \"/path/to/ca.crt\" cert_file: \"/path/to/cert.crt\" key_file: \"/path/to/key.key\" path: \"/health/status\" 3 check_collector_pipeline: 4 enabled: true 5 interval: \"5m\" 6 exporter_failure_threshold: 5 7 receivers: otlp: protocols: http: {} exporters: debug: {} service: extensions: [health_check] pipelines: traces: receivers: [otlp] exporters: [debug]",
"config: extensions: zpages: endpoint: \"localhost:55679\" 1 receivers: otlp: protocols: http: {} exporters: debug: {} service: extensions: [zpages] pipelines: traces: receivers: [otlp] exporters: [debug]",
"oc port-forward pod/USD(oc get pod -l app.kubernetes.io/name=instance-collector -o=jsonpath='{.items[0].metadata.name}') 55679",
"apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: observability spec: mode: statefulset 1 targetAllocator: enabled: true 2 serviceAccount: 3 prometheusCR: enabled: true 4 scrapeInterval: 10s serviceMonitorSelector: 5 name: app1 podMonitorSelector: 6 name: app2 config: receivers: prometheus: 7 config: scrape_configs: [] processors: exporters: debug: {} service: pipelines: metrics: receivers: [prometheus] processors: [] exporters: [debug]",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-targetallocator rules: - apiGroups: [\"\"] resources: - services - pods - namespaces verbs: [\"get\", \"list\", \"watch\"] - apiGroups: [\"monitoring.coreos.com\"] resources: - servicemonitors - podmonitors - scrapeconfigs - probes verbs: [\"get\", \"list\", \"watch\"] - apiGroups: [\"discovery.k8s.io\"] resources: - endpointslices verbs: [\"get\", \"list\", \"watch\"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-targetallocator roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: otel-targetallocator subjects: - kind: ServiceAccount name: otel-targetallocator 1 namespace: observability 2"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/red_hat_build_of_opentelemetry/configuring-the-collector |
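The serviceMonitorSelector shown in the Target Allocator example above (name: app1) only selects ServiceMonitor objects that carry a matching label. A minimal sketch of such a ServiceMonitor follows; the Service labels, namespace, metrics port name, and scrape interval are illustrative assumptions rather than values taken from the example above.

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: app1
  namespace: observability
  labels:
    name: app1        # must match the Target Allocator's serviceMonitorSelector
spec:
  selector:
    matchLabels:
      app: app1       # assumed label on the Service that exposes the metrics
  endpoints:
  - port: web         # assumed name of the metrics port on that Service
    interval: 10s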
12.5. Samba Configuration | 12.5. Samba Configuration The Samba configuration file smb.conf is located at /etc/samba/smb.conf in this example. It contains the following parameters: This example exports a share with name csmb located at /mnt/gfs2/share . This is different from the GFS2 shared filesystem at /mnt/ctdb/.ctdb.lock that we specified as the CTDB_RECOVERY_LOCK parameter in the CTDB configuration file at /etc/sysconfig/ctdb . In this example, we will create the share directory in /mnt/gfs2 when we mount it for the first time. The clustering = yes entry instructs Samba to use CTDB. The netbios name = csmb-server entry explicitly sets all the nodes to have a common NetBIOS name. The ea support parameter is required if you plan to use extended attributes. The smb.conf configuration file must be identical on all of the cluster nodes. Samba also offers registry-based configuration using the net conf command to automatically keep configuration in sync between cluster members without having to manually copy configuration files among the cluster nodes. For information on the net conf command, see the net (8) man page. | [
"[global] guest ok = yes clustering = yes netbios name = csmb-server [csmb] comment = Clustered Samba public = yes path = /mnt/gfs2/share writeable = yes ea support = yes"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s1-samba-configuration-CA |
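The registry-based configuration mentioned above can be driven with a few net conf subcommands. The following is a minimal sketch, assuming the smb.conf shown in this section is already present on one cluster node; verify the imported configuration before relying on it.

# Import the existing configuration file into the Samba registry
net conf import /etc/samba/smb.conf

# List the registry-based configuration
net conf list

# Show a single share definition from the registry
net conf showshare csmb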
Chapter 19. Configuring System Purpose using the subscription-manager command-line tool | Chapter 19. Configuring System Purpose using the subscription-manager command-line tool System purpose is a feature of the Red Hat Enterprise Linux installation to help RHEL customers get the benefit of our subscription experience and services offered in the Red Hat Hybrid Cloud Console, a dashboard-based, Software-as-a-Service (SaaS) application that enables you to view subscription usage in your Red Hat account. You can configure system purpose attributes either on the activation keys or by using the subscription manager tool. Prerequisites You have installed and registered your Red Hat Enterprise Linux 9 system, but system purpose is not configured. You are logged in as a root user. Note In the entitlement mode, if your system is registered but has subscriptions that do not satisfy the required purpose, you can run the subscription-manager remove --all command to remove attached subscriptions. You can then use the command-line subscription-manager syspurpose {role, usage, service-level} tools to set the required purpose attributes, and lastly run subscription-manager attach --auto to re-entitle the system with considerations for the updated attributes. Whereas, in the SCA enabled account, you can directly update the system purpose details post registration without making an update to the subscriptions in the system. Procedure From a terminal window, run the following command to set the intended role of the system: Replace VALUE with the role that you want to assign: Red Hat Enterprise Linux Server Red Hat Enterprise Linux Workstation Red Hat Enterprise Linux Compute Node For example: Optional: Before setting a value, see the available roles supported by the subscriptions for your organization: Optional: Run the following command to unset the role: Run the following command to set the intended Service Level Agreement (SLA) of the system: Replace VALUE with the SLA that you want to assign: Premium Standard Self-Support For example: Optional: Before setting a value, see the available service-levels supported by the subscriptions for your organization: Optional: Run the following command to unset the SLA: Run the following command to set the intended usage of the system: Replace VALUE with the usage that you want to assign: Production Disaster Recovery Development/Test For example: Optional: Before setting a value, see the available usages supported by the subscriptions for your organization: Optional: Run the following command to unset the usage: Run the following command to show the current system purpose properties: Optional: For more detailed syntax information run the following command to access the subscription-manager man page and browse to the SYSPURPOSE OPTIONS: Verification To verify the system's subscription status in a system registered with an account having entitlement mode enabled: An overall status Current means that all of the installed products are covered by the subscription(s) attached and entitlements to access their content set repositories has been granted. A system purpose status Matched means that all of the system purpose attributes (role, usage, service-level) that were set on the system are satisfied by the subscription(s) attached. When the status information is not ideal, additional information is displayed to help the system administrator decide what corrections to make to the attached subscriptions to cover the installed products and intended system purpose. 
To verify the system's subscription status in a system registered with an account having SCA mode enabled: In SCA mode, subscriptions are no longer required to be attached to individual systems. Hence, both the overall status and system purpose status are displayed as Disabled . However, the technical, business, and operational use cases supplied by system purpose attributes are important to the subscriptions service. Without these attributes, the subscriptions service data is less accurate. Additional resources To learn more about the subscriptions service, see the Getting Started with the Subscriptions Service guide . | [
"subscription-manager syspurpose role --set \"VALUE\"",
"subscription-manager syspurpose role --set \"Red Hat Enterprise Linux Server\"",
"subscription-manager syspurpose role --list",
"subscription-manager syspurpose role --unset",
"subscription-manager syspurpose service-level --set \"VALUE\"",
"subscription-manager syspurpose service-level --set \"Standard\"",
"subscription-manager syspurpose service-level --list",
"subscription-manager syspurpose service-level --unset",
"subscription-manager syspurpose usage --set \"VALUE\"",
"subscription-manager syspurpose usage --set \"Production\"",
"subscription-manager syspurpose usage --list",
"subscription-manager syspurpose usage --unset",
"subscription-manager syspurpose --show",
"man subscription-manager",
"subscription-manager status +-------------------------------------------+ System Status Details +-------------------------------------------+ Overall Status: Current System Purpose Status: Matched",
"subscription-manager status +-------------------------------------------+ System Status Details +-------------------------------------------+ Overall Status: Disabled Content Access Mode is set to Simple Content Access. This host has access to content, regardless of subscription status. System Purpose Status: Disabled"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/interactively_installing_rhel_over_the_network/proc_configuring-system-purpose-using-the-subscription-manager-command-line-tool_rhel-installer |
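Putting the individual syspurpose steps together, a minimal sketch of configuring and verifying all three attributes on an already registered system might look as follows; the chosen role, SLA, and usage values are examples only.

subscription-manager syspurpose role --set "Red Hat Enterprise Linux Server"
subscription-manager syspurpose service-level --set "Premium"
subscription-manager syspurpose usage --set "Production"
subscription-manager syspurpose --show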
Chapter 10. Cloning Subsystems | Chapter 10. Cloning Subsystems When a new subsystem instance is first configured, the Red Hat Certificate System allows subsystems to be cloned, or duplicated, for high availability of the Certificate System. The cloned instances run on different machines to avoid a single point of failure and their databases are synchronized through replication. The master CA and its clones are functionally identical; they differ only in serial number assignments and CRL generation. Therefore, this chapter refers to the master or any of its clones as replicated CAs . 10.1. Backing up Subsystem Keys from a Software Database Ideally, the keys for the master instance are backed up when the instance is first created. If the keys were not backed up at that time, or if the backup file is lost, it is possible to extract the keys from the internal software database for the subsystem instance using the PKCS12Export utility. For example: Then copy the PKCS #12 file to the clone machine to be used in the clone instance configuration. For more details, see Section 2.7.6, "Cloning and Key Stores" . Note Keys cannot be exported from an HSM. However, in a typical deployment, HSMs support networked access, as long as the clone instance is installed using the same HSM as the master. If both instances use the same key store, then the keys are naturally available to the clone. If backing up keys from the HSM is required, contact the HSM manufacturer for assistance. | [
"PKCS12Export -debug -d /var/lib/pki/ instance_name /alias -w p12pwd.txt -p internal.txt -o master.p12"
] | https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/planning_installation_and_deployment_guide/Cloning_a_Subsystem |
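A sketch of the export-and-copy workflow described above, assuming a hypothetical instance name of pki-ca-master and a clone host called clone.example.com; the two text files hold the PKCS #12 password and the internal token password consumed by the -w and -p options, and should be removed once the export is complete.

# Password files for the PKCS #12 output and the internal software token (example values)
echo "Secret.123" > p12pwd.txt
echo "Internal.123" > internal.txt

# Export the subsystem keys from the internal software database
PKCS12Export -debug -d /var/lib/pki/pki-ca-master/alias -w p12pwd.txt -p internal.txt -o master.p12

# Copy the PKCS #12 file to the machine that will host the clone
scp master.p12 root@clone.example.com:/root/

# Remove the password files after the export
rm -f p12pwd.txt internal.txt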
Chapter 30. Kernel | Chapter 30. Kernel A fix of PT_NOTE entries that were previously corrupted during crashdump On some HP servers, a kernel crash could lead to the corruption of PT_NOTE entries because of a kernel code defect. As a consequence, the kernel crash dump utility failed to initialize. The provided patch aligns the allocation of PT_NOTE entries so that they are inside one physical page, and thus written and read data is identical. As a result, kernel crash dump now works as expected in the described situation. (BZ#1073651) Removal of the slub_debug parameter to save memory The slub_debug parameter enables debugging of the SLUB allocator, which makes each object consume extra memory. If the slub_debug kernel parameter was used, not enough memory was allocated to the kdump capture kernel by the automatic setting on 128 GB systems. Consequently, various tasks from the kdump init script terminated with an Out Of Memory (OOM) error message and no crash dump was saved. The provided patch removes the slub_debug parameter, and crash dump is now saved as expected in the aforementioned scenario. (BZ#1180246) Removal of a race condition causing a deadlock when a new CPU was attached Previously, when a new CPU was attached, a race condition between the CPU hotplug and the stop_two_cpus() function could occur causing a deadlock if that migration thread on the new CPU was already marked as active but not enabled . A set of patches has been applied which removes this race condition. As a result, systems with attached new CPUs now run as intended. (BZ#1252281) Update of the kernel with hugepage migration patches from the upstream Previously, several types of bugs including the kernel panic could occur with the hugepage migration. A set of patches from the upstream has been backported which fix these bugs. The updated kernel is now more stable and hugepage migration is automatically disabled in architectures other than AMD64 and Intel 64. (BZ#1287322) Booting kernel with UEFI and the secure boot enabled When the Unified Extensible Firmware Interface (UEFI) was used and the secure boot was enabled, the operating system failed to boot for all kernels since the 3.10.0-327.3.1.el7.x86_64 kernel. With the update to the 3.10.0-327.4.4.el7 kernel and newer versions the system boots up as expected. (BZ#1290441) New microcode added into initramfs images for all installed kernels Previously, when the microcode_ctl package was installed, the postinstall scriptlet rebuilt the initramfs file only for the running kernel and not for any other installed kernels. Consequently, when the build completed, there was an initramfs file for a kernel that was not even installed. The provided fix adds new microcode into initramfs images for all installed kernels. As a result, the superfluous initramfs file is no longer generated. (BZ#1292158) kernel slab errors caused by a race condition in GFS2 no longer occur A race condition previously occurred in the GFS2 file system in which two processes simultaneously tried to free kernel slab memory used for directory lookup. As a consequence, when both processes freed the same memory, a slab memory error occurred in the kernel. The GFS2 file system has been patched to eliminate the race condition, and a process now cannot try to free the memory that has already been freed by another process. Now, each process is forced to take turns when trying to free the memory. As a result, kernel slab errors no longer occur. 
(BZ#1276477) GFS2 now writes data to the correct location within the file Previously, the GFS2 file system miscalculated the starting offset when writing files opened with O_DIRECT (Direct I/O) at a location larger than 4 KB. As a consequence, the data was written to an incorrect location in the file. GFS2 has been patched to calculate the correct file offset for Direct I/O writes. As a result, GFS2 now writes data to the correct location within the file. (BZ#1289630) Dump-capture kernel memory freed when kdump mechanism fails When crashkernel memory was allocated using the ,high and ,low syntax, there were cases where the reservation of the high portion succeeded but with the reservation of the low portion the kdump mechanism failed. This failure could occur especially on large systems for several reasons. The manually specified crashkernel low memory was too large and thus an adequate memblock region was not found. The kexec utility could load the dump-capture kernel successfully, but booting the dump-capture kernel failed, as there was no low memory. The provided patch set reserves low memory for the dump-capture kernel after the high memory portion has been allocated. As a result, the dump-capture kernel memory is freed if the kdump mechanism fails. The user thus has a chance to take measures accordingly. (BZ#1241236) The ksc utility no longer fails to file bugs due to the unavailable kabi-whitelists component In an earlier update, the kabi-whitelists component was changed to the kabi-whitelists sub-component of the kernel component. Consequently, the ksc utility was not able to file bugs, as the kabi-whitelists component value was not active, and the following error message was generated: With this update, the correct sub-component of the kernel component is kabi-whitelisted, and ksc files bugs as expected. (BZ# 1328384 ) ksc now returns an error instead of crashing when running without mandatory arguments Previously, the ksc tool terminated unexpectedly when running without the mandatory arguments. With this update, ksc returns an error message and exits gracefully in the described situation. (BZ# 1272348 ) ext4 file systems can now be resized as expected Due to a bug in the ext4 code, it was previously impossible to resize ext4 file systems that had 1 kilobyte block size and were smaller than 32 megabytes. A patch has been applied to fix this bug, and the described ext4 file systems can now be resized as expected. (BZ#1172496) Unexpected behavior when attaching a qdisc to a virtual device no longer occurs Previously, attaching a qdisc to a virtual device could result in unexpected behavior such as packets being dropped prematurely and reduced bandwidth. With this update, virtual devices have a default tx_queue_len of 1000 and are represented by a device flag. Attaching a qdisc to a virtual device is now supported with default settings and any special handling of the tx_queue_len=0 is no longer needed. (BZ# 1152231 ) The udev daemon is no longer stopped by dracut Previously, a dracut script in the initramfs process stopped the udev daemon by using the udevadm control command, which caused the udev daemon to exit. However, the systemd service policy is to restart the daemon. Under certain circumstances, this prevented the system from booting. With this update, the code to stop the udev daemon has been removed from the dracut script, which avoids the described problem. 
(BZ#1276983) multi-fsb buffer logging has been fixed Previously, directory modifications on XFS filesystems with large directory block sizes could lead to a kernel panic and consequent server crash due to the problems with logging the multi-block buffers. The provided patch fixes the multi-fsb buffer logging, and the servers no longer crash in this scenario. (BZ#1356009) Hard screen lock-up no longer occurs on laptops using integrated graphics in the 6th Generation Intel Core processors On laptops using integrated graphics in the 6th Generation Intel Core processors, hard screen lock-up previously sometimes occurred when: Moving the cursor between the edges of the monitor Moving the cursor between multiple monitors Changing any aspect of the monitor configuration Docking or undocking the machine Plugging or unplugging a monitor The bug has been fixed, and the hard lock-up of the screen no longer occurs in these situations. (BZ#1341633) Multiple problems fixed on systems with persistent memory Several problems sometimes occurred during boot on systems with persistent memory, either real Non-Volatile Dual In-line Memory Modules (NVDIMMs) or emulated NVDIMMs using the memmap=X!Y kernel command-line parameter. The onlining of persistent memory caused the following messages to be displayed for every block (128 MB) of pmem devices: The system became unresponsive. The following BUG message was displayed: This update fixes the described bugs. (BZ#1367257) python errors no longer appear when SUDO_USER and USER variables are not set Previously, when executing in spare environments that do not have SUDO_USER or USER environment variables set, a number of python errors appeared. With this update, undefined SUDO_USER and USER variables are handled correctly, and the errors no longer appear. (BZ# 1312057 ) CIFS anonymous authentication no longer fails Previously, the cifs module set values incorrectly for anonymous authentication. Changes made to the samba file server exposed this bug. As a consequence, anonymous authentication failed. This update changes the behavior of the client and sets the correct auth values for anonymous authentication. As a result, CIFS anonymous authentication now works correctly. (BZ#1361407) | [
"Could not create bug.<Fault 32000:\"The component value 'kabi-whitelists' is not active\">",
"Built 2 zonelists in Zone order, mobility grouping on. Total pages: 8126731 Policy zone: Normal",
"BUG: unable to handle kernel paging request at ffff88007b7eef70"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.3_release_notes/bug_fixes_kernel |
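For the crashkernel ",high" and ",low" reservation syntax mentioned in the kdump fix above, the parameters are passed on the kernel command line. The fragment below is a sketch only; the sizes are placeholders that must be tuned to the system, and the file paths assume a default GRUB 2 setup on a BIOS-based machine.

# /etc/default/grub (fragment): reserve high memory for the capture kernel plus an explicit low portion
GRUB_CMDLINE_LINUX="... crashkernel=512M,high crashkernel=256M,low"

# Regenerate the GRUB configuration and reboot for the change to take effect
grub2-mkconfig -o /boot/grub2/grub.cfg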
5.15. bind | 5.15. bind 5.15.1. RHBA-2012:1107 - bind bug fix update Updated bind packages that fix one bug are now available for Red Hat Enterprise Linux 6. BIND (Berkeley Internet Name Domain) is an implementation of the DNS (Domain Name System) protocols. BIND includes a DNS server (named), which resolves host names to IP addresses; a resolver library (routines for applications to use when interfacing with the DNS server); and tools for verifying that the DNS server is operating properly. Bug Fix BZ# 838956 Due to a race condition in the rbtdb.c source file, the named daemon could terminate unexpectedly with the INSIST error code. This bug has been fixed in the code and the named daemon no longer crashes in the described scenario. All users of bind are advised to upgrade to these updated packages, which fix this bug. 5.15.2. RHSA-2012:1549 - Important: bind security update Updated bind packages that fix one security issue are now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having important security impact. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available from the CVE link(s) associated with each description below. The Berkeley Internet Name Domain (BIND) is an implementation of the Domain Name System (DNS) protocols. BIND includes a DNS server (named); a resolver library (routines for applications to use when interfacing with DNS); and tools for verifying that the DNS server is operating correctly. DNS64 is used to automatically generate DNS records so IPv6 based clients can access IPv4 systems through a NAT64 server. Security Fix CVE-2012-5688 A flaw was found in the DNS64 implementation in BIND. If a remote attacker sent a specially-crafted query to a named server, named could exit unexpectedly with an assertion failure. Note that DNS64 support is not enabled by default. Users of bind are advised to upgrade to these updated packages, which correct this issue. After installing the update, the BIND daemon (named) will be restarted automatically. 5.15.3. RHSA-2012:1268 - Important: bind security update Updated bind packages that fix one security issue are now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having important security impact. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available from the CVE link(s) associated with each description below. The Berkeley Internet Name Domain (BIND) is an implementation of the Domain Name System (DNS) protocols. BIND includes a DNS server (named); a resolver library (routines for applications to use when interfacing with DNS); and tools for verifying that the DNS server is operating correctly. Security Fix CVE-2012-4244 A flaw was found in the way BIND handled resource records with a large RDATA value. A malicious owner of a DNS domain could use this flaw to create specially-crafted DNS resource records, that would cause a recursive resolver or secondary server to exit unexpectedly with an assertion failure. Users of bind are advised to upgrade to these updated packages, which correct this issue. After installing the update, the BIND daemon (named) will be restarted automatically. 5.15.4. RHSA-2012:1123 - Important: bind security update Updated bind packages that fix one security issue are now available for Red Hat Enterprise Linux 5 and 6. 
The Red Hat Security Response Team has rated this update as having important security impact. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available from the CVE link(s) associated with each description below. The Berkeley Internet Name Domain (BIND) is an implementation of the Domain Name System (DNS) protocols. BIND includes a DNS server (named); a resolver library (routines for applications to use when interfacing with DNS); and tools for verifying that the DNS server is operating correctly. Security Fix CVE-2012-3817 An uninitialized data structure use flaw was found in BIND when DNSSEC validation was enabled. A remote attacker able to send a large number of queries to a DNSSEC validating BIND resolver could use this flaw to cause it to exit unexpectedly with an assertion failure. Users of bind are advised to upgrade to these updated packages, which correct this issue. After installing the update, the BIND daemon (named) will be restarted automatically. 5.15.5. RHBA-2012:1341 - bind bug fix update Updated bind packages that fix one bug are now available for Red Hat Enterprise Linux 6. BIND (Berkeley Internet Name Domain) is an implementation of the DNS (Domain Name System) protocols. BIND includes a DNS server (named), which resolves host names to IP addresses; a resolver library containing routines for applications to use when interfacing with the DNS server; and tools for verifying that the DNS server is operating properly. Bug Fix BZ# 858273 Previously, BIND rejected "forward" and "forwarders" statements in static-stub zones. Consequently, it was impossible to forward certain queries to specified servers. With this update, BIND accepts those options for static-stub zones properly, thus fixing this bug. All users of bind are advised to upgrade to these updated packages, which fix this bug. 5.15.6. RHSA-2012:1363 - Important: bind security update Updated bind packages that fix one security issue are now available for Red Hat Enterprise Linux 5 and 6. The Red Hat Security Response Team has rated this update as having important security impact. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available from the CVE link(s) associated with each description below. The Berkeley Internet Name Domain (BIND) is an implementation of the Domain Name System (DNS) protocols. BIND includes a DNS server (named); a resolver library (routines for applications to use when interfacing with DNS); and tools for verifying that the DNS server is operating correctly. Security Fix CVE-2012-5166 A flaw was found in the way BIND handled certain combinations of resource records. A remote attacker could use this flaw to cause a recursive resolver, or an authoritative server in certain configurations, to lockup. Users of bind are advised to upgrade to these updated packages, which correct this issue. After installing the update, the BIND daemon (named) will be restarted automatically. 5.15.7. RHBA-2012:0830 - bind bug fix and enhancement update Updated bind packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. BIND ( Berkeley Internet Name Domain ) is an implementation of the DNS ( Domain Name System ) protocols. BIND includes a DNS server ( named ), which resolves host names to IP addresses; a resolver library (routines for applications to use when interfacing with DNS); and tools for verifying that the DNS server is operating properly. 
Note The bind package has been upgraded to upstream version 9.8.2rc1 which provides a number of bug fixes and enhancements over the version. Refer to /usr/share/doc/bind-9.8.2/README for a detailed list of enhancements. (BZ# 745284 , BZ# 755618 , BZ# 797972 ) Bug Fixes BZ# 734458 When /etc/resolv.conf contained nameservers with disabled recursion, nslookup failed to resolve certain host names. With this update, a patch has been applied and nslookup now works as expected in the scenario described. BZ# 739406 Prior to this update, errors arising on automatic update of DNSSEC trust anchors were handled incorrectly. Consequently, the named daemon could become unresponsive on shutdown. With this update, the error handling has been improved and named exits on shutdown gracefully. BZ# 739410 The multi-threaded named daemon uses the atomic operations feature to speed-up access to shared data. This feature did not work correctly on 32-bit and 64-bit PowerPC architectures. Therefore, named sometimes became unresponsive on these architectures. This update disables the atomic operations feature on 32-bit and 64-bit PowerPC architectures, which ensures that named is now more stable and reliable and no longer hangs. BZ# 746694 Prior to this update, a race condition could occur on validation of DNSSEC-signed NXDOMAIN responses and named could terminate unexpectedly. With this update, the underlying code has been fixed and the race condition no longer occurs. BZ# 759502 The named daemon, configured as the master server, sometimes failed to transfer an uncompressible zone. The following error message was logged: The code which handles zone transfers has been fixed and this error no longer occurs in the scenario described. BZ# 759503 During a DNS zone transfer, named sometimes terminated unexpectedly with an assertion failure. With this update, a patch has been applied to make the code more robust, and named no longer crashes in the scenario described. BZ# 768798 Previously, the rndc.key file was generated during package installation by the rndc-confgen -a command, but this feature was removed in Red Hat Enterprise Linux 6.1 because users reported that installation of bind package sometimes hung due to lack of entropy in /dev/random . The named initscript now generates rndc.key during the service startup if it does not exist. BZ# 786362 After the rndc reload command was executed, named failed to update DNSSEC trust anchors and emitted the following message to the log: This issue was fixed in the 9.8.2rc1 upstream version. BZ# 789886 Due to an error in the bind spec file, the bind-chroot subpackage did not create a /dev/null device. In addition, some empty directories were left behind after uninstalling bind . With this update, the bind-chroot packaging errors have been fixed. BZ# 795414 The dynamic-db plug-ins were loaded too early which caused the configuration in the named.conf file to override the configuration supplied by the plug-in. Consequently, named sometimes failed to start. With this update the named.conf is parsed before plug-in initialization and named now starts as expected. BZ# 812900 Previously, when the /var/named directory was mounted the /etc/init.d/named initscript did not distinguish between situations when chroot configuration was enabled and when chroot was not enabled. Consequently, when stopping the named service the /var/named directory was always unmounted. The initscript has been fixed and now unmounts /var/named only when chroot configuration is enabled. 
As a result, /var/named stays mounted after the named service is stopped when chroot configuration is not enabled. BZ# 816164 Previously, the nslookup utility did not return a non-zero exit code when it failed to get an answer. Consequently, it was impossible to determine if an nslookup run was successful or not from the error code. The nslookup utility has been fixed and now it returns "1" as the exit code when it fails to get an answer. Enhancements BZ# 735438 By default BIND returns resource records in round-robin order. The rrset-order option now supports fixed ordering. When this option is set, the resource records for each domain name are always returned in the order they are loaded from the zone file. BZ# 788870 Previously, named logged too many messages relating to external DNS queries. The severity of these error messages has been decreased from " notice " to " debug " so that the system log is not flooded with mostly unnecessary information. BZ# 790682 The named daemon now uses portreserve to reserve the Remote Name Daemon Control ( RNDC ) port to avoid conflicts with other services. All users of bind are advised to upgrade to these updated packages, which fix these bugs and provide these enhancements. | [
"transfer of './IN': sending zone data: ran out of space",
"managed-keys-zone ./IN: Failed to create fetch for DNSKEY update"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/bind |
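For the fixed rrset-order enhancement noted above (BZ#735438), the option is set in the options block of named.conf. The snippet below is a minimal sketch; the class, type, and name arguments are illustrative and should be adapted to the records that need fixed ordering.

options {
    // Return A records for all names in the order they appear in the zone file
    rrset-order { class IN type A name "*" order fixed; };
};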
Preface | Preface Open Java Development Kit (OpenJDK) is a free and open source implementation of the Java Platform, Standard Edition (Java SE). The Red Hat build of OpenJDK is available in three versions: 8u, 11u, and 17u. Packages for the Red Hat build of OpenJDK are made available on Red Hat Enterprise Linux and Microsoft Windows and shipped as a JDK and JRE in the Red Hat Ecosystem Catalog. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/release_notes_for_red_hat_build_of_openjdk_17.0.8/pr01 |
Chapter 23. OpenShiftAPIServer [operator.openshift.io/v1] | Chapter 23. OpenShiftAPIServer [operator.openshift.io/v1] Description OpenShiftAPIServer provides information to configure an operator to manage openshift-apiserver. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 23.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec is the specification of the desired behavior of the OpenShift API Server. status object status defines the observed status of the OpenShift API Server. 23.1.1. .spec Description spec is the specification of the desired behavior of the OpenShift API Server. Type object Property Type Description logLevel string logLevel is an intent based logging for an overall component. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for their operands. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". managementState string managementState indicates whether and how the operator should manage the component observedConfig `` observedConfig holds a sparse config that controller has observed from the cluster state. It exists in spec because it is an input to the level for the operator operatorLogLevel string operatorLogLevel is an intent based logging for the operator itself. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for themselves. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". unsupportedConfigOverrides `` unsupportedConfigOverrides overrides the final configuration that was computed by the operator. Red Hat does not support the use of this field. Misuse of this field could lead to unexpected behavior or conflict with other configuration options. Seek guidance from the Red Hat support before using this field. Use of this property blocks cluster upgrades, it must be removed before upgrading your cluster. 23.1.2. .status Description status defines the observed status of the OpenShift API Server. Type object Property Type Description conditions array conditions is a list of conditions and their status conditions[] object OperatorCondition is just the standard condition fields. generations array generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. generations[] object GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. 
latestAvailableRevision integer latestAvailableRevision is the deploymentID of the most recent deployment observedGeneration integer observedGeneration is the last generation change you've dealt with readyReplicas integer readyReplicas indicates how many replicas are ready and at the desired state version string version is the level this availability applies to 23.1.3. .status.conditions Description conditions is a list of conditions and their status Type array 23.1.4. .status.conditions[] Description OperatorCondition is just the standard condition fields. Type object Required lastTransitionTime status type Property Type Description lastTransitionTime string lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. message string reason string status string status of the condition, one of True, False, Unknown. type string type of condition in CamelCase or in foo.example.com/CamelCase. 23.1.5. .status.generations Description generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. Type array 23.1.6. .status.generations[] Description GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. Type object Required group name namespace resource Property Type Description group string group is the group of the thing you're tracking hash string hash is an optional field set for resources without generation that are content sensitive like secrets and configmaps lastGeneration integer lastGeneration is the last generation of the workload controller involved name string name is the name of the thing you're tracking namespace string namespace is where the thing you're tracking is resource string resource is the resource type of the thing you're tracking 23.2. API endpoints The following API endpoints are available: /apis/operator.openshift.io/v1/openshiftapiservers DELETE : delete collection of OpenShiftAPIServer GET : list objects of kind OpenShiftAPIServer POST : create an OpenShiftAPIServer /apis/operator.openshift.io/v1/openshiftapiservers/{name} DELETE : delete an OpenShiftAPIServer GET : read the specified OpenShiftAPIServer PATCH : partially update the specified OpenShiftAPIServer PUT : replace the specified OpenShiftAPIServer /apis/operator.openshift.io/v1/openshiftapiservers/{name}/status GET : read status of the specified OpenShiftAPIServer PATCH : partially update status of the specified OpenShiftAPIServer PUT : replace status of the specified OpenShiftAPIServer 23.2.1. /apis/operator.openshift.io/v1/openshiftapiservers HTTP method DELETE Description delete collection of OpenShiftAPIServer Table 23.1. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind OpenShiftAPIServer Table 23.2. HTTP responses HTTP code Reponse body 200 - OK OpenShiftAPIServerList schema 401 - Unauthorized Empty HTTP method POST Description create an OpenShiftAPIServer Table 23.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 23.4. Body parameters Parameter Type Description body OpenShiftAPIServer schema Table 23.5. HTTP responses HTTP code Reponse body 200 - OK OpenShiftAPIServer schema 201 - Created OpenShiftAPIServer schema 202 - Accepted OpenShiftAPIServer schema 401 - Unauthorized Empty 23.2.2. /apis/operator.openshift.io/v1/openshiftapiservers/{name} Table 23.6. Global path parameters Parameter Type Description name string name of the OpenShiftAPIServer HTTP method DELETE Description delete an OpenShiftAPIServer Table 23.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 23.8. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified OpenShiftAPIServer Table 23.9. HTTP responses HTTP code Reponse body 200 - OK OpenShiftAPIServer schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified OpenShiftAPIServer Table 23.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 23.11. 
HTTP responses HTTP code Reponse body 200 - OK OpenShiftAPIServer schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified OpenShiftAPIServer Table 23.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 23.13. Body parameters Parameter Type Description body OpenShiftAPIServer schema Table 23.14. HTTP responses HTTP code Reponse body 200 - OK OpenShiftAPIServer schema 201 - Created OpenShiftAPIServer schema 401 - Unauthorized Empty 23.2.3. /apis/operator.openshift.io/v1/openshiftapiservers/{name}/status Table 23.15. Global path parameters Parameter Type Description name string name of the OpenShiftAPIServer HTTP method GET Description read status of the specified OpenShiftAPIServer Table 23.16. HTTP responses HTTP code Reponse body 200 - OK OpenShiftAPIServer schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified OpenShiftAPIServer Table 23.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 23.18. 
HTTP responses HTTP code Reponse body 200 - OK OpenShiftAPIServer schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified OpenShiftAPIServer Table 23.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 23.20. Body parameters Parameter Type Description body OpenShiftAPIServer schema Table 23.21. HTTP responses HTTP code Reponse body 200 - OK OpenShiftAPIServer schema 201 - Created OpenShiftAPIServer schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/operator_apis/openshiftapiserver-operator-openshift-io-v1 |
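The spec fields listed in this reference are most often adjusted on the single cluster-scoped instance of the resource. As a minimal, hedged illustration of the patch and read endpoints described above, the following commands raise the log level and then read it back; the instance name cluster is the conventional singleton name on a standard installation and should be confirmed on your cluster before use.

# Patch the logLevel field of the OpenShiftAPIServer singleton (merge patch against /openshiftapiservers/{name})
oc patch openshiftapiserver cluster --type=merge -p '{"spec":{"logLevel":"Debug"}}'
# Read the field back to confirm the change
oc get openshiftapiserver cluster -o jsonpath='{.spec.logLevel}{"\n"}'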
Chapter 89. MyBatis | Chapter 89. MyBatis Since Camel 2.7 Both producer and consumer are supported The MyBatis component allows you to query, poll, insert, update and delete data in a relational database using MyBatis . 89.1. Dependencies When using mybatis with Red Hat build of Camel Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-mybatis-starter</artifactId> </dependency> 89.2. URI format Where statementName is the statement name in the MyBatis XML mapping file which maps to the query, insert, update or delete operation you choose to evaluate. You can append query options to the URI in the following format, ?option=value&option=value&... This component will by default load the MyBatis SqlMapConfig file from the root of the classpath with the expected name of SqlMapConfig.xml . If the file is located in another location, you will need to configure the configurationUri option on the MyBatisComponent component. 89.3. Configuring Options Camel components are configured on two separate levels: component level endpoint level 89.3.1. Configuring Component Options The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth. Some components only have a few options, and others may have many. Because components typically have pre configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all. Configuring components can be done with the Component DSL , in a configuration file (application.properties|yaml), or directly with Java code. 89.3.2. Configuring Endpoint Options Endpoints have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both. Configuring endpoints is done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL and DataFormat DSL as a type safe way of configuring endpoints and data formats in Java. Use Property Placeholders to configure options that allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse. The following two sections lists all the options, firstly for the component followed by the endpoint. 89.4. Component Options The MyBatis component supports 5 options, which are listed below. Name Description Default Type configurationUri (common) Location of MyBatis xml configuration file. The default value is: SqlMapConfig.xml loaded from the classpath. SqlMapConfig.xml String bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean sqlSessionFactory (advanced) To use the SqlSessionFactory. SqlSessionFactory 89.5. Endpoint Options The MyBatis endpoint is configured using URI syntax: Following are the path and query parameters. 89.5.1. Path Parameters (1 parameters) Name Description Default Type statement (common) Required The statement name in the MyBatis XML mapping file which maps to the query, insert, update or delete operation you wish to evaluate. String 89.5.2. Query Parameters (30 parameters) Name Description Default Type maxMessagesPerPoll (consumer) This option is intended to split results returned by the database pool into the batches and deliver them in multiple exchanges. This integer defines the maximum messages to deliver in single exchange. By default, no maximum is set. Can be used to set a limit of e.g. 1000 to avoid when starting up the server that there are thousands of files. Set a value of 0 or negative to disable it. 0 int onConsume (consumer) Statement to run after data has been processed in the route. String routeEmptyResultSet (consumer) Whether allow empty resultset to be routed to the hop. false boolean sendEmptyMessageWhenIdle (consumer) If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead. false boolean transacted (consumer) Enables or disables transaction. If enabled then if processing an exchange failed then the consumer breaks out processing any further exchanges to cause a rollback eager. false boolean useIterator (consumer) Process resultset individually or as a list. true boolean bridgeErrorHandler (consumer (advanced)) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. 
Enum values: * InOnly * InOut * InOptionalOut ExchangePattern pollStrategy (consumer (advanced)) A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange have been created and being routed in Camel. PollingConsumerPollStrategy processingStrategy (consumer (advanced)) To use a custom MyBatisProcessingStrategy. MyBatisProcessingStrategy executorType (producer) The executor type to be used while executing statements. simple - executor does nothing special. reuse - executor reuses prepared statements. batch - executor reuses statements and batches updates. Enum values: * SIMPLE * REUSE * BATCH SIMPLE ExecutorType inputHeader (producer) User the header value for input parameters instead of the message body. By default, inputHeader == null and the input parameters are taken from the message body. If outputHeader is set, the value is used and query parameters will be taken from the header instead of the body. String outputHeader (producer) Store the query result in a header instead of the message body. By default, outputHeader == null and the query result is stored in the message body, any existing content in the message body is discarded. If outputHeader is set, the value is used as the name of the header to store the query result and the original message body is preserved. Setting outputHeader will also omit populating the default CamelMyBatisResult header since it would be the same as outputHeader all the time. String statementType (producer) Mandatory to specify for the producer to control which kind of operation to invoke. Enum values: * SelectOne * SelectList * Insert * InsertList * Update * UpdateList * Delete * DeleteList StatementType lazyStartProducer (producer (advanced)) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean backoffErrorThreshold (scheduler) The number of subsequent error polls (failed due some error) that should happen before the backoffMultipler should kick-in. int backoffIdleThreshold (scheduler) The number of subsequent idle polls that should happen before the backoffMultipler should kick-in. int backoffMultiplier (scheduler) To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the actual attempt is happening again. When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured. int delay (scheduler) Milliseconds before the poll. 500 long greedy (scheduler) If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the run polled 1 or more messages. false boolean initialDelay (scheduler) Milliseconds before the first poll starts. 1000 long repeatCount (scheduler) Specifies a maximum limit of number of fires. So if you set it to 1, the scheduler will only fire once. If you set it to 5, it will only fire five times. 
A value of zero or negative means fire forever. 0 long runLoggingLevel (scheduler) The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that. Enum values: * TRACE * DEBUG * INFO * WARN * ERROR * OFF TRACE LoggingLevel scheduledExecutorService (scheduler) Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool. ScheduledExecutorService scheduler (scheduler) To use a cron scheduler from either camel-spring or camel-quartz component. Use value spring or quartz for built in scheduler. none Object schedulerProperties (scheduler) To configure additional properties when using a custom scheduler or any of the Quartz, Spring based scheduler. Map startScheduler (scheduler) Whether the scheduler should be auto started. true boolean timeUnit (scheduler) Time unit for initialDelay and delay options. Enum values: * NANOSECONDS * MICROSECONDS * MILLISECONDS * SECONDS * MINUTES * HOURS * DAYS MILLISECONDS TimeUnit useFixedDelay (scheduler) Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details. true boolean 89.6. Message Headers The MyBatis component supports 2 message headers that are listed below. Name Description Default Type CamelMyBatisResult (producer) Constant: MYBATIS_RESULT The response returned from MtBatis in any of the operations. For instance an INSERT could return the auto-generated key, or number of rows etc. Object CamelMyBatisStatementName (common) Constant: MYBATIS_STATEMENT_NAME The statementName used (for example: insertAccount). String 89.7. Message Body The response from MyBatis will only be set as the body if it is a SELECT statement. For example, for INSERT statements Camel will not replace the body. This allows you to continue routing and keep the original body. The response from MyBatis is always stored in the header with the key CamelMyBatisResult . 89.8. Samples For example if you wish to consume beans from a JMS queue and insert them into a database you could do the following: from("activemq:queue:newAccount") .to("mybatis:insertAccount?statementType=Insert"); You must specify the statementType as you need to instruct Camel which kind of operation to invoke. Where insertAccount is the MyBatis ID in the SQL mapping file: <!-- Insert example, using the Account parameter class --> <insert id="insertAccount" parameterType="Account"> insert into ACCOUNT ( ACC_ID, ACC_FIRST_NAME, ACC_LAST_NAME, ACC_EMAIL ) values ( #{id}, #{firstName}, #{lastName}, #{emailAddress} ) </insert> 89.9. Using StatementType for better control of MyBatis When routing to an MyBatis endpoint you will want more fine grained control so you can control whether the SQL statement to be executed is a SELECT , UPDATE , DELETE or INSERT etc. So for instance if we want to route to an MyBatis endpoint in which the IN body contains parameters to a SELECT statement we can do: In the code above we can invoke the MyBatis statement selectAccountById and the IN body should contain the account id we want to retrieve, such as an Integer type. You can do the same for some of the other operations, such as SelectList : And the same for UPDATE , where you can send an Account object as the IN body to MyBatis: 89.9.1. Using InsertList StatementType MyBatis allows you to insert multiple rows using its for-each batch driver. To use this, you need to use the <foreach> in the mapper XML file. 
For example as shown below: Then you can insert multiple rows, by sending a Camel message to the mybatis endpoint which uses the InsertList statement type, as shown below: 89.9.2. Using UpdateList StatementType MyBatis allows you to update multiple rows using its for-each batch driver. To use this, you need to use the <foreach> in the mapper XML file. For example as shown below: <update id="batchUpdateAccount" parameterType="java.util.Map"> update ACCOUNT set ACC_EMAIL = #{emailAddress} where ACC_ID in <foreach item="Account" collection="list" open="(" close=")" separator=","> #{Account.id} </foreach> </update> Then you can update multiple rows, by sending a Camel message to the mybatis endpoint which uses the UpdateList statement type, as shown below: from("direct:start") .to("mybatis:batchUpdateAccount?statementType=UpdateList") .to("mock:result"); 89.9.3. Using DeleteList StatementType MyBatis allows you to delete multiple rows using its for-each batch driver. To use this, you need to use the <foreach> in the mapper XML file. For example as shown below: <delete id="batchDeleteAccountById" parameterType="java.util.List"> delete from ACCOUNT where ACC_ID in <foreach item="AccountID" collection="list" open="(" close=")" separator=","> #{AccountID} </foreach> </delete> Then you can delete multiple rows, by sending a Camel message to the mybatis endpoint which uses the DeleteList statement type, as shown below: from("direct:start") .to("mybatis:batchDeleteAccount?statementType=DeleteList") .to("mock:result"); 89.9.4. Notice on InsertList, UpdateList and DeleteList StatementTypes Parameter of any type (List, Map, etc.) can be passed to mybatis and an end user is responsible for handling it as required with the help of mybatis dynamic queries capabilities. 89.9.5. Scheduled polling example This component supports scheduled polling and can therefore be used as a Polling Consumer. For example to poll the database every minute: from("mybatis:selectAllAccounts?delay=60000") .to("activemq:queue:allAccounts"); See "ScheduledPollConsumer Options" on Polling Consumer for more options. Alternatively you can use another mechanism for triggering the scheduled polls, such as the Timer or Quartz components. In the sample below we poll the database, every 30 seconds using the Timer component and send the data to the JMS queue: from("timer://pollTheDatabase?delay=30000") .to("mybatis:selectAllAccounts") .to("activemq:queue:allAccounts"); And the MyBatis SQL mapping file used: <!-- Select with no parameters using the result map for Account class. --> <select id="selectAllAccounts" resultMap="AccountResult"> select * from ACCOUNT </select> 89.9.6. Using onConsume This component supports executing statements after data has been consumed and processed by Camel. This allows you to do post updates in the database. Notice all statements must be UPDATE statements. Camel supports executing multiple statements whose names should be separated by commas. The route below illustrates that we execute the consumeAccount statement after the data is processed. This allows us to change the status of the row in the database to processed, so we avoid consuming it twice or more. And the statements in the sqlmap file: 89.9.7. Participating in transactions Setting up a transaction manager under camel-mybatis can be a little bit fiddly, as it involves externalizing the database configuration outside the standard MyBatis SqlMapConfig.xml file. The first part requires the setup of a DataSource .
This is typically a pool (either DBCP, or c3p0), which needs to be wrapped in a Spring proxy. This proxy enables non-Spring use of the DataSource to participate in Spring transactions (the MyBatis SqlSessionFactory does just this). <bean id="dataSource" class="org.springframework.jdbc.datasource.TransactionAwareDataSourceProxy"> <constructor-arg> <bean class="com.mchange.v2.c3p0.ComboPooledDataSource"> <property name="driverClass" value="org.postgresql.Driver"/> <property name="jdbcUrl" value="jdbc:postgresql://localhost:5432/myDatabase"/> <property name="user" value="myUser"/> <property name="password" value="myPassword"/> </bean> </constructor-arg> </bean> This has the additional benefit of enabling the database configuration to be externalized using property placeholders. A transaction manager is then configured to manage the outermost DataSource : <bean id="txManager" class="org.springframework.jdbc.datasource.DataSourceTransactionManager"> <property name="dataSource" ref="dataSource"/> </bean> A mybatis-spring SqlSessionFactoryBean then wraps that same DataSource : <bean id="sqlSessionFactory" class="org.mybatis.spring.SqlSessionFactoryBean"> <property name="dataSource" ref="dataSource"/> <!-- standard mybatis config file --> <property name="configLocation" value="/META-INF/SqlMapConfig.xml"/> <!-- externalised mappers --> <property name="mapperLocations" value="classpath*:META-INF/mappers/**/*.xml"/> </bean> The camel-mybatis component is then configured with that factory: <bean id="mybatis" class="org.apache.camel.component.mybatis.MyBatisComponent"> <property name="sqlSessionFactory" ref="sqlSessionFactory"/> </bean> Finally, a transaction policy is defined over the top of the transaction manager, which can then be used as usual: <bean id="PROPAGATION_REQUIRED" class="org.apache.camel.spring.spi.SpringTransactionPolicy"> <property name="transactionManager" ref="txManager"/> <property name="propagationBehaviorName" value="PROPAGATION_REQUIRED"/> </bean> <camelContext id="my-model-context" xmlns="http://camel.apache.org/schema/spring"> <route id="insertModel"> <from uri="direct:insert"/> <transacted ref="PROPAGATION_REQUIRED"/> <to uri="mybatis:myModel.insert?statementType=Insert"/> </route> </camelContext> 89.10. MyBatis Spring Boot Starter integration Spring Boot users can use mybatis-spring-boot-starter artifact provided by the mybatis team <dependency> <groupId>org.mybatis.spring.boot</groupId> <artifactId>mybatis-spring-boot-starter</artifactId> <version>2.3.0</version> </dependency> in particular AutoConfigured beans from mybatis-spring-boot-starter can be used as follow: #application.properties camel.component.mybatis.sql-session-factory=#sqlSessionFactory 89.11. Spring Boot Auto-Configuration The component supports 11 options, which are listed below. Name Description Default Type camel.component.mybatis-bean.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.mybatis-bean.configuration-uri Location of MyBatis xml configuration file. The default value is: SqlMapConfig.xml loaded from the classpath. SqlMapConfig.xml String camel.component.mybatis-bean.enabled Whether to enable auto configuration of the mybatis-bean component. 
This is enabled by default. Boolean camel.component.mybatis-bean.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.mybatis-bean.sql-session-factory To use the SqlSessionFactory. The option is a org.apache.ibatis.session.SqlSessionFactory type. SqlSessionFactory camel.component.mybatis.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.mybatis.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.mybatis.configuration-uri Location of MyBatis xml configuration file. The default value is: SqlMapConfig.xml loaded from the classpath. SqlMapConfig.xml String camel.component.mybatis.enabled Whether to enable auto configuration of the mybatis component. This is enabled by default. Boolean camel.component.mybatis.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.mybatis.sql-session-factory To use the SqlSessionFactory. The option is a org.apache.ibatis.session.SqlSessionFactory type. SqlSessionFactory | [
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-mybatis-starter</artifactId> </dependency>",
"mybatis:statementName[?options]",
"mybatis:statement",
"from(\"activemq:queue:newAccount\") .to(\"mybatis:insertAccount?statementType=Insert\");",
"<!-- Insert example, using the Account parameter class --> <insert id=\"insertAccount\" parameterType=\"Account\"> insert into ACCOUNT ( ACC_ID, ACC_FIRST_NAME, ACC_LAST_NAME, ACC_EMAIL ) values ( #{id}, #{firstName}, #{lastName}, #{emailAddress} ) </insert>",
"<update id=\"batchUpdateAccount\" parameterType=\"java.util.Map\"> update ACCOUNT set ACC_EMAIL = #{emailAddress} where ACC_ID in <foreach item=\"Account\" collection=\"list\" open=\"(\" close=\")\" separator=\",\"> #{Account.id} </foreach> </update>",
"from(\"direct:start\") .to(\"mybatis:batchUpdateAccount?statementType=UpdateList\") .to(\"mock:result\");",
"<delete id=\"batchDeleteAccountById\" parameterType=\"java.util.List\"> delete from ACCOUNT where ACC_ID in <foreach item=\"AccountID\" collection=\"list\" open=\"(\" close=\")\" separator=\",\"> #{AccountID} </foreach> </delete>",
"from(\"direct:start\") .to(\"mybatis:batchDeleteAccount?statementType=DeleteList\") .to(\"mock:result\");",
"from(\"mybatis:selectAllAccounts?delay=60000\") .to(\"activemq:queue:allAccounts\");",
"from(\"timer://pollTheDatabase?delay=30000\") .to(\"mybatis:selectAllAccounts\") .to(\"activemq:queue:allAccounts\");",
"<!-- Select with no parameters using the result map for Account class. --> <select id=\"selectAllAccounts\" resultMap=\"AccountResult\"> select * from ACCOUNT </select>",
"<bean id=\"dataSource\" class=\"org.springframework.jdbc.datasource.TransactionAwareDataSourceProxy\"> <constructor-arg> <bean class=\"com.mchange.v2.c3p0.ComboPooledDataSource\"> <property name=\"driverClass\" value=\"org.postgresql.Driver\"/> <property name=\"jdbcUrl\" value=\"jdbc:postgresql://localhost:5432/myDatabase\"/> <property name=\"user\" value=\"myUser\"/> <property name=\"password\" value=\"myPassword\"/> </bean> </constructor-arg> </bean>",
"<bean id=\"txManager\" class=\"org.springframework.jdbc.datasource.DataSourceTransactionManager\"> <property name=\"dataSource\" ref=\"dataSource\"/> </bean>",
"<bean id=\"sqlSessionFactory\" class=\"org.mybatis.spring.SqlSessionFactoryBean\"> <property name=\"dataSource\" ref=\"dataSource\"/> <!-- standard mybatis config file --> <property name=\"configLocation\" value=\"/META-INF/SqlMapConfig.xml\"/> <!-- externalised mappers --> <property name=\"mapperLocations\" value=\"classpath*:META-INF/mappers/**/*.xml\"/> </bean>",
"<bean id=\"mybatis\" class=\"org.apache.camel.component.mybatis.MyBatisComponent\"> <property name=\"sqlSessionFactory\" ref=\"sqlSessionFactory\"/> </bean>",
"<bean id=\"PROPAGATION_REQUIRED\" class=\"org.apache.camel.spring.spi.SpringTransactionPolicy\"> <property name=\"transactionManager\" ref=\"txManager\"/> <property name=\"propagationBehaviorName\" value=\"PROPAGATION_REQUIRED\"/> </bean> <camelContext id=\"my-model-context\" xmlns=\"http://camel.apache.org/schema/spring\"> <route id=\"insertModel\"> <from uri=\"direct:insert\"/> <transacted ref=\"PROPAGATION_REQUIRED\"/> <to uri=\"mybatis:myModel.insert?statementType=Insert\"/> </route> </camelContext>",
"<dependency> <groupId>org.mybatis.spring.boot</groupId> <artifactId>mybatis-spring-boot-starter</artifactId> <version>2.3.0</version> </dependency>",
"#application.properties camel.component.mybatis.sql-session-factory=#sqlSessionFactory"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.0/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-mybatis-component |
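Section 89.9.1 above refers to an InsertList mapper entry and route "as shown below", but those two snippets are not reproduced in this text. The following sketch fills the gap by mirroring the batchUpdateAccount and batchDeleteAccountById examples from the same chapter; the statement id batchInsertAccount and the direct:start endpoint are illustrative names, not part of the original sample.

<!-- Assumed mapper entry: multi-row insert built with the for-each batch driver -->
<insert id="batchInsertAccount" parameterType="java.util.List">
    insert into ACCOUNT ( ACC_ID, ACC_FIRST_NAME, ACC_LAST_NAME, ACC_EMAIL )
    values
    <foreach item="Account" collection="list" separator=",">
        ( #{Account.id}, #{Account.firstName}, #{Account.lastName}, #{Account.emailAddress} )
    </foreach>
</insert>

// Route that sends a java.util.List of Account objects as the message body
from("direct:start")
    .to("mybatis:batchInsertAccount?statementType=InsertList")
    .to("mock:result");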
Appendix B. Testing the Active-Passive Configuration | Appendix B. Testing the Active-Passive Configuration You must test your disaster recovery solution after configuring it. This section provides multiple options to test the active-passive disaster recovery configuration. Test failover while the primary site remains active and without interfering with virtual machines on the primary site's storage domains. See Section B.1, "Discreet Failover Test" . Test failover and failback using specific storage domains attached to the primary site, therefore allowing the primary site to remain active. See Section B.2, "Discreet Failover and Failback Test" . Test failover and failback for an impending disaster where you have a grace period to fail over to the secondary site, or an unplanned shutdown of the primary site. See Section B.3, "Full Failover and Failback test" . Important Ensure that you completed all the steps to configure your active-passive configuration before running any of these tests. B.1. Discreet Failover Test This test simulates a failover while the primary site and all its storage domains remain active, allowing users to continue working in the primary site. For this to happen you will need to disable replication between the primary storage domains and the replicated (secondary) storage domains. During this test the primary site will be unaware of the failover activities on the secondary site. This test will not allow you to test the failback functionality. Important Ensure that no production tasks are performed after the failover. For example, ensure that email systems are blocked from sending emails to real users, or redirect emails elsewhere. If systems are used to directly manage other systems, prohibit access to the systems or ensure that they access parallel systems in the secondary site. Performing the discreet failover test: Disable storage replication between the primary and replicated storage domains, and ensure that all replicated storage domains are in read/write mode. Run the command to fail over to the secondary site: For more information, see Section 3.3, "Execute a Failover" . Verify that all relevant storage domains, virtual machines, and templates are registered and running successfully. Restoring the environment to its active-passive state: Detach the storage domains from the secondary site. Enable storage replication between the primary and secondary storage domains. B.2. Discreet Failover and Failback Test For this test you must define testable storage domains that will be used specifically for testing the failover and failback. These storage domains must be replicated so that the replicated storage can be attached to the secondary site. This allows you to test the failover while users continue to work in the primary site. Note Red Hat recommends defining the testable storage domains on a separate storage server that can be offline without affecting the primary storage domains used for production in the primary site. For more information about failing over the environment, cleaning the environment, and performing the failback, see Section 3.3, "Execute a Failover" , Section 3.4, "Clean the Primary Site" , and Section 3.5, "Execute a Failback" . Performing the discreet failover test: Stop the test storage domains in the primary site. You can do this by, for example, shutting down the server host or blocking it with a firewall rule.
Disable the storage replication between the testable storage domains and ensure that all replicated storage domains used for the test are in read/write mode. Place the test primary storage domains into read-only mode. Run the command to fail over to the secondary site: Verify that all relevant storage domains, virtual machines, and templates are registered and running successfully. Performing the discreet failback test Run the command to clean the primary site and remove all inactive storage domains and related virtual machines and templates: Run the failback command: Enable replication from the primary storage domains to the secondary storage domains. Verify that all relevant storage domains, virtual machines, and templates are registered and running successfully. B.3. Full Failover and Failback test This test performs a full failover and failback between the primary and secondary site. You can simulate the disaster by shutting down the primary site's hosts or by adding iptables rules to block writing to the storage domains. For more information about failing over the environment, cleaning the environment, and performing the failback, see Section 3.3, "Execute a Failover" , Section 3.4, "Clean the Primary Site" , and Section 3.5, "Execute a Failback" . Performing the failover test: Disable storage replication between the primary and replicated storage domains and ensure that all replicated storage domains are in read/write mode. Run the command to fail over to the secondary site: Verify that all relevant storage domains, virtual machines, and templates are registered and running successfully. Performing the failback test Synchronize replication between the secondary site's storage domains and the primary site's storage domains. The secondary site's storage domains must be in read/write mode and the primary site's storage domains must be in read-only mode. Run the command to clean the primary site and remove all inactive storage domains and related virtual machines and templates: Run the failback command: Enable replication from the primary storage domains to the secondary storage domains. Verify that all relevant storage domains, virtual machines, and templates are registered and running successfully. | [
"ansible-playbook playbook --tags \"fail_over\"",
"ansible-playbook playbook --tags \"fail_over\"",
"ansible-playbook playbook --tags \"clean_engine\"",
"ansible-playbook playbook --tags \"fail_back\"",
"ansible-playbook playbook --tags \"fail_over\"",
"ansible-playbook playbook --tags \"clean_engine\"",
"ansible-playbook playbook --tags \"fail_back\""
] | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/disaster_recovery_guide/testing_active_passive |
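Section B.3 above suggests simulating the disaster either by shutting down the primary site's hosts or by adding iptables rules that block writing to the storage domains, but it does not show example rules. A minimal sketch for a host in the primary site follows; the storage server address 10.0.0.50 is a placeholder, and the ports assume iSCSI (3260) or NFS (2049) storage, so adjust both to match your environment.

# Block outbound iSCSI and NFS traffic to the storage server to simulate loss of storage
iptables -A OUTPUT -d 10.0.0.50 -p tcp --dport 3260 -j DROP
iptables -A OUTPUT -d 10.0.0.50 -p tcp --dport 2049 -j DROP
# Remove the rules after the test to restore connectivity
iptables -D OUTPUT -d 10.0.0.50 -p tcp --dport 3260 -j DROP
iptables -D OUTPUT -d 10.0.0.50 -p tcp --dport 2049 -j DROP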
13.2.19. Domain Options: Using IP Addresses in Certificate Subject Names (LDAP Only) | 13.2.19. Domain Options: Using IP Addresses in Certificate Subject Names (LDAP Only) Using an IP address in the ldap_uri option instead of the server name may cause the TLS/SSL connection to fail. TLS/SSL certificates contain the server name, not the IP address. However, the subject alternative name field in the certificate can be used to include the IP address of the server, which allows a successful secure connection using an IP address. Procedure 13.8. Using IP Addresses in Certificate Subject Names Convert an existing certificate into a certificate request. The signing key ( -signkey ) is the key of the issuer of whatever CA originally issued the certificate. If this is done by an external CA, it requires a separate PEM file; if the certificate is self-signed, then this is the certificate itself. For example: With a self-signed certificate: Edit the /etc/pki/tls/openssl.cnf configuration file to include the server's IP address under the [ v3_ca ] section: Use the generated certificate request to generate a new self-signed certificate with the specified IP address: The -extensions option sets which extensions to use with the certificate. For this, it should be v3_ca to load the appropriate section. Copy the private key block from the old_cert.pem file into the new_cert.pem file to keep all relevant information in one file. When creating a certificate through the certutil utility provided by the nss-tools package, note that certutil supports DNS subject alternative names for certificate creation only. | [
"openssl x509 -x509toreq -in old_cert.pem -out req.pem -signkey key.pem",
"openssl x509 -x509toreq -in old_cert.pem -out req.pem -signkey old_cert.pem",
"subjectAltName = IP:10.0.0.10",
"openssl x509 -req -in req.pem -out new_cert.pem -extfile ./openssl.cnf -extensions v3_ca -signkey old_cert.pem"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sssd-ldap-domain-ip |
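Before pointing ldap_uri at the server by IP address, it is worth confirming that the regenerated certificate really carries the address in its subject alternative name extension. A quick check with openssl is sketched below; the grep filter is only a convenience and the output shown is indicative of the example address used above.

openssl x509 -in new_cert.pem -noout -text | grep -A1 "Subject Alternative Name"
# Expected output:
#     X509v3 Subject Alternative Name:
#         IP Address:10.0.0.10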
Chapter 1. About Observability | Chapter 1. About Observability Red Hat OpenShift Observability provides real-time visibility, monitoring, and analysis of various system metrics, logs, traces, and events to help users quickly diagnose and troubleshoot issues before they impact systems or applications. To help ensure the reliability, performance, and security of your applications and infrastructure, OpenShift Container Platform offers the following Observability components: Monitoring Logging Distributed tracing Red Hat build of OpenTelemetry Network Observability Red Hat OpenShift Observability connects open-source observability tools and technologies to create a unified Observability solution. The components of Red Hat OpenShift Observability work together to help you collect, store, deliver, analyze, and visualize data. Note With the exception of monitoring, Red Hat OpenShift Observability components have distinct release cycles separate from the core OpenShift Container Platform release cycles. See the Red Hat OpenShift Operator Life Cycles page for their release compatibility. 1.1. Monitoring Monitor the in-cluster health and performance of your applications running on OpenShift Container Platform with metrics and customized alerts for CPU and memory usage, network connectivity, and other resource usage. Monitoring stack components are deployed and managed by the Cluster Monitoring Operator. Monitoring stack components are deployed by default in every OpenShift Container Platform installation and are managed by the Cluster Monitoring Operator (CMO). These components include Prometheus, Alertmanager, Thanos Querier, and others. The CMO also deploys the Telemeter Client, which sends a subset of data from platform Prometheus instances to Red Hat to facilitate Remote Health Monitoring for clusters. For more information, see Monitoring overview and About remote health monitoring . 1.2. Logging Collect, visualize, forward, and store log data to troubleshoot issues, identify performance bottlenecks, and detect security threats. In logging 5.7 and later versions, users can configure the LokiStack deployment to produce customized alerts and recorded metrics. For more information, see About Logging . 1.3. Distributed tracing Store and visualize large volumes of requests passing through distributed systems, across the whole stack of microservices, and under heavy loads. Use it for monitoring distributed transactions, gathering insights into your instrumented services, network profiling, performance and latency optimization, root cause analysis, and troubleshooting the interaction between components in modern cloud-native microservices-based applications. For more information, see Distributed tracing architecture . 1.4. Red Hat build of OpenTelemetry Instrument, generate, collect, and export telemetry traces, metrics, and logs to analyze and understand your software's performance and behavior. Use open-source back ends like Tempo or Prometheus, or use commercial offerings. Learn a single set of APIs and conventions, and own the data that you generate. For more information, see Red Hat build of OpenTelemetry . 1.5. Network Observability Observe the network traffic for OpenShift Container Platform clusters and create network flows with the Network Observability Operator. View and analyze the stored network flows information in the OpenShift Container Platform console for further insight and troubleshooting. For more information, see Network Observability overview . 
| null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/observability_overview/observability-overview |
Chapter 10. Viewing and exporting logs | Chapter 10. Viewing and exporting logs Activity logs are gathered for all repositories and namespaces in Quay.io. Viewing usage logs of Quay.io can provide valuable insights and benefits for both operational and security purposes. Usage logs might reveal the following information: Resource Planning : Usage logs can provide data on the number of image pulls, pushes, and overall traffic to your registry. User Activity : Logs can help you track user activity, showing which users are accessing and interacting with images in the registry. This can be useful for auditing, understanding user behavior, and managing access controls. Usage Patterns : By studying usage patterns, you can gain insights into which images are popular, which versions are frequently used, and which images are rarely accessed. This information can help prioritize image maintenance and cleanup efforts. Security Auditing : Usage logs enable you to track who is accessing images and when. This is crucial for security auditing, compliance, and investigating any unauthorized or suspicious activity. Image Lifecycle Management : Logs can reveal which images are being pulled, pushed, and deleted. This information is essential for managing image lifecycles, including deprecating old images and ensuring that only authorized images are used. Compliance and Regulatory Requirements : Many industries have compliance requirements that mandate tracking and auditing of access to sensitive resources. Usage logs can help you demonstrate compliance with such regulations. Identifying Abnormal Behavior : Unusual or abnormal patterns in usage logs can indicate potential security breaches or malicious activity. Monitoring these logs can help you detect and respond to security incidents more effectively. Trend Analysis : Over time, usage logs can provide trends and insights into how your registry is being used. This can help you make informed decisions about resource allocation, access controls, and image management strategies. There are multiple ways of accessing log files: Viewing logs through the web UI. Exporting logs so that they can be saved externally. Accessing log entries using the API. To access logs, you must have administrative privileges for the selected repository or namespace. Note A maximum of 100 log results are available at a time via the API. To gather more results than that, you must use the log exporter feature described in this chapter. 10.1. Viewing usage logs Logs can provide valuable information about the way that your registry is being used. Logs can be viewed by Organization, repository, or namespace on the v2 UI by using the following procedure. Procedure Log in to your Red Hat Quay registry. Navigate to an Organization, repository, or namespace for which you are an administrator. Click Logs . Optional. Set the date range for viewing log entries by adding dates to the From and To boxes. Optional. Export the logs by clicking Export . You must enter an email address or a valid callback URL that starts with http:// or https:// . This process can take an hour depending on how many logs there are. 10.2. Exporting repository logs by using the UI You can obtain a larger number of log files and save them outside of Quay.io by using the Export Logs feature. This feature has the following benefits and constraints: You can choose a range of dates for the logs you want to gather from a repository.
You can request that the logs be sent to you by an email attachment or directed to a callback URL. To export logs, you must be an administrator of the repository or namespace. 30 days worth of logs are retained for all users. Export logs only gathers log data that was previously produced. It does not stream logging data. When logs are gathered and made available to you, you should immediately copy that data if you want to save it. By default, the data expires after one hour. Use the following procedure to export logs. Procedure Select a repository for which you have administrator privileges. Click the Logs tab. Optional. If you want to specify specific dates, enter the range in the From and To boxes. Click the Export Logs button. An Export Usage Logs pop-up appears. Enter an email address or callback URL to receive the exported log. For the callback URL, you can use a URL to a specified domain, for example, <webhook.site>. Select Confirm to start the process of gathering the selected log entries. Depending on the amount of logging data being gathered, this can take anywhere from a few minutes to several hours to complete. When the log export is completed, one of the following two events happens: An email is received, alerting you to the availability of your requested exported log entries. A successful status of your log export request from the webhook URL is returned. Additionally, a link to the exported data is made available for you to download the logs. | null | https://docs.redhat.com/en/documentation/red_hat_quay/3/html/about_quay_io/use-quay-view-export-logs
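For the API access mentioned above (limited to 100 results per request), a request typically looks like the hedged curl sketch below. The endpoint path, the starttime/endtime parameter names, and the date format are assumptions that should be confirmed against the Quay.io API reference, and <access_token> stands for an OAuth token with administrative scope on the repository.

# Assumed endpoint and parameters; verify against the Quay.io API reference before use
curl -s -H "Authorization: Bearer <access_token>" \
  "https://quay.io/api/v1/repository/<namespace>/<repository>/logs?starttime=06/01/2024&endtime=06/30/2024"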
3.3. Kernel Address Space Randomization | 3.3. Kernel Address Space Randomization Red Hat Enterprise Linux 7.5 and later include the Kernel Address Space Randomization (KASLR) feature for KVM guest virtual machines. KASLR enables randomizing the physical and virtual address at which the kernel image is decompressed, and thus prevents guest security exploits based on the location of kernel objects. KASLR is activated by default, but can be deactivated on a specific guest by adding the nokaslr string to the guest's kernel command line. To edit the guest boot options, use the following command, where guestname is the name of your guest: Afterwards, modify the GRUB_CMDLINE_LINUX line, for example: Important Guest dump files created from guests that have with KASLR activated are not readable by the crash utility. To fix this, add the <vmcoreinfo/> element to the <features> section of the XML configuration files of your guests. Note, however, that migrating guests with <vmcoreinfo/> fails if the destination host is using an OS that does not support <vmcoreinfo/> . These include Red Hat Enterprise Linux 7.4 and earlier, as well as Red Hat Enterprise Linux 6.9 and earlier. | [
"virt-edit -d guestname /etc/default/grub",
"GRUB_CMDLINE_LINUX=\"rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap rhgb quiet nokaslr\""
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_security_guide/sect-virtualization_security_guide-guest_security-kaslr |
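The note above asks you to add the <vmcoreinfo/> element to the <features> section of the guest's XML configuration, for example by running virsh edit guestname. A minimal sketch of the resulting section follows; the acpi and apic entries are shown only as typical pre-existing content and may differ on your guest, so only the <vmcoreinfo/> line needs to be added.

<features>
  <acpi/>
  <apic/>
  <vmcoreinfo/>
</features>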
Chapter 2. Cluster Observability Operator overview | Chapter 2. Cluster Observability Operator overview The Cluster Observability Operator (COO) is an optional component of the OpenShift Container Platform designed for creating and managing highly customizable monitoring stacks. It enables cluster administrators to automate configuration and management of monitoring needs extensively, offering a more tailored and detailed view of each namespace compared to the default OpenShift Container Platform monitoring system. The COO deploys the following monitoring components: Prometheus - A highly available Prometheus instance capable of sending metrics to an external endpoint by using remote write. Thanos Querier (optional) - Enables querying of Prometheus instances from a central location. Alertmanager (optional) - Provides alert configuration capabilities for different services. UI plugins (optional) - Enhances the observability capabilities with plugins for monitoring, logging, distributed tracing and troubleshooting. Korrel8r (optional) - Provides observability signal correlation, powered by the open source Korrel8r project. 2.1. COO compared to default monitoring stack The COO components function independently of the default in-cluster monitoring stack, which is deployed and managed by the Cluster Monitoring Operator (CMO). Monitoring stacks deployed by the two Operators do not conflict. You can use a COO monitoring stack in addition to the default platform monitoring components deployed by the CMO. The key differences between COO and the default in-cluster monitoring stack are shown in the following table: Feature COO Default monitoring stack Scope and integration Offers comprehensive monitoring and analytics for enterprise-level needs, covering cluster and workload performance. However, it lacks direct integration with OpenShift Container Platform and typically requires an external Grafana instance for dashboards. Limited to core components within the cluster, for example, API server and etcd, and to OpenShift-specific namespaces. There is deep integration into OpenShift Container Platform including console dashboards and alert management in the console. Configuration and customization Broader configuration options including data retention periods, storage methods, and collected data types. The COO can delegate ownership of single configurable fields in custom resources to users by using Server-Side Apply (SSA), which enhances customization. Built-in configurations with limited customization options. Data retention and storage Long-term data retention, supporting historical analysis and capacity planning Shorter data retention times, focusing on short-term monitoring and real-time detection. 2.2. Key advantages of using COO Deploying COO helps you address monitoring requirements that are hard to achieve using the default monitoring stack. 2.2.1. Extensibility You can add more metrics to a COO-deployed monitoring stack, which is not possible with core platform monitoring without losing support. You can receive cluster-specific metrics from core platform monitoring through federation. COO supports advanced monitoring scenarios like trend forecasting and anomaly detection. 2.2.2. Multi-tenancy support You can create monitoring stacks per user namespace. You can deploy multiple stacks per namespace or a single stack for multiple namespaces. COO enables independent configuration of alerts and receivers for different teams. 2.2.3. 
Scalability Supports multiple monitoring stacks on a single cluster. Enables monitoring of large clusters through manual sharding. Addresses cases where metrics exceed the capabilities of a single Prometheus instance. 2.2.4. Flexibility Decoupled from OpenShift Container Platform release cycles. Faster release iterations and rapid response to changing requirements. Independent management of alerting rules. 2.3. Target users for COO COO is ideal for users who need high customizability, scalability, and long-term data retention, especially in complex, multi-tenant enterprise environments. 2.3.1. Enterprise-level users and administrators Enterprise users require in-depth monitoring capabilities for OpenShift Container Platform clusters, including advanced performance analysis, long-term data retention, trend forecasting, and historical analysis. These features help enterprises better understand resource usage, prevent performance issues, and optimize resource allocation. 2.3.2. Operations teams in multi-tenant environments With multi-tenancy support, COO allows different teams to configure monitoring views for their projects and applications, making it suitable for teams with flexible monitoring needs. 2.3.3. Development and operations teams COO provides fine-grained monitoring and customizable observability views for in-depth troubleshooting, anomaly detection, and performance tuning during development and operations. 2.4. Using Server-Side Apply to customize Prometheus resources Server-Side Apply is a feature that enables collaborative management of Kubernetes resources. The control plane tracks how different users and controllers manage fields within a Kubernetes object. It introduces the concept of field managers and tracks ownership of fields. This centralized control provides conflict detection and resolution, and reduces the risk of unintended overwrites. Compared to Client-Side Apply, it is more declarative, and tracks field management instead of last applied state. Server-Side Apply Declarative configuration management by updating a resource's state without needing to delete and recreate it. Field management Users can specify which fields of a resource they want to update, without affecting the other fields. Managed fields Kubernetes stores metadata about who manages each field of an object in the managedFields field within metadata. Conflicts If multiple managers try to modify the same field, a conflict occurs. The applier can choose to overwrite, relinquish control, or share management. Merge strategy Server-Side Apply merges fields based on the actor who manages them. Procedure Add a MonitoringStack resource using the following configuration: Example MonitoringStack object apiVersion: monitoring.rhobs/v1alpha1 kind: MonitoringStack metadata: labels: coo: example name: sample-monitoring-stack namespace: coo-demo spec: logLevel: debug retention: 1d resourceSelector: matchLabels: app: demo A Prometheus resource named sample-monitoring-stack is generated in the coo-demo namespace. 
Retrieve the managed fields of the generated Prometheus resource by running the following command: USD oc -n coo-demo get Prometheus.monitoring.rhobs -oyaml --show-managed-fields Example output managedFields: - apiVersion: monitoring.rhobs/v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:labels: f:app.kubernetes.io/managed-by: {} f:app.kubernetes.io/name: {} f:app.kubernetes.io/part-of: {} f:ownerReferences: k:{"uid":"81da0d9a-61aa-4df3-affc-71015bcbde5a"}: {} f:spec: f:additionalScrapeConfigs: {} f:affinity: f:podAntiAffinity: f:requiredDuringSchedulingIgnoredDuringExecution: {} f:alerting: f:alertmanagers: {} f:arbitraryFSAccessThroughSMs: {} f:logLevel: {} f:podMetadata: f:labels: f:app.kubernetes.io/component: {} f:app.kubernetes.io/part-of: {} f:podMonitorSelector: {} f:replicas: {} f:resources: f:limits: f:cpu: {} f:memory: {} f:requests: f:cpu: {} f:memory: {} f:retention: {} f:ruleSelector: {} f:rules: f:alert: {} f:securityContext: f:fsGroup: {} f:runAsNonRoot: {} f:runAsUser: {} f:serviceAccountName: {} f:serviceMonitorSelector: {} f:thanos: f:baseImage: {} f:resources: {} f:version: {} f:tsdb: {} manager: observability-operator operation: Apply - apiVersion: monitoring.rhobs/v1 fieldsType: FieldsV1 fieldsV1: f:status: .: {} f:availableReplicas: {} f:conditions: .: {} k:{"type":"Available"}: .: {} f:lastTransitionTime: {} f:observedGeneration: {} f:status: {} f:type: {} k:{"type":"Reconciled"}: .: {} f:lastTransitionTime: {} f:observedGeneration: {} f:status: {} f:type: {} f:paused: {} f:replicas: {} f:shardStatuses: .: {} k:{"shardID":"0"}: .: {} f:availableReplicas: {} f:replicas: {} f:shardID: {} f:unavailableReplicas: {} f:updatedReplicas: {} f:unavailableReplicas: {} f:updatedReplicas: {} manager: PrometheusOperator operation: Update subresource: status Check the metadata.managedFields values, and observe that some fields in metadata and spec are managed by the MonitoringStack resource. Modify a field that is not controlled by the MonitoringStack resource: Change spec.enforcedSampleLimit , which is a field not set by the MonitoringStack resource. Create the file prom-spec-edited.yaml : prom-spec-edited.yaml apiVersion: monitoring.rhobs/v1 kind: Prometheus metadata: name: sample-monitoring-stack namespace: coo-demo spec: enforcedSampleLimit: 1000 Apply the YAML by running the following command: USD oc apply -f ./prom-spec-edited.yaml --server-side Note You must use the --server-side flag. 
Get the changed Prometheus object and note that there is one more section in managedFields which has spec.enforcedSampleLimit : USD oc get prometheus -n coo-demo Example output managedFields: 1 - apiVersion: monitoring.rhobs/v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:labels: f:app.kubernetes.io/managed-by: {} f:app.kubernetes.io/name: {} f:app.kubernetes.io/part-of: {} f:spec: f:enforcedSampleLimit: {} 2 manager: kubectl operation: Apply 1 managedFields 2 spec.enforcedSampleLimit Modify a field that is managed by the MonitoringStack resource: Change spec.logLevel , which is a field managed by the MonitoringStack resource, using the following YAML configuration: # changing the logLevel from debug to info apiVersion: monitoring.rhobs/v1 kind: Prometheus metadata: name: sample-monitoring-stack namespace: coo-demo spec: logLevel: info 1 1 spec.logLevel has been added Apply the YAML by running the following command: USD oc apply -f ./prom-spec-edited.yaml --server-side Example output error: Apply failed with 1 conflict: conflict with "observability-operator": .spec.logLevel Please review the fields above--they currently have other managers. Here are the ways you can resolve this warning: * If you intend to manage all of these fields, please re-run the apply command with the `--force-conflicts` flag. * If you do not intend to manage all of the fields, please edit your manifest to remove references to the fields that should keep their current managers. * You may co-own fields by updating your manifest to match the existing value; in this case, you'll become the manager if the other manager(s) stop managing the field (remove it from their configuration). See https://kubernetes.io/docs/reference/using-api/server-side-apply/#conflicts Notice that the field spec.logLevel cannot be changed using Server-Side Apply, because it is already managed by observability-operator . Use the --force-conflicts flag to force the change. USD oc apply -f ./prom-spec-edited.yaml --server-side --force-conflicts Example output prometheus.monitoring.rhobs/sample-monitoring-stack serverside-applied With the --force-conflicts flag, the field can be forced to change, but since the same field is also managed by the MonitoringStack resource, the Observability Operator detects the change, and reverts it back to the value set by the MonitoringStack resource. Note Some Prometheus fields generated by the MonitoringStack resource are influenced by the fields in the MonitoringStack spec stanza, for example, logLevel . These can be changed by changing the MonitoringStack spec . To change the logLevel in the Prometheus object, apply the following YAML to change the MonitoringStack resource: apiVersion: monitoring.rhobs/v1alpha1 kind: MonitoringStack metadata: name: sample-monitoring-stack labels: coo: example spec: logLevel: info To confirm that the change has taken place, query for the log level by running the following command: USD oc -n coo-demo get Prometheus.monitoring.rhobs -o=jsonpath='{.items[0].spec.logLevel}' Example output info Note If a new version of an Operator generates a field that was previously generated and controlled by an actor, the value set by the actor will be overridden. For example, you are managing a field enforcedSampleLimit which is not generated by the MonitoringStack resource. If the Observability Operator is upgraded, and the new version of the Operator generates a value for enforcedSampleLimit , this will override the value you have previously set. 
The Prometheus object generated by the MonitoringStack resource may contain some fields which are not explicitly set by the monitoring stack. These fields appear because they have default values. Additional resources Kubernetes documentation for Server-Side Apply (SSA) | [
"apiVersion: monitoring.rhobs/v1alpha1 kind: MonitoringStack metadata: labels: coo: example name: sample-monitoring-stack namespace: coo-demo spec: logLevel: debug retention: 1d resourceSelector: matchLabels: app: demo",
"oc -n coo-demo get Prometheus.monitoring.rhobs -oyaml --show-managed-fields",
"managedFields: - apiVersion: monitoring.rhobs/v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:labels: f:app.kubernetes.io/managed-by: {} f:app.kubernetes.io/name: {} f:app.kubernetes.io/part-of: {} f:ownerReferences: k:{\"uid\":\"81da0d9a-61aa-4df3-affc-71015bcbde5a\"}: {} f:spec: f:additionalScrapeConfigs: {} f:affinity: f:podAntiAffinity: f:requiredDuringSchedulingIgnoredDuringExecution: {} f:alerting: f:alertmanagers: {} f:arbitraryFSAccessThroughSMs: {} f:logLevel: {} f:podMetadata: f:labels: f:app.kubernetes.io/component: {} f:app.kubernetes.io/part-of: {} f:podMonitorSelector: {} f:replicas: {} f:resources: f:limits: f:cpu: {} f:memory: {} f:requests: f:cpu: {} f:memory: {} f:retention: {} f:ruleSelector: {} f:rules: f:alert: {} f:securityContext: f:fsGroup: {} f:runAsNonRoot: {} f:runAsUser: {} f:serviceAccountName: {} f:serviceMonitorSelector: {} f:thanos: f:baseImage: {} f:resources: {} f:version: {} f:tsdb: {} manager: observability-operator operation: Apply - apiVersion: monitoring.rhobs/v1 fieldsType: FieldsV1 fieldsV1: f:status: .: {} f:availableReplicas: {} f:conditions: .: {} k:{\"type\":\"Available\"}: .: {} f:lastTransitionTime: {} f:observedGeneration: {} f:status: {} f:type: {} k:{\"type\":\"Reconciled\"}: .: {} f:lastTransitionTime: {} f:observedGeneration: {} f:status: {} f:type: {} f:paused: {} f:replicas: {} f:shardStatuses: .: {} k:{\"shardID\":\"0\"}: .: {} f:availableReplicas: {} f:replicas: {} f:shardID: {} f:unavailableReplicas: {} f:updatedReplicas: {} f:unavailableReplicas: {} f:updatedReplicas: {} manager: PrometheusOperator operation: Update subresource: status",
"apiVersion: monitoring.rhobs/v1 kind: Prometheus metadata: name: sample-monitoring-stack namespace: coo-demo spec: enforcedSampleLimit: 1000",
"oc apply -f ./prom-spec-edited.yaml --server-side",
"oc get prometheus -n coo-demo",
"managedFields: 1 - apiVersion: monitoring.rhobs/v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:labels: f:app.kubernetes.io/managed-by: {} f:app.kubernetes.io/name: {} f:app.kubernetes.io/part-of: {} f:spec: f:enforcedSampleLimit: {} 2 manager: kubectl operation: Apply",
"changing the logLevel from debug to info apiVersion: monitoring.rhobs/v1 kind: Prometheus metadata: name: sample-monitoring-stack namespace: coo-demo spec: logLevel: info 1",
"oc apply -f ./prom-spec-edited.yaml --server-side",
"error: Apply failed with 1 conflict: conflict with \"observability-operator\": .spec.logLevel Please review the fields above--they currently have other managers. Here are the ways you can resolve this warning: * If you intend to manage all of these fields, please re-run the apply command with the `--force-conflicts` flag. * If you do not intend to manage all of the fields, please edit your manifest to remove references to the fields that should keep their current managers. * You may co-own fields by updating your manifest to match the existing value; in this case, you'll become the manager if the other manager(s) stop managing the field (remove it from their configuration). See https://kubernetes.io/docs/reference/using-api/server-side-apply/#conflicts",
"oc apply -f ./prom-spec-edited.yaml --server-side --force-conflicts",
"prometheus.monitoring.rhobs/sample-monitoring-stack serverside-applied",
"apiVersion: monitoring.rhobs/v1alpha1 kind: MonitoringStack metadata: name: sample-monitoring-stack labels: coo: example spec: logLevel: info",
"oc -n coo-demo get Prometheus.monitoring.rhobs -o=jsonpath='{.items[0].spec.logLevel}'",
"info"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/cluster_observability_operator/cluster-observability-operator-overview |
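The field ownership workflow shown above can also be checked from a script. The following lines are an editorial sketch rather than part of the original procedure: they assume the same sample-monitoring-stack object and coo-demo namespace used in the example, an oc client that supports Server-Side Apply, and a hypothetical field manager name my-team.

# List which manager owns each managedFields entry of the generated Prometheus object.
oc -n coo-demo get prometheus.monitoring.rhobs sample-monitoring-stack \
  -o jsonpath='{range .metadata.managedFields[*]}{.manager}{"\t"}{.operation}{"\n"}{end}'

# Apply changes under an explicit field manager name instead of the default "kubectl",
# so any conflict is reported against a recognizable owner.
oc apply -f ./prom-spec-edited.yaml --server-side --field-manager=my-team

# Confirm which value ended up in the live object.
oc -n coo-demo get prometheus.monitoring.rhobs sample-monitoring-stack \
  -o jsonpath='{.spec.enforcedSampleLimit}'

If a field is already owned by observability-operator, the apply fails with the conflict message shown earlier, and you can then decide whether forcing the change with --force-conflicts is appropriate.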
Chapter 1. Web Console Overview | Chapter 1. Web Console Overview The Red Hat Red Hat OpenShift Service on AWS web console provides a graphical user interface to visualize your project data and perform administrative, management, and troubleshooting tasks. The web console runs as pods on the control plane nodes in the openshift-console project. It is managed by a console-operator pod. Both Administrator and Developer perspectives are supported. Both Administrator and Developer perspectives enable you to create quick start tutorials for Red Hat OpenShift Service on AWS. A quick start is a guided tutorial with user tasks and is useful for getting oriented with an application, Operator, or other product offering. 1.1. About the Administrator perspective in the web console The Administrator perspective enables you to view the cluster inventory, capacity, general and specific utilization information, and the stream of important events, all of which help you to simplify planning and troubleshooting tasks. Both project administrators and cluster administrators can view the Administrator perspective. Cluster administrators can also open an embedded command line terminal instance with the web terminal Operator in Red Hat OpenShift Service on AWS 4.7 and later. Note The default web console perspective that is shown depends on the role of the user. The Administrator perspective is displayed by default if the user is recognized as an administrator. The Administrator perspective provides workflows specific to administrator use cases, such as the ability to: Manage workload, storage, networking, and cluster settings. Install and manage Operators using the Operator Hub. Add identity providers that allow users to log in and manage user access through roles and role bindings. View and manage a variety of advanced settings such as cluster updates, partial cluster updates, cluster Operators, custom resource definitions (CRDs), role bindings, and resource quotas. Access and manage monitoring features such as metrics, alerts, and monitoring dashboards. View and manage logging, metrics, and high-status information about the cluster. Visually interact with applications, components, and services associated with the Administrator perspective in Red Hat OpenShift Service on AWS. 1.2. About the Developer perspective in the web console The Developer perspective offers several built-in ways to deploy applications, services, and databases. In the Developer perspective, you can: View real-time visualization of rolling and recreating rollouts on the component. View the application status, resource utilization, project event streaming, and quota consumption. Share your project with others. Troubleshoot problems with your applications by running Prometheus Query Language (PromQL) queries on your project and examining the metrics visualized on a plot. The metrics provide information about the state of a cluster and any user-defined workloads that you are monitoring. Cluster administrators can also open an embedded command line terminal instance in the web console in Red Hat OpenShift Service on AWS 4.7 and later. Note The default web console perspective that is shown depends on the role of the user. The Developer perspective is displayed by default if the user is recognised as a developer. The Developer perspective provides workflows specific to developer use cases, such as the ability to: Create and deploy applications on Red Hat OpenShift Service on AWS by importing existing codebases, images, and container files. 
Visually interact with applications, components, and services associated with them within a project and monitor their deployment and build status. Group components within an application and connect the components within and across applications. Integrate serverless capabilities (Technology Preview). Create workspaces to edit your application code using Eclipse Che. You can use the Topology view to display applications, components, and workloads of your project. If you have no workloads in the project, the Topology view will show some links to create or import them. You can also use the Quick Search to import components directly. Additional resources See Viewing application composition using the Topology view for more information on using the Topology view in the Developer perspective. 1.3. Accessing the Perspectives You can access the Administrator and Developer perspectives from the web console as follows: Prerequisites To access a perspective, ensure that you have logged in to the web console. Your default perspective is automatically determined by the permissions of the user. The Administrator perspective is selected for users with access to all projects, while the Developer perspective is selected for users with limited access to their own projects. Additional resources See Adding User Preferences for more information on changing perspectives. Procedure Use the perspective switcher to switch to the Administrator or Developer perspective. Select an existing project from the Project drop-down list. You can also create a new project from this dropdown. Note You can use the perspective switcher only as cluster-admin . Additional resources Viewing cluster information Using the web terminal Creating quick start tutorials | null | https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/web_console/web-console-overview
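Because the console itself runs as ordinary workloads managed by an Operator, its health can also be checked from the command line. This is an illustrative sketch, not part of the original overview; it only assumes an oc client logged in with permission to read the listed namespaces, which typically requires cluster-admin.

# Pods serving the web console and the console Operator that manages them.
oc get pods -n openshift-console
oc get pods -n openshift-console-operator

# Status reported by the console cluster Operator.
oc get clusteroperator console

# URL of the web console for the current cluster.
oc whoami --show-console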
4.2. Installing Debuginfo Packages | 4.2. Installing Debuginfo Packages Red Hat Enterprise Linux also provides -debuginfo packages for all architecture-dependent RPMs included in the operating system. A packagename -debuginfo- version - release . architecture .rpm package contains detailed information about the relationship of the package source files and the final installed binary. The debuginfo packages contain both .debug files, which in turn contain DWARF debuginfo and the source files used for compiling the binary packages. Note Most of the debugger functionality is missing if you attempt to debug a package without having its debuginfo equivalent installed. For example, the names of exported shared library functions will still be available, but the matching source file lines will not be without the debuginfo package installed. Use gcc compilation option -g for your own programs. The debugging experience is better if no optimizations (gcc option -O , such as -O2 ) are applied with -g . For Red Hat Enterprise Linux 6, the debuginfo packages are now available on a new channel on the Red Hat Network. To install the -debuginfo package of a package (that is, typically packagename -debuginfo ), first the machine has to be subscribed to the corresponding Debuginfo channel. For example, for Red Hat Enterprise Server 6, the corresponding channel would be Red Hat Enterprise Linux Server Debuginfo (v. 6) . Red Hat Enterprise Linux system packages are compiled with optimizations (gcc option -O2 ). This means that some variables will be displayed as <optimized out> . Stepping through code will 'jump' a little but a crash can still be analyzed. If some debugging information is missing because of the optimizations, the right variable information can be found by disassembling the code and matching it to the source manually. This is applicable only in exceptional cases and is not suitable for regular debugging. For system packages, GDB informs the user if it is missing some debuginfo packages that limit its functionality. If the system package to be debugged is known, use the command suggested by GDB above. It will also automatically install all the debug packages packagename depends on. 4.2.1. Installing Debuginfo Packages for Core Files Analysis A core file is a representation of the memory image at the time of a process crash. For bug reporting of system program crashes, Red Hat recommends the use of the ABRT tool, explained in the Automatic Bug Reporting Tool chapter in the Red Hat Deployment Guide . If ABRT is not suitable for your purposes, the steps it automates are explained here. If the ulimit -c unlimited setting is in use when a process crashes, the core file is dumped into the current directory. The core file contains only the memory areas modified by the process from the original state of disk files. In order to perform a full analysis of a crash, a core file is required to have: the core file itself the executable binary which has crashed, such as /usr/sbin/sendmail all the shared libraries loaded in the binary when it crashed .debug files and source files (both stored in debuginfo RPMs) for the executable and all of its loaded libraries For a proper analysis, either the exact version-release.architecture for all the RPMs involved or the same build of your own compiled binaries is needed. At the time of the crash, the application may have already been recompiled or updated by yum on the disk, rendering the files inappropriate for the core file analysis. 
The core file contains build-ids of all the binaries involved. For more information on build-id, see Section 3.3, "build-id Unique Identification of Binaries" . The contents of the core file can be displayed by: The meaning of the columns in each line are: The in-memory address where the specific binary was mapped to (for example, 0x400000 in the first line). The size of the binary (for example, +0x207000 in the first line). The 160-bit SHA-1 build-id of the binary (for example, 2818b2009547f780a5639c904cded443e564973e in the first line). The in-memory address where the build-id bytes were stored (for example, @0x400284 in the first line). The on-disk binary file, if available (for example, /bin/sleep in the first line). This was found by eu-unstrip for this module. The on-disk debuginfo file, if available (for example, /usr/lib/debug/bin/sleep.debug ). However, best practice is to use the binary file reference instead. The shared library name as stored in the shared library list in the core file (for example, libc.so.6 in the third line). For each build-id (for example, ab/cdef0123456789012345678901234567890123 ) a symbolic link is included in its debuginfo RPM. Using the /bin/sleep executable above as an example, the coreutils-debuginfo RPM contains, among other files: In some cases (such as loading a core file), GDB does not know the name, version, or release of a name -debuginfo- version-release .rpm package; it only knows the build-id. In such cases, GDB suggests a different command: The version-release.architecture of the binary package packagename -debuginfo- version-release.architecture .rpm must be an exact match. If it differs then GDB cannot use the debuginfo package. Even the same version-release.architecture from a different build leads to an incompatible debuginfo package. If GDB reports a missing debuginfo, ensure to recheck: rpm -q packagename packagename -debuginfo The version-release.architecture definitions should match. rpm -V packagename packagename- debuginfo This command should produce no output, except possibly modified configuration files of packagename , for example. rpm -qi packagename packagename -debuginfo The version-release.architecture should display matching information for Vendor, Build Date, and Build Host. For example, using a CentOS debuginfo RPM for a Red Hat Enterprise Linux RPM package will not work. If the required build-id is known, the following command can query which RPM contains it: For example, a version of an executable which matches the core file can be installed by: Similar methods are available if the binaries are not packaged into RPMs and stored in yum repositories. It is possible to create local repositories with custom application builds by using /usr/bin/createrepo . | [
"gdb ls [...] Reading symbols from /bin/ls...(no debugging symbols found)...done. Missing separate debuginfos, use: debuginfo-install coreutils-8.4-16.el6.x86_64 (gdb) q",
"debuginfo-install packagename",
"eu-unstrip -n --core=./core.9814 0x400000+0x207000 2818b2009547f780a5639c904cded443e564973e@0x400284 /bin/sleep /usr/lib/debug/bin/sleep.debug [exe] 0x7fff26fff000+0x1000 1e2a683b7d877576970e4275d41a6aaec280795e@0x7fff26fff340 . - linux-vdso.so.1 0x35e7e00000+0x3b6000 374add1ead31ccb449779bc7ee7877de3377e5ad@0x35e7e00280 /lib64/libc-2.14.90.so /usr/lib/debug/lib64/libc-2.14.90.so.debug libc.so.6 0x35e7a00000+0x224000 3ed9e61c2b7e707ce244816335776afa2ad0307d@0x35e7a001d8 /lib64/ld-2.14.90.so /usr/lib/debug/lib64/ld-2.14.90.so.debug ld-linux-x86-64.so.2",
"lrwxrwxrwx 1 root root 24 Nov 29 17:07 /usr/lib/debug/.build-id/28/18b2009547f780a5639c904cded443e564973e -> ../../../../../bin/sleep* lrwxrwxrwx 1 root root 21 Nov 29 17:07 /usr/lib/debug/.build-id/28/18b2009547f780a5639c904cded443e564973e.debug -> ../../bin/sleep.debug",
"gdb -c ./core [...] Missing separate debuginfo for the main executable filename Try: yum --disablerepo='*' --enablerepo='*debug*' install /usr/lib/debug/.build-id/ef/dd0b5e69b0742fa5e5bad0771df4d1df2459d1",
"repoquery --disablerepo='*' --enablerepo='*-debug*' -qf /usr/lib/debug/.build-id/ef/dd0b5e69b0742fa5e5bad0771df4d1df2459d1",
"yum --enablerepo='*-debug*' install USD(eu-unstrip -n --core=./core.9814 | sed -e 's#^[^ ]* \\(..\\)\\([^@ ]*\\).*USD#/usr/lib/debug/.build-id/\\1/\\2#p' -e 's/USD/.debug/')"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/developer_guide/intro.debuginfo |
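The matching rules listed above (identical version-release.architecture, clean rpm -V output, and matching Vendor, Build Date, and Build Host) can be run together as a quick check. The following sketch is illustrative only; coreutils is used purely as an example package name and the corresponding Debuginfo channel or repository is assumed to be already enabled.

pkg=coreutils   # substitute the package you are debugging

# Install the matching debuginfo package and the debuginfo of its dependencies.
debuginfo-install "$pkg"

# The version-release.architecture printed for both packages must match exactly.
rpm -q "$pkg" "$pkg-debuginfo"

# Should print nothing, except possibly modified configuration files of the package.
rpm -V "$pkg" "$pkg-debuginfo"

# Vendor, Build Date, and Build Host should agree between the two packages.
rpm -qi "$pkg" "$pkg-debuginfo" | grep -E '^(Vendor|Build Date|Build Host)'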
Chapter 2. Archive JFR recordings | Chapter 2. Archive JFR recordings You can archive active JFR recordings to avoid potential data loss from JFR recordings. You can download or upload the archived JFR recording, so that you can analyze the recording to suits your needs. You can find archived JFR recordings from the Archives menu in chronological order under one of three headings: All Targets , All Archives , and Uploads . Depending on what actions you performed on a JFR recording, the recording might display under each table. 2.1. Archiving JDK Flight Recorder (JFR) recordings You can archive active JFR recordings to avoid potential data loss from JFR recordings. Data loss might occur when Cryostat replaces legacy JFR recording data with new data to save storage space or when a target JVM abruptly stops or restarts. When you create an archived recording, Cryostat copies the active JFR recording's data and stores the data in a persistent storage location on your Cryostat instance. The Red Hat build of Cryostat Operator builds this persistent storage location onto the associated persistent volume claim (PVC) on the Red Hat OpenShift cluster. You can archive any JFR recording, regardless of its configuration. Additionally, you can archive snapshots from a JFR recording. Prerequisites Entered your authentication details for your Cryostat instance. Created a target JVM recording and entered your authenticated details to access the Recordings menu. See Creating a JDK Flight Recorder (JFR) recording (Creating a JFR recording with Cryostat). Procedure On the Active Recordings tab, select the checkbox for your JFR recording. The Archive button is activated in the Active Recordings toolbar. Figure 2.1. Archive button for your JFR recording Click the Archive button. Cryostat creates an archived recording of your JFR recording. You can view your archived recording from under the Archived Recordings tab along with any other recording that relates to your selected target JVM. Alternatively, you can view your archived recording from under the All Targets table. Figure 2.2. Example of a listed target JVM application that is under the All Targets table Tip To remove a target JVM entry that does not have an archived recording, select the Hide targets with zero recordings checkbox. After you click on the twistie ( v ) beside the JVM target entry, you can access a filter function, where you can edit labels to enhance your filter or click the Delete button to remove the filter. From the All Targets table, select the checkbox beside each target JVM application that you want to review. The table lists each archived recording and its source location. Go to the All Archives table. This table looks similar to the All Targets table, but the All Archives table lists target JVM applications from files that Cryostat archived inside Cryostat. Note If an archived file has no recognizable JVM applications, it is still listed on the All Archives table but opens within a nested table under the heading lost . Optional: To delete an archived recording, select the checkbox to the specific archived JFR recording item, and click Delete when prompted. Figure 2.3. Deleting an archived JFR recording Note Cryostat assigns names to archived recordings based on the address of the target JVM's application, the name of the active recording, and the timestamp of the created archived recordings. Additional resources See Persistent storage using local volumes (Red Hat OpenShift) 2.2. 
Downloading an active recording or an archived recording You can use Cryostat to download an active recording or an archived recording to your local system. Prerequisites Entered your authentication details for your Cryostat instance. Created a JFR recording. See Creating a JDK Flight Recorder (JFR) recording (Creating a JFR recording with Cryostat). Optional: Uploaded an SSL certificate or provided your credentials to the target JVM. Optional: Archived your JFR recording. See Archiving JDK Flight Recorder (JFR) recordings (Using Cryostat to manage a JFR recording). Procedure Navigate to the Recordings menu or the Archives menu on your Cryostat instance. Note The remaining steps use the Recordings menu as an example, but you can follow similar steps on the Archives menu. Determine the recording you want by clicking either the Active Recordings tab or the Archived Recordings tab. Locate your listed JFR recording and then select its overflow menu. Figure 2.4. Viewing a JFR recording's overflow menu Choose one of the following options: From the overflow menu, click Download Recording . Depending on how you configured your operating system, a file-save dialog opens. Save the JFR binary file and the JSON file to your preferred location. From the All Targets table, select the overflow menu for your listed JFR recordings. Click Download to save the archived file along with its JSON file, which contains metadata and label information, to your local system. Optional: View the downloaded file with the Java Mission Control (JMC) desktop application. Note If you do not want to download the .jfr file, but instead want to view the data from your recording on the Cryostat application, you can click the View in Grafana option. 2.3. Uploading a JFR recording to the Cryostat archives location You can upload a JFR recording from your local system to the archives location of your Cryostat. To save Cryostat storage space, you might have scaled down or removed your JFR recording. If you downloaded a JFR recording, you can upload it to your Cryostat instance when you scale up or redeploy the instance. Additionally, you can upload a file from a Cryostat instance to a new Cryostat instance. Cryostat analysis tools work on the recording uploaded to the new Cryostat instance. Prerequisites Entered your authentication details for your Cryostat instance. Created a JFR recording. See Creating a JDK Flight Recorder (JFR) recording (Creating a JFR recording with Cryostat). See Downloading an active recording or an archived recordings (Using Cryostat to manage a JFR recording). Procedure Go to the Archives menu on your Cryostat instance. Figure 2.5. Archives menu on the Cryostat web console Optional: From the Uploads table, you can view all of your uploaded JFR recordings. The Uploads table also includes a filtering mechanism similar to other tables, such as the All Targets table, and other output. You can also use the filtering mechanism on the Archives menu to find an archived file that might have no recognizable target JVM application. Figure 2.6. The Uploads table in the Archives menu Click the upload icon. A Re-Upload Archived Recording window opens in your Cryostat web console: Figure 2.7. Re-Upload Archived Recording window In the JFR File field, click Upload . Locate the JFR recording files, which are files with a .jfr extension, and then click Submit . Note Alternatively, you can drag and drop .jfr files into the JFR File field. Your JFR recording files open in the Uploads table. Figure 2.8. 
Example of a JFR recording that is in the Uploads table | null | https://docs.redhat.com/en/documentation/red_hat_build_of_cryostat/3/html/using_cryostat_to_manage_a_jfr_recording/assembly_archive-jfr-recordings_assembly_security-options |
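A downloaded recording can also be examined outside of Cryostat. The commands below are an editorial aside rather than part of the original procedure; they assume a local JDK recent enough to ship the jfr command-line tool, and recording.jfr is a placeholder for the file saved in the download step.

# Summarize the event types contained in the downloaded recording.
jfr summary recording.jfr

# Print selected events, for example garbage collection activity.
jfr print --events jdk.GarbageCollection recording.jfr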
Chapter 12. Configuring the node port service range | Chapter 12. Configuring the node port service range As a cluster administrator, you can expand the available node port range. If your cluster uses of a large number of node ports, you might need to increase the number of available ports. The default port range is 30000-32767 . You can never reduce the port range, even if you first expand it beyond the default range. 12.1. Prerequisites Your cluster infrastructure must allow access to the ports that you specify within the expanded range. For example, if you expand the node port range to 30000-32900 , the inclusive port range of 32768-32900 must be allowed by your firewall or packet filtering configuration. 12.2. Expanding the node port range You can expand the node port range for the cluster. Important You can expand the node port range into the protected port range, which is between 0 and 32767. However, after expansion, you cannot change the range. Attempting to change the range returns the following error: The Network "cluster" is invalid: spec.serviceNodePortRange: Invalid value: "30000-32767": new service node port range 30000-32767 does not completely cover the range 0-32767 . Before making changes, ensure that the new range you set is appropriate for your cluster. Prerequisites Install the OpenShift CLI ( oc ). Log in to the cluster with a user with cluster-admin privileges. Procedure To expand the node port range, enter the following command. Replace <port> with the largest port number in the new range. USD oc patch network.config.openshift.io cluster --type=merge -p \ '{ "spec": { "serviceNodePortRange": "30000-<port>" } }' Tip You can alternatively apply the following YAML to update the node port range: apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: serviceNodePortRange: "30000-<port>" Example output network.config.openshift.io/cluster patched To confirm that the configuration is active, enter the following command. It can take several minutes for the update to apply. USD oc get configmaps -n openshift-kube-apiserver config \ -o jsonpath="{.data['config\.yaml']}" | \ grep -Eo '"service-node-port-range":["[[:digit:]]+-[[:digit:]]+"]' Example output "service-node-port-range":["30000-33000"] 12.3. Additional resources Configuring ingress cluster traffic using a NodePort Network [config.openshift.io/v1 ] Service [core/v1 ] | [
"oc patch network.config.openshift.io cluster --type=merge -p '{ \"spec\": { \"serviceNodePortRange\": \"30000-<port>\" } }'",
"apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: serviceNodePortRange: \"30000-<port>\"",
"network.config.openshift.io/cluster patched",
"oc get configmaps -n openshift-kube-apiserver config -o jsonpath=\"{.data['config\\.yaml']}\" | grep -Eo '\"service-node-port-range\":[\"[[:digit:]]+-[[:digit:]]+\"]'",
"\"service-node-port-range\":[\"30000-33000\"]"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/networking/configuring-node-port-service-range |
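To confirm that a port from the newly expanded part of the range is accepted, you can request it explicitly in a NodePort service. The following sketch is illustrative only and assumes the 30000-32900 example range used above; the service name and selector are placeholders.

# Request nodePort 32900, which lies in the expanded part of the example range.
oc apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: example-nodeport
spec:
  type: NodePort
  selector:
    app: example
  ports:
  - port: 8080
    targetPort: 8080
    nodePort: 32900
EOF

# The request is rejected if the nodePort falls outside serviceNodePortRange.
oc get service example-nodeport -o jsonpath='{.spec.ports[0].nodePort}'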
Chapter 3. Advanced Topics | Chapter 3. Advanced Topics This chapter discusses advanced topics on packaging Software Collections. 3.1. Using Software Collections over NFS In some environments, the requirement is often to have a centralized model for how applications and tools are distributed rather than allowing users to install the application or tool version they prefer. In this way, NFS is the common method of mounting centrally managed software. You need to define a Software Collection macro nfsmountable to use a Software Collection over NFS. If the macro is defined when building a Software Collection, the resulting Software Collection has its state files and configuration files located outside the Software Collection's /opt file system hierarchy. This enables you to mount the /opt file system hierarchy over NFS as read-only. It also makes state files and configuration files easier to manage. If you do not need support for Software Collections over NFS, using nfsmountable is optional but recommended. To define the nfsmountable macro, ensure that the Software Collection metapackage spec file contains the following lines: %global nfsmountable 1 %scl_package %scl As shown above, the nfsmountable macro must be defined before defining the %scl_package macro. This is because the %scl_package macro redefines the _sysconfdir , _sharedstatedir , and _localstatedir macros depending on whether the nfsmountable macro has been defined or not. The values that nfsmountable changes for the redefined macros are detailed in the following table. Table 3.1. Changed Values for Software Collection Macros Macro Original definition Expanded value for the original definition Changed definition Expanded value for the changed definition _sysconfdir %{_scl_root}/etc /opt/provider/%{scl}/root/etc %{_root_sysconfdir}%{_scl_prefix}/%{scl} /etc/opt/provider/%{scl} _sharedstatedir %{_scl_root}/var/lib /opt/provider/%{scl}/root/var/lib %{_root_localstatedir}%{_scl_prefix}/%{scl}/lib /var/opt/provider/%{scl}/lib _localstatedir %{_scl_root}/var /opt/provider/%{scl}/root/var %{_root_localstatedir}%{_scl_prefix}/%{scl} /var/opt/provider/%{scl} 3.1.1. Changed Directory Structure and File Ownership The nfsmountable macro also has an impact on how the scl_install and scl_files macros create a directory structure and set the file ownership when you run the rpmbuild command. For example, a directory structure of a Software Collection named software_collection with the nfsmountable macro defined looks as follows: 3.1.2. Registering and Deregistering Software Collections In case a Software Collection is shared over NFS but not locally installed on your system, you need to make the scl tool aware of it by registering that Software Collection. Registering a Software Collection is done by running the scl register command: where /opt/provider/software_collection is the absolute path to the file system hierarchy of the Software Collection you want to register. The path's directory must contain the enable scriptlet and the root/ directory to be considered a valid Software Collection file system hierarchy. Deregistering a Software Collection is a reverse operation that you perform when you no longer want the scl tool to be aware of a registered Software Collection. Deregistering a Software Collection is done by calling a deregister scriptet when running the scl command: where software_collection is the name of the Software Collection you want to deregister. 3.1.2.1. 
Using (de)register Scriptlets in a Software Collection Metapackage You can specify (de)register scriptlets in a Software Collection metapackage similarly to how enable scriptlets are specified. When specifying the scriptlets, remember to explicitly include them in the %files section of the metapackage spec file. See the following sample code for an example of specifying (de)register scriptlets: %install %scl_install cat >> %{buildroot}%{_scl_scripts}/enable << EOF # Contents of the enable scriptlet goes here ... EOF cat >> %{buildroot}%{_scl_scripts}/register << EOF # Contents of the register scriptlet goes here ... EOF cat >> %{buildroot}%{_scl_scripts}/deregister << EOF # Contents of the deregister scriptlet goes here ... EOF ... %files runtime -f filelist %scl_files %{_scl_scripts}/register %{_scl_scripts}/deregister In the register scriptlet, you can optionally specify the commands you want to run when registering the Software Collection, for example, commands to create files in /etc/opt/ or /var/opt/ . | [
"%global nfsmountable 1 %scl_package %scl",
"rpmbuild -ba software_collection.spec --define 'scl software_collection' USD rpm -qlp software_collection-runtime-1-1.el6.x86_64 /etc/opt/provider/software_collection /etc/opt/provider/software_collection/X11 /etc/opt/provider/software_collection/X11/applnk /etc/opt/provider/software_collection/X11/fontpath.d /opt/provider/software_collection/root/usr/src /opt/provider/software_collection/root/usr/src/debug /opt/provider/software_collection/root/usr/src/kernels /opt/provider/software_collection/root/usr/tmp /var/opt/provider/software_collection /var/opt/provider/software_collection/cache /var/opt/provider/software_collection/db /var/opt/provider/software_collection/empty",
"scl register /opt/provider/software_collection",
"scl deregister software_collection",
"%install %scl_install cat >> %{buildroot}%{_scl_scripts}/enable << EOF Contents of the enable scriptlet goes here EOF cat >> %{buildroot}%{_scl_scripts}/register << EOF Contents of the register scriptlet goes here EOF cat >> %{buildroot}%{_scl_scripts}/deregister << EOF Contents of the deregister scriptlet goes here EOF %files runtime -f filelist %scl_files %{_scl_scripts}/register %{_scl_scripts}/deregister"
] | https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/packaging_guide/chap-Advanced_Topics |
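As an illustration of the NFS scenario described above (not taken from the original text), a collection built with nfsmountable can be exported read-only by a central server and then registered on each client. The server name, export path, and collection name below are placeholders.

# On a client: mount the centrally managed /opt hierarchy read-only over NFS.
mount -o ro nfs-server.example.com:/export/opt/provider /opt/provider

# Make the local scl tool aware of the shared collection.
scl register /opt/provider/software_collection

# Writable state and configuration remain local, under
# /var/opt/provider/software_collection and /etc/opt/provider/software_collection.

# When the collection is no longer needed on this client:
scl deregister software_collection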
Chapter 72. server | Chapter 72. server This chapter describes the commands under the server command. 72.1. server add fixed ip Add fixed IP address to server Usage: Table 72.1. Positional arguments Value Summary <server> Server to receive the fixed ip address (name or id) <network> Network to allocate the fixed ip address from (name or ID) Table 72.2. Command arguments Value Summary -h, --help Show this help message and exit --fixed-ip-address <ip-address> Requested fixed ip address --tag <tag> Tag for the attached interface. (supported by --os- compute-api-version 2.52 or above) 72.2. server add floating ip Add floating IP address to server Usage: Table 72.3. Positional arguments Value Summary <server> Server to receive the floating ip address (name or id) <ip-address> Floating ip address to assign to the first available server port (IP only) Table 72.4. Command arguments Value Summary -h, --help Show this help message and exit --fixed-ip-address <ip-address> Fixed ip address to associate with this floating ip address. The first server port containing the fixed IP address will be used 72.3. server add network Add network to server Usage: Table 72.5. Positional arguments Value Summary <server> Server to add the network to (name or id) <network> Network to add to the server (name or id) Table 72.6. Command arguments Value Summary -h, --help Show this help message and exit --tag <tag> Tag for the attached interface. (supported by --os-compute-api- version 2.49 or above) 72.4. server add port Add port to server Usage: Table 72.7. Positional arguments Value Summary <server> Server to add the port to (name or id) <port> Port to add to the server (name or id) Table 72.8. Command arguments Value Summary -h, --help Show this help message and exit --tag <tag> Tag for the attached interface. (supported by api versions 2.49 - 2.latest ) 72.5. server add security group Add security group to server Usage: Table 72.9. Positional arguments Value Summary <server> Server (name or id) <group> Security group to add (name or id) Table 72.10. Command arguments Value Summary -h, --help Show this help message and exit 72.6. server add volume Add volume to server. Specify ``--os-compute-api-version 2.20`` or higher to add a volume to a server with status ``SHELVED`` or ``SHELVED_OFFLOADED``. Usage: Table 72.11. Positional arguments Value Summary <server> Server (name or id) <volume> Volume to add (name or id) Table 72.12. Command arguments Value Summary -h, --help Show this help message and exit --device <device> Server internal device name for volume --tag <tag> Tag for the attached volume (supported by --os- compute-api-version 2.49 or above) --enable-delete-on-termination Delete the volume when the server is destroyed (supported by --os-compute-api-version 2.79 or above) --disable-delete-on-termination Do not delete the volume when the server is destroyed (supported by --os-compute-api-version 2.79 or above) 72.7. server backup create Create a server backup image Usage: Table 72.13. Positional arguments Value Summary <server> Server to back up (name or id) Table 72.14. Command arguments Value Summary -h, --help Show this help message and exit --name <image-name> Name of the backup image (default: server name) --type <backup-type> Used to populate the backup_type property of the backup image (default: empty) --rotate <count> Number of backups to keep (default: 1) --wait Wait for backup image create to complete Table 72.15. 
Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 72.16. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 72.17. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 72.18. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 72.8. server create Create a new server Usage: Table 72.19. Positional arguments Value Summary <server-name> New server name Table 72.20. Command arguments Value Summary -h, --help Show this help message and exit --flavor <flavor> Create server with this flavor (name or id) --image <image> Create server boot disk from this image (name or id) --image-property <key=value> Create server using the image that matches the specified property. Property must match exactly one property. --volume <volume> Create server using this volume as the boot disk (name or ID) This option automatically creates a block device mapping with a boot index of 0. On many hypervisors (libvirt/kvm for example) this will be device vda. Do not create a duplicate mapping using --block-device- mapping for this volume. --snapshot <snapshot> Create server using this snapshot as the boot disk (name or ID) This option automatically creates a block device mapping with a boot index of 0. On many hypervisors (libvirt/kvm for example) this will be device vda. Do not create a duplicate mapping using --block-device- mapping for this volume. --boot-from-volume <volume-size> When used in conjunction with the ``--image`` or ``--image-property`` option, this option automatically creates a block device mapping with a boot index of 0 and tells the compute service to create a volume of the given size (in GB) from the specified image and use it as the root disk of the server. The root volume will not be deleted when the server is deleted. This option is mutually exclusive with the ``--volume`` and ``--snapshot`` options. --block-device-mapping <dev-name=mapping> deprecated create a block device on the server. Block device mapping in the format <dev-name>=<id>:<type>:<size(GB)>:<delete-on- terminate> <dev-name>: block device name, like: vdb, xvdc (required) <id>: Name or ID of the volume, volume snapshot or image (required) <type>: volume, snapshot or image; default: volume (optional) <size(GB)>: volume size if create from image or snapshot (optional) <delete-on-terminate>: true or false; default: false (optional) Replaced by --block-device --block-device Create a block device on the server. Either a path to a JSON file or a CSV-serialized string describing the block device mapping. The following keys are accepted for both: uuid=<uuid>: UUID of the volume, snapshot or ID (required if using source image, snapshot or volume), source_type=<source_type>: source type (one of: image, snapshot, volume, blank), destination_typ=<destination_type>: destination type (one of: volume, local) (optional), disk_bus=<disk_bus>: device bus (one of: uml, lxc, virtio, ... 
) (optional), device_type=<device_type>: device type (one of: disk, cdrom, etc. (optional), device_name=<device_name>: name of the device (optional), volume_size=<volume_size>: size of the block device in MiB (for swap) or GiB (for everything else) (optional), guest_format=<guest_format>: format of device (optional), boot_index=<boot_index>: index of disk used to order boot disk (required for volume-backed instances), delete_on_termination=<true|false>: whether to delete the volume upon deletion of server (optional), tag=<tag>: device metadata tag (optional), volume_type=<volume_type>: type of volume to create (name or ID) when source if blank, image or snapshot and dest is volume (optional) --swap <swap> Create and attach a local swap block device of <swap_size> MiB. --ephemeral <size=size[,format=format]> Create and attach a local ephemeral block device of <size> GiB and format it to <format>. --network <network> Create a nic on the server and connect it to network. Specify option multiple times to create multiple NICs. This is a wrapper for the --nic net-id=<network> parameter that provides simple syntax for the standard use case of connecting a new server to a given network. For more advanced use cases, refer to the -- nic parameter. --port <port> Create a nic on the server and connect it to port. Specify option multiple times to create multiple NICs. This is a wrapper for the --nic port-id=<port> parameter that provides simple syntax for the standard use case of connecting a new server to a given port. For more advanced use cases, refer to the --nic parameter. --nic <net-id=net-uuid,port-id=port-uuid,v4-fixed-ip=ip-addr,v6-fixed-ip=ip-addr,tag=tag,auto,none> Create a nic on the server. NIC in the format: net-id=<net-uuid>: attach NIC to network with this UUID, port-id=<port-uuid>: attach NIC to port with this UUID, v4-fixed-ip=<ip-addr>: IPv4 fixed address for NIC (optional), v6-fixed-ip=<ip-addr>: IPv6 fixed address for NIC (optional), tag: interface metadata tag (optional) (supported by --os-compute-api-version 2.43 or above), none: (v2.37+) no network is attached, auto: (v2.37+) the compute service will automatically allocate a network. Specify option multiple times to create multiple NICs. Specifying a --nic of auto or none cannot be used with any other --nic value. Either net-id or port-id must be provided, but not both. --password <password> Set the password to this server --security-group <security-group> Security group to assign to this server (name or id) (repeat option to set multiple groups) --key-name <key-name> Keypair to inject into this server --property <key=value> Set a property on this server (repeat option to set multiple values) --file <dest-filename=source-filename> File(s) to inject into image before boot (repeat option to set multiple files)(supported by --os- compute-api-version 2.57 or below) --user-data <user-data> User data file to serve from the metadata server --description <description> Set description for the server (supported by --os- compute-api-version 2.19 or above) --availability-zone <zone-name> Select an availability zone for the server. host and node are optional parameters. Availability zone in the format <zone-name>:<host-name>:<node-name>, <zone- name>::<node-name>, <zone-name>:<host-name> or <zone- name> --host <host> Requested host to create servers. (admin only) (supported by --os-compute-api-version 2.74 or above) --hypervisor-hostname <hypervisor-hostname> Requested hypervisor hostname to create servers. 
(admin only) (supported by --os-compute-api-version 2.74 or above) --hint <key=value> Hints for the scheduler --use-config-drive Enable config drive. --no-config-drive Disable config drive. --config-drive <config-drive-volume>|True deprecated use specified volume as the config drive, or True to use an ephemeral drive. Replaced by --use-config-drive . --min <count> Minimum number of servers to launch (default=1) --max <count> Maximum number of servers to launch (default=1) --tag <tag> Tags for the server. specify multiple times to add multiple tags. (supported by --os-compute-api-version 2.52 or above) --wait Wait for build to complete Table 72.21. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 72.22. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 72.23. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 72.24. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 72.9. server delete Delete server(s) Usage: Table 72.25. Positional arguments Value Summary <server> Server(s) to delete (name or id) Table 72.26. Command arguments Value Summary -h, --help Show this help message and exit --force Force delete server(s) --all-projects Delete server(s) in another project by name (admin only)(can be specified using the ALL_PROJECTS envvar) --wait Wait for delete to complete 72.10. server dump create Create a dump file in server(s) Trigger crash dump in server(s) with features like kdump in Linux. It will create a dump file in the server(s) dumping the server(s)' memory, and also crash the server(s). OSC sees the dump file (server dump) as a kind of resource. This command requires ``--os-compute-api- version`` 2.17 or greater. Usage: Table 72.27. Positional arguments Value Summary <server> Server(s) to create dump file (name or id) Table 72.28. Command arguments Value Summary -h, --help Show this help message and exit 72.11. server evacuate Evacuate a server to a different host. This command is used to recreate a server after the host it was on has failed. It can only be used if the compute service that manages the server is down. This command should only be used by an admin after they have confirmed that the instance is not running on the failed host. If the server instance was created with an ephemeral root disk on non-shared storage the server will be rebuilt using the original glance image preserving the ports and any attached data volumes. If the server uses boot for volume or has its root disk on shared storage the root disk will be preserved and reused for the evacuated instance on the new host. Usage: Table 72.29. Positional arguments Value Summary <server> Server (name or id) Table 72.30. Command arguments Value Summary -h, --help Show this help message and exit --wait Wait for evacuation to complete --host <host> Set the preferred host on which to rebuild the evacuated server. The host will be validated by the scheduler. 
(supported by --os-compute-api-version 2.29 or above) --password <password> Set the password on the evacuated instance. this option is mutually exclusive with the --shared-storage option --shared-storage Indicate that the instance is on shared storage. this will be auto-calculated with --os-compute-api-version 2.14 and greater and should not be used with later microversions. This option is mutually exclusive with the --password option Table 72.31. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 72.32. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 72.33. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 72.34. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 72.12. server event list List recent events of a server. Specify ``--os-compute-api-version 2.21`` or higher to show events for a deleted server, specified by ID only. Usage: Table 72.35. Positional arguments Value Summary <server> Server to list events (name or id) Table 72.36. Command arguments Value Summary -h, --help Show this help message and exit --long List additional fields in output --changes-since <changes-since> List only server events changed later or equal to a certain point of time. The provided time should be an ISO 8061 formatted time, e.g. ``2016-03-04T06:27:59Z``. (supported with --os- compute-api-version 2.58 or above) --changes-before <changes-before> List only server events changed earlier or equal to a certain point of time. The provided time should be an ISO 8061 formatted time, e.g. ``2016-03-04T06:27:59Z``. (supported with --os- compute-api-version 2.66 or above) --marker MARKER The last server event id of the page (supported by --os-compute-api-version 2.58 or above) --limit LIMIT Maximum number of server events to display (supported by --os-compute-api-version 2.58 or above) Table 72.37. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 72.38. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 72.39. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 72.40. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. 
Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 72.13. server event show Show server event details. Specify ``--os-compute-api-version 2.21`` or higher to show event details for a deleted server, specified by ID only. Specify ``--os-compute-api-version 2.51`` or higher to show event details for non- admin users. Usage: Table 72.41. Positional arguments Value Summary <server> Server to show event details (name or id) <request-id> Request id of the event to show (id only) Table 72.42. Command arguments Value Summary -h, --help Show this help message and exit Table 72.43. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 72.44. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 72.45. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 72.46. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 72.14. server group create Create a new server group. Usage: Table 72.47. Positional arguments Value Summary <name> New server group name Table 72.48. Command arguments Value Summary -h, --help Show this help message and exit --policy <policy> Add a policy to <name> specify --os-compute-api- version 2.15 or higher for the soft-affinity or soft-anti-affinity policy. --rule <key=value> A rule for the policy. currently, only the max_server_per_host rule is supported for the anti- affinity policy. Table 72.49. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 72.50. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 72.51. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 72.52. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 72.15. server group delete Delete existing server group(s). Usage: Table 72.53. Positional arguments Value Summary <server-group> Server group(s) to delete (name or id) Table 72.54. Command arguments Value Summary -h, --help Show this help message and exit 72.16. server group list List all server groups. Usage: Table 72.55. Command arguments Value Summary -h, --help Show this help message and exit --all-projects Display information from all projects (admin only) --long List additional fields in output --offset <offset> Index from which to start listing servers. this should typically be a factor of --limit. 
Display all servers groups if not specified. --limit <limit> Maximum number of server groups to display. if limit is greater than osapi_max_limit option of Nova API, osapi_max_limit will be used instead. Table 72.56. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 72.57. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 72.58. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 72.59. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 72.17. server group show Display server group details. Usage: Table 72.60. Positional arguments Value Summary <server-group> Server group to display (name or id) Table 72.61. Command arguments Value Summary -h, --help Show this help message and exit Table 72.62. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 72.63. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 72.64. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 72.65. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 72.18. server image create Create a new server disk image from an existing server Usage: Table 72.66. Positional arguments Value Summary <server> Server to create image (name or id) Table 72.67. Command arguments Value Summary -h, --help Show this help message and exit --name <image-name> Name of new disk image (default: server name) --property <key=value> Set a new property to meta_data.json on the metadata server (repeat option to set multiple values) --wait Wait for operation to complete Table 72.68. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 72.69. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 72.70. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 72.71. 
Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 72.19. server list List servers Usage: Table 72.72. Command arguments Value Summary -h, --help Show this help message and exit --reservation-id <reservation-id> Only return instances that match the reservation --ip <ip-address-regex> Regular expression to match ip addresses --ip6 <ip-address-regex> Regular expression to match ipv6 addresses. note that this option only applies for non-admin users when using ``--os-compute-api-version`` 2.5 or greater. --name <name-regex> Regular expression to match names --instance-name <server-name> Regular expression to match instance name (admin only) --status <status> Search by server status --flavor <flavor> Search by flavor (name or id) --image <image> Search by image (name or id) --host <hostname> Search by hostname --all-projects Include all projects (admin only) (can be specified using the ALL_PROJECTS envvar) --project <project> Search by project (admin only) (name or id) --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. --user <user> Search by user (name or id) (admin only before microversion 2.83) --user-domain <user-domain> Domain the user belongs to (name or id). this can be used in case collisions between user names exist. --deleted Only display deleted servers (admin only) --availability-zone AVAILABILITY_ZONE Search by availability zone (admin only before microversion 2.83) --key-name KEY_NAME Search by keypair name (admin only before microversion 2.83) --config-drive Only display servers with a config drive attached (admin only before microversion 2.83) --no-config-drive Only display servers without a config drive attached (admin only before microversion 2.83) --progress PROGRESS Search by progress value (%) (admin only before microversion 2.83) --vm-state <state> Search by vm_state value (admin only before microversion 2.83) --task-state <state> Search by task_state value (admin only before microversion 2.83) --power-state <state> Search by power_state value (admin only before microversion 2.83) --long List additional fields in output -n, --no-name-lookup Skip flavor and image name lookup. mutually exclusive with "--name-lookup-one-by-one" option. --name-lookup-one-by-one When looking up flavor and image names, look them upone by one as needed instead of all together (default). Mutually exclusive with "--no-name- lookup|-n" option. --marker <server> The last server of the page. display list of servers after marker. Display all servers if not specified. When used with ``--deleted``, the marker must be an ID, otherwise a name or ID can be used. --limit <num-servers> Maximum number of servers to display. if limit equals -1, all servers will be displayed. If limit is greater than osapi_max_limit option of Nova API, osapi_max_limit will be used instead. --changes-before <changes-before> List only servers changed before a certain point of time. The provided time should be an ISO 8061 formatted time (e.g., 2016-03-05T06:27:59Z). 
(supported by --os-compute-api-version 2.66 or above) --changes-since <changes-since> List only servers changed after a certain point of time. The provided time should be an ISO 8061 formatted time (e.g., 2016-03-04T06:27:59Z). --locked Only display locked servers (supported by --os- compute-api-version 2.73 or above) --unlocked Only display unlocked servers (supported by --os- compute-api-version 2.73 or above) --tags <tag> Only list servers with the specified tag. specify multiple times to filter on multiple tags. (supported by --os-compute-api-version 2.26 or above) --not-tags <tag> Only list servers without the specified tag. specify multiple times to filter on multiple tags. (supported by --os-compute-api-version 2.26 or above) Table 72.73. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 72.74. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 72.75. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 72.76. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 72.20. server lock Lock server(s). A non-admin user will not be able to execute actions Usage: Table 72.77. Positional arguments Value Summary <server> Server(s) to lock (name or id) Table 72.78. Command arguments Value Summary -h, --help Show this help message and exit --reason <reason> Reason for locking the server(s). requires ``--os- compute-api-version`` 2.73 or greater. 72.21. server migrate confirm DEPRECATED: Confirm server migration. Use server migration confirm instead. Usage: Table 72.79. Positional arguments Value Summary <server> Server (name or id) Table 72.80. Command arguments Value Summary -h, --help Show this help message and exit 72.22. server migrate revert Revert server migration. Use server migration revert instead. Usage: Table 72.81. Positional arguments Value Summary <server> Server (name or id) Table 72.82. Command arguments Value Summary -h, --help Show this help message and exit 72.23. server migrate Migrate server to different host. A migrate operation is implemented as a resize operation using the same flavor as the old server. This means that, like resize, migrate works by creating a new server using the same flavor and copying the contents of the original disk into a new one. As with resize, the migrate operation is a two-step process for the user: the first step is to perform the migrate, and the second step is to either confirm (verify) success and release the old server, or to declare a revert to release the new server and restart the old one. Usage: Table 72.83. Positional arguments Value Summary <server> Server (name or id) Table 72.84. 
Command arguments Value Summary -h, --help Show this help message and exit --live-migration Live migrate the server; use the ``--host`` option to specify a target host for the migration which will be validated by the scheduler --host <hostname> Migrate the server to the specified host. (supported with --os-compute-api-version 2.30 or above when used with the --live-migration option) (supported with --os-compute-api-version 2.56 or above when used without the --live-migration option) --shared-migration Perform a shared live migration (default before --os- compute-api-version 2.25, auto after) --block-migration Perform a block live migration (auto-configured from --os-compute-api-version 2.25) --disk-overcommit Allow disk over-commit on the destination host(supported with --os-compute-api-version 2.24 or below) --no-disk-overcommit Do not over-commit disk on the destination host (default)(supported with --os-compute-api-version 2.24 or below) --wait Wait for migrate to complete 72.24. server migration abort Cancel an ongoing live migration. This command requires ``--os-compute-api- version`` 2.24 or greater. Usage: Table 72.85. Positional arguments Value Summary <server> Server (name or id) <migration> Migration (id) Table 72.86. Command arguments Value Summary -h, --help Show this help message and exit 72.25. server migration confirm Confirm server migration. Confirm (verify) success of the migration operation and release the old server. Usage: Table 72.87. Positional arguments Value Summary <server> Server (name or id) Table 72.88. Command arguments Value Summary -h, --help Show this help message and exit 72.26. server migration force complete Force an ongoing live migration to complete. This command requires ``--os- compute-api-version`` 2.22 or greater. Usage: Table 72.89. Positional arguments Value Summary <server> Server (name or id) <migration> Migration (id) Table 72.90. Command arguments Value Summary -h, --help Show this help message and exit 72.27. server migration list List server migrations Usage: Table 72.91. Command arguments Value Summary -h, --help Show this help message and exit --server <server> Filter migrations by server (name or id) --host <host> Filter migrations by source or destination host --status <status> Filter migrations by status --type <type> Filter migrations by type --marker <marker> The last migration of the page; displays list of migrations after marker . Note that the marker is the migration UUID. (supported with --os-compute-api- version 2.59 or above) --limit <limit> Maximum number of migrations to display. note that there is a configurable max limit on the server, and the limit that is used will be the minimum of what is requested here and what is configured in the server. (supported with --os-compute-api-version 2.59 or above) --changes-since <changes-since> List only migrations changed later or equal to a certain point of time. The provided time should be an ISO 8061 formatted time, e.g. ``2016-03-04T06:27:59Z``. (supported with --os- compute-api-version 2.59 or above) --changes-before <changes-before> List only migrations changed earlier or equal to a certain point of time. The provided time should be an ISO 8061 formatted time, e.g. ``2016-03-04T06:27:59Z``. (supported with --os- compute-api-version 2.66 or above) --project <project> Filter migrations by project (name or id) (supported with --os-compute-api-version 2.80 or above) --project-domain <project-domain> Domain the project belongs to (name or id). 
this can be used in case collisions between project names exist. --user <user> Filter migrations by user (name or id) (supported with --os-compute-api-version 2.80 or above) --user-domain <user-domain> Domain the user belongs to (name or id). this can be used in case collisions between user names exist. Table 72.92. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 72.93. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 72.94. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 72.95. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 72.28. server migration revert Revert server migration. Revert the migration operation. Release the new server and restart the old one. Usage: Table 72.96. Positional arguments Value Summary <server> Server (name or id) Table 72.97. Command arguments Value Summary -h, --help Show this help message and exit 72.29. server migration show Show a migration for a given server. Usage: Table 72.98. Positional arguments Value Summary <server> Server (name or id) <migration> Migration (id) Table 72.99. Command arguments Value Summary -h, --help Show this help message and exit 72.30. server pause Pause server(s) Usage: Table 72.100. Positional arguments Value Summary <server> Server(s) to pause (name or id) Table 72.101. Command arguments Value Summary -h, --help Show this help message and exit 72.31. server reboot Perform a hard or soft server reboot Usage: Table 72.102. Positional arguments Value Summary <server> Server (name or id) Table 72.103. Command arguments Value Summary -h, --help Show this help message and exit --hard Perform a hard reboot --soft Perform a soft reboot --wait Wait for reboot to complete 72.32. server rebuild Rebuild server Usage: Table 72.104. Positional arguments Value Summary <server> Server (name or id) Table 72.105. Command arguments Value Summary -h, --help Show this help message and exit --image <image> Recreate server from the specified image (name or ID).Defaults to the currently used one. --name <name> Set the new name of the rebuilt server --password <password> Set the password on the rebuilt server --property <key=value> Set a new property on the rebuilt server (repeat option to set multiple values) --description <description> Set a new description on the rebuilt server (supported by --os-compute-api-version 2.19 or above) --preserve-ephemeral Preserve the default ephemeral storage partition on rebuild. --no-preserve-ephemeral Do not preserve the default ephemeral storage partition on rebuild. --key-name <key-name> Set the key name of key pair on the rebuilt server. 
Cannot be specified with the --key-unset option. (supported by --os-compute-api-version 2.54 or above) --no-key-name Unset the key name of key pair on the rebuilt server. Cannot be specified with the --key-name option. (supported by --os-compute-api-version 2.54 or above) --user-data <user-data> Add a new user data file to the rebuilt server. cannot be specified with the --no-user-data option. (supported by --os-compute-api-version 2.57 or above) --no-user-data Remove existing user data when rebuilding server. Cannot be specified with the --user-data option. (supported by --os-compute-api-version 2.57 or above) --trusted-image-cert <trusted-cert-id> Trusted image certificate ids used to validate certificates during the image signature verification process. Defaults to env[OS_TRUSTED_IMAGE_CERTIFICATE_IDS]. May be specified multiple times to pass multiple trusted image certificate IDs. Cannot be specified with the --no-trusted-certs option. (supported by --os-compute- api-version 2.63 or above) --no-trusted-image-certs Remove any existing trusted image certificates from the server. Cannot be specified with the --trusted- certs option. (supported by --os-compute-api-version 2.63 or above) --wait Wait for rebuild to complete Table 72.106. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 72.107. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 72.108. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 72.109. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 72.33. server remove fixed ip Remove fixed IP address from server Usage: Table 72.110. Positional arguments Value Summary <server> Server to remove the fixed ip address from (name or id) <ip-address> Fixed ip address to remove from the server (ip only) Table 72.111. Command arguments Value Summary -h, --help Show this help message and exit 72.34. server remove floating ip Remove floating IP address from server Usage: Table 72.112. Positional arguments Value Summary <server> Server to remove the floating ip address from (name or id) <ip-address> Floating ip address to remove from server (ip only) Table 72.113. Command arguments Value Summary -h, --help Show this help message and exit 72.35. server remove network Remove all ports of a network from server Usage: Table 72.114. Positional arguments Value Summary <server> Server to remove the port from (name or id) <network> Network to remove from the server (name or id) Table 72.115. Command arguments Value Summary -h, --help Show this help message and exit 72.36. server remove port Remove port from server Usage: Table 72.116. Positional arguments Value Summary <server> Server to remove the port from (name or id) <port> Port to remove from the server (name or id) Table 72.117. Command arguments Value Summary -h, --help Show this help message and exit 72.37. server remove security group Remove security group from server Usage: Table 72.118. 
Positional arguments Value Summary <server> Name or id of server to use <group> Name or id of security group to remove from server Table 72.119. Command arguments Value Summary -h, --help Show this help message and exit 72.38. server remove volume Remove volume from server. Specify ``--os-compute-api-version 2.20`` or higher to remove a volume from a server with status ``SHELVED`` or ``SHELVED_OFFLOADED``. Usage: Table 72.120. Positional arguments Value Summary <server> Server (name or id) <volume> Volume to remove (name or id) Table 72.121. Command arguments Value Summary -h, --help Show this help message and exit 72.39. server rescue Put server in rescue mode Usage: Table 72.122. Positional arguments Value Summary <server> Server (name or id) Table 72.123. Command arguments Value Summary -h, --help Show this help message and exit --image <image> Image (name or id) to use for the rescue mode. Defaults to the currently used one. --password <password> Set the password on the rescued instance 72.40. server resize confirm Confirm server resize. Confirm (verify) success of resize operation and release the old server. Usage: Table 72.124. Positional arguments Value Summary <server> Server (name or id) Table 72.125. Command arguments Value Summary -h, --help Show this help message and exit 72.41. server resize revert Revert server resize. Revert the resize operation. Release the new server and restart the old one. Usage: Table 72.126. Positional arguments Value Summary <server> Server (name or id) Table 72.127. Command arguments Value Summary -h, --help Show this help message and exit 72.42. server resize Scale server to a new flavor. A resize operation is implemented by creating a new server and copying the contents of the original disk into a new one. It is a two-step process for the user: the first step is to perform the resize, and the second step is to either confirm (verify) success and release the old server or to declare a revert to release the new server and restart the old one. Usage: Table 72.128. Positional arguments Value Summary <server> Server (name or id) Table 72.129. Command arguments Value Summary -h, --help Show this help message and exit --flavor <flavor> Resize server to specified flavor --confirm Confirm server resize is complete --revert Restore server state before resize --wait Wait for resize to complete 72.43. server restore Restore server(s) Usage: Table 72.130. Positional arguments Value Summary <server> Server(s) to restore (name or id) Table 72.131. Command arguments Value Summary -h, --help Show this help message and exit 72.44. server resume Resume server(s) Usage: Table 72.132. Positional arguments Value Summary <server> Server(s) to resume (name or id) Table 72.133. Command arguments Value Summary -h, --help Show this help message and exit 72.45. server set Set server properties Usage: Table 72.134. Positional arguments Value Summary <server> Server (name or id) Table 72.135. 
Command arguments Value Summary -h, --help Show this help message and exit --name <new-name> New server name --password PASSWORD Set the server password --no-password Clear the admin password for the server from the metadata service; note that this action does not actually change the server password --property <key=value> Property to add/change for this server (repeat option to set multiple properties) --state <state> New server state (valid value: active, error) --description <description> New server description (supported by --os-compute-api- version 2.19 or above) --tag <tag> Tag for the server. specify multiple times to add multiple tags. (supported by --os-compute-api-version 2.26 or above) 72.46. server shelve Shelve and optionally offload server(s). Shelving a server creates a snapshot of the server and stores this snapshot before shutting down the server. This shelved server can then be offloaded or deleted from the host, freeing up remaining resources on the host, such as network interfaces. Shelved servers can be unshelved, restoring the server from the snapshot. Shelving is therefore useful where users wish to retain the UUID and IP of a server, without utilizing other resources or disks. Most clouds are configured to automatically offload shelved servers immediately or after a small delay. For clouds where this is not configured, or where the delay is larger, offloading can be manually specified. This is an admin-only operation by default. Usage: Table 72.136. Positional arguments Value Summary <server> Server(s) to shelve (name or id) Table 72.137. Command arguments Value Summary -h, --help Show this help message and exit --offload Remove the shelved server(s) from the host (admin only). Invoking this option on an unshelved server(s) will result in the server being shelved first --wait Wait for shelve and/or offload operation to complete 72.47. server show Show server details. Specify ``--os-compute-api-version 2.47`` or higher to see the embedded flavor information for the server. Usage: Table 72.138. Positional arguments Value Summary <server> Server (name or id) Table 72.139. Command arguments Value Summary -h, --help Show this help message and exit --diagnostics Display server diagnostics information --topology Include topology information in the output (supported by --os-compute-api-version 2.78 or above) Table 72.140. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 72.141. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 72.142. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 72.143. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 72.48. server ssh SSH to server Usage: Table 72.144. Positional arguments Value Summary <server> Server (name or id) Table 72.145. 
Command arguments Value Summary -h, --help Show this help message and exit --login <login-name> Login name (ssh -l option) --port <port> Destination port (ssh -p option) --identity <keyfile> Private key file (ssh -i option) --option <config-options> Options in ssh_config(5) format (ssh -o option) -4 Use only ipv4 addresses -6 Use only ipv6 addresses --public Use public ip address --private Use private ip address --address-type <address-type> Use other ip address (public, private, etc) 72.49. server start Start server(s). Usage: Table 72.146. Positional arguments Value Summary <server> Server(s) to start (name or id) Table 72.147. Command arguments Value Summary -h, --help Show this help message and exit --all-projects Start server(s) in another project by name (admin only)(can be specified using the ALL_PROJECTS envvar) 72.50. server stop Stop server(s). Usage: Table 72.148. Positional arguments Value Summary <server> Server(s) to stop (name or id) Table 72.149. Command arguments Value Summary -h, --help Show this help message and exit --all-projects Stop server(s) in another project by name (admin only)(can be specified using the ALL_PROJECTS envvar) 72.51. server suspend Suspend server(s) Usage: Table 72.150. Positional arguments Value Summary <server> Server(s) to suspend (name or id) Table 72.151. Command arguments Value Summary -h, --help Show this help message and exit 72.52. server unlock Unlock server(s) Usage: Table 72.152. Positional arguments Value Summary <server> Server(s) to unlock (name or id) Table 72.153. Command arguments Value Summary -h, --help Show this help message and exit 72.53. server unpause Unpause server(s) Usage: Table 72.154. Positional arguments Value Summary <server> Server(s) to unpause (name or id) Table 72.155. Command arguments Value Summary -h, --help Show this help message and exit 72.54. server unrescue Restore server from rescue mode Usage: Table 72.156. Positional arguments Value Summary <server> Server (name or id) Table 72.157. Command arguments Value Summary -h, --help Show this help message and exit 72.55. server unset Unset server properties and tags Usage: Table 72.158. Positional arguments Value Summary <server> Server (name or id) Table 72.159. Command arguments Value Summary -h, --help Show this help message and exit --property <key> Property key to remove from server (repeat option to remove multiple values) --description Unset server description (supported by --os-compute-api- version 2.19 or above) --tag <tag> Tag to remove from the server. specify multiple times to remove multiple tags. (supported by --os-compute-api- version 2.26 or above) 72.56. server unshelve Unshelve server(s) Usage: Table 72.160. Positional arguments Value Summary <server> Server(s) to unshelve (name or id) Table 72.161. Command arguments Value Summary -h, --help Show this help message and exit --availability-zone AVAILABILITY_ZONE Name of the availability zone in which to unshelve a SHELVED_OFFLOADED server (supported by --os-compute- api-version 2.77 or above) --wait Wait for unshelve operation to complete 72.57. server volume list List all the volumes attached to a server. Usage: Table 72.162. Positional arguments Value Summary server Server to list volume attachments for (name or id) Table 72.163. Command arguments Value Summary -h, --help Show this help message and exit Table 72.164. 
Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 72.165. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 72.166. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 72.167. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 72.58. server volume update Update a volume attachment on the server. Usage: Table 72.168. Positional arguments Value Summary server Server to update volume for (name or id) volume Volume (id) Table 72.169. Command arguments Value Summary -h, --help Show this help message and exit --delete-on-termination Delete the volume when the server is destroyed (supported by --os-compute-api-version 2.85 or above) --preserve-on-termination Preserve the volume when the server is destroyed (supported by --os-compute-api-version 2.85 or above) | [
"openstack server add fixed ip [-h] [--fixed-ip-address <ip-address>] [--tag <tag>] <server> <network>",
"openstack server add floating ip [-h] [--fixed-ip-address <ip-address>] <server> <ip-address>",
"openstack server add network [-h] [--tag <tag>] <server> <network>",
"openstack server add port [-h] [--tag <tag>] <server> <port>",
"openstack server add security group [-h] <server> <group>",
"openstack server add volume [-h] [--device <device>] [--tag <tag>] [--enable-delete-on-termination | --disable-delete-on-termination] <server> <volume>",
"openstack server backup create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--name <image-name>] [--type <backup-type>] [--rotate <count>] [--wait] <server>",
"openstack server create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] --flavor <flavor> (--image <image> | --image-property <key=value> | --volume <volume> | --snapshot <snapshot>) [--boot-from-volume <volume-size>] [--block-device-mapping <dev-name=mapping>] [--block-device] [--swap <swap>] [--ephemeral <size=size[,format=format]>] [--network <network>] [--port <port>] [--nic <net-id=net-uuid,port-id=port-uuid,v4-fixed-ip=ip-addr,v6-fixed-ip=ip-addr,tag=tag,auto,none>] [--password <password>] [--security-group <security-group>] [--key-name <key-name>] [--property <key=value>] [--file <dest-filename=source-filename>] [--user-data <user-data>] [--description <description>] [--availability-zone <zone-name>] [--host <host>] [--hypervisor-hostname <hypervisor-hostname>] [--hint <key=value>] [--use-config-drive | --no-config-drive | --config-drive <config-drive-volume>|True] [--min <count>] [--max <count>] [--tag <tag>] [--wait] <server-name>",
"openstack server delete [-h] [--force] [--all-projects] [--wait] <server> [<server> ...]",
"openstack server dump create [-h] <server> [<server> ...]",
"openstack server evacuate [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--wait] [--host <host>] [--password <password> | --shared-storage] <server>",
"openstack server event list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--long] [--changes-since <changes-since>] [--changes-before <changes-before>] [--marker MARKER] [--limit LIMIT] <server>",
"openstack server event show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <server> <request-id>",
"openstack server group create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--policy <policy>] [--rule <key=value>] <name>",
"openstack server group delete [-h] <server-group> [<server-group> ...]",
"openstack server group list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--all-projects] [--long] [--offset <offset>] [--limit <limit>]",
"openstack server group show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <server-group>",
"openstack server image create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--name <image-name>] [--property <key=value>] [--wait] <server>",
"openstack server list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--reservation-id <reservation-id>] [--ip <ip-address-regex>] [--ip6 <ip-address-regex>] [--name <name-regex>] [--instance-name <server-name>] [--status <status>] [--flavor <flavor>] [--image <image>] [--host <hostname>] [--all-projects] [--project <project>] [--project-domain <project-domain>] [--user <user>] [--user-domain <user-domain>] [--deleted] [--availability-zone AVAILABILITY_ZONE] [--key-name KEY_NAME] [--config-drive | --no-config-drive] [--progress PROGRESS] [--vm-state <state>] [--task-state <state>] [--power-state <state>] [--long] [-n | --name-lookup-one-by-one] [--marker <server>] [--limit <num-servers>] [--changes-before <changes-before>] [--changes-since <changes-since>] [--locked | --unlocked] [--tags <tag>] [--not-tags <tag>]",
"openstack server lock [-h] [--reason <reason>] <server> [<server> ...]",
"openstack server migrate confirm [-h] <server>",
"openstack server migrate revert [-h] <server>",
"openstack server migrate [-h] [--live-migration] [--host <hostname>] [--shared-migration | --block-migration] [--disk-overcommit | --no-disk-overcommit] [--wait] <server>",
"openstack server migration abort [-h] <server> <migration>",
"openstack server migration confirm [-h] <server>",
"openstack server migration force complete [-h] <server> <migration>",
"openstack server migration list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--server <server>] [--host <host>] [--status <status>] [--type <type>] [--marker <marker>] [--limit <limit>] [--changes-since <changes-since>] [--changes-before <changes-before>] [--project <project>] [--project-domain <project-domain>] [--user <user>] [--user-domain <user-domain>]",
"openstack server migration revert [-h] <server>",
"openstack server migration show [-h] <server> <migration>",
"openstack server pause [-h] <server> [<server> ...]",
"openstack server reboot [-h] [--hard | --soft] [--wait] <server>",
"openstack server rebuild [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--image <image>] [--name <name>] [--password <password>] [--property <key=value>] [--description <description>] [--preserve-ephemeral | --no-preserve-ephemeral] [--key-name <key-name> | --no-key-name] [--user-data <user-data> | --no-user-data] [--trusted-image-cert <trusted-cert-id> | --no-trusted-image-certs] [--wait] <server>",
"openstack server remove fixed ip [-h] <server> <ip-address>",
"openstack server remove floating ip [-h] <server> <ip-address>",
"openstack server remove network [-h] <server> <network>",
"openstack server remove port [-h] <server> <port>",
"openstack server remove security group [-h] <server> <group>",
"openstack server remove volume [-h] <server> <volume>",
"openstack server rescue [-h] [--image <image>] [--password <password>] <server>",
"openstack server resize confirm [-h] <server>",
"openstack server resize revert [-h] <server>",
"openstack server resize [-h] [--flavor <flavor> | --confirm | --revert] [--wait] <server>",
"openstack server restore [-h] <server> [<server> ...]",
"openstack server resume [-h] <server> [<server> ...]",
"openstack server set [-h] [--name <new-name>] [--password PASSWORD | --no-password] [--property <key=value>] [--state <state>] [--description <description>] [--tag <tag>] <server>",
"openstack server shelve [-h] [--offload] [--wait] <server> [<server> ...]",
"openstack server show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--diagnostics | --topology] <server>",
"openstack server ssh [-h] [--login <login-name>] [--port <port>] [--identity <keyfile>] [--option <config-options>] [-4 | -6] [--public | --private | --address-type <address-type>] <server>",
"openstack server start [-h] [--all-projects] <server> [<server> ...]",
"openstack server stop [-h] [--all-projects] <server> [<server> ...]",
"openstack server suspend [-h] <server> [<server> ...]",
"openstack server unlock [-h] <server> [<server> ...]",
"openstack server unpause [-h] <server> [<server> ...]",
"openstack server unrescue [-h] <server>",
"openstack server unset [-h] [--property <key>] [--description] [--tag <tag>] <server>",
"openstack server unshelve [-h] [--availability-zone AVAILABILITY_ZONE] [--wait] <server> [<server> ...]",
"openstack server volume list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] server",
"openstack server volume update [-h] [--delete-on-termination | --preserve-on-termination] server volume"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/command_line_interface_reference/server |
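To illustrate how the synopses above combine in practice, the following sequence boots a server with the create options documented in this chapter and then verifies it with server list; the flavor, image, network, key pair, and server names are hypothetical placeholders rather than values taken from this reference:
openstack server create --flavor m1.small --image rhel-9 --network private --key-name mykey --wait demo-server
openstack server list --name demo-server --long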
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation If you have a suggestion to improve this documentation, or find an error, you can contact technical support at https://access.redhat.com to open a request. | null | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/red_hat_ansible_automation_platform_performance_considerations_for_operator_based_installations/providing-feedback |
Chapter 8. Changing the insights-client schedule | Chapter 8. Changing the insights-client schedule You can disable, enable, and modify the schedule that controls when the Insights client runs. By default, the Insights client runs every 24 hours. The timers in the default schedules vary so that all systems do not run the client at the same time. 8.1. Disabling the Insights client schedule You must disable the client schedule before you can change the default Insights client settings and create a new schedule. The procedure you use to disable the insights-client schedule depends on your Red Hat Enterprise Linux and client versions. Additional resources KCS article about creating custom schedules KCS article about cron 8.1.1. Disabling the client schedule for RHEL 6, RHEL 7 and later with Client 3.x Note The --no-schedule option is deprecated in Client 3.x and later. Prerequisites Root-level access to your system. Procedure Enter the insights-client command with the --version option to verify the client version. Enter the insights-client command with the --disable-schedule option to disable the client schedule. 8.2. Enabling the Insights client schedule When you first enable the client schedule, it runs using its default settings. If you make changes to the schedule, those settings take precedence. When you run insights-client from the command line, Insights client runs using the settings you specify for only that session. When the scheduled run takes place, it uses the default settings. 8.2.1. Enabling the Insights client schedule on RHEL 7 or later and Client 3.x You can enable the client schedule so that it runs on its default settings. If you change the default schedule settings, the changed settings take precedence. Prerequisites Root-level access to your system. The client schedule is disabled. (Optional) You modified the default schedule. Procedure To verify the client version, enter the insights-client command with the --version option. Enter the insights-client command with the --enable-schedule option to enable the client schedule. 8.3. Modifying the Insights client schedule To change when the Insights client runs, modify the schedule. The method that you use depends on the RHEL release and client version that your system is running. Select the procedure that matches your version of RHEL. For Red Hat Enterprise Linux 7.4 and earlier, use cron to modify the system schedule. For Red Hat Enterprise Linux 7.5 and later, update the systemd settings and the insights-client-timer file. 8.3.1. Scheduling insights-client using systemd settings Note Use this for systems running RHEL 7.5 and later with Client 3.x. You can change the default schedule for running insights-client by updating the system systemd settings and the insights-client.timer file. Prerequisites Root-level access to your system. Procedure To edit the settings in the insights-client.timer file, enter the systemctl edit command and the file name. This action opens an empty file with the default system editor. Enter different settings to modify the schedule. The values in this example are the default settings for systemd . Enable the insights-client schedule. Additional resources Review the man pages for systemctl(1) , systemd.timer(5) , and systemd.time(7) to understand systemd What is cron and how is it used? 8.3.2. 
Refreshing the package cache for systems managed by Red Hat Satellite Insights now provides the optional --build-packagecache command to ensure accurate reporting for applicable updates on Satellite-managed systems. This option rebuilds the yum/dnf package caches for insights-client , and creates a refreshed list of applicable updates for the system. You can run the command manually to rebuild the package caches immediately, or you can edit the client configuration file ( /etc/insights-client/insights-client.conf ) to rebuild the package caches automatically each time the system checks in to Insights. Additional resources For more information about how to run the --build-packagecache command, see Managing system content and patch updates with Red Hat Insights with FedRAMP . For more information about the --build-packagecache options, see the following KCS article: https://access.redhat.com/solutions/7041171 For more information about managing errata in Red Hat Satellite, see Managing content . | [
"insights-client --version Client: 3.0.6-0 Core: 3.0.121-1",
"insights-client --disable-schedule",
"insights-client --version Client: 3.0.6-0 Core: 3.0.121-1",
"insights-client --enable-schedule",
"systemctl edit insights-client.timer",
"[Timer] OnCalendar=daily RandomizedDelaySec=14400",
"insights-client --enable-schedule"
] | https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/client_configuration_guide_for_red_hat_insights_with_fedramp/assembly-client-changing-schedule |
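For example, to rebuild the yum/dnf package caches immediately rather than waiting for the next scheduled check-in, the option described above can be run manually as root; this is a minimal illustration of that workflow, not an additional documented procedure:
insights-client --build-packagecache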
18.2.2. Installation Phase 1 | 18.2.2. Installation Phase 1 After the kernel boot, you will configure one network device. This network device is needed to complete the installation. The interface you will use in installation phase 1 is the linuxrc interface, which is line-mode and text-based. (Refer to Chapter 21, Installation Phase 1: Configuring a Network Device .) | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/installation_procedure_overview-s390-phase-1 |
13.2.5. Configuring Services: NSS | 13.2.5. Configuring Services: NSS SSSD provides an NSS module, sssd_nss , which instructs the system to use SSSD to retrieve user information. The NSS configuration must include a reference to the SSSD module, and then the SSSD configuration sets how SSSD interacts with NSS. About NSS Service Maps and SSSD The Name Service Switch (NSS) provides a central configuration for services to look up a number of configuration and name resolution services. NSS provides one method of mapping system identities and services with configuration sources. SSSD works with NSS as a provider of services for several types of NSS maps: Passwords ( passwd ) Shadow passwords ( shadow ) Groups ( groups ) Netgroups ( netgroups ) Services ( services ) Procedure 13.1. Configuring NSS Services to Use SSSD NSS can use multiple identity and configuration providers for any and all of its service maps. The default is to use system files for services; for SSSD to be included, the nss_sss module has to be added for the desired service type. Use the Authentication Configuration tool to enable SSSD. This automatically configures the nsswitch.conf file to use SSSD as a provider, setting the password, shadow, group, and netgroups service maps to use the SSSD module: The services map is not enabled by default when SSSD is enabled with authconfig . To include that map, open the nsswitch.conf file and add the sss module to the services map: Procedure 13.2. Configuring SSSD to Work with NSS The options and configuration that SSSD uses to service NSS requests are configured in the SSSD configuration file, in the [nss] services section. Open the sssd.conf file. Make sure that NSS is listed as one of the services that works with SSSD. In the [nss] section, change any of the NSS parameters. These are listed in Table 13.2, "SSSD [nss] Configuration Parameters" . Restart SSSD. Table 13.2. SSSD [nss] Configuration Parameters Parameter Value Format Description entry_cache_nowait_percentage integer Specifies how long sssd_nss should return cached entries before refreshing the cache. Setting this to zero ( 0 ) disables the entry cache refresh. This configures the entry cache to update entries automatically in the background if they are requested after a certain percentage of the interval has passed. For example, if the interval is 300 seconds and the cache percentage is 75, then the entry cache will begin refreshing when a request comes in at 225 seconds, which is 75% of the interval. The allowed values for this option are 0 to 99, which sets the percentage based on the entry_cache_timeout value. The default value is 50%. entry_negative_timeout integer Specifies how long, in seconds, sssd_nss should cache negative cache hits. A negative cache hit is a query for an invalid database entry, including non-existent entries. filter_users, filter_groups string Tells SSSD to exclude certain users from being fetched from the NSS database. This is particularly useful for system accounts such as root . filter_users_in_groups Boolean Sets whether users listed in the filter_users list appear in group memberships when performing group lookups. If set to FALSE , group lookups return all users that are members of that group. If not specified, this value defaults to true , which filters the group member lists. debug_level integer, 0 - 9 Sets a debug logging level.
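After restarting SSSD, you can confirm that the maps configured above resolve through NSS by querying them with getent; the user and group names here are hypothetical examples rather than accounts defined in this procedure:
getent passwd jsmith
getent group developers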
NSS Compatibility Mode NSS compatibility (compat) mode provides the support for additional entries in the /etc/passwd file to ensure that users or members of netgroups have access to the system. To enable NSS compatibility mode to work with SSSD, add the following entries to the /etc/nsswitch.conf file: Once NSS compatibility mode is enabled, the following passwd entries are supported: + user - user Include ( + ) or exclude ( - ) a specified user from the Network Information System (NIS) map. +@ netgroup -@ netgroup Include ( + ) or exclude ( - ) all users in the given netgroup from the NIS map. + Exclude all users, except previously excluded ones from the NIS map. For more information about NSS compatibility mode, see the nsswitch.conf(5) manual page. | [
"~]# authconfig --enablesssd --update",
"passwd: files sss shadow: files sss group: files sss netgroup: files sss",
"~]# vim /etc/nsswitch.conf services: file sss",
"~]# vim /etc/sssd/sssd.conf",
"[sssd] config_file_version = 2 reconnection_retries = 3 sbus_timeout = 30 services = nss , pam",
"[nss] filter_groups = root filter_users = root reconnection_retries = 3 entry_cache_timeout = 300 entry_cache_nowait_percentage = 75",
"~]# service sssd restart",
"passwd: compat passwd_compat: sss"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/configuration_options-nss_configuration_options |
Chapter 10. Connecting to an instance | Chapter 10. Connecting to an instance You can access an instance from a location external to the cloud by using a remote shell such as SSH or WinRM, when you have allowed the protocol in the instance security group rules. You can also connect directly to the console of an instance, so that you can debug even if the network connection fails. Note If you did not provide a key pair to the instance, or allocate a security group to the instance, you can access the instance only from inside the cloud by using VNC. You cannot ping the instance. 10.1. Accessing an instance console You can connect directly to the VNC console for an instance by entering the VNC console URL in a browser. Procedure To display the VNC console URL for an instance, enter the following command: To connect directly to the VNC console, enter the displayed URL in a browser. 10.2. Logging in to an instance You can log in to public instances remotely. Prerequisites You have the key pair certificate for the instance. The certificate is downloaded when the key pair is created. If you did not create the key pair yourself, ask your administrator. The instance is configured as a public instance. For more information on the requirements of a public instance, see Providing public access to an instance . You have a cloud user account. Procedure Retrieve the floating IP address of the instance you want to log in to: Replace <instance> with the name or ID of the instance that you want to connect to. Use the automatically created cloud-user account to log in to your instance: Replace <keypair> with the name of the key pair. Replace <floating_ip> with the floating IP address of the instance. Tip You can use the following command to log in to an instance without the floating IP address: Replace <keypair> with the name of the key pair. Replace <instance> with the name or ID of the instance that you want to connect to. | [
"openstack console url show <vm_name> +-------+------------------------------------------------------+ | Field | Value | +-------+------------------------------------------------------+ | type | novnc | | url | http://172.25.250.50:6080/vnc_auto.html?token= | | | 962dfd71-f047-43d3-89a5-13cb88261eb9 | +-------+-------------------------------------------------------+",
"openstack server show <instance>",
"ssh -i ~/.ssh/<keypair>.pem cloud-user@<floatingIP> [cloud-user@demo-server1 ~]USD",
"openstack server ssh --login cloud-user --identity ~/.ssh/<keypair>.pem --private <instance>"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/creating_and_managing_instances/assembly_connecting-to-an-instance_instances |
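The login procedure assumes that SSH is already permitted by the instance's security group and that a floating IP has been assigned; if that preparation is still needed, a minimal sketch of the usual commands is shown below. The security group name default and the external network name public are assumptions - substitute the names used in your cloud.
openstack security group rule create --protocol tcp --dst-port 22 default    # allow inbound SSH (port 22)
openstack floating ip create public                                          # allocate a floating IP from the external network
openstack server add floating ip <instance> <floating_ip>                    # attach the floating IP to the instance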
Chapter 6. Network connections | Chapter 6. Network connections 6.1. Connection URLs Connection URLs encode the information used to establish new connections. Connection URL syntax Scheme - The connection transport, either amqp for unencrypted TCP or amqps for TCP with SSL/TLS encryption. Host - The remote network host. The value can be a hostname or a numeric IP address. IPv6 addresses must be enclosed in square brackets. Port - The remote network port. This value is optional. The default value is 5672 for the amqp scheme and 5671 for the amqps scheme. Connection URL examples 6.2. Creating outgoing connections To connect to a remote server, call the container::connect() method with a connection URL . This is typically done inside the messaging_handler::on_container_start() method. Example: Creating outgoing connections class example_handler : public proton::messaging_handler { void on_container_start(proton::container& cont) override { cont.connect("amqp://example.com"); } void on_connection_open(proton::connection& conn) override { std::cout << "The connection is open\n"; } }; For information about creating secure connections, see Chapter 7, Security . 6.3. Configuring reconnect Reconnect allows a client to recover from lost connections. It is used to ensure that the components in a distributed system reestablish communication after temporary network or component failures. AMQ C++ disables reconnect by default. To enable it, set the reconnect connection option to an instance of the reconnect_options class. Example: Enabling reconnect proton::connection_options opts {}; proton::reconnect_options ropts {}; opts.reconnect(ropts); container.connect("amqp://example.com", opts); With reconnect enabled, if a connection is lost or a connection attempt fails, the client will try again after a brief delay. The delay increases exponentially for each new attempt. To control the delays between connection attempts, set the delay , delay_multiplier , and max_delay options. All durations are specified in milliseconds. To limit the number of reconnect attempts, set the max_attempts option. Setting it to 0 removes any limit. Example: Configuring reconnect proton::connection_options opts {}; proton::reconnect_options ropts {}; ropts.delay(proton::duration(10)); ropts.delay_multiplier(2.0); ropts.max_delay(proton::duration::FOREVER); ropts.max_attempts(0); opts.reconnect(ropts); container.connect("amqp://example.com", opts); 6.4. Configuring failover AMQ C++ allows you to configure multiple connection endpoints. If connecting to one fails, the client attempts to connect to the next one in the list. If the list is exhausted, the process starts over. To specify alternate connection endpoints, set the failover_urls reconnect option to a list of connection URLs. Example: Configuring failover std::vector<std::string> failover_urls = { "amqp://backup1.example.com", "amqp://backup2.example.com" }; proton::connection_options opts {}; proton::reconnect_options ropts {}; opts.reconnect(ropts); ropts.failover_urls(failover_urls); container.connect("amqp://primary.example.com", opts); 6.5. Accepting incoming connections AMQ C++ can accept inbound network connections, enabling you to build custom messaging servers. To start listening for connections, use the proton::container::listen() method with a URL containing the local host address and port to listen on.
Example: Accepting incoming connections class example_handler : public proton::messaging_handler { void on_container_start(proton::container& cont) override { cont.listen("0.0.0.0"); } void on_connection_open(proton::connection& conn) override { std::cout << "New incoming connection\n"; } }; The special IP address 0.0.0.0 listens on all available IPv4 interfaces. To listen on all IPv6 interfaces, use [::0] . For more information, see the server receive.cpp example . | [
"scheme://host[:port]",
"amqps://example.com amqps://example.net:56720 amqp://127.0.0.1 amqp://[::1]:2000",
"class example_handler : public proton::messaging_handler { void on_container_start(proton::container& cont) override { cont.connect(\"amqp://example.com\"); } void on_connection_open(proton::connection& conn) override { std::cout << \"The connection is open\\n\"; } };",
"proton::connection_options opts {}; proton::reconnect_options ropts {}; opts.reconnect(ropts); container.connect(\"amqp://example.com\", opts);",
"proton::connection_options opts {}; proton::reconnect_options ropts {}; ropts.delay(proton::duration(10)); ropts.delay_multiplier(2.0); ropts.max_delay(proton::duration::FOREVER); ropts.max_attempts(0); opts.reconnect(ropts); container.connect(\"amqp://example.com\", opts);",
"std::vector<std::string> failover_urls = { \"amqp://backup1.example.com\", \"amqp://backup2.example.com\" }; proton::connection_options opts {}; proton::reconnect_options ropts {}; opts.reconnect(ropts); ropts.failover_urls(failover_urls); container.connect(\"amqp://primary.example.com\", opts);",
"class example_handler : public proton::messaging_handler { void on_container_start(proton::container& cont) override { cont.listen(\"0.0.0.0\"); } void on_connection_open(proton::connection& conn) override { std::cout << \"New incoming connection\\n\"; } };"
] | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/using_the_amq_cpp_client/network_connections |
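To exercise any of the connection examples in this chapter, the handler class must be driven by a proton::container from a main() function and the program must be linked against the Proton C++ binding; a minimal build-and-run sketch is shown below. It assumes the qpid-proton-cpp development files are installed, that the example source (including a main()) is saved as connect.cpp, and that a broker is listening at the URL given in the source - these details are assumptions, not part of the chapter above.
g++ connect.cpp -o connect -std=c++11 -lqpid-proton-cpp    # link against the Proton C++ binding library
./connect                                                  # prints "The connection is open" if the broker accepts the connection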
Chapter 12. Setting container network modes | Chapter 12. Setting container network modes This chapter provides information about how to set different network modes. 12.1. Running containers with a static IP The podman run command with the --ip option sets the container network interface to a particular IP address (for example, 10.88.0.44). To verify that you set the IP address correctly, run the podman inspect command. Prerequisites The container-tools module is installed. Procedure Set the container network interface to the IP address 10.88.0.44: Verification Check that the IP address is set properly: 12.2. Running the DHCP plugin for Netavark using systemd Prerequisites The container-tools module is installed. Procedure Enable the DHCP proxy by using the systemd socket: Optional: Display the socket unit file: Create a macvlan network and specify your host interface with it. Typically, it is your external interface: Run the container by using the newly created network: Verification Confirm the container has an IP on your local subnet: Inspect the container to verify it uses correct IP addresses: Note When attempting to connect to this IP address, ensure the connection is made from a different host. Connections from the same host are not supported when using macvlan networking. Additional resources Lease dynamic IPs with Netavark 12.3. Running the DHCP plugin for CNI using systemd You can use the systemd unit file to run the dhcp plugin. Prerequisites The container-tools module is installed. Procedure Optional: Make sure you are using the CNI network stack: Enable the DHCP proxy by using the systemd socket: Optional: Display the socket unit file: Verification Check the status of the socket: 12.4. The macvlan plugin Most container images do not have a DHCP client, so the dhcp plugin acts as a proxy DHCP client for the containers to interact with a DHCP server. The host system does not have network access to the container. To allow network connections from outside the host to the container, the container has to have an IP on the same network as the host. The macvlan plugin enables you to connect a container to the same network as the host. Note This procedure only applies to rootfull containers. Rootless containers are not able to use the macvlan and dhcp plugins. Note You can create a macvlan network using the podman network create --driver=macvlan command. 12.5. Switching the network stack from CNI to Netavark Previously, containers were able to use DNS only when connected to the single Container Network Interface (CNI) plugin. Netavark is a network stack for containers. You can use Netavark with Podman and other Open Container Initiative (OCI) container management applications. The advanced network stack for Podman is compatible with advanced Docker functionalities. Now, containers in multiple networks can access containers on any of those networks. Netavark is capable of the following: Create, manage, and remove network interfaces, including bridge and MACVLAN interfaces. Configure firewall settings, such as network address translation (NAT) and port mapping rules. Support IPv4 and IPv6. Improve support for containers in multiple networks. Prerequisites The container-tools module is installed.
Procedure If the /etc/containers/containers.conf file does not exist, copy the /usr/share/containers/containers.conf file to the /etc/containers/ directory: Edit the /etc/containers/containers.conf file, and add the following content to the [network] section: If you have any containers or pods, reset the storage back to the initial state: Reboot the system: Verification Verify that the network stack is changed to Netavark: Note If you are using Podman 4.0.0 or later, use the podman info command to check the network stack setting. Additional resources Podman 4.0's new network stack: What you need to know podman-system-reset man page on your system 12.6. Switching the network stack from Netavark to CNI You can switch the network stack from Netavark to CNI. Prerequisites The container-tools module is installed. Procedure If the /etc/containers/containers.conf file does not exist, copy the /usr/share/containers/containers.conf file to the /etc/containers/ directory: Edit the /etc/containers/containers.conf file, and add the following content to the [network] section: If you have any containers or pods, reset the storage back to the initial state: Reboot the system: Verification Verify that the network stack is changed to CNI: Note If you are using Podman 4.0.0 or later, use the podman info command to check the network stack setting. Additional resources Podman 4.0's new network stack: What you need to know podman-system-reset man page on your system | [
"podman run -d --name=myubi --ip=10.88.0.44 registry.access.redhat.com/ubi8/ubi efde5f0a8c723f70dd5cb5dc3d5039df3b962fae65575b08662e0d5b5f9fbe85",
"podman inspect --format='{{.NetworkSettings.IPAddress}}' myubi 10.88.0.44",
"systemctl enable --now netavark-dhcp-proxy.socket Created symlink /etc/systemd/system/sockets.target.wants/netavark-dhcp-proxy.socket /usr/lib/systemd/system/netavark-dhcp-proxy.socket.",
"cat /usr/lib/systemd/system/netavark-dhcp-proxy.socket [Unit] Description=Netavark DHCP proxy socket [Socket] ListenStream=%t/podman/nv-proxy.sock SocketMode=0660 [Install] WantedBy=sockets.target",
"podman network create -d macvlan --interface-name <LAN_INTERFACE> mv1 mv1",
"podman run --rm --network mv1 -d --name test alpine top 894ae3b6b1081aca2a5d90a9855568eaa533c08a174874be59569d4656f9bc45",
"podman exec test ip addr 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: eth0@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP qlen 1000 link/ether 5a:30:72:bf:13:76 brd ff:ff:ff:ff:ff:ff inet 192.168.188.36/24 brd 192.168.188.255 scope global eth0 valid_lft forever preferred_lft forever inet6 fe80::5830:72ff:febf:1376/64 scope link valid_lft forever preferred_lft forever",
"podman container inspect test --format {{.NetworkSettings.Networks.mv1.IPAddress}} 192.168.188.36",
"podman info --format \"{{.Host.NetworkBackend}}\" cni",
"systemctl enable --now cni-dhcp.socket Created symlink /etc/systemd/system/sockets.target.wants/cni-dhcp.socket /usr/lib/systemd/system/cni-dhcp.socket.",
"cat /usr/lib/systemd/system/io.podman.dhcp.socket [Unit] Description=CNI DHCP service socket Documentation=https://github.com/containernetworking/plugins/tree/master/plugins/ipam/dhcp PartOf=cni-dhcp.service [Socket] ListenStream=/run/cni/dhcp.sock SocketMode=0660 SocketUser=root SocketGroup=root RemoveOnStop=true [Install] WantedBy=sockets.target",
"systemctl status io.podman.dhcp.socket systemctl status cni-dhcp.socket ● cni-dhcp.socket - CNI DHCP service socket Loaded: loaded (/usr/lib/systemd/system/cni-dhcp.socket; enabled; vendor preset: disabled) Active: active (listening) since Mon 2025-01-06 08:39:35 EST; 33s ago Docs: https://github.com/containernetworking/plugins/tree/master/plugins/ipam/dhcp Listen: /run/cni/dhcp.sock (Stream) Tasks: 0 (limit: 11125) Memory: 4.0K CGroup: /system.slice/cni-dhcp.socket",
"cp /usr/share/containers/containers.conf /etc/containers/",
"network_backend=\"netavark\"",
"podman system reset",
"reboot",
"cat /etc/containers/containers.conf [network] network_backend=\"netavark\"",
"cp /usr/share/containers/containers.conf /etc/containers/",
"network_backend=\"cni\"",
"podman system reset",
"reboot",
"cat /etc/containers/containers.conf [network] network_backend=\"cni\""
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/building_running_and_managing_containers/assembly_setting-container-network-modes_building-running-and-managing-containers |
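Whichever direction the network stack is switched, a short verification pass such as the following can confirm that the intended backend is active and that containers still get working networking after the reboot; this is an illustrative sketch added here, not a step from the original procedures, and the podman info command requires Podman 4.0.0 or later as noted above.
podman info --format "{{.Host.NetworkBackend}}"                              # prints netavark or cni, matching the setting in containers.conf
podman network ls                                                            # lists the networks known to the active backend
podman run --rm registry.access.redhat.com/ubi8/ubi cat /etc/resolv.conf     # shows the DNS configuration handed to a test container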