title | content | commands | url
---|---|---|---|
Chapter 7. Creating and managing topics | Chapter 7. Creating and managing topics Messages in Kafka are always sent to or received from a topic. This chapter describes how to create and manage Kafka topics. 7.1. Partitions and replicas A topic is always split into one or more partitions. Partitions act as shards, which means that every message sent by a producer is always written to a single partition. Each partition can have one or more replicas, which are stored on different brokers in the cluster. When creating a topic, you can configure the number of replicas using the replication factor . The replication factor defines the number of copies held within the cluster. One of the replicas for a given partition is elected as the leader. The leader replica is used by producers to send new messages and by consumers to consume messages. The other replicas are follower replicas. The followers replicate the leader. If the leader fails, one of the in-sync followers automatically becomes the new leader. Each server acts as a leader for some of its partitions and a follower for others, so the load is well balanced within the cluster. Note The replication factor determines the number of replicas including the leader and the followers. For example, if you set the replication factor to 3 , there will be one leader and two follower replicas. 7.2. Message retention The message retention policy defines how long messages are stored on the Kafka brokers. It can be defined based on time, partition size, or both. For example, you can define that messages should be kept: for 7 days; until the partition holds 1GB of messages (once the limit is reached, the oldest messages are removed); or for 7 days or until the 1GB limit is reached, whichever comes first. Warning Kafka brokers store messages in log segments. Messages that are past their retention policy are deleted only when a new log segment is created. New log segments are created when the current segment exceeds the configured log segment size. Additionally, users can request new segments to be created periodically. Kafka brokers also support a compact policy. For a topic with the compact policy, the broker always keeps only the last message for each key; older messages with the same key are removed from the partition. Because compaction runs periodically, it does not happen immediately when a new message with the same key is sent to the partition; it might take some time before the older messages are removed. For more information about the message retention configuration options, see Section 7.5, "Topic configuration" . 7.3. Topic auto-creation By default, Kafka automatically creates a topic if a producer or consumer attempts to send or receive messages from a non-existent topic. This behavior is governed by the auto.create.topics.enable configuration property, which is set to true by default. For production environments, it is recommended to disable automatic topic creation. To do so, set auto.create.topics.enable to false in the Kafka configuration properties file: Disabling automatic topic creation 7.4. Topic deletion Kafka provides the option to prevent topic deletion, controlled by the delete.topic.enable property. By default, this property is set to true , allowing topics to be deleted. However, setting it to false in the Kafka configuration properties file will disable topic deletion.
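For reference, a minimal sketch of the two settings combined in the broker configuration (assuming the standard server.properties-style file read by the broker):
# Disable automatic topic creation and disable topic deletion
auto.create.topics.enable=false
delete.topic.enable=false
The first line corresponds to the Disabling automatic topic creation example above; the second to the Disabling topic deletion example that follows.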
In this case, attempts to delete a topic will return a success status, but the topic itself will not be deleted. Disabling topic deletion 7.5. Topic configuration Auto-created topics will use the default topic configuration which can be specified in the broker properties file. However, when creating topics manually, their configuration can be specified at creation time. It is also possible to change a topic's configuration after it has been created. The main topic configuration options for manually created topics are: cleanup.policy Configures the retention policy to delete or compact . The delete policy will delete old records. The compact policy will enable log compaction. The default value is delete . For more information about log compaction, see Kafka website . compression.type Specifies the compression which is used for stored messages. Valid values are gzip , snappy , lz4 , uncompressed (no compression) and producer (retain the compression codec used by the producer). The default value is producer . max.message.bytes The maximum size of a batch of messages allowed by the Kafka broker, in bytes. The default value is 1000012 . min.insync.replicas The minimum number of replicas which must be in sync for a write to be considered successful. The default value is 1 . retention.ms Maximum number of milliseconds for which log segments will be retained. Log segments older than this value will be deleted. The default value is 604800000 (7 days). retention.bytes The maximum number of bytes a partition will retain. Once the partition size grows over this limit, the oldest log segments will be deleted. Value of -1 indicates no limit. The default value is -1 . segment.bytes The maximum file size of a single commit log segment file in bytes. When the segment reaches its size, a new segment will be started. The default value is 1073741824 bytes (1 gibibyte). The defaults for auto-created topics can be specified in the Kafka broker configuration using similar options: log.cleanup.policy See cleanup.policy above. compression.type See compression.type above. message.max.bytes See max.message.bytes above. min.insync.replicas See min.insync.replicas above. log.retention.ms See retention.ms above. log.retention.bytes See retention.bytes above. log.segment.bytes See segment.bytes above. default.replication.factor Default replication factor for automatically created topics. Default value is 1 . num.partitions Default number of partitions for automatically created topics. Default value is 1 . 7.6. Internal topics Internal topics are created and used internally by the Kafka brokers and clients. Kafka has several internal topics, two of which are used to store consumer offsets ( __consumer_offsets ) and transaction state ( __transaction_state ). __consumer_offsets and __transaction_state topics can be configured using dedicated Kafka broker configuration options starting with prefix offsets.topic. and transaction.state.log. . The most important configuration options are: offsets.topic.replication.factor Number of replicas for __consumer_offsets topic. The default value is 3 . offsets.topic.num.partitions Number of partitions for __consumer_offsets topic. The default value is 50 . transaction.state.log.replication.factor Number of replicas for __transaction_state topic. The default value is 3 . transaction.state.log.num.partitions Number of partitions for __transaction_state topic. The default value is 50 . 
transaction.state.log.min.isr Minimum number of replicas that must acknowledge a write to __transaction_state topic to be considered successful. If this minimum cannot be met, then the producer will fail with an exception. The default value is 2 . 7.7. Creating a topic Use the kafka-topics.sh tool to manage topics. kafka-topics.sh is part of the Streams for Apache Kafka distribution and is found in the bin directory. Prerequisites Streams for Apache Kafka is installed on each host , and the configuration files are available. Creating a topic Create a topic using the kafka-topics.sh utility and specify the following: Host and port of the Kafka broker in the --bootstrap-server option. The new topic to be created in the --create option. Topic name in the --topic option. The number of partitions in the --partitions option. Topic replication factor in the --replication-factor option. You can also override some of the default topic configuration options using the option --config . This option can be used multiple times to override different options. /opt/kafka/bin/kafka-topics.sh --bootstrap-server <broker_address> --create --topic <TopicName> --partitions <NumberOfPartitions> --replication-factor <ReplicationFactor> --config <Option1> = <Value1> --config <Option2> = <Value2> Example of the command to create a topic named mytopic /opt/kafka/bin/kafka-topics.sh --bootstrap-server localhost:9092 --create --topic mytopic --partitions 50 --replication-factor 3 --config cleanup.policy=compact --config min.insync.replicas=2 Verify that the topic exists using kafka-topics.sh . /opt/kafka/bin/kafka-topics.sh --bootstrap-server <broker_address> --describe --topic <TopicName> Example of the command to describe a topic named mytopic /opt/kafka/bin/kafka-topics.sh --bootstrap-server localhost:9092 --describe --topic mytopic 7.8. Listing and describing topics The kafka-topics.sh tool can be used to list and describe topics. kafka-topics.sh is part of the Streams for Apache Kafka distribution and can be found in the bin directory. Prerequisites Streams for Apache Kafka is installed on each host , and the configuration files are available. Describing a topic Describe a topic using the kafka-topics.sh utility and specify the following: Host and port of the Kafka broker in the --bootstrap-server option. Use the --describe option to specify that you want to describe a topic. Topic name must be specified in the --topic option. When the --topic option is omitted, it describes all available topics. /opt/kafka/bin/kafka-topics.sh --bootstrap-server <broker_host>:<port> --describe --topic <topic_name> Example of the command to describe a topic named mytopic /opt/kafka/bin/kafka-topics.sh --bootstrap-server localhost:9092 --describe --topic mytopic The command lists all partitions and replicas which belong to this topic. It also lists all topic configuration options. 7.9. Modifying a topic configuration The kafka-configs.sh tool can be used to modify topic configurations. kafka-configs.sh is part of the Streams for Apache Kafka distribution and can be found in the bin directory. Prerequisites Streams for Apache Kafka is installed on each host , and the configuration files are available. Modify topic configuration Use the kafka-configs.sh tool to get the current configuration. Specify the host and port of the Kafka broker in the --bootstrap-server option. Set the --entity-type as topic and --entity-name to the name of your topic. Use --describe option to get the current configuration. 
/opt/kafka/bin/kafka-configs.sh --bootstrap-server <broker_host>:<port> --entity-type topics --entity-name <topic_name> --describe Example of the command to get the configuration of a topic named mytopic /opt/kafka/bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type topics --entity-name mytopic --describe Use the kafka-configs.sh tool to change the configuration. Specify the host and port of the Kafka broker in the --bootstrap-server option. Set --entity-type to topics and --entity-name to the name of your topic. Use the --alter option to modify the current configuration. Specify the options you want to add or change in the --add-config option. /opt/kafka/bin/kafka-configs.sh --bootstrap-server <broker_host>:<port> --entity-type topics --entity-name <topic_name> --alter --add-config <option>=<value> Example of the command to change the configuration of a topic named mytopic /opt/kafka/bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type topics --entity-name mytopic --alter --add-config min.insync.replicas=1 Use the kafka-configs.sh tool to delete an existing configuration option. Specify the host and port of the Kafka broker in the --bootstrap-server option. Set --entity-type to topics and --entity-name to the name of your topic. Use the --alter option together with --delete-config to remove an existing configuration option. Specify the options you want to remove in the --delete-config option. /opt/kafka/bin/kafka-configs.sh --bootstrap-server <broker_host>:<port> --entity-type topics --entity-name <topic_name> --alter --delete-config <option> Example of the command to remove a configuration option from a topic named mytopic /opt/kafka/bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type topics --entity-name mytopic --alter --delete-config min.insync.replicas 7.10. Deleting a topic The kafka-topics.sh tool can be used to manage topics. kafka-topics.sh is part of the Streams for Apache Kafka distribution and can be found in the bin directory. Prerequisites Streams for Apache Kafka is installed on each host, and the configuration files are available. Deleting a topic Delete a topic using the kafka-topics.sh utility and specify the following: Host and port of the Kafka broker in the --bootstrap-server option. Use the --delete option to specify that an existing topic should be deleted. Topic name in the --topic option. /opt/kafka/bin/kafka-topics.sh --bootstrap-server <broker_host>:<port> --delete --topic <topic_name> Example of the command to delete a topic named mytopic /opt/kafka/bin/kafka-topics.sh --bootstrap-server localhost:9092 --delete --topic mytopic Verify that the topic was deleted using kafka-topics.sh . /opt/kafka/bin/kafka-topics.sh --bootstrap-server <broker_host>:<port> --list Example of the command to list all topics /opt/kafka/bin/kafka-topics.sh --bootstrap-server localhost:9092 --list | [
"auto.create.topics.enable=false",
"delete.topic.enable=false",
"/opt/kafka/bin/kafka-topics.sh --bootstrap-server <broker_address> --create --topic <TopicName> --partitions <NumberOfPartitions> --replication-factor <ReplicationFactor> --config <Option1> = <Value1> --config <Option2> = <Value2>",
"/opt/kafka/bin/kafka-topics.sh --bootstrap-server localhost:9092 --create --topic mytopic --partitions 50 --replication-factor 3 --config cleanup.policy=compact --config min.insync.replicas=2",
"/opt/kafka/bin/kafka-topics.sh --bootstrap-server <broker_address> --describe --topic <TopicName>",
"/opt/kafka/bin/kafka-topics.sh --bootstrap-server localhost:9092 --describe --topic mytopic",
"/opt/kafka/bin/kafka-topics.sh --bootstrap-server <broker_host>:<port> --describe --topic <topic_name>",
"/opt/kafka/bin/kafka-topics.sh --bootstrap-server localhost:9092 --describe --topic mytopic",
"/opt/kafka/bin/kafka-configs.sh --bootstrap-server <broker_host>:<port> --entity-type topics --entity-name <topic_name> --describe",
"/opt/kafka/bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type topics --entity-name mytopic --describe",
"/opt/kafka/bin/kafka-configs.sh --bootstrap-server <broker_host>:<port> --entity-type topics --entity-name <topic_name> --alter --add-config <option>=<value>",
"/opt/kafka/bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type topics --entity-name mytopic --alter --add-config min.insync.replicas=1",
"/opt/kafka/bin/kafka-configs.sh --bootstrap-server <broker_host>:<port> --entity-type topics --entity-name <topic_name> --alter --delete-config <option>",
"/opt/kafka/bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type topics --entity-name mytopic --alter --delete-config min.insync.replicas",
"/opt/kafka/bin/kafka-topics.sh --bootstrap-server <broker_host>:<port> --delete --topic <topic_name>",
"/opt/kafka/bin/kafka-topics.sh --bootstrap-server localhost:9092 --delete --topic mytopic",
"/opt/kafka/bin/kafka-topics.sh --bootstrap-server <broker_host>:<port> --list",
"/opt/kafka/bin/kafka-topics.sh --bootstrap-server localhost:9092 --list"
] | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/using_streams_for_apache_kafka_on_rhel_with_zookeeper/topics-str |
12.0 Release Notes | 12.0 Release Notes Red Hat Developer Toolset 12 Release Notes for Red Hat Developer Toolset 12.0 Lenka Spackova Red Hat Customer Content Services [email protected] Jaromir Hradilek Red Hat Customer Content Services [email protected] Eliska Slobodova Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_developer_toolset/12/html/12.0_release_notes/index |
4.4. Logical Volume Administration | 4.4. Logical Volume Administration This section describes the commands that perform the various aspects of logical volume administration. 4.4.1. Creating Linear Logical Volumes To create a logical volume, use the lvcreate command. If you do not specify a name for the logical volume, the default name lvol# is used, where # is the internal number of the logical volume. When you create a logical volume, the logical volume is carved from a volume group using the free extents on the physical volumes that make up the volume group. Normally logical volumes use up any space available on the underlying physical volumes on a next-free basis. Modifying the logical volume frees and reallocates space in the physical volumes. The following command creates a logical volume 10 gigabytes in size in the volume group vg1 . The default unit for logical volume size is megabytes. The following command creates a 1500 megabyte linear logical volume named testlv in the volume group testvg , creating the block device /dev/testvg/testlv . The following command creates a 50 gigabyte logical volume named gfslv from the free extents in volume group vg0 . You can use the -l argument of the lvcreate command to specify the size of the logical volume in extents. You can also use this argument to specify the percentage of the size of a related volume group, logical volume, or set of physical volumes. The suffix %VG denotes the total size of the volume group, the suffix %FREE the remaining free space in the volume group, and the suffix %PVS the free space in the specified physical volumes. For a snapshot, the size can be expressed as a percentage of the total size of the origin logical volume with the suffix %ORIGIN (100%ORIGIN provides space for the whole origin). When expressed as a percentage, the size defines an upper limit for the number of logical extents in the new logical volume. The precise number of logical extents in the new LV is not determined until the command has completed. The following command creates a logical volume called mylv that uses 60% of the total space in volume group testvg . The following command creates a logical volume called yourlv that uses all of the unallocated space in the volume group testvg . You can use the -l argument of the lvcreate command to create a logical volume that uses the entire volume group. Another way to create a logical volume that uses the entire volume group is to use the vgdisplay command to find the "Total PE" size and to use those results as input to the lvcreate command. The following commands create a logical volume called mylv that fills the volume group named testvg . The underlying physical volumes used to create a logical volume can be important if the physical volume needs to be removed, so you may need to consider this possibility when you create the logical volume. For information on removing a physical volume from a volume group, see Section 4.3.7, "Removing Physical Volumes from a Volume Group" . To create a logical volume to be allocated from a specific physical volume in the volume group, specify the physical volume or volumes at the end of the lvcreate command line. The following command creates a logical volume named testlv in volume group testvg allocated from the physical volume /dev/sdg1 . You can specify which extents of a physical volume are to be used for a logical volume.
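The lvcreate listings referenced above were not carried over from the source page; the following is a minimal sketch of those commands, using the sizes, names, and volume groups given in the text (exact flags follow common lvcreate usage and are assumptions, not the original listings):
# 10 gigabyte logical volume in volume group vg1 (default name lvol0, lvol1, ...)
lvcreate -L 10G vg1
# 1500 megabyte linear volume named testlv in testvg
lvcreate -L 1500 -n testlv testvg
# 50 gigabyte volume named gfslv from the free extents in vg0
lvcreate -L 50G -n gfslv vg0
# mylv uses 60% of the total space in testvg
lvcreate -l 60%VG -n mylv testvg
# yourlv uses all of the unallocated space in testvg
lvcreate -l 100%FREE -n yourlv testvg
# allocate testlv only from the physical volume /dev/sdg1
lvcreate -L 1500 -n testlv testvg /dev/sdg1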
The following example creates a linear logical volume out of extents 0 through 24 of physical volume /dev/sda1 and extents 50 through 124 of physical volume /dev/sdb1 in volume group testvg . The following example creates a linear logical volume out of extents 0 through 25 of physical volume /dev/sda1 and then continues laying out the logical volume at extent 100. The default policy for how the extents of a logical volume are allocated is inherit , which applies the same policy as for the volume group. These policies can be changed using the lvchange command. For information on allocation policies, see Section 4.3.1, "Creating Volume Groups" . 4.4.2. Creating Striped Volumes For large sequential reads and writes, creating a striped logical volume can improve the efficiency of the data I/O. For general information about striped volumes, see Section 2.3.2, "Striped Logical Volumes" . When you create a striped logical volume, you specify the number of stripes with the -i argument of the lvcreate command. This determines over how many physical volumes the logical volume will be striped. The number of stripes cannot be greater than the number of physical volumes in the volume group (unless the --alloc anywhere argument is used). If the underlying physical devices that make up a striped logical volume are different sizes, the maximum size of the striped volume is determined by the smallest underlying device. For example, in a two-legged stripe, the maximum size is twice the size of the smaller device. In a three-legged stripe, the maximum size is three times the size of the smallest device. The following command creates a striped logical volume across 2 physical volumes with a stripe of 64 kilobytes. The logical volume is 50 gigabytes in size, is named gfslv , and is carved out of volume group vg0 . As with linear volumes, you can specify the extents of the physical volume that you are using for the stripe. The following command creates a striped volume 100 extents in size that stripes across two physical volumes, is named stripelv and is in volume group testvg . The stripe will use sectors 0-49 of /dev/sda1 and sectors 50-99 of /dev/sdb1 . 4.4.3. RAID Logical Volumes LVM supports RAID0/1/4/5/6/10. Note RAID logical volumes are not cluster-aware. While RAID logical volumes can be created and activated exclusively on one machine, they cannot be activated simultaneously on more than one machine. If you require non-exclusive mirrored volumes, you must create the volumes with a mirror segment type, as described in Section 4.4.4, "Creating Mirrored Volumes" . To create a RAID logical volume, you specify a raid type as the --type argument of the lvcreate command. Table 4.1, "RAID Segment Types" describes the possible RAID segment types. Table 4.1. RAID Segment Types Segment type Description raid1 RAID1 mirroring. This is the default value for the --type argument of the lvcreate command when you specify the -m but you do not specify striping. raid4 RAID4 dedicated parity disk raid5 Same as raid5_ls raid5_la RAID5 left asymmetric. Rotating parity 0 with data continuation raid5_ra RAID5 right asymmetric. Rotating parity N with data continuation raid5_ls RAID5 left symmetric. Rotating parity 0 with data restart raid5_rs RAID5 right symmetric. 
Rotating parity N with data restart raid6 Same as raid6_zr raid6_zr RAID6 zero restart Rotating parity zero (left-to-right) with data restart raid6_nr RAID6 N restart Rotating parity N (left-to-right) with data restart raid6_nc RAID6 N continue Rotating parity N (left-to-right) with data continuation raid10 Striped mirrors. This is the default value for the --type argument of the lvcreate command if you specify the -m and you specify a number of stripes that is greater than 1. Striping of mirror sets raid0/raid0_meta (Red Hat Enterprise Linux 7.3 and later) Striping. RAID0 spreads logical volume data across multiple data subvolumes in units of stripe size. This is used to increase performance. Logical volume data will be lost if any of the data subvolumes fail. For information on creating RAID0 volumes, see Section 4.4.3.1, "Creating RAID0 Volumes (Red Hat Enterprise Linux 7.3 and Later)" . For most users, specifying one of the five available primary types ( raid1 , raid4 , raid5 , raid6 , raid10 ) should be sufficient. When you create a RAID logical volume, LVM creates a metadata subvolume that is one extent in size for every data or parity subvolume in the array. For example, creating a 2-way RAID1 array results in two metadata subvolumes ( lv_rmeta_0 and lv_rmeta_1 ) and two data subvolumes ( lv_rimage_0 and lv_rimage_1 ). Similarly, creating a 3-way stripe (plus 1 implicit parity device) RAID4 results in 4 metadata subvolumes ( lv_rmeta_0 , lv_rmeta_1 , lv_rmeta_2 , and lv_rmeta_3 ) and 4 data subvolumes ( lv_rimage_0 , lv_rimage_1 , lv_rimage_2 , and lv_rimage_3 ). The following command creates a 2-way RAID1 array named my_lv in the volume group my_vg that is one gigabyte in size. You can create RAID1 arrays with different numbers of copies according to the value you specify for the -m argument. Similarly, you specify the number of stripes for a RAID 4/5/6 logical volume with the -i argument . You can also specify the stripe size with the -I argument. The following command creates a RAID5 array (3 stripes + 1 implicit parity drive) named my_lv in the volume group my_vg that is one gigabyte in size. Note that you specify the number of stripes just as you do for an LVM striped volume; the correct number of parity drives is added automatically. The following command creates a RAID6 array (3 stripes + 2 implicit parity drives) named my_lv in the volume group my_vg that is one gigabyte in size. After you have created a RAID logical volume with LVM, you can activate, change, remove, display, and use the volume just as you would any other LVM logical volume. When you create RAID10 logical volumes, the background I/O required to initialize the logical volumes with a sync operation can crowd out other I/O operations to LVM devices, such as updates to volume group metadata, particularly when you are creating many RAID logical volumes. This can cause the other LVM operations to slow down. You can control the rate at which a RAID logical volume is initialized by implementing recovery throttling. You control the rate at which sync operations are performed by setting the minimum and maximum I/O rate for those operations with the --minrecoveryrate and --maxrecoveryrate options of the lvcreate command. You specify these options as follows. --maxrecoveryrate Rate [bBsSkKmMgG] Sets the maximum recovery rate for a RAID logical volume so that it will not crowd out nominal I/O operations. The Rate is specified as an amount per second for each device in the array. 
If no suffix is given, then kiB/sec/device is assumed. Setting the recovery rate to 0 means it will be unbounded. --minrecoveryrate Rate [bBsSkKmMgG] Sets the minimum recovery rate for a RAID logical volume to ensure that I/O for sync operations achieves a minimum throughput, even when heavy nominal I/O is present. The Rate is specified as an amount per second for each device in the array. If no suffix is given, then kiB/sec/device is assumed. The following command creates a 2-way RAID10 array with 3 stripes that is 10 gigabytes in size with a maximum recovery rate of 128 kiB/sec/device. The array is named my_lv and is in the volume group my_vg . You can also specify minimum and maximum recovery rates for a RAID scrubbing operation. For information on RAID scrubbing, see Section 4.4.3.11, "Scrubbing a RAID Logical Volume" . Note You can generate commands to create logical volumes on RAID storage with the LVM RAID Calculator application. This application uses the information you input about your current or planned storage to generate these commands. The LVM RAID Calculator application can be found at https://access.redhat.com/labs/lvmraidcalculator/ . The following sections describes the administrative tasks you can perform on LVM RAID devices: Section 4.4.3.1, "Creating RAID0 Volumes (Red Hat Enterprise Linux 7.3 and Later)" . Section 4.4.3.2, "Converting a Linear Device to a RAID Device" Section 4.4.3.3, "Converting an LVM RAID1 Logical Volume to an LVM Linear Logical Volume" Section 4.4.3.4, "Converting a Mirrored LVM Device to a RAID1 Device" Section 4.4.3.5, "Resizing a RAID Logical Volume" Section 4.4.3.6, "Changing the Number of Images in an Existing RAID1 Device" Section 4.4.3.7, "Splitting off a RAID Image as a Separate Logical Volume" Section 4.4.3.8, "Splitting and Merging a RAID Image" Section 4.4.3.9, "Setting a RAID fault policy" Section 4.4.3.10, "Replacing a RAID device" Section 4.4.3.11, "Scrubbing a RAID Logical Volume" Section 4.4.3.12, "RAID Takeover (Red Hat Enterprise Linux 7.4 and Later)" Section 4.4.3.13, "Reshaping a RAID Logical Volume (Red Hat Enterprise Linux 7.4 and Later)" Section 4.4.3.14, "Controlling I/O Operations on a RAID1 Logical Volume" Section 4.4.3.15, "Changing the region size on a RAID Logical Volume (Red Hat Enterprise Linux 7.4 and later)" 4.4.3.1. Creating RAID0 Volumes (Red Hat Enterprise Linux 7.3 and Later) The format for the command to create a RAID0 volume is as follows. Table 4.2. RAID0 Command Creation parameters Parameter Description --type raid0[_meta] Specifying raid0 creates a RAID0 volume without metadata volumes. Specifying raid0_meta creates a RAID0 volume with metadata volumes. Because RAID0 is non-resilient, it does not have to store any mirrored data blocks as RAID1/10 or calculate and store any parity blocks as RAID4/5/6 do. Hence, it does not need metadata volumes to keep state about resynchronization progress of mirrored or parity blocks. Metadata volumes become mandatory on a conversion from RAID0 to RAID4/5/6/10, however, and specifying raid0_meta preallocates those metadata volumes to prevent a respective allocation failure. --stripes Stripes Specifies the number of devices to spread the logical volume across. --stripesize StripeSize Specifies the size of each stripe in kilobytes. This is the amount of data that is written to one device before moving to the device. VolumeGroup Specifies the volume group to use. PhysicalVolumePath ... Specifies the devices to use. 
If this is not specified, LVM will choose the number of devices specified by the Stripes option, one for each stripe. 4.4.3.2. Converting a Linear Device to a RAID Device You can convert an existing linear logical volume to a RAID device by using the --type argument of the lvconvert command. The following command converts the linear logical volume my_lv in volume group my_vg to a 2-way RAID1 array. Since RAID logical volumes are composed of metadata and data subvolume pairs, when you convert a linear device to a RAID1 array, a new metadata subvolume is created and associated with the original logical volume on (one of) the same physical volumes that the linear volume is on. The additional images are added in metadata/data subvolume pairs. For example, if the original device is as follows: After conversion to a 2-way RAID1 array the device contains the following data and metadata subvolume pairs: If the metadata image that pairs with the original logical volume cannot be placed on the same physical volume, the lvconvert will fail. 4.4.3.3. Converting an LVM RAID1 Logical Volume to an LVM Linear Logical Volume You can convert an existing RAID1 LVM logical volume to an LVM linear logical volume with the lvconvert command by specifying the -m0 argument. This removes all the RAID data subvolumes and all the RAID metadata subvolumes that make up the RAID array, leaving the top-level RAID1 image as the linear logical volume. The following example displays an existing LVM RAID1 logical volume. The following command converts the LVM RAID1 logical volume my_vg/my_lv to an LVM linear device. When you convert an LVM RAID1 logical volume to an LVM linear volume, you can specify which physical volumes to remove. The following example shows the layout of an LVM RAID1 logical volume made up of two images: /dev/sda1 and /dev/sdb1 . In this example, the lvconvert command specifies that you want to remove /dev/sda1 , leaving /dev/sdb1 as the physical volume that makes up the linear device. 4.4.3.4. Converting a Mirrored LVM Device to a RAID1 Device You can convert an existing mirrored LVM device with a segment type of mirror to a RAID1 LVM device with the lvconvert command by specifying the --type raid1 argument. This renames the mirror subvolumes ( *_mimage_* ) to RAID subvolumes ( *_rimage_* ). In addition, the mirror log is removed and metadata subvolumes ( *_rmeta_* ) are created for the data subvolumes on the same physical volumes as the corresponding data subvolumes. The following example shows the layout of a mirrored logical volume my_vg/my_lv . The following command converts the mirrored logical volume my_vg/my_lv to a RAID1 logical volume. 4.4.3.5. Resizing a RAID Logical Volume You can resize a RAID logical volume in the following ways; You can increase the size of a RAID logical volume of any type with the lvresize or lvextend command. This does not change the number of RAID images. For striped RAID logical volumes the same stripe rounding constraints apply as when you create a striped RAID logical volume. For more information on extending a RAID volume, see Section 4.4.18, "Extending a RAID Volume" . You can reduce the size of a RAID logical volume of any type with the lvresize or lvreduce command. This does not change the number of RAID images. As with the lvextend command, the same stripe rounding constraints apply as when you create a striped RAID logical volume. For an example of a command to reduce the size of a logical volume, see Section 4.4.16, "Shrinking Logical Volumes" . 
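Because the original command listings for these RAID sections were stripped during extraction, here are hedged sketches of the creation, conversion, and resize commands described above (the my_vg and my_lv names come from the text; flags follow common lvcreate and lvconvert usage):
# 2-way RAID1 array, one gigabyte, named my_lv in my_vg
lvcreate --type raid1 -m 1 -L 1G -n my_lv my_vg
# RAID5 array with 3 stripes plus one implicit parity device
lvcreate --type raid5 -i 3 -L 1G -n my_lv my_vg
# RAID6 array with 3 stripes plus two implicit parity devices
lvcreate --type raid6 -i 3 -L 1G -n my_lv my_vg
# convert an existing linear volume to a 2-way RAID1 array
lvconvert --type raid1 -m 1 my_vg/my_lv
# convert a RAID1 volume back to a linear volume
lvconvert -m0 my_vg/my_lv
# grow a RAID logical volume without changing the number of images (size is illustrative)
lvextend -L +1G my_vg/my_lv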
As of Red Hat Enterprise Linux 7.4, you can change the number of stripes on a striped RAID logical volume ( raid4/5/6/10 ) with the --stripes N parameter of the lvconvert command. This increases or reduces the size of the RAID logical volume by the capacity of the stripes added or removed. Note that raid10 volumes are capable only of adding stripes. This capability is part of the RAID reshaping feature that allows you to change attributes of a RAID logical volume while keeping the same RAID level. For information on RAID reshaping and examples of using the lvconvert command to reshape a RAID logical volume, see the lvmraid (7) man page. 4.4.3.6. Changing the Number of Images in an Existing RAID1 Device You can change the number of images in an existing RAID1 array just as you can change the number of images in the earlier implementation of LVM mirroring. Use the lvconvert command to specify the number of additional metadata/data subvolume pairs to add or remove. For information on changing the volume configuration in the earlier implementation of LVM mirroring, see Section 4.4.4.4, "Changing Mirrored Volume Configuration" . When you add images to a RAID1 device with the lvconvert command, you can specify the total number of images for the resulting device, or you can specify how many images to add to the device. You can also optionally specify on which physical volumes the new metadata/data image pairs will reside. Metadata subvolumes (named *_rmeta_* ) always exist on the same physical devices as their data subvolume counterparts *_rimage_* ). The metadata/data subvolume pairs will not be created on the same physical volumes as those from another metadata/data subvolume pair in the RAID array (unless you specify --alloc anywhere ). The format for the command to add images to a RAID1 volume is as follows: For example, the following command displays the LVM device my_vg/my_lv , which is a 2-way RAID1 array: The following command converts the 2-way RAID1 device my_vg/my_lv to a 3-way RAID1 device: When you add an image to a RAID1 array, you can specify which physical volumes to use for the image. The following command converts the 2-way RAID1 device my_vg/my_lv to a 3-way RAID1 device, specifying that the physical volume /dev/sdd1 be used for the array: To remove images from a RAID1 array, use the following command. When you remove images from a RAID1 device with the lvconvert command, you can specify the total number of images for the resulting device, or you can specify how many images to remove from the device. You can also optionally specify the physical volumes from which to remove the device. Additionally, when an image and its associated metadata subvolume volume are removed, any higher-numbered images will be shifted down to fill the slot. If you remove lv_rimage_1 from a 3-way RAID1 array that consists of lv_rimage_0 , lv_rimage_1 , and lv_rimage_2 , this results in a RAID1 array that consists of lv_rimage_0 and lv_rimage_1 . The subvolume lv_rimage_2 will be renamed and take over the empty slot, becoming lv_rimage_1 . The following example shows the layout of a 3-way RAID1 logical volume my_vg/my_lv . The following command converts the 3-way RAID1 logical volume into a 2-way RAID1 logical volume. The following command converts the 3-way RAID1 logical volume into a 2-way RAID1 logical volume, specifying the physical volume that contains the image to remove as /dev/sde1 . 4.4.3.7. 
Splitting off a RAID Image as a Separate Logical Volume You can split off an image of a RAID logical volume to form a new logical volume. The procedure for splitting off a RAID image is the same as the procedure for splitting off a redundant image of a mirrored logical volume, as described in Section 4.4.4.2, "Splitting Off a Redundant Image of a Mirrored Logical Volume" . The format of the command to split off a RAID image is as follows: Just as when you are removing a RAID image from an existing RAID1 logical volume (as described in Section 4.4.3.6, "Changing the Number of Images in an Existing RAID1 Device" ), when you remove a RAID data subvolume (and its associated metadata subvolume) from the middle of the device any higher numbered images will be shifted down to fill the slot. The index numbers on the logical volumes that make up a RAID array will thus be an unbroken sequence of integers. Note You cannot split off a RAID image if the RAID1 array is not yet in sync. The following example splits a 2-way RAID1 logical volume, my_lv , into two linear logical volumes, my_lv and new . The following example splits a 3-way RAID1 logical volume, my_lv , into a 2-way RAID1 logical volume, my_lv , and a linear logical volume, new 4.4.3.8. Splitting and Merging a RAID Image You can temporarily split off an image of a RAID1 array for read-only use while keeping track of any changes by using the --trackchanges argument in conjunction with the --splitmirrors argument of the lvconvert command. This allows you to merge the image back into the array at a later time while resyncing only those portions of the array that have changed since the image was split. The format for the lvconvert command to split off a RAID image is as follows. When you split off a RAID image with the --trackchanges argument, you can specify which image to split but you cannot change the name of the volume being split. In addition, the resulting volumes have the following constraints. The new volume you create is read-only. You cannot resize the new volume. You cannot rename the remaining array. You cannot resize the remaining array. You can activate the new volume and the remaining array independently. You can merge an image that was split off with the --trackchanges argument specified by executing a subsequent lvconvert command with the --merge argument. When you merge the image, only the portions of the array that have changed since the image was split are resynced. The format for the lvconvert command to merge a RAID image is as follows. The following example creates a RAID1 logical volume and then splits off an image from that volume while tracking changes to the remaining array. The following example splits off an image from a RAID1 volume while tracking changes to the remaining array, then merges the volume back into the array. Once you have split off an image from a RAID1 volume, you can make the split permanent by issuing a second lvconvert --splitmirrors command, repeating the initial lvconvert command that split the image without specifying the --trackchanges argument. This breaks the link that the --trackchanges argument created. After you have split an image with the --trackchanges argument, you cannot issue a subsequent lvconvert --splitmirrors command on that array unless your intent is to permanently split the image being tracked. The following sequence of commands splits an image and tracks the image and then permanently splits off the image being tracked. 
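A sketch of the split-and-track sequence just described, under assumed standard lvconvert syntax (the my_vg/my_lv and rimage names follow the examples in this section):
# split off one image for read-only use while tracking changes
lvconvert --splitmirrors 1 --trackchanges my_vg/my_lv
# later, either merge the tracked image back (only changed regions are resynced) ...
lvconvert --merge my_vg/my_lv_rimage_1
# ... or make the split permanent by repeating the split without --trackchanges
lvconvert --splitmirrors 1 --name new my_vg/my_lv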
Note, however, that the following sequence of commands will fail. Similarly, the following sequence of commands will fail as well, since the split image is not the image being tracked. 4.4.3.9. Setting a RAID fault policy LVM RAID handles device failures in an automatic fashion based on the preferences defined by the raid_fault_policy field in the lvm.conf file. If the raid_fault_policy field is set to allocate , the system will attempt to replace the failed device with a spare device from the volume group. If there is no available spare device, this will be reported to the system log. If the raid_fault_policy field is set to warn , the system will produce a warning and the log will indicate that a device has failed. This allows the user to determine the course of action to take. As long as there are enough devices remaining to support usability, the RAID logical volume will continue to operate. 4.4.3.9.1. The allocate RAID Fault Policy In the following example, the raid_fault_policy field has been set to allocate in the lvm.conf file. The RAID logical volume is laid out as follows. If the /dev/sde device fails, the system log will display error messages. Since the raid_fault_policy field has been set to allocate , the failed device is replaced with a new device from the volume group. Note that even though the failed device has been replaced, the display still indicates that LVM could not find the failed device. This is because, although the failed device has been removed from the RAID logical volume, the failed device has not yet been removed from the volume group. To remove the failed device from the volume group, you can execute vgreduce --removemissing VG . If the raid_fault_policy has been set to allocate but there are no spare devices, the allocation will fail, leaving the logical volume as it is. If the allocation fails, you have the option of fixing the drive, then deactivating and activating the logical volume; this is described in Section 4.4.3.9.2, "The warn RAID Fault Policy" . Alternately, you can replace the failed device, as described in Section 4.4.3.10, "Replacing a RAID device" . 4.4.3.9.2. The warn RAID Fault Policy In the following example, the raid_fault_policy field has been set to warn in the lvm.conf file. The RAID logical volume is laid out as follows. If the /dev/sdh device fails, the system log will display error messages. In this case, however, LVM will not automatically attempt to repair the RAID device by replacing one of the images. Instead, if the device has failed you can replace the device with the --repair argument of the lvconvert command, as shown below. Note that even though the failed device has been replaced, the display still indicates that LVM could not find the failed device. This is because, although the failed device has been removed from the RAID logical volume, the failed device has not yet been removed from the volume group. To remove the failed device from the volume group, you can execute vgreduce --removemissing VG . If the device failure is a transient failure or you are able to repair the device that failed, you can initiate recovery of the failed device with the --refresh option of the lvchange command. Previously it was necessary to deactivate and then activate the logical volume. The following command refreshes a logical volume. 4.4.3.10. Replacing a RAID device RAID is not like traditional LVM mirroring. LVM mirroring required failed devices to be removed or the mirrored logical volume would hang. 
RAID arrays can keep on running with failed devices. In fact, for RAID types other than RAID1, removing a device would mean converting to a lower level RAID (for example, from RAID6 to RAID5, or from RAID4 or RAID5 to RAID0). Therefore, rather than removing a failed device unconditionally and potentially allocating a replacement, LVM allows you to replace a device in a RAID volume in a one-step solution by using the --replace argument of the lvconvert command. The format for the lvconvert --replace is as follows. The following example creates a RAID1 logical volume and then replaces a device in that volume. The following example creates a RAID1 logical volume and then replaces a device in that volume, specifying which physical volume to use for the replacement. You can replace more than one RAID device at a time by specifying multiple replace arguments, as in the following example. Note When you specify a replacement drive using the lvconvert --replace command, the replacement drives should never be allocated from extra space on drives already used in the array. For example, lv_rimage_0 and lv_rimage_1 should not be located on the same physical volume. 4.4.3.11. Scrubbing a RAID Logical Volume LVM provides scrubbing support for RAID logical volumes. RAID scrubbing is the process of reading all the data and parity blocks in an array and checking to see whether they are coherent. You initiate a RAID scrubbing operation with the --syncaction option of the lvchange command. You specify either a check or repair operation. A check operation goes over the array and records the number of discrepancies in the array but does not repair them. A repair operation corrects the discrepancies as it finds them. The format of the command to scrub a RAID logical volume is as follows: Note The lvchange --syncaction repair vg/raid_lv operation does not perform the same function as the lvconvert --repair vg/raid_lv operation. The lvchange --syncaction repair operation initiates a background synchronization operation on the array, while the lvconvert --repair operation is designed to repair/replace failed devices in a mirror or RAID logical volume. In support of the new RAID scrubbing operation, the lvs command now supports two new printable fields: raid_sync_action and raid_mismatch_count . These fields are not printed by default. To display these fields you specify them with the -o parameter of the lvs , as follows. The raid_sync_action field displays the current synchronization operation that the raid volume is performing. It can be one of the following values: idle : All sync operations complete (doing nothing) resync : Initializing an array or recovering after a machine failure recover : Replacing a device in the array check : Looking for array inconsistencies repair : Looking for and repairing inconsistencies The raid_mismatch_count field displays the number of discrepancies found during a check operation. The Cpy%Sync field of the lvs command now prints the progress of any of the raid_sync_action operations, including check and repair . The lv_attr field of the lvs command output now provides additional indicators in support of the RAID scrubbing operation. Bit 9 of this field displays the health of the logical volume, and it now supports the following indicators. ( m )ismatches indicates that there are discrepancies in a RAID logical volume. This character is shown after a scrubbing operation has detected that portions of the RAID are not coherent. 
( r )efresh indicates that a device in a RAID array has suffered a failure and the kernel regards it as failed, even though LVM can read the device label and considers the device to be operational. The logical volume should be (r)efreshed to notify the kernel that the device is now available, or the device should be (r)eplaced if it is suspected of having failed. For information on the lvs command, see Section 4.8.2, "Object Display Fields" . When you perform a RAID scrubbing operation, the background I/O required by the sync operations can crowd out other I/O operations to LVM devices, such as updates to volume group metadata. This can cause the other LVM operations to slow down. You can control the rate at which the RAID logical volume is scrubbed by implementing recovery throttling. You control the rate at which sync operations are performed by setting the minimum and maximum I/O rate for those operations with the --minrecoveryrate and --maxrecoveryrate options of the lvchange command. You specify these options as follows. --maxrecoveryrate Rate [bBsSkKmMgG] Sets the maximum recovery rate for a RAID logical volume so that it will not crowd out nominal I/O operations. The Rate is specified as an amount per second for each device in the array. If no suffix is given, then kiB/sec/device is assumed. Setting the recovery rate to 0 means it will be unbounded. --minrecoveryrate Rate [bBsSkKmMgG] Sets the minimum recovery rate for a RAID logical volume to ensure that I/O for sync operations achieves a minimum throughput, even when heavy nominal I/O is present. The Rate is specified as an amount per second for each device in the array. If no suffix is given, then kiB/sec/device is assumed. 4.4.3.12. RAID Takeover (Red Hat Enterprise Linux 7.4 and Later) LVM supports Raid takeover , which means converting a RAID logical volume from one RAID level to another (such as from RAID 5 to RAID 6). Changing the RAID level is usually done to increase or decrease resilience to device failures or to restripe logical volumes. You use the lvconvert for RAID takeover. For information on RAID takeover and for examples of using the lvconvert to convert a RAID logical volume, see the lvmraid (7) man page. 4.4.3.13. Reshaping a RAID Logical Volume (Red Hat Enterprise Linux 7.4 and Later) RAID reshaping means changing attributes of a RAID logical volume while keeping the same RAID level. Some attributes you can change include RAID layout, stripe size, and number of stripes. For information on RAID reshaping and examples of using the lvconvert command to reshape a RAID logical volume, see the lvmraid (7) man page. 4.4.3.14. Controlling I/O Operations on a RAID1 Logical Volume You can control the I/O operations for a device in a RAID1 logical volume by using the --writemostly and --writebehind parameters of the lvchange command. The format for using these parameters is as follows. --[raid]writemostly PhysicalVolume [:{t|y|n}] Marks a device in a RAID1 logical volume as write-mostly . All reads to these drives will be avoided unless necessary. Setting this parameter keeps the number of I/O operations to the drive to a minimum. By default, the write-mostly attribute is set to yes for the specified physical volume in the logical volume. It is possible to remove the write-mostly flag by appending :n to the physical volume or to toggle the value by specifying :t . 
The --writemostly argument can be specified more than one time in a single command, making it possible to toggle the write-mostly attributes for all the physical volumes in a logical volume at once. --[raid]writebehind IOCount Specifies the maximum number of outstanding writes that are allowed to devices in a RAID1 logical volume that are marked as write-mostly . Once this value is exceeded, writes become synchronous, causing all writes to the constituent devices to complete before the array signals the write has completed. Setting the value to zero clears the preference and allows the system to choose the value arbitrarily. 4.4.3.15. Changing the region size on a RAID Logical Volume (Red Hat Enterprise Linux 7.4 and later) When you create a RAID logical volume, the region size for the logical volume will be the value of the raid_region_size parameter in the /etc/lvm/lvm.conf file. You can override this default value with the -R option of the lvcreate command. After you have created a RAID logical volume, you can change the region size of the volume with the -R option of the lvconvert command. The following example changes the region size of logical volume vg/raidlv to 4096K. The RAID volume must be synced in order to change the region size. 4.4.4. Creating Mirrored Volumes For the Red Hat Enterprise Linux 7.0 release, LVM supports RAID 1/4/5/6/10, as described in Section 4.4.3, "RAID Logical Volumes" . RAID logical volumes are not cluster-aware. While RAID logical volumes can be created and activated exclusively on one machine, they cannot be activated simultaneously on more than one machine. If you require non-exclusive mirrored volumes, you must create the volumes with a mirror segment type, as described in this section. Note For information on converting an existing LVM device with a segment type of mirror to a RAID1 LVM device, see Section 4.4.3.4, "Converting a Mirrored LVM Device to a RAID1 Device" . Note Creating a mirrored LVM logical volume in a cluster requires the same commands and procedures as creating a mirrored LVM logical volume with a segment type of mirror on a single node. However, in order to create a mirrored LVM volume in a cluster, the cluster and cluster mirror infrastructure must be running, the cluster must be quorate, and the locking type in the lvm.conf file must be set correctly to enable cluster locking. For an example of creating a mirrored volume in a cluster, see Section 5.5, "Creating a Mirrored LVM Logical Volume in a Cluster" . Attempting to run multiple LVM mirror creation and conversion commands in quick succession from multiple nodes in a cluster might cause a backlog of these commands. This might cause some of the requested operations to time out and, subsequently, fail. To avoid this issue, it is recommended that cluster mirror creation commands be executed from one node of the cluster. When you create a mirrored volume, you specify the number of copies of the data to make with the -m argument of the lvcreate command. Specifying -m1 creates one mirror, which yields two copies of the file system: a linear logical volume plus one copy. Similarly, specifying -m2 creates two mirrors, yielding three copies of the file system. The following command creates a mirrored logical volume with a single mirror. The volume is 50 gigabytes in size, is named mirrorlv , and is carved out of volume group vg0 : An LVM mirror divides the device being copied into regions that, by default, are 512KB in size. 
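A sketch of the mirrored-volume creation described above, assuming the standard syntax for the legacy mirror segment type:
# 50 gigabyte volume named mirrorlv with a single mirror (two copies) in vg0
lvcreate --type mirror -L 50G -m 1 -n mirrorlv vg0
The region size and mirror log options discussed next are additional arguments to this same command.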
You can use the -R argument of the lvcreate command to specify the region size in megabytes. You can also change the default region size by editing the mirror_region_size setting in the lvm.conf file. Note Due to limitations in the cluster infrastructure, cluster mirrors greater than 1.5TB cannot be created with the default region size of 512KB. Users that require larger mirrors should increase the region size from its default to something larger. Failure to increase the region size will cause LVM creation to hang and may hang other LVM commands as well. As a general guideline for specifying the region size for mirrors that are larger than 1.5TB, you could take your mirror size in terabytes and round up that number to the power of 2, using that number as the -R argument to the lvcreate command. For example, if your mirror size is 1.5TB, you could specify -R 2 . If your mirror size is 3TB, you could specify -R 4 . For a mirror size of 5TB, you could specify -R 8 . The following command creates a mirrored logical volume with a region size of 2MB: When a mirror is created, the mirror regions are synchronized. For large mirror components, the sync process may take a long time. When you are creating a new mirror that does not need to be revived, you can specify the --nosync argument to indicate that an initial synchronization from the first device is not required. LVM maintains a small log which it uses to keep track of which regions are in sync with the mirror or mirrors. By default, this log is kept on disk, which keeps it persistent across reboots and ensures that the mirror does not need to be re-synced every time a machine reboots or crashes. You can specify instead that this log be kept in memory with the --mirrorlog core argument; this eliminates the need for an extra log device, but it requires that the entire mirror be resynchronized at every reboot. The following command creates a mirrored logical volume from the volume group bigvg . The logical volume is named ondiskmirvol and has a single mirror. The volume is 12MB in size and keeps the mirror log in memory. The mirror log is created on a separate device from the devices on which any of the mirror legs are created. It is possible, however, to create the mirror log on the same device as one of the mirror legs by using the --alloc anywhere argument of the vgcreate command. This may degrade performance, but it allows you to create a mirror even if you have only two underlying devices. The following command creates a mirrored logical volume with a single mirror for which the mirror log is on the same device as one of the mirror legs. In this example, the volume group vg0 consists of only two devices. This command creates a 500 MB volume named mirrorlv in the vg0 volume group. Note With clustered mirrors, the mirror log management is completely the responsibility of the cluster node with the currently lowest cluster ID. Therefore, when the device holding the cluster mirror log becomes unavailable on a subset of the cluster, the clustered mirror can continue operating without any impact, as long as the cluster node with lowest ID retains access to the mirror log. Since the mirror is undisturbed, no automatic corrective action (repair) is issued, either. When the lowest-ID cluster node loses access to the mirror log, however, automatic action will kick in (regardless of accessibility of the log from other nodes). To create a mirror log that is itself mirrored, you can specify the --mirrorlog mirrored argument. 
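Hedged sketches of the mirror log variations described above (volume names and sizes are taken from the text; flags assumed from common lvcreate usage):
# keep the mirror log in memory instead of on disk
lvcreate --type mirror -L 12M -m 1 --mirrorlog core -n ondiskmirvol bigvg
# allow the mirror log to share a device with a mirror leg (vg0 has only two devices)
lvcreate --type mirror -L 500M -m 1 --alloc anywhere -n mirrorlv vg0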
The following command creates a mirrored logical volume from the volume group bigvg . The logical volume is named twologvol and has a single mirror. The volume is 12MB in size and the mirror log is mirrored, with each log kept on a separate device. Just as with a standard mirror log, it is possible to create the redundant mirror logs on the same device as the mirror legs by using the --alloc anywhere argument of the lvcreate command. This may degrade performance, but it allows you to create a redundant mirror log even if you do not have sufficient underlying devices for each log to be kept on a separate device from the mirror legs. When a mirror is created, the mirror regions are synchronized. For large mirror components, the sync process may take a long time. When you are creating a new mirror that does not need to be revived, you can specify the --nosync argument to indicate that an initial synchronization from the first device is not required. You can specify which devices to use for the mirror legs and log, and which extents of the devices to use. To force the log onto a particular disk, specify exactly one extent on the disk on which it will be placed. LVM does not necessarily respect the order in which devices are listed in the command line. If any physical volumes are listed, that is the only space on which allocation will take place. Any physical extents included in the list that are already allocated will be ignored. The following command creates a mirrored logical volume with a single mirror and a single log that is not mirrored. The volume is 500 MB in size, it is named mirrorlv , and it is carved out of volume group vg0 . The first leg of the mirror is on device /dev/sda1 , the second leg of the mirror is on device /dev/sdb1 , and the mirror log is on /dev/sdc1 . The following command creates a mirrored logical volume with a single mirror. The volume is 500 MB in size, it is named mirrorlv , and it is carved out of volume group vg0 . The first leg of the mirror is on extents 0 through 499 of device /dev/sda1 , the second leg of the mirror is on extents 0 through 499 of device /dev/sdb1 , and the mirror log starts on extent 0 of device /dev/sdc1 . These are 1MB extents. If any of the specified extents have already been allocated, they will be ignored. Note You can combine striping and mirroring in a single logical volume. Creating a logical volume while simultaneously specifying the number of mirrors ( --mirrors X ) and the number of stripes ( --stripes Y ) results in a mirror device whose constituent devices are striped. 4.4.4.1. Mirrored Logical Volume Failure Policy You can define how a mirrored logical volume behaves in the event of a device failure with the mirror_image_fault_policy and mirror_log_fault_policy parameters in the activation section of the lvm.conf file. When these parameters are set to remove , the system attempts to remove the faulty device and run without it. When these parameters are set to allocate , the system attempts to remove the faulty device and tries to allocate space on a new device to be a replacement for the failed device. This policy acts like the remove policy if no suitable device and space can be allocated for the replacement. By default, the mirror_log_fault_policy parameter is set to allocate . Using this policy for the log is fast and maintains the ability to remember the sync state through crashes and reboots.
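These policies are set in the activation section of the lvm.conf file. A minimal sketch showing the default values described here:
mirror_image_fault_policy = "remove"
mirror_log_fault_policy = "allocate"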
If you set this policy to remove , when a log device fails the mirror converts to using an in-memory log; in this instance, the mirror will not remember its sync status across crashes and reboots and the entire mirror will be re-synced. By default, the mirror_image_fault_policy parameter is set to remove . With this policy, if a mirror image fails the mirror will convert to a non-mirrored device if there is only one remaining good copy. Setting this policy to allocate for a mirror device requires the mirror to resynchronize the devices; this is a slow process, but it preserves the mirror characteristic of the device. Note When an LVM mirror suffers a device failure, a two-stage recovery takes place. The first stage involves removing the failed devices. This can result in the mirror being reduced to a linear device. The second stage, if the mirror_log_fault_policy parameter is set to allocate , is to attempt to replace any of the failed devices. Note, however, that there is no guarantee that the second stage will choose devices previously in-use by the mirror that had not been part of the failure if others are available. For information on manually recovering from an LVM mirror failure, see Section 6.2, "Recovering from LVM Mirror Failure" . 4.4.4.2. Splitting Off a Redundant Image of a Mirrored Logical Volume You can split off a redundant image of a mirrored logical volume to form a new logical volume. To split off an image, use the --splitmirrors argument of the lvconvert command, specifying the number of redundant images to split off. You must use the --name argument of the command to specify a name for the newly-split-off logical volume. The following command splits off a new logical volume named copy from the mirrored logical volume vg/lv . The new logical volume contains two mirror legs. In this example, LVM selects which devices to split off. You can specify which devices to split off. The following command splits off a new logical volume named copy from the mirrored logical volume vg/lv . The new logical volume contains two mirror legs consisting of devices /dev/sdc1 and /dev/sde1 . 4.4.4.3. Repairing a Mirrored Logical Device You can use the lvconvert --repair command to repair a mirror after a disk failure. This brings the mirror back into a consistent state. The lvconvert --repair command is an interactive command that prompts you to indicate whether you want the system to attempt to replace any failed devices. To skip the prompts and replace all of the failed devices, specify the -y option on the command line. To skip the prompts and replace none of the failed devices, specify the -f option on the command line. To skip the prompts and still indicate different replacement policies for the mirror image and the mirror log, you can specify the --use-policies argument to use the device replacement policies specified by the mirror_log_fault_policy and mirror_device_fault_policy parameters in the lvm.conf file. 4.4.4.4. Changing Mirrored Volume Configuration You can increase or decrease the number of mirrors that a logical volume contains by using the lvconvert command. This allows you to convert a logical volume from a mirrored volume to a linear volume or from a linear volume to a mirrored volume. You can also use this command to reconfigure other mirror parameters of an existing logical volume, such as corelog . When you convert a linear volume to a mirrored volume, you are creating mirror legs for an existing volume. 
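As a sketch, the split, repair, and conversion operations described in these sections might be invoked as follows; the vg/lv and vg00/lvol1 names follow the examples in the text, and the repair target name is an assumption:
lvconvert --splitmirrors 2 --name copy vg/lv
lvconvert --repair vg00/mirrorlv
lvconvert -m1 vg00/lvol1
lvconvert -m0 vg00/lvol1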
This means that your volume group must contain the devices and space for the mirror legs and for the mirror log. If you lose a leg of a mirror, LVM converts the volume to a linear volume so that you still have access to the volume, without the mirror redundancy. After you replace the leg, use the lvconvert command to restore the mirror. This procedure is provided in Section 6.2, "Recovering from LVM Mirror Failure" . The following command converts the linear logical volume vg00/lvol1 to a mirrored logical volume. The following command converts the mirrored logical volume vg00/lvol1 to a linear logical volume, removing the mirror leg. The following example adds an additional mirror leg to the existing logical volume vg00/lvol1 . This example shows the configuration of the volume before and after the lvconvert command changed the volume to a volume with two mirror legs. 4.4.5. Creating Thinly-Provisioned Logical Volumes Logical volumes can be thinly provisioned. This allows you to create logical volumes that are larger than the available extents. Using thin provisioning, you can manage a storage pool of free space, known as a thin pool, which can be allocated to an arbitrary number of devices when needed by applications. You can then create devices that can be bound to the thin pool for later allocation when an application actually writes to the logical volume. The thin pool can be expanded dynamically when needed for cost-effective allocation of storage space. Note This section provides an overview of the basic commands you use to create and grow thinly-provisioned logical volumes. For detailed information on LVM thin provisioning as well as information on using the LVM commands and utilities with thinly-provisioned logical volumes, see the lvmthin (7) man page. Note Thin volumes are not supported across the nodes in a cluster. The thin pool and all its thin volumes must be exclusively activated on only one cluster node. To create a thin volume, perform the following tasks: Create a volume group with the vgcreate command. Create a thin pool with the lvcreate command. Create a thin volume in the thin pool with the lvcreate command. You can use the -T (or --thin ) option of the lvcreate command to create either a thin pool or a thin volume. You can also use the -T option of the lvcreate command to create both a thin pool and a thin volume in that pool at the same time with a single command. The following command uses the -T option of the lvcreate command to create a thin pool named mythinpool that is 100M in size in the volume group vg001 . Note that since you are creating a pool of physical space, you must specify the size of the pool. The -T option of the lvcreate command does not take an argument; it deduces what type of device is to be created from the other options the command specifies. The following command uses the -T option of the lvcreate command to create a thin volume named thinvolume in the thin pool vg001/mythinpool . Note that in this case you are specifying virtual size, and that you are specifying a virtual size for the volume that is greater than the pool that contains it. The following command uses the -T option of the lvcreate command to create a thin pool and a thin volume in that pool by specifying both a size and a virtual size argument for the lvcreate command. This command creates a thin pool named mythinpool in the volume group vg001 and it also creates a thin volume named thinvolume in that pool.
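The -T invocations described above might look like the following sketch; the 1G virtual size is an illustrative assumption:
lvcreate -L 100M -T vg001/mythinpool
lvcreate -V 1G -T vg001/mythinpool -n thinvolume
lvcreate -L 100M -T vg001/mythinpool -V 1G -n thinvolume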
You can also create a thin pool by specifying the --thinpool parameter of the lvcreate command. Unlike the -T option, the --thinpool parameter requires an argument, which is the name of the thin pool logical volume that you are creating. The following example specifies the --thinpool parameter of the lvcreate command to create a thin pool named mythinpool that is 100M in size in the volume group vg001 : Use the following criteria when choosing a chunk size: A smaller chunk size requires more metadata and hinders performance, but provides better space utilization with snapshots. A larger chunk size requires less metadata manipulation but makes snapshots less space efficient. LVM2 calculates chunk size in the following manner: By default, LVM starts with a 64KiB chunk size and increases its value when the resulting size of the thin pool metadata device grows above 128MiB, so the metadata size remains compact. This may result in some big chunk size values, which is less efficient for snapshot usage. In this case, a smaller chunk size and a bigger metadata size is the better option. If the volume data size is in the range of TiB, use ~15.8GiB metadata size, which is the maximum supported size, and set the chunk size according to your requirements. But it is not possible to increase the metadata size if you later need to extend the volume data size and have a small chunk size. Warning Red Hat recommends using at least the default chunk size. If the chunk size is too small and your volume runs out of space for metadata, the volume is unable to allocate new data. Monitor your logical volumes to ensure that they are expanded, or more storage is created, before the metadata volume becomes completely full. Ensure that you set up your thin pool with a large enough chunk size so that it does not run out of room for metadata. Striping is supported for pool creation. The following command creates a 100M thin pool named pool in volume group vg001 with two 64 kB stripes and a chunk size of 256 kB. It also creates a 1T thin volume, vg001/thin_lv . You can extend the size of a thin volume with the lvextend command. You cannot, however, reduce the size of a thin pool. The following command resizes an existing thin pool that is 100M in size by extending it another 100M. As with other types of logical volumes, you can rename the volume with the lvrename command, you can remove the volume with the lvremove command, and you can display information about the volume with the lvs and lvdisplay commands. By default, the lvcreate command sets the size of the thin pool's metadata logical volume according to the formula (Pool_LV_size / Pool_LV_chunk_size * 64). If you will have large numbers of snapshots or if you have small chunk sizes for your thin pool and thus expect significant growth of the size of the thin pool at a later time, you may need to increase the default value of the thin pool's metadata volume with the --poolmetadatasize parameter of the lvcreate command. The supported value for the thin pool's metadata logical volume is in the range between 2MiB and 16GiB. You can use the --thinpool parameter of the lvconvert command to convert an existing logical volume to a thin pool volume. When you convert an existing logical volume to a thin pool volume, you must use the --poolmetadata parameter in conjunction with the --thinpool parameter of the lvconvert command to convert an existing logical volume to the thin pool volume's metadata volume.
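A sketch of the --thinpool creation, the striped thin pool, and the pool extension described above; names follow the examples in the text and the chunk size is given explicitly in kilobytes:
lvcreate -L 100M --thinpool mythinpool vg001
lvcreate -i 2 -I 64 -c 256k -L 100M -T vg001/pool -V 1T -n thin_lv
lvextend -L +100M vg001/mythinpool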
Note Converting a logical volume to a thin pool volume or a thin pool metadata volume destroys the content of the logical volume, since in this case the lvconvert does not preserve the content of the devices but instead overwrites the content. The following example converts the existing logical volume lv1 in volume group vg001 to a thin pool volume and converts the existing logical volume lv2 in volume group vg001 to the metadata volume for that thin pool volume. 4.4.6. Creating Snapshot Volumes Note LVM supports thinly-provisioned snapshots. For information on creating thinly-provisioned snapshot volumes, see Section 4.4.7, "Creating Thinly-Provisioned Snapshot Volumes" . Use the -s argument of the lvcreate command to create a snapshot volume. A snapshot volume is writable. Note LVM snapshots are not supported across the nodes in a cluster. You cannot create a snapshot volume in a clustered volume group. However, if you need to create a consistent backup of data on a clustered logical volume you can activate the volume exclusively and then create the snapshot. For information on activating logical volumes exclusively on one node, see Section 4.7, "Activating Logical Volumes on Individual Nodes in a Cluster" . Note LVM snapshots are supported for mirrored logical volumes. Snapshots are supported for RAID logical volumes. For information on creating RAID logical volumes, see Section 4.4.3, "RAID Logical Volumes" . LVM does not allow you to create a snapshot volume that is larger than the size of the origin volume plus needed metadata for the volume. If you specify a snapshot volume that is larger than this, the system will create a snapshot volume that is only as large as will be needed for the size of the origin. By default, a snapshot volume is skipped during normal activation commands. For information on controlling the activation of a snapshot volume, see Section 4.4.20, "Controlling Logical Volume Activation" . The following command creates a snapshot logical volume that is 100 MB in size named /dev/vg00/snap . This creates a snapshot of the origin logical volume named /dev/vg00/lvol1 . If the original logical volume contains a file system, you can mount the snapshot logical volume on an arbitrary directory in order to access the contents of the file system to run a backup while the original file system continues to get updated. After you create a snapshot logical volume, specifying the origin volume on the lvdisplay command yields output that includes a list of all snapshot logical volumes and their status (active or inactive). The following example shows the status of the logical volume /dev/new_vg/lvol0 , for which a snapshot volume /dev/new_vg/newvgsnap has been created. The lvs command, by default, displays the origin volume and the current percentage of the snapshot volume being used. The following example shows the default output for the lvs command for a system that includes the logical volume /dev/new_vg/lvol0 , for which a snapshot volume /dev/new_vg/newvgsnap has been created. Warning Because the snapshot increases in size as the origin volume changes, it is important to monitor the percentage of the snapshot volume regularly with the lvs command to be sure it does not fill. A snapshot that is 100% full is lost completely, as a write to unchanged parts of the origin would be unable to succeed without corrupting the snapshot. 
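As a sketch, the thin pool conversion and the snapshot example described above might be invoked as follows, with the snapshot usage then monitored as the warning above advises; the volume names are those used in the text:
lvconvert --thinpool vg001/lv1 --poolmetadata vg001/lv2
lvcreate --size 100M --snapshot --name snap /dev/vg00/lvol1
lvdisplay /dev/new_vg/lvol0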
In addition to the snapshot itself being invalidated when full, any mounted file systems on that snapshot device are forcibly unmounted, avoiding the inevitable file system errors upon access to the mount point. In addition, you can specify the snapshot_autoextend_threshold option in the lvm.conf file. This option allows automatic extension of a snapshot whenever the remaining snapshot space drops below the threshold you set. This feature requires that there be unallocated space in the volume group. LVM does not allow you to create a snapshot volume that is larger than the size of the origin volume plus needed metadata for the volume. Similarly, automatic extension of a snapshot will not increase the size of a snapshot volume beyond the maximum calculated size that is necessary for the snapshot. Once a snapshot has grown large enough to cover the origin, it is no longer monitored for automatic extension. Information on setting snapshot_autoextend_threshold and snapshot_autoextend_percent is provided in the lvm.conf file itself. For information about the lvm.conf file, see Appendix B, The LVM Configuration Files . 4.4.7. Creating Thinly-Provisioned Snapshot Volumes Red Hat Enterprise Linux provides support for thinly-provisioned snapshot volumes. For information on the benefits and limitations of thin snapshot volumes, see Section 2.3.6, "Thinly-Provisioned Snapshot Volumes" . Note This section provides an overview of the basic commands you use to create and grow thinly-provisioned snapshot volumes. For detailed information on LVM thin provisioning as well as information on using the LVM commands and utilities with thinly-provisioned logical volumes, see the lvmthin (7) man page. Important When creating a thin snapshot volume, you do not specify the size of the volume. If you specify a size parameter, the snapshot that will be created will not be a thin snapshot volume and will not use the thin pool for storing data. For example, the command lvcreate -s vg/thinvolume -L10M will not create a thin snapshot, even though the origin volume is a thin volume. Thin snapshots can be created for thinly-provisioned origin volumes, or for origin volumes that are not thinly-provisioned. You can specify a name for the snapshot volume with the --name option of the lvcreate command. The following command creates a thinly-provisioned snapshot volume of the thinly-provisioned logical volume vg001/thinvolume that is named mysnapshot1 . Note When using thin provisioning, it is important that the storage administrator monitor the storage pool and add more capacity if it starts to become full. For information on extending the size of a thin volume, see Section 4.4.5, "Creating Thinly-Provisioned Logical Volumes" A thin snapshot volume has the same characteristics as any other thin volume. You can independently activate the volume, extend the volume, rename the volume, remove the volume, and even snapshot the volume. By default, a snapshot volume is skipped during normal activation commands. For information on controlling the activation of a snapshot volume, see Section 4.4.20, "Controlling Logical Volume Activation" . You can also create a thinly-provisioned snapshot of a non-thinly-provisioned logical volume. Since the non-thinly-provisioned logical volume is not contained within a thin pool, it is referred to as an external origin . External origin volumes can be used and shared by many thinly-provisioned snapshot volumes, even from different thin pools. 
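The snapshot_autoextend options are set in the activation section of the lvm.conf file; the threshold and percent values below are illustrative assumptions, not values taken from this text:
snapshot_autoextend_threshold = 70
snapshot_autoextend_percent = 20
A minimal sketch of the thin snapshot creation described above:
lvcreate -s --name mysnapshot1 vg001/thinvolume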
The external origin must be inactive and read-only at the time the thinly-provisioned snapshot is created. To create a thinly-provisioned snapshot of an external origin, you must specify the --thinpool option. The following command creates a thin snapshot volume of the read-only inactive volume origin_volume . The thin snapshot volume is named mythinsnap . The logical volume origin_volume then becomes the thin external origin for the thin snapshot volume mythinsnap in volume group vg001 that will use the existing thin pool vg001/pool . Because the origin volume must be in the same volume group as the snapshot volume, you do not need to specify the volume group when specifying the origin logical volume. You can create a second thinly-provisioned snapshot volume of the first snapshot volume, as in the following command. As of Red Hat Enterprise Linux 7.2, you can display a list of all ancestors and descendants of a thin snapshot logical volume by specifying the lv_ancestors and lv_descendants reporting fields of the lvs command. In the following example: stack1 is an origin volume in volume group vg001 . stack2 is a snapshot of stack1 stack3 is a snapshot of stack2 stack4 is a snapshot of stack3 Additionally: stack5 is also a snapshot of stack2 stack6 is a snapshot of stack5 Note The lv_ancestors and lv_descendants fields display existing dependencies but do not track removed entries which can break a dependency chain if the entry was removed from the middle of the chain. For example, if you remove the logical volume stack3 from this sample configuration, the display is as follows. As of Red Hat Enterprise Linux 7.3, however, you can configure your system to track and display logical volumes that have been removed, and you can display the full dependency chain that includes those volumes by specifying the lv_ancestors_full and lv_descendants_full fields. For information on tracking, displaying, and removing historical logical volumes, see Section 4.4.21, "Tracking and Displaying Historical Logical Volumes (Red Hat Enterprise Linux 7.3 and Later)" . 4.4.8. Creating LVM Cache Logical Volumes As of the Red Hat Enterprise Linux 7.1 release, LVM provides full support for LVM cache logical volumes. A cache logical volume uses a small logical volume consisting of fast block devices (such as SSD drives) to improve the performance of a larger and slower logical volume by storing the frequently used blocks on the smaller, faster logical volume. LVM caching uses the following LVM logical volume types. All of these associated logical volumes must be in the same volume group. Origin logical volume - the large, slow logical volume Cache pool logical volume - the small, fast logical volume, which is composed of two devices: the cache data logical volume, and the cache metadata logical volume Cache data logical volume - the logical volume containing the data blocks for the cache pool logical volume Cache metadata logical volume - the logical volume containing the metadata for the cache pool logical volume, which holds the accounting information that specifies where data blocks are stored (for example, on the origin logical volume or the cache data logical volume). Cache logical volume - the logical volume containing the origin logical volume and the cache pool logical volume. This is the resultant usable device which encapsulates the various cache volume components. The following procedure creates an LVM cache logical volume. 
Create a volume group that contains a slow physical volume and a fast physical volume. In this example, /dev/sde1 is a slow device, /dev/sdf1 is a fast device, and both devices are contained in volume group VG . Create the origin volume. This example creates an origin volume named lv that is ten gigabytes in size and that consists of /dev/sde1 , the slow physical volume. Create the cache pool logical volume. This example creates the cache pool logical volume named cpool on the fast device /dev/sdf1 , which is part of the volume group VG . The cache pool logical volume this command creates consists of the hidden cache data logical volume cpool_cdata and the hidden cache metadata logical volume cpool_cmeta . For more complicated configurations you may need to create the cache data and the cache metadata logical volumes individually and then combine the volumes into a cache pool logical volume. For information on this procedure, see the lvmcache (7) man page. Create the cache logical volume by linking the cache pool logical volume to the origin logical volume. The resulting user-accessible cache logical volume takes the name of the origin logical volume. The origin logical volume becomes a hidden logical volume with _corig appended to the original name. Note that this conversion can be done live, although you must ensure you have performed a backup first. Optionally, as of Red Hat Enterprise Linux release 7.2, you can convert the cached logical volume to a thin pool logical volume. Note that any thin logical volumes created from the pool will share the cache. The following command uses the fast device, /dev/sdf1 , for allocating the thin pool metadata ( lv_tmeta ). This is the same device that is used by the cache pool volume, which means that the thin pool metadata volume shares that device with both the cache data logical volume cpool_cdata and the cache metadata logical volume cpool_cmeta . For further information on LVM cache volumes, including additional administrative examples, see the lvmcache (7) man page. For information on creating thinly-provisioned logical volumes, see Section 4.4.5, "Creating Thinly-Provisioned Logical Volumes" . 4.4.9. Merging Snapshot Volumes You can use the --merge option of the lvconvert command to merge a snapshot into its origin volume. If neither the origin nor the snapshot volume is open, the merge will start immediately. Otherwise, the merge will start the first time either the origin or snapshot are activated and both are closed. Merging a snapshot into an origin that cannot be closed, for example a root file system, is deferred until the next time the origin volume is activated. When merging starts, the resulting logical volume will have the origin's name, minor number and UUID. While the merge is in progress, reads or writes to the origin appear as if they were directed to the snapshot being merged. When the merge finishes, the merged snapshot is removed. The following command merges snapshot volume vg00/lvol1_snap into its origin. You can specify multiple snapshots on the command line, or you can use LVM object tags to specify that multiple snapshots be merged to their respective origins. In the following example, logical volumes vg00/lvol1 , vg00/lvol2 , and vg00/lvol3 are all tagged with the tag @some_tag . The following command merges the snapshot logical volumes for all three volumes serially: vg00/lvol1 , then vg00/lvol2 , then vg00/lvol3 . If the --background option were used, all snapshot logical volume merges would start in parallel.
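A sketch of the merge operations described above, using the snapshot and tag names from the text; adding the --background option to either command would start the merges in the background rather than serially:
lvconvert --merge vg00/lvol1_snap
lvconvert --merge @some_tag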
For information on tagging LVM objects, see Appendix D, LVM Object Tags . For further information on the lvconvert --merge command, see the lvconvert (8) man page. 4.4.10. Persistent Device Numbers Major and minor device numbers are allocated dynamically at module load. Some applications work best if the block device is always activated with the same device (major and minor) number. You can specify these with the lvcreate and the lvchange commands by using the --persistent y argument together with the --major and --minor arguments. Use a large minor number to be sure that it has not already been allocated to another device dynamically. If you are exporting a file system using NFS, specifying the fsid parameter in the exports file may avoid the need to set a persistent device number within LVM. 4.4.11. Changing the Parameters of a Logical Volume To change the parameters of a logical volume, use the lvchange command. For a listing of the parameters you can change, see the lvchange (8) man page. You can use the lvchange command to activate and deactivate logical volumes. To activate and deactivate all the logical volumes in a volume group at the same time, use the vgchange command, as described in Section 4.3.9, "Changing the Parameters of a Volume Group" . The following command changes the permission on volume lvol1 in volume group vg00 to be read-only. 4.4.12. Renaming Logical Volumes To rename an existing logical volume, use the lvrename command. Either of the following commands renames logical volume lvold in volume group vg02 to lvnew . Renaming the root logical volume requires additional reconfiguration. For information on renaming a root volume, see How to rename root volume group or logical volume in Red Hat Enterprise Linux . For more information on activating logical volumes on individual nodes in a cluster, see Section 4.7, "Activating Logical Volumes on Individual Nodes in a Cluster" . 4.4.13. Removing Logical Volumes To remove an inactive logical volume, use the lvremove command. If the logical volume is currently mounted, unmount the volume before removing it. In addition, in a clustered environment you must deactivate a logical volume before it can be removed. The following command removes the logical volume /dev/testvg/testlv from the volume group testvg . Note that in this case the logical volume has not been deactivated. You could explicitly deactivate the logical volume before removing it with the lvchange -an command, in which case you would not see the prompt verifying whether you want to remove an active logical volume. 4.4.14. Displaying Logical Volumes There are three commands you can use to display properties of LVM logical volumes: lvs , lvdisplay , and lvscan . The lvs command provides logical volume information in a configurable form, displaying one line per logical volume. The lvs command provides a great deal of format control, and is useful for scripting. For information on using the lvs command to customize your output, see Section 4.8, "Customized Reporting for LVM" . The lvdisplay command displays logical volume properties (such as size, layout, and mapping) in a fixed format. The following command shows the attributes of lvol2 in vg00 . If snapshot logical volumes have been created for this original logical volume, this command shows a list of all snapshot logical volumes and their status (active or inactive) as well. The lvscan command scans for all logical volumes in the system and lists them, as in the following example.
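The commands referenced in these sections might look like the following sketch; the major and minor numbers are illustrative assumptions, and the volume names are those used in the text:
lvchange --persistent y --major 253 --minor 238 vg00/lvol1
lvchange -pr vg00/lvol1
lvrename vg02 lvold lvnew
lvremove /dev/testvg/testlv
lvdisplay -v /dev/vg00/lvol2
lvscan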
4.4.15. Growing Logical Volumes To increase the size of a logical volume, use the lvextend command. When you extend the logical volume, you can indicate how much you want to extend the volume, or how large you want it to be after you extend it. The following command extends the logical volume /dev/myvg/homevol to 12 gigabytes. The following command adds another gigabyte to the logical volume /dev/myvg/homevol . As with the lvcreate command, you can use the -l argument of the lvextend command to specify the number of extents by which to increase the size of the logical volume. You can also use this argument to specify a percentage of the volume group, or a percentage of the remaining free space in the volume group. The following command extends the logical volume called testlv to fill all of the unallocated space in the volume group myvg . After you have extended the logical volume, it is necessary to increase the file system size to match. By default, most file system resizing tools will increase the size of the file system to be the size of the underlying logical volume, so you do not need to worry about specifying the same size for each of the two commands. 4.4.16. Shrinking Logical Volumes You can reduce the size of a logical volume with the lvreduce command. Note Shrinking is not supported on a GFS2 or XFS file system, so you cannot reduce the size of a logical volume that contains a GFS2 or XFS file system. If the logical volume you are reducing contains a file system, to prevent data loss you must ensure that the file system is not using the space in the logical volume that is being reduced. For this reason, it is recommended that you use the --resizefs option of the lvreduce command when the logical volume contains a file system. When you use this option, the lvreduce command attempts to reduce the file system before shrinking the logical volume. If shrinking the file system fails, as can occur if the file system is full or the file system does not support shrinking, then the lvreduce command will fail and not attempt to shrink the logical volume. Warning In most cases, the lvreduce command warns about possible data loss and asks for a confirmation. However, you should not rely on these confirmation prompts to prevent data loss because in some cases you will not see these prompts, such as when the logical volume is inactive or the --resizefs option is not used. Note that using the --test option of the lvreduce command does not indicate whether the operation is safe, as this option does not check the file system or test the file system resize. The following command shrinks the logical volume lvol1 in volume group vg00 to be 64 megabytes. In this example, lvol1 contains a file system, which this command resizes together with the logical volume. This example shows the output of the command. Specifying the - sign before the resize value indicates that the value will be subtracted from the logical volume's actual size. The following example shows the command you would use if, instead of shrinking a logical volume to an absolute size of 64 megabytes, you wanted to shrink the volume by a value of 64 megabytes. 4.4.17. Extending a Striped Volume In order to increase the size of a striped logical volume, there must be enough free space on the underlying physical volumes that make up the volume group to support the stripe. For example, if you have a two-way stripe that uses up an entire volume group, adding a single physical volume to the volume group will not enable you to extend the stripe.
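The extend and reduce examples described above might be written as follows; the volume names are those used in the text:
lvextend -L 12G /dev/myvg/homevol
lvextend -L +1G /dev/myvg/homevol
lvextend -l +100%FREE /dev/myvg/testlv
lvreduce --resizefs -L 64M vg00/lvol1
lvreduce --resizefs -L -64M vg00/lvol1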
Instead, you must add at least two physical volumes to the volume group. For example, consider a volume group vg that consists of two underlying physical volumes, as displayed with the following vgs command. You can create a stripe using the entire amount of space in the volume group. Note that the volume group now has no more free space. The following command adds another physical volume to the volume group, which then has 135 gigabytes of additional space. At this point you cannot extend the striped logical volume to the full size of the volume group, because two underlying devices are needed in order to stripe the data. To extend the striped logical volume, add another physical volume and then extend the logical volume. In this example, having added two physical volumes to the volume group, we can extend the logical volume to the full size of the volume group. If you do not have enough underlying physical devices to extend the striped logical volume, it is possible to extend the volume anyway if it does not matter that the extension is not striped, which may result in uneven performance. When adding space to the logical volume, the default operation is to use the same striping parameters as the last segment of the existing logical volume, but you can override those parameters. The following example extends the existing striped logical volume to use the remaining free space after the initial lvextend command fails. 4.4.18. Extending a RAID Volume You can grow RAID logical volumes with the lvextend command without performing a synchronization of the new RAID regions. If you specify the --nosync option when you create a RAID logical volume with the lvcreate command, the RAID regions are not synchronized when the logical volume is created. If you later extend a RAID logical volume that you have created with the --nosync option, the RAID extensions are not synchronized at that time, either. You can determine whether an existing logical volume was created with the --nosync option by using the lvs command to display the volume's attributes. A logical volume will show "R" as the first character in the attribute field if it is a RAID volume that was created without an initial synchronization, and it will show "r" if it was created with initial synchronization. The following command displays the attributes of a RAID logical volume named lv that was created without initial synchronization, showing "R" as the first character in the attribute field. The seventh character in the attribute field is "r", indicating a target type of RAID. For information on the meaning of the attribute field, see Table 4.5, "lvs Display Fields" . If you grow this logical volume with the lvextend command, the RAID extension will not be resynchronized. If you created a RAID logical volume without specifying the --nosync option of the lvcreate command, you can grow the logical volume without resynchronizing the mirror by specifying the --nosync option of the lvextend command. The following example extends a RAID logical volume that was created without the --nosync option, indicating that the RAID volume was synchronized when it was created. This example, however, specifies that the volume not be synchronized when the volume is extended. Note that the volume has an attribute of "r", but after executing the lvextend command with the --nosync option the volume has an attribute of "R".
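A sketch of the striped-volume extension and the --nosync RAID extension described above; the stripe1 logical volume name and the sizes are assumptions, and the -i 1 form shows how the striping parameters can be overridden when too few devices are available:
vgextend vg /dev/sdc1 /dev/sdd1
lvextend -l +100%FREE vg/stripe1
lvextend -i 1 -l +100%FREE vg/stripe1
lvextend -L +2G --nosync my_vg/my_lv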
If a RAID volume is inactive, it will not automatically skip synchronization when you extend the volume, even if you create the volume with the --nosync option specified. Instead, you will be prompted whether to do a full resync of the extended portion of the logical volume. Note If a RAID volume is performing recovery, you cannot extend the logical volume if you created or extended the volume with the --nosync option specified. If you did not specify the --nosync option, however, you can extend the RAID volume while it is recovering. 4.4.19. Extending a Logical Volume with the cling Allocation Policy When extending an LVM volume, you can use the --alloc cling option of the lvextend command to specify the cling allocation policy. This policy will choose space on the same physical volumes as the last segment of the existing logical volume. If there is insufficient space on the physical volumes and a list of tags is defined in the lvm.conf file, LVM will check whether any of the tags are attached to the physical volumes and seek to match those physical volume tags between existing extents and new extents. For example, if you have logical volumes that are mirrored between two sites within a single volume group, you can tag the physical volumes according to where they are situated by tagging the physical volumes with @site1 and @site2 tags. You can then specify the following line in the lvm.conf file: For information on tagging physical volumes, see Appendix D, LVM Object Tags . In the following example, the lvm.conf file has been modified to contain the following line: Also in this example, a volume group taft has been created that consists of the physical volumes /dev/sdb1 , /dev/sdc1 , /dev/sdd1 , /dev/sde1 , /dev/sdf1 , /dev/sdg1 , and /dev/sdh1 . These physical volumes have been tagged with tags A , B , and C . The example does not use the C tag, but this will show that LVM uses the tags to select which physical volumes to use for the mirror legs. The following command creates a 10 gigabyte mirrored volume from the volume group taft . The following command shows which devices are used for the mirror legs and RAID metadata subvolumes. The following command extends the size of the mirrored volume, using the cling allocation policy to indicate that the mirror legs should be extended using physical volumes with the same tag. The following display command shows that the mirror legs have been extended using physical volumes with the same tag as the leg. Note that the physical volumes with a tag of C were ignored. 4.4.20. Controlling Logical Volume Activation You can flag a logical volume to be skipped during normal activation commands with the -k or --setactivationskip {y|n} option of the lvcreate or lvchange command. This flag is not applied during deactivation. You can determine whether this flag is set for a logical volume with the lvs command, which displays the k attribute as in the following example. By default, thin snapshot volumes are flagged for activation skip. You can activate a logical volume with the k attribute set by using the -K or --ignoreactivationskip option in addition to the standard -ay or --activate y option. The following command activates a thin snapshot logical volume. The persistent "activation skip" flag can be turned off when the logical volume is created by specifying the -kn or --setactivationskip n option of the lvcreate command. 
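The cling_tag_list entries referenced above are set in the allocation section of the lvm.conf file and, with the tags used in these examples, would take roughly this form:
cling_tag_list = [ "@site1", "@site2" ]
cling_tag_list = [ "@A", "@B" ]
An extension that uses the cling policy, and activation of a volume flagged for activation skip, might then look like the following sketch; the taft/mirror and vg/snap names are assumptions:
lvextend --alloc cling -L +10G taft/mirror
lvchange -ay -K vg/snap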
You can turn the flag off for an existing logical volume by specifying the -kn or --setactivationskip n option of the lvchange command. You can turn the flag on again with the -ky or --setactivationskip y option. The following command creates a snapshot logical volume without the activation skip flag. The following command removes the activation skip flag from a snapshot logical volume. You can control the default activation skip setting with the auto_set_activation_skip setting in the /etc/lvm/lvm.conf file. 4.4.21. Tracking and Displaying Historical Logical Volumes (Red Hat Enterprise Linux 7.3 and Later) As of Red Hat Enterprise Linux 7.3, you can configure your system to track thin snapshot and thin logical volumes that have been removed by enabling the record_lvs_history metadata option in the lvm.conf configuration file. This allows you to display a full thin snapshot dependency chain that includes logical volumes that have been removed from the original dependency chain and have become historical logical volumes. You can configure your system to retain historical volumes for a defined period of time by specifying the retention time, in seconds, with the lvs_history_retention_time metadata option in the lvm.conf configuration file. A historical logical volume retains a simplified representation of the logical volume that has been removed, including the following reporting fields for the volume: lv_time_removed : the removal time of the logical volume lv_time : the creation time of the logical volume lv_name : the name of the logical volume lv_uuid : the UUID of the logical volume vg_name : the volume group that contains the logical volume. When a volume is removed, the historical logical volume name acquires a hyphen as a prefix. For example, when you remove the logical volume lvol1 , the name of the historical volume is -lvol1 . A historical logical volume cannot be reactivated. Even when the record_lvs_history metadata option is enabled, you can prevent the retention of historical logical volumes on an individual basis when you remove a logical volume by specifying the --nohistory option of the lvremove command. To include historical logical volumes in volume display, you specify the -H|--history option of an LVM display command. You can display a full thin snapshot dependency chain that includes historical volumes by specifying the lv_full_ancestors and lv_full_descendants reporting fields along with the -H option. The following series of commands provides examples of how you can display and manage historical logical volumes. Ensure that historical logical volumes are retained by setting record_lvs_history=1 in the lvm.conf file. This metadata option is not enabled by default. Enter the following command to display a thin provisioned snapshot chain. In this example: lvol1 is an origin volume, the first volume in the chain. lvol2 is a snapshot of lvol1 . lvol3 is a snapshot of lvol2 . lvol4 is a snapshot of lvol3 . lvol5 is also a snapshot of lvol3 . Note that even though the example lvs display command includes the -H option, no thin snapshot volume has yet been removed and there are no historical logical volumes to display. Remove logical volume lvol3 from the snapshot chain, then run the following lvs command again to see how historical logical volumes are displayed, along with their ancestors and descendants. You can use the lv_time_removed reporting field to display the time a historical volume was removed.
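A sketch of the tracking setup and the display and removal commands described in this procedure; record_lvs_history is set in the metadata section of the lvm.conf file, and the volume group name vg00 is an assumption:
record_lvs_history = 1
lvs -H -o name,lv_full_ancestors,lv_full_descendants vg00
lvremove --nohistory vg00/lvol3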
You can reference historical logical volumes individually in a display command by specifying the vgname/lvname format, as in the following example. Note that the fifth character in the lv_attr field is set to h to indicate the volume is a historical volume. LVM does not keep historical logical volumes if the volume has no live descendant. This means that if you remove a logical volume at the end of a snapshot chain, the logical volume is not retained as a historical logical volume. Run the following commands to remove the volumes lvol1 and lvol2 and to see how the lvs command displays the volumes once they have been removed. To remove a historical logical volume completely, you can run the lvremove command again, specifying the name of the historical volume that now includes the hyphen, as in the following example. A historical logical volume is retained as long as there is a chain that includes live volumes in its descendants. This means that removing a historical logical volume also removes all of the logical volumes in the chain if no existing descendant is linked to them, as shown in the following example.
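A sketch of referencing and then fully removing a historical volume by its hyphen-prefixed name; the volume group name vg00 is an assumption:
lvs -H vg00/-lvol1
lvremove -y vg00/-lvol1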
"lvcreate -L 10G vg1",
"lvcreate -L 1500 -n testlv testvg",
"lvcreate -L 50G -n gfslv vg0",
"lvcreate -l 60%VG -n mylv testvg",
"lvcreate -l 100%FREE -n yourlv testvg",
"vgdisplay testvg | grep \"Total PE\" Total PE 10230 lvcreate -l 10230 -n mylv testvg",
"lvcreate -L 1500 -n testlv testvg /dev/sdg1",
"lvcreate -l 100 -n testlv testvg /dev/sda1:0-24 /dev/sdb1:50-124",
"lvcreate -l 100 -n testlv testvg /dev/sda1:0-25:100-",
"lvcreate -L 50G -i 2 -I 64 -n gfslv vg0",
"lvcreate -l 100 -i 2 -n stripelv testvg /dev/sda1:0-49 /dev/sdb1:50-99 Using default stripesize 64.00 KB Logical volume \"stripelv\" created",
"lvcreate --type raid1 -m 1 -L 1G -n my_lv my_vg",
"lvcreate --type raid5 -i 3 -L 1G -n my_lv my_vg",
"lvcreate --type raid6 -i 3 -L 1G -n my_lv my_vg",
"lvcreate --type raid10 -i 2 -m 1 -L 10G --maxrecoveryrate 128 -n my_lv my_vg",
"lvcreate --type raid0[_meta] --stripes Stripes --stripesize StripeSize VolumeGroup [ PhysicalVolumePath ...]",
"lvconvert --type raid1 -m 1 my_vg/my_lv",
"lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv /dev/sde1(0)",
"lvconvert --type raid1 -m 1 my_vg/my_lv lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 6.25 my_lv_rimage_0(0),my_lv_rimage_1(0) [my_lv_rimage_0] /dev/sde1(0) [my_lv_rimage_1] /dev/sdf1(1) [my_lv_rmeta_0] /dev/sde1(256) [my_lv_rmeta_1] /dev/sdf1(0)",
"lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0) [my_lv_rimage_0] /dev/sde1(1) [my_lv_rimage_1] /dev/sdf1(1) [my_lv_rmeta_0] /dev/sde1(0) [my_lv_rmeta_1] /dev/sdf1(0)",
"lvconvert -m0 my_vg/my_lv lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv /dev/sde1(1)",
"lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0) [my_lv_rimage_0] /dev/sda1(1) [my_lv_rimage_1] /dev/sdb1(1) [my_lv_rmeta_0] /dev/sda1(0) [my_lv_rmeta_1] /dev/sdb1(0) lvconvert -m0 my_vg/my_lv /dev/sda1 lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv /dev/sdb1(1)",
"lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 15.20 my_lv_mimage_0(0),my_lv_mimage_1(0) [my_lv_mimage_0] /dev/sde1(0) [my_lv_mimage_1] /dev/sdf1(0) [my_lv_mlog] /dev/sdd1(0)",
"lvconvert --type raid1 my_vg/my_lv lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0) [my_lv_rimage_0] /dev/sde1(0) [my_lv_rimage_1] /dev/sdf1(0) [my_lv_rmeta_0] /dev/sde1(125) [my_lv_rmeta_1] /dev/sdf1(125)",
"lvconvert -m new_absolute_count vg/lv [ removable_PVs ] lvconvert -m + num_additional_images vg/lv [ removable_PVs ]",
"lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 6.25 my_lv_rimage_0(0),my_lv_rimage_1(0) [my_lv_rimage_0] /dev/sde1(0) [my_lv_rimage_1] /dev/sdf1(1) [my_lv_rmeta_0] /dev/sde1(256) [my_lv_rmeta_1] /dev/sdf1(0)",
"lvconvert -m 2 my_vg/my_lv lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 6.25 my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0) [my_lv_rimage_0] /dev/sde1(0) [my_lv_rimage_1] /dev/sdf1(1) [my_lv_rimage_2] /dev/sdg1(1) [my_lv_rmeta_0] /dev/sde1(256) [my_lv_rmeta_1] /dev/sdf1(0) [my_lv_rmeta_2] /dev/sdg1(0)",
"lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 56.00 my_lv_rimage_0(0),my_lv_rimage_1(0) [my_lv_rimage_0] /dev/sda1(1) [my_lv_rimage_1] /dev/sdb1(1) [my_lv_rmeta_0] /dev/sda1(0) [my_lv_rmeta_1] /dev/sdb1(0) lvconvert -m 2 my_vg/my_lv /dev/sdd1 lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 28.00 my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0) [my_lv_rimage_0] /dev/sda1(1) [my_lv_rimage_1] /dev/sdb1(1) [my_lv_rimage_2] /dev/sdd1(1) [my_lv_rmeta_0] /dev/sda1(0) [my_lv_rmeta_1] /dev/sdb1(0) [my_lv_rmeta_2] /dev/sdd1(0)",
"lvconvert -m new_absolute_count vg/lv [ removable_PVs ] lvconvert -m - num_fewer_images vg/lv [ removable_PVs ]",
"lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0) [my_lv_rimage_0] /dev/sde1(1) [my_lv_rimage_1] /dev/sdf1(1) [my_lv_rimage_2] /dev/sdg1(1) [my_lv_rmeta_0] /dev/sde1(0) [my_lv_rmeta_1] /dev/sdf1(0) [my_lv_rmeta_2] /dev/sdg1(0)",
"lvconvert -m1 my_vg/my_lv lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0) [my_lv_rimage_0] /dev/sde1(1) [my_lv_rimage_1] /dev/sdf1(1) [my_lv_rmeta_0] /dev/sde1(0) [my_lv_rmeta_1] /dev/sdf1(0)",
"lvconvert -m1 my_vg/my_lv /dev/sde1 lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0) [my_lv_rimage_0] /dev/sdf1(1) [my_lv_rimage_1] /dev/sdg1(1) [my_lv_rmeta_0] /dev/sdf1(0) [my_lv_rmeta_1] /dev/sdg1(0)",
"lvconvert --splitmirrors count -n splitname vg/lv [ removable_PVs ]",
"lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 12.00 my_lv_rimage_0(0),my_lv_rimage_1(0) [my_lv_rimage_0] /dev/sde1(1) [my_lv_rimage_1] /dev/sdf1(1) [my_lv_rmeta_0] /dev/sde1(0) [my_lv_rmeta_1] /dev/sdf1(0) lvconvert --splitmirror 1 -n new my_vg/my_lv lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv /dev/sde1(1) new /dev/sdf1(1)",
"lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0) [my_lv_rimage_0] /dev/sde1(1) [my_lv_rimage_1] /dev/sdf1(1) [my_lv_rimage_2] /dev/sdg1(1) [my_lv_rmeta_0] /dev/sde1(0) [my_lv_rmeta_1] /dev/sdf1(0) [my_lv_rmeta_2] /dev/sdg1(0) lvconvert --splitmirror 1 -n new my_vg/my_lv lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0) [my_lv_rimage_0] /dev/sde1(1) [my_lv_rimage_1] /dev/sdf1(1) [my_lv_rmeta_0] /dev/sde1(0) [my_lv_rmeta_1] /dev/sdf1(0) new /dev/sdg1(1)",
"lvconvert --splitmirrors count --trackchanges vg/lv [ removable_PVs ]",
"lvconvert --merge raid_image",
"lvcreate --type raid1 -m 2 -L 1G -n my_lv .vg Logical volume \"my_lv\" created lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0) [my_lv_rimage_0] /dev/sdb1(1) [my_lv_rimage_1] /dev/sdc1(1) [my_lv_rimage_2] /dev/sdd1(1) [my_lv_rmeta_0] /dev/sdb1(0) [my_lv_rmeta_1] /dev/sdc1(0) [my_lv_rmeta_2] /dev/sdd1(0) lvconvert --splitmirrors 1 --trackchanges my_vg/my_lv my_lv_rimage_2 split from my_lv for read-only purposes. Use 'lvconvert --merge my_vg/my_lv_rimage_2' to merge back into my_lv lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0) [my_lv_rimage_0] /dev/sdb1(1) [my_lv_rimage_1] /dev/sdc1(1) my_lv_rimage_2 /dev/sdd1(1) [my_lv_rmeta_0] /dev/sdb1(0) [my_lv_rmeta_1] /dev/sdc1(0) [my_lv_rmeta_2] /dev/sdd1(0)",
"lvconvert --splitmirrors 1 --trackchanges my_vg/my_lv lv_rimage_1 split from my_lv for read-only purposes. Use 'lvconvert --merge my_vg/my_lv_rimage_1' to merge back into my_lv lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0) [my_lv_rimage_0] /dev/sdc1(1) my_lv_rimage_1 /dev/sdd1(1) [my_lv_rmeta_0] /dev/sdc1(0) [my_lv_rmeta_1] /dev/sdd1(0) lvconvert --merge my_vg/my_lv_rimage_1 my_vg/my_lv_rimage_1 successfully merged back into my_vg/my_lv lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0) [my_lv_rimage_0] /dev/sdc1(1) [my_lv_rimage_1] /dev/sdd1(1) [my_lv_rmeta_0] /dev/sdc1(0) [my_lv_rmeta_1] /dev/sdd1(0)",
"lvconvert --splitmirrors 1 --trackchanges my_vg/my_lv my_lv_rimage_1 split from my_lv for read-only purposes. Use 'lvconvert --merge my_vg/my_lv_rimage_1' to merge back into my_lv lvconvert --splitmirrors 1 -n new my_vg/my_lv lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv /dev/sdc1(1) new /dev/sdd1(1)",
"lvconvert --splitmirrors 1 --trackchanges my_vg/my_lv my_lv_rimage_1 split from my_lv for read-only purposes. Use 'lvconvert --merge my_vg/my_lv_rimage_1' to merge back into my_lv lvconvert --splitmirrors 1 --trackchanges my_vg/my_lv Cannot track more than one split image at a time",
"lvconvert --splitmirrors 1 --trackchanges my_vg/my_lv my_lv_rimage_1 split from my_lv for read-only purposes. Use 'lvconvert --merge my_vg/my_lv_rimage_1' to merge back into my_lv lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0) [my_lv_rimage_0] /dev/sdc1(1) my_lv_rimage_1 /dev/sdd1(1) [my_lv_rmeta_0] /dev/sdc1(0) [my_lv_rmeta_1] /dev/sdd1(0) lvconvert --splitmirrors 1 -n new my_vg/my_lv /dev/sdc1 Unable to split additional image from my_lv while tracking changes for my_lv_rimage_1",
"lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0) [my_lv_rimage_0] /dev/sde1(1) [my_lv_rimage_1] /dev/sdf1(1) [my_lv_rimage_2] /dev/sdg1(1) [my_lv_rmeta_0] /dev/sde1(0) [my_lv_rmeta_1] /dev/sdf1(0) [my_lv_rmeta_2] /dev/sdg1(0)",
"grep lvm /var/log/messages Jan 17 15:57:18 bp-01 lvm[8599]: Device #0 of raid1 array, my_vg-my_lv, has failed. Jan 17 15:57:18 bp-01 lvm[8599]: /dev/sde1: read failed after 0 of 2048 at 250994294784: Input/output error Jan 17 15:57:18 bp-01 lvm[8599]: /dev/sde1: read failed after 0 of 2048 at 250994376704: Input/output error Jan 17 15:57:18 bp-01 lvm[8599]: /dev/sde1: read failed after 0 of 2048 at 0: Input/output error Jan 17 15:57:18 bp-01 lvm[8599]: /dev/sde1: read failed after 0 of 2048 at 4096: Input/output error Jan 17 15:57:19 bp-01 lvm[8599]: Couldn't find device with uuid 3lugiV-3eSP-AFAR-sdrP-H20O-wM2M-qdMANy. Jan 17 15:57:27 bp-01 lvm[8599]: raid1 array, my_vg-my_lv, is not in-sync. Jan 17 15:57:36 bp-01 lvm[8599]: raid1 array, my_vg-my_lv, is now in-sync.",
"lvs -a -o name,copy_percent,devices vg Couldn't find device with uuid 3lugiV-3eSP-AFAR-sdrP-H20O-wM2M-qdMANy. LV Copy% Devices lv 100.00 lv_rimage_0(0),lv_rimage_1(0),lv_rimage_2(0) [lv_rimage_0] /dev/sdh1(1) [lv_rimage_1] /dev/sdf1(1) [lv_rimage_2] /dev/sdg1(1) [lv_rmeta_0] /dev/sdh1(0) [lv_rmeta_1] /dev/sdf1(0) [lv_rmeta_2] /dev/sdg1(0)",
"lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0) [my_lv_rimage_0] /dev/sdh1(1) [my_lv_rimage_1] /dev/sdf1(1) [my_lv_rimage_2] /dev/sdg1(1) [my_lv_rmeta_0] /dev/sdh1(0) [my_lv_rmeta_1] /dev/sdf1(0) [my_lv_rmeta_2] /dev/sdg1(0)",
"lvconvert --repair my_vg/my_lv /dev/sdh1: read failed after 0 of 2048 at 250994294784: Input/output error /dev/sdh1: read failed after 0 of 2048 at 250994376704: Input/output error /dev/sdh1: read failed after 0 of 2048 at 0: Input/output error /dev/sdh1: read failed after 0 of 2048 at 4096: Input/output error Couldn't find device with uuid fbI0YO-GX7x-firU-Vy5o-vzwx-vAKZ-feRxfF. Attempt to replace failed RAID images (requires full device resync)? [y/n]: y lvs -a -o name,copy_percent,devices my_vg Couldn't find device with uuid fbI0YO-GX7x-firU-Vy5o-vzwx-vAKZ-feRxfF. LV Copy% Devices my_lv 64.00 my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0) [my_lv_rimage_0] /dev/sde1(1) [my_lv_rimage_1] /dev/sdf1(1) [my_lv_rimage_2] /dev/sdg1(1) [my_lv_rmeta_0] /dev/sde1(0) [my_lv_rmeta_1] /dev/sdf1(0) [my_lv_rmeta_2] /dev/sdg1(0)",
"lvchange --refresh my_vg/my_lv",
"lvconvert --replace dev_to_remove vg/lv [ possible_replacements ]",
"lvcreate --type raid1 -m 2 -L 1G -n my_lv my_vg Logical volume \"my_lv\" created lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0) [my_lv_rimage_0] /dev/sdb1(1) [my_lv_rimage_1] /dev/sdb2(1) [my_lv_rimage_2] /dev/sdc1(1) [my_lv_rmeta_0] /dev/sdb1(0) [my_lv_rmeta_1] /dev/sdb2(0) [my_lv_rmeta_2] /dev/sdc1(0) lvconvert --replace /dev/sdb2 my_vg/my_lv lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 37.50 my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0) [my_lv_rimage_0] /dev/sdb1(1) [my_lv_rimage_1] /dev/sdc2(1) [my_lv_rimage_2] /dev/sdc1(1) [my_lv_rmeta_0] /dev/sdb1(0) [my_lv_rmeta_1] /dev/sdc2(0) [my_lv_rmeta_2] /dev/sdc1(0)",
"lvcreate --type raid1 -m 1 -L 100 -n my_lv my_vg Logical volume \"my_lv\" created lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0) [my_lv_rimage_0] /dev/sda1(1) [my_lv_rimage_1] /dev/sdb1(1) [my_lv_rmeta_0] /dev/sda1(0) [my_lv_rmeta_1] /dev/sdb1(0) pvs PV VG Fmt Attr PSize PFree /dev/sda1 my_vg lvm2 a-- 1020.00m 916.00m /dev/sdb1 my_vg lvm2 a-- 1020.00m 916.00m /dev/sdc1 my_vg lvm2 a-- 1020.00m 1020.00m /dev/sdd1 my_vg lvm2 a-- 1020.00m 1020.00m lvconvert --replace /dev/sdb1 my_vg/my_lv /dev/sdd1 lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 28.00 my_lv_rimage_0(0),my_lv_rimage_1(0) [my_lv_rimage_0] /dev/sda1(1) [my_lv_rimage_1] /dev/sdd1(1) [my_lv_rmeta_0] /dev/sda1(0) [my_lv_rmeta_1] /dev/sdd1(0)",
"lvcreate --type raid1 -m 2 -L 100 -n my_lv my_vg Logical volume \"my_lv\" created lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0) [my_lv_rimage_0] /dev/sda1(1) [my_lv_rimage_1] /dev/sdb1(1) [my_lv_rimage_2] /dev/sdc1(1) [my_lv_rmeta_0] /dev/sda1(0) [my_lv_rmeta_1] /dev/sdb1(0) [my_lv_rmeta_2] /dev/sdc1(0) lvconvert --replace /dev/sdb1 --replace /dev/sdc1 my_vg/my_lv lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 60.00 my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0) [my_lv_rimage_0] /dev/sda1(1) [my_lv_rimage_1] /dev/sdd1(1) [my_lv_rimage_2] /dev/sde1(1) [my_lv_rmeta_0] /dev/sda1(0) [my_lv_rmeta_1] /dev/sdd1(0) [my_lv_rmeta_2] /dev/sde1(0)",
"lvchange --syncaction {check|repair} vg/raid_lv",
"lvs -o +raid_sync_action,raid_mismatch_count vg/lv",
"lvconvert -R 4096K vg/raid1 Do you really want to change the region_size 512.00 KiB of LV vg/raid1 to 4.00 MiB? [y/n]: y Changed region size on RAID LV vg/raid1 to 4.00 MiB.",
"lvcreate --type mirror -L 50G -m 1 -n mirrorlv vg0",
"lvcreate --type mirror -m 1 -L 2T -R 2 -n mirror vol_group",
"lvcreate --type mirror -L 12MB -m 1 --mirrorlog core -n ondiskmirvol bigvg Logical volume \"ondiskmirvol\" created",
"lvcreate --type mirror -L 500M -m 1 -n mirrorlv -alloc anywhere vg0",
"lvcreate --type mirror -L 12MB -m 1 --mirrorlog mirrored -n twologvol bigvg Logical volume \"twologvol\" created",
"lvcreate --type mirror -L 500M -m 1 -n mirrorlv vg0 /dev/sda1 /dev/sdb1 /dev/sdc1",
"lvcreate --type mirror -L 500M -m 1 -n mirrorlv vg0 /dev/sda1:0-499 /dev/sdb1:0-499 /dev/sdc1:0",
"lvconvert --splitmirrors 2 --name copy vg/lv",
"lvconvert --splitmirrors 2 --name copy vg/lv /dev/sd[ce]1",
"lvconvert -m1 vg00/lvol1",
"lvconvert -m0 vg00/lvol1",
"lvs -a -o name,copy_percent,devices vg00 LV Copy% Devices lvol1 100.00 lvol1_mimage_0(0),lvol1_mimage_1(0) [lvol1_mimage_0] /dev/sda1(0) [lvol1_mimage_1] /dev/sdb1(0) [lvol1_mlog] /dev/sdd1(0) lvconvert -m 2 vg00/lvol1 vg00/lvol1: Converted: 13.0% vg00/lvol1: Converted: 100.0% Logical volume lvol1 converted. lvs -a -o name,copy_percent,devices vg00 LV Copy% Devices lvol1 100.00 lvol1_mimage_0(0),lvol1_mimage_1(0),lvol1_mimage_2(0) [lvol1_mimage_0] /dev/sda1(0) [lvol1_mimage_1] /dev/sdb1(0) [lvol1_mimage_2] /dev/sdc1(0) [lvol1_mlog] /dev/sdd1(0)",
"lvcreate -L 100M -T vg001/mythinpool Rounding up size to full physical extent 4.00 MiB Logical volume \"mythinpool\" created lvs LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert my mythinpool vg001 twi-a-tz 100.00m 0.00",
"lvcreate -V 1G -T vg001/mythinpool -n thinvolume Logical volume \"thinvolume\" created lvs LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert mythinpool vg001 twi-a-tz 100.00m 0.00 thinvolume vg001 Vwi-a-tz 1.00g mythinpool 0.00",
"lvcreate -L 100M -T vg001/mythinpool -V 1G -n thinvolume Rounding up size to full physical extent 4.00 MiB Logical volume \"thinvolume\" created lvs LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert mythinpool vg001 twi-a-tz 100.00m 0.00 thinvolume vg001 Vwi-a-tz 1.00g mythinpool 0.00",
"lvcreate -L 100M --thinpool mythinpool vg001 Rounding up size to full physical extent 4.00 MiB Logical volume \"mythinpool\" created lvs LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert mythinpool vg001 twi-a-tz 100.00m 0.00",
"lvcreate -i 2 -I 64 -c 256 -L 100M -T vg00/pool -V 1T --name thin_lv",
"lvextend -L+100M vg001/mythinpool Extending logical volume mythinpool to 200.00 MiB Logical volume mythinpool successfully resized lvs LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert mythinpool vg001 twi-a-tz 200.00m 0.00 thinvolume vg001 Vwi-a-tz 1.00g mythinpool 0.00",
"lvconvert --thinpool vg001/lv1 --poolmetadata vg001/lv2 Converted vg001/lv1 to thin pool.",
"lvcreate --size 100M --snapshot --name snap /dev/vg00/lvol1",
"lvdisplay /dev/new_vg/lvol0 --- Logical volume --- LV Name /dev/new_vg/lvol0 VG Name new_vg LV UUID LBy1Tz-sr23-OjsI-LT03-nHLC-y8XW-EhCl78 LV Write Access read/write LV snapshot status source of /dev/new_vg/newvgsnap1 [active] LV Status available # open 0 LV Size 52.00 MB Current LE 13 Segments 1 Allocation inherit Read ahead sectors 0 Block device 253:2",
"lvs LV VG Attr LSize Origin Snap% Move Log Copy% lvol0 new_vg owi-a- 52.00M newvgsnap1 new_vg swi-a- 8.00M lvol0 0.20",
"lvcreate -s --name mysnapshot1 vg001/thinvolume Logical volume \"mysnapshot1\" created lvs LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert mysnapshot1 vg001 Vwi-a-tz 1.00g mythinpool thinvolume 0.00 mythinpool vg001 twi-a-tz 100.00m 0.00 thinvolume vg001 Vwi-a-tz 1.00g mythinpool 0.00",
"lvcreate -s --thinpool vg001/pool origin_volume --name mythinsnap",
"lvcreate -s vg001/mythinsnap --name my2ndthinsnap",
"lvs -o name,lv_ancestors,lv_descendants vg001 LV Ancestors Descendants stack1 stack2,stack3,stack4,stack5,stack6 stack2 stack1 stack3,stack4,stack5,stack6 stack3 stack2,stack1 stack4 stack4 stack3,stack2,stack1 stack5 stack2,stack1 stack6 stack6 stack5,stack2,stack1 pool",
"lvs -o name,lv_ancestors,lv_descendants vg001 LV Ancestors Descendants stack1 stack2,stack5,stack6 stack2 stack1 stack5,stack6 stack4 stack5 stack2,stack1 stack6 stack6 stack5,stack2,stack1 pool",
"pvcreate /dev/sde1 pvcreate /dev/sdf1 vgcreate VG /dev/sde1 /dev/sdf1",
"lvcreate -L 10G -n lv VG /dev/sde1",
"lvcreate --type cache-pool -L 5G -n cpool VG /dev/sdf1 Using default stripesize 64.00 KiB. Logical volume \"cpool\" created. lvs -a -o name,size,attr,devices VG LV LSize Attr Devices [cpool] 5.00g Cwi---C--- cpool_cdata(0) [cpool_cdata] 5.00g Cwi-ao---- /dev/sdf1(4) [cpool_cmeta] 8.00m ewi-ao---- /dev/sdf1(2)",
"lvconvert --type cache --cachepool cpool VG/lv Logical volume cpool is now cached. lvs -a -o name,size,attr,devices vg LV LSize Attr Devices [cpool] 5.00g Cwi---C--- cpool_cdata(0) [cpool_cdata] 5.00g Cwi-ao---- /dev/sdf1(4) [cpool_cmeta] 8.00m ewi-ao---- /dev/sdf1(2) lv 10.00g Cwi-a-C--- lv_corig(0) [lv_corig] 10.00g owi-aoC--- /dev/sde1(0) [lvol0_pmspare] 8.00m ewi------- /dev/sdf1(0)",
"lvconvert --type thin-pool VG/lv /dev/sdf1 WARNING: Converting logical volume VG/lv to thin pool's data volume with metadata wiping. THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.) Do you really want to convert VG/lv? [y/n]: y Converted VG/lv to thin pool. lvs -a -o name,size,attr,devices vg LV LSize Attr Devices [cpool] 5.00g Cwi---C--- cpool_cdata(0) [cpool_cdata] 5.00g Cwi-ao---- /dev/sdf1(4) [cpool_cmeta] 8.00m ewi-ao---- /dev/sdf1(2) lv 10.00g twi-a-tz-- lv_tdata(0) [lv_tdata] 10.00g Cwi-aoC--- lv_tdata_corig(0) [lv_tdata_corig] 10.00g owi-aoC--- /dev/sde1(0) [lv_tmeta] 12.00m ewi-ao---- /dev/sdf1(1284) [lvol0_pmspare] 12.00m ewi------- /dev/sdf1(0) [lvol0_pmspare] 12.00m ewi------- /dev/sdf1(1287)",
"lvconvert --merge vg00/lvol1_snap",
"lvconvert --merge @some_tag",
"--persistent y --major major --minor minor",
"lvchange -pr vg00/lvol1",
"lvrename /dev/vg02/lvold /dev/vg02/lvnew",
"lvrename vg02 lvold lvnew",
"lvremove /dev/testvg/testlv Do you really want to remove active logical volume \"testlv\"? [y/n]: y Logical volume \"testlv\" successfully removed",
"lvdisplay -v /dev/vg00/lvol2",
"lvscan ACTIVE '/dev/vg0/gfslv' [1.46 GB] inherit",
"lvextend -L12G /dev/myvg/homevol lvextend -- extending logical volume \"/dev/myvg/homevol\" to 12 GB lvextend -- doing automatic backup of volume group \"myvg\" lvextend -- logical volume \"/dev/myvg/homevol\" successfully extended",
"lvextend -L+1G /dev/myvg/homevol lvextend -- extending logical volume \"/dev/myvg/homevol\" to 13 GB lvextend -- doing automatic backup of volume group \"myvg\" lvextend -- logical volume \"/dev/myvg/homevol\" successfully extended",
"lvextend -l +100%FREE /dev/myvg/testlv Extending logical volume testlv to 68.59 GB Logical volume testlv successfully resized",
"lvreduce --resizefs -L 64M vg00/lvol1 fsck from util-linux 2.23.2 /dev/mapper/vg00-lvol1: clean, 11/25688 files, 8896/102400 blocks resize2fs 1.42.9 (28-Dec-2013) Resizing the filesystem on /dev/mapper/vg00-lvol1 to 65536 (1k) blocks. The filesystem on /dev/mapper/vg00-lvol1 is now 65536 blocks long. Size of logical volume vg00/lvol1 changed from 100.00 MiB (25 extents) to 64.00 MiB (16 extents). Logical volume vg00/lvol1 successfully resized.",
"lvreduce --resizefs -L -64M vg00/lvol1",
"vgs VG #PV #LV #SN Attr VSize VFree vg 2 0 0 wz--n- 271.31G 271.31G",
"lvcreate -n stripe1 -L 271.31G -i 2 vg Using default stripesize 64.00 KB Rounding up size to full physical extent 271.31 GB Logical volume \"stripe1\" created lvs -a -o +devices LV VG Attr LSize Origin Snap% Move Log Copy% Devices stripe1 vg -wi-a- 271.31G /dev/sda1(0),/dev/sdb1(0)",
"vgs VG #PV #LV #SN Attr VSize VFree vg 2 1 0 wz--n- 271.31G 0",
"vgextend vg /dev/sdc1 Volume group \"vg\" successfully extended vgs VG #PV #LV #SN Attr VSize VFree vg 3 1 0 wz--n- 406.97G 135.66G",
"lvextend vg/stripe1 -L 406G Using stripesize of last segment 64.00 KB Extending logical volume stripe1 to 406.00 GB Insufficient suitable allocatable extents for logical volume stripe1: 34480 more required",
"vgextend vg /dev/sdd1 Volume group \"vg\" successfully extended vgs VG #PV #LV #SN Attr VSize VFree vg 4 1 0 wz--n- 542.62G 271.31G lvextend vg/stripe1 -L 542G Using stripesize of last segment 64.00 KB Extending logical volume stripe1 to 542.00 GB Logical volume stripe1 successfully resized",
"lvextend vg/stripe1 -L 406G Using stripesize of last segment 64.00 KB Extending logical volume stripe1 to 406.00 GB Insufficient suitable allocatable extents for logical volume stripe1: 34480 more required lvextend -i1 -l+100%FREE vg/stripe1",
"lvs vg LV VG Attr LSize Pool Origin Snap% Move Log Cpy%Sync Convert lv vg Rwi-a-r- 5.00g 100.00",
"lvs vg LV VG Attr LSize Pool Origin Snap% Move Log Cpy%Sync Convert lv vg rwi-a-r- 20.00m 100.00 lvextend -L +5G vg/lv --nosync Extending 2 mirror images. Extending logical volume lv to 5.02 GiB Logical volume lv successfully resized lvs vg LV VG Attr LSize Pool Origin Snap% Move Log Cpy%Sync Convert lv vg Rwi-a-r- 5.02g 100.00",
"cling_tag_list = [ \"@site1\", \"@site2\" ]",
"cling_tag_list = [ \"@A\", \"@B\" ]",
"pvs -a -o +pv_tags /dev/sd[bcdefgh] PV VG Fmt Attr PSize PFree PV Tags /dev/sdb1 taft lvm2 a-- 15.00g 15.00g A /dev/sdc1 taft lvm2 a-- 15.00g 15.00g B /dev/sdd1 taft lvm2 a-- 15.00g 15.00g B /dev/sde1 taft lvm2 a-- 15.00g 15.00g C /dev/sdf1 taft lvm2 a-- 15.00g 15.00g C /dev/sdg1 taft lvm2 a-- 15.00g 15.00g A /dev/sdh1 taft lvm2 a-- 15.00g 15.00g A",
"lvcreate --type raid1 -m 1 -n mirror --nosync -L 10G taft WARNING: New raid1 won't be synchronised. Don't read what you didn't write! Logical volume \"mirror\" created",
"lvs -a -o +devices LV VG Attr LSize Log Cpy%Sync Devices mirror taft Rwi-a-r--- 10.00g 100.00 mirror_rimage_0(0),mirror_rimage_1(0) [mirror_rimage_0] taft iwi-aor--- 10.00g /dev/sdb1(1) [mirror_rimage_1] taft iwi-aor--- 10.00g /dev/sdc1(1) [mirror_rmeta_0] taft ewi-aor--- 4.00m /dev/sdb1(0) [mirror_rmeta_1] taft ewi-aor--- 4.00m /dev/sdc1(0)",
"lvextend --alloc cling -L +10G taft/mirror Extending 2 mirror images. Extending logical volume mirror to 20.00 GiB Logical volume mirror successfully resized",
"lvs -a -o +devices LV VG Attr LSize Log Cpy%Sync Devices mirror taft Rwi-a-r--- 20.00g 100.00 mirror_rimage_0(0),mirror_rimage_1(0) [mirror_rimage_0] taft iwi-aor--- 20.00g /dev/sdb1(1) [mirror_rimage_0] taft iwi-aor--- 20.00g /dev/sdg1(0) [mirror_rimage_1] taft iwi-aor--- 20.00g /dev/sdc1(1) [mirror_rimage_1] taft iwi-aor--- 20.00g /dev/sdd1(0) [mirror_rmeta_0] taft ewi-aor--- 4.00m /dev/sdb1(0) [mirror_rmeta_1] taft ewi-aor--- 4.00m /dev/sdc1(0)",
"lvs vg/thin1s1 LV VG Attr LSize Pool Origin thin1s1 vg Vwi---tz-k 1.00t pool0 thin1",
"lvchange -ay -K VG/SnapLV",
"lvcreate --type thin -n SnapLV -kn -s ThinLV --thinpool VG/ThinPoolLV",
"lvchange -kn VG/SnapLV",
"lvs -H -o name,full_ancestors,full_descendants LV FAncestors FDescendants lvol1 lvol2,lvol3,lvol4,lvol5 lvol2 lvol1 lvol3,lvol4,lvol5 lvol3 lvol2,lvol1 lvol4,lvol5 lvol4 lvol3,lvol2,lvol1 lvol5 lvol3,lvol2,lvol1 pool",
"lvremove -f vg/lvol3 Logical volume \"lvol3\" successfully removed lvs -H -o name,full_ancestors,full_descendants LV FAncestors FDescendants lvol1 lvol2,-lvol3,lvol4,lvol5 lvol2 lvol1 -lvol3,lvol4,lvol5 -lvol3 lvol2,lvol1 lvol4,lvol5 lvol4 -lvol3,lvol2,lvol1 lvol5 -lvol3,lvol2,lvol1 pool",
"lvs -H -o name,full_ancestors,full_descendants,time_removed LV FAncestors FDescendants RTime lvol1 lvol2,-lvol3,lvol4,lvol5 lvol2 lvol1 -lvol3,lvol4,lvol5 -lvol3 lvol2,lvol1 lvol4,lvol5 2016-03-14 14:14:32 +0100 lvol4 -lvol3,lvol2,lvol1 lvol5 -lvol3,lvol2,lvol1 pool",
"lvs -H vg/-lvol3 LV VG Attr LSize -lvol3 vg ----h----- 0",
"lvremove -f vg/lvol5 Automatically removing historical logical volume vg/-lvol5. Logical volume \"lvol5\" successfully removed lvs -H -o name,full_ancestors,full_descendants LV FAncestors FDescendants lvol1 lvol2,-lvol3,lvol4 lvol2 lvol1 -lvol3,lvol4 -lvol3 lvol2,lvol1 lvol4 lvol4 -lvol3,lvol2,lvol1 pool",
"lvremove -f vg/lvol1 vg/lvol2 Logical volume \"lvol1\" successfully removed Logical volume \"lvol2\" successfully removed lvs -H -o name,full_ancestors,full_descendants LV FAncestors FDescendants -lvol1 -lvol2,-lvol3,lvol4 -lvol2 -lvol1 -lvol3,lvol4 -lvol3 -lvol2,-lvol1 lvol4 lvol4 -lvol3,-lvol2,-lvol1 pool",
"lvremove -f vg/-lvol3 Historical logical volume \"lvol3\" successfully removed lvs -H -o name,full_ancestors,full_descendants LV FAncestors FDescendants -lvol1 -lvol2,lvol4 -lvol2 -lvol1 lvol4 lvol4 -lvol2,-lvol1 pool",
"lvremove -f vg/lvol4 Automatically removing historical logical volume vg/-lvol1. Automatically removing historical logical volume vg/-lvol2. Automatically removing historical logical volume vg/-lvol4. Logical volume \"lvol4\" successfully removed"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/logical_volume_manager_administration/lv?extIdCarryOver=true&sc_cid=701f2000001Css5AAC |
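As a hedged illustration of how the commands captured above fit together, the following sketch walks through one common recovery flow for a RAID1 logical volume with a failed leg, reusing only commands that already appear in the listing; the my_vg/my_lv names are placeholders and must match your own volume group and logical volume.
# Inspect the RAID LV and identify which image and metadata subvolumes sit on the failed device
lvs -a -o name,copy_percent,devices my_vg
# Replace the failed images; LVM asks for confirmation because a full resync of the new leg is required
lvconvert --repair my_vg/my_lv
# If a transiently failed device comes back, refresh the LV so LVM starts using it again
lvchange --refresh my_vg/my_lv
# Optionally scrub the array afterwards and review the mismatch count
lvchange --syncaction check my_vg/my_lv
lvs -o +raid_sync_action,raid_mismatch_count my_vg/my_lv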
Chapter 3. Distribution of content in RHEL 8 | Chapter 3. Distribution of content in RHEL 8 3.1. Installation Red Hat Enterprise Linux 8 is installed using ISO images. Two types of ISO image are available for the AMD64, Intel 64-bit, 64-bit ARM, IBM Power Systems, and IBM Z architectures: Binary DVD ISO: A full installation image that contains the BaseOS and AppStream repositories and allows you to complete the installation without additional repositories. Note The Binary DVD ISO image is larger than 4.7 GB, and as a result, it might not fit on a single-layer DVD. A dual-layer DVD or USB key is recommended when using the Binary DVD ISO image to create bootable installation media. You can also use the Image Builder tool to create customized RHEL images. For more information about Image Builder, see the Composing a customized RHEL system image document. Boot ISO: A minimal boot ISO image that is used to boot into the installation program. This option requires access to the BaseOS and AppStream repositories to install software packages. The repositories are part of the Binary DVD ISO image. See the Interactively installing RHEL from installation media document for instructions on downloading ISO images, creating installation media, and completing a RHEL installation. For automated Kickstart installations and other advanced topics, see the Automatically installing RHEL document. 3.2. Repositories Red Hat Enterprise Linux 8 is distributed through two main repositories: BaseOS AppStream Both repositories are required for a basic RHEL installation, and are available with all RHEL subscriptions. Content in the BaseOS repository is intended to provide the core set of the underlying OS functionality that provides the foundation for all installations. This content is available in the RPM format and is subject to support terms similar to those in previous releases of RHEL. For a list of packages distributed through BaseOS, see the Package manifest . Content in the Application Stream repository includes additional user space applications, runtime languages, and databases in support of the varied workloads and use cases. Application Streams are available in the familiar RPM format, as an extension to the RPM format called modules , or as Software Collections. For a list of packages available in AppStream, see the Package manifest . In addition, the CodeReady Linux Builder repository is available with all RHEL subscriptions. It provides additional packages for use by developers. Packages included in the CodeReady Linux Builder repository are unsupported. For more information about RHEL 8 repositories, see the Package manifest . 3.3. Application Streams Red Hat Enterprise Linux 8 introduces the concept of Application Streams. Multiple versions of user space components are now delivered and updated more frequently than the core operating system packages. This provides greater flexibility to customize Red Hat Enterprise Linux without impacting the underlying stability of the platform or specific deployments. Components made available as Application Streams can be packaged as modules or RPM packages and are delivered through the AppStream repository in RHEL 8. Each Application Stream component has a given life cycle, either the same as RHEL 8 or shorter. For details, see Red Hat Enterprise Linux Life Cycle . Modules are collections of packages representing a logical unit: an application, a language stack, a database, or a set of tools. These packages are built, tested, and released together.
Module streams represent versions of the Application Stream components. For example, several streams (versions) of the PostgreSQL database server are available in the postgresql module with the default postgresql:10 stream. Only one module stream can be installed on the system. Different versions can be used in separate containers. Detailed module commands are described in the Installing, managing, and removing user-space components document. For a list of modules available in AppStream, see the Package manifest . 3.4. Package management with YUM/DNF On Red Hat Enterprise Linux 8, installing software is ensured by the YUM tool , which is based on the DNF technology. We deliberately adhere to usage of the yum term for consistency with previous major versions of RHEL. However, if you type dnf instead of yum , the command works as expected because yum is an alias to dnf for compatibility. For more details, see the following documentation: Installing, managing, and removing user-space components Considerations in adopting RHEL 8 | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/8.2_release_notes/distribution-of-content-in-rhel-8
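To make the module stream workflow in section 3.3 and the yum/dnf aliasing in section 3.4 concrete, a minimal sketch on a registered RHEL 8 host might look as follows; the postgresql:10 stream is used only because it is the example named above, and any other module behaves the same way.
# Show the streams that the postgresql module provides and which one is the default
yum module list postgresql
# Install the default stream; only one stream of a module can be installed on the system
yum module install postgresql:10
# Because yum is an alias to dnf, the equivalent dnf invocation produces the same result
dnf module list postgresql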
9.4. Security Technical Implementation Guide | 9.4. Security Technical Implementation Guide A Security Technical Implementation Guide (STIG) is a methodology for standardized secure installation and maintenance of computer software and hardware. See the following URL for more information on STIG: https://public.cyber.mil/stigs/ . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/security_guide/sec-security_technical_implementation_guide |
Chapter 24. Automatically discovering bare metal nodes | Chapter 24. Automatically discovering bare metal nodes You can use auto-discovery to register overcloud nodes and generate their metadata, without the need to create an instackenv.json file. This improvement can help to reduce the time it takes to collect information about a node. For example, if you use auto-discovery, you do not need to collate the IPMI IP addresses and subsequently create the instackenv.json . 24.1. Enabling auto-discovery Enable and configure Bare Metal auto-discovery to automatically discover and import nodes that join your provisioning network when booting with PXE. Procedure Enable Bare Metal auto-discovery in the undercloud.conf file: enable_node_discovery - When enabled, any node that boots the introspection ramdisk using PXE is enrolled in the Bare Metal service (ironic) automatically. discovery_default_driver - Sets the driver to use for discovered nodes. For example, ipmi . Add your IPMI credentials to ironic: Add your IPMI credentials to a file named ipmi-credentials.json . Replace the SampleUsername , RedactedSecurePassword , and bmc_address values in this example to suit your environment: Import the IPMI credentials file into ironic: 24.2. Testing auto-discovery PXE boot a node that is connected to your provisioning network to test the Bare Metal auto-discovery feature. Procedure Power on the required nodes. Run the openstack baremetal node list command. You should see the new nodes listed in an enrolled state: Set the resource class for each node: Configure the kernel and ramdisk for each node: Set all nodes to available: 24.3. Using rules to discover different vendor hardware If you have a heterogeneous hardware environment, you can use introspection rules to assign credentials and remote management credentials. For example, you might want a separate discovery rule to handle your Dell nodes that use DRAC. Procedure Create a file named dell-drac-rules.json with the following contents: Replace the user name and password values in this example to suit your environment: Import the rule into ironic: | [
"enable_node_discovery = True discovery_default_driver = ipmi",
"[ { \"description\": \"Set default IPMI credentials\", \"conditions\": [ {\"op\": \"eq\", \"field\": \"data://auto_discovered\", \"value\": true} ], \"actions\": [ {\"action\": \"set-attribute\", \"path\": \"driver_info/ipmi_username\", \"value\": \"SampleUsername\"}, {\"action\": \"set-attribute\", \"path\": \"driver_info/ipmi_password\", \"value\": \"RedactedSecurePassword\"}, {\"action\": \"set-attribute\", \"path\": \"driver_info/ipmi_address\", \"value\": \"{data[inventory][bmc_address]}\"} ] } ]",
"openstack baremetal introspection rule import ipmi-credentials.json",
"openstack baremetal node list +--------------------------------------+------+---------------+-------------+--------------------+-------------+ | UUID | Name | Instance UUID | Power State | Provisioning State | Maintenance | +--------------------------------------+------+---------------+-------------+--------------------+-------------+ | c6e63aec-e5ba-4d63-8d37-bd57628258e8 | None | None | power off | enroll | False | | 0362b7b2-5b9c-4113-92e1-0b34a2535d9b | None | None | power off | enroll | False | +--------------------------------------+------+---------------+-------------+--------------------+-------------+",
"for NODE in `openstack baremetal node list -c UUID -f value` ; do openstack baremetal node set USDNODE --resource-class baremetal ; done",
"for NODE in `openstack baremetal node list -c UUID -f value` ; do openstack baremetal node manage USDNODE ; done openstack overcloud node configure --all-manageable",
"for NODE in `openstack baremetal node list -c UUID -f value` ; do openstack baremetal node provide USDNODE ; done",
"[ { \"description\": \"Set default IPMI credentials\", \"conditions\": [ {\"op\": \"eq\", \"field\": \"data://auto_discovered\", \"value\": true}, {\"op\": \"ne\", \"field\": \"data://inventory.system_vendor.manufacturer\", \"value\": \"Dell Inc.\"} ], \"actions\": [ {\"action\": \"set-attribute\", \"path\": \"driver_info/ipmi_username\", \"value\": \"SampleUsername\"}, {\"action\": \"set-attribute\", \"path\": \"driver_info/ipmi_password\", \"value\": \"RedactedSecurePassword\"}, {\"action\": \"set-attribute\", \"path\": \"driver_info/ipmi_address\", \"value\": \"{data[inventory][bmc_address]}\"} ] }, { \"description\": \"Set the vendor driver for Dell hardware\", \"conditions\": [ {\"op\": \"eq\", \"field\": \"data://auto_discovered\", \"value\": true}, {\"op\": \"eq\", \"field\": \"data://inventory.system_vendor.manufacturer\", \"value\": \"Dell Inc.\"} ], \"actions\": [ {\"action\": \"set-attribute\", \"path\": \"driver\", \"value\": \"idrac\"}, {\"action\": \"set-attribute\", \"path\": \"driver_info/drac_username\", \"value\": \"SampleUsername\"}, {\"action\": \"set-attribute\", \"path\": \"driver_info/drac_password\", \"value\": \"RedactedSecurePassword\"}, {\"action\": \"set-attribute\", \"path\": \"driver_info/drac_address\", \"value\": \"{data[inventory][bmc_address]}\"} ] } ]",
"openstack baremetal introspection rule import dell-drac-rules.json"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/director_installation_and_usage/assembly_automatically-discovering-bare-metal-nodes |
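As a brief, hedged follow-up to the discovery workflow above, you can check what the introspection rules actually applied to an auto-enrolled node before moving it to available; the UUID below is one of the example values from the node list output, and openstack baremetal node show is assumed to be available in your OpenStack client.
# List the nodes enrolled by auto-discovery
openstack baremetal node list
# Review the driver and driver_info that the imported rules set on a node (UUID is illustrative)
openstack baremetal node show c6e63aec-e5ba-4d63-8d37-bd57628258e8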
4.14. Using Shared System Certificates | 4.14. Using Shared System Certificates The Shared System Certificates storage allows NSS, GnuTLS, OpenSSL, and Java to share a default source for retrieving system certificate anchors and black list information. By default, the trust store contains the Mozilla CA list, including positive and negative trust. The system allows updating of the core Mozilla CA list or choosing another certificate list. 4.14.1. Using a System-wide Trust Store In Red Hat Enterprise Linux 7, the consolidated system-wide trust store is located in the /etc/pki/ca-trust/ and /usr/share/pki/ca-trust-source/ directories. The trust settings in /usr/share/pki/ca-trust-source/ are processed with lower priority than settings in /etc/pki/ca-trust/ . Certificate files are treated depending on the subdirectory they are installed to: /usr/share/pki/ca-trust-source/anchors/ or /etc/pki/ca-trust/source/anchors/ - for trust anchors. See Section 4.5.6, "Understanding Trust Anchors" . /usr/share/pki/ca-trust-source/blacklist/ or /etc/pki/ca-trust/source/blacklist/ - for distrusted certificates. /usr/share/pki/ca-trust-source/ or /etc/pki/ca-trust/source/ - for certificates in the extended BEGIN TRUSTED file format. 4.14.2. Adding New Certificates To add a certificate in the simple PEM or DER file formats to the list of CAs trusted on the system, copy the certificate file to the /usr/share/pki/ca-trust-source/anchors/ or /etc/pki/ca-trust/source/anchors/ directory. To update the system-wide trust store configuration, use the update-ca-trust command, for example: Note While the Firefox browser is able to use an added certificate without executing update-ca-trust , it is recommended to run update-ca-trust after a CA change. Also note that browsers, such as Firefox, Epiphany, or Chromium, cache files, and you might need to clear the browser's cache or restart your browser to load the current system certificates configuration. 4.14.3. Managing Trusted System Certificates To list, extract, add, remove, or change trust anchors, use the trust command. To see the built-in help for this command, enter it without any arguments or with the --help directive: To list all system trust anchors and certificates, use the trust list command: All sub-commands of the trust commands offer a detailed built-in help, for example: To store a trust anchor into the system-wide trust store, use the trust anchor sub-command and specify a path.to a certificate, for example: To remove a certificate, use either a path.to a certificate or an ID of a certificate: 4.14.4. Additional Resources For more information, see the following man pages: update-ca-trust(8) trust(1) | [
"cp ~/certificate-trust-examples/Cert-trust-test-ca.pem /usr/share/pki/ca-trust-source/anchors/ update-ca-trust",
"trust usage: trust command <args> Common trust commands are: list List trust or certificates extract Extract certificates and trust extract-compat Extract trust compatibility bundles anchor Add, remove, change trust anchors dump Dump trust objects in internal format See 'trust <command> --help' for more information",
"trust list pkcs11:id=%d2%87%b4%e3%df%37%27%93%55%f6%56%ea%81%e5%36%cc%8c%1e%3f%bd;type=cert type: certificate label: ACCVRAIZ1 trust: anchor category: authority pkcs11:id=%a6%b3%e1%2b%2b%49%b6%d7%73%a1%aa%94%f5%01%e7%73%65%4c%ac%50;type=cert type: certificate label: ACEDICOM Root trust: anchor category: authority [output has been truncated]",
"trust list --help usage: trust list --filter=<what> --filter=<what> filter of what to export ca-anchors certificate anchors blacklist blacklisted certificates trust-policy anchors and blacklist (default) certificates all certificates pkcs11:object=xx a PKCS#11 URI --purpose=<usage> limit to certificates usable for the purpose server-auth for authenticating servers client-auth for authenticating clients email for email protection code-signing for authenticating signed code 1.2.3.4.5... an arbitrary object id -v, --verbose show verbose debug output -q, --quiet suppress command output",
"trust anchor path.to/certificate.crt",
"trust anchor --remove path.to/certificate.crt trust anchor --remove \"pkcs11:id=%AA%BB%CC%DD%EE;type=cert\""
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/security_guide/sec-shared-system-certificates |
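A short end-to-end sketch of sections 4.14.2 and 4.14.3 follows; the certificate file name my-ca.crt and its label are assumptions for illustration.
# Copy the CA certificate into the higher-priority anchors directory
cp my-ca.crt /etc/pki/ca-trust/source/anchors/
# Regenerate the consolidated trust store used by NSS, GnuTLS, OpenSSL, and Java
update-ca-trust
# Confirm that the new anchor now appears in the system trust list
trust list | grep -i my-ca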
4.10. Configuring Fencing Levels | 4.10. Configuring Fencing Levels Pacemaker supports fencing nodes with multiple devices through a feature called fencing topologies. To implement topologies, create the individual devices as you normally would and then define one or more fencing levels in the fencing-topology section in the configuration. Each level is attempted in ascending numeric order, starting at 1. If a device fails, processing terminates for the current level. No further devices in that level are exercised, and the next level is attempted instead. If all devices are successfully fenced, then that level has succeeded and no other levels are tried. The operation is finished when a level has passed (success), or all levels have been attempted (failed). Use the following command to add a fencing level to a node. The devices are given as a comma-separated list of stonith ids, which are attempted for the node at that level. The following command lists all of the fencing levels that are currently configured. In the following example, there are two fence devices configured for node rh7-2 : an ilo fence device called my_ilo and an apc fence device called my_apc . These commands set up fence levels so that if the device my_ilo fails and is unable to fence the node, then Pacemaker will attempt to use the device my_apc . This example also shows the output of the pcs stonith level command after the levels are configured. The following command removes the fence level for the specified node and devices. If no nodes or devices are specified then the fence level you specify is removed from all nodes. The following command clears the fence levels on the specified node or stonith id. If you do not specify a node or stonith id, all fence levels are cleared. If you specify more than one stonith id, they must be separated by a comma and no spaces, as in the following example. The following command verifies that all fence devices and nodes specified in fence levels exist. | [
"pcs stonith level add level node devices",
"pcs stonith level",
"pcs stonith level add 1 rh7-2 my_ilo pcs stonith level add 2 rh7-2 my_apc pcs stonith level Node: rh7-2 Level 1 - my_ilo Level 2 - my_apc",
"pcs stonith level remove level [ node_id ] [ stonith_id ] ... [ stonith_id ]",
"pcs stonith level clear [ node | stonith_id (s)]",
"pcs stonith level clear dev_a,dev_b",
"pcs stonith level verify"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/configuring_the_red_hat_high_availability_add-on_with_pacemaker/s1-fencelevels-haar |
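The same level commands can be repeated for every node in the cluster; in the hedged sketch below, the node name rh7-1 and the stonith ids node1_ilo and node1_apc are hypothetical and must refer to fence devices that already exist in your configuration.
# Try the ilo device first, then fall back to the apc switch if it fails
pcs stonith level add 1 rh7-1 node1_ilo
pcs stonith level add 2 rh7-1 node1_apc
# Review the configured levels and check that every referenced device and node exists
pcs stonith level
pcs stonith level verify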
Chapter 17. Installing a three-node cluster on AWS | Chapter 17. Installing a three-node cluster on AWS In OpenShift Container Platform version 4.15, you can install a three-node cluster on Amazon Web Services (AWS). A three-node cluster consists of three control plane machines, which also act as compute machines. This type of cluster provides a smaller, more resource efficient cluster, for cluster administrators and developers to use for testing, development, and production. You can install a three-node cluster using either installer-provisioned or user-provisioned infrastructure. Note Deploying a three-node cluster using an AWS Marketplace image is not supported. 17.1. Configuring a three-node cluster You configure a three-node cluster by setting the number of worker nodes to 0 in the install-config.yaml file before deploying the cluster. Setting the number of worker nodes to 0 ensures that the control plane machines are schedulable. This allows application workloads to be scheduled to run from the control plane nodes. Note Because application workloads run from control plane nodes, additional subscriptions are required, as the control plane nodes are considered to be compute nodes. Prerequisites You have an existing install-config.yaml file. Procedure Set the number of compute replicas to 0 in your install-config.yaml file, as shown in the following compute stanza: Example install-config.yaml file for a three-node cluster apiVersion: v1 baseDomain: example.com compute: - name: worker platform: {} replicas: 0 # ... If you are deploying a cluster with user-provisioned infrastructure: After you create the Kubernetes manifest files, make sure that the spec.mastersSchedulable parameter is set to true in cluster-scheduler-02-config.yml file. You can locate this file in <installation_directory>/manifests . For more information, see "Creating the Kubernetes manifest and Ignition config files" in "Installing a cluster on user-provisioned infrastructure in AWS by using CloudFormation templates". Do not create additional worker nodes. Example cluster-scheduler-02-config.yml file for a three-node cluster apiVersion: config.openshift.io/v1 kind: Scheduler metadata: creationTimestamp: null name: cluster spec: mastersSchedulable: true policy: name: "" status: {} 17.2. steps Installing a cluster on AWS with customizations Installing a cluster on user-provisioned infrastructure in AWS by using CloudFormation templates | [
"apiVersion: v1 baseDomain: example.com compute: - name: worker platform: {} replicas: 0",
"apiVersion: config.openshift.io/v1 kind: Scheduler metadata: creationTimestamp: null name: cluster spec: mastersSchedulable: true policy: name: \"\" status: {}"
] | https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.15/html/installing_on_aws/installing-aws-three-node |
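If you want to double-check the three-node configuration before and after deployment, a quick inspection along the following lines may help; it assumes that <installation_directory> is your installation directory and that the oc client is logged in to the new cluster.
# Before installation: confirm the generated manifest keeps control plane nodes schedulable
grep mastersSchedulable <installation_directory>/manifests/cluster-scheduler-02-config.yml
# After installation: the three nodes should report both control plane and worker roles
oc get nodes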
Chapter 2. Upgrading to Red Hat Ansible Automation Platform 2.4 | Chapter 2. Upgrading to Red Hat Ansible Automation Platform 2.4 To upgrade your Red Hat Ansible Automation Platform, start by reviewing planning information to ensure a successful upgrade. You can then download the desired version of the Ansible Automation Platform installer, configure the inventory file in the installation bundle to reflect your environment, and then run the installer. 2.1. Ansible Automation Platform upgrade planning Before you begin the upgrade process, review the following considerations to plan and prepare your Ansible Automation Platform deployment: Automation controller Even if you have a valid license from a previous version, you must provide your credentials or a subscriptions manifest upon upgrading to the latest version of automation controller. If you need to upgrade Red Hat Enterprise Linux and automation controller, you must first backup and restore your automation controller data. Clustered upgrades require special attention to instance and instance groups before upgrading. Additional resources Importing a subscription Backup and restore Clustering Automation hub When upgrading to Ansible Automation Platform 2.4, you can either add an existing automation hub API token or generate a new one and invalidate any existing tokens. Existing container images are removed when upgrading Ansible Automation Platform. This is because, when upgrading Ansible Automation Platform with setup.sh script, podman system reset -f is executed. This removes all container images on your Ansible Automation Platform nodes and then pushes the new execution environment image that is bundled with installer. See Running the Red Hat Ansible Automation Platform installer setup script . Additional resources Setting up the inventory file Event-Driven Ansible controller If you are currently running Event-Driven Ansible controller and plan to deploy it when you upgrade to Ansible Automation Platform 2.4, it is recommended that you disable all Event-Driven Ansible activations before upgrading to ensure that only new activations run after the upgrade process has completed. This prevents possibilities of orphaned containers running activations from the previous version. 2.2. Choosing and obtaining a Red Hat Ansible Automation Platform installer Choose the Red Hat Ansible Automation Platform installer you need based on your Red Hat Enterprise Linux environment internet connectivity. Review the following scenarios and decide on which Red Hat Ansible Automation Platform installer meets your needs. Note A valid Red Hat customer account is required to access Red Hat Ansible Automation Platform installer downloads on the Red Hat Customer Portal. Installing with internet access Choose the Red Hat Ansible Automation Platform installer if your Red Hat Enterprise Linux environment is connected to the internet. Installing with internet access retrieves the latest required repositories, packages, and dependencies. Choose one of the following ways to set up your Ansible Automation Platform installer. Tarball install Navigate to the Red Hat Ansible Automation Platform download page. Click Download Now for the Ansible Automation Platform <latest-version> Setup .
Extract the files: USD tar xvzf ansible-automation-platform-setup-<latest-version>.tar.gz RPM install Install Ansible Automation Platform Installer Package v.2.4 for RHEL 8 for x86_64 USD sudo dnf install --enablerepo=ansible-automation-platform-2.4-for-rhel-8-x86_64-rpms ansible-automation-platform-installer v.2.4 for RHEL 9 for x86-64 USD sudo dnf install --enablerepo=ansible-automation-platform-2.4-for-rhel-9-x86_64-rpms ansible-automation-platform-installer Note dnf install enables the repo as the repo is disabled by default. When you use the RPM installer, the files are placed under the /opt/ansible-automation-platform/installer directory. Installing without internet access Use the Red Hat Ansible Automation Platform Bundle installer if you are unable to access the internet, or would prefer not to install separate components and dependencies from online repositories. Access to Red Hat Enterprise Linux repositories is still needed. All other dependencies are included in the tar archive. Navigate to the Red Hat Ansible Automation Platform download page. Click Download Now for the Ansible Automation Platform <latest-version> Setup Bundle . Extract the files: USD tar xvzf ansible-automation-platform-setup-bundle-<latest-version>.tar.gz 2.3. Setting up the inventory file Before upgrading your Red Hat Ansible Automation Platform installation, edit the inventory file so that it matches your desired configuration. You can keep the same parameters from your existing Ansible Automation Platform deployment or you can modify the parameters to match any changes to your environment. Procedure Navigate to the installation program directory. Bundled installer USD cd ansible-automation-platform-setup-bundle-2.4-1-x86_64 Online installer USD cd ansible-automation-platform-setup-2.4-1 Open the inventory file for editing. Modify the inventory file to provision new nodes, deprovision nodes or groups, and import or generate automation hub API tokens. You can use the same inventory file from an existing Ansible Automation Platform 2.1 installation if there are no changes to the environment. Note Provide a reachable IP address or fully qualified domain name (FQDN) for the [automationhub] and [automationcontroller] hosts to ensure that users can synchronize and install content from Ansible automation hub from a different node. Do not use localhost . If localhost is used, the upgrade will be stopped as part of preflight checks. Provisioning new nodes in a cluster Add new nodes alongside existing nodes in the inventory file as follows: [controller] clusternode1.example.com clusternode2.example.com clusternode3.example.com [all:vars] admin_password='password' pg_host='' pg_port='' pg_database='<database_name>' pg_username='<your_username>' pg_password='<your_password>' Deprovisioning nodes or groups in a cluster Append node_state=deprovision to the node or group within the inventory file. Importing and generating API tokens When upgrading from Red Hat Ansible Automation Platform 2.0 or earlier to Red Hat Ansible Automation Platform 2.1 or later, you can use your existing automation hub API token or generate a new token.
In the inventory file, edit one of the following fields before running the Red Hat Ansible Automation Platform installer setup script setup.sh : Import an existing API token with the automationhub_api_token flag as follows: automationhub_api_token= <api_token> Generate a new API token, and invalidate any existing tokens, with the generate_automationhub_token flag as follows: generate_automationhub_token=True Additional resources Red Hat Ansible Automation Platform Installation Guide Deprovisioning individual nodes or instance groups 2.4. Running the Red Hat Ansible Automation Platform installer setup script You can run the setup script once you have finished updating the inventory file. Procedure Run the setup.sh script USD ./setup.sh The installation will begin. | [
"tar xvzf ansible-automation-platform-setup-<latest-version>.tar.gz",
"sudo dnf install --enablerepo=ansible-automation-platform-2.4-for-rhel-8-x86_64-rpms ansible-automation-platform-installer",
"sudo dnf install --enablerepo=ansible-automation-platform-2.4-for-rhel-9-x86_64-rpms ansible-automation-platform-installer",
"tar xvzf ansible-automation-platform-setup-bundle-<latest-version>.tar.gz",
"cd ansible-automation-platform-setup-bundle-2.4-1-x86_64",
"cd ansible-automation-platform-setup-2.4-1",
"[controller] clusternode1.example.com clusternode2.example.com clusternode3.example.com [all:vars] admin_password='password' pg_host='' pg_port='' pg_database='<database_name>' pg_username='<your_username>' pg_password='<your_password>'",
"automationhub_api_token= <api_token>",
"generate_automationhub_token=True",
"./setup.sh"
] | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/red_hat_ansible_automation_platform_upgrade_and_migration_guide/aap-upgrading-platform |
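A minimal sketch of sections 2.3 and 2.4, assuming the online installer directory and an inventory that should generate a fresh automation hub API token, might look like this; keep a backup of the inventory before editing it.
cd ansible-automation-platform-setup-2.4-1
# Preserve the current inventory before making changes
cp inventory inventory.bak
# Edit the inventory and set generate_automationhub_token=True under [all:vars]
vi inventory
# Confirm the flag is present, then run the installer setup script
grep generate_automationhub_token inventory
./setup.sh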
Preface | Preface You can install Red Hat Developer Hub on Microsoft Azure Kubernetes Service (AKS) using one of the following methods: The Red Hat Developer Hub Operator The Red Hat Developer Hub Helm chart | null | https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.4/html/installing_red_hat_developer_hub_on_microsoft_azure_kubernetes_service/pr01 |
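For orientation only, a Helm-based installation of Developer Hub on AKS typically follows the pattern sketched below; the chart repository URL, chart name, and namespace are assumptions for illustration, and the linked documentation is authoritative.
# Add the Helm chart repository that publishes the Developer Hub chart (URL assumed)
helm repo add openshift-helm-charts https://charts.openshift.io/
# Install the chart into its own namespace (chart name and namespace assumed)
helm install developer-hub openshift-helm-charts/redhat-developer-hub --namespace rhdh --create-namespace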
Chapter 7. Managing DNS records in IdM | Chapter 7. Managing DNS records in IdM This chapter describes how to manage DNS records in Identity Management (IdM). As an IdM administrator, you can add, modify and delete DNS records in IdM. The chapter contains the following sections: DNS records in IdM Adding DNS resource records from the IdM Web UI Adding DNS resource records from the IdM CLI Common ipa dnsrecord-add options Deleting DNS records in the IdM Web UI Deleting an entire DNS record in the IdM Web UI Deleting DNS records in the IdM CLI Prerequisites Your IdM deployment contains an integrated DNS server. For information how to install IdM with integrated DNS, see one of the following links: Installing an IdM server: With integrated DNS, with an integrated CA as the root CA . Installing an IdM server: With integrated DNS, with an external CA as the root CA . 7.1. DNS records in IdM Identity Management (IdM) supports many different DNS record types. The following four are used most frequently: A This is a basic map for a host name and an IPv4 address. The record name of an A record is a host name, such as www . The IP Address value of an A record is an IPv4 address, such as 192.0.2.1 . For more information about A records, see RFC 1035 . AAAA This is a basic map for a host name and an IPv6 address. The record name of an AAAA record is a host name, such as www . The IP Address value is an IPv6 address, such as 2001:DB8::1111 . For more information about AAAA records, see RFC 3596 . SRV Service (SRV) resource records map service names to the DNS name of the server that is providing that particular service. For example, this record type can map a service like an LDAP directory to the server which manages it. The record name of an SRV record has the format _service . _protocol , such as _ldap._tcp . The configuration options for SRV records include priority, weight, port number, and host name for the target service. For more information about SRV records, see RFC 2782 . PTR A pointer record (PTR) adds a reverse DNS record, which maps an IP address to a domain name. Note All reverse DNS lookups for IPv4 addresses use reverse entries that are defined in the in-addr.arpa. domain. The reverse address, in human-readable form, is the exact reverse of the regular IP address, with the in-addr.arpa. domain appended to it. For example, for the network address 192.0.2.0/24 , the reverse zone is 2.0.192.in-addr.arpa . The record name of a PTR must be in the standard format specified in RFC 1035 , extended in RFC 2317 , and RFC 3596 . The host name value must be a canonical host name of the host for which you want to create the record. Note Reverse zones can also be configured for IPv6 addresses, with zones in the .ip6.arpa. domain. For more information about IPv6 reverse zones, see RFC 3596 . When adding DNS resource records, note that many of the records require different data. For example, a CNAME record requires a host name, while an A record requires an IP address. In the IdM Web UI, the fields in the form for adding a new record are updated automatically to reflect what data is required for the currently selected type of record. 7.2. Adding DNS resource records in the IdM Web UI Follow this procedure to add DNS resource records in the Identity Management (IdM) Web UI. Prerequisites The DNS zone to which you want to add a DNS record exists and is managed by IdM. For more information about creating a DNS zone in IdM DNS, see Managing DNS zones in IdM . You are logged in as IdM administrator. 
Procedure In the IdM Web UI, click Network Services DNS DNS Zones . Click the DNS zone to which you want to add a DNS record. In the DNS Resource Records section, click Add to add a new record. Figure 7.1. Adding a New DNS Resource Record Select the type of record to create and fill out the other fields as required. Figure 7.2. Defining a New DNS Resource Record Click Add to confirm the new record. 7.3. Adding DNS resource records from the IdM CLI Follow this procedure to add a DNS resource record of any type from the command line interface (CLI). Prerequisites The DNS zone to which you want to add a DNS records exists. For more information about creating a DNS zone in IdM DNS, see Managing DNS zones in IdM . You are logged in as IdM administrator. Procedure To add a DNS resource record, use the ipa dnsrecord-add command. The command follows this syntax: In the command above: The zone_name is the name of the DNS zone to which the record is being added. The record_name is an identifier for the new DNS resource record. For example, to add an A type DNS record of host1 to the idm.example.com zone, enter: 7.4. Common ipa dnsrecord-* options You can use the following options when adding, modifying and deleting the most common DNS resource record types in Identity Management (IdM): A (IPv4) AAAA (IPv6) SRV PTR In Bash , you can define multiple entries by listing the values in a comma-separated list inside curly braces, such as --option={val1,val2,val3} . Table 7.1. General Record Options Option Description --ttl = number Sets the time to live for the record. --structured Parses the raw DNS records and returns them in a structured format. Table 7.2. "A" record options Option Description Examples --a-rec = ARECORD Passes a single A record or a list of A records. ipa dnsrecord-add idm.example.com host1 --a-rec=192.168.122.123 Can create a wildcard A record with a given IP address. ipa dnsrecord-add idm.example.com "*" --a-rec=192.168.122.123 [a] --a-ip-address = string Gives the IP address for the record. When creating a record, the option to specify the A record value is --a-rec . However, when modifying an A record, the --a-rec option is used to specify the current value for the A record. The new value is set with the --a-ip-address option. ipa dnsrecord-mod idm.example.com --a-rec 192.168.122.123 --a-ip-address 192.168.122.124 [a] The example creates a wildcard A record with the IP address of 192.0.2.123. Table 7.3. "AAAA" record options Option Description Example --aaaa-rec = AAAARECORD Passes a single AAAA (IPv6) record or a list of AAAA records. ipa dnsrecord-add idm.example.com www --aaaa-rec 2001:db8::1231:5675 --aaaa-ip-address = string Gives the IPv6 address for the record. When creating a record, the option to specify the A record value is --aaaa-rec . However, when modifying an A record, the --aaaa-rec option is used to specify the current value for the A record. The new value is set with the --a-ip-address option. ipa dnsrecord-mod idm.example.com --aaaa-rec 2001:db8::1231:5675 --aaaa-ip-address 2001:db8::1231:5676 Table 7.4. "PTR" record options Option Description Example --ptr-rec = PTRRECORD Passes a single PTR record or a list of PTR records. When adding the reverse DNS record, the zone name used with the ipa dnsrecord-add command is reversed, compared to the usage for adding other DNS records. Typically, the host IP address is the last octet of the IP address in a given network. 
The first example on the right adds a PTR record for server4.idm.example.com with IPv4 address 192.168.122.4. The second example adds a reverse DNS entry to the 0.0.0.0.0.0.0.0.8.b.d.0.1.0.0.2.ip6.arpa. IPv6 reverse zone for the host server2.example.com with the IP address 2001:DB8::1111 . ipa dnsrecord-add 122.168.192.in-addr.arpa 4 --ptr-rec server4.idm.example.com. USD ipa dnsrecord-add 0.0.0.0.0.0.0.0.8.b.d.0.1.0.0.2.ip6.arpa. 1.1.1.0.0.0.0.0.0.0.0.0.0.0.0 --ptr-rec server2.idm.example.com. --ptr-hostname = string Gives the host name for the record. Table 7.5. "SRV" Record Options Option Description Example --srv-rec = SRVRECORD Passes a single SRV record or a list of SRV records. In the examples on the right, _ldap._tcp defines the service type and the connection protocol for the SRV record. The --srv-rec option defines the priority, weight, port, and target values. The weight values of 51 and 49 in the examples add up to 100 and represent the probability, in percentages, that a particular record is used. # ipa dnsrecord-add idm.example.com _ldap._tcp --srv-rec="0 51 389 server1.idm.example.com." # ipa dnsrecord-add server.idm.example.com _ldap._tcp --srv-rec="1 49 389 server2.idm.example.com." --srv-priority = number Sets the priority of the record. There can be multiple SRV records for a service type. The priority (0 - 65535) sets the rank of the record; the lower the number, the higher the priority. A service has to use the record with the highest priority first. # ipa dnsrecord-mod server.idm.example.com _ldap._tcp --srv-rec="1 49 389 server2.idm.example.com." --srv-priority=0 --srv-weight = number Sets the weight of the record. This helps determine the order of SRV records with the same priority. The set weights should add up to 100, representing the probability (in percentages) that a particular record is used. # ipa dnsrecord-mod server.idm.example.com _ldap._tcp --srv-rec="0 49 389 server2.idm.example.com." --srv-weight=60 --srv-port = number Gives the port for the service on the target host. # ipa dnsrecord-mod server.idm.example.com _ldap._tcp --srv-rec="0 60 389 server2.idm.example.com." --srv-port=636 --srv-target = string Gives the domain name of the target host. This can be a single period (.) if the service is not available in the domain. Additional resources Run ipa dnsrecord-add --help . 7.5. Deleting DNS records in the IdM Web UI Follow this procedure to delete DNS records in Identity Management (IdM) using the IdM Web UI. Prerequisites You are logged in as IdM administrator. Procedure In the IdM Web UI, click Network Services DNS DNS Zones . Click the zone from which you want to delete a DNS record, for example example.com. . In the DNS Resource Records section, click the name of the resource record. Figure 7.3. Selecting a DNS Resource Record Select the check box by the name of the record type to delete. Click Delete . Figure 7.4. Deleting a DNS Resource Record The selected record type is now deleted. The other configuration of the resource record is left intact. Additional resources See Deleting an entire DNS record in the IdM Web UI . 7.6. Deleting an entire DNS record in the IdM Web UI Follow this procedure to delete all the records for a particular resource in a zone using the Identity Management (IdM) Web UI. Prerequisites You are logged in as IdM administrator. Procedure In the IdM Web UI, click Network Services DNS DNS Zones . Click the zone from which you want to delete a DNS record, for example zone.example.com. . 
In the DNS Resource Records section, select the check box of the resource record to delete. Click Delete . Figure 7.5. Deleting an Entire Resource Record The entire resource record is now deleted. 7.7. Deleting DNS records in the IdM CLI Follow this procedure to remove DNS records from a zone managed by the Identity Management (IdM) DNS. Prerequisites You are logged in as IdM administrator. Procedure To remove records from a zone, use the ipa dnsrecord-del command and add the --recordType-rec option together with the record value. For example, to remove an A type record: If you run ipa dnsrecord-del without any options, the command prompts for information about the record to delete. Note that passing the --del-all option with the command removes all associated records for the zone. Additional resources Run the ipa dnsrecord-del --help command. 7.8. Additional resources See Using Ansible to manage DNS records in IdM . | [
"ipa dnsrecord-add zone_name record_name -- record_type_option=data",
"ipa dnsrecord-add idm.example.com host1 --a-rec=192.168.122.123",
"ipa dnsrecord-del example.com www --a-rec 192.0.2.1"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/working_with_dns_in_identity_management/managing-dns-records-in-idm_working-with-dns-in-identity-management |
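Building on the ipa dnsrecord-add and ipa dnsrecord-del examples above, one plausible record life cycle from the CLI is sketched below; the zone idm.example.com, host name host1, and address 192.0.2.10 are illustrative values.
# Create an A record for host1 in the idm.example.com zone
ipa dnsrecord-add idm.example.com host1 --a-rec=192.0.2.10
# Display the resource records currently held for host1
ipa dnsrecord-show idm.example.com host1
# Remove the A record again once it is no longer needed
ipa dnsrecord-del idm.example.com host1 --a-rec=192.0.2.10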
8.38. evolution | 8.38. evolution 8.38.1. RHSA-2013:1540 - Low: evolution security, bug fix, and enhancement update Updated evolution packages that fix one security issue, several bugs, and add various enhancements are now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having low security impact. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE links associated with each description below. Evolution is the integrated collection of email, calendaring, contact management, communications, and personal information management (PIM) tools for the GNOME desktop environment. Security Fix CVE-2013-4166 A flaw was found in the way Evolution selected GnuPG public keys when encrypting emails. This could result in emails being encrypted with public keys other than the one belonging to the intended recipient. Note The Evolution packages have been upgraded to upstream version 2.32.3, which provides a number of bug fixes and enhancements over the previous version. These changes include implementation of Gnome XDG Config Folders, and support for Exchange Web Services (EWS) protocol to connect to Microsoft Exchange servers. EWS support has been added as a part of the evolution-exchange packages. (BZ# 883010 , BZ# 883014 , BZ# 883015 , BZ# 883017 , BZ# 524917 , BZ# 524921 , BZ# 883044 ) The gtkhtml3 packages have been upgraded to upstream version 2.32.2, which provides a number of bug fixes and enhancements over the previous version. (BZ# 883019 ) The libgdata packages have been upgraded to upstream version 0.6.4, which provides a number of bug fixes and enhancements over the previous version. (BZ# 883032 ) Bug Fix BZ# 665967 The Exchange Calendar could not fetch the "Free" and "Busy" information for meeting attendees when using Microsoft Exchange 2010 servers, and this information thus could not be displayed. This happened because Microsoft Exchange 2010 servers use more strict rules for "Free" and "Busy" information fetching. With this update, the respective code in the openchange packages has been modified so the "Free" and "Busy" information fetching now complies with the fetching rules on Microsoft Exchange 2010 servers. The "Free" and "Busy" information can now be displayed as expected in the Exchange Calendar. All Evolution users are advised to upgrade to these updated packages, which contain backported patches to correct these issues and add these enhancements. All running instances of Evolution must be restarted for this update to take effect. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/evolution
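A minimal sketch of applying this erratum on a subscribed Red Hat Enterprise Linux 6 host follows; the exact package set depends on the channels attached to the system, and all running Evolution sessions must be closed and reopened afterwards.
# Apply the updated packages delivered by RHSA-2013:1540
yum update evolution evolution-exchange gtkhtml3 libgdata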
Chapter 1. Connecting RHEL systems directly to AD using SSSD | Chapter 1. Connecting RHEL systems directly to AD using SSSD To connect a RHEL system to Active Directory (AD), use: System Security Services Daemon (SSSD) for identity and authentication realmd to detect available domains and configure the underlying RHEL system services. 1.1. Overview of direct integration using SSSD You use SSSD to access a user directory for authentication and authorization through a common framework with user caching to permit offline logins. SSSD is highly configurable; it provides Pluggable Authentication Modules (PAM) and Name Switch Service (NSS) integration and a database to store local users as well as extended user data retrieved from a central server. SSSD is the recommended component to connect a RHEL system with one of the following types of identity server: Active Directory Identity Management (IdM) in RHEL Any generic LDAP or Kerberos server Note Direct integration with SSSD works only within a single AD forest by default. The most convenient way to configure SSSD to directly integrate a Linux system with AD is to use the realmd service. It allows callers to configure network authentication and domain membership in a standard way. The realmd service automatically discovers information about accessible domains and realms and does not require advanced configuration to join a domain or realm. You can use SSSD for both direct and indirect integration with AD and it allows you to switch from one integration approach to another. Direct integration is a simple way to introduce RHEL systems to an AD environment. However, as the share of RHEL systems grows, your deployments usually need a better centralized management of the identity-related policies such as host-based access control, sudo, or SELinux user mappings. Initially, you can maintain the configuration of these aspects of the RHEL systems in local configuration files. However, with a growing number of systems, distribution and management of the configuration files is easier with a provisioning system such as Red Hat Satellite. When direct integration does not scale anymore, you should consider indirect integration. For more information about moving from direct integration (RHEL clients are in the AD domain) to indirect integration (IdM with trust to AD), see Moving RHEL clients from AD domain to IdM Server. Important If IdM is in FIPS mode, the IdM-AD integration does not work due to AD only supporting the use of RC4 or AES HMAC-SHA1 encryptions, while RHEL 9 in FIPS mode allows only AES HMAC-SHA2 by default. For more information, see the Red Hat Knowledgebase solution AD Domain Users unable to login in to the FIPS-compliant environment . IdM does not support the more restrictive FIPS:OSPP crypto policy, which should only be used on Common Criteria evaluated systems. Additional resources realm(8) , sssd-ad(5) , and sssd(8) man pages on your system Deciding between indirect and direct integration 1.2. 
Supported Windows platforms for direct integration You can directly integrate your RHEL system with Active Directory forests that use the following forest and domain functional levels: Forest functional level range: Windows Server 2008 - Windows Server 2016 Domain functional level range: Windows Server 2008 - Windows Server 2016 Direct integration has been tested on the following supported operating systems: Windows Server 2022 (RHEL 9.1 or later) Windows Server 2019 Windows Server 2016 Windows Server 2012 R2 Note Windows Server 2019 and Windows Server 2022 do not introduce a new functional level. The highest functional level Windows Server 2019 and Windows Server 2022 use is Windows Server 2016. 1.3. Connecting directly to AD The System Security Services Daemon (SSSD) is the recommended component to connect a Red Hat Enterprise Linux (RHEL) system with Active Directory (AD). You can integrate directly with AD by using either POSIX ID mapping, which is the default for SSSD, or by using POSIX attributes defined in AD. Important Before joining your system to AD, ensure you configured your system correctly by following the procedure in the Red Hat Knowledgebase solution Basic Prechecks Steps: RHEL Join With Active Directory using 'adcli', 'realm' and 'net' commands . 1.3.1. Options for integrating with AD: using POSIX ID mapping or POSIX attributes Linux and Windows systems use different identifiers for users and groups: Linux uses user IDs (UID) and group IDs (GID). See Introduction to managing user and group accounts in Configuring Basic System Settings . Linux UIDs and GIDs are compliant with the POSIX standard. Windows use security IDs (SID). Important After connecting a RHEL system to AD, you can authenticate with your AD username and password. Do not create a Linux user with the same name as a Windows user, as duplicate names might cause a conflict and interrupt the authentication process. To authenticate to a RHEL system as an AD user, you must have a UID and GID assigned. SSSD provides the option to integrate with AD either using POSIX ID mapping or POSIX attributes in AD. The default is to use POSIX ID mapping. 1.3.2. Connecting to AD using POSIX ID mapping SSSD uses the SID of an AD user to algorithmically generate POSIX IDs in a process called POSIX ID mapping. POSIX ID mapping creates an association between SIDs in AD and IDs on Linux. When SSSD detects a new AD domain, it assigns a range of available IDs to the new domain. When an AD user logs in to an SSSD client machine for the first time, SSSD creates an entry for the user in the SSSD cache, including a UID based on the user's SID and the ID range for that domain. Because the IDs for an AD user are generated in a consistent way from the same SID, the user has the same UID and GID when logging in to any RHEL system. Note When all client systems use SSSD to map SIDs to Linux IDs, the mapping is consistent. If some clients use different software, choose one of the following: Ensure that the same mapping algorithm is used on all clients. Use explicit POSIX attributes defined in AD. For more information, see the section on ID mapping in the sssd-ad man page. 1.3.2.1. Discovering and joining an AD Domain using SSSD Follow this procedure to discover an AD domain and connect a RHEL system to that domain using SSSD. Prerequisites Ensure that the required ports are open: Ports required for direct integration of RHEL systems into AD using SSSD Ensure that you are using the AD domain controller server for DNS. 
Verify that the system time on both systems is synchronized. This ensures that Kerberos is able to work correctly. Procedure Install the following packages: To display information for a specific domain, run realm discover and add the name of the domain you want to discover: The realmd system uses DNS SRV lookups to find the domain controllers in this domain automatically. Note The realmd system can discover both Active Directory and Identity Management domains. If both domains exist in your environment, you can limit the discovery results to a specific type of server using the --server-software=active-directory option. Configure the local RHEL system with the realm join command. The realmd suite edits all required configuration files automatically. For example, for a domain named ad.example.com : Verification Display an AD user details, such as the administrator user: Additional resources realm(8) and nmcli(1) man pages on your system 1.3.3. Connecting to AD using POSIX attributes defined in Active Directory AD can create and store POSIX attributes, such as uidNumber , gidNumber , unixHomeDirectory , or loginShell . When using POSIX ID mapping, SSSD creates new UIDs and GIDs, which overrides the values defined in AD. To keep the AD-defined values, you must disable POSIX ID mapping in SSSD. For best performance, publish the POSIX attributes to the AD global catalog. If POSIX attributes are not present in the global catalog, SSSD connects to the individual domain controllers directly on the LDAP port. Prerequisites Ensure that the required ports are open: Ports required for direct integration of RHEL systems into AD using SSSD Ensure that you are using the AD domain controller server for DNS. Verify that the system time on both systems is synchronized. This ensures that Kerberos is able to work correctly. Procedure Install the following packages: Configure the local RHEL system with POSIX ID mapping disabled using the realm join command with the --automatic-id-mapping=no option. The realmd suite edits all required configuration files automatically. For example, for a domain named ad.example.com : If you already joined a domain, you can manually disable POSIX ID Mapping in SSSD: Open the /etc/sssd/sssd.conf file. In the AD domain section, add the ldap_id_mapping = false setting. Remove the SSSD caches: Restart SSSD: SSSD now uses POSIX attributes from AD, instead of creating them locally. Note You must have the relevant POSIX attributes ( uidNumber , gidNumber , unixHomeDirectory , and loginShell ) configured for the users in AD. Verification Display an AD user details, such as the administrator user: Additional resources sssd-ldap(8) man page on your system 1.3.4. Connecting to multiple domains in different AD forests with SSSD You can use an Active Directory (AD) Managed Service Account (MSA) to access AD domains from different forests where there is no trust between them. See Accessing AD with a Managed Service Account . 1.4. How the AD provider handles dynamic DNS updates Active Directory (AD) actively maintains its DNS records by timing out ( aging ) and removing ( scavenging ) inactive records. By default, the SSSD service refreshes a RHEL client's DNS record at the following intervals: Every time the identity provider comes online. Every time the RHEL system reboots. At the interval specified by the dyndns_refresh_interval option in the /etc/sssd/sssd.conf configuration file. The default value is 86400 seconds (24 hours). 
Note If you set the dyndns_refresh_interval option to the same interval as the DHCP lease, you can update the DNS record after the IP lease is renewed. SSSD sends dynamic DNS updates to the AD server using Kerberos/GSSAPI for DNS (GSS-TSIG). This means that you only need to enable secure connections to AD. Additional resources sssd-ad(5) man page on your system 1.5. Modifying dynamic DNS settings for the AD provider The System Security Services Daemon (SSSD) service refreshes the DNS record of a Red Hat Enterprise Linux (RHEL) client joined to an AD environment at default intervals. The following procedure adjusts these intervals. Prerequisites You have joined a RHEL host to an Active Directory environment with the SSSD service. You need root permissions to edit the /etc/sssd/sssd.conf configuration file. Procedure Open the /etc/sssd/sssd.conf configuration file in a text editor. Add the following options to the [domain] section for your AD domain to set the DNS record refresh interval to 12 hours, disable updating PTR records, and set the DNS record Time To Live (TTL) to 1 hour. Save and close the /etc/sssd/sssd.conf configuration file. Restart the SSSD service to load the configuration changes. Note You can disable dynamic DNS updates by setting the dyndns_update option in the sssd.conf file to false : Additional resources How the AD provider handles dynamic DNS updates sssd-ad(5) man page on your system 1.6. How the AD provider handles trusted domains If you set the id_provider = ad option in the /etc/sssd/sssd.conf configuration file, SSSD handles trusted domains as follows: SSSD only supports domains in a single AD forest. If SSSD requires access to multiple domains from multiple forests, consider using IPA with trusts (preferred) or the winbindd service instead of SSSD. By default, SSSD discovers all domains in the forest and, if a request for an object in a trusted domain arrives, SSSD tries to resolve it. If the trusted domains are not reachable or geographically distant, which makes them slow, you can set the ad_enabled_domains parameter in /etc/sssd/sssd.conf to limit from which trusted domains SSSD resolves objects. By default, you must use fully-qualified user names to resolve users from trusted domains. Additional resources sssd.conf(5) man page on your system 1.7. Overriding Active Directory site autodiscovery with SSSD Active Directory (AD) forests can be very large, with numerous different domain controllers, domains, child domains and physical sites. AD uses the concept of sites to identify the physical location for its domain controllers. This enables clients to connect to the domain controller that is geographically closest, which increases client performance. This section describes how SSSD uses autodiscovery to find an AD site to connect to, and how you can override autodiscovery and specify a site manually. 1.7.1. How SSSD handles AD site autodiscovery By default, SSSD clients use autodiscovery to find its AD site and connect to the closest domain controller. The process consists of these steps: SSSD performs an SRV query to find Domain Controllers (DCs) in the domain. SSSD reads the discovery domain from the dns_discovery_domain or the ad_domain options in the SSSD configuration file. SSSD performs Connection-Less LDAP (CLDAP) pings to these DCs in 3 batches to avoid pinging too many DCs and avoid timeouts from unreachable DCs. If SSSD receives site and forest information during any of these batches, it skips the rest of the batches. 
SSSD creates and saves a list of site-specific and backup servers. 1.7.2. Overriding AD site autodiscovery To override the autodiscovery process, specify the AD site to which you want the client to connect by adding the ad_site option to the [domain] section of the /etc/sssd/sssd.conf file. This example configures the client to connect to the ExampleSite AD site. Prerequisites You have joined a RHEL host to an Active Directory environment using the SSSD service. You can authenticate as the root user so you can edit the /etc/sssd/sssd.conf configuration file. Procedure Open the /etc/sssd/sssd.conf file in a text editor. Add the ad_site option to the [domain] section for your AD domain: Save and close the /etc/sssd/sssd.conf configuration file. Restart the SSSD service to load the configuration changes: 1.8. realm commands The realmd system has two major task areas: Managing system enrollment in a domain. Controlling which domain users are allowed to access local system resources. In realmd use the command line tool realm to run commands. Most realm commands require the user to specify the action that the utility should perform, and the entity, such as a domain or user account, for which to perform the action. Table 1.1. realmd commands Command Description Realm Commands discover Run a discovery scan for domains on the network. join Add the system to the specified domain. leave Remove the system from the specified domain. list List all configured domains for the system or all discovered and configured domains. Login Commands permit Enable access for specific users or for all users within a configured domain to access the local system. deny Restrict access for specific users or for all users within a configured domain to access the local system. Additional resources realm(8) man page on your system 1.9. Ports required for direct integration of RHEL systems into AD using SSSD The following ports must be open and accessible to the AD domain controllers and the RHEL host. Table 1.2. Ports Required for Direct Integration of Linux Systems into AD Using SSSD Service Port Protocol Notes DNS 53 UDP and TCP LDAP 389 UDP and TCP LDAPS 636 TCP Optional Samba 445 UDP and TCP For AD Group Policy Objects (GPOs) Kerberos 88 UDP and TCP Kerberos 464 UDP and TCP Used by kadmin for setting and changing a password LDAP Global Catalog 3268 TCP If the id_provider = ad option is being used LDAPS Global Catalog 3269 TCP Optional NTP 123 UDP Optional NTP 323 UDP Optional | [
"dnf install samba-common-tools realmd oddjob oddjob-mkhomedir sssd adcli krb5-workstation",
"realm discover ad.example.com ad.example.com type: kerberos realm-name: AD.EXAMPLE.COM domain-name: ad.example.com configured: no server-software: active-directory client-software: sssd required-package: oddjob required-package: oddjob-mkhomedir required-package: sssd required-package: adcli required-package: samba-common",
"realm join ad.example.com",
"getent passwd [email protected] [email protected]:*:1450400500:1450400513:Administrator:/home/[email protected]:/bin/bash",
"dnf install realmd oddjob oddjob-mkhomedir sssd adcli krb5-workstation",
"realm join --automatic-id-mapping=no ad.example.com",
"rm -f /var/lib/sss/db/*",
"systemctl restart sssd",
"getent passwd [email protected] [email protected]:*:10000:10000:Administrator:/home/Administrator:/bin/bash",
"[domain/ ad.example.com ] id_provider = ad dyndns_refresh_interval = 43200 dyndns_update_ptr = false dyndns_ttl = 3600",
"systemctl restart sssd",
"[domain/ ad.example.com ] id_provider = ad dyndns_update = false",
"[domain/ad.example.com] id_provider = ad ad_site = ExampleSite",
"systemctl restart sssd"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/integrating_rhel_systems_directly_with_windows_active_directory/connecting-rhel-systems-directly-to-ad-using-sssd_integrating-rhel-systems-directly-with-active-directory |
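As a sketch of the ad_enabled_domains setting mentioned above, the following /etc/sssd/sssd.conf fragment uses hypothetical domain names; replace them with the domains of your own forest:
[domain/ad.example.com]
id_provider = ad
ad_enabled_domains = ad.example.com, emea.ad.example.com
After saving the change, restart the service with systemctl restart sssd so that SSSD only resolves objects from the listed trusted domains.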
Chapter 29. Delegating permissions to user groups to manage users using IdM CLI | Chapter 29. Delegating permissions to user groups to manage users using IdM CLI Delegation is one of the access control methods in IdM, along with self-service rules and role-based access control (RBAC). You can use delegation to assign permissions to one group of users to manage entries for another group of users. This section covers the following topics: Delegation rules Creating a delegation rule using IdM CLI Viewing existing delegation rules using IdM CLI Modifying a delegation rule using IdM CLI Deleting a delegation rule using IdM CLI 29.1. Delegation rules You can delegate permissions to user groups to manage users by creating delegation rules . Delegation rules allow a specific user group to perform write (edit) operations on specific attributes for users in another user group. This form of access control rule is limited to editing the values of a subset of attributes you specify in a delegation rule; it does not grant the ability to add or remove whole entries or control over unspecified attributes. Delegation rules grant permissions to existing user groups in IdM. You can use delegation to, for example, allow the managers user group to manage selected attributes of users in the employees user group. 29.2. Creating a delegation rule using IdM CLI Follow this procedure to create a delegation rule using the IdM CLI. Prerequisites You are logged in as a member of the admins group. Procedure Enter the ipa delegation-add command. Specify the following options: --group : the group who is being granted permissions to the entries of users in the user group. --membergroup : the group whose entries can be edited by members of the delegation group. --permissions : whether users will have the right to view the given attributes ( read ) and add or change the given attributes ( write ). If you do not specify permissions, only the write permission will be added. --attrs : the attributes which users in the member group are allowed to view or edit. For example: 29.3. Viewing existing delegation rules using IdM CLI Follow this procedure to view existing delegation rules using the IdM CLI. Prerequisites You are logged in as a member of the admins group. Procedure Enter the ipa delegation-find command: 29.4. Modifying a delegation rule using IdM CLI Follow this procedure to modify an existing delegation rule using the IdM CLI. Important The --attrs option overwrites whatever the list of supported attributes was, so always include the complete list of attributes along with any new attributes. This also applies to the --permissions option. Prerequisites You are logged in as a member of the admins group. Procedure Enter the ipa delegation-mod command with the desired changes. For example, to add the displayname attribute to the basic manager attributes example rule: 29.5. Deleting a delegation rule using IdM CLI Follow this procedure to delete an existing delegation rule using the IdM CLI. Prerequisites You are logged in as a member of the admins group. Procedure Enter the ipa delegation-del command. When prompted, enter the name of the delegation rule you want to delete: | [
"ipa delegation-add \"basic manager attributes\" --permissions=read --permissions=write --attrs=businesscategory --attrs=departmentnumber --attrs=employeetype --attrs=employeenumber --group=managers --membergroup=employees ------------------------------------------- Added delegation \"basic manager attributes\" ------------------------------------------- Delegation name: basic manager attributes Permissions: read, write Attributes: businesscategory, departmentnumber, employeetype, employeenumber Member user group: employees User group: managers",
"ipa delegation-find -------------------- 1 delegation matched -------------------- Delegation name: basic manager attributes Permissions: read, write Attributes: businesscategory, departmentnumber, employeenumber, employeetype Member user group: employees User group: managers ---------------------------- Number of entries returned 1 ----------------------------",
"ipa delegation-mod \"basic manager attributes\" --attrs=businesscategory --attrs=departmentnumber --attrs=employeetype --attrs=employeenumber --attrs=displayname ---------------------------------------------- Modified delegation \"basic manager attributes\" ---------------------------------------------- Delegation name: basic manager attributes Permissions: read, write Attributes: businesscategory, departmentnumber, employeetype, employeenumber, displayname Member user group: employees User group: managers",
"ipa delegation-del Delegation name: basic manager attributes --------------------------------------------- Deleted delegation \"basic manager attributes\" ---------------------------------------------"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_idm_users_groups_hosts_and_access_control_rules/delegating-permissions-to-user-groups-to-manage-users-using-idm-cli_managing-users-groups-hosts |
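To review a single rule after you create or modify it, you can display it by name. This is a minimal sketch that assumes the example rule name used above:
ipa delegation-show "basic manager attributes"
The output lists the permissions, attributes, member user group, and user group that are currently set on the rule.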
Chapter 4. Technology previews | Chapter 4. Technology previews This section describes technology preview features introduced in Red Hat OpenShift Data Foundation 4.9 under Technology Preview support limitations. Important Technology Preview features are provided with a limited support scope, as detailed on the Customer Portal: Technology Preview Features Support Scope . PV encryption - service account per namespace As of Openshift Data Foundation 4.9, you can use service accounts to authenticate a tenant with Vault as a technology preview. For more information, see Persistent volume encryption . Alerts to control overprovision With this release, you can get alerts for the overprovision. This enables you to define a quota on the amount of persistent volume claims (PVCs) consumed from a storage cluster based on the specific application namespace. For more information, see Overprovision level policy control . | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/4.9_release_notes/technology_previews |
1.7. Listing Enabled Software Collections | 1.7. Listing Enabled Software Collections To get a list of Software Collections that are enabled in the current session, print the $X_SCLS environment variable by running the following command: echo $X_SCLS | null | https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/packaging_guide/sect-listing_enabled_software_collections
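For example, after enabling a collection with the scl utility (the collection name below is only an illustration; use one that is installed on your system), the variable reports it in the subshell that opens:
scl enable rh-python38 bash
echo $X_SCLS
If several Software Collections are enabled in the session, echo $X_SCLS prints them as a space-separated list.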
Chapter 5. Installing a three-node cluster on Nutanix | Chapter 5. Installing a three-node cluster on Nutanix In OpenShift Container Platform version 4.16, you can install a three-node cluster on Nutanix. A three-node cluster consists of three control plane machines, which also act as compute machines. This type of cluster provides a smaller, more resource-efficient cluster for cluster administrators and developers to use for testing, development, and production. 5.1. Configuring a three-node cluster You configure a three-node cluster by setting the number of worker nodes to 0 in the install-config.yaml file before deploying the cluster. Setting the number of worker nodes to 0 ensures that the control plane machines are schedulable. This allows application workloads to be scheduled to run from the control plane nodes. Note Because application workloads run from control plane nodes, additional subscriptions are required, as the control plane nodes are considered to be compute nodes. Prerequisites You have an existing install-config.yaml file. Procedure Set the number of compute replicas to 0 in your install-config.yaml file, as shown in the following compute stanza: Example install-config.yaml file for a three-node cluster apiVersion: v1 baseDomain: example.com compute: - name: worker platform: {} replicas: 0 # ... 5.2. Next steps Installing a cluster on Nutanix | [
"apiVersion: v1 baseDomain: example.com compute: - name: worker platform: {} replicas: 0"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/installing_on_nutanix/installing-nutanix-three-node |
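As an optional verification sketch after the deployment completes (node names vary by cluster), list the nodes and check their roles:
oc get nodes
In a three-node cluster, each of the three nodes reports the control-plane, master, and worker roles, which confirms that the control plane machines are schedulable for application workloads.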
Preface | Preface Open Java Development Kit (OpenJDK) is a free and open source implementation of the Java Platform, Standard Edition (Java SE). The Red Hat build of OpenJDK is available in three versions: 8u, 11u, and 17u. Packages for the Red Hat build of OpenJDK are made available on Red Hat Enterprise Linux and Microsoft Windows and shipped as a JDK and JRE in the Red Hat Ecosystem Catalog. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/release_notes_for_red_hat_build_of_openjdk_17.0.6/pr01 |
Chapter 305. SIP Component | Chapter 305. SIP Component Available as of Camel version 2.5 The sip component in Camel is a communication component, based on the Jain SIP implementation (available under the JCP license). Session Initiation Protocol (SIP) is an IETF-defined signaling protocol, widely used for controlling multimedia communication sessions such as voice and video calls over Internet Protocol (IP).The SIP protocol is an Application Layer protocol designed to be independent of the underlying transport layer; it can run on Transmission Control Protocol (TCP), User Datagram Protocol (UDP) or Stream Control Transmission Protocol (SCTP). The Jain SIP implementation supports TCP and UDP only. The Camel SIP component only supports the SIP Publish and Subscribe capability as described in the RFC3903 - Session Initiation Protocol (SIP) Extension for Event This camel component supports both producer and consumer endpoints. Camel SIP Producers (Event Publishers) and SIP Consumers (Event Subscribers) communicate event & state information to each other using an intermediary entity called a SIP Presence Agent (a stateful brokering entity). For SIP based communication, a SIP Stack with a listener must be instantiated on both the SIP Producer and Consumer (using separate ports if using localhost). This is necessary in order to support the handshakes & acknowledgements exchanged between the SIP Stacks during communication. Maven users will need to add the following dependency to their pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-sip</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency> 305.1. URI format The URI scheme for a sip endpoint is as follows: sip://johndoe@localhost:99999[?options] sips://johndoe@localhost:99999/[?options] This component supports producer and consumer endpoints for both TCP and UDP. You can append query options to the URI in the following format, ?option=value&option=value&... 305.2. Options The SIP Component offers an extensive set of configuration options & capability to create custom stateful headers needed to propagate state via the SIP protocol. The SIP component has no options. The SIP endpoint is configured using URI syntax: with the following path and query parameters: 305.2.1. Path Parameters (1 parameters): Name Description Default Type uri Required URI of the SIP server to connect to (the username and password can be included such as: john:secretmyserver:9999) URI 305.2.2. Query Parameters (44 parameters): Name Description Default Type cacheConnections (common) Should connections be cached by the SipStack to reduce cost of connection creation. This is useful if the connection is used for long running conversations. false boolean contentSubType (common) Setting for contentSubType can be set to any valid MimeSubType. plain String contentType (common) Setting for contentType can be set to any valid MimeType. text String eventHeaderName (common) Setting for a String based event type. String eventId (common) Setting for a String based event Id. Mandatory setting unless a registry based FromHeader is specified String fromHost (common) Hostname of the message originator. Mandatory setting unless a registry based FromHeader is specified String fromPort (common) Port of the message originator. Mandatory setting unless a registry based FromHeader is specified int fromUser (common) Username of the message originator. 
Mandatory setting unless a registry based custom FromHeader is specified. String msgExpiration (common) The amount of time a message received at an endpoint is considered valid 3600 int receiveTimeoutMillis (common) Setting for specifying amount of time to wait for a Response and/or Acknowledgement can be received from another SIP stack 10000 long stackName (common) Name of the SIP Stack instance associated with an SIP Endpoint. NAME_NOT_SET String toHost (common) Hostname of the message receiver. Mandatory setting unless a registry based ToHeader is specified String toPort (common) Portname of the message receiver. Mandatory setting unless a registry based ToHeader is specified int toUser (common) Username of the message receiver. Mandatory setting unless a registry based custom ToHeader is specified. String transport (common) Setting for choice of transport protocol. Valid choices are tcp or udp. tcp String bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean consumer (consumer) This setting is used to determine whether the kind of header (FromHeader,ToHeader etc) that needs to be created for this endpoint false boolean presenceAgent (consumer) This setting is used to distinguish between a Presence Agent & a consumer. This is due to the fact that the SIP Camel component ships with a basic Presence Agent (for testing purposes only). Consumers have to set this flag to true. false boolean exceptionHandler (consumer) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer) Sets the exchange pattern when the consumer creates an exchange. ExchangePattern addressFactory (advanced) To use a custom AddressFactory AddressFactory callIdHeader (advanced) A custom Header object containing call details. Must implement the type javax.sip.header.CallIdHeader CallIdHeader contactHeader (advanced) An optional custom Header object containing verbose contact details (email, phone number etc). Must implement the type javax.sip.header.ContactHeader ContactHeader contentTypeHeader (advanced) A custom Header object containing message content details. Must implement the type javax.sip.header.ContentTypeHeader ContentTypeHeader eventHeader (advanced) A custom Header object containing event details. Must implement the type javax.sip.header.EventHeader EventHeader expiresHeader (advanced) A custom Header object containing message expiration details. Must implement the type javax.sip.header.ExpiresHeader ExpiresHeader extensionHeader (advanced) A custom Header object containing user/application specific details. Must implement the type javax.sip.header.ExtensionHeader ExtensionHeader fromHeader (advanced) A custom Header object containing message originator settings. 
Must implement the type javax.sip.header.FromHeader FromHeader headerFactory (advanced) To use a custom HeaderFactory HeaderFactory listeningPoint (advanced) To use a custom ListeningPoint implementation ListeningPoint maxForwardsHeader (advanced) A custom Header object containing details on maximum proxy forwards. This header places a limit on the viaHeaders possible. Must implement the type javax.sip.header.MaxForwardsHeader MaxForwardsHeader maxMessageSize (advanced) Setting for maximum allowed Message size in bytes. 1048576 int messageFactory (advanced) To use a custom MessageFactory MessageFactory sipFactory (advanced) To use a custom SipFactory to create the SipStack to be used SipFactory sipStack (advanced) To use a custom SipStack SipStack sipUri (advanced) To use a custom SipURI. If none configured, then the SipUri fallback to use the options toUser toHost:toPort SipURI synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean toHeader (advanced) A custom Header object containing message receiver settings. Must implement the type javax.sip.header.ToHeader ToHeader viaHeaders (advanced) List of custom Header objects of the type javax.sip.header.ViaHeader. Each ViaHeader containing a proxy address for request forwarding. (Note this header is automatically updated by each proxy when the request arrives at its listener) List implementationDebugLogFile (logging) Name of client debug log file to use for logging String implementationServerLogFile (logging) Name of server log file to use for logging String implementationTraceLevel (logging) Logging level for tracing 0 String maxForwards (proxy) Number of maximum proxy forwards int useRouterForAllUris (proxy) This setting is used when requests are sent to the Presence Agent via a proxy. false boolean 305.3. Spring Boot Auto-Configuration The component supports 2 options, which are listed below. Name Description Default Type camel.component.sip.enabled Enable sip component true Boolean camel.component.sip.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean 305.4. Sending Messages to/from a SIP endpoint 305.4.1. Creating a Camel SIP Publisher In the example below, a SIP Publisher is created to send SIP Event publications to a user "agent@localhost:5152". This is the address of the SIP Presence Agent which acts as a broker between the SIP Publisher and Subscriber using a SIP Stack named client using a registry based eventHeader called evtHdrName using a registry based eventId called evtId from a SIP Stack with Listener set up as user2@localhost:3534 The Event being published is EVENT_A A Mandatory Header called REQUEST_METHOD is set to Request.Publish thereby setting up the endpoint as a Event publisher" producerTemplate.sendBodyAndHeader( "sip://agent@localhost:5152?stackName=client&eventHeaderName=evtHdrName&eventId=evtid&fromUser=user2&fromHost=localhost&fromPort=3534", "EVENT_A", "REQUEST_METHOD", Request.PUBLISH); 305.4.2. Creating a Camel SIP Subscriber In the example below, a SIP Subscriber is created to receive SIP Event publications sent to a user "johndoe@localhost:5154" using a SIP Stack named Subscriber registering with a Presence Agent user called agent@localhost:5152 using a registry based eventHeader called evtHdrName. 
The evtHdrName contains the Event which is set to "Event_A" using a registry based eventId called evtId @Override protected RouteBuilder createRouteBuilder() throws Exception { return new RouteBuilder() { @Override public void configure() throws Exception { // Create PresenceAgent from("sip://agent@localhost:5152?stackName=PresenceAgent&presenceAgent=true&eventHeaderName=evtHdrName&eventId=evtid") .to("mock:neverland"); // Create Sip Consumer(Event Subscriber) from("sip://johndoe@localhost:5154?stackName=Subscriber&toUser=agent&toHost=localhost&toPort=5152&eventHeaderName=evtHdrName&eventId=evtid") .to("log:ReceivedEvent?level=DEBUG") .to("mock:notification"); } }; } The Camel SIP component also ships with a Presence Agent that is meant to be used for Testing and Demo purposes only. An example of instantiating a Presence Agent is given above. Note that the Presence Agent is set up as a user agent@localhost:5152 and is capable of communicating with both Publisher as well as Subscriber. It has a separate SIP stackName distinct from Publisher as well as Subscriber. While it is set up as a Camel Consumer, it does not actually send any messages along the route to the endpoint "mock:neverland". | [
"<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-sip</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>",
"sip://johndoe@localhost:99999[?options] sips://johndoe@localhost:99999/[?options]",
"sip:uri",
"producerTemplate.sendBodyAndHeader( \"sip://agent@localhost:5152?stackName=client&eventHeaderName=evtHdrName&eventId=evtid&fromUser=user2&fromHost=localhost&fromPort=3534\", \"EVENT_A\", \"REQUEST_METHOD\", Request.PUBLISH);",
"@Override protected RouteBuilder createRouteBuilder() throws Exception { return new RouteBuilder() { @Override public void configure() throws Exception { // Create PresenceAgent from(\"sip://agent@localhost:5152?stackName=PresenceAgent&presenceAgent=true&eventHeaderName=evtHdrName&eventId=evtid\") .to(\"mock:neverland\"); // Create Sip Consumer(Event Subscriber) from(\"sip://johndoe@localhost:5154?stackName=Subscriber&toUser=agent&toHost=localhost&toPort=5152&eventHeaderName=evtHdrName&eventId=evtid\") .to(\"log:ReceivedEvent?level=DEBUG\") .to(\"mock:notification\"); } }; }"
] | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/sip-component |
Chapter 6. Deploying Event-Driven Ansible controller with Red Hat Ansible Automation Platform Operator on Red Hat OpenShift Container Platform | Chapter 6. Deploying Event-Driven Ansible controller with Red Hat Ansible Automation Platform Operator on Red Hat OpenShift Container Platform Event-Driven Ansible controller is the interface for event-driven automation and introduces automated resolution of IT requests. This component helps you connect to sources of events and acts on those events using rulebooks. When you deploy Event-Driven Ansible controller, you can automate decision making, use numerous event sources, implement event-driven automation within and across multiple IT use cases, and achieve more efficient service delivery. Use the following instructions to install Event-Driven Ansible with your Ansible Automation Platform Operator on OpenShift Container Platform. Prerequisites You have installed Ansible Automation Platform Operator on OpenShift Container Platform. You have installed and configured automation controller. Procedure Select Operators Installed Operators . Locate and select your installation of Ansible Automation Platform. Under the Details tab, locate the EDA modal and click Create instance . Click Form view , and in the Name field, enter the name you want for your new Event-Driven Ansible controller deployment. Important If you have installed other Ansible Automation Platform components in your current OpenShift Container Platform namespace, ensure that you provide a unique name for your Event-Driven Ansible controller when you create your Event-Driven Ansible custom resource. Otherwise, naming conflicts can occur and impact Event-Driven Ansible controller deployment. Specify your controller URL in the Automation Server URL field. If you deployed automation controller in Openshift as well, you can find the URL in the navigation panel under Networking Routes . Note This is the only required customization, but you can customize other options using the UI form or directly in the YAML configuration tab, if desired. Important To ensure that you can run concurrent Event-Driven Ansible activations efficiently, you must set your maximum number of activations in proportion to the resources available on your cluster. You can do this by adjusting your Event-Driven Ansible settings in the YAML view. When you activate an Event-Driven Ansible rulebook under standard conditions, it uses approximately 250 MB of memory. However, the actual memory consumption can vary significantly based on the complexity of your rules and the volume and size of the events processed. In scenarios where a large number of events are anticipated or the rulebook complexity is high, conduct a preliminary assessment of resource usage in a staging environment. This ensures that your maximum number of activations is based on the capacity of your resources. Click YAML view to update your YAML key values. Copy and paste the following string at the end of the spec key value section: extra_settings: - setting: EDA_MAX_RUNNING_ACTIVATIONS value: '12' database: resource_requirements: requests: cpu: 200m memory: 512Mi storage_requirements: requests: storage: 100Gi Click Create . This deploys Event-Driven Ansible controller in the namespace you specified. After a couple minutes when the installation is marked as Successful , you can find the URL for the Event-Driven Ansible UI on the Routes page in the OpenShift UI. 
From the navigation panel, select Networking Routes to find the new Route URL that has been created for you. Routes are listed according to the name of your custom resource. Click the new URL under the Location column to navigate to Event-Driven Ansible in the browser. From the navigation panel, select Workloads Secrets and locate the Admin Password k8s secret that was created for you, unless you specified a custom one. Secrets are listed according to the name of your custom resource and appended with -admin-password. Note You can use the password value in the secret to log in to the Event-Driven Ansible controller UI. The default user is admin . | [
"extra_settings: - setting: EDA_MAX_RUNNING_ACTIVATIONS value: '12' database: resource_requirements: requests: cpu: 200m memory: 512Mi storage_requirements: requests: storage: 100Gi"
] | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/deploying_the_red_hat_ansible_automation_platform_operator_on_openshift_container_platform/deploy-eda-controller-on-aap-operator-ocp |
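If you prefer the command line to the OpenShift console for these last steps, the following sketch retrieves the route and the generated admin password; the custom resource name (eda) and namespace (aap) are assumptions, and the secret key may differ in your release:
oc get route -n aap
oc get secret eda-admin-password -n aap -o jsonpath='{.data.password}' | base64 -d
Use the decoded value to log in to the Event-Driven Ansible controller UI as the admin user.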
Chapter 2. Conclusion | Chapter 2. Conclusion Using the procedure you can: Create groups with permissions to curate namespaces and upload collections to it. Add information and resources to the namespace that helps end users of the collection in their automation tasks. Upload a collection to the namespace. Review the namespace import logs to determine the success or failure of uploading the collection and its current approval status. | null | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/curating_collections_using_namespaces_in_automation_hub/conclusion |
Chapter 1. Installing and Configuring Data Warehouse | Chapter 1. Installing and Configuring Data Warehouse 1.1. Overview of Configuring Data Warehouse You can install and configure Data Warehouse on the same machine as the Manager, or on a separate machine with access to the Manager: Install and configure Data Warehouse on the Manager machine This configuration requires only a single registered machine, and is the simplest to configure, but it increases the demand on the Manager machine. Users who require access to the Data Warehouse service require access to the Manager machine itself. See Configuring the Red Hat Virtualization Manager in Installing Red Hat Virtualization as a standalone Manager with local databases . Install and configure Data Warehouse on a separate machine This configuration requires two registered machines. It reduces the load on the Manager machine and avoids potential CPU and memory-sharing conflicts on that machine. Administrators can also allow user access to the Data Warehouse machine, without the need to grant access to the Manager machine. See Installing and Configuring Data Warehouse on a Separate Machine for more information on this configuration. Important It is recommended that you set the system time zone for all machines in your Data Warehouse deployment to UTC. This ensures that data collection is not interrupted by variations in your local time zone: for example, a change from summer time to winter time. To calculate an estimate of the space and resources the ovirt_engine_history database will use, use the RHV Manager History Database Size Calculator tool. The estimate is based on the number of entities and the length of time you have chosen to retain the history records. Important The following behavior is expected in engine-setup : Install the Data Warehouse package, run engine-setup , and answer No to configuring Data Warehouse: Configure Data Warehouse on this host (Yes, No) [Yes]: No Run engine-setup again; setup no longer presents the option to configure Data Warehouse. To force engine-setup to present the option again, run engine-setup --reconfigure-optional-components . To configure only the currently installed Data Warehouse packages, and prevent setup from applying package updates found in enabled repositories, add the --offline option . | [
"Configure Data Warehouse on this host (Yes, No) [Yes]: No"
] | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/data_warehouse_guide/chap-installing_and_configuring_data_warehouse |
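For reference, the two engine-setup variations mentioned above look like the following; run them on the machine where you ran the original setup, and use each flag only in the situation described:
engine-setup --reconfigure-optional-components
engine-setup --offline
The first command forces setup to present the Data Warehouse option again; the second configures only the currently installed packages without applying package updates from enabled repositories.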
Chapter 2. Learn more about OpenShift Container Platform | Chapter 2. Learn more about OpenShift Container Platform Use the following sections to find content to help you learn about and use OpenShift Container Platform. 2.1. Architect Learn about OpenShift Container Platform Plan an OpenShift Container Platform deployment Additional resources Enterprise Kubernetes with OpenShift Tested platforms OpenShift blog Architecture Security and compliance What's new in OpenShift Container Platform Networking OpenShift Container Platform life cycle Backup and restore 2.2. Cluster Administrator Learn about OpenShift Container Platform Deploy OpenShift Container Platform Manage OpenShift Container Platform Additional resources Enterprise Kubernetes with OpenShift Installing OpenShift Container Platform Using Insights to identify issues with your cluster Getting Support Architecture Post installation configuration Logging OpenShift Knowledgebase articles OpenShift Interactive Learning Portal Networking Monitoring overview OpenShift Container Platform Life Cycle Storage Backup and restore Updating a cluster 2.3. Application Site Reliability Engineer (App SRE) Learn about OpenShift Container Platform Deploy and manage applications Additional resources OpenShift Interactive Learning Portal Projects Getting Support Architecture Operators OpenShift Knowledgebase articles Logging OpenShift Container Platform Life Cycle Blogs about logging Monitoring 2.4. Developer Learn about application development in OpenShift Container Platform Deploy applications Getting started with OpenShift for developers (interactive tutorial) Creating applications Red Hat Developers site Builds Red Hat CodeReady Workspaces Operators Images Developer-focused CLI | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/about/learn_more_about_openshift |
Installation and Configuration Guide | Installation and Configuration Guide Red Hat Enterprise Linux Atomic Host 7 Installation and Configuration Guide Red Hat Atomic Host Documentation Team [email protected] | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_atomic_host/7/html/installation_and_configuration_guide/index |
Release Notes for Connectivity Link 1.0 | Release Notes for Connectivity Link 1.0 Red Hat Connectivity Link 1.0 What's new in Red Hat Connectivity Link Red Hat Connectivity Link documentation team | null | https://docs.redhat.com/en/documentation/red_hat_connectivity_link/1.0/html/release_notes_for_connectivity_link_1.0/index |
22.16.8. Adding a Manycast Server Address | 22.16.8. Adding a Manycast Server Address To add a manycast server address, that is to say, to configure an address to allow the clients to discover the server by multicasting NTP packets, make use of the manycastserver command in the ntp.conf file. The manycastserver command takes the following form: manycastserver address Enables the sending of multicast messages. Where address is the address to multicast to. This should be used together with authentication to prevent service disruption. This command configures a system to act as an NTP server. Systems can be both client and server at the same time. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s2_adding_a_manycastserver_address |
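A minimal /etc/ntp.conf sketch, assuming a hypothetical multicast group address, would be:
manycastserver 239.0.0.42
Clients that should discover this server use a matching manycastclient entry for the same address, and, as noted above, authentication should be enabled to prevent service disruption.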
10.5.13. ExtendedStatus | 10.5.13. ExtendedStatus The ExtendedStatus directive controls whether Apache generates basic ( off ) or detailed server status information ( on ), when the server-status handler is called. The server-status handler is called using Location tags. More information on calling server-status is included in Section 10.5.60, " Location " . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-apache-extendedstatus |
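A brief httpd.conf sketch that pairs the directive with the server-status handler (any access restrictions you normally apply to the location are omitted here):
ExtendedStatus On
<Location /server-status>
    SetHandler server-status
</Location>
With ExtendedStatus On, the page served at /server-status includes detailed per-request information rather than the basic summary.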
Chapter 2. Integrating with CI systems | Chapter 2. Integrating with CI systems Red Hat Advanced Cluster Security for Kubernetes (RHACS) integrates with a variety of continuous integration (CI) products. Before you deploy images, you can use RHACS to apply build-time and deploy-time security rules to your images. After images are built and pushed to a registry, RHACS integrates into CI pipelines. Pushing the image first allows developers to continue testing their artifacts while dealing with any policy violations alongside any other CI test failures, linter violations, or other problems. If possible, configure the version control system to block pull or merge requests from being merged if the build stage, which includes RHACS checks, fails. The integration with your CI product functions by contacting your RHACS installation to check whether the image complies with build-time policies you have configured. If there are policy violations, a detailed message is displayed on the console log, including the policy description, rationale, and remediation instructions. Each policy includes an optional enforcement setting. If you mark a policy for build-time enforcement, failure of that policy causes the client to exit with a nonzero error code. To integrate Red Hat Advanced Cluster Security for Kubernetes with your CI system, follow these steps: Configure build policies . Configure a registry integration . Configure access to your RHACS instance. Integrate with your CI pipeline . 2.1. Configuring build policies You can check RHACS policies during builds. Procedure Configure policies that apply to the build time of the container lifecycle. Integrate with the registry that images are pushed to during the build. Additional resources Integrating with image registries 2.1.1. Checking existing build-time policies Use the RHACS portal to check any existing build-time policies that you have configured in Red Hat Advanced Cluster Security for Kubernetes. Procedure In the RHACS portal, go to Platform Configuration Policy Management . Use global search to search for Lifecycle Stage:Build . 2.1.2. Creating a new system policy In addition to using the default policies, you can also create custom policies in Red Hat Advanced Cluster Security for Kubernetes. Procedure In the RHACS portal, go to Platform Configuration Policy Management . Click + New Policy . Enter the Name for the policy. Select a Severity level for the policy: Critical, High, Medium, or Low. Choose the Lifecycle Stages for which the policy is applicable, from Build , Deploy , or Runtime . You can select more than one stage. Note If you create a new policy for integrating with a CI system, select Build as the lifecycle stage. Build-time policies apply to image fields such as CVEs and Dockerfile instructions. Deploy-time policies can include all build-time policy criteria. They can also have data from your cluster configurations, such as running in privileged mode or mounting the Docker daemon socket. Runtime policies can include all build-time and deploy-time policy criteria, and data about process executions during runtime. Enter information about the policy in the Description , Rationale , and Remediation fields. When CI validates the build, the data from these fields is displayed. Therefore, include all information explaining the policy. Select a category from the Categories drop-down menu. Select a notifier from the Notifications drop-down menu that receives alert notifications when a violation occurs for this policy. 
Note You must integrate RHACS with your notification providers, such as webhooks, Jira, or PagerDuty, to receive alert notifications. Notifiers only show up if you have integrated any notification providers with RHACS. Use Restrict to Scope to enable this policy only for a specific cluster, namespace, or label. You can add multiple scopes and also use regular expressions in RE2 Syntax for namespaces and labels. Use Exclude by Scope to exclude deployments, clusters, namespaces, and labels. This field indicates that the policy will not apply to the entities that you specify. You can add multiple scopes and also use regular expressions in RE2 Syntax for namespaces and labels. However, you cannot use regular expressions for selecting deployments. For Excluded Images (Build Lifecycle only) , select all the images from the list for which you do not want to trigger a violation for the policy. Note The Excluded Images (Build Lifecycle only) setting only applies when you check images in a continuous integration system (the Build lifecycle stage). It does not have any effect if you use this policy to check running deployments (the Deploy lifecycle stage) or runtime activities (the Runtime lifecycle stage). In the Policy Criteria section, configure the attributes that will trigger the policy. Select on the panel header. The new policy panel shows a preview of the violations that are triggered if you enable the policy. Select on the panel header. Choose the enforcement behavior for the policy. Enforcement settings are only available for the stages that you selected for the Lifecycle Stages option. Select ON to enforce policy and report a violation. Select OFF to only report a violation. Note The enforcement behavior is different for each lifecycle stage. For the Build stage, RHACS fails your CI builds when images match the conditions of the policy. For the Deploy stage, RHACS blocks the creation and update of deployments that match the conditions of the policy if the RHACS admission controller is configured and running. In clusters with admission controller enforcement, the Kubernetes or OpenShift Container Platform API server blocks all noncompliant deployments. In other clusters, RHACS edits noncompliant deployments to prevent pods from being scheduled. For existing deployments, policy changes only result in enforcement at the detection of the criteria, when a Kubernetes event occurs. For more information about enforcement, see "Security policy enforcement for the deploy stage". For the Runtime stage, RHACS stops all pods that match the conditions of the policy. Warning Policy enforcement can impact running applications or development processes. Before you enable enforcement options, inform all stakeholders and plan how to respond to the automated enforcement actions. 2.1.2.1. Security policy enforcement for the deploy stage Red Hat Advanced Cluster Security for Kubernetes supports two forms of security policy enforcement for deploy-time policies: hard enforcement through the admission controller and soft enforcement by RHACS Sensor. The admission controller blocks creation or updating of deployments that violate policy. If the admission controller is disabled or unavailable, Sensor can perform enforcement by scaling down replicas for deployments that violate policy to 0 . Warning Policy enforcement can impact running applications or development processes. Before you enable enforcement options, inform all stakeholders and plan how to respond to the automated enforcement actions. 2.1.2.1.1. 
Hard enforcement Hard enforcement is performed by the RHACS admission controller. In clusters with admission controller enforcement, the Kubernetes or OpenShift Container Platform API server blocks all noncompliant deployments. The admission controller blocks CREATE and UPDATE operations. Any pod create or update request that satisfies a policy configured with deploy-time enforcement enabled will fail. Note Kubernetes admission webhooks support only CREATE , UPDATE , DELETE , or CONNECT operations. The RHACS admission controller supports only CREATE and UPDATE operations. Operations such as kubectl patch , kubectl set , and kubectl scale are PATCH operations, not UPDATE operations. Because PATCH operations are not supported in Kubernetes, RHACS cannot perform enforcement on PATCH operations. For blocking enforcement, you must enable the following settings for the cluster in RHACS: Enforce on Object Creates : This toggle in the Dynamic Configuration section controls the behavior of the admission control service. You must have the Configure Admission Controller Webhook to listen on Object Creates toggle in the Static Configuration section turned on for this to work. Enforce on Object Updates : This toggle in the Dynamic Configuration section controls the behavior of the admission control service. You must have the Configure Admission Controller Webhook to listen on Object Updates toggle in the Static Configuration section turned on for this to work. If you make changes to settings in the Static Configuration setting, you must redeploy the secured cluster for those changes to take effect. 2.1.2.1.2. Soft enforcement Soft enforcement is performed by RHACS Sensor. This enforcement prevents an operation from being initiated. With soft enforcement, Sensor scales the replicas to 0, and prevents pods from being scheduled. In this enforcement, a non-ready deployment is available in the cluster. If soft enforcement is configured, and Sensor is down, then RHACS cannot perform enforcement. 2.1.2.1.3. Namespace exclusions By default, RHACS excludes certain administrative namespaces, such as the stackrox , kube-system , and istio-system namespaces, from enforcement blocking. The reason for this is that some items in these namespaces must be deployed for RHACS to work correctly. 2.1.2.1.4. Enforcement on existing deployments For existing deployments, policy changes only result in enforcement at the detection of the criteria, when a Kubernetes event occurs. If you make changes to a policy, you must reassess policies by selecting Policy Management and clicking Reassess All . This action applies deploy policies on all existing deployments regardless of whether there are any new incoming Kubernetes events. If a policy is violated, then RHACS performs enforcement. Additional resources Using admission controller enforcement 2.2. Configuring registry integration To scan images, you must provide Red Hat Advanced Cluster Security for Kubernetes with access to the image registry you are using in your build pipeline. 2.2.1. Checking for existing registry integration You can use the RHACS portal to check if you have already integrated with a registry. Procedure In the RHACS portal, go to Platform Configuration Integrations . Under the Image Integration section, look for highlighted Registry tiles. The tiles also list the number of items already configured for that tile. If none of the Registry tiles are highlighted, you must first integrate with an image registry. 2.2.1.1. 
Additional resources Integrating with image registries 2.3. Configuring access RHACS provides the roxctl command-line interface (CLI) to make it easy to integrate RHACS policies into your build pipeline. The roxctl CLI prints detailed information about problems and how to fix them so that developers can maintain high standards in the early phases of the container lifecycle. To securely authenticate to the Red Hat Advanced Cluster Security for Kubernetes API server, you must create an API token. 2.3.1. Exporting and saving the API token Procedure After you have generated the authentication token, export it as the ROX_API_TOKEN variable by entering the following command: $ export ROX_API_TOKEN=<api_token> (Optional): You can also save the token in a file and use it with the --token-file option by entering the following command: $ roxctl central debug dump --token-file <token_file> Note the following guidelines: You cannot use both the --password ( -p ) and the --token-file options simultaneously. If you have already set the ROX_API_TOKEN variable, and specify the --token-file option, the roxctl CLI uses the specified token file for authentication. If you have already set the ROX_API_TOKEN variable, and specify the --password option, the roxctl CLI uses the specified password for authentication. 2.3.2. Installing the roxctl CLI by downloading the binary You can install the roxctl CLI to interact with Red Hat Advanced Cluster Security for Kubernetes from a command-line interface. You can install roxctl on Linux, Windows, or macOS. 2.3.2.1. Installing the roxctl CLI on Linux You can install the roxctl CLI binary on Linux by using the following procedure. Note roxctl CLI for Linux is available for amd64 , arm64 , ppc64le , and s390x architectures. Procedure Determine the roxctl architecture for the target operating system: $ arch="$(uname -m | sed "s/x86_64//")"; arch="${arch:+-$arch}" Download the roxctl CLI: $ curl -L -f -o roxctl "https://mirror.openshift.com/pub/rhacs/assets/4.6.3/bin/Linux/roxctl${arch}" Make the roxctl binary executable: $ chmod +x roxctl Place the roxctl binary in a directory that is on your PATH : To check your PATH , execute the following command: $ echo $PATH Verification Verify the roxctl version you have installed: $ roxctl version 2.3.2.2. Installing the roxctl CLI on macOS You can install the roxctl CLI binary on macOS by using the following procedure. Note roxctl CLI for macOS is available for amd64 and arm64 architectures. Procedure Determine the roxctl architecture for the target operating system: $ arch="$(uname -m | sed "s/x86_64//")"; arch="${arch:+-$arch}" Download the roxctl CLI: $ curl -L -f -o roxctl "https://mirror.openshift.com/pub/rhacs/assets/4.6.3/bin/Darwin/roxctl${arch}" Remove all extended attributes from the binary: $ xattr -c roxctl Make the roxctl binary executable: $ chmod +x roxctl Place the roxctl binary in a directory that is on your PATH : To check your PATH , execute the following command: $ echo $PATH Verification Verify the roxctl version you have installed: $ roxctl version 2.3.2.3. Installing the roxctl CLI on Windows You can install the roxctl CLI binary on Windows by using the following procedure. Note roxctl CLI for Windows is available for the amd64 architecture. Procedure Download the roxctl CLI: $ curl -f -O https://mirror.openshift.com/pub/rhacs/assets/4.6.3/bin/Windows/roxctl.exe Verification Verify the roxctl version you have installed: $ roxctl version 2.3.3.
Running the roxctl CLI from a container The roxctl client is the default entry point in the RHACS roxctl image. To run the roxctl client in a container image: Prerequisites You must first generate an authentication token from the RHACS portal. Procedure Log in to the registry.redhat.io registry. $ docker login registry.redhat.io Pull the latest container image for the roxctl CLI. $ docker pull registry.redhat.io/advanced-cluster-security/rhacs-roxctl-rhel8:4.6.3 After you install the CLI, you can run it by using the following command: $ docker run -e ROX_API_TOKEN=$ROX_API_TOKEN \ -it registry.redhat.io/advanced-cluster-security/rhacs-roxctl-rhel8:4.6.3 \ -e $ROX_CENTRAL_ADDRESS <command> Note In Red Hat Advanced Cluster Security Cloud Service (RHACS Cloud Service), when using roxctl commands that require the Central address, use the Central instance address as displayed in the Instance Details section of the Red Hat Hybrid Cloud Console. For example, use acs-ABCD12345.acs.rhcloud.com instead of acs-data-ABCD12345.acs.rhcloud.com . Verification Verify the roxctl version you have installed. $ docker run -it registry.redhat.io/advanced-cluster-security/rhacs-roxctl-rhel8:4.6.3 version 2.4. Integrating with your CI pipeline After you have finished these procedures, the next step is to integrate with your CI pipeline. Each CI system might require a slightly different configuration. 2.4.1. Using Jenkins Use the StackRox Container Image Scanner Jenkins plugin for integrating with Jenkins. You can use this plugin in both Jenkins freestyle projects and pipelines. 2.4.2. Using CircleCI You can integrate Red Hat Advanced Cluster Security for Kubernetes with CircleCI. Prerequisites You have a token with read and write permissions for the Image resource. You have a username and password for your Docker Hub account. Procedure Log in to CircleCI and open an existing project or create a new project. Click Project Settings . Click Environment variables . Click Add variable and create the following four environment variables: Name : STACKROX_CENTRAL_HOST - The DNS name or IP address of Central. Name : ROX_API_TOKEN - The API token to access Red Hat Advanced Cluster Security for Kubernetes. Name : DOCKERHUB_PASSWORD - The password for your Docker Hub account. Name : DOCKERHUB_USER - The username for your Docker Hub account. Create a directory called .circleci in the root directory of your local code repository for your selected project, if you do not already have a CircleCI configuration file. Create a config.yml configuration file with the following lines in the .circleci directory: version: 2 jobs: check-policy-compliance: docker: - image: 'circleci/node:latest' auth: username: $DOCKERHUB_USER password: $DOCKERHUB_PASSWORD steps: - checkout - run: name: Install roxctl command: | curl -H "Authorization: Bearer $ROX_API_TOKEN" https://$STACKROX_CENTRAL_HOST:443/api/cli/download/roxctl-linux -o roxctl && chmod +x ./roxctl - run: name: Scan images for policy deviations and vulnerabilities command: | ./roxctl image check --endpoint "$STACKROX_CENTRAL_HOST:443" --image "<your_registry/repo/image_name>" 1 - run: name: Scan deployment files for policy deviations command: | ./roxctl deployment check --endpoint "$STACKROX_CENTRAL_HOST:443" --file "<your_deployment_file>" 2 # Important note: This step assumes the YAML file you'd like to test is located in the project.
workflows: version: 2 build_and_test: jobs: - check-policy-compliance 1 Replace <your_registry/repo/image_name> with your registry and image path. 2 Replace <your_deployment_file> with the path to your deployment file. Note If you already have a config.yml file for CircleCI in your repository, add a new jobs section with the specified details in your existing configuration file. After you commit the configuration file to your repository, go to the Jobs queue in your CircleCI dashboard to verify the build policy enforcement. | [
"export ROX_API_TOKEN=<api_token>",
"roxctl central debug dump --token-file <token_file>",
"arch=\"USD(uname -m | sed \"s/x86_64//\")\"; arch=\"USD{arch:+-USDarch}\"",
"curl -L -f -o roxctl \"https://mirror.openshift.com/pub/rhacs/assets/4.6.3/bin/Linux/roxctlUSD{arch}\"",
"chmod +x roxctl",
"echo USDPATH",
"roxctl version",
"arch=\"USD(uname -m | sed \"s/x86_64//\")\"; arch=\"USD{arch:+-USDarch}\"",
"curl -L -f -o roxctl \"https://mirror.openshift.com/pub/rhacs/assets/4.6.3/bin/Darwin/roxctlUSD{arch}\"",
"xattr -c roxctl",
"chmod +x roxctl",
"echo USDPATH",
"roxctl version",
"curl -f -O https://mirror.openshift.com/pub/rhacs/assets/4.6.3/bin/Windows/roxctl.exe",
"roxctl version",
"docker login registry.redhat.io",
"docker pull registry.redhat.io/advanced-cluster-security/rhacs-roxctl-rhel8:4.6.3",
"docker run -e ROX_API_TOKEN=USDROX_API_TOKEN -it registry.redhat.io/advanced-cluster-security/rhacs-roxctl-rhel8:4.6.3 -e USDROX_CENTRAL_ADDRESS <command>",
"docker run -it registry.redhat.io/advanced-cluster-security/rhacs-roxctl-rhel8:4.6.3 version",
"version: 2 jobs: check-policy-compliance: docker: - image: 'circleci/node:latest' auth: username: USDDOCKERHUB_USER password: USDDOCKERHUB_PASSWORD steps: - checkout - run: name: Install roxctl command: | curl -H \"Authorization: Bearer USDROX_API_TOKEN\" https://USDSTACKROX_CENTRAL_HOST:443/api/cli/download/roxctl-linux -o roxctl && chmod +x ./roxctl - run: name: Scan images for policy deviations and vulnerabilities command: | ./roxctl image check --endpoint \"USDSTACKROX_CENTRAL_HOST:443\" --image \"<your_registry/repo/image_name>\" 1 - run: name: Scan deployment files for policy deviations command: | ./roxctl image check --endpoint \"USDSTACKROX_CENTRAL_HOST:443\" --image \"<your_deployment_file>\" 2 # Important note: This step assumes the YAML file you'd like to test is located in the project. workflows: version: 2 build_and_test: jobs: - check-policy-compliance"
] | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.6/html/integrating/integrate-with-ci-systems |
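Because build-time enforcement is signaled purely through the roxctl exit code, the same check can be wired into any CI system, not only the Jenkins and CircleCI integrations described above. The following is a minimal sketch of such a generic step; the Central endpoint, image reference, and CI variable names are illustrative assumptions, not values taken from this guide.
#!/usr/bin/env bash
# Hypothetical generic CI step: fail the job when a build-time policy is violated.
# Assumes roxctl is installed and ROX_API_TOKEN is already exported by the CI system.
set -euo pipefail
ROX_ENDPOINT="central.example.com:443"                 # assumption: your Central address
IMAGE="quay.io/myorg/myapp:${CI_COMMIT_SHA:-latest}"   # assumption: image pushed earlier in the pipeline
# roxctl exits with a nonzero code when a policy marked for build-time
# enforcement fails, which in turn fails this script and the CI job.
roxctl image check --endpoint "$ROX_ENDPOINT" --image "$IMAGE"
Running the check after the image is pushed, as recommended earlier in this chapter, lets developers keep testing their artifacts while the policy violation is reported alongside other CI failures.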
Chapter 7. Managing alerts on the Ceph dashboard | Chapter 7. Managing alerts on the Ceph dashboard As a storage administrator, you can see the details of alerts and create silences for them on the Red Hat Ceph Storage dashboard. This includes the following pre-defined alerts: CephadmDaemonFailed CephadmPaused CephadmUpgradeFailed CephDaemonCrash CephDeviceFailurePredicted CephDeviceFailurePredictionTooHigh CephDeviceFailureRelocationIncomplete CephFilesystemDamaged CephFilesystemDegraded CephFilesystemFailureNoStandby CephFilesystemInsufficientStandby CephFilesystemMDSRanksLow CephFilesystemOffline CephFilesystemReadOnly CephHealthError CephHealthWarning CephMgrModuleCrash CephMgrPrometheusModuleInactive CephMonClockSkew CephMonDiskspaceCritical CephMonDiskspaceLow CephMonDown CephMonDownQuorumAtRisk CephNodeDiskspaceWarning CephNodeInconsistentMTU CephNodeNetworkPacketDrops CephNodeNetworkPacketErrors CephNodeRootFilesystemFull CephObjectMissing CephOSDBackfillFull CephOSDDown CephOSDDownHigh CephOSDFlapping CephOSDFull CephOSDHostDown CephOSDInternalDiskSizeMismatch CephOSDNearFull CephOSDReadErrors CephOSDTimeoutsClusterNetwork CephOSDTimeoutsPublicNetwork CephOSDTooManyRepairs CephPGBackfillAtRisk CephPGImbalance CephPGNotDeepScrubbed CephPGNotScrubbed CephPGRecoveryAtRisk CephPGsDamaged CephPGsHighPerOSD CephPGsInactive CephPGsUnclean CephPGUnavilableBlockingIO CephPoolBackfillFull CephPoolFull CephPoolGrowthWarning CephPoolNearFull CephSlowOps PrometheusJobMissing Figure 7.1. Pre-defined alerts You can also monitor alerts using simple network management protocol (SNMP) traps. 7.1. Enabling monitoring stack You can manually enable the monitoring stack of the Red Hat Ceph Storage cluster, such as Prometheus, Alertmanager, and Grafana, using the command-line interface. You can use the Prometheus and Alertmanager API to manage alerts and silences. Prerequisite A running Red Hat Ceph Storage cluster. root-level access to all the hosts. Procedure Log into the cephadm shell: Example Set the APIs for the monitoring stack: Specify the host and port of the Alertmanager server: Syntax Example To see the configured alerts, configure the URL to the Prometheus API. Using this API, the Ceph Dashboard UI verifies that a new silence matches a corresponding alert. Syntax Example After setting up the hosts, refresh your browser's dashboard window. Specify the host and port of the Grafana server: Syntax Example Get the Prometheus, Alertmanager, and Grafana API host details: Example Optional: If you are using a self-signed certificate in your Prometheus, Alertmanager, or Grafana setup, disable the certificate verification in the dashboard This avoids refused connections caused by certificates signed by an unknown Certificate Authority (CA) or that do not match the hostname. For Prometheus: Example For Alertmanager: Example For Grafana: Example Get the details of the self-signed certificate verification setting for Prometheus, Alertmanager, and Grafana: Example Optional: If the dashboard does not reflect the changes, you have to disable and then enable the dashboard: Example Additional Resources See the Bootstrap command options section in the Red Hat Ceph Storage Installation Guide . See the Red Hat Ceph Storage installation chapter in the Red Hat Ceph Storage Installation Guide . See the Deploying the monitoring stack using the Ceph Orchestrator section in the Red Hat Ceph Storage Operations Guide . 7.2. 
Configuring Grafana certificate The cephadm deploys Grafana using the certificate defined in the ceph key/value store. If a certificate is not specified, cephadm generates a self-signed certificate during the deployment of the Grafana service. You can configure a custom certificate with the ceph config-key set command. Prerequisite A running Red Hat Ceph Storage cluster. Procedure Log into the cephadm shell: Example Configure the custom certificate for Grafana: Example If Grafana is already deployed, then run reconfig to update the configuration: Example Every time a new certificate is added, follow the below steps: Make a new directory Example Generate the key: Example View the key: Example Make a request: Example Review the request prior to sending it for signature: Example As the CA sign: Example Check the signed certificate: Example Additional Resources See the Using shared system certificates for more details. 7.3. Adding Alertmanager webhooks You can add new webhooks to an existing Alertmanager configuration to receive real-time alerts about the health of the storage cluster. You have to enable incoming webhooks to allow asynchronous messages into third-party applications. For example, if an OSD is down in a Red Hat Ceph Storage cluster, you can configure the Alertmanager to send notification on Google chat. Prerequisite A running Red Hat Ceph Storage cluster with monitoring stack components enabled. Incoming webhooks configured on the receiving third-party application. Procedure Log into the cephadm shell: Example Configure the Alertmanager to use the webhook for notification: Syntax The default_webhook_urls is a list of additional URLs that are added to the default receivers' webhook_configs configuration. Example Update Alertmanager configuration: Example Verification An example notification from Alertmanager to Gchat: Example 7.4. Viewing alerts on the Ceph dashboard After an alert has fired, you can view it on the Red Hat Ceph Storage Dashboard. You can edit the Manager module settings to trigger a mail when an alert is fired. Prerequisite A running Red Hat Ceph Storage cluster. Dashboard is installed. A running simple mail transfer protocol (SMTP) configured. An alert emitted. Procedure From the dashboard navigation, go to Observability->Alerts . View active Prometheus alerts from the Active Alerts tab. View all alerts from the Alerts tab. To view alert details, expand the alert row. To view the source of an alert, click on its row, and then click Source . Additional resources See the Using the Ceph Manager alerts module for more details to configure SMTP. 7.5. Creating a silence on the Ceph dashboard You can create a silence for an alert for a specified amount of time on the Red Hat Ceph Storage Dashboard. Prerequisite A running Red Hat Ceph Storage cluster. Dashboard is installed. An alert fired. Procedure From the dashboard navigation, go to Observability->Alerts . On the Silences tab, click Create . In the Create Silence form, fill in the required fields. Use the Add matcher to add silence requirements. Figure 7.2. Creating a silence Click Create Silence . A notification displays that the silence was created successfully and the Alerts Silenced updates in the Silences table. 7.6. Recreating a silence on the Ceph dashboard You can recreate a silence from an expired silence on the Red Hat Ceph Storage Dashboard. Prerequisite A running Red Hat Ceph Storage cluster. Dashboard is installed. An alert fired. A silence created for the alert. 
Procedure From the dashboard navigation, go to Observability->Alerts . On the Silences tab, select the row with the alert that you want to recreate, and click Recreate from the action drop-down. Edit any needed details, and click the Recreate Silence button. A notification displays indicating that the silence was edited successfully and the status of the silence is now active . 7.7. Editing a silence on the Ceph dashboard You can edit an active silence, for example, to extend the time it is active on the Red Hat Ceph Storage Dashboard. If the silence has expired, you can either recreate a silence or create a new silence for the alert. Prerequisite A running Red Hat Ceph Storage cluster. Dashboard is installed. An alert fired. A silence created for the alert. Procedure Log in to the Dashboard. On the navigation menu, click Cluster . Select Monitoring from the drop-down menu. Click the Silences tab. To edit the silence, click its row. In the Edit drop-down menu, select Edit . In the Edit Silence window, update the details and click Edit Silence . Figure 7.3. Edit silence You get a notification that the silence was updated successfully. 7.8. Expiring a silence on the Ceph dashboard You can expire a silence so that any matched alerts are no longer suppressed on the Red Hat Ceph Storage Dashboard. Prerequisite A running Red Hat Ceph Storage cluster. Dashboard is installed. An alert fired. A silence created for the alert. Procedure From the dashboard navigation, go to Observability->Alerts . On the Silences tab, select the row with the alert that you want to expire, and click Expire from the action drop-down. In the Expire Silence notification, select Yes, I am sure and click Expire Silence . A notification displays indicating that the silence was expired successfully and the Status of the alert is expired , in the Silences table. Additional Resources For more information, see the Red Hat Ceph Storage Troubleshooting Guide .
"cephadm shell",
"ceph dashboard set-alertmanager-api-host ALERTMANAGER_API_HOST : PORT",
"ceph dashboard set-alertmanager-api-host http://10.0.0.101:9093 Option ALERTMANAGER_API_HOST updated",
"ceph dashboard set-prometheus-api-host PROMETHEUS_API_HOST : PORT",
"ceph dashboard set-prometheus-api-host http://10.0.0.101:9095 Option PROMETHEUS_API_HOST updated",
"ceph dashboard set-grafana-api-url GRAFANA_API_URL : PORT",
"ceph dashboard set-grafana-api-url https://10.0.0.101:3000 Option GRAFANA_API_URL updated",
"ceph dashboard get-alertmanager-api-host http://10.0.0.101:9093 ceph dashboard get-prometheus-api-host http://10.0.0.101:9095 ceph dashboard get-grafana-api-url http://10.0.0.101:3000",
"ceph dashboard set-prometheus-api-ssl-verify False",
"ceph dashboard set-alertmanager-api-ssl-verify False",
"ceph dashboard set-grafana-api-ssl-verify False",
"ceph dashboard get-prometheus-api-ssl-verify ceph dashboard get-alertmanager-api-ssl-verify ceph dashboard get-grafana-api-ssl-verify",
"ceph mgr module disable dashboard ceph mgr module enable dashboard",
"cephadm shell",
"ceph config-key set mgr/cephadm/grafana_key -i USDPWD/key.pem ceph config-key set mgr/cephadm/grafana_crt -i USDPWD/certificate.pem",
"ceph orch reconfig grafana",
"mkdir /root/internalca cd /root/internalca",
"openssl ecparam -genkey -name secp384r1 -out USD(date +%F).key",
"openssl ec -text -in USD(date +%F).key | less",
"umask 077; openssl req -config openssl-san.cnf -new -sha256 -key USD(date +%F).key -out USD(date +%F).csr",
"openssl req -text -in USD(date +%F).csr | less",
"openssl ca -extensions v3_req -in USD(date +%F).csr -out USD(date +%F).crt -extfile openssl-san.cnf",
"openssl x509 -text -in USD(date +%F).crt -noout | less",
"cephadm shell",
"service_type: alertmanager spec: user_data: default_webhook_urls: - \"_URLS_\"",
"service_type: alertmanager spec: user_data: webhook_configs: - url: 'http:127.0.0.10:8080'",
"ceph orch reconfig alertmanager",
"using: https://chat.googleapis.com/v1/spaces/(xx- space identifyer -xx)/messages posting: {'status': 'resolved', 'labels': {'alertname': 'PrometheusTargetMissing', 'instance': 'postgres-exporter.host03.chest response: 200 response: { \"name\": \"spaces/(xx- space identifyer -xx)/messages/3PYDBOsIofE.3PYDBOsIofE\", \"sender\": { \"name\": \"users/114022495153014004089\", \"displayName\": \"monitoring\", \"avatarUrl\": \"\", \"email\": \"\", \"domainId\": \"\", \"type\": \"BOT\", \"isAnonymous\": false, \"caaEnabled\": false }, \"text\": \"Prometheus target missing (instance postgres-exporter.cluster.local:9187)\\n\\nA Prometheus target has disappeared. An e \"cards\": [], \"annotations\": [], \"thread\": { \"name\": \"spaces/(xx- space identifyer -xx)/threads/3PYDBOsIofE\" }, \"space\": { \"name\": \"spaces/(xx- space identifyer -xx)\", \"type\": \"ROOM\", \"singleUserBotDm\": false, \"threaded\": false, \"displayName\": \"_privmon\", \"legacyGroupChat\": false }, \"fallbackText\": \"\", \"argumentText\": \"Prometheus target missing (instance postgres-exporter.cluster.local:9187)\\n\\nA Prometheus target has disappea \"attachment\": [], \"createTime\": \"2022-06-06T06:17:33.805375Z\", \"lastUpdateTime\": \"2022-06-06T06:17:33.805375Z\""
] | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html/dashboard_guide/management-of-alerts-on-the-ceph-dashboard |
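The chapter above notes that the Prometheus and Alertmanager APIs can also be used to manage alerts and silences directly. As a rough sketch, a silence can be created with curl against the standard Alertmanager v2 API; the Alertmanager address, alert name, duration, and comment below are assumptions for illustration only, and the address should match the value returned by ceph dashboard get-alertmanager-api-host.
# Hypothetical example: silence the CephOSDNearFull alert for two hours.
ALERTMANAGER=http://10.0.0.101:9093   # assumption: host/port from get-alertmanager-api-host
curl -s -X POST "${ALERTMANAGER}/api/v2/silences" \
  -H 'Content-Type: application/json' \
  -d '{
        "matchers": [{"name": "alertname", "value": "CephOSDNearFull", "isRegex": false}],
        "startsAt": "'"$(date -u +%Y-%m-%dT%H:%M:%SZ)"'",
        "endsAt": "'"$(date -u -d '+2 hours' +%Y-%m-%dT%H:%M:%SZ)"'",
        "createdBy": "storage-admin",
        "comment": "Planned capacity expansion"
      }'
A silence created this way should appear on the Silences tab of the dashboard in the same way as one created through the UI.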
Appendix G. Kafka Streams configuration parameters | Appendix G. Kafka Streams configuration parameters application.id Type: string Importance: high An identifier for the stream processing application. Must be unique within the Kafka cluster. It is used as 1) the default client-id prefix, 2) the group-id for membership management, 3) the changelog topic prefix. bootstrap.servers Type: list Importance: high A list of host/port pairs to use for establishing the initial connection to the Kafka cluster. The client will make use of all servers irrespective of which servers are specified here for bootstrapping-this list only impacts the initial hosts used to discover the full set of servers. This list should be in the form host1:port1,host2:port2,... . Since these servers are just used for the initial connection to discover the full cluster membership (which may change dynamically), this list need not contain the full set of servers (you may want more than one, though, in case a server is down). replication.factor Type: int Default: 1 Importance: high The replication factor for change log topics and repartition topics created by the stream processing application. If your broker cluster is on version 2.4 or newer, you can set -1 to use the broker default replication factor. state.dir Type: string Default: /tmp/kafka-streams Importance: high Directory location for state store. This path must be unique for each streams instance sharing the same underlying filesystem. acceptable.recovery.lag Type: long Default: 10000 Valid Values: [0,... ] Importance: medium The maximum acceptable lag (number of offsets to catch up) for a client to be considered caught-up for an active task.Should correspond to a recovery time of well under a minute for a given workload. Must be at least 0. cache.max.bytes.buffering Type: long Default: 10485760 Valid Values: [0,... ] Importance: medium Maximum number of memory bytes to be used for buffering across all threads. client.id Type: string Default: "" Importance: medium An ID prefix string used for the client IDs of internal consumer, producer and restore-consumer, with pattern '<client.id>-StreamThread-<threadSequenceNumber>-<consumer|producer|restore-consumer>'. default.deserialization.exception.handler Type: class Default: org.apache.kafka.streams.errors.LogAndFailExceptionHandler Importance: medium Exception handling class that implements the org.apache.kafka.streams.errors.DeserializationExceptionHandler interface. default.key.serde Type: class Default: org.apache.kafka.common.serialization.SerdesUSDByteArraySerde Importance: medium Default serializer / deserializer class for key that implements the org.apache.kafka.common.serialization.Serde interface. Note when windowed serde class is used, one needs to set the inner serde class that implements the org.apache.kafka.common.serialization.Serde interface via 'default.windowed.key.serde.inner' or 'default.windowed.value.serde.inner' as well. default.production.exception.handler Type: class Default: org.apache.kafka.streams.errors.DefaultProductionExceptionHandler Importance: medium Exception handling class that implements the org.apache.kafka.streams.errors.ProductionExceptionHandler interface. default.timestamp.extractor Type: class Default: org.apache.kafka.streams.processor.FailOnInvalidTimestamp Importance: medium Default timestamp extractor class that implements the org.apache.kafka.streams.processor.TimestampExtractor interface. 
default.value.serde Type: class Default: org.apache.kafka.common.serialization.SerdesUSDByteArraySerde Importance: medium Default serializer / deserializer class for value that implements the org.apache.kafka.common.serialization.Serde interface. Note when windowed serde class is used, one needs to set the inner serde class that implements the org.apache.kafka.common.serialization.Serde interface via 'default.windowed.key.serde.inner' or 'default.windowed.value.serde.inner' as well. default.windowed.key.serde.inner Type: class Default: null Importance: medium Default serializer / deserializer for the inner class of a windowed key. Must implement the org.apache.kafka.common.serialization.Serde interface. default.windowed.value.serde.inner Type: class Default: null Importance: medium Default serializer / deserializer for the inner class of a windowed value. Must implement the org.apache.kafka.common.serialization.Serde interface. max.task.idle.ms Type: long Default: 0 Importance: medium Maximum amount of time in milliseconds a stream task will stay idle when not all of its partition buffers contain records, to avoid potential out-of-order record processing across multiple input streams. max.warmup.replicas Type: int Default: 2 Valid Values: [1,... ] Importance: medium The maximum number of warmup replicas (extra standbys beyond the configured num.standbys) that can be assigned at once for the purpose of keeping the task available on one instance while it is warming up on another instance it has been reassigned to. Used to throttle how much extra broker traffic and cluster state can be used for high availability. Must be at least 1. num.standby.replicas Type: int Default: 0 Importance: medium The number of standby replicas for each task. num.stream.threads Type: int Default: 1 Importance: medium The number of threads to execute stream processing. processing.guarantee Type: string Default: at_least_once Valid Values: [at_least_once, exactly_once, exactly_once_beta] Importance: medium The processing guarantee that should be used. Possible values are at_least_once (default), exactly_once (requires brokers version 0.11.0 or higher), and exactly_once_beta (requires brokers version 2.5 or higher). Note that exactly-once processing requires a cluster of at least three brokers by default what is the recommended setting for production; for development you can change this, by adjusting broker setting transaction.state.log.replication.factor and transaction.state.log.min.isr . security.protocol Type: string Default: PLAINTEXT Importance: medium Protocol used to communicate with brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL. task.timeout.ms Type: long Default: 300000 (5 minutes) Valid Values: [0,... ] Importance: medium The maximum amount of time in milliseconds a task might stall due to internal errors and retries until an error is raised. For a timeout of 0ms, a task would raise an error for the first internal error. For any timeout larger than 0ms, a task will retry at least once before an error is raised. topology.optimization Type: string Default: none Valid Values: [none, all] Importance: medium A configuration telling Kafka Streams if it should optimize the topology, disabled by default. application.server Type: string Default: "" Importance: low A host:port pair pointing to a user-defined endpoint that can be used for state store discovery and interactive queries on this KafkaStreams instance. 
buffered.records.per.partition Type: int Default: 1000 Importance: low Maximum number of records to buffer per partition. built.in.metrics.version Type: string Default: latest Valid Values: [0.10.0-2.4, latest] Importance: low Version of the built-in metrics to use. commit.interval.ms Type: long Default: 30000 (30 seconds) Valid Values: [0,... ] Importance: low The frequency in milliseconds with which to save the position of the processor. (Note, if processing.guarantee is set to exactly_once , the default value is 100 , otherwise the default value is 30000 . connections.max.idle.ms Type: long Default: 540000 (9 minutes) Importance: low Close idle connections after the number of milliseconds specified by this config. metadata.max.age.ms Type: long Default: 300000 (5 minutes) Valid Values: [0,... ] Importance: low The period of time in milliseconds after which we force a refresh of metadata even if we haven't seen any partition leadership changes to proactively discover any new brokers or partitions. metric.reporters Type: list Default: "" Importance: low A list of classes to use as metrics reporters. Implementing the org.apache.kafka.common.metrics.MetricsReporter interface allows plugging in classes that will be notified of new metric creation. The JmxReporter is always included to register JMX statistics. metrics.num.samples Type: int Default: 2 Valid Values: [1,... ] Importance: low The number of samples maintained to compute metrics. metrics.recording.level Type: string Default: INFO Valid Values: [INFO, DEBUG, TRACE] Importance: low The highest recording level for metrics. metrics.sample.window.ms Type: long Default: 30000 (30 seconds) Valid Values: [0,... ] Importance: low The window of time a metrics sample is computed over. partition.grouper Type: class Default: org.apache.kafka.streams.processor.DefaultPartitionGrouper Importance: low Partition grouper class that implements the org.apache.kafka.streams.processor.PartitionGrouper interface. WARNING: This config is deprecated and will be removed in 3.0.0 release. poll.ms Type: long Default: 100 Importance: low The amount of time in milliseconds to block waiting for input. probing.rebalance.interval.ms Type: long Default: 600000 (10 minutes) Valid Values: [60000,... ] Importance: low The maximum time in milliseconds to wait before triggering a rebalance to probe for warmup replicas that have finished warming up and are ready to become active. Probing rebalances will continue to be triggered until the assignment is balanced. Must be at least 1 minute. receive.buffer.bytes Type: int Default: 32768 (32 kibibytes) Valid Values: [-1,... ] Importance: low The size of the TCP receive buffer (SO_RCVBUF) to use when reading data. If the value is -1, the OS default will be used. reconnect.backoff.max.ms Type: long Default: 1000 (1 second) Valid Values: [0,... ] Importance: low The maximum amount of time in milliseconds to wait when reconnecting to a broker that has repeatedly failed to connect. If provided, the backoff per host will increase exponentially for each consecutive connection failure, up to this maximum. After calculating the backoff increase, 20% random jitter is added to avoid connection storms. reconnect.backoff.ms Type: long Default: 50 Valid Values: [0,... ] Importance: low The base amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a tight loop. This backoff applies to all connection attempts by the client to a broker. 
request.timeout.ms Type: int Default: 40000 (40 seconds) Valid Values: [0,... ] Importance: low The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses the client will resend the request if necessary or fail the request if retries are exhausted. retries Type: int Default: 0 Valid Values: [0,... ,2147483647] Importance: low Setting a value greater than zero will cause the client to resend any request that fails with a potentially transient error. It is recommended to set the value to either zero or MAX_VALUE and use corresponding timeout parameters to control how long a client should retry a request. retry.backoff.ms Type: long Default: 100 Valid Values: [0,... ] Importance: low The amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending requests in a tight loop under some failure scenarios. rocksdb.config.setter Type: class Default: null Importance: low A Rocks DB config setter class or class name that implements the org.apache.kafka.streams.state.RocksDBConfigSetter interface. send.buffer.bytes Type: int Default: 131072 (128 kibibytes) Valid Values: [-1,... ] Importance: low The size of the TCP send buffer (SO_SNDBUF) to use when sending data. If the value is -1, the OS default will be used. state.cleanup.delay.ms Type: long Default: 600000 (10 minutes) Importance: low The amount of time in milliseconds to wait before deleting state when a partition has migrated. Only state directories that have not been modified for at least state.cleanup.delay.ms will be removed. upgrade.from Type: string Default: null Valid Values: [null, 0.10.0, 0.10.1, 0.10.2, 0.11.0, 1.0, 1.1, 2.0, 2.1, 2.2, 2.3] Importance: low Allows upgrading in a backward compatible way. This is needed when upgrading from [0.10.0, 1.1] to 2.0+, or when upgrading from [2.0, 2.3] to 2.4+. When upgrading from 2.4 to a newer version it is not required to specify this config. Default is null . Accepted values are "0.10.0", "0.10.1", "0.10.2", "0.11.0", "1.0", "1.1", "2.0", "2.1", "2.2", "2.3" (for upgrading from the corresponding old version). window.size.ms Type: long Default: null Importance: low Sets window size for the deserializer in order to calculate window end times. windowstore.changelog.additional.retention.ms Type: long Default: 86400000 (1 day) Importance: low Added to a windows maintainMs to ensure data is not deleted from the log prematurely. Allows for clock drift. Default is 1 day. | null | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/using_amq_streams_on_rhel/kafka-streams-configuration-parameters-str |
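To show how a handful of these parameters fit together, the following is a small illustrative configuration; the values are assumptions for a typical production-style deployment, not defaults from this appendix, and they would normally be loaded into the java.util.Properties object that is passed to the KafkaStreams constructor.
# Hypothetical Kafka Streams configuration, shown as properties-style key=value pairs.
# application.id is unique per application and is also used as the group id.
application.id=my-stream-processing-app
# Initial brokers only; the full cluster membership is discovered from them.
bootstrap.servers=broker1:9092,broker2:9092
# Replication factor for changelog and repartition topics.
replication.factor=3
# Requires brokers 0.11.0 or higher and, by default, at least three brokers.
processing.guarantee=exactly_once
num.stream.threads=2
# Must be unique for each instance sharing the same underlying filesystem.
state.dir=/var/lib/kafka-streams
Each key corresponds to an entry described above; any parameter not set explicitly keeps the default listed in this appendix.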
Chapter 36. Setting the priority for a process with the chrt utility | Chapter 36. Setting the priority for a process with the chrt utility You can set the priority for a process using the chrt utility. Prerequisites You have administrator privileges. 36.1. Setting the process priority using the chrt utility The chrt utility checks and adjusts scheduler policies and priorities. It can start new processes with the desired properties, or change the properties of a running process. Procedure To set the scheduling policy of a process, run the chrt command with the appropriate command options and parameters. In the following example, the process ID affected by the command is 1000 , and the priority ( -p ) is 50 . To start an application with a specified scheduling policy and priority, add the name of the application, and the path to it, if necessary, along with the attributes. For more information about the chrt utility options, see The chrt utility options . 36.2. The chrt utility options The chrt utility options include command options and parameters specifying the process and priority for the command. Policy options -f Sets the scheduler policy to SCHED_FIFO . -o Sets the scheduler policy to SCHED_OTHER . -r Sets the scheduler policy to SCHED_RR (round robin). -d Sets the scheduler policy to SCHED_DEADLINE . -p n Sets the priority of the process to n . When setting a process to SCHED_DEADLINE, you must specify the runtime , deadline , and period parameters. For example: where --sched-runtime 5000000 is the run time in nanoseconds. --sched-deadline 10000000 is the relative deadline in nanoseconds. --sched-period 16666666 is the period in nanoseconds. 0 is a placeholder for unused priority required by the chrt command. 36.3. Additional resources chrt(1) man page on your system | [
"chrt -f -p 50 1000",
"chrt -r -p 50 /bin/my-app",
"chrt -d --sched-runtime 5000000 --sched-deadline 10000000 --sched-period 16666666 0 video_processing_tool"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_real_time/9/html/optimizing_rhel_9_for_real_time_for_low_latency_operation/assembly_setting-the-priority-for-a-process-with-the-chrt-utility_optimizing-RHEL9-for-real-time-for-low-latency-operation |
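A quick way to confirm that a change like the one above took effect is to query the process again with chrt; the PID and values here follow the earlier example, and the exact output wording may vary slightly between util-linux versions.
# Verify the policy and priority that were set for PID 1000
chrt -p 1000
# Output similar to:
#   pid 1000's current scheduling policy: SCHED_FIFO
#   pid 1000's current scheduling priority: 50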
Network Observability | Network Observability OpenShift Container Platform 4.17 Configuring and using the Network Observability Operator in OpenShift Container Platform Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/network_observability/index |
3.8. Considerations for NetworkManager | 3.8. Considerations for NetworkManager The use of NetworkManager is not supported on cluster nodes. If you have installed NetworkManager on your cluster nodes, you should either remove it or disable it. Note The cman service will not start if NetworkManager is either running or has been configured to run with the chkconfig command. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s1-networkmanager-ca |
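On Red Hat Enterprise Linux 6, disabling NetworkManager so that cman can start typically involves stopping the service and removing it from all runlevels; the commands below are a minimal sketch and assume root access on the cluster node.
service NetworkManager stop          # stop the running service
chkconfig NetworkManager off         # prevent it from starting at boot
chkconfig --list NetworkManager      # verify that every runlevel reports "off"
# Alternatively, remove the package entirely:
# yum remove NetworkManager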
Chapter 8. Important links | Chapter 8. Important links Red Hat AMQ 7 Supported Configurations Red Hat AMQ 7 Component Details Revised on 2021-12-14 20:09:39 UTC | null | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/release_notes_for_amq_streams_1.8_on_rhel/important-links-str |
Chapter 20. Workload partitioning | Chapter 20. Workload partitioning In resource-constrained environments, you can use workload partitioning to isolate OpenShift Container Platform services, cluster management workloads, and infrastructure pods to run on a reserved set of CPUs. The minimum number of reserved CPUs required for the cluster management is four CPU Hyper-Threads (HTs). With workload partitioning, you annotate the set of cluster management pods and a set of typical add-on Operators for inclusion in the cluster management workload partition. These pods operate normally within the minimum size CPU configuration. Additional Operators or workloads outside of the set of minimum cluster management pods require additional CPUs to be added to the workload partition. Workload partitioning isolates user workloads from platform workloads using standard Kubernetes scheduling capabilities. The following changes are required for workload partitioning: In the install-config.yaml file, add the additional field: cpuPartitioningMode . apiVersion: v1 baseDomain: devcluster.openshift.com cpuPartitioningMode: AllNodes 1 compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: {} replicas: 3 controlPlane: architecture: amd64 hyperthreading: Enabled name: master platform: {} replicas: 3 1 Sets up a cluster for CPU partitioning at install time. The default value is None . Note Workload partitioning can only be enabled during cluster installation. You cannot disable workload partitioning postinstallation. In the performance profile, specify the isolated and reserved CPUs. Recommended performance profile configuration apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: # if you change this name make sure the 'include' line in TunedPerformancePatch.yaml # matches this name: include=openshift-node-performance-${PerformanceProfile.metadata.name} # Also in file 'validatorCRs/informDuValidator.yaml': # name: 50-performance-${PerformanceProfile.metadata.name} name: openshift-node-performance-profile annotations: ran.openshift.io/reference-configuration: "ran-du.redhat.com" spec: additionalKernelArgs: - "rcupdate.rcu_normal_after_boot=0" - "efi=runtime" - "vfio_pci.enable_sriov=1" - "vfio_pci.disable_idle_d3=1" - "module_blacklist=irdma" cpu: isolated: $isolated reserved: $reserved hugepages: defaultHugepagesSize: $defaultHugepagesSize pages: - size: $size count: $count node: $node machineConfigPoolSelector: pools.operator.machineconfiguration.openshift.io/$mcp: "" nodeSelector: node-role.kubernetes.io/$mcp: "" numa: topologyPolicy: "restricted" # To use the standard (non-realtime) kernel, set enabled to false realTimeKernel: enabled: true workloadHints: # WorkloadHints defines the set of upper level flags for different type of workloads. # See https://github.com/openshift/cluster-node-tuning-operator/blob/master/docs/performanceprofile/performance_profile.md#workloadhints # for detailed descriptions of each item. # The configuration below is set for a low latency, performance mode. realTime: true highPowerConsumption: false perPodPowerManagement: false Table 20.1.
PerformanceProfile CR options for single-node OpenShift clusters PerformanceProfile CR field Description metadata.name Ensure that name matches the following fields set in related GitOps ZTP custom resources (CRs): include=openshift-node-performance-${PerformanceProfile.metadata.name} in TunedPerformancePatch.yaml name: 50-performance-${PerformanceProfile.metadata.name} in validatorCRs/informDuValidator.yaml spec.additionalKernelArgs "efi=runtime" Configures UEFI secure boot for the cluster host. spec.cpu.isolated Set the isolated CPUs. Ensure all of the Hyper-Threading pairs match. Important The reserved and isolated CPU pools must not overlap and together must span all available cores. CPU cores that are not accounted for cause an undefined behaviour in the system. spec.cpu.reserved Set the reserved CPUs. When workload partitioning is enabled, system processes, kernel threads, and system container threads are restricted to these CPUs. All CPUs that are not isolated should be reserved. spec.hugepages.pages Set the number of huge pages ( count ) Set the huge pages size ( size ). Set node to the NUMA node where the hugepages are allocated ( node ) spec.realTimeKernel Set enabled to true to use the realtime kernel. spec.workloadHints Use workloadHints to define the set of top level flags for different type of workloads. The example configuration configures the cluster for low latency and high performance. Workload partitioning introduces an extended management.workload.openshift.io/cores resource type for platform pods. kubelet advertises the resources and CPU requests by pods allocated to the pool within the corresponding resource. When workload partitioning is enabled, the management.workload.openshift.io/cores resource allows the scheduler to correctly assign pods based on the cpushares capacity of the host, not just the default cpuset . Additional resources For the recommended workload partitioning configuration for single-node OpenShift clusters, see Workload partitioning .
"apiVersion: v1 baseDomain: devcluster.openshift.com cpuPartitioningMode: AllNodes 1 compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: {} replicas: 3 controlPlane: architecture: amd64 hyperthreading: Enabled name: master platform: {} replicas: 3",
"apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: # if you change this name make sure the 'include' line in TunedPerformancePatch.yaml # matches this name: include=openshift-node-performance-USD{PerformanceProfile.metadata.name} # Also in file 'validatorCRs/informDuValidator.yaml': # name: 50-performance-USD{PerformanceProfile.metadata.name} name: openshift-node-performance-profile annotations: ran.openshift.io/reference-configuration: \"ran-du.redhat.com\" spec: additionalKernelArgs: - \"rcupdate.rcu_normal_after_boot=0\" - \"efi=runtime\" - \"vfio_pci.enable_sriov=1\" - \"vfio_pci.disable_idle_d3=1\" - \"module_blacklist=irdma\" cpu: isolated: USDisolated reserved: USDreserved hugepages: defaultHugepagesSize: USDdefaultHugepagesSize pages: - size: USDsize count: USDcount node: USDnode machineConfigPoolSelector: pools.operator.machineconfiguration.openshift.io/USDmcp: \"\" nodeSelector: node-role.kubernetes.io/USDmcp: \"\" numa: topologyPolicy: \"restricted\" # To use the standard (non-realtime) kernel, set enabled to false realTimeKernel: enabled: true workloadHints: # WorkloadHints defines the set of upper level flags for different type of workloads. # See https://github.com/openshift/cluster-node-tuning-operator/blob/master/docs/performanceprofile/performance_profile.md#workloadhints # for detailed descriptions of each item. # The configuration below is set for a low latency, performance mode. realTime: true highPowerConsumption: false perPodPowerManagement: false"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/scalability_and_performance/enabling-workload-partitioning |
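As a concrete illustration of the isolated and reserved split described in Table 20.1, the placeholder variables in the performance profile might be filled in as follows on a hypothetical host with 16 CPU hyper-threads; the exact ranges depend on your hardware topology, must not overlap, and together must cover every available core.
# Illustrative values only (hypothetical 16-vCPU single-node cluster):
cpu:
  isolated: "4-15"    # application workloads
  reserved: "0-3"     # the four hyper-threads reserved for cluster management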
23.16. Timer Element Attributes | 23.16. Timer Element Attributes The name element contains the name of the time source to be used. It can have any of the following values: Table 23.12. Name attribute values Value Description pit Programmable Interval Timer - a timer with periodic interrupts. When using this attribute, the tickpolicy delay becomes the default setting. rtc Real Time Clock - a continuously running timer with periodic interrupts. This attribute supports the tickpolicy catchup sub-element. kvmclock KVM clock - the recommended clock source for KVM guest virtual machines. KVM pvclock, or kvm-clock allows guest virtual machines to read the host physical machine's wall clock time. The track attribute specifies what is tracked by the timer, and is only valid for a name value of rtc . Table 23.13. track attribute values Value Description boot Corresponds to old host physical machine option, this is an unsupported tracking option. guest RTC always tracks the guest virtual machine time. wall RTC always tracks the host time. The tickpolicy attribute and the values dictate the policy that is used to pass ticks on to the guest virtual machine. Table 23.14. tickpolicy attribute values Value Description delay Continue to deliver at normal rate (ticks are delayed). catchup Deliver at a higher rate to catch up. merge Ticks merged into one single tick. discard All missed ticks are discarded. The present attribute is used to override the default set of timers visible to the guest virtual machine. The present attribute can take the following values: Table 23.15. present attribute values Value Description yes Force this timer to be visible to the guest virtual machine. no Force this timer to not be visible to the guest virtual machine. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-manipulating_the_domain_xml-timer_element_attributes |
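To show how the name, track, tickpolicy, and present attributes fit together, the following is an illustrative <clock> stanza for a KVM guest's domain XML; the particular timers and policies chosen here are an example, not a recommendation for every guest.
<clock offset='utc'>
  <!-- RTC tracks guest time and catches up on missed ticks -->
  <timer name='rtc' track='guest' tickpolicy='catchup'/>
  <!-- PIT ticks are delivered at the normal rate (delayed) -->
  <timer name='pit' tickpolicy='delay'/>
  <!-- The KVM paravirtual clock remains visible to the guest -->
  <timer name='kvmclock' present='yes'/>
</clock>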
Data Grid Security Guide | Data Grid Security Guide Red Hat Data Grid 8.4 Enable and configure Data Grid security Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/data_grid_security_guide/index |
Chapter 6. Connecting clients to the router network | Chapter 6. Connecting clients to the router network After creating a router network, you can connect clients (messaging applications) to it so that they can begin sending and receiving messages. By default, the Red Hat Integration - AMQ Interconnect Operator creates a Service for the router deployment and configures the following ports for client access: 5672 for plain AMQP traffic without authentication 5671 for AMQP traffic secured with TLS authentication To connect clients to the router network, you can do the following: If any clients are outside of the OpenShift cluster, expose the ports so that they can connect to the router network. Configure your clients to connect to the router network. 6.1. Exposing ports for clients outside of OpenShift Container Platform You expose ports to enable clients outside of the OpenShift Container Platform cluster to connect to the router network. Procedure Start editing the Interconnect Custom Resource YAML file that describes the router deployment for which you want to expose ports. In the spec.listeners section, expose each port that you want clients outside of the cluster to be able to access. In this example, port 5671 is exposed. This enables clients outside of the cluster to authenticate with and connect to the router network. Sample router-mesh.yaml file apiVersion: interconnectedcloud.github.io/v1alpha1 kind: Interconnect metadata: name: router-mesh spec: ... listeners: - port: 5672 - authenticatePeer: true expose: true http: true port: 8080 - port: 5671 sslProfile: default expose: true ... The Red Hat Integration - AMQ Interconnect Operator creates a Route, which clients from outside the cluster can use to connect to the router network. 6.2. Authentication for client connections When you create a router deployment, the Red Hat Integration - AMQ Interconnect Operator uses the Red Hat Integration - AMQ Certificate Manager Operator to create default SSL/TLS certificates for client authentication, and configures port 5671 for SSL encryption. 6.3. Configuring clients to connect to the router network You can connect messaging clients running in the same OpenShift cluster as the router network, a different cluster, or outside of OpenShift altogether so that they can exchange messages. Prerequisites If the client is outside of the OpenShift Container Platform cluster, a connecting port must be exposed. For more information, see Section 6.1, "Exposing ports for clients outside of OpenShift Container Platform" . Procedure To connect a client to the router network, use the following connection URL format: <scheme> Use one of the following: amqp - unencrypted TCP from within the same OpenShift cluster amqps - for secure connections using SSL/TLS authentication amqpws - AMQP over WebSockets for unencrypted connections from outside the OpenShift cluster <username> If you deployed the router mesh with user name/password authentication, provide the client's user name. <host> If the client is in the same OpenShift cluster as the router network, use the OpenShift Service host name. Otherwise, use the host name of the Route. <port> If you are connecting to a Route, you must specify the port. To connect on an unsecured connection, use port 80 . Otherwise, to connect on a secured connection, use port 443 . Note To connect on an unsecured connection (port 80 ), the client must use AMQP over WebSockets ( amqpws ). The following table shows some example connection URLs. 
URL Description amqp://admin@router-mesh:5672 The client and router network are both in the same OpenShift cluster, so the Service host name is used for the connection URL. In this case, user name/password authentication is implemented, which requires the user name ( admin ) to be provided. amqps://router-mesh-myproject.mycluster.com:443 The client is outside of OpenShift, so the Route host name is used for the connection URL. In this case, SSL/TLS authentication is implemented, which requires the amqps scheme and port 443 . amqpws://router-mesh-myproject.mycluster.com:80 The client is outside of OpenShift, so the Route host name is used for the connection URL. In this case, no authentication is implemented, which means the client must use the amqpws scheme and port 80 . | [
"oc edit -f router-mesh.yaml",
"apiVersion: interconnectedcloud.github.io/v1alpha1 kind: Interconnect metadata: name: router-mesh spec: listeners: - port: 5672 - authenticatePeer: true expose: true http: true port: 8080 - port: 5671 sslProfile: default expose: true",
"< scheme >://[< username >@]< host >[:< port >]"
] | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q1/html/deploying_amq_interconnect_on_openshift/connecting-clients-router-network-router-ocp |
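When the client is outside of the cluster, the host portion of the connection URL comes from the Route that the Operator creates for the exposed listener. A rough sketch of finding that host with the oc client is shown below; the host name in the comments is an assumption that mirrors the examples in the table above.
# List the Routes the Operator created and note the host for the exposed listener:
oc get routes
# Suppose the host shown is router-mesh-myproject.mycluster.com; a client outside
# the cluster then connects with SSL/TLS on port 443:
#   amqps://router-mesh-myproject.mycluster.com:443
# or unencrypted over WebSockets on port 80:
#   amqpws://router-mesh-myproject.mycluster.com:80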
11.10. Additional Resources | 11.10. Additional Resources Installed Documentation udev(7) man page - Describes the Linux dynamic device management daemon, udevd . systemd(1) man page - Describes systemd system and service manager. biosdevname(1) man page - Describes the utility for obtaining the BIOS-given name of a device. Online Documentation The IBM Knowledge Center Publication SC34-2710-00 Device Drivers, Features, and Commands on Red Hat Enterprise Linux 7 includes information on " Predictable network device names " for IBM System z devices and attachments. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/networking_guide/sec-consistent_network_device_naming-additional_resources |
8.4. Installing in the Graphical User Interface | 8.4. Installing in the Graphical User Interface The graphical installation interface is the preferred method of manually installing Red Hat Enterprise Linux. It allows you full control over all available settings, including custom partitioning and advanced storage configuration, and it is also localized to many languages other than English, allowing you to perform the entire installation in a different language. The graphical mode is used by default when you boot the system from local media (a CD, DVD or a USB flash drive). Figure 8.2. The Installation Summary Screen The sections below discuss each screen available in the installation process. Note that due to the installer's parallel nature, most of the screens do not have to be completed in the order in which they are described here. Each screen in the graphical interface contains a Help button. This button opens the Yelp help browser displaying the section of the Red Hat Enterprise Linux Installation Guide relevant to the current screen. You can also control the graphical installer with your keyboard. Following table shows you the shortcuts you can use. Table 8.2. Graphical installer keyboard shortcuts Shortcut keys Usage Tab and Shift + Tab Cycle through active control elements (buttons, check boxes, and so on.) on the current screen Up and Down Scroll through lists Left and Right Scroll through horizontal toolbars and table entries Space and Enter Select or remove a highlighted item from selection and expand and collapse drop-down menus Additionally, elements in each screen can be toggled using their respective shortcuts. These shortcuts are highlighted (underlined) when you hold down the Alt key; to toggle that element, press Alt + X , where X is the highlighted letter. Your current keyboard layout is displayed in the top right hand corner. Only one layout is configured by default; if you configure more than layout in the Keyboard Layout screen ( Section 8.9, "Keyboard Configuration" ), you can switch between them by clicking the layout indicator. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/installation_guide/sect-installation-graphical-mode-x86 |
Chapter 5. Insights client data obfuscation | Chapter 5. Insights client data obfuscation Red Hat Insights has optional controls for excluding the IP address or hostname from the data file transmitted to Red Hat and to obfuscate the values within the user interface. You can also set a custom display name for the identification of obfuscated hosts. 5.1. Obfuscation overview The Insights client obfuscation feature uses a Python data cleaning process to replace the hostname and IP address with preset values when it processes the Insights archive. The processed archive file containing the obfuscated values is then sent to Red Hat Insights for Red Hat Enterprise Linux. To enable obfuscation, configure the applicable options in the /etc/insights-client/insights-client.conf file. You can choose to obfuscate the system IP address, or you can choose to obfuscate both the IP address and hostname. You cannot obfuscate the hostname only. Obfuscation is disabled by default. Note The Python data cleaning process automatically generates the masked values. You cannot choose the values for obfuscation. The Red Hat Insights for Red Hat Enterprise Linux compliance service uses OpenSCAP tools to generate compliance reports based on information from the host system. The collaboration with OpenSCAP prevents the compliance service's ability to completely obfuscate or redact hostname and IP address data. Also, host information is sent to Insights for Red Hat Enterprise Linux when a compliance data collection job launches on the host system. Red Hat Insights for Red Hat Enterprise Linux is working to improve obfuscation options for host information. For information about how Red Hat Insights for Red Hat Enterprise Linux handles data collection, see Red Hat Insights Data & Application Security . Important Double obfuscation is required if you use Red Hat Satellite to manage clients and register them on console.redhat.com . This means you must enable obfuscation in both the insights-client.conf and the Satellite web UI. For more information about enabling obfuscation in Satellite, see the Red Hat Cloud settings chapter of the Administering Red Hat Satellite guide. 5.2. Obfuscating the IPv4 address You can mask the IPv4 host address in the archive file before it is sent to Red Hat Insights for Red Hat Enterprise Linux by enabling obfuscation. When you choose IP address obfuscation, your host address in the archive file is changed to the value provided in the Python data cleaning file. You cannot configure the value provided for obfuscation. You also cannot obfuscate or select the portion of the host IP address to obfuscate. Important Red Hat Insights supports IP address obfuscation for IPv4 addresses only. Prerequisites If you are using Red Hat Satellite to manage clients and register them on console.redhat.com , complete the following step: In the Satellite web UI, go to the Red Hat Cloud settings and enable the Obfuscate host IPv4 addresses option. Procedure Open the /etc/insights-client/insights-client.conf file with an editor. Locate the following section: Remove the preceding hash ( # ) character, and change False to True , as follows: Save and close the /etc/insights-client/insights-client.conf file. Result When obfuscation is successfully enabled, the original IP address is masked in the console UI, logs, and in any archive data files that Red Hat collects, as shown in the following example. 
Important After you enable obfuscation, you will continue to see the original IP address in the command-line output of some insights-client commands. Example The original host system IP address: The obfuscated host IP address The following screenshot provides an example of an obfuscated IP address in the Red Hat Hybrid Cloud Console UI: Note When you enable obfuscation on multiple systems, the same obfuscated IP address gets generated. Therefore, in the example scenario provided, when you search or filter by IP address in the Insights UI on the Hybrid Cloud Console you might see several instances of 10.230.230.1 . This is because the Python data cleaning process that the Insights obfuscation feature uses, can generate the same obfuscated IP address in the archive file. 5.3. Obfuscating the hostname When you obfuscate the hostname of a system in Insights, the value of the hostname configured in /etc/hostname is masked in the console GUI and in the archive file before it is sent to Red Hat. To obfuscate the hostname of a system, you must also enable obfuscation on the IP address. You cannot obfuscate only the hostname. When obfuscation is enabled in Insights, the hostname value in /etc/hostname changes to a 12-character UUID that is automatically generated by the Python data cleaning process. Tip Assign a display name to your system so that you can more easily find and manage your obfuscated hosts. The display name does not get obfuscated and displays in the Insights console UI. Only the value of /etc/hostname gets obfuscated. Prerequisites You have obfuscated the IP address. For more information, see Obfuscating the IPv4 address . If you are using Red Hat Satellite to manage clients and register them on console.redhat.com , complete the following step before you enable hostname obfuscation: In the Satellite web UI, go to the Red Hat Cloud settings and enable the Obfuscate host names option. Procedure Open the /etc/insights-client/insights-client.conf file with an editor. Locate the line that has obfuscate_hostname . Remove the # and change False to True . (Optional) To help you find and manage your obfuscated hosts in the Insights console UI, set a display name for your system in the insights-client.conf file, as follows: Note You can also set a display name on the console by using the following command: Save and close the /etc/insights-client/insights-client.conf file. Result When obfuscation is successfully enabled, the hostname gets masked in the Insights console UI, logs, and in any archive data files that Red Hat collects. Note If you configure hostname obfuscation on more than one system, you might see multiple systems with the same hostname in the Red Hat Insights for Red Hat Enterprise Linux GUI as a result of obfuscation. Setting a display name can help you to more easily identify your obfuscated hosts. After you enable obfuscation, there are some instances where the original hostname displays in the command-line output of some insights-client commands. Example The original hostname of the system in /etc/hostname : The obfuscated /etc/hostname as it displays in Red Hat Insights for Red Hat Enterprise Linux: The following screenshot of the Red Hat Hybrid Cloud Console UI shows an example of a system whose hostname and IP address are obfuscated: Additional resources Obfuscating the IPv4 address | [
"Obfuscate IP addresses #obfuscate=False",
"obfuscate=True",
"192.168.0.24",
"10.230.230.1",
"#obfuscate_hostname=False",
"obfuscate_hostname=True",
"display_name=example-display-name",
"insights-client --display-name ITC-4",
"RTP.data.center.01",
"90f4a9365ce0.example.com"
] | https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/client_configuration_guide_for_red_hat_insights/assembly-client-data-obfuscation |
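Taken together, the obfuscation settings described above live in the same configuration file. The following excerpt is a minimal sketch of how /etc/insights-client/insights-client.conf might look with both IP address and hostname obfuscation enabled; the display name value is an assumption used purely for illustration.
[insights-client]
# Mask the IPv4 address in the uploaded archive
obfuscate=True
# Also mask the hostname (only valid together with obfuscate=True)
obfuscate_hostname=True
# Optional and never obfuscated: helps identify the host in the console UI
display_name=example-display-name
The settings take effect the next time the client collects and uploads data, for example when insights-client runs manually or through its scheduled job.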
Chapter 9. Configuring alert notifications | Chapter 9. Configuring alert notifications In OpenShift Container Platform, an alert is fired when the conditions defined in an alerting rule are true. An alert provides a notification that a set of circumstances are apparent within a cluster. Firing alerts can be viewed in the Alerting UI in the OpenShift Container Platform web console by default. After an installation, you can configure OpenShift Container Platform to send alert notifications to external systems. 9.1. Sending notifications to external systems In OpenShift Container Platform 4.9, firing alerts can be viewed in the Alerting UI. Alerts are not configured by default to be sent to any notification systems. You can configure OpenShift Container Platform to send alerts to the following receiver types: PagerDuty Webhook Email Slack Routing alerts to receivers enables you to send timely notifications to the appropriate teams when failures occur. For example, critical alerts require immediate attention and are typically paged to an individual or a critical response team. Alerts that provide non-critical warning notifications might instead be routed to a ticketing system for non-immediate review. Checking that alerting is operational by using the watchdog alert OpenShift Container Platform monitoring includes a watchdog alert that fires continuously. Alertmanager repeatedly sends watchdog alert notifications to configured notification providers. The provider is usually configured to notify an administrator when it stops receiving the watchdog alert. This mechanism helps you quickly identify any communication issues between Alertmanager and the notification provider. 9.1.1. Configuring alert receivers You can configure alert receivers to ensure that you learn about important issues with your cluster. Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure In the Administrator perspective, navigate to Administration Cluster Settings Configuration Alertmanager . Note Alternatively, you can navigate to the same page through the notification drawer. Select the bell icon at the top right of the OpenShift Container Platform web console and choose Configure in the AlertmanagerReceiverNotConfigured alert. Select Create Receiver in the Receivers section of the page. In the Create Receiver form, add a Receiver Name and choose a Receiver Type from the list. Edit the receiver configuration: For PagerDuty receivers: Choose an integration type and add a PagerDuty integration key. Add the URL of your PagerDuty installation. Select Show advanced configuration if you want to edit the client and incident details or the severity specification. For webhook receivers: Add the endpoint to send HTTP POST requests to. Select Show advanced configuration if you want to edit the default option to send resolved alerts to the receiver. For email receivers: Add the email address to send notifications to. Add SMTP configuration details, including the address to send notifications from, the smarthost and port number used for sending emails, the hostname of the SMTP server, and authentication details. Choose whether TLS is required. Select Show advanced configuration if you want to edit the default option not to send resolved alerts to the receiver or edit the body of email notifications configuration. For Slack receivers: Add the URL of the Slack webhook. Add the Slack channel or user name to send notifications to. 
Select Show advanced configuration if you want to edit the default option not to send resolved alerts to the receiver or edit the icon and username configuration. You can also choose whether to find and link channel names and usernames. By default, firing alerts with labels that match all of the selectors will be sent to the receiver. If you want label values for firing alerts to be matched exactly before they are sent to the receiver: Add routing label names and values in the Routing Labels section of the form. Select Regular Expression if you want to use a regular expression. Select Add Label to add further routing labels. Select Create to create the receiver. 9.2. Additional resources Monitoring overview Managing alerts | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/post-installation_configuration/configuring-alert-notifications |
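The console form ultimately updates the Alertmanager configuration stored in the alertmanager-main secret in the openshift-monitoring namespace. As a rough sketch of the equivalent raw configuration, a webhook receiver combined with an exact-match routing label could look like the following; the receiver name, label value, and endpoint URL are assumptions for illustration only.
route:
  receiver: Default
  routes:
  - receiver: team-frontend
    match:
      severity: critical
receivers:
- name: Default
- name: team-frontend
  webhook_configs:
  - url: https://example.com/alert-hook
    send_resolved: true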
Chapter 4. KafkaSpec schema reference | Chapter 4. KafkaSpec schema reference Used in: Kafka Property Property type Description kafka KafkaClusterSpec Configuration of the Kafka cluster. zookeeper ZookeeperClusterSpec Configuration of the ZooKeeper cluster. This section is required when running a ZooKeeper-based Apache Kafka cluster. entityOperator EntityOperatorSpec Configuration of the Entity Operator. clusterCa CertificateAuthority Configuration of the cluster certificate authority. clientsCa CertificateAuthority Configuration of the clients certificate authority. cruiseControl CruiseControlSpec Configuration for Cruise Control deployment. Deploys a Cruise Control instance when specified. jmxTrans JmxTransSpec The jmxTrans property has been deprecated. JMXTrans is deprecated and related resources removed in Streams for Apache Kafka 2.5. As of Streams for Apache Kafka 2.5, JMXTrans is not supported anymore and this option is ignored. kafkaExporter KafkaExporterSpec Configuration of the Kafka Exporter. Kafka Exporter can provide additional metrics, for example lag of consumer group at topic/partition. maintenanceTimeWindows string array A list of time windows for maintenance tasks (that is, certificates renewal). Each time window is defined by a cron expression. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-KafkaSpec-reference |
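For context, the following is a minimal sketch of a Kafka custom resource that exercises several of the properties listed above; the cluster name, replica counts, listener, and storage type are illustrative assumptions rather than recommended values.
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    replicas: 3
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
    storage:
      type: ephemeral
  zookeeper:
    replicas: 3
    storage:
      type: ephemeral
  entityOperator:
    topicOperator: {}
    userOperator: {}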
Chapter 19. Obtaining an IdM certificate for a service using certmonger | Chapter 19. Obtaining an IdM certificate for a service using certmonger 19.1. Certmonger overview When Identity Management (IdM) is installed with an integrated IdM Certificate Authority (CA), it uses the certmonger service to track and renew system and service certificates. When the certificate is reaching its expiration date, certmonger manages the renewal process by: Regenerating a certificate-signing request (CSR) using the options provided in the original request. Submitting the CSR to the IdM CA using the IdM API cert-request command. Receiving the certificate from the IdM CA. Executing a pre-save command if specified by the original request. Installing the new certificate in the location specified in the renewal request: either in an NSS database or in a file. Executing a post-save command if specified by the original request. For example, the post-save command can instruct certmonger to restart a relevant service, so that the service picks up the new certificate. Types of certificates certmonger tracks Certificates can be divided into system and service certificates. Unlike service certificates (for example, for HTTP , LDAP and PKINIT ), which have different keypairs and subject names on different servers, IdM system certificates and their keys are shared by all CA replicas. The IdM system certificates include: IdM CA certificate OCSP signing certificate IdM CA subsystem certificates IdM CA audit signing certificate IdM renewal agent (RA) certificate KRA transport and storage certificates The certmonger service tracks the IdM system and service certificates that were requested during the installation of IdM environment with an integrated CA. Certmonger also tracks certificates that have been requested manually by the system administrator for other services running on the IdM host. Certmonger does not track external CA certificates or user certificates. Certmonger components The certmonger service consists of two main components: The certmonger daemon , which is the engine tracking the list of certificates and launching renewal commands The getcert utility for the command line (CLI), which allows the system administrator to actively send commands to the certmonger daemon. More specifically, the system administrator can use the getcert utility to: Request a new certificate View the list of certificates that certmonger tracks Start or stop tracking a certificate Renew a certificate 19.2. Obtaining an IdM certificate for a service using certmonger To ensure that communication between browsers and the web service running on your Identity Management (IdM) client is secure and encrypted, use a TLS certificate. Obtain the TLS certificate for your web service from the IdM Certificate Authority (CA). Follow this procedure to use certmonger to obtain an IdM certificate for a service ( HTTP/my_company.idm.example.com @ IDM.EXAMPLE.COM ) running on an IdM client. Using certmonger to request the certificate automatically means that certmonger manages and renews the certificate when it is due for a renewal. For a visual representation of what happens when certmonger requests a service certificate, see Section 19.3, "Communication flow for certmonger requesting a service certificate" . Prerequisites The web server is enrolled as an IdM client. You have root access to the IdM client on which you are running the procedure. The service for which you are requesting a certificate does not have to pre-exist in IdM. 
Procedure On the my_company.idm.example.com IdM client on which the HTTP service is running, request a certificate for the service corresponding to the HTTP/[email protected] principal, and specify that The certificate is to be stored in the local /etc/pki/tls/certs/httpd.pem file The private key is to be stored in the local /etc/pki/tls/private/httpd.key file That an extensionRequest for a SubjectAltName be added to the signing request with the DNS name of my_company.idm.example.com : In the command above: The ipa-getcert request command specifies that the certificate is to be obtained from the IdM CA. The ipa-getcert request command is a shortcut for getcert request -c IPA . The -g option specifies the size of key to be generated if one is not already in place. The -D option specifies the SubjectAltName DNS value to be added to the request. The -C option instructs certmonger to restart the httpd service after obtaining the certificate. To specify that the certificate be issued with a particular profile, use the -T option. To request a certificate using the named issuer from the specified CA, use the -X ISSUER option. Optional: To check the status of your request: The output shows that the request is in the MONITORING status, which means that a certificate has been obtained. The locations of the key pair and the certificate are those requested. 19.3. Communication flow for certmonger requesting a service certificate These diagrams show the stages of what happens when certmonger requests a service certificate from Identity Management (IdM) certificate authority (CA) server. The sequence consists of these diagrams: Unencrypted communication Certmonger requesting a service certificate IdM CA issuing the service certificate Certmonger applying the service certificate Certmonger requesting a new certificate when the old one is nearing expiration Unencrypted communication shows the initial situation: without an HTTPS certificate, the communication between the web server and the browser is unencrypted. Figure 19.1. Unencrypted communication Certmonger requesting a service certificate shows the system administrator using certmonger to manually request an HTTPS certificate for the Apache web server. Note that when requesting a web server certificate, certmonger does not communicate directly with the CA. It proxies through IdM. Figure 19.2. Certmonger requesting a service certificate IdM CA issuing the service certificate shows an IdM CA issuing an HTTPS certificate for the web server. Figure 19.3. IdM CA issuing the service certificate Certmonger applying the service certificate shows certmonger placing the HTTPS certificate in appropriate locations on the IdM client and, if instructed to do so, restarting the httpd service. The Apache server subsequently uses the HTTPS certificate to encrypt the traffic between itself and the browser. Figure 19.4. Certmonger applying the service certificate Certmonger requesting a new certificate when the old one is nearing expiration shows certmonger automatically requesting a renewal of the service certificate from the IdM CA before the expiration of the certificate. The IdM CA issues a new certificate. Figure 19.5. Certmonger requesting a new certificate when the old one is nearing expiration 19.4. Viewing the details of a certificate request tracked by certmonger The certmonger service monitors certificate requests. When a request for a certificate is successfully signed, it results in a certificate. 
Certmonger manages certificate requests including the resulting certificates. Follow this procedure to view the details of a particular certificate request managed by certmonger . Procedure If you know how to specify the certificate request, list the details of only that particular certificate request. You can, for example, specify: The request ID The location of the certificate The certificate nickname For example, to view the details of the certificate whose request ID is 20190408143846, using the -v option to view all the details of errors in case your request for a certificate was unsuccessful: The output displays several pieces of information about the certificate, for example: the certificate location; in the example above, it is the NSS database in the /etc/dirsrv/slapd-IDM-EXAMPLE-COM directory the certificate nickname; in the example above, it is Server-Cert the file storing the pin; in the example above, it is /etc/dirsrv/slapd-IDM-EXAMPLE-COM/pwdfile.txt the Certificate Authority (CA) that will be used to renew the certificate; in the example above, it is the IPA CA the expiration date; in the example above, it is 2021-04-08 16:38:47 CEST the status of the certificate; in the example above, the MONITORING status means that the certificate is valid and it is being tracked the post-save command; in the example above, it is the restart of the LDAP service If you do not know how to specify the certificate request, list the details of all the certificates that certmonger is monitoring or attempting to obtain: Additional resources See the getcert list man page on your system. 19.5. Starting and stopping certificate tracking Follow this procedure to use the getcert stop-tracking and getcert start-tracking commands to monitor certificates. The two commands are provided by the certmonger service. Enabling certificate tracking is especially useful if you have imported a certificate issued by the Identity Management (IdM) certificate authority (CA) onto the machine from a different IdM client. Enabling certificate tracking can also be the final step of the following provisioning scenario: On the IdM server, you create a certificate for a system that does not exist yet. You create the new system. You enroll the new system as an IdM client. You import the certificate and the key from the IdM server on to the IdM client. You start tracking the certificate using certmonger to ensure that it gets renewed when it is due to expire. Procedure To disable the monitoring of a certificate with the Request ID of 20190408143846: For more options, see the getcert stop-tracking man page on your system. To enable the monitoring of a certificate stored in the /tmp/some_cert.crt file, whose private key is stored in the /tmp/some_key.key file: Certmonger cannot automatically identify the CA type that issued the certificate. For this reason, add the -c option with the IPA value to the getcert start-tracking command if the certificate was issued by the IdM CA. Omitting to add the -c option results in certmonger entering the NEED_CA state. For more options, see the getcert start-tracking man page on your system. Note The two commands do not manipulate the certificate. For example, getcert stop-tracking does not delete the certificate or remove it from the NSS database or from the filesystem but simply removes the certificate from the list of monitored certificates. Similarly, getcert start-tracking only adds a certificate to the list of monitored certificates. 19.6. 
Renewing a certificate manually When a certificate is near its expiration date, the certmonger daemon automatically issues a renewal command using the certificate authority (CA) helper, obtains a renewed certificate and replaces the certificate with the new one. You can also manually renew a certificate in advance by using the getcert resubmit command. This way, you can update the information the certificate contains, for example, by adding a Subject Alternative Name (SAN). Follow this procedure to renew a certificate manually. Procedure To renew a certificate with the Request ID of 20190408143846: To obtain the Request ID for a specific certificate, use the getcert list command. For details, see the getcert list man page on your system. 19.7. Making certmonger resume tracking of IdM certificates on a CA replica This procedure shows how to make certmonger resume the tracking of Identity Management (IdM) system certificates that are crucial for an IdM deployment with an integrated certificate authority after the tracking of certificates was interrupted. The interruption may have been caused by the IdM host being unenrolled from IdM during the renewal of the system certificates or by replication topology not working properly. The procedure also shows how to make certmonger resume the tracking of the IdM service certificates, namely the HTTP , LDAP and PKINIT certificates. Prerequisites The host on which you want to resume tracking system certificates is an IdM server that is also an IdM certificate authority (CA) but not the IdM CA renewal server. Procedure Get the PIN for the subsystem CA certificates: Add tracking to the subsystem CA certificates, replacing [internal PIN] in the commands below with the PIN obtained in the step: Add tracking for the remaining IdM certificates, the HTTP , LDAP , IPA renewal agent and PKINIT certificates: Restart certmonger : Wait for one minute after certmonger has started and then check the statuses of the new certificates: Note the following: If your IdM system certificates have all expired, see the Red Hat Knowledgebase solution How do I manually renew Identity Management (IPA) certificates on RHEL7/RHEL 8 after they have expired? to manually renew IdM system certificates on the IdM CA server that is also the CA renewal server and the CRL publisher server. Follow the procedure described in the Red Hat Knowledgebase solution How do I manually renew Identity Management (IPA) certificates on RHEL7 after they have expired? to manually renew IdM system certificates on all the other CA servers in the topology. 19.8. Using SCEP with certmonger The Simple Certificate Enrollment Protocol (SCEP) is a certificate management protocol that you can use across different devices and operating systems. If you are using a SCEP server as an external certificate authority (CA) in your environment, you can use certmonger to obtain a certificate for an Identity Management (IdM) client. 19.8.1. SCEP overview The Simple Certificate Enrollment Protocol (SCEP) is a certificate management protocol that you can use across different devices and operating systems. You can use a SCEP server as an external certificate authority (CA). You can configure an Identity Management (IdM) client to request and retrieve a certificate over HTTP directly from the CA SCEP service. This process is secured by a shared secret that is usually valid only for a limited time. On the client side, SCEP requires you to provide the following components: SCEP URL: the URL of the CA SCEP interface. 
SCEP shared secret: a challengePassword PIN shared between the CA and the SCEP client, used to obtain the certificate. The client then retrieves the CA certificate chain over SCEP and sends a certificate signing request to the CA. When configuring SCEP with certmonger , you create a new CA configuration profile that specifies the issued certificate parameters. 19.8.2. Requesting an IdM CA-signed certificate through SCEP The following example adds a SCEP_example SCEP CA configuration to certmonger and requests a new certificate on the client.idm.example.com IdM client. certmonger supports both the NSS certificate database format and file-based (PEM) formats, such as OpenSSL. Prerequisites You know the SCEP URL. You have the challengePassword PIN shared secret. Procedure Add the CA configuration to certmonger : -c : Mandatory nickname for the CA configuration. The same value can later be used with other getcert commands. -u : URL of the server's SCEP interface. Important When using an HTTPS URL, you must also specify the location of the PEM-formatted copy of the SCEP server CA certificate using the -R option. Verify that the CA configuration has been successfully added: If the configuration was successfully added, certmonger retrieves the CA chain from the remote CA. The CA chain then appears as thumbprints in the command output. When accessing the server over unencrypted HTTP, manually compare the thumbprints with the ones displayed at the SCEP server to prevent a man-in-the-middle attack. Request a certificate from the CA: If you are using NSS: You can use the options to specify the following parameters of the certificate request: -I : (Optional) Name of the task: the tracking ID for the request. The same value can later be used with the getcert list command. -c : CA configuration to submit the request to. -d : Directory with the NSS database to store the certificate and key. -n : Nickname of the certificate, used in the NSS database. -N : Subject name in the CSR. -L : Time-limited one-time challengePassword PIN issued by the CA. -D : Subject Alternative Name for the certificate, usually the same as the host name. If you are using OpenSSL: You can use the options to specify the following parameters of the certificate request: -I : (Optional) Name of the task: the tracking ID for the request. The same value can later be used with the getcert list command. -c : CA configuration to submit the request to. -f : Storage path to the certificate. -k : Storage path to the key. -N : Subject name in the CSR. -L : Time-limited one-time challengePassword PIN issued by the CA. -D : Subject Alternative Name for the certificate, usually the same as the host name. Verification Verify that a certificate was issued and correctly stored in the local database: If you used NSS, enter: If you used OpenSSL, enter: The status MONITORING signifies a successful retrieval of the issued certificate. The getcert-list(1) man page lists other possible states and their meanings. Additional resources For more options when requesting a certificate, see the getcert-request(1) man page on your system. 19.8.3. Automatically renewing AD SCEP certificates with certmonger When certmonger sends a SCEP certificate renewal request, this request is signed using the existing certificate private key. However, renewal requests sent by certmonger by default also include the challengePassword PIN that was used to originally obtain the certificates. 
An Active Directory (AD) Network Device Enrollment Service (NDES) server that works as the SCEP server automatically rejects any requests for renewal that contain the original challengePassword PIN. Consequently, the renewal fails. For renewal with AD to work, you need to configure certmonger to send the signed renewal requests without the challengePassword PIN. You also need to configure the AD server so that it does not compare the subject name at renewal. Note There may be SCEP servers other than AD that also refuse requests containing the challengePassword . In those cases, you may also need to change the certmonger configuration in this way. Prerequisites The RHEL server has to be running RHEL 8.6 or newer. Procedure Open regedit on the AD server. In the HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Cryptography\MSCEP subkey, add a new 32-bit REG_DWORD entry DisableRenewalSubjectNameMatch and set its value to 1 . On the server where certmonger is running, open the /etc/certmonger/certmonger.conf file and add the following section: Restart certmonger: | [
"ipa-getcert request -K HTTP/my_company.idm.example.com -k /etc/pki/tls/private/httpd.key -f /etc/pki/tls/certs/httpd.pem -g 2048 -D my_company.idm.example.com -C \"systemctl restart httpd\" New signing request \"20190604065735\" added.",
"ipa-getcert list -f /etc/pki/tls/certs/httpd.pem Number of certificates and requests being tracked: 3. Request ID '20190604065735': status: MONITORING stuck: no key pair storage: type=FILE,location='/etc/pki/tls/private/httpd.key' certificate: type=FILE,location='/etc/pki/tls/certs/httpd.crt' CA: IPA [...]",
"getcert list -i 20190408143846 -v Number of certificates and requests being tracked: 16. Request ID '20190408143846': status: MONITORING stuck: no key pair storage: type=NSSDB,location='/etc/dirsrv/slapd-IDM-EXAMPLE-COM',nickname='Server-Cert',token='NSS Certificate DB',pinfile='/etc/dirsrv/slapd-IDM-EXAMPLE-COM/pwdfile.txt' certificate: type=NSSDB,location='/etc/dirsrv/slapd-IDM-EXAMPLE-COM',nickname='Server-Cert',token='NSS Certificate DB' CA: IPA issuer: CN=Certificate Authority,O=IDM.EXAMPLE.COM subject: CN=r8server.idm.example.com,O=IDM.EXAMPLE.COM expires: 2021-04-08 16:38:47 CEST dns: r8server.idm.example.com principal name: ldap/[email protected] key usage: digitalSignature,nonRepudiation,keyEncipherment,dataEncipherment eku: id-kp-serverAuth,id-kp-clientAuth pre-save command: post-save command: /usr/libexec/ipa/certmonger/restart_dirsrv IDM-EXAMPLE-COM track: true auto-renew: true",
"getcert list",
"getcert stop-tracking -i 20190408143846",
"getcert start-tracking -c IPA -f /tmp/some_cert.crt -k /tmp/some_key.key",
"getcert resubmit -i 20190408143846",
"grep 'internal=' /var/lib/pki/pki-tomcat/conf/password.conf",
"getcert start-tracking -d /etc/pki/pki-tomcat/alias -n \"caSigningCert cert-pki-ca\" -c 'dogtag-ipa-ca-renew-agent' -P [internal PIN] -B /usr/libexec/ipa/certmonger/stop_pkicad -C '/usr/libexec/ipa/certmonger/renew_ca_cert \"caSigningCert cert-pki-ca\"' -T caCACert getcert start-tracking -d /etc/pki/pki-tomcat/alias -n \"auditSigningCert cert-pki-ca\" -c 'dogtag-ipa-ca-renew-agent' -P [internal PIN] -B /usr/libexec/ipa/certmonger/stop_pkicad -C '/usr/libexec/ipa/certmonger/renew_ca_cert \"auditSigningCert cert-pki-ca\"' -T caSignedLogCert getcert start-tracking -d /etc/pki/pki-tomcat/alias -n \"ocspSigningCert cert-pki-ca\" -c 'dogtag-ipa-ca-renew-agent' -P [internal PIN] -B /usr/libexec/ipa/certmonger/stop_pkicad -C '/usr/libexec/ipa/certmonger/renew_ca_cert \"ocspSigningCert cert-pki-ca\"' -T caOCSPCert getcert start-tracking -d /etc/pki/pki-tomcat/alias -n \"subsystemCert cert-pki-ca\" -c 'dogtag-ipa-ca-renew-agent' -P [internal PIN] -B /usr/libexec/ipa/certmonger/stop_pkicad -C '/usr/libexec/ipa/certmonger/renew_ca_cert \"subsystemCert cert-pki-ca\"' -T caSubsystemCert getcert start-tracking -d /etc/pki/pki-tomcat/alias -n \"Server-Cert cert-pki-ca\" -c 'dogtag-ipa-ca-renew-agent' -P [internal PIN] -B /usr/libexec/ipa/certmonger/stop_pkicad -C '/usr/libexec/ipa/certmonger/renew_ca_cert \"Server-Cert cert-pki-ca\"' -T caServerCert",
"getcert start-tracking -f /var/lib/ipa/certs/httpd.crt -k /var/lib/ipa/private/httpd.key -p /var/lib/ipa/passwds/idm.example.com-443-RSA -c IPA -C /usr/libexec/ipa/certmonger/restart_httpd -T caIPAserviceCert getcert start-tracking -d /etc/dirsrv/slapd-IDM-EXAMPLE-COM -n \"Server-Cert\" -c IPA -p /etc/dirsrv/slapd-IDM-EXAMPLE-COM/pwdfile.txt -C '/usr/libexec/ipa/certmonger/restart_dirsrv \"IDM-EXAMPLE-COM\"' -T caIPAserviceCert getcert start-tracking -f /var/lib/ipa/ra-agent.pem -k /var/lib/ipa/ra-agent.key -c dogtag-ipa-ca-renew-agent -B /usr/libexec/ipa/certmonger/renew_ra_cert_pre -C /usr/libexec/ipa/certmonger/renew_ra_cert -T caSubsystemCert getcert start-tracking -f /var/kerberos/krb5kdc/kdc.crt -k /var/kerberos/krb5kdc/kdc.key -c dogtag-ipa-ca-renew-agent -B /usr/libexec/ipa/certmonger/renew_ra_cert_pre -C /usr/libexec/ipa/certmonger/renew_kdc_cert -T KDCs_PKINIT_Certs",
"systemctl restart certmonger",
"getcert list",
"getcert add-scep-ca -c SCEP_example -u SCEP_URL",
"getcert list-cas -c SCEP_example CA 'SCEP_example': is-default: no ca-type: EXTERNAL helper-location: /usr/libexec/certmonger/scep-submit -u http://SCEP_server_enrollment_interface_URL SCEP CA certificate thumbprint (MD5): A67C2D4B 771AC186 FCCA654A 5E55AAF7 SCEP CA certificate thumbprint (SHA1): FBFF096C 6455E8E9 BD55F4A5 5787C43F 1F512279",
"getcert request -I Example_Task -c SCEP_example -d /etc/pki/nssdb -n ExampleCert -N cn=\" client.idm.example.com \" -L one-time_PIN -D client.idm.example.com",
"getcert request -I Example_Task -c SCEP_example -f /etc/pki/tls/certs/server.crt -k /etc/pki/tls/private/private.key -N cn=\" client.idm.example.com \" -L one-time_PIN -D client.idm.example.com",
"getcert list -I Example_Task Request ID 'Example_Task': status: MONITORING stuck: no key pair storage: type=NSSDB,location='/etc/pki/nssdb',nickname='ExampleCert',token='NSS Certificate DB' certificate: type=NSSDB,location='/etc/pki/nssdb',nickname='ExampleCert',token='NSS Certificate DB' signing request thumbprint (MD5): 503A8EDD DE2BE17E 5BAA3A57 D68C9C1B signing request thumbprint (SHA1): B411ECE4 D45B883A 75A6F14D 7E3037F1 D53625F4 CA: IPA issuer: CN=Certificate Authority,O=EXAMPLE.COM subject: CN=client.idm.example.com,O=EXAMPLE.COM expires: 2018-05-06 10:28:06 UTC key usage: digitalSignature,keyEncipherment eku: iso.org.dod.internet.security.mechanisms.8.2.2 certificate template/profile: IPSECIntermediateOffline pre-save command: post-save command: track: true auto-renew: true",
"getcert list -I Example_Task Request ID 'Example_Task': status: MONITORING stuck: no key pair storage: type=FILE,location='/etc/pki/tls/private/private.key' certificate: type=FILE,location='/etc/pki/tls/certs/server.crt' CA: IPA issuer: CN=Certificate Authority,O=EXAMPLE.COM subject: CN=client.idm.example.com,O=EXAMPLE.COM expires: 2018-05-06 10:28:06 UTC eku: id-kp-serverAuth,id-kp-clientAuth pre-save command: post-save command: track: true auto-renew: true",
"[scep] challenge_password_otp = yes",
"systemctl restart certmonger"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_certificates_in_idm/using-certmonger_managing-certificates-in-idm |
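As a quick health check after re-adding tracking or renewing certificates, it can help to summarize the state of every request that certmonger knows about. The following is one possible approach; the grep pattern is an assumption and only the getcert commands themselves belong to the certmonger toolset.
# List each request ID together with its current status
getcert list | grep -E "Request ID|status:"
# Investigate any request that is not in the MONITORING state
getcert list -i <request_id> -v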
Chapter 4. Important links | Chapter 4. Important links Red Hat AMQ 7 Supported Configurations Red Hat AMQ 7 Component Details AMQ Clients 2.7 Release Notes AMQ Clients 2.6 Release Notes AMQ Clients 2.5 Release Notes AMQ Clients 2.4 Release Notes AMQ Clients 2.3 Release Notes AMQ Clients 2.2 Release Notes AMQ Clients 2.1 Release Notes AMQ Clients 2.0 Release Notes AMQ Clients 1.2 Release Notes AMQ Clients 1.1 Release Notes Revised on 2020-10-08 11:29:42 UTC | null | https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/amq_clients_2.8_release_notes/important_links |
16.3. Using the Same Service Principal for Multiple Services | 16.3. Using the Same Service Principal for Multiple Services Within a cluster, the same service principal can be used for multiple services, spread across different machines. Retrieve a service principal using the ipa-getkeytab command. Either direct multiple servers or services to use the same file, or copy the file to individual servers as required. | [
"ipa-getkeytab -s kdc.example.com -p HTTP/server.example.com -k /etc/httpd/conf/krb5.keytab -e aes256-cts"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/using_the_same_service_principal_for_multiple_services |
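After retrieving the keytab on one node, you can distribute it to the remaining cluster members. The following loop is a sketch of one way to do this for an Apache HTTP cluster; the host names, ownership, and paths are assumptions that depend on your deployment.
# Copy the shared keytab to the other web servers and restrict access to it
for host in web02.example.com web03.example.com; do
    scp /etc/httpd/conf/krb5.keytab ${host}:/etc/httpd/conf/krb5.keytab
    ssh ${host} "chown apache:apache /etc/httpd/conf/krb5.keytab && chmod 0600 /etc/httpd/conf/krb5.keytab"
done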
2.3. Monitoring System Capacity | 2.3. Monitoring System Capacity Monitoring system capacity is done as part of an ongoing capacity planning program. Capacity planning uses long-term resource monitoring to determine rates of change in the utilization of system resources. Once these rates of change are known, it becomes possible to conduct more accurate long-term planning regarding the procurement of additional resources. Monitoring done for capacity planning purposes is different from performance monitoring in two ways: The monitoring is done on a more-or-less continuous basis The monitoring is usually not as detailed The reason for these differences stems from the goals of a capacity planning program. Capacity planning requires a "big picture" viewpoint; short-term or anomalous resource usage is of little concern. Instead, data is collected over a period of time, making it possible to categorize resource utilization in terms of changes in workload. In more narrowly-defined environments, (where only one application is run, for example) it is possible to model the application's impact on system resources. This can be done with sufficient accuracy to make it possible to determine, for example, the impact of five more customer service representatives running the customer service application during the busiest time of the day. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/introduction_to_system_administration/s1-resource-capacity |
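In practice, this kind of continuous, low-detail collection is usually automated with the sysstat utilities rather than performed interactively. The commands below are an illustrative sketch only; package names, service management commands, and log locations vary between releases.
# Enable periodic sampling; the sysstat package schedules sa1 through cron
chkconfig sysstat on && service sysstat start
# Summarize CPU utilization for a past day to observe long-term trends
sar -u -f /var/log/sa/sa15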
Chapter 9. Installing on IBM Cloud VPC | Chapter 9. Installing on IBM Cloud VPC 9.1. Preparing to install on IBM Cloud VPC The installation workflows documented in this section are for IBM Cloud VPC infrastructure environments. IBM Cloud Classic is not supported at this time. For more information on the difference between Classic and VPC infrastructures, see IBM's documentation . 9.1.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . Important IBM Cloud using installer-provisioned infrastructure is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 9.1.2. Requirements for installing OpenShift Container Platform on IBM Cloud VPC Before installing OpenShift Container Platform on IBM Cloud VPC, you must create a service account and configure an IBM Cloud account. See Configuring an IBM Cloud account for details about creating an account, enabling API services, configuring DNS, IBM Cloud account limits, and supported IBM Cloud VPC regions. You must manually manage your cloud credentials when installing a cluster to IBM Cloud VPC. Do this by configuring the Cloud Credential Operator (CCO) for manual mode before you install the cluster. For more information, see Configuring IAM for IBM Cloud VPC . 9.1.3. Choosing a method to install OpenShift Container Platform on IBM Cloud VPC You can install OpenShift Container Platform on IBM Cloud VPC using installer-provisioned infrastructure. This process involves using an installation program to provision the underlying infrastructure for your cluster. Installing OpenShift Container Platform on IBM Cloud VPC using user-provisioned infrastructure is not supported at this time. See Installation process for more information about installer-provisioned installation processes. 9.1.3.1. Installing a cluster on installer-provisioned infrastructure You can install a cluster on IBM Cloud VPC infrastructure that is provisioned by the OpenShift Container Platform installation program by using one of the following methods: Installing a customized cluster on IBM Cloud VPC : You can install a customized cluster on IBM Cloud VPC infrastructure that the installation program provisions. The installation program allows for some customization to be applied at the installation stage. Many other customization options are available post-installation . Installing a cluster on IBM Cloud VPC with network customizations : You can customize your OpenShift Container Platform network configuration during installation, so that your cluster can coexist with your existing IP address allocations and adhere to your network requirements. 9.1.4. steps Configuring an IBM Cloud account 9.2. Configuring an IBM Cloud account Before you can install OpenShift Container Platform, you must configure an IBM Cloud account. Important IBM Cloud VPC using installer-provisioned infrastructure is a Technology Preview feature only. 
Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 9.2.1. Prerequisites You have an IBM Cloud account with a subscription. You cannot install OpenShift Container Platform on a free or trial IBM Cloud account. 9.2.2. Quotas and limits on IBM Cloud VPC The OpenShift Container Platform cluster uses a number of IBM Cloud VPC components, and the default quotas and limits affect your ability to install OpenShift Container Platform clusters. If you use certain cluster configurations, deploy your cluster in certain regions, or run multiple clusters from your account, you might need to request additional resources for your IBM Cloud account. For a comprehensive list of the default IBM Cloud VPC quotas and service limits, see IBM Cloud's documentation for Quotas and service limits . Virtual Private Cloud (VPC) Each OpenShift Container Platform cluster creates its own VPC. The default quota of VPCs per region is 10 and will allow 10 clusters. To have more than 10 clusters in a single region, you must increase this quota. Application load balancer By default, each cluster creates three application load balancers (ALBs): Internal load balancer for the master API server External load balancer for the master API server Load balancer for the router You can create additional LoadBalancer service objects to create additional ALBs. The default quota of VPC ALBs are 50 per region. To have more than 50 ALBs, you must increase this quota. VPC ALBs are supported. Classic ALBs are not supported for IBM Cloud VPC. Floating IP address By default, the installation program distributes control plane and compute machines across all availability zones within a region to provision the cluster in a highly available configuration. In each availability zone, a public gateway is created and requires a separate floating IP address. The default quota for a floating IP address is 20 addresses per availability zone. The default cluster configuration yields three floating IP addresses: Two floating IP addresses in the us-east-1 primary zone. The IP address associated with the bootstrap node is removed after installation. One floating IP address in the us-east-2 secondary zone. One floating IP address in the us-east-3 secondary zone. IBM Cloud VPC can support up to 19 clusters per region in an account. If you plan to have more than 19 default clusters, you must increase this quota. Virtual Server Instances (VSI) By default, a cluster creates VSIs using bx2-4x16 profiles, which includes the following resources by default: 4 vCPUs 16 GB RAM The following nodes are created: One bx2-4x16 bootstrap machine, which is removed after the installation is complete Three bx2-4x16 control plane nodes Three bx2-4x16 compute nodes For more information, see IBM Cloud's documentation on supported profiles . Table 9.1. 
VSI component quotas and limits VSI component Default IBM Cloud VPC quota Default cluster configuration Maximum number of clusters vCPU 200 vCPUs per region 28 vCPUs, or 24 vCPUs after bootstrap removal 8 per region RAM 1600 GB per region 112 GB, or 96 GB after bootstrap removal 16 per region Storage 18 TB per region 1050 GB, or 900 GB after bootstrap removal 19 per region If you plan to exceed the resources stated in the table, you must increase your IBM Cloud account quota. Block Storage Volumes For each VPC machine, a block storage device is attached for its boot volume. The default cluster configuration creates seven VPC machines, resulting in seven block storage volumes. Additional Kubernetes persistent volume claims (PVCs) of the IBM Cloud VPC storage class create additional block storage volumes. The default quota of VPC block storage volumes are 300 per region. To have more than 300 volumes, you must increase this quota. 9.2.3. Configuring DNS resolution using Cloud Internet Services IBM Cloud Internet Services (CIS) is used by the installation program to configure cluster DNS resolution and provide name lookup for the cluster to external resources. Only public DNS is supported with IBM Cloud VPC. Note IBM Cloud VPC does not support IPv6, so dual stack or IPv6 environments are not possible. You must create a domain zone in CIS in the same account as your cluster. You must also ensure the zone is authoritative for the domain. You can do this using a root domain or subdomain. Prerequisites You have installed the IBM Cloud CLI . Procedure If you do not already have an existing domain and registrar, you must acquire them. For more information, see IBM's documentation . Create a CIS instance to use with your cluster. Install the CIS plugin: USD ibmcloud plugin install cis Create the CIS instance: USD ibmcloud cis instance-create <instance_name> standard 1 1 At a minimum, a Standard plan is required for CIS to manage the cluster subdomain and its DNS records. Connect an existing domain to your CIS instance. Set the context instance for CIS: USD ibmcloud cis instance-set <instance_name> 1 1 The instance cloud resource name. Add the domain for CIS: USD ibmcloud cis domain-add <domain_name> 1 1 The fully qualified domain name. You can use either the root domain or subdomain value as the domain name, depending on which you plan to configure. Note A root domain uses the form openshiftcorp.com . A subdomain uses the form clusters.openshiftcorp.com . Open the CIS web console , navigate to the Overview page, and note your CIS name servers. These name servers will be used in the step. Configure the name servers for your domains or subdomains at the domain's registrar or DNS provider. For more information, see IBM Cloud's documentation . 9.2.4. IBM Cloud VPC IAM Policies and API Key To install OpenShift Container Platform into your IBM Cloud account, the installation program requires an IAM API key, which provides authentication and authorization to access IBM Cloud service APIs. You can use an existing IAM API key that contains the required policies or create a new one. For an IBM Cloud IAM overview, see the IBM Cloud documentation . 9.2.4.1. Required access policies You must assign the required access policies to your IBM Cloud account. Table 9.2. 
Required access policies Service type Service Access policy scope Platform access Service access Account management IAM Identity Service All resources or a subset of resources [1] Editor, Operator, Viewer, Administrator Service ID creator Account management [2] Identity and Access Management All resources Editor, Operator, Viewer, Administrator Account management Resource group only All resource groups in the account Administrator IAM services Cloud Object Storage All resources or a subset of resources [1] Editor, Operator, Viewer, Administrator Reader, Writer, Manager, Content Reader, Object Reader, Object Writer IAM services Internet Services All resources or a subset of resources [1] Editor, Operator, Viewer, Administrator Reader, Writer, Manager IAM services VPC Infrastructure Services All resources or a subset of resources [1] Editor, Operator, Viewer, Administrator Reader, Writer, Manager The policy access scope should be set based on how granular you want to assign access. The scope can be set to All resources or Resources based on selected attributes . Optional: This access policy is only required if you want the installation program to create a resource group. For more information on resource groups, see IBM Cloud's documentation . 9.2.4.2. Access policy assignment In IBM Cloud VPC IAM, access policies can be attached to different subjects: Access group (Recommended) Service ID User The recommended method is to define IAM access policies in an access group . This helps organize all the access required for OpenShift Container Platform and enables you to onboard users and service IDs to this group. You can also assign access to users and service IDs directly, if desired. 9.2.4.3. Creating an API key You must create a user API key or a service ID API key for your IBM Cloud account. Prerequisites You have assigned the required access policies to your IBM Cloud account. You have attached you IAM access policies to an access group, or other appropriate resource. Procedure Create an API key, depending on how you defined your IAM access policies. For example, if you assigned your access policies to a user, you must create a user API key . If you assigned your access policies to a service ID, you must create a service ID API key . If your access policies are assigned to an access group, you can use either API key type. For more information on IBM Cloud VPC API keys, see Understanding API keys . 9.2.5. Supported IBM Cloud VPC regions You can deploy an OpenShift Container Platform cluster to the following regions: au-syd (Sydney, Australia) br-sao (Sao Paulo, Brazil) ca-tor (Toronto, Canada) eu-de (Frankfurt, Germany) eu-gb (London, United Kingdom) jp-osa (Osaka, Japan) jp-tok (Tokyo, Japan) us-east (Washington DC, United States) us-south (Dallas, United States) 9.2.6. steps Configuring IAM for IBM Cloud VPC 9.3. Configuring IAM for IBM Cloud VPC In environments where the cloud identity and access management (IAM) APIs are not reachable, you must put the Cloud Credential Operator (CCO) into manual mode before you install the cluster. Important IBM Cloud VPC using installer-provisioned infrastructure is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. 
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 9.3.1. Alternatives to storing administrator-level secrets in the kube-system project The Cloud Credential Operator (CCO) manages cloud provider credentials as Kubernetes custom resource definitions (CRDs). You can configure the CCO to suit the security requirements of your organization by setting different values for the credentialsMode parameter in the install-config.yaml file. Storing an administrator-level credential secret in the cluster kube-system project is not supported for IBM Cloud; therefore, you must set the credentialsMode parameter for the CCO to Manual when installing OpenShift Container Platform and manage your cloud credentials manually. Using manual mode allows each cluster component to have only the permissions it requires, without storing an administrator-level credential in the cluster. You can also use this mode if your environment does not have connectivity to the cloud provider public IAM endpoint. However, you must manually reconcile permissions with new release images for every upgrade. You must also manually supply credentials for every component that requests them. Additional resources About the Cloud Credential Operator 9.3.2. Configuring the Cloud Credential Operator utility To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). Procedure Obtain the OpenShift Container Platform release image by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: USD CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the USDRELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: USD oc image extract USDCCO_IMAGE --file="/usr/bin/ccoctl" -a ~/.pull-secret Change the permissions to make ccoctl executable by running the following command: USD chmod 775 ccoctl Verification To verify that ccoctl is ready to use, display the help file by running the following command: USD ccoctl --help Output of ccoctl --help OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. Additional resources Rotating API keys for IBM Cloud VPC 9.3.3. steps Installing a cluster on IBM Cloud VPC with customizations 9.3.4. Additional resources Preparing to update a cluster with manually maintained credentials 9.4. 
Installing a cluster on IBM Cloud VPC with customizations In OpenShift Container Platform version 4.11, you can install a customized cluster on infrastructure that the installation program provisions on IBM Cloud VPC. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster. Important IBM Cloud VPC using installer-provisioned infrastructure is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 9.4.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an IBM Cloud account to host the cluster. If you use a firewall, you configured it to allow the sites that your cluster requires access to. You configured the ccoctl utility before you installed the cluster. For more information, see Configuring IAM for IBM Cloud VPC . 9.4.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.11, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 9.4.3. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. 
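For reference, the following sketch shows how the key pair is typically used after the agent is configured and the cluster is running; <node_address> is a placeholder for the IP address or host name of an RHCOS node that is reachable from your workstation:
USD ssh-add -l
USD ssh core@<node_address>
The first command lists the private key identities that are currently loaded in the ssh-agent process, and the second command opens a shell on the node as the core user.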
Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 9.4.4. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on a local computer. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. 
For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 9.4.5. Exporting the IBM Cloud VPC API key You must set the IBM Cloud VPC API key you created as a global variable; the installation program ingests the variable during startup to set the API key. Prerequisites You have created either a user API key or service ID API key for your IBM Cloud account. Procedure Export your IBM Cloud VPC API key as a global variable: USD export IC_API_KEY=<api_key> Important You must set the variable name exactly as specified; the installation program expects the variable name to be present during startup. 9.4.6. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on IBM Cloud. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Obtain service principal permissions at the subscription level. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select ibmcloud as the platform to target. Select the region to deploy the cluster to. Select the base domain to deploy the cluster to. The base domain corresponds to the public DNS zone that you created for your cluster. Enter a descriptive name for your cluster. Paste the pull secret from the Red Hat OpenShift Cluster Manager . Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. 9.4.6.1. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform.
When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform. Note After installation, you cannot modify these parameters in the install-config.yaml file. 9.4.6.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 9.3. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installer may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . platform The configuration for the specific platform upon which to perform the installation: alibabacloud , aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 9.4.6.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Note Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions. For regional disaster recovery scenarios, ensure that you use a nonoverlapping range of private IP addresses for the cluster and service networks in each cluster. Table 9.4. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The cluster network provider Container Network Interface (CNI) cluster network provider to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI provider for all-Linux networks. OVNKubernetes is a CNI provider for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OpenShiftSDN . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. 
For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 9.4.6.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 9.5. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String capabilities Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. String array capabilities.baselineCapabilitySet Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 and vCurrent . v4.11 enables the baremetal Operator, the marketplace Operator, and the openshift-samples content. vCurrent installs the recommended set of capabilities for the current version of OpenShift Container Platform. The default value is vCurrent . String capabilities.additionalEnabledCapabilities Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . Valid values are baremetal , marketplace and openshift-samples . You may specify multiple capabilities in this parameter. String array cgroupsV2 Enables Linux control groups version 2 (cgroups v2) on specific nodes in your cluster. The OpenShift Container Platform process for enabling cgroups v2 disables all cgroup version 1 controllers and hierarchies. The OpenShift Container Platform cgroups version 2 feature is in Developer Preview and is not supported by Red Hat at this time. true compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). 
String compute.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String controlPlane.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint , Passthrough or Manual . Mint , Passthrough , Manual or an empty string ( "" ). fips Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . 
The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . To deploy a private cluster, which cannot be accessed from the internet, set publish to Internal . The default value is External . sshKey The SSH key to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . 9.4.6.1.4. Additional IBM Cloud VPC configuration parameters Additional IBM Cloud VPC configuration parameters are described in the following table: Table 9.6. Additional IBM Cloud VPC parameters Parameter Description Values platform.ibmcloud.resourceGroupName The name of an existing resource group to install your cluster to. This resource group must only be used for this specific cluster because the cluster components assume ownership of all of the resources in the resource group. If undefined, a new resource group is created for the cluster. [ 1 ] String, for example existing_resource_group . platform.ibmcloud.dedicatedHosts.profile The new dedicated host to create. If you specify a value for platform.ibmcloud.dedicatedHosts.name , this parameter is not required. Valid IBM Cloud VPC dedicated host profile, such as cx2-host-152x304 . [ 2 ] platform.ibmcloud.dedicatedHosts.name An existing dedicated host. If you specify a value for platform.ibmcloud.dedicatedHosts.profile , this parameter is not required. String, for example my-dedicated-host-name . platform.ibmcloud.type The instance type for all IBM Cloud VPC machines. Valid IBM Cloud VPC instance type, such as bx2-8x32 . [ 2 ] Whether you define an existing resource group, or if the installer creates one, determines how the resource group is treated when the cluster is uninstalled. If you define a resource group, the installer removes all of the installer-provisioned resources, but leaves the resource group alone; if a resource group is created as part of the installation, the installer removes all of the installer provisioned resources and the resource group. To determine which profile best meets your needs, see Instance Profiles in the IBM documentation. 9.4.6.2. Sample customized install-config.yaml file for IBM Cloud VPC You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. 
apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 3 hyperthreading: Enabled 4 name: master platform: ibmcloud: {} replicas: 3 compute: 5 6 - hyperthreading: Enabled 7 name: worker platform: ibmcloud: {} replicas: 3 metadata: name: test-cluster 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 172.30.0.0/16 platform: ibmcloud: region: us-south 9 credentialsMode: Manual publish: External pullSecret: '{"auths": ...}' 10 fips: false 11 sshKey: ssh-ed25519 AAAA... 12 1 8 9 10 Required. The installation program prompts you for this value. 2 5 If you do not provide these parameters and values, the installation program provides the default value. 3 6 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 4 7 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger machine types, such as n1-standard-8 , for your machines if you disable simultaneous multithreading. 11 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . The use of FIPS Validated or Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. 12 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 9.4.6.3. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. 
Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 9.4.7. Manually creating IAM for IBM Cloud VPC Installing the cluster requires that the Cloud Credential Operator (CCO) operate in manual mode. While the installation program configures the CCO for manual mode, you must specify the identity and access management secrets for you cloud provider. You can use the Cloud Credential Operator (CCO) utility ( ccoctl ) to create the required IBM Cloud VPC resources. Prerequisites You have configured the ccoctl binary. You have an existing install-config.yaml file. Procedure Edit the install-config.yaml configuration file so that it contains the credentialsMode parameter set to Manual . Example install-config.yaml configuration file apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: amd64 hyperthreading: Enabled ... 1 This line is added to set the credentialsMode parameter to Manual . 
To generate the manifests, run the following command from the directory that contains the installation program: USD openshift-install create manifests --dir <installation_directory> From the directory that contains the installation program, obtain the OpenShift Container Platform release image that your openshift-install binary is built to use: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the CredentialsRequest objects from the OpenShift Container Platform release image: USD oc adm release extract --cloud=ibmcloud --credentials-requests USDRELEASE_IMAGE \ --to=<path_to_credential_requests_directory> 1 1 The directory where the credential requests will be stored. This command creates a YAML file for each CredentialsRequest object. Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: "1.0" name: openshift-image-registry-ibmcos namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: IBMCloudProviderSpec policies: - attributes: - name: serviceName value: cloud-object-storage roles: - crn:v1:bluemix:public:iam::::role:Viewer - crn:v1:bluemix:public:iam::::role:Operator - crn:v1:bluemix:public:iam::::role:Editor - crn:v1:bluemix:public:iam::::serviceRole:Reader - crn:v1:bluemix:public:iam::::serviceRole:Writer - attributes: - name: resourceType value: resource-group roles: - crn:v1:bluemix:public:iam::::role:Viewer Create the service ID for each credential request, assign the policies defined, create an API key in IBM Cloud VPC, and generate the secret: USD ccoctl ibmcloud create-service-id \ --credentials-requests-dir <path_to_store_credential_request_templates> \ 1 --name <cluster_name> \ 2 --output-dir <installation_directory> \ --resource-group-name <resource_group_name> 3 1 The directory where the credential requests are stored. 2 The name of the OpenShift Container Platform cluster. 3 Optional: The name of the resource group used for scoping the access policies. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. If an incorrect resource group name is provided, the installation fails during the bootstrap phase. To find the correct resource group name, run the following command: USD grep resourceGroupName <installation_directory>/manifests/cluster-infrastructure-02-config.yml Verification Ensure that the appropriate secrets were generated in your cluster's manifests directory. 9.4.8. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites Configure an account with the cloud platform that hosts your cluster. Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . 
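If the create cluster command exits before the deployment finishes, for example because of a timeout, you can usually resume monitoring the existing deployment instead of starting over. The following is a minimal sketch; <installation_directory> is the same directory that you passed to the create cluster command:
USD ./openshift-install wait-for install-complete --dir <installation_directory> --log-level=info
You can also follow the detailed installer log in <installation_directory>/.openshift_install.log while the deployment is in progress.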
Note If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 9.4.9. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.11. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture in the Product Variant drop-down menu. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.11 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.11 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . 
To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.11 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.11 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 9.4.10. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources Accessing the web console 9.4.11. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.11, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console . After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources About remote health monitoring 9.4.12. steps Customize your cluster . If necessary, you can opt out of remote health reporting . 9.5. Installing a cluster on IBM Cloud VPC with network customizations In OpenShift Container Platform version 4.11, you can install a cluster with a customized network configuration on infrastructure that the installation program provisions on IBM Cloud VPC. By customizing your network configuration, your cluster can coexist with existing IP address allocations in your environment and integrate with existing MTU and VXLAN configurations. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster. You must set most of the network configuration parameters during installation, and you can modify only kubeProxy configuration parameters in a running cluster. Important IBM Cloud VPC using installer-provisioned infrastructure is a Technology Preview feature only. 
Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 9.5.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an IBM Cloud account to host the cluster. If you use a firewall, you configured it to allow the sites that your cluster requires access to. You configured the ccoctl utility before you installed the cluster. For more information, see Configuring IAM for IBM Cloud VPC . 9.5.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.11, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 9.5.3. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. 
For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 9.5.4. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on a local computer. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 9.5.5. 
Exporting the IBM Cloud VPC API key You must set the IBM Cloud VPC API key you created as a global variable; the installation program ingests the variable during startup to set the API key. Prerequisites You have created either a user API key or service ID API key for your IBM Cloud account. Procedure Export your IBM Cloud VPC API key as a global variable: USD export IC_API_KEY=<api_key> Important You must set the variable name exactly as specified; the installation program expects the variable name to be present during startup. 9.5.6. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on IBM Cloud. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Obtain service principal permissions at the subscription level. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select ibmcloud as the platform to target. Select the region to deploy the cluster to. Select the base domain to deploy the cluster to. The base domain corresponds to the public DNS zone that you created for your cluster. Enter a descriptive name for your cluster. Paste the pull secret from the Red Hat OpenShift Cluster Manager . Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. 9.5.6.1. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform. Note After installation, you cannot modify these parameters in the install-config.yaml file. 9.5.6.1.1.
Required configuration parameters Required installation configuration parameters are described in the following table: Table 9.7. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installer may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . platform The configuration for the specific platform upon which to perform the installation: alibabacloud , aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 9.5.6.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Note Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions. For regional disaster recovery scenarios, ensure that you use a nonoverlapping range of private IP addresses for the cluster and service networks in each cluster. Table 9.8. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The cluster network provider Container Network Interface (CNI) cluster network provider to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI provider for all-Linux networks. OVNKubernetes is a CNI provider for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OpenShiftSDN . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. 
For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 9.5.6.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 9.9. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String capabilities Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. String array capabilities.baselineCapabilitySet Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 and vCurrent . v4.11 enables the baremetal Operator, the marketplace Operator, and the openshift-samples content. vCurrent installs the recommended set of capabilities for the current version of OpenShift Container Platform. The default value is vCurrent . String capabilities.additionalEnabledCapabilities Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . Valid values are baremetal , marketplace and openshift-samples . You may specify multiple capabilities in this parameter. String array cgroupsV2 Enables Linux control groups version 2 (cgroups v2) on specific nodes in your cluster. The OpenShift Container Platform process for enabling cgroups v2 disables all cgroup version 1 controllers and hierarchies. The OpenShift Container Platform cgroups version 2 feature is in Developer Preview and is not supported by Red Hat at this time. true compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String compute.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . 
The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String controlPlane.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint , Passthrough or Manual . Mint , Passthrough , Manual or an empty string ( "" ). fips Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . 
Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . To deploy a private cluster, which cannot be accessed from the internet, set publish to Internal . The default value is External . sshKey The SSH key to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . 9.5.6.1.4. Additional IBM Cloud VPC configuration parameters Additional IBM Cloud VPC configuration parameters are described in the following table: Table 9.10. Additional IBM Cloud VPC parameters Parameter Description Values platform.ibmcloud.resourceGroupName The name of an existing resource group to install your cluster to. This resource group must only be used for this specific cluster because the cluster components assume ownership of all of the resources in the resource group. If undefined, a new resource group is created for the cluster. [ 1 ] String, for example existing_resource_group . platform.ibmcloud.dedicatedHosts.profile The new dedicated host to create. If you specify a value for platform.ibmcloud.dedicatedHosts.name , this parameter is not required. Valid IBM Cloud VPC dedicated host profile, such as cx2-host-152x304 . [ 2 ] platform.ibmcloud.dedicatedHosts.name An existing dedicated host. If you specify a value for platform.ibmcloud.dedicatedHosts.profile , this parameter is not required. String, for example my-dedicated-host-name . platform.ibmcloud.type The instance type for all IBM Cloud VPC machines. Valid IBM Cloud VPC instance type, such as bx2-8x32 . [ 2 ] Whether you define an existing resource group, or if the installer creates one, determines how the resource group is treated when the cluster is uninstalled. If you define a resource group, the installer removes all of the installer-provisioned resources, but leaves the resource group alone; if a resource group is created as part of the installation, the installer removes all of the installer-provisioned resources and the resource group. To determine which profile best meets your needs, see Instance Profiles in the IBM documentation. 9.5.6.2. Sample customized install-config.yaml file for IBM Cloud VPC You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 3 hyperthreading: Enabled 4 name: master platform: ibmcloud: {} replicas: 3 compute: 5 6 - hyperthreading: Enabled 7 name: worker platform: ibmcloud: {} replicas: 3 metadata: name: test-cluster 8 networking: 9 clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 172.30.0.0/16 platform: ibmcloud: region: us-south 10 credentialsMode: Manual publish: External pullSecret: '{"auths": ...}' 11 fips: false 12 sshKey: ssh-ed25519 AAAA... 13 1 8 10 11 Required.
The installation program prompts you for this value. 2 5 9 If you do not provide these parameters and values, the installation program provides the default value. 3 6 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 4 7 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger machine types, such as n1-standard-8 , for your machines if you disable simultaneous multithreading. 12 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . The use of FIPS Validated or Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. 13 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 9.5.6.3. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. 
For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 9.5.7. Manually creating IAM for IBM Cloud VPC Installing the cluster requires that the Cloud Credential Operator (CCO) operate in manual mode. While the installation program configures the CCO for manual mode, you must specify the identity and access management secrets for your cloud provider. You can use the Cloud Credential Operator (CCO) utility ( ccoctl ) to create the required IBM Cloud VPC resources. Prerequisites You have configured the ccoctl binary. You have an existing install-config.yaml file. Procedure Edit the install-config.yaml configuration file so that it contains the credentialsMode parameter set to Manual . Example install-config.yaml configuration file apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: amd64 hyperthreading: Enabled ... 1 This line is added to set the credentialsMode parameter to Manual .
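Before continuing, you can optionally confirm that the edit is in place. The following check is only an illustrative suggestion; <installation_directory> is a placeholder for the directory that contains your file:
grep credentialsMode <installation_directory>/install-config.yaml
If the edit was applied, the command prints the credentialsMode: Manual line.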
To generate the manifests, run the following command from the directory that contains the installation program: USD openshift-install create manifests --dir <installation_directory> From the directory that contains the installation program, obtain the OpenShift Container Platform release image that your openshift-install binary is built to use: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the CredentialsRequest objects from the OpenShift Container Platform release image: USD oc adm release extract --cloud=ibmcloud --credentials-requests USDRELEASE_IMAGE \ --to=<path_to_credential_requests_directory> 1 1 The directory where the credential requests will be stored. This command creates a YAML file for each CredentialsRequest object. Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: "1.0" name: openshift-image-registry-ibmcos namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: IBMCloudProviderSpec policies: - attributes: - name: serviceName value: cloud-object-storage roles: - crn:v1:bluemix:public:iam::::role:Viewer - crn:v1:bluemix:public:iam::::role:Operator - crn:v1:bluemix:public:iam::::role:Editor - crn:v1:bluemix:public:iam::::serviceRole:Reader - crn:v1:bluemix:public:iam::::serviceRole:Writer - attributes: - name: resourceType value: resource-group roles: - crn:v1:bluemix:public:iam::::role:Viewer Create the service ID for each credential request, assign the policies defined, create an API key in IBM Cloud VPC, and generate the secret: USD ccoctl ibmcloud create-service-id \ --credentials-requests-dir <path_to_store_credential_request_templates> \ 1 --name <cluster_name> \ 2 --output-dir <installation_directory> \ --resource-group-name <resource_group_name> 3 1 The directory where the credential requests are stored. 2 The name of the OpenShift Container Platform cluster. 3 Optional: The name of the resource group used for scoping the access policies. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. If an incorrect resource group name is provided, the installation fails during the bootstrap phase. To find the correct resource group name, run the following command: USD grep resourceGroupName <installation_directory>/manifests/cluster-infrastructure-02-config.yml Verification Ensure that the appropriate secrets were generated in your cluster's manifests directory. 9.5.8. Network configuration phases There are two phases prior to OpenShift Container Platform installation where you can customize the network configuration. Phase 1 You can customize the following network-related fields in the install-config.yaml file before you create the manifest files: networking.networkType networking.clusterNetwork networking.serviceNetwork networking.machineNetwork For more information on these fields, refer to Installation configuration parameters . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. Important The CIDR range 172.17.0.0/16 is reserved by libVirt. You cannot use this range or any range that overlaps with this range for any networks in your cluster. 
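As an illustration of the phase 1 fields listed above, a customized networking stanza in the install-config.yaml file might look like the following sketch. The CIDR values are placeholders taken from the documented defaults, not recommendations; the blocks must not overlap with each other or with the reserved 172.17.0.0/16 range:
networking:
  networkType: OpenShiftSDN
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16
  machineNetwork:
  - cidr: 10.0.0.0/16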
Phase 2 After creating the manifest files by running openshift-install create manifests , you can define a customized Cluster Network Operator manifest with only the fields you want to modify. You can use the manifest to specify advanced network configuration. You cannot override the values specified in phase 1 in the install-config.yaml file during phase 2. However, you can further customize the cluster network provider during phase 2. 9.5.9. Specifying advanced network configuration You can use advanced network configuration for your cluster network provider to integrate your cluster into your existing network environment. You can specify advanced network configuration only before you install the cluster. Important Customizing your network configuration by modifying the OpenShift Container Platform manifest files created by the installation program is not supported. Applying a manifest file that you create, as in the following procedure, is supported. Prerequisites You have created the install-config.yaml file and completed any modifications to it. Procedure Change to the directory that contains the installation program and create the manifests: USD ./openshift-install create manifests --dir <installation_directory> 1 1 <installation_directory> specifies the name of the directory that contains the install-config.yaml file for your cluster. Create a stub manifest file for the advanced network configuration that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory: apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: Specify the advanced network configuration for your cluster in the cluster-network-03-config.yml file, such as in the following example: Specify a different VXLAN port for the OpenShift SDN network provider apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: openshiftSDNConfig: vxlanPort: 4800 Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program consumes the manifests/ directory when you create the Ignition config files. 9.5.10. Cluster Network Operator configuration The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster . The CR specifies the fields for the Network API in the operator.openshift.io API group. The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group and these fields cannot be changed: clusterNetwork IP address pools from which pod IP addresses are allocated. serviceNetwork IP address pool for services. defaultNetwork.type Cluster network provider, such as OpenShift SDN or OVN-Kubernetes. You can specify the cluster network provider configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster . 9.5.10.1. Cluster Network Operator configuration object The fields for the Cluster Network Operator (CNO) are described in the following table: Table 9.11. Cluster Network Operator configuration object Field Type Description metadata.name string The name of the CNO object. This name is always cluster . spec.clusterNetwork array A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. 
For example: spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.serviceNetwork array A block of IP addresses for services. The OpenShift SDN and OVN-Kubernetes Container Network Interface (CNI) network providers support only a single IP address block for the service network. For example: spec: serviceNetwork: - 172.30.0.0/14 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.defaultNetwork object Configures the Container Network Interface (CNI) cluster network provider for the cluster network. spec.kubeProxyConfig object The fields for this object specify the kube-proxy configuration. If you are using the OVN-Kubernetes cluster network provider, the kube-proxy configuration has no effect. defaultNetwork object configuration The values for the defaultNetwork object are defined in the following table: Table 9.12. defaultNetwork object Field Type Description type string Either OpenShiftSDN or OVNKubernetes . The cluster network provider is selected during installation. This value cannot be changed after cluster installation. Note OpenShift Container Platform uses the OpenShift SDN Container Network Interface (CNI) cluster network provider by default. openshiftSDNConfig object This object is only valid for the OpenShift SDN cluster network provider. ovnKubernetesConfig object This object is only valid for the OVN-Kubernetes cluster network provider. Configuration for the OpenShift SDN CNI cluster network provider The following table describes the configuration fields for the OpenShift SDN Container Network Interface (CNI) cluster network provider. Table 9.13. openshiftSDNConfig object Field Type Description mode string Configures the network isolation mode for OpenShift SDN. The default value is NetworkPolicy . The values Multitenant and Subnet are available for backwards compatibility with OpenShift Container Platform 3.x but are not recommended. This value cannot be changed after cluster installation. mtu integer The maximum transmission unit (MTU) for the VXLAN overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 50 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1450 . This value cannot be changed after cluster installation. vxlanPort integer The port to use for all VXLAN packets. The default value is 4789 . This value cannot be changed after cluster installation. If you are running in a virtualized environment with existing nodes that are part of another VXLAN network, then you might be required to change this. For example, when running an OpenShift SDN overlay on top of VMware NSX-T, you must select an alternate port for the VXLAN, because both SDNs use the same default VXLAN port number. 
On Amazon Web Services (AWS), you can select an alternate port for the VXLAN between port 9000 and port 9999 . Example OpenShift SDN configuration defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789 Configuration for the OVN-Kubernetes CNI cluster network provider The following table describes the configuration fields for the OVN-Kubernetes CNI cluster network provider. Table 9.14. ovnKubernetesConfig object Field Type Description mtu integer The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 100 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1400 . genevePort integer The port to use for all Geneve packets. The default value is 6081 . This value cannot be changed after cluster installation. policyAuditConfig object Specify a configuration object for customizing network policy audit logging. If unset, the defaults audit log settings are used. gatewayConfig object Optional: Specify a configuration object for customizing how egress traffic is sent to the node gateway. Note While migrating egress traffic, you can expect some disruption to workloads and service traffic until the Cluster Network Operator (CNO) successfully rolls out the changes. Note IPsec for the OVN-Kubernetes network provider is not supported when installing a cluster on IBM Cloud. Table 9.15. policyAuditConfig object Field Type Description rateLimit integer The maximum number of messages to generate every second per node. The default value is 20 messages per second. maxFileSize integer The maximum size for the audit log in bytes. The default value is 50000000 or 50 MB. destination string One of the following additional audit log targets: libc The libc syslog() function of the journald process on the host. udp:<host>:<port> A syslog server. Replace <host>:<port> with the host and port of the syslog server. unix:<file> A Unix Domain Socket file specified by <file> . null Do not send the audit logs to any additional target. syslogFacility string The syslog facility, such as kern , as defined by RFC5424. The default value is local0 . Table 9.16. gatewayConfig object Field Type Description routingViaHost boolean Set this field to true to send egress traffic from pods to the host networking stack. For highly-specialized installations and applications that rely on manually configured routes in the kernel routing table, you might want to route egress traffic to the host networking stack. By default, egress traffic is processed in OVN to exit the cluster and is not affected by specialized routes in the kernel routing table. The default value is false . This field has an interaction with the Open vSwitch hardware offloading feature. If you set this field to true , you do not receive the performance benefits of the offloading because egress traffic is processed by the host networking stack. 
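For illustration only, a defaultNetwork stanza that sets this field might look like the following sketch; the routingViaHost: true value is shown purely as an example of the syntax and is not a recommendation:
defaultNetwork:
  type: OVNKubernetes
  ovnKubernetesConfig:
    mtu: 1400
    genevePort: 6081
    gatewayConfig:
      routingViaHost: true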
Example OVN-Kubernetes configuration with IPSec enabled defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 kubeProxyConfig object configuration The values for the kubeProxyConfig object are defined in the following table: Table 9.17. kubeProxyConfig object Field Type Description iptablesSyncPeriod string The refresh period for iptables rules. The default value is 30s . Valid suffixes include s , m , and h and are described in the Go time package documentation. Note Because of performance improvements introduced in OpenShift Container Platform 4.3 and greater, adjusting the iptablesSyncPeriod parameter is no longer necessary. proxyArguments.iptables-min-sync-period array The minimum duration before refreshing iptables rules. This field ensures that the refresh does not happen too frequently. Valid suffixes include s , m , and h and are described in the Go time package . The default value is: kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s 9.5.11. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites Configure an account with the cloud platform that hosts your cluster. Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Note If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. 
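As a reference for the manual CSR approval mentioned above, pending requests can typically be listed and approved with standard oc commands similar to the following sketch; review each request before approving it, and treat <csr_name> as a placeholder:
oc get csr
oc adm certificate approve <csr_name>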
It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 9.5.12. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.11. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture in the Product Variant drop-down menu. Select the appropriate version in the Version drop-down menu. Click Download Now next to the OpenShift v4.11 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now next to the OpenShift v4.11 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now next to the OpenShift v4.11 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.11 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 9.5.13. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in.
Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources Accessing the web console 9.5.14. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.11, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console . After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources About remote health monitoring 9.5.15. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting . 9.6. Uninstalling a cluster on IBM Cloud VPC You can remove a cluster that you deployed to IBM Cloud VPC. 9.6.1. Removing a cluster that uses installer-provisioned infrastructure You can remove a cluster that uses installer-provisioned infrastructure from your cloud. Note After uninstallation, check your cloud provider for any resources not removed properly, especially with User Provisioned Infrastructure (UPI) clusters. There might be resources that the installer did not create or that the installer is unable to access. Prerequisites You have a copy of the installation program that you used to deploy the cluster. You have the files that the installation program generated when you created your cluster. You have configured the ccoctl binary. You have installed the IBM Cloud CLI and installed or updated the VPC infrastructure service plugin. For more information, see "Prerequisites" in the IBM Cloud VPC CLI documentation . Procedure If the following conditions are met, this step is required: The installer created a resource group as part of the installation process. You or one of your applications created persistent volume claims (PVCs) after the cluster was deployed. In this case, the PVCs are not removed when uninstalling the cluster, which might prevent the resource group from being successfully removed. To prevent a failure: Log in to the IBM Cloud using the CLI. To list the PVCs, run the following command: USD ibmcloud is volumes --resource-group-name <infrastructure_id> For more information about listing volumes, see the IBM Cloud VPC CLI documentation . To delete the PVCs, run the following command: USD ibmcloud is volume-delete --force <volume_id> For more information about deleting volumes, see the IBM Cloud VPC CLI documentation . Export the IBM Cloud API key that was created as part of the installation process. USD export IC_API_KEY=<api_key> Note You must set the variable name exactly as specified. The installation program expects the variable name to be present to remove the service IDs that were created when the cluster was installed. On the computer that you used to install the cluster, go to the directory that contains the installation program, and run the following command: USD ./openshift-install destroy cluster \ --dir <installation_directory> --log-level info 1 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different details, specify warn , debug , or error instead of info .
Note You must specify the directory that contains the cluster definition files for your cluster. The installation program requires the metadata.json file in this directory to delete the cluster. Remove the manual CCO credentials that were created for the cluster: USD ccoctl ibmcloud delete-service-id \ --credentials-requests-dir <path_to_credential_requests_directory> \ --name <cluster_name> Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. Optional: Delete the <installation_directory> directory and the OpenShift Container Platform installation program. | [
"ibmcloud plugin install cis",
"ibmcloud cis instance-create <instance_name> standard 1",
"ibmcloud cis instance-set <instance_name> 1",
"ibmcloud cis domain-add <domain_name> 1",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)",
"oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl\" -a ~/.pull-secret",
"chmod 775 ccoctl",
"ccoctl --help",
"OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"export IC_API_KEY=<api_key>",
"./openshift-install create install-config --dir <installation_directory> 1",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 3 hyperthreading: Enabled 4 name: master platform: ibmcloud: {} replicas: 3 compute: 5 6 - hyperthreading: Enabled 7 name: worker platform: ibmcloud: {} replicas: 3 metadata: name: test-cluster 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 172.30.0.0/16 platform: ibmcloud: region: us-south 9 credentialsMode: Manual publish: External pullSecret: '{\"auths\": ...}' 10 fips: false 11 sshKey: ssh-ed25519 AAAA... 12",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----",
"./openshift-install wait-for install-complete --log-level debug",
"apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: amd64 hyperthreading: Enabled",
"openshift-install create manifests --dir <installation_directory>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --cloud=ibmcloud --credentials-requests USDRELEASE_IMAGE --to=<path_to_credential_requests_directory> 1",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: \"1.0\" name: openshift-image-registry-ibmcos namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: IBMCloudProviderSpec policies: - attributes: - name: serviceName value: cloud-object-storage roles: - crn:v1:bluemix:public:iam::::role:Viewer - crn:v1:bluemix:public:iam::::role:Operator - crn:v1:bluemix:public:iam::::role:Editor - crn:v1:bluemix:public:iam::::serviceRole:Reader - crn:v1:bluemix:public:iam::::serviceRole:Writer - attributes: - name: resourceType value: resource-group roles: - crn:v1:bluemix:public:iam::::role:Viewer",
"ccoctl ibmcloud create-service-id --credentials-requests-dir <path_to_store_credential_request_templates> \\ 1 --name <cluster_name> \\ 2 --output-dir <installation_directory> --resource-group-name <resource_group_name> 3",
"grep resourceGroupName <installation_directory>/manifests/cluster-infrastructure-02-config.yml",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"export IC_API_KEY=<api_key>",
"./openshift-install create install-config --dir <installation_directory> 1",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 3 hyperthreading: Enabled 4 name: master platform: ibmcloud: {} replicas: 3 compute: 5 6 - hyperthreading: Enabled 7 name: worker platform: ibmcloud: {} replicas: 3 metadata: name: test-cluster 8 networking: networking: 9 clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 172.30.0.0/16 platform: ibmcloud: region: us-south 10 credentialsMode: Manual publish: External pullSecret: '{\"auths\": ...}' 11 fips: false 12 sshKey: ssh-ed25519 AAAA... 13",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----",
"./openshift-install wait-for install-complete --log-level debug",
"apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: amd64 hyperthreading: Enabled",
"openshift-install create manifests --dir <installation_directory>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --cloud=ibmcloud --credentials-requests USDRELEASE_IMAGE --to=<path_to_credential_requests_directory> 1",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: \"1.0\" name: openshift-image-registry-ibmcos namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: IBMCloudProviderSpec policies: - attributes: - name: serviceName value: cloud-object-storage roles: - crn:v1:bluemix:public:iam::::role:Viewer - crn:v1:bluemix:public:iam::::role:Operator - crn:v1:bluemix:public:iam::::role:Editor - crn:v1:bluemix:public:iam::::serviceRole:Reader - crn:v1:bluemix:public:iam::::serviceRole:Writer - attributes: - name: resourceType value: resource-group roles: - crn:v1:bluemix:public:iam::::role:Viewer",
"ccoctl ibmcloud create-service-id --credentials-requests-dir <path_to_store_credential_request_templates> \\ 1 --name <cluster_name> \\ 2 --output-dir <installation_directory> --resource-group-name <resource_group_name> 3",
"grep resourceGroupName <installation_directory>/manifests/cluster-infrastructure-02-config.yml",
"./openshift-install create manifests --dir <installation_directory> 1",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec:",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: openshiftSDNConfig: vxlanPort: 4800",
"spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23",
"spec: serviceNetwork: - 172.30.0.0/14",
"defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789",
"defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081",
"kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"ibmcloud is volumes --resource-group-name <infrastructure_id>",
"ibmcloud is volume-delete --force <volume_id>",
"export IC_API_KEY=<api_key>",
"./openshift-install destroy cluster --dir <installation_directory> --log-level info 1 2",
"ccoctl ibmcloud delete-service-id --credentials-requests-dir <path_to_credential_requests_directory> --name <cluster_name>"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/installing/installing-on-ibm-cloud-vpc |
Chapter 11. Troubleshooting CephFS PVC creation in external mode | Chapter 11. Troubleshooting CephFS PVC creation in external mode If you have updated the Red Hat Ceph Storage cluster from a version lower than 4.1.1 to the latest release and it is not a freshly deployed cluster, you must manually set the application type for the CephFS pool on the Red Hat Ceph Storage cluster to enable CephFS Persistent Volume Claim (PVC) creation in external mode. Check for CephFS PVCs stuck in Pending status. Example output : Check the output of the oc describe command to see the events for the respective PVC. The expected error message is cephfs_metadata/csi.volumes.default/csi.volume.pvc-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx: (1) Operation not permitted) Example output: Check the settings for the <cephfs metadata pool name> (here cephfs_metadata ) and <cephfs data pool name> (here cephfs_data ). To run the command, you need jq preinstalled on the Red Hat Ceph Storage client node. Set the application type for the CephFS pool. Run the following commands on the Red Hat Ceph Storage client node : Verify that the settings are applied. Check the CephFS PVC status again. The PVC should now be in Bound state. Example output : | [
"oc get pvc -n <namespace>",
"NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE ngx-fs-pxknkcix20-pod Pending ocs-external-storagecluster-cephfs 28h [...]",
"oc describe pvc ngx-fs-pxknkcix20-pod -n nginx-file",
"Name: ngx-fs-pxknkcix20-pod Namespace: nginx-file StorageClass: ocs-external-storagecluster-cephfs Status: Pending Volume: Labels: <none> Annotations: volume.beta.kubernetes.io/storage-provisioner: openshift-storage.cephfs.csi.ceph.com Finalizers: [kubernetes.io/pvc-protection] Capacity: Access Modes: VolumeMode: Filesystem Mounted By: ngx-fs-oyoe047v2bn2ka42jfgg-pod-hqhzf Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning ProvisioningFailed 107m (x245 over 22h) openshift-storage.cephfs.csi.ceph.com_csi-cephfsplugin-provisioner-5f8b66cc96-hvcqp_6b7044af-c904-4795-9ce5-bf0cf63cc4a4 (combined from similar events): failed to provision volume with StorageClass \"ocs-external-storagecluster-cephfs\": rpc error: code = Internal desc = error (an error (exit status 1) occurred while running rados args: [-m 192.168.13.212:6789,192.168.13.211:6789,192.168.13.213:6789 --id csi-cephfs-provisioner --keyfile= stripped -c /etc/ceph/ceph.conf -p cephfs_metadata getomapval csi.volumes.default csi.volume.pvc-1ac0c6e6-9428-445d-bbd6-1284d54ddb47 /tmp/omap-get-186436239 --namespace=csi]) occurred, command output streams is ( error getting omap value cephfs_metadata/csi.volumes.default/csi.volume.pvc-1ac0c6e6-9428-445d-bbd6-1284d54ddb47: (1) Operation not permitted)",
"ceph osd pool ls detail --format=json | jq '.[] | select(.pool_name| startswith(\"cephfs\")) | .pool_name, .application_metadata' \"cephfs_data\" { \"cephfs\": {} } \"cephfs_metadata\" { \"cephfs\": {} }",
"ceph osd pool application set <cephfs metadata pool name> cephfs metadata cephfs",
"ceph osd pool application set <cephfs data pool name> cephfs data cephfs",
"ceph osd pool ls detail --format=json | jq '.[] | select(.pool_name| startswith(\"cephfs\")) | .pool_name, .application_metadata' \"cephfs_data\" { \"cephfs\": { \"data\": \"cephfs\" } } \"cephfs_metadata\" { \"cephfs\": { \"metadata\": \"cephfs\" } }",
"oc get pvc -n <namespace>",
"NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE ngx-fs-pxknkcix20-pod Bound pvc-1ac0c6e6-9428-445d-bbd6-1284d54ddb47 1Mi RWO ocs-external-storagecluster-cephfs 29h [...]"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/troubleshooting_openshift_data_foundation/troubleshooting-cephfs-pvc-creation-in-external-mode_rhodf |
Registry | Registry OpenShift Container Platform 4.17 Configuring registries for OpenShift Container Platform Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/registry/index |
Chapter 38. Cross-instrumentation of SystemTap | Chapter 38. Cross-instrumentation of SystemTap Cross-instrumentation of SystemTap is the process of creating SystemTap instrumentation modules from a SystemTap script on one system to be used on another system that does not have SystemTap fully deployed. 38.1. SystemTap cross-instrumentation When you run a SystemTap script, a kernel module is built out of that script. SystemTap then loads the module into the kernel. Normally, SystemTap scripts can run only on systems where SystemTap is deployed. To run SystemTap on ten systems, SystemTap needs to be deployed on all those systems. In some cases, this might be neither feasible nor desired. For example, corporate policy might prohibit you from installing packages that provide compilers or debug information on specific machines, which will prevent the deployment of SystemTap. To work around this, use cross-instrumentation . Cross-instrumentation is the process of generating SystemTap instrumentation modules from a SystemTap script on one system to be used on another system. This process offers the following benefits: The kernel information packages for various machines can be installed on a single host machine. Important Kernel packaging bugs may prevent the installation. In such cases, the kernel-debuginfo and kernel-devel packages for the host system and target system must match. If a bug occurs, report the bug at https://bugzilla.redhat.com/ . Each target machine needs only one package to be installed to use the generated SystemTap instrumentation module: systemtap-runtime . Important The host system must be the same architecture and running the same distribution of Linux as the target system in order for the built instrumentation module to work. Terminology instrumentation module The kernel module built from a SystemTap script; the SystemTap module is built on the host system , and will be loaded on the target kernel of the target system . host system The system on which the instrumentation modules (from SystemTap scripts) are compiled, to be loaded on target systems . target system The system on which the instrumentation module (built from SystemTap scripts) is loaded and run. target kernel The kernel of the target system . This is the kernel that loads and runs the instrumentation module . 38.2. Initializing cross-instrumentation of SystemTap Initialize cross-instrumentation of SystemTap to build SystemTap instrumentation modules from a SystemTap script on one system and use them on another system that does not have SystemTap fully deployed. Prerequisites SystemTap is installed on the host system as described in Installing Systemtap . The systemtap-runtime package is installed on each target system : Both the host system and target system are the same architecture. Both the host system and target system are running the same major version of Red Hat Enterprise Linux (such as Red Hat Enterprise Linux 8); they can be running different minor versions (such as 8.1 and 8.2). Important Kernel packaging bugs may prevent multiple kernel-debuginfo and kernel-devel packages from being installed on one system. In such cases, the minor version for the host system and target system must match. If a bug occurs, report it at https://bugzilla.redhat.com/ . Procedure Determine the kernel running on each target system : Repeat this step for each target system . On the host system , install the target kernel and related packages for each target system by the method described in Installing Systemtap .
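For example, if a target system reports 4.18.0-305.el8.x86_64 from uname -r, the host system needs the kernel-devel and kernel-debuginfo packages for that exact release. The following command is an illustrative sketch; the kernel version is a placeholder, and it assumes that the matching base and debug repositories are enabled on the host system:
yum install kernel-devel-<target_kernel_version> kernel-debuginfo-<target_kernel_version>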
Build an instrumentation module on the host system , then copy it to and run it on the target system , either: Using remote implementation: This command remotely implements the specified script on the target system . You must ensure an SSH connection can be made to the target system from the host system for this to be successful. Manually: Build the instrumentation module on the host system : Here, kernel_version refers to the version of the target kernel determined in step 1, script refers to the script to be converted into an instrumentation module , and module_name is the desired name of the instrumentation module . The -p4 option tells SystemTap to not load and run the compiled module. Once the instrumentation module is compiled, copy it to the target system and load it using the following command: | [
"yum install systemtap-runtime",
"uname -r",
"stap --remote target_system script",
"stap -r kernel_version script -m module_name -p 4",
"staprun module_name .ko"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/monitoring_and_managing_system_status_and_performance/cross-instrumentation-of-systemtap_monitoring-and-managing-system-status-and-performance |
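The manual procedure above can also be scripted end to end. The following is a minimal sketch, not part of the original chapter: the target host name, script file, and module name are placeholder assumptions, and it presumes SSH access from the host system to the target system with systemtap-runtime already installed there.

#!/usr/bin/env bash
# Cross-instrumentation sketch: build the module on the host system,
# then copy and run it on the target system.
set -euo pipefail

TARGET=target.example.com   # placeholder target system
SCRIPT=./probe.stp          # placeholder SystemTap script
MODULE=cross_probe          # desired instrumentation module name

# Step 1: determine the kernel running on the target system.
KERNEL_VERSION=$(ssh "${TARGET}" uname -r)
echo "Target kernel: ${KERNEL_VERSION}"

# Step 2: build the instrumentation module on the host system for that
# kernel; -p 4 stops after compilation, so the module is not loaded locally.
stap -r "${KERNEL_VERSION}" "${SCRIPT}" -m "${MODULE}" -p 4

# Step 3: copy the compiled module to the target system and load it there
# with staprun (only systemtap-runtime is needed on the target).
scp "${MODULE}.ko" "${TARGET}:/tmp/"
ssh "${TARGET}" staprun "/tmp/${MODULE}.ko"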
Chapter 5. Topics | Chapter 5. Topics Messages in Kafka are always sent to or received from a topic. This chapter describes how to configure and manage Kafka topics. 5.1. Partitions and replicas Messages in Kafka are always sent to or received from a topic. A topic is always split into one or more partitions. Partitions act as shards. That means that every message sent by a producer is always written only into a single partition. Thanks to the sharding of messages into different partitions, topics are easy to scale horizontally. Each partition can have one or more replicas, which will be stored on different brokers in the cluster. When creating a topic you can configure the number of replicas using the replication factor . Replication factor defines the number of copies which will be held within the cluster. One of the replicas for a given partition will be elected as a leader. The leader replica will be used by the producers to send new messages and by the consumers to consume messages. The other replicas will be follower replicas. The followers replicate the leader. If the leader fails, one of the followers will automatically become the new leader. Each server acts as a leader for some of its partitions and a follower for others so the load is well balanced within the cluster. Note The replication factor determines the number of replicas including the leader and the followers. For example, if you set the replication factor to 3 , then there will be one leader and two follower replicas. 5.2. Message retention The message retention policy defines how long the messages will be stored on the Kafka brokers. It can be defined based on time, partition size, or both. For example, you can define that the messages should be kept: For 7 days Until the partition has 1GB of messages. Once the limit is reached, the oldest messages will be removed. For 7 days or until the 1GB limit has been reached. Whatever limit comes first will be used. Warning Kafka brokers store messages in log segments. The messages which are past their retention policy will be deleted only when a new log segment is created. New log segments are created when the log segment exceeds the configured log segment size. Additionally, users can request new segments to be created periodically. Additionally, Kafka brokers support a compacting policy. For a topic with the compacted policy, the broker will always keep only the last message for each key. The older messages with the same key will be removed from the partition. Because compacting is a periodically executed action, it does not happen immediately when a new message with the same key is sent to the partition. Instead it might take some time until the older messages are removed. For more information about the message retention configuration options, see Section 5.5, "Topic configuration" . 5.3. Topic auto-creation When a producer or consumer tries to send messages to or receive messages from a topic that does not exist, Kafka will, by default, automatically create that topic. This behavior is controlled by the auto.create.topics.enable configuration property, which is set to true by default. To disable it, set auto.create.topics.enable to false in the Kafka broker configuration file: 5.4. Topic deletion Kafka offers the possibility to disable deletion of topics. This is configured through the delete.topic.enable property, which is set to true by default (that is, deleting topics is possible). 
When this property is set to false , it is not possible to delete topics; all attempts to delete a topic will return success, but the topic will not be deleted. 5.5. Topic configuration Auto-created topics will use the default topic configuration which can be specified in the broker properties file. However, when creating topics manually, their configuration can be specified at creation time. It is also possible to change a topic's configuration after it has been created. The main topic configuration options for manually created topics are: cleanup.policy Configures the retention policy to delete or compact . The delete policy will delete old records. The compact policy will enable log compaction. The default value is delete . For more information about log compaction, see Kafka website . compression.type Specifies the compression which is used for stored messages. Valid values are gzip , snappy , lz4 , uncompressed (no compression) and producer (retain the compression codec used by the producer). The default value is producer . max.message.bytes The maximum size of a batch of messages allowed by the Kafka broker, in bytes. The default value is 1000012 . min.insync.replicas The minimum number of replicas which must be in sync for a write to be considered successful. The default value is 1 . retention.ms Maximum number of milliseconds for which log segments will be retained. Log segments older than this value will be deleted. The default value is 604800000 (7 days). retention.bytes The maximum number of bytes a partition will retain. Once the partition size grows over this limit, the oldest log segments will be deleted. A value of -1 indicates no limit. The default value is -1 . segment.bytes The maximum file size of a single commit log segment file in bytes. When the segment reaches this size, a new segment will be started. The default value is 1073741824 bytes (1 gibibyte). For a list of all supported topic configuration options, see Appendix B, Topic configuration parameters . The defaults for auto-created topics can be specified in the Kafka broker configuration using similar options: log.cleanup.policy See cleanup.policy above. compression.type See compression.type above. message.max.bytes See max.message.bytes above. min.insync.replicas See min.insync.replicas above. log.retention.ms See retention.ms above. log.retention.bytes See retention.bytes above. log.segment.bytes See segment.bytes above. default.replication.factor Default replication factor for automatically created topics. Default value is 1 . num.partitions Default number of partitions for automatically created topics. Default value is 1 . For a list of all supported Kafka broker configuration options, see Appendix A, Broker configuration parameters . 5.6. Internal topics Internal topics are created and used internally by the Kafka brokers and clients. Kafka has several internal topics. These are used to store consumer offsets ( __consumer_offsets ) or transaction state ( __transaction_state ). These topics can be configured using dedicated Kafka broker configuration options starting with the prefix offsets.topic. and transaction.state.log. . The most important configuration options are: offsets.topic.replication.factor Number of replicas for __consumer_offsets topic. The default value is 3 . offsets.topic.num.partitions Number of partitions for __consumer_offsets topic. The default value is 50 . transaction.state.log.replication.factor Number of replicas for __transaction_state topic. The default value is 3 . 
transaction.state.log.num.partitions Number of partitions for __transaction_state topic. The default value is 50 . transaction.state.log.min.isr Minimum number of replicas that must acknowledge a write to the __transaction_state topic to be considered successful. If this minimum cannot be met, then the producer will fail with an exception. The default value is 2 . 5.7. Creating a topic The kafka-topics.sh tool can be used to manage topics. kafka-topics.sh is part of the AMQ Streams distribution and can be found in the bin directory. Prerequisites AMQ Streams cluster is installed and running Creating a topic Create a topic using the kafka-topics.sh utility and specify the following: Host and port of the Kafka broker in the --bootstrap-server option. Use the --create option to specify that a new topic should be created. Topic name in the --topic option. The number of partitions in the --partitions option. Topic replication factor in the --replication-factor option. You can also override some of the default topic configuration options using the option --config . This option can be used multiple times to override different options. bin/kafka-topics.sh --bootstrap-server <BrokerAddress> --create --topic <TopicName> --partitions <NumberOfPartitions> --replication-factor <ReplicationFactor> --config <Option1> = <Value1> --config <Option2> = <Value2> Example of the command to create a topic named mytopic bin/kafka-topics.sh --bootstrap-server localhost:9092 --create --topic mytopic --partitions 50 --replication-factor 3 --config cleanup.policy=compact --config min.insync.replicas=2 Verify that the topic exists using kafka-topics.sh . bin/kafka-topics.sh --bootstrap-server <BrokerAddress> --describe --topic <TopicName> Example of the command to describe a topic named mytopic bin/kafka-topics.sh --bootstrap-server localhost:9092 --describe --topic mytopic Additional resources For more information about topic configuration, see Section 5.5, "Topic configuration" . For a list of all supported topic configuration options, see Appendix B, Topic configuration parameters . 5.8. Listing and describing topics The kafka-topics.sh tool can be used to list and describe topics. kafka-topics.sh is part of the AMQ Streams distribution and can be found in the bin directory. Prerequisites AMQ Streams cluster is installed and running Topic mytopic exists Describing a topic Describe a topic using the kafka-topics.sh utility and specify the following: Host and port of the Kafka broker in the --bootstrap-server option. Use the --describe option to specify that you want to describe a topic. Specify the topic name in the --topic option. When the --topic option is omitted, all available topics are described. bin/kafka-topics.sh --bootstrap-server <BrokerAddress> --describe --topic <TopicName> Example of the command to describe a topic named mytopic bin/kafka-topics.sh --bootstrap-server localhost:9092 --describe --topic mytopic The describe command will list all partitions and replicas which belong to this topic. It will also list all topic configuration options. Additional resources For more information about topic configuration, see Section 5.5, "Topic configuration" . For more information about creating topics, see Section 5.7, "Creating a topic" . 5.9. Modifying a topic configuration The kafka-configs.sh tool can be used to modify topic configurations. kafka-configs.sh is part of the AMQ Streams distribution and can be found in the bin directory. 
Prerequisites AMQ Streams cluster is installed and running Topic mytopic exists Modify topic configuration Use the kafka-configs.sh tool to get the current configuration. Specify the host and port of the Kafka broker in the --bootstrap-server option. Set the --entity-type to topics and --entity-name to the name of your topic. Use the --describe option to get the current configuration. bin/kafka-configs.sh --bootstrap-server <BrokerAddress> --entity-type topics --entity-name <TopicName> --describe Example of the command to get configuration of a topic named mytopic bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type topics --entity-name mytopic --describe Use the kafka-configs.sh tool to change the configuration. Specify the host and port of the Kafka broker in the --bootstrap-server option. Set the --entity-type to topics and --entity-name to the name of your topic. Use the --alter option to modify the current configuration. Specify the options you want to add or change in the --add-config option. bin/kafka-configs.sh --bootstrap-server <BrokerAddress> --entity-type topics --entity-name <TopicName> --alter --add-config <Option> = <Value> Example of the command to change configuration of a topic named mytopic bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type topics --entity-name mytopic --alter --add-config min.insync.replicas=1 Use the kafka-configs.sh tool to delete an existing configuration option. Specify the host and port of the Kafka broker in the --bootstrap-server option. Set the --entity-type to topics and --entity-name to the name of your topic. Use the --alter option together with the --delete-config option to remove an existing configuration option. Specify the options you want to remove in the --delete-config option. bin/kafka-configs.sh --bootstrap-server <BrokerAddress> --entity-type topics --entity-name <TopicName> --alter --delete-config <Option> Example of the command to delete a configuration option of a topic named mytopic bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type topics --entity-name mytopic --alter --delete-config min.insync.replicas Additional resources For more information about topic configuration, see Section 5.5, "Topic configuration" . For more information about creating topics, see Section 5.7, "Creating a topic" . For a list of all supported topic configuration options, see Appendix B, Topic configuration parameters . 5.10. Deleting a topic The kafka-topics.sh tool can be used to manage topics. kafka-topics.sh is part of the AMQ Streams distribution and can be found in the bin directory. Prerequisites AMQ Streams cluster is installed and running Topic mytopic exists Deleting a topic Delete a topic using the kafka-topics.sh utility and specify the following: Host and port of the Kafka broker in the --bootstrap-server option. Use the --delete option to specify that an existing topic should be deleted. Topic name must be specified in the --topic option. bin/kafka-topics.sh --bootstrap-server <BrokerAddress> --delete --topic <TopicName> Example of the command to delete a topic named mytopic bin/kafka-topics.sh --bootstrap-server localhost:9092 --delete --topic mytopic Verify that the topic was deleted using kafka-topics.sh . bin/kafka-topics.sh --bootstrap-server <BrokerAddress> --list Example of the command to list all topics bin/kafka-topics.sh --bootstrap-server localhost:9092 --list Additional resources For more information about creating topics, see Section 5.7, "Creating a topic" . | [
"auto.create.topics.enable=false",
"delete.topic.enable=false",
"bin/kafka-topics.sh --bootstrap-server <BrokerAddress> --create --topic <TopicName> --partitions <NumberOfPartitions> --replication-factor <ReplicationFactor> --config <Option1> = <Value1> --config <Option2> = <Value2>",
"bin/kafka-topics.sh --bootstrap-server localhost:9092 --create --topic mytopic --partitions 50 --replication-factor 3 --config cleanup.policy=compact --config min.insync.replicas=2",
"bin/kafka-topics.sh --bootstrap-server <BrokerAddress> --describe --topic <TopicName>",
"bin/kafka-topics.sh --bootstrap-server localhost:9092 --describe --topic mytopic",
"bin/kafka-topics.sh --bootstrap-server <BrokerAddress> --describe --topic <TopicName>",
"bin/kafka-topics.sh --bootstrap-server localhost:9092 --describe --topic mytopic",
"bin/kafka-configs.sh --bootstrap-server <BrokerAddress> --entity-type topics --entity-name <TopicName> --describe",
"bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type topics --entity-name mytopic --describe",
"bin/kafka-configs.sh --bootstrap-server <BrokerAddress> --entity-type topics --entity-name <TopicName> --alter --add-config <Option> = <Value>",
"bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type topics --entity-name mytopic --alter --add-config min.insync.replicas=1",
"bin/kafka-configs.sh --bootstrap-server <BrokerAddress> --entity-type topics --entity-name <TopicName> --alter --delete-config <Option>",
"bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type topics --entity-name mytopic --alter --delete-config min.insync.replicas",
"bin/kafka-topics.sh --bootstrap-server <BrokerAddress> --delete --topic <TopicName>",
"bin/kafka-topics.sh --bootstrap-server localhost:9092 --delete --topic mytopic",
"bin/kafka-topics.sh --bootstrap-server <BrokerAddress> --list",
"bin/kafka-topics.sh --bootstrap-server localhost:9092 --list"
] | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.1/html/using_amq_streams_on_rhel/topics-str |
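As a consolidated illustration of the retention settings and tooling described above, the following sketch creates a topic limited to 7 days or roughly 1 GiB per partition, then inspects and adjusts it. It assumes a broker listening on localhost:9092, a cluster with at least three brokers, and that it is run from the AMQ Streams installation directory; the topic name is a placeholder.

# Create the topic: messages are kept for 7 days or until a partition holds
# about 1 GiB, whichever limit is reached first.
bin/kafka-topics.sh --bootstrap-server localhost:9092 --create \
  --topic my-retained-topic --partitions 3 --replication-factor 3 \
  --config retention.ms=604800000 --config retention.bytes=1073741824

# Confirm the partitions, replicas, and configuration overrides.
bin/kafka-topics.sh --bootstrap-server localhost:9092 --describe \
  --topic my-retained-topic

# Later, tighten the time-based retention to 3 days without recreating the topic.
bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type topics \
  --entity-name my-retained-topic --alter --add-config retention.ms=259200000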
Chapter 20. API reference | Chapter 20. API reference 20.1. 5.6 Logging API reference 20.1.1. Logging 5.6 API reference 20.1.1.1. ClusterLogForwarder ClusterLogForwarder is an API to configure forwarding logs. You configure forwarding by specifying a list of pipelines , which forward from a set of named inputs to a set of named outputs. There are built-in input names for common log categories, and you can define custom inputs to do additional filtering. There is a built-in output name for the default openshift log store, but you can define your own outputs with a URL and other connection information to forward logs to other stores or processors, inside or outside the cluster. For more details see the documentation on the API fields. Property Type Description spec object Specification of the desired behavior of ClusterLogForwarder status object Status of the ClusterLogForwarder 20.1.1.1.1. .spec 20.1.1.1.1.1. Description ClusterLogForwarderSpec defines how logs should be forwarded to remote targets. 20.1.1.1.1.1.1. Type object Property Type Description inputs array (optional) Inputs are named filters for log messages to be forwarded. outputDefaults object (optional) DEPRECATED OutputDefaults specify forwarder config explicitly for the default store. outputs array (optional) Outputs are named destinations for log messages. pipelines array Pipelines forward the messages selected by a set of inputs to a set of outputs. 20.1.1.1.2. .spec.inputs[] 20.1.1.1.2.1. Description InputSpec defines a selector of log messages. 20.1.1.1.2.1.1. Type array Property Type Description application object (optional) Application, if present, enables named set of application logs that name string Name used to refer to the input of a pipeline . 20.1.1.1.3. .spec.inputs[].application 20.1.1.1.3.1. Description Application log selector. All conditions in the selector must be satisfied (logical AND) to select logs. 20.1.1.1.3.1.1. Type object Property Type Description namespaces array (optional) Namespaces from which to collect application logs. selector object (optional) Selector for logs from pods with matching labels. 20.1.1.1.4. .spec.inputs[].application.namespaces[] 20.1.1.1.4.1. Description 20.1.1.1.4.1.1. Type array 20.1.1.1.5. .spec.inputs[].application.selector 20.1.1.1.5.1. Description A label selector is a label query over a set of resources. 20.1.1.1.5.1.1. Type object Property Type Description matchLabels object (optional) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels 20.1.1.1.6. .spec.inputs[].application.selector.matchLabels 20.1.1.1.6.1. Description 20.1.1.1.6.1.1. Type object 20.1.1.1.7. .spec.outputDefaults 20.1.1.1.7.1. Description 20.1.1.1.7.1.1. Type object Property Type Description elasticsearch object (optional) Elasticsearch OutputSpec default values 20.1.1.1.8. .spec.outputDefaults.elasticsearch 20.1.1.1.8.1. Description ElasticsearchStructuredSpec is spec related to structured log changes to determine the elasticsearch index 20.1.1.1.8.1.1. Type object Property Type Description enableStructuredContainerLogs bool (optional) EnableStructuredContainerLogs enables multi-container structured logs to allow structuredTypeKey string (optional) StructuredTypeKey specifies the metadata key to be used as name of elasticsearch index structuredTypeName string (optional) StructuredTypeName specifies the name of elasticsearch schema 20.1.1.1.9. .spec.outputs[] 20.1.1.1.9.1. Description Output defines a destination for log messages. 20.1.1.1.9.1.1. 
Type array Property Type Description syslog object (optional) fluentdForward object (optional) elasticsearch object (optional) kafka object (optional) cloudwatch object (optional) loki object (optional) googleCloudLogging object (optional) splunk object (optional) name string Name used to refer to the output from a pipeline . secret object (optional) Secret for authentication. tls object TLS contains settings for controlling options on TLS client connections. type string Type of output plugin. url string (optional) URL to send log records to. 20.1.1.1.10. .spec.outputs[].secret 20.1.1.1.10.1. Description OutputSecretSpec is a secret reference containing name only, no namespace. 20.1.1.1.10.1.1. Type object Property Type Description name string Name of a secret in the namespace configured for log forwarder secrets. 20.1.1.1.11. .spec.outputs[].tls 20.1.1.1.11.1. Description OutputTLSSpec contains options for TLS connections that are agnostic to the output type. 20.1.1.1.11.1.1. Type object Property Type Description insecureSkipVerify bool If InsecureSkipVerify is true, then the TLS client will be configured to ignore errors with certificates. 20.1.1.1.12. .spec.pipelines[] 20.1.1.1.12.1. Description PipelinesSpec link a set of inputs to a set of outputs. 20.1.1.1.12.1.1. Type array Property Type Description detectMultilineErrors bool (optional) DetectMultilineErrors enables multiline error detection of container logs inputRefs array InputRefs lists the names ( input.name ) of inputs to this pipeline. labels object (optional) Labels applied to log records passing through this pipeline. name string (optional) Name is optional, but must be unique in the pipelines list if provided. outputRefs array OutputRefs lists the names ( output.name ) of outputs from this pipeline. parse string (optional) Parse enables parsing of log entries into structured logs 20.1.1.1.13. .spec.pipelines[].inputRefs[] 20.1.1.1.13.1. Description 20.1.1.1.13.1.1. Type array 20.1.1.1.14. .spec.pipelines[].labels 20.1.1.1.14.1. Description 20.1.1.1.14.1.1. Type object 20.1.1.1.15. .spec.pipelines[].outputRefs[] 20.1.1.1.15.1. Description 20.1.1.1.15.1.1. Type array 20.1.1.1.16. .status 20.1.1.1.16.1. Description ClusterLogForwarderStatus defines the observed state of ClusterLogForwarder 20.1.1.1.16.1.1. Type object Property Type Description conditions object Conditions of the log forwarder. inputs Conditions Inputs maps input name to condition of the input. outputs Conditions Outputs maps output name to condition of the output. pipelines Conditions Pipelines maps pipeline name to condition of the pipeline. 20.1.1.1.17. .status.conditions 20.1.1.1.17.1. Description 20.1.1.1.17.1.1. Type object 20.1.1.1.18. .status.inputs 20.1.1.1.18.1. Description 20.1.1.1.18.1.1. Type Conditions 20.1.1.1.19. .status.outputs 20.1.1.1.19.1. Description 20.1.1.1.19.1.1. Type Conditions 20.1.1.1.20. .status.pipelines 20.1.1.1.20.1. Description 20.1.1.1.20.1.1. Type Conditions== ClusterLogging A Red Hat OpenShift Logging instance. ClusterLogging is the Schema for the clusterloggings API Property Type Description spec object Specification of the desired behavior of ClusterLogging status object Status defines the observed state of ClusterLogging 20.1.1.1.21. .spec 20.1.1.1.21.1. Description ClusterLoggingSpec defines the desired state of ClusterLogging 20.1.1.1.21.1.1. Type object Property Type Description collection object Specification of the Collection component for the cluster curation object (DEPRECATED) (optional) Deprecated. 
Specification of the Curation component for the cluster forwarder object (DEPRECATED) (optional) Deprecated. Specification for Forwarder component for the cluster logStore object (optional) Specification of the Log Storage component for the cluster managementState string (optional) Indicator if the resource is 'Managed' or 'Unmanaged' by the operator visualization object (optional) Specification of the Visualization component for the cluster 20.1.1.1.22. .spec.collection 20.1.1.1.22.1. Description This is the struct that will contain information pertinent to Log and event collection 20.1.1.1.22.1.1. Type object Property Type Description resources object (optional) The resource requirements for the collector nodeSelector object (optional) Define which Nodes the Pods are scheduled on. tolerations array (optional) Define the tolerations the Pods will accept fluentd object (optional) Fluentd represents the configuration for forwarders of type fluentd. logs object (DEPRECATED) (optional) Deprecated. Specification of Log Collection for the cluster type string (optional) The type of Log Collection to configure 20.1.1.1.23. .spec.collection.fluentd 20.1.1.1.23.1. Description FluentdForwarderSpec represents the configuration for forwarders of type fluentd. 20.1.1.1.23.1.1. Type object Property Type Description buffer object inFile object 20.1.1.1.24. .spec.collection.fluentd.buffer 20.1.1.1.24.1. Description FluentdBufferSpec represents a subset of fluentd buffer parameters to tune the buffer configuration for all fluentd outputs. It supports a subset of parameters to configure buffer and queue sizing, flush operations and retry flushing. For general parameters refer to: https://docs.fluentd.org/configuration/buffer-section#buffering-parameters For flush parameters refer to: https://docs.fluentd.org/configuration/buffer-section#flushing-parameters For retry parameters refer to: https://docs.fluentd.org/configuration/buffer-section#retries-parameters 20.1.1.1.24.1.1. Type object Property Type Description chunkLimitSize string (optional) ChunkLimitSize represents the maximum size of each chunk. Events will be flushInterval string (optional) FlushInterval represents the time duration to wait between two consecutive flush flushMode string (optional) FlushMode represents the mode of the flushing thread to write chunks. The mode flushThreadCount int (optional) FlushThreadCount represents the number of threads used by the fluentd buffer overflowAction string (optional) OverflowAction represents the action for the fluentd buffer plugin to retryMaxInterval string (optional) RetryMaxInterval represents the maximum time interval for exponential backoff retryTimeout string (optional) RetryTimeout represents the maximum time interval to attempt retries before giving up retryType string (optional) RetryType represents the type of retrying flush operations. Flush operations can retryWait string (optional) RetryWait represents the time duration between two consecutive retries to flush totalLimitSize string (optional) TotalLimitSize represents the threshold of node space allowed per fluentd 20.1.1.1.25. .spec.collection.fluentd.inFile 20.1.1.1.25.1. Description FluentdInFileSpec represents a subset of fluentd in-tail plugin parameters to tune the configuration for all fluentd in-tail inputs. For general parameters refer to: https://docs.fluentd.org/input/tail#parameters 20.1.1.1.25.1.1. 
Type object Property Type Description readLinesLimit int (optional) ReadLinesLimit represents the number of lines to read with each I/O operation 20.1.1.1.26. .spec.collection.logs 20.1.1.1.26.1. Description 20.1.1.1.26.1.1. Type object Property Type Description fluentd object Specification of the Fluentd Log Collection component type string The type of Log Collection to configure 20.1.1.1.27. .spec.collection.logs.fluentd 20.1.1.1.27.1. Description CollectorSpec is spec to define scheduling and resources for a collector 20.1.1.1.27.1.1. Type object Property Type Description nodeSelector object (optional) Define which Nodes the Pods are scheduled on. resources object (optional) The resource requirements for the collector tolerations array (optional) Define the tolerations the Pods will accept 20.1.1.1.28. .spec.collection.logs.fluentd.nodeSelector 20.1.1.1.28.1. Description 20.1.1.1.28.1.1. Type object 20.1.1.1.29. .spec.collection.logs.fluentd.resources 20.1.1.1.29.1. Description 20.1.1.1.29.1.1. Type object Property Type Description limits object (optional) Limits describes the maximum amount of compute resources allowed. requests object (optional) Requests describes the minimum amount of compute resources required. 20.1.1.1.30. .spec.collection.logs.fluentd.resources.limits 20.1.1.1.30.1. Description 20.1.1.1.30.1.1. Type object 20.1.1.1.31. .spec.collection.logs.fluentd.resources.requests 20.1.1.1.31.1. Description 20.1.1.1.31.1.1. Type object 20.1.1.1.32. .spec.collection.logs.fluentd.tolerations[] 20.1.1.1.32.1. Description 20.1.1.1.32.1.1. Type array Property Type Description effect string (optional) Effect indicates the taint effect to match. Empty means match all taint effects. key string (optional) Key is the taint key that the toleration applies to. Empty means match all taint keys. operator string (optional) Operator represents a key's relationship to the value. tolerationSeconds int (optional) TolerationSeconds represents the period of time the toleration (which must be value string (optional) Value is the taint value the toleration matches to. 20.1.1.1.33. .spec.collection.logs.fluentd.tolerations[].tolerationSeconds 20.1.1.1.33.1. Description 20.1.1.1.33.1.1. Type int 20.1.1.1.34. .spec.curation 20.1.1.1.34.1. Description This is the struct that will contain information pertinent to Log curation (Curator) 20.1.1.1.34.1.1. Type object Property Type Description curator object The specification of curation to configure type string The kind of curation to configure 20.1.1.1.35. .spec.curation.curator 20.1.1.1.35.1. Description 20.1.1.1.35.1.1. Type object Property Type Description nodeSelector object Define which Nodes the Pods are scheduled on. resources object (optional) The resource requirements for Curator schedule string The cron schedule that the Curator job is run. Defaults to "30 3 * * *" tolerations array 20.1.1.1.36. .spec.curation.curator.nodeSelector 20.1.1.1.36.1. Description 20.1.1.1.36.1.1. Type object 20.1.1.1.37. .spec.curation.curator.resources 20.1.1.1.37.1. Description 20.1.1.1.37.1.1. Type object Property Type Description limits object (optional) Limits describes the maximum amount of compute resources allowed. requests object (optional) Requests describes the minimum amount of compute resources required. 20.1.1.1.38. .spec.curation.curator.resources.limits 20.1.1.1.38.1. Description 20.1.1.1.38.1.1. Type object 20.1.1.1.39. .spec.curation.curator.resources.requests 20.1.1.1.39.1. Description 20.1.1.1.39.1.1. Type object 20.1.1.1.40. 
.spec.curation.curator.tolerations[] 20.1.1.1.40.1. Description 20.1.1.1.40.1.1. Type array Property Type Description effect string (optional) Effect indicates the taint effect to match. Empty means match all taint effects. key string (optional) Key is the taint key that the toleration applies to. Empty means match all taint keys. operator string (optional) Operator represents a key's relationship to the value. tolerationSeconds int (optional) TolerationSeconds represents the period of time the toleration (which must be value string (optional) Value is the taint value the toleration matches to. 20.1.1.1.41. .spec.curation.curator.tolerations[].tolerationSeconds 20.1.1.1.41.1. Description 20.1.1.1.41.1.1. Type int 20.1.1.1.42. .spec.forwarder 20.1.1.1.42.1. Description ForwarderSpec contains global tuning parameters for specific forwarder implementations. This field is not required for general use, it allows performance tuning by users familiar with the underlying forwarder technology. Currently supported: fluentd . 20.1.1.1.42.1.1. Type object Property Type Description fluentd object 20.1.1.1.43. .spec.forwarder.fluentd 20.1.1.1.43.1. Description FluentdForwarderSpec represents the configuration for forwarders of type fluentd. 20.1.1.1.43.1.1. Type object Property Type Description buffer object inFile object 20.1.1.1.44. .spec.forwarder.fluentd.buffer 20.1.1.1.44.1. Description FluentdBufferSpec represents a subset of fluentd buffer parameters to tune the buffer configuration for all fluentd outputs. It supports a subset of parameters to configure buffer and queue sizing, flush operations and retry flushing. For general parameters refer to: https://docs.fluentd.org/configuration/buffer-section#buffering-parameters For flush parameters refer to: https://docs.fluentd.org/configuration/buffer-section#flushing-parameters For retry parameters refer to: https://docs.fluentd.org/configuration/buffer-section#retries-parameters 20.1.1.1.44.1.1. Type object Property Type Description chunkLimitSize string (optional) ChunkLimitSize represents the maximum size of each chunk. Events will be flushInterval string (optional) FlushInterval represents the time duration to wait between two consecutive flush flushMode string (optional) FlushMode represents the mode of the flushing thread to write chunks. The mode flushThreadCount int (optional) FlushThreadCount reprents the number of threads used by the fluentd buffer overflowAction string (optional) OverflowAction represents the action for the fluentd buffer plugin to retryMaxInterval string (optional) RetryMaxInterval represents the maximum time interval for exponential backoff retryTimeout string (optional) RetryTimeout represents the maximum time interval to attempt retries before giving up retryType string (optional) RetryType represents the type of retrying flush operations. Flush operations can retryWait string (optional) RetryWait represents the time duration between two consecutive retries to flush totalLimitSize string (optional) TotalLimitSize represents the threshold of node space allowed per fluentd 20.1.1.1.45. .spec.forwarder.fluentd.inFile 20.1.1.1.45.1. Description FluentdInFileSpec represents a subset of fluentd in-tail plugin parameters to tune the configuration for all fluentd in-tail inputs. For general parameters refer to: https://docs.fluentd.org/input/tail#parameters 20.1.1.1.45.1.1. 
Type object Property Type Description readLinesLimit int (optional) ReadLinesLimit represents the number of lines to read with each I/O operation 20.1.1.1.46. .spec.logStore 20.1.1.1.46.1. Description The LogStoreSpec contains information about how logs are stored. 20.1.1.1.46.1.1. Type object Property Type Description elasticsearch object Specification of the Elasticsearch Log Store component lokistack object LokiStack contains information about which LokiStack to use for log storage if Type is set to LogStoreTypeLokiStack. retentionPolicy object (optional) Retention policy defines the maximum age for an index after which it should be deleted type string The Type of Log Storage to configure. The operator currently supports either using ElasticSearch 20.1.1.1.47. .spec.logStore.elasticsearch 20.1.1.1.47.1. Description 20.1.1.1.47.1.1. Type object Property Type Description nodeCount int Number of nodes to deploy for Elasticsearch nodeSelector object Define which Nodes the Pods are scheduled on. proxy object Specification of the Elasticsearch Proxy component redundancyPolicy string (optional) resources object (optional) The resource requirements for Elasticsearch storage object (optional) The storage specification for Elasticsearch data nodes tolerations array 20.1.1.1.48. .spec.logStore.elasticsearch.nodeSelector 20.1.1.1.48.1. Description 20.1.1.1.48.1.1. Type object 20.1.1.1.49. .spec.logStore.elasticsearch.proxy 20.1.1.1.49.1. Description 20.1.1.1.49.1.1. Type object Property Type Description resources object 20.1.1.1.50. .spec.logStore.elasticsearch.proxy.resources 20.1.1.1.50.1. Description 20.1.1.1.50.1.1. Type object Property Type Description limits object (optional) Limits describes the maximum amount of compute resources allowed. requests object (optional) Requests describes the minimum amount of compute resources required. 20.1.1.1.51. .spec.logStore.elasticsearch.proxy.resources.limits 20.1.1.1.51.1. Description 20.1.1.1.51.1.1. Type object 20.1.1.1.52. .spec.logStore.elasticsearch.proxy.resources.requests 20.1.1.1.52.1. Description 20.1.1.1.52.1.1. Type object 20.1.1.1.53. .spec.logStore.elasticsearch.resources 20.1.1.1.53.1. Description 20.1.1.1.53.1.1. Type object Property Type Description limits object (optional) Limits describes the maximum amount of compute resources allowed. requests object (optional) Requests describes the minimum amount of compute resources required. 20.1.1.1.54. .spec.logStore.elasticsearch.resources.limits 20.1.1.1.54.1. Description 20.1.1.1.54.1.1. Type object 20.1.1.1.55. .spec.logStore.elasticsearch.resources.requests 20.1.1.1.55.1. Description 20.1.1.1.55.1.1. Type object 20.1.1.1.56. .spec.logStore.elasticsearch.storage 20.1.1.1.56.1. Description 20.1.1.1.56.1.1. Type object Property Type Description size object The max storage capacity for the node to provision. storageClassName string (optional) The name of the storage class to use with creating the node's PVC. 20.1.1.1.57. .spec.logStore.elasticsearch.storage.size 20.1.1.1.57.1. Description 20.1.1.1.57.1.1. Type object Property Type Description Format string Change Format at will. See the comment for Canonicalize for d object d is the quantity in inf.Dec form if d.Dec != nil i int i is the quantity in int64 scaled form, if d.Dec == nil s string s is the generated value of this quantity to avoid recalculation 20.1.1.1.58. .spec.logStore.elasticsearch.storage.size.d 20.1.1.1.58.1. Description 20.1.1.1.58.1.1. Type object Property Type Description Dec object 20.1.1.1.59. 
.spec.logStore.elasticsearch.storage.size.d.Dec 20.1.1.1.59.1. Description 20.1.1.1.59.1.1. Type object Property Type Description scale int unscaled object 20.1.1.1.60. .spec.logStore.elasticsearch.storage.size.d.Dec.unscaled 20.1.1.1.60.1. Description 20.1.1.1.60.1.1. Type object Property Type Description abs Word sign neg bool 20.1.1.1.61. .spec.logStore.elasticsearch.storage.size.d.Dec.unscaled.abs 20.1.1.1.61.1. Description 20.1.1.1.61.1.1. Type Word 20.1.1.1.62. .spec.logStore.elasticsearch.storage.size.i 20.1.1.1.62.1. Description 20.1.1.1.62.1.1. Type int Property Type Description scale int value int 20.1.1.1.63. .spec.logStore.elasticsearch.tolerations[] 20.1.1.1.63.1. Description 20.1.1.1.63.1.1. Type array Property Type Description effect string (optional) Effect indicates the taint effect to match. Empty means match all taint effects. key string (optional) Key is the taint key that the toleration applies to. Empty means match all taint keys. operator string (optional) Operator represents a key's relationship to the value. tolerationSeconds int (optional) TolerationSeconds represents the period of time the toleration (which must be value string (optional) Value is the taint value the toleration matches to. 20.1.1.1.64. .spec.logStore.elasticsearch.tolerations[].tolerationSeconds 20.1.1.1.64.1. Description 20.1.1.1.64.1.1. Type int 20.1.1.1.65. .spec.logStore.lokistack 20.1.1.1.65.1. Description LokiStackStoreSpec is used to set up cluster-logging to use a LokiStack as logging storage. It points to an existing LokiStack in the same namespace. 20.1.1.1.65.1.1. Type object Property Type Description name string Name of the LokiStack resource. 20.1.1.1.66. .spec.logStore.retentionPolicy 20.1.1.1.66.1. Description 20.1.1.1.66.1.1. Type object Property Type Description application object audit object infra object 20.1.1.1.67. .spec.logStore.retentionPolicy.application 20.1.1.1.67.1. Description 20.1.1.1.67.1.1. Type object Property Type Description diskThresholdPercent int (optional) The threshold percentage of ES disk usage that when reached, old indices should be deleted (e.g. 75) maxAge string (optional) namespaceSpec array (optional) The per namespace specification to delete documents older than a given minimum age pruneNamespacesInterval string (optional) How often to run a new prune-namespaces job 20.1.1.1.68. .spec.logStore.retentionPolicy.application.namespaceSpec[] 20.1.1.1.68.1. Description 20.1.1.1.68.1.1. Type array Property Type Description minAge string (optional) Delete the records matching the namespaces which are older than this MinAge (e.g. 1d) namespace string Target Namespace to delete logs older than MinAge (defaults to 7d) 20.1.1.1.69. .spec.logStore.retentionPolicy.audit 20.1.1.1.69.1. Description 20.1.1.1.69.1.1. Type object Property Type Description diskThresholdPercent int (optional) The threshold percentage of ES disk usage that when reached, old indices should be deleted (e.g. 75) maxAge string (optional) namespaceSpec array (optional) The per namespace specification to delete documents older than a given minimum age pruneNamespacesInterval string (optional) How often to run a new prune-namespaces job 20.1.1.1.70. .spec.logStore.retentionPolicy.audit.namespaceSpec[] 20.1.1.1.70.1. Description 20.1.1.1.70.1.1. Type array Property Type Description minAge string (optional) Delete the records matching the namespaces which are older than this MinAge (e.g. 1d) namespace string Target Namespace to delete logs older than MinAge (defaults to 7d) 20.1.1.1.71. 
.spec.logStore.retentionPolicy.infra 20.1.1.1.71.1. Description 20.1.1.1.71.1.1. Type object Property Type Description diskThresholdPercent int (optional) The threshold percentage of ES disk usage that when reached, old indices should be deleted (e.g. 75) maxAge string (optional) namespaceSpec array (optional) The per namespace specification to delete documents older than a given minimum age pruneNamespacesInterval string (optional) How often to run a new prune-namespaces job 20.1.1.1.72. .spec.logStore.retentionPolicy.infra.namespaceSpec[] 20.1.1.1.72.1. Description 20.1.1.1.72.1.1. Type array Property Type Description minAge string (optional) Delete the records matching the namespaces which are older than this MinAge (e.g. 1d) namespace string Target Namespace to delete logs older than MinAge (defaults to 7d) 20.1.1.1.73. .spec.visualization 20.1.1.1.73.1. Description This is the struct that will contain information pertinent to Log visualization (Kibana) 20.1.1.1.73.1.1. Type object Property Type Description kibana object Specification of the Kibana Visualization component type string The type of Visualization to configure 20.1.1.1.74. .spec.visualization.kibana 20.1.1.1.74.1. Description 20.1.1.1.74.1.1. Type object Property Type Description nodeSelector object Define which Nodes the Pods are scheduled on. proxy object Specification of the Kibana Proxy component replicas int Number of instances to deploy for a Kibana deployment resources object (optional) The resource requirements for Kibana tolerations array 20.1.1.1.75. .spec.visualization.kibana.nodeSelector 20.1.1.1.75.1. Description 20.1.1.1.75.1.1. Type object 20.1.1.1.76. .spec.visualization.kibana.proxy 20.1.1.1.76.1. Description 20.1.1.1.76.1.1. Type object Property Type Description resources object 20.1.1.1.77. .spec.visualization.kibana.proxy.resources 20.1.1.1.77.1. Description 20.1.1.1.77.1.1. Type object Property Type Description limits object (optional) Limits describes the maximum amount of compute resources allowed. requests object (optional) Requests describes the minimum amount of compute resources required. 20.1.1.1.78. .spec.visualization.kibana.proxy.resources.limits 20.1.1.1.78.1. Description 20.1.1.1.78.1.1. Type object 20.1.1.1.79. .spec.visualization.kibana.proxy.resources.requests 20.1.1.1.79.1. Description 20.1.1.1.79.1.1. Type object 20.1.1.1.80. .spec.visualization.kibana.replicas 20.1.1.1.80.1. Description 20.1.1.1.80.1.1. Type int 20.1.1.1.81. .spec.visualization.kibana.resources 20.1.1.1.81.1. Description 20.1.1.1.81.1.1. Type object Property Type Description limits object (optional) Limits describes the maximum amount of compute resources allowed. requests object (optional) Requests describes the minimum amount of compute resources required. 20.1.1.1.82. .spec.visualization.kibana.resources.limits 20.1.1.1.82.1. Description 20.1.1.1.82.1.1. Type object 20.1.1.1.83. .spec.visualization.kibana.resources.requests 20.1.1.1.83.1. Description 20.1.1.1.83.1.1. Type object 20.1.1.1.84. .spec.visualization.kibana.tolerations[] 20.1.1.1.84.1. Description 20.1.1.1.84.1.1. Type array Property Type Description effect string (optional) Effect indicates the taint effect to match. Empty means match all taint effects. key string (optional) Key is the taint key that the toleration applies to. Empty means match all taint keys. operator string (optional) Operator represents a key's relationship to the value. 
tolerationSeconds int (optional) TolerationSeconds represents the period of time the toleration (which must be value string (optional) Value is the taint value the toleration matches to. 20.1.1.1.85. .spec.visualization.kibana.tolerations[].tolerationSeconds 20.1.1.1.85.1. Description 20.1.1.1.85.1.1. Type int 20.1.1.1.86. .status 20.1.1.1.86.1. Description ClusterLoggingStatus defines the observed state of ClusterLogging 20.1.1.1.86.1.1. Type object Property Type Description collection object (optional) conditions object (optional) curation object (optional) logStore object (optional) visualization object (optional) 20.1.1.1.87. .status.collection 20.1.1.1.87.1. Description 20.1.1.1.87.1.1. Type object Property Type Description logs object (optional) 20.1.1.1.88. .status.collection.logs 20.1.1.1.88.1. Description 20.1.1.1.88.1.1. Type object Property Type Description fluentdStatus object (optional) 20.1.1.1.89. .status.collection.logs.fluentdStatus 20.1.1.1.89.1. Description 20.1.1.1.89.1.1. Type object Property Type Description clusterCondition object (optional) daemonSet string (optional) nodes object (optional) pods string (optional) 20.1.1.1.90. .status.collection.logs.fluentdStatus.clusterCondition 20.1.1.1.90.1. Description operator-sdk generate crds does not allow map-of-slice, must use a named type. 20.1.1.1.90.1.1. Type object 20.1.1.1.91. .status.collection.logs.fluentdStatus.nodes 20.1.1.1.91.1. Description 20.1.1.1.91.1.1. Type object 20.1.1.1.92. .status.conditions 20.1.1.1.92.1. Description 20.1.1.1.92.1.1. Type object 20.1.1.1.93. .status.curation 20.1.1.1.93.1. Description 20.1.1.1.93.1.1. Type object Property Type Description curatorStatus array (optional) 20.1.1.1.94. .status.curation.curatorStatus[] 20.1.1.1.94.1. Description 20.1.1.1.94.1.1. Type array Property Type Description clusterCondition object (optional) cronJobs string (optional) schedules string (optional) suspended bool (optional) 20.1.1.1.95. .status.curation.curatorStatus[].clusterCondition 20.1.1.1.95.1. Description operator-sdk generate crds does not allow map-of-slice, must use a named type. 20.1.1.1.95.1.1. Type object 20.1.1.1.96. .status.logStore 20.1.1.1.96.1. Description 20.1.1.1.96.1.1. Type object Property Type Description elasticsearchStatus array (optional) 20.1.1.1.97. .status.logStore.elasticsearchStatus[] 20.1.1.1.97.1. Description 20.1.1.1.97.1.1. Type array Property Type Description cluster object (optional) clusterConditions object (optional) clusterHealth string (optional) clusterName string (optional) deployments array (optional) nodeConditions object (optional) nodeCount int (optional) pods object (optional) replicaSets array (optional) shardAllocationEnabled string (optional) statefulSets array (optional) 20.1.1.1.98. .status.logStore.elasticsearchStatus[].cluster 20.1.1.1.98.1. Description 20.1.1.1.98.1.1. Type object Property Type Description activePrimaryShards int The number of Active Primary Shards for the Elasticsearch Cluster activeShards int The number of Active Shards for the Elasticsearch Cluster initializingShards int The number of Initializing Shards for the Elasticsearch Cluster numDataNodes int The number of Data Nodes for the Elasticsearch Cluster numNodes int The number of Nodes for the Elasticsearch Cluster pendingTasks int relocatingShards int The number of Relocating Shards for the Elasticsearch Cluster status string The current Status of the Elasticsearch Cluster unassignedShards int The number of Unassigned Shards for the Elasticsearch Cluster 20.1.1.1.99. 
.status.logStore.elasticsearchStatus[].clusterConditions 20.1.1.1.99.1. Description 20.1.1.1.99.1.1. Type object 20.1.1.1.100. .status.logStore.elasticsearchStatus[].deployments[] 20.1.1.1.100.1. Description 20.1.1.1.100.1.1. Type array 20.1.1.1.101. .status.logStore.elasticsearchStatus[].nodeConditions 20.1.1.1.101.1. Description 20.1.1.1.101.1.1. Type object 20.1.1.1.102. .status.logStore.elasticsearchStatus[].pods 20.1.1.1.102.1. Description 20.1.1.1.102.1.1. Type object 20.1.1.1.103. .status.logStore.elasticsearchStatus[].replicaSets[] 20.1.1.1.103.1. Description 20.1.1.1.103.1.1. Type array 20.1.1.1.104. .status.logStore.elasticsearchStatus[].statefulSets[] 20.1.1.1.104.1. Description 20.1.1.1.104.1.1. Type array 20.1.1.1.105. .status.visualization 20.1.1.1.105.1. Description 20.1.1.1.105.1.1. Type object Property Type Description kibanaStatus array (optional) 20.1.1.1.106. .status.visualization.kibanaStatus[] 20.1.1.1.106.1. Description 20.1.1.1.106.1.1. Type array Property Type Description clusterCondition object (optional) deployment string (optional) pods string (optional) The status for each of the Kibana pods for the Visualization component replicaSets array (optional) replicas int (optional) 20.1.1.1.107. .status.visualization.kibanaStatus[].clusterCondition 20.1.1.1.107.1. Description 20.1.1.1.107.1.1. Type object 20.1.1.1.108. .status.visualization.kibanaStatus[].replicaSets[] 20.1.1.1.108.1. Description 20.1.1.1.108.1.1. Type array | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/logging/api-reference |
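The ClusterLogForwarder field reference above is easier to follow next to a concrete object. The sketch below is an assumed minimal example rather than an excerpt from the product documentation: the output name, Elasticsearch URL, and secret name are placeholders, and the apiVersion is assumed to be logging.openshift.io/v1 for this release.

# Minimal ClusterLogForwarder matching the fields documented above
# (.spec.outputs, .spec.pipelines, inputRefs, outputRefs).
cat <<'EOF' | oc apply -f -
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
  - name: external-es              # .spec.outputs[].name
    type: elasticsearch            # .spec.outputs[].type
    url: https://es.example.com:9200
    secret:
      name: es-credentials         # .spec.outputs[].secret.name
  pipelines:
  - name: app-to-external-es       # .spec.pipelines[].name
    inputRefs:
    - application                  # built-in input for application logs
    outputRefs:
    - external-es
EOF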
Chapter 1. Introduction to OpenShift Data Foundation | Chapter 1. Introduction to OpenShift Data Foundation Red Hat OpenShift Data Foundation is a highly integrated collection of cloud storage and data services for Red Hat OpenShift Container Platform. It is available as part of the Red Hat OpenShift Container Platform Service Catalog, packaged as an operator to facilitate simple deployment and management. Red Hat OpenShift Data Foundation services are primarily made available to applications by way of storage classes that represent the following components: Block storage devices, catering primarily to database workloads. Prime examples include Red Hat OpenShift Container Platform logging and monitoring, and PostgreSQL. Important Block storage should be used only for workloads that do not require sharing the data across multiple containers. Shared and distributed file system, catering primarily to software development, messaging, and data aggregation workloads. Examples include Jenkins build sources and artifacts, WordPress uploaded content, Red Hat OpenShift Container Platform registry, and messaging using JBoss AMQ. Multicloud object storage, featuring a lightweight S3 API endpoint that can abstract the storage and retrieval of data from multiple cloud object stores. On-premises object storage, featuring a robust S3 API endpoint that scales to tens of petabytes and billions of objects, primarily targeting data-intensive applications. Examples include the storage and access of row, columnar, and semi-structured data with applications like Spark, Presto, Red Hat AMQ Streams (Kafka), and even machine learning frameworks like TensorFlow and PyTorch. Note Running a PostgreSQL workload on a CephFS persistent volume is not supported; it is recommended to use a RADOS Block Device (RBD) volume instead. For more information, see the knowledgebase solution ODF Database Workloads Must Not Use CephFS PVs/PVCs . Red Hat OpenShift Data Foundation version 4.x integrates a collection of software projects, including: Ceph, providing block storage, a shared and distributed file system, and on-premises object storage Ceph CSI, to manage provisioning and lifecycle of persistent volumes and claims NooBaa, providing a Multicloud Object Gateway OpenShift Data Foundation, Rook-Ceph, and NooBaa operators to initialize and manage OpenShift Data Foundation services. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/planning_your_deployment/introduction-to-openshift-data-foundation-4_rhodf
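As a small illustration of consuming the block storage class mentioned above, the following sketch requests an RBD-backed persistent volume claim for a database-style workload. The storage class and namespace names are assumptions; list the storage classes in your own cluster and substitute the appropriate one.

# List the storage classes available in the cluster, then request block
# storage through a PersistentVolumeClaim for a database-style workload.
oc get storageclass

cat <<'EOF' | oc apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-data
  namespace: my-database                          # placeholder namespace
spec:
  accessModes:
  - ReadWriteOnce                                 # block storage is not shared across containers
  resources:
    requests:
      storage: 50Gi
  storageClassName: ocs-storagecluster-ceph-rbd   # assumed RBD-backed class name
EOF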
Chapter 5. Maintaining Referential Integrity | Chapter 5. Maintaining Referential Integrity Referential Integrity is a database mechanism that ensures relationships between related entries are maintained. In the Directory Server, Referential Integrity can be used to ensure that an update to one entry in the directory is correctly reflected in any other entries that refer to the updated entry. For example, if a user's entry is removed from the directory and Referential Integrity is enabled, the server also removes the user from any groups of which the user is a member. If Referential Integrity is not enabled, the user remains a member of the group until manually removed by the administrator. This is an important feature if you are integrating the Directory Server with other products that rely on the directory for user and group management. 5.1. How Referential Integrity Works When the Referential Integrity Postoperation plug-in is enabled, it performs integrity updates on specified attributes immediately after a delete or rename operation. By default, the Referential Integrity Postoperation plug-in is disabled. Note You must enable the Referential Integrity Postoperation plug-in on all suppliers in a multi-supplier replication environment. When you delete, rename, or move a user or group entry within the directory, the operation is logged to the Referential Integrity log file. For the distinguished names (DNs) in the log file, Directory Server searches for and updates, at set intervals, the attributes set in the plug-in configuration: For entries marked in the log file as deleted, the corresponding attribute in the directory is deleted. For entries marked in the log file as renamed or moved, the value of the corresponding attribute in the directory is renamed. By default, when the Referential Integrity Postoperation plug-in is enabled, it performs integrity updates on the member , uniquemember , owner , and seeAlso attributes immediately after a delete or rename operation. However, you can configure the behavior of the Referential Integrity Postoperation plug-in to suit the needs of the directory in several different ways: Record Referential Integrity updates in the replication change log. Modify the update interval. Select the attributes to which to apply Referential Integrity. Disable Referential Integrity. All attributes used in referential integrity must be indexed for presence, equality, and substring; not indexing those attributes results in poor server performance for modify and delete operations. See Section 13.2, "Creating Standard Indexes" for more information about checking and creating indexes. | [
"nsIndexType: pres nsIndexType: eq nsIndexType: sub"
] | https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/Maintaining_Referential_Integrity |
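One way to enable the plug-in described above is over LDAP against the plug-in entry under cn=config. The sketch below is an assumed example: the bind credentials, port, and the userRoot backend name are placeholders, and a restart of the instance is required after enabling the plug-in.

# Enable the Referential Integrity Postoperation plug-in over LDAP, then
# restart the instance for the change to take effect.
ldapmodify -D "cn=Directory Manager" -W -p 389 -h localhost -x <<'EOF'
dn: cn=referential integrity postoperation,cn=plugins,cn=config
changetype: modify
replace: nsslapd-pluginEnabled
nsslapd-pluginEnabled: on
EOF

# Verify that an attribute used for integrity updates (member, in this
# example) is indexed for presence, equality, and substring searches.
ldapsearch -D "cn=Directory Manager" -W -p 389 -h localhost -x \
  -b "cn=member,cn=index,cn=userRoot,cn=ldbm database,cn=plugins,cn=config" nsIndexType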
Chapter 10. Configuring kube-rbac-proxy for Serving | Chapter 10. Configuring kube-rbac-proxy for Serving The kube-rbac-proxy component provides internal authentication and authorization capabilities for Knative Serving. 10.1. Configuring kube-rbac-proxy resources for Serving You can globally override resource allocation for the kube-rbac-proxy container by using the OpenShift Serverless Operator CR. Note You can also override resource allocation for a specific deployment. The following configuration sets Knative Serving kube-rbac-proxy minimum and maximum CPU and memory allocation: KnativeServing CR example apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing metadata: name: knative-serving namespace: knative-serving spec: config: deployment: "kube-rbac-proxy-cpu-request": "10m" 1 "kube-rbac-proxy-memory-request": "20Mi" 2 "kube-rbac-proxy-cpu-limit": "100m" 3 "kube-rbac-proxy-memory-limit": "100Mi" 4 1 Sets minimum CPU allocation. 2 Sets minimum RAM allocation. 3 Sets maximum CPU allocation. 4 Sets maximum RAM allocation. | [
"apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing metadata: name: knative-serving namespace: knative-serving spec: config: deployment: \"kube-rbac-proxy-cpu-request\": \"10m\" 1 \"kube-rbac-proxy-memory-request\": \"20Mi\" 2 \"kube-rbac-proxy-cpu-limit\": \"100m\" 3 \"kube-rbac-proxy-memory-limit\": \"100Mi\" 4"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_serverless/1.33/html/serving/kube-rbac-proxy-serving |
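To confirm that the override took effect, you can read the container resources back from the running pods. The sketch below assumes the KnativeServing manifest shown above has been saved locally as knative-serving.yaml, that the oc CLI is logged in to the cluster, and that the jq utility is available.

# Apply the KnativeServing configuration shown above (saved locally), then
# read back the requests and limits of every kube-rbac-proxy container.
oc apply -f knative-serving.yaml

oc get pods -n knative-serving -o json \
  | jq '.items[].spec.containers[] | select(.name == "kube-rbac-proxy") | .resources'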
2.4.2. Vulnerable Client Applications | 2.4.2. Vulnerable Client Applications Although an administrator may have a fully secure and patched server, that does not mean remote users are secure when accessing it. For instance, if the server offers Telnet or FTP services over a public network, an attacker can capture the plain text usernames and passwords as they pass over the network, and then use the account information to access the remote user's workstation. Even when using secure protocols, such as SSH, a remote user may be vulnerable to certain attacks if they do not keep their client applications updated. For instance, v.1 SSH clients are vulnerable to an X-forwarding attack from malicious SSH servers. Once connected to the server, the attacker can quietly capture any keystrokes and mouse clicks made by the client over the network. This problem was fixed in the v.2 SSH protocol, but it is up to the user to keep track of what applications have such vulnerabilities and update them as necessary. Chapter 4, Workstation Security discusses in more detail what steps administrators and home users should take to limit the vulnerability of computer workstations. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/security_guide/s2-risk-wspc-apps |
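A few client-side checks follow from the discussion above. The sketch below is an assumed example for an OpenSSH client of that era: the configuration path and the update command depend on your release, and editing the system-wide client configuration requires root privileges.

# Report the installed SSH client version; clients that only speak the
# version 1 protocol should be replaced or updated.
ssh -V

# Check whether a Protocol line is already set; if not, restrict the client
# to the version 2 protocol.
grep -i '^Protocol' /etc/ssh/ssh_config || echo 'Protocol 2' >> /etc/ssh/ssh_config

# Keep the client packages current; the exact command depends on the release.
yum update openssh-clients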
Chapter 10. Deleting a ROSA cluster | Chapter 10. Deleting a ROSA cluster This document provides steps to delete a Red Hat OpenShift Service on AWS (ROSA) cluster that uses the AWS Security Token Service (STS). After deleting your cluster, you can also delete the AWS Identity and Access Management (IAM) resources that are used by the cluster. 10.1. Prerequisites If Red Hat OpenShift Service on AWS created a VPC, you must remove the following items from your cluster before you can successfully delete your cluster: Network configurations, such as VPN configurations and VPC peering connections Any additional services that were added to the VPC If these configurations and services remain, the cluster does not delete properly. 10.2. Deleting a ROSA cluster and the cluster-specific IAM resources You can delete a Red Hat OpenShift Service on AWS (ROSA) with AWS Security Token Service (STS) cluster by using the ROSA CLI ( rosa ) or Red Hat OpenShift Cluster Manager. After deleting the cluster, you can clean up the cluster-specific Identity and Access Management (IAM) resources in your AWS account by using the ROSA CLI ( rosa ). The cluster-specific resources include the Operator roles and the OpenID Connect (OIDC) provider. Note The cluster deletion must complete before you remove the IAM resources, because the resources are used in the cluster deletion and clean-up processes. If add-ons are installed, the cluster deletion takes longer because add-ons are uninstalled before the cluster is deleted. The amount of time depends on the number and size of the add-ons. Important If the cluster that created the VPC during the installation is deleted, the associated installation program-created VPC will also be deleted, resulting in the failure of all the clusters that are using the same VPC. Additionally, any resources created with the same tagSet key-value pair of the resources created by the installation program and labeled with a value of owned will also be deleted. Prerequisites You have installed a ROSA cluster. You have installed and configured the latest ROSA CLI ( rosa ) on your installation host. Procedure Obtain the cluster ID, the Amazon Resource Names (ARNs) for the cluster-specific Operator roles and the endpoint URL for the OIDC provider: USD rosa describe cluster --cluster=<cluster_name> 1 1 Replace <cluster_name> with the name of your cluster. Example output Name: mycluster ID: 1s3v4x39lhs8sm49m90mi0822o34544a 1 ... Operator IAM Roles: 2 - arn:aws:iam::<aws_account_id>:role/mycluster-x4q9-openshift-machine-api-aws-cloud-credentials - arn:aws:iam::<aws_account_id>:role/mycluster-x4q9-openshift-cloud-credential-operator-cloud-crede - arn:aws:iam::<aws_account_id>:role/mycluster-x4q9-openshift-image-registry-installer-cloud-creden - arn:aws:iam::<aws_account_id>:role/mycluster-x4q9-openshift-ingress-operator-cloud-credentials - arn:aws:iam::<aws_account_id>:role/mycluster-x4q9-openshift-cluster-csi-drivers-ebs-cloud-credent - arn:aws:iam::<aws_account_id>:role/mycluster-x4q9-openshift-cloud-network-config-controller-cloud State: ready Private: No Created: May 13 2022 11:26:15 UTC Details Page: https://console.redhat.com/openshift/details/s/296kyEFwzoy1CREQicFRdZybrc0 OIDC Endpoint URL: https://oidc.op1.openshiftapps.com/<oidc_config_id> 3 1 Lists the cluster ID. 2 Specifies the ARNs for the cluster-specific Operator roles. 
For example, in the sample output the ARN for the role required by the Machine Config Operator is arn:aws:iam::<aws_account_id>:role/mycluster-x4q9-openshift-machine-api-aws-cloud-credentials . 3 Displays the endpoint URL for the cluster-specific OIDC provider. Important You require the cluster ID to delete the cluster-specific STS resources using the ROSA CLI ( rosa ) after the cluster is deleted. Delete the cluster: To delete the cluster by using Red Hat OpenShift Cluster Manager: Navigate to OpenShift Cluster Manager . Click the Options menu next to your cluster and select Delete cluster . Type the name of your cluster at the prompt and click Delete . To delete the cluster using the ROSA CLI ( rosa ): Enter the following command to delete the cluster and watch the logs, replacing <cluster_name> with the name or ID of your cluster: USD rosa delete cluster --cluster=<cluster_name> --watch Important You must wait for the cluster deletion to complete before you remove the Operator roles and the OIDC provider. The cluster-specific Operator roles are required to clean up the resources created by the OpenShift Operators. The Operators use the OIDC provider to authenticate. Delete the OIDC provider that the cluster Operators use to authenticate: USD rosa delete oidc-provider -c <cluster_id> --mode auto 1 1 Replace <cluster_id> with the ID of the cluster. Note You can use the -y option to automatically answer yes to the prompts. Optional. Delete the cluster-specific Operator IAM roles: Important The account-wide IAM roles can be used by other ROSA clusters in the same AWS account. Only remove the roles if they are not required by other clusters. USD rosa delete operator-roles -c <cluster_id> --mode auto 1 1 Replace <cluster_id> with the ID of the cluster. Troubleshooting If the cluster cannot be deleted because of missing IAM roles, see Repairing a cluster that cannot be deleted . If the cluster cannot be deleted for other reasons: Check that there are no Add-ons for your cluster pending in the Hybrid Cloud Console . Check that all AWS resources and dependencies have been deleted in the Amazon Web Console. Additional resources For steps to delete the account-wide IAM roles and policies, see Deleting the account-wide IAM roles and policies . For steps to delete the OpenShift Cluster Manager and user IAM roles, see Unlinking and deleting the OpenShift Cluster Manager and user IAM roles . 10.3. Deleting the account-wide IAM resources After you have deleted all Red Hat OpenShift Service on AWS (ROSA) with hosted control planes (HCP) clusters that depend on the account-wide AWS Identity and Access Management (IAM) resources, you can delete the account-wide resources. If you no longer need to install a ROSA with HCP cluster by using Red Hat OpenShift Cluster Manager, you can also delete the OpenShift Cluster Manager and user IAM roles. Important The account-wide IAM roles and policies might be used by other ROSA with HCP clusters in the same AWS account. Only remove the resources if they are not required by other clusters. The OpenShift Cluster Manager and user IAM roles are required if you want to install, manage, and delete other Red Hat OpenShift Service on AWS clusters in the same AWS account by using OpenShift Cluster Manager. Only remove the roles if you no longer need to install Red Hat OpenShift Service on AWS clusters in your account by using OpenShift Cluster Manager.
For more information about repairing your cluster if these roles are removed before deletion, see "Repairing a cluster that cannot be deleted" in Troubleshooting cluster deployments . 10.3.1. Deleting the account-wide IAM roles and policies This section provides steps to delete the account-wide IAM roles and policies that you created for ROSA with STS deployments, along with the account-wide Operator policies. You can delete the account-wide AWS Identity and Access Management (IAM) roles and policies only after deleting all of the Red Hat OpenShift Service on AWS (ROSA) with AWS Security Token Service (STS) clusters that depend on them. Important The account-wide IAM roles and policies might be used by other ROSA clusters in the same AWS account. Only remove the roles if they are not required by other clusters. Prerequisites You have account-wide IAM roles that you want to delete. You have installed and configured the latest ROSA CLI ( rosa ) on your installation host. Procedure Delete the account-wide roles: List the account-wide roles in your AWS account by using the ROSA CLI ( rosa ): USD rosa list account-roles Example output I: Fetching account roles ROLE NAME ROLE TYPE ROLE ARN OPENSHIFT VERSION ManagedOpenShift-ControlPlane-Role Control plane arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-ControlPlane-Role 4.10 ManagedOpenShift-Installer-Role Installer arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-Installer-Role 4.10 ManagedOpenShift-Support-Role Support arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-Support-Role 4.10 ManagedOpenShift-Worker-Role Worker arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-Worker-Role 4.10 Delete the account-wide roles: USD rosa delete account-roles --prefix <prefix> --mode auto 1 1 You must include the --prefix argument. Replace <prefix> with the prefix of the account-wide roles to delete. If you did not specify a custom prefix when you created the account-wide roles, specify the default prefix, ManagedOpenShift . Important The account-wide IAM roles might be used by other ROSA clusters in the same AWS account. Only remove the roles if they are not required by other clusters. Example output W: There are no classic account roles to be deleted I: Deleting hosted CP account roles ? Delete the account role 'delete-rosa-HCP-ROSA-Installer-Role'? Yes I: Deleting account role 'delete-rosa-HCP-ROSA-Installer-Role' ? Delete the account role 'delete-rosa-HCP-ROSA-Support-Role'? Yes I: Deleting account role 'delete-rosa-HCP-ROSA-Support-Role' ? Delete the account role 'delete-rosa-HCP-ROSA-Worker-Role'? Yes I: Deleting account role 'delete-rosa-HCP-ROSA-Worker-Role' I: Successfully deleted the hosted CP account roles Delete the account-wide in-line and Operator policies: Under the Policies page in the AWS IAM Console , filter the list of policies by the prefix that you specified when you created the account-wide roles and policies. Note If you did not specify a custom prefix when you created the account-wide roles, search for the default prefix, ManagedOpenShift . Delete the account-wide in-line policies and Operator policies by using the AWS IAM Console . For more information about deleting IAM policies by using the AWS IAM Console, see Deleting IAM policies in the AWS documentation. Important The account-wide in-line and Operator IAM policies might be used by other ROSA clusters in the same AWS account.
Only remove the roles if they are not required by other clusters. 10.3.2. Unlinking and deleting the OpenShift Cluster Manager and user IAM roles When you install a ROSA with HCP cluster by using Red Hat OpenShift Cluster Manager, you also create OpenShift Cluster Manager and user Identity and Access Management (IAM) roles that link to your Red Hat organization. After deleting your cluster, you can unlink and delete the roles by using the ROSA CLI ( rosa ). Important The OpenShift Cluster Manager and user IAM roles are required if you want to use OpenShift Cluster Manager to install and manage other ROSA with HCP clusters in the same AWS account. Only remove the roles if you no longer need to use the OpenShift Cluster Manager to install ROSA with HCP clusters. Prerequisites You created OpenShift Cluster Manager and user IAM roles and linked them to your Red Hat organization. You have installed and configured the latest ROSA CLI ( rosa ) on your installation host. You have organization administrator privileges in your Red Hat organization. Procedure Unlink the OpenShift Cluster Manager IAM role from your Red Hat organization and delete the role: List the OpenShift Cluster Manager IAM roles in your AWS account: USD rosa list ocm-roles Example output I: Fetching ocm roles ROLE NAME ROLE ARN LINKED ADMIN AWS Managed ManagedOpenShift-OCM-Role-<red_hat_organization_external_id> arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-OCM-Role-<red_hat_organization_external_id> Yes Yes Yes If your OpenShift Cluster Manager IAM role is listed as linked in the output of the preceding command, unlink the role from your Red Hat organization by running the following command: USD rosa unlink ocm-role --role-arn <arn> 1 1 Replace <arn> with the Amazon Resource Name (ARN) for your OpenShift Cluster Manager IAM role. The ARN is specified in the output of the preceding command. In the preceding example, the ARN is in the format arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-OCM-Role-<red_hat_organization_external_id> . Example output I: Unlinking OCM role ? Unlink the 'arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-OCM-Role-<red_hat_organization_external_id>' role from organization '<red_hat_organization_id>'? Yes I: Successfully unlinked role-arn 'arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-OCM-Role-<red_hat_organization_external_id>' from organization account '<red_hat_organization_id>' Delete the OpenShift Cluster Manager IAM role and policies: USD rosa delete ocm-role --role-arn <arn> Example output I: Deleting OCM role ? OCM Role ARN: arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-OCM-Role-<red_hat_organization_external_id> ? Delete 'arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-OCM-Role-<red_hat_organization_external_id>' ocm role? Yes ? OCM role deletion mode: auto 1 I: Successfully deleted the OCM role 1 Specifies the deletion mode. You can use auto mode to automatically delete the OpenShift Cluster Manager IAM role and policies. In manual mode, the ROSA CLI generates the aws commands needed to delete the role and policies. manual mode enables you to review the details before running the aws commands manually.
Unlink the user IAM role from your Red Hat organization and delete the role: List the user IAM roles in your AWS account: USD rosa list user-roles Example output I: Fetching user roles ROLE NAME ROLE ARN LINKED ManagedOpenShift-User-<ocm_user_name>-Role arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-User-<ocm_user_name>-Role Yes If your user IAM role is listed as linked in the output of the preceding command, unlink the role from your Red Hat organization: USD rosa unlink user-role --role-arn <arn> 1 1 Replace <arn> with the Amazon Resource Name (ARN) for your user IAM role. The ARN is specified in the output of the preceding command. In the preceding example, the ARN is in the format arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-User-<ocm_user_name>-Role . Example output I: Unlinking user role ? Unlink the 'arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-User-<ocm_user_name>-Role' role from the current account '<ocm_user_account_id>'? Yes I: Successfully unlinked role ARN 'arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-User-<ocm_user_name>-Role' from account '<ocm_user_account_id>' Delete the user IAM role: USD rosa delete user-role --role-arn <arn> Example output I: Deleting user role ? User Role ARN: arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-User-<ocm_user_name>-Role ? Delete the 'arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-User-<ocm_user_name>-Role' role from the AWS account? Yes ? User role deletion mode: auto 1 I: Successfully deleted the user role 1 Specifies the deletion mode. You can use auto mode to automatically delete the user IAM role. In manual mode, the ROSA CLI generates the aws command needed to delete the role. manual mode enables you to review the details before running the aws command manually. 10.4. Additional resources For information about the cluster delete protection feature, see Edit objects . For information about the AWS IAM resources for ROSA clusters that use STS, see About IAM resources . For information on cluster errors that are due to missing IAM roles, see Repairing a cluster that cannot be deleted . | [
"rosa describe cluster --cluster=<cluster_name> 1",
"Name: mycluster ID: 1s3v4x39lhs8sm49m90mi0822o34544a 1 Operator IAM Roles: 2 - arn:aws:iam::<aws_account_id>:role/mycluster-x4q9-openshift-machine-api-aws-cloud-credentials - arn:aws:iam::<aws_account_id>:role/mycluster-x4q9-openshift-cloud-credential-operator-cloud-crede - arn:aws:iam::<aws_account_id>:role/mycluster-x4q9-openshift-image-registry-installer-cloud-creden - arn:aws:iam::<aws_account_id>:role/mycluster-x4q9-openshift-ingress-operator-cloud-credentials - arn:aws:iam::<aws_account_id>:role/mycluster-x4q9-openshift-cluster-csi-drivers-ebs-cloud-credent - arn:aws:iam::<aws_account_id>:role/mycluster-x4q9-openshift-cloud-network-config-controller-cloud State: ready Private: No Created: May 13 2022 11:26:15 UTC Details Page: https://console.redhat.com/openshift/details/s/296kyEFwzoy1CREQicFRdZybrc0 OIDC Endpoint URL: https://oidc.op1.openshiftapps.com/<oidc_config_id> 3",
"rosa delete cluster --cluster=<cluster_name> --watch",
"rosa delete oidc-provider -c <cluster_id> --mode auto 1",
"rosa delete operator-roles -c <cluster_id> --mode auto 1",
"rosa list account-roles",
"I: Fetching account roles ROLE NAME ROLE TYPE ROLE ARN OPENSHIFT VERSION ManagedOpenShift-ControlPlane-Role Control plane arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-ControlPlane-Role 4.10 ManagedOpenShift-Installer-Role Installer arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-Installer-Role 4.10 ManagedOpenShift-Support-Role Support arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-Support-Role 4.10 ManagedOpenShift-Worker-Role Worker arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-Worker-Role 4.10",
"I: Fetching account roles ROLE NAME ROLE TYPE ROLE ARN OPENSHIFT VERSION AWS Managed ManagedOpenShift-HCP-ROSA-Installer-Role Installer arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-HCP-ROSA-Installer-Role 4.18 Yes ManagedOpenShift-HCP-ROSA-Support-Role Support arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-HCP-ROSA-Support-Role 4.18 Yes ManagedOpenShift-HCP-ROSA-Worker-Role Worker arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-HCP-ROSA-Worker-Role 4.18 Yes",
"rosa delete account-roles --prefix <prefix> --mode auto 1",
"W: There are no classic account roles to be deleted I: Deleting hosted CP account roles ? Delete the account role 'delete-rosa-HCP-ROSA-Installer-Role'? Yes I: Deleting account role 'delete-rosa-HCP-ROSA-Installer-Role' ? Delete the account role 'delete-rosa-HCP-ROSA-Support-Role'? Yes I: Deleting account role 'delete-rosa-HCP-ROSA-Support-Role' ? Delete the account role 'delete-rosa-HCP-ROSA-Worker-Role'? Yes I: Deleting account role 'delete-rosa-HCP-ROSA-Worker-Role' I: Successfully deleted the hosted CP account roles",
"rosa list ocm-roles",
"I: Fetching ocm roles ROLE NAME ROLE ARN LINKED ADMIN AWS Managed ManagedOpenShift-OCM-Role-<red_hat_organization_external_id> arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-OCM-Role-<red_hat_organization_external_id> Yes Yes Yes",
"rosa unlink ocm-role --role-arn <arn> 1",
"I: Unlinking OCM role ? Unlink the 'arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-OCM-Role-<red_hat_organization_external_id>' role from organization '<red_hat_organization_id>'? Yes I: Successfully unlinked role-arn 'arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-OCM-Role-<red_hat_organization_external_id>' from organization account '<red_hat_organization_id>'",
"rosa delete ocm-role --role-arn <arn>",
"I: Deleting OCM role ? OCM Role ARN: arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-OCM-Role-<red_hat_organization_external_id> ? Delete 'arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-OCM-Role-<red_hat_organization_external_id>' ocm role? Yes ? OCM role deletion mode: auto 1 I: Successfully deleted the OCM role",
"rosa list user-roles",
"I: Fetching user roles ROLE NAME ROLE ARN LINKED ManagedOpenShift-User-<ocm_user_name>-Role arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-User-<ocm_user_name>-Role Yes",
"rosa unlink user-role --role-arn <arn> 1",
"I: Unlinking user role ? Unlink the 'arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-User-<ocm_user_name>-Role' role from the current account '<ocm_user_account_id>'? Yes I: Successfully unlinked role ARN 'arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-User-<ocm_user_name>-Role' from account '<ocm_user_account_id>'",
"rosa delete user-role --role-arn <arn>",
"I: Deleting user role ? User Role ARN: arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-User-<ocm_user_name>-Role ? Delete the 'arn:aws:iam::<aws_account_id>:role/ManagedOpenShift-User-<ocm_user_name>-Role' role from the AWS account? Yes ? User role deletion mode: auto 1 I: Successfully deleted the user role"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/install_rosa_classic_clusters/rosa-sts-deleting-cluster |
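A sketch that supports the prerequisites and troubleshooting steps above: it lists VPC peering connections still attached to the cluster VPC, which must be removed before the cluster can delete cleanly, and confirms that the cluster no longer appears before the IAM resources are removed. The <vpc_id> placeholder is an assumption for the VPC used by your cluster.

# List VPC peering connections that would block a clean cluster deletion.
aws ec2 describe-vpc-peering-connections \
  --filters Name=requester-vpc-info.vpc-id,Values=<vpc_id>
# Confirm the cluster is gone before removing the Operator roles and OIDC provider.
rosa list clusters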
Chapter 10. Pre-installation validation | Chapter 10. Pre-installation validation 10.1. Definition of pre-installation validations The Assisted Installer aims to make cluster installation as simple, efficient, and error-free as possible. The Assisted Installer performs validation checks on the configuration and the gathered telemetry before starting an installation. The Assisted Installer will use the information provided prior to installation, such as control plane topology, network configuration and hostnames. It will also use real time telemetry from the hosts you are attempting to install. When a host boots the discovery ISO, an agent will start on the host. The agent will send information about the state of the host to the Assisted Installer. The Assisted Installer uses all of this information to compute real time pre-installation validations. All validations are either blocking or non-blocking to the installation. 10.2. Blocking and non blocking validations A blocking validation will prevent progress of the installation, meaning that you will need to resolve the issue and pass the blocking validation before you can proceed. A non blocking validation is a warning and will tell you of things that might cause you a problem. 10.3. Validation types The Assisted Installer performs two types of validation: Host Host validations ensure that the configuration of a given host is valid for installation. Cluster Cluster validations ensure that the configuration of the whole cluster is valid for installation. 10.4. Host validations 10.4.1. Getting host validations by using the REST API Note If you use the web based UI, many of these validations will not show up by name. To get a list of validations consistent with the labels, use the following procedure. Prerequisites You have installed the jq utility. You have created an Infrastructure Environment by using the API or have created a cluster by using the UI. You have hosts booted with the discovery ISO You have your Cluster ID exported in your shell as CLUSTER_ID . You have credentials to use when accessing the API and have exported a token as API_TOKEN in your shell. Procedures Refresh the API token: USD source refresh-token Get all validations for all hosts: USD curl \ --silent \ --header "Authorization: Bearer USDAPI_TOKEN" \ https://api.openshift.com/api/assisted-install/v2/clusters/USDCLUSTER_ID/hosts \ | jq -r .[].validations_info \ | jq 'map(.[])' Get non-passing validations for all hosts: USD curl \ --silent \ --header "Authorization: Bearer USDAPI_TOKEN" \ https://api.openshift.com/api/assisted-install/v2/clusters/USDCLUSTER_ID/hosts \ | jq -r .[].validations_info \ | jq 'map(.[]) | map(select(.status=="failure" or .status=="pending")) | select(length>0)' 10.4.2. Host validations in detail Parameter Validation type Description connected non-blocking Checks that the host has recently communicated with the Assisted Installer. has-inventory non-blocking Checks that the Assisted Installer received the inventory from the host. has-min-cpu-cores non-blocking Checks that the number of CPU cores meets the minimum requirements. has-min-memory non-blocking Checks that the amount of memory meets the minimum requirements. has-min-valid-disks non-blocking Checks that at least one available disk meets the eligibility criteria. has-cpu-cores-for-role blocking Checks that the number of cores meets the minimum requirements for the host role. has-memory-for-role blocking Checks that the amount of memory meets the minimum requirements for the host role. 
ignition-downloadable blocking For day 2 hosts, checks that the host can download ignition configuration from the day 1 cluster. belongs-to-majority-group blocking The majority group is the largest full-mesh connectivity group on the cluster, where all members can communicate with all other members. This validation checks that hosts in a multi-node, day 1 cluster are in the majority group. valid-platform-network-settings blocking Checks that the platform is valid for the network settings. ntp-synced non-blocking Checks if an NTP server has been successfully used to synchronize time on the host. container-images-available non-blocking Checks if container images have been successfully pulled from the image registry. sufficient-installation-disk-speed blocking Checks that disk speed metrics from an earlier installation meet requirements, if they exist. sufficient-network-latency-requirement-for-role blocking Checks that the average network latency between hosts in the cluster meets the requirements. sufficient-packet-loss-requirement-for-role blocking Checks that the network packet loss between hosts in the cluster meets the requirements. has-default-route blocking Checks that the host has a default route configured. api-domain-name-resolved-correctly blocking For a multi node cluster with user managed networking. Checks that the host is able to resolve the API domain name for the cluster. api-int-domain-name-resolved-correctly blocking For a multi node cluster with user managed networking. Checks that the host is able to resolve the internal API domain name for the cluster. apps-domain-name-resolved-correctly blocking For a multi node cluster with user managed networking. Checks that the host is able to resolve the internal apps domain name for the cluster. compatible-with-cluster-platform non-blocking Checks that the host is compatible with the cluster platform dns-wildcard-not-configured blocking Checks that the wildcard DNS *.<cluster_name>.<base_domain> is not configured, because this causes known problems for OpenShift disk-encryption-requirements-satisfied non-blocking Checks that the type of host and disk encryption configured meet the requirements. non-overlapping-subnets blocking Checks that this host does not have any overlapping subnets. hostname-unique blocking Checks that the hostname is unique in the cluster. hostname-valid blocking Checks the validity of the hostname, meaning that it matches the general form of hostnames and is not forbidden. belongs-to-machine-cidr blocking Checks that the host IP is in the address range of the machine CIDR. lso-requirements-satisfied blocking Validates that the cluster meets the requirements of the Local Storage Operator. odf-requirements-satisfied blocking Validates that the cluster meets the requirements of the Openshift Data Foundation Operator. The cluster has a minimum of 3 hosts. The cluster has only 3 masters or a minimum of 3 workers. The cluster has 3 eligible disks and each host must have an eligible disk. The host role must not be "Auto Assign" for clusters with more than three hosts. cnv-requirements-satisfied blocking Validates that the cluster meets the requirements of Container Native Virtualization. The BIOS of the host must have CPU virtualization enabled. Host must have enough CPU cores and RAM available for Container Native Virtualization. Will validate the Host Path Provisioner if necessary. lvm-requirements-satisfied blocking Validates that the cluster meets the requirements of the Logical Volume Manager Operator. 
Host has at least one additional empty disk, not partitioned and not formatted. vsphere-disk-uuid-enabled non-blocking Verifies that each valid disk sets disk.EnableUUID to true . In VSphere this will result in each disk having a UUID. compatible-agent blocking Checks that the discovery agent version is compatible with the agent docker image version. no-skip-installation-disk blocking Checks that installation disk is not skipping disk formatting. no-skip-missing-disk blocking Checks that all disks marked to skip formatting are in the inventory. A disk ID can change on reboot, and this validation prevents issues caused by that. media-connected blocking Checks the connection of the installation media to the host. machine-cidr-defined non-blocking Checks that the machine network definition exists for the cluster. id-platform-network-settings blocking Checks that the platform is compatible with the network settings. Some platforms are only permitted when installing Single Node Openshift or when using User Managed Networking. 10.5. Cluster validations 10.5.1. Getting cluster validations by using the REST API Note: If you use the web based UI, many of these validations will not show up by name. To get a list of validations consistent with the labels, use the following procedure. Prerequisites You have installed the jq utility. You have created an Infrastructure Environment by using the API or have created a cluster by using the UI. You have your Cluster ID exported in your shell as CLUSTER_ID . You have credentials to use when accessing the API and have exported a token as API_TOKEN in your shell. Procedures Refresh the API token: USD source refresh-token Get all cluster validations: USD curl \ --silent \ --header "Authorization: Bearer USDAPI_TOKEN" \ https://api.openshift.com/api/assisted-install/v2/clusters/USDCLUSTER_ID \ | jq -r .validations_info \ | jq 'map(.[])' Get non-passing cluster validations: USD curl \ --silent \ --header "Authorization: Bearer USDAPI_TOKEN" \ https://api.openshift.com/api/assisted-install/v2/clusters/USDCLUSTER_ID \ | jq -r .validations_info \ | jq '. | map(.[] | select(.status=="failure" or .status=="pending")) | select(length>0)' 10.5.2. Cluster validations in detail Parameter Validation type Description machine-cidr-defined non-blocking Checks that the machine network definition exists for the cluster. cluster-cidr-defined non-blocking Checks that the cluster network definition exists for the cluster. service-cidr-defined non-blocking Checks that the service network definition exists for the cluster. no-cidrs-overlapping blocking Checks that the defined networks do not overlap. networks-same-address-families blocking Checks that the defined networks share the same address families (valid address families are IPv4, IPv6) network-prefix-valid blocking Checks the cluster network prefix to ensure that it is valid and allows enough address space for all hosts. machine-cidr-equals-to-calculated-cidr blocking For a non user managed networking cluster. Checks that apiVIPs or ingressVIPs are members of the machine CIDR if they exist. api-vips-defined non-blocking For a non user managed networking cluster. Checks that apiVIPs exist. api-vips-valid blocking For a non user managed networking cluster. Checks if the apiVIPs belong to the machine CIDR and are not in use. ingress-vips-defined blocking For a non user managed networking cluster. Checks that ingressVIPs exist. ingress-vips-valid non-blocking For a non user managed networking cluster. 
Checks if the ingressVIPs belong to the machine CIDR and are not in use. all-hosts-are-ready-to-install blocking Checks that all hosts in the cluster are in the "ready to install" status. sufficient-masters-count blocking This validation only applies to multi-node clusters. The cluster must have exactly three masters. If the cluster has worker nodes, a minimum of 2 worker nodes must exist. dns-domain-defined non-blocking Checks that the base DNS domain exists for the cluster. pull-secret-set non-blocking Checks that the pull secret exists. Does not check that the pull secret is valid or authorized. ntp-server-configured blocking Checks that each of the host clocks are no more than 4 minutes out of sync with each other. lso-requirements-satisfied blocking Validates that the cluster meets the requirements of the Local Storage Operator. odf-requirements-satisfied blocking Validates that the cluster meets the requirements of the Openshift Data Foundation Operator. The cluster has a minimum of 3 hosts. The cluster has only 3 masters or a minimum of 3 workers. The cluster has 3 eligible disks and each host must have an eligible disk. cnv-requirements-satisfied blocking Validates that the cluster meets the requirements of Container Native Virtualization. The CPU architecture for the cluster is x86 lvm-requirements-satisfied blocking Validates that the cluster meets the requirements of the Logical Volume Manager Operator. The cluster must be single node. The cluster must be running Openshift >= 4.11.0. network-type-valid blocking Checks the validity of the network type if it exists. The network type must be OpenshiftSDN or OVNKubernetes. OpenshiftSDN does not support IPv6 or Single Node Openshift. OVNKubernetes does not support VIP DHCP allocation. | [
"source refresh-token",
"curl --silent --header \"Authorization: Bearer USDAPI_TOKEN\" https://api.openshift.com/api/assisted-install/v2/clusters/USDCLUSTER_ID/hosts | jq -r .[].validations_info | jq 'map(.[])'",
"curl --silent --header \"Authorization: Bearer USDAPI_TOKEN\" https://api.openshift.com/api/assisted-install/v2/clusters/USDCLUSTER_ID/hosts | jq -r .[].validations_info | jq 'map(.[]) | map(select(.status==\"failure\" or .status==\"pending\")) | select(length>0)'",
"source refresh-token",
"curl --silent --header \"Authorization: Bearer USDAPI_TOKEN\" https://api.openshift.com/api/assisted-install/v2/clusters/USDCLUSTER_ID | jq -r .validations_info | jq 'map(.[])'",
"curl --silent --header \"Authorization: Bearer USDAPI_TOKEN\" https://api.openshift.com/api/assisted-install/v2/clusters/USDCLUSTER_ID | jq -r .validations_info | jq '. | map(.[] | select(.status==\"failure\" or .status==\"pending\")) | select(length>0)'"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/assisted_installer_for_openshift_container_platform/assembly_pre-installation-validation |
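Building on the filters above, the following sketch prints only the failing host validations as id and message pairs. It assumes each validation object carries id, status, and message fields, which is consistent with the status filter used in the documented queries.

# Print failing host validations as "<id>: <message>" lines.
curl --silent --header "Authorization: Bearer $API_TOKEN" \
  https://api.openshift.com/api/assisted-install/v2/clusters/$CLUSTER_ID/hosts \
  | jq -r '.[].validations_info' \
  | jq -r 'map(.[]) | map(select(.status=="failure")) | .[] | "\(.id): \(.message)"'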
function::ansi_reset_color | Name function::ansi_reset_color - Resets Select Graphic Rendition mode. Synopsis Arguments None General Syntax ansi_reset_color Description Sends an ANSI code to reset the foreground, background, and color attributes to their default values. | [
"function ansi_reset_color()"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-ansi-reset-color |
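A one-line usage sketch: it switches the terminal to a colored rendition, prints a message, and then calls ansi_reset_color to restore the defaults. It assumes the companion ansi_set_color function from the same ansi tapset is available.

# Print a colored message, then reset the rendition with ansi_reset_color.
stap -e 'probe begin { ansi_set_color(31); print("alert"); ansi_reset_color(); println(""); exit() }'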
Chapter 1. Getting ready to install MicroShift | Chapter 1. Getting ready to install MicroShift Plan for your Red Hat Device Edge by planning your Red Hat Enterprise Linux (RHEL) installation type and your MicroShift configuration. 1.1. System requirements for installing MicroShift The following conditions must be met prior to installing MicroShift: A compatible version of Red Hat Enterprise Linux (RHEL). For more information, see Compatibility table . AArch64 or x86_64 system architecture. 2 CPU cores. 2 GB RAM. Installing from the network (UEFI HTTPs or PXE boot) requires 3 GB RAM for RHEL. 10 GB of storage. You have an active MicroShift subscription on your Red Hat account. If you do not have a subscription, contact your sales representative for more information. if your workload requires Persistent Volumes (PVs), you have a Logical Volume Manager (LVM) Volume Group (VG) with sufficient free capacity for the workloads. Important These requirements are the minimum system requirements for MicroShift and Red Hat Enterprise Linux (RHEL). Add the system requirements for the workload you plan to run. For example, if an IoT gateway solution requires 4 GB of RAM, your system needs to have at least 2 GB for Red Hat Enterprise Linux (RHEL) and MicroShift, plus 4 GB for the workloads. 6 GB of RAM is required in total. It is recommended to allow for extra capacity for future needs if you are deploying physical devices in remote locations. If you are uncertain of the RAM required and if the budget permits, use the maximum RAM capacity that the device can support. Important Ensure you configure secure access to the system to be able to manage it accordingly. For more information, see Using secure communications between two systems with OpenSSH . 1.2. Compatibility table Plan to pair a supported version of RHEL for Edge with the MicroShift version you are using as described in the following compatibility table. Red Hat Device Edge release compatibility matrix Red Hat Enterprise Linux (RHEL) and MicroShift work together as a single solution for device-edge computing. You can update each component separately, but the product versions must be compatible. Supported configurations of Red Hat Device Edge use verified releases for each together as listed in the following table: RHEL Version(s) MicroShift Version Supported MicroShift Version -> Version Updates 9.4 4.18 4.18.0 -> 4.18.z 9.4 4.17 4.17.1 -> 4.17.z, 4.17 -> 4.18 9.4 4.16 4.16.0 -> 4.16.z, 4.16 -> 4.17, 4.16 -> 4.18 9.2, 9.3 4.15 4.15.0 -> 4.15.z, 4.15 -> 4.16 on RHEL 9.4 9.2, 9.3 4.14 4.14.0 -> 4.14.z, 4.14 -> 4.15, 4.14 -> 4.16 on RHEL 9.4 1.3. MicroShift installation tools To use MicroShift, you must already have or plan to install a RHEL type, such as on bare metal, or as a virtual machine (VM) that you provision. Although each use case has different details, each installation of Red Hat Device Edge uses RHEL tools and the OpenShift CLI ( oc ). You can use RPMs to install MicroShift on an existing RHEL machine. See Installing from an RPM package for more information. No other tools are required unless you are also installing an image-based RHEL system or VM at the same time. 1.4. RHEL installation types Where you want to run your cluster and what your application needs to do determine the RHEL installation type that you choose. For every installation target, you must configure both the operating system and MicroShift. 
Consider your application storage needs, networking for cluster or application access, and your authentication and security requirements. Understand the differences between the RHEL installation types, including the support scope of each, and the tools used. 1.4.1. Using RPMs, or package-based installation This simple installation type uses a basic command to install MicroShift on an existing RHEL machine. See Installing from an RPM package for more information. No other tools are required unless you are also installing a RHEL system or virtual machine (VM) at the same time. 1.4.2. RHEL image-based installations Image-based installation types involve creating an rpm-ostree -based, immutable version of RHEL that is optimized for edge deployment. RHEL for Edge can be deployed to the edge in production environments. This installation type can be used where network connections are present or completely offline, depending on the local environment. Image mode for RHEL is available with the Technology Preview support scope. This image-based installation type is based on OCI container images and bootable containers. See bootc: Getting started with bootable containers for an introduction to bootc technology. When choosing an image-based installation, consider whether the installation target is intended to be in an offline or networked state, where you plan to build system images, and how you plan to load your Red Hat Device Edge. Use the following scenarios as general guidance: If you build either a fully self-contained RHEL for Edge or an image mode for RHEL ISO outside a disconnected environment, and then install the ISO locally on your edge devices, you likely do not need an RPM repository or a mirror registry. If you build an ISO outside a disconnected environment that does not include the container images, but consists of only the RPMs, you need a mirror registry inside your disconnected environment. You use your mirror registry to pull container images. If you build images inside a disconnected environment, or use package mode for installations, you need both a mirror registry and a local RPM mirror repository. You can use either the RHEL reposync utility or Red Hat Satellite for advanced use cases. See How to create a local mirror of the latest update for Red Hat Enterprise Linux 8 and 9 without using Satellite Server and Red Hat Satellite for more information. 1.5. RHEL installation tools and concepts Familiarize yourself with the following RHEL tools and concepts: A Kickstart file, which contains the configuration and instructions used during the installation of your specific operating system. RHEL image builder is a tool for creating deployment-ready customized system images. RHEL image builder uses a blueprint that you create to make the ISO. RHEL image builder is best installed on a RHEL VM and is built with the composer-cli tool. To set up these tools and review the workflow, see the following RHEL documentation links: Introducing the RHEL image builder command-line interface Installing image builder Creating a system image with RHEL image builder in the command-line interface A blueprint file directs RHEL image builder to the items to include in the ISO. An image blueprint provides a persistent definition of image customizations. You can create multiple builds from a single blueprint. You can also edit an existing blueprint to build a new ISO as requirements change. For more information, see Creating a blueprint by using the command-line interface in the RHEL documentation. 
An ISO, which is the bootable operating system on which MicroShift runs. See Creating a boot ISO installer image using the RHEL image builder CLI , Installing a bootable ISO to a media and booting it , and Embedding in a RHEL for Edge image using image builder . 1.6. Red Hat Device Edge installation steps For most installation types, you must also take the following steps: Download the pull secret from the Red Hat Hybrid Cloud Console. Be ready to configure MicroShift by adding parameters and values to the MicroShift YAML configuration file. See Using the MicroShift configuration file for more information. Decide whether you need to configure storage for the application and tasks you are using in your MicroShift cluster, or disable the MicroShift storage plug-in completely. For more information about creating volume groups and persistent volumes on RHEL, see Overview of logical volume management . For more information about the MicroShift plug-in, see Dynamic storage using the LVMS plugin . Configure networking settings according to the access needs you plan for your MicroShift cluster and applications. Consider whether you want to use single or dual-stack networks, configure a firewall, or configure routes. For more information about MicroShift networking options, see Understanding networking settings . Install the OpenShift CLI ( oc ) to access your cluster, see Getting started with the OpenShift CLI . Note Red Hat Enterprise Linux for Real Time (real-time kernel) can be used where predictable latency is critical. Workload partitioning is also required for low-latency applications. For more information about low latency and the real-time kernel, see Configuring low latency . Additional resources Mirroring container images for disconnected installations . | null | https://docs.redhat.com/en/documentation/red_hat_build_of_microshift/4.18/html/getting_ready_to_install_microshift/microshift-install-get-ready |
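A quick pre-installation check that mirrors the system requirements listed above; the volume group query only matters if your workloads need persistent volumes through the LVMS plugin.

nproc        # expect at least 2 CPU cores
free -h      # expect at least 2 GB RAM plus whatever the workload needs
df -h /      # expect at least 10 GB of storage
sudo vgs     # confirm a volume group with free capacity if PVs are required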
Chapter 8. Windows node upgrades | Chapter 8. Windows node upgrades You can ensure your Windows nodes have the latest updates by upgrading the Windows Machine Config Operator (WMCO). 8.1. Windows Machine Config Operator upgrades When a new version of the Windows Machine Config Operator (WMCO) is released that is compatible with the current cluster version, the Operator is upgraded based on the upgrade channel and subscription approval strategy it was installed with when using the Operator Lifecycle Manager (OLM). The WMCO upgrade results in the Kubernetes components in the Windows machine being upgraded. Note If you are upgrading to a new version of the WMCO and want to use cluster monitoring, you must have the openshift.io/cluster-monitoring=true label present in the WMCO namespace. If you add the label to a pre-existing WMCO namespace, and there are already Windows nodes configured, restart the WMCO pod to allow monitoring graphs to display. For a non-disruptive upgrade, the WMCO terminates the Windows machines configured by the previous version of the WMCO and recreates them using the current version. This is done by deleting the Machine object, which results in the drain and deletion of the Windows node. To facilitate an upgrade, the WMCO adds a version annotation to all the configured nodes. During an upgrade, a mismatch in version annotation results in the deletion and recreation of a Windows machine. To have minimal service disruptions during an upgrade, the WMCO only updates one Windows machine at a time. Important The WMCO is only responsible for updating Kubernetes components, not for Windows operating system updates. You provide the Windows image when creating the VMs; therefore, you are responsible for providing an updated image. You can provide an updated Windows image by changing the image configuration in the MachineSet spec. For more information on Operator upgrades using the Operator Lifecycle Manager (OLM), see Updating installed Operators . | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/windows_container_support_for_openshift/windows-node-upgrades |
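A sketch of the monitoring note above: label the WMCO namespace and restart the Operator pod so that monitoring graphs display. The namespace name and the pod label selector shown here are typical defaults and are assumptions; adjust them to your cluster.

# Label the WMCO namespace for cluster monitoring and restart the Operator pod.
oc label namespace openshift-windows-machine-config-operator \
  openshift.io/cluster-monitoring=true --overwrite
oc -n openshift-windows-machine-config-operator delete pod \
  -l name=windows-machine-config-operator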
function::symline | function::symline Name function::symline - Return the line number of an address. Synopsis Arguments addr The address to translate. Description Returns the (approximate) line number of the given address, if known. If the line number cannot be found, the hex string representation of the address will be returned. | [
"symline:string(addr:long)"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-symline |
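A usage sketch, assuming kernel debuginfo is installed so that line numbers can be resolved; the probe point is only an example.

# Print the source line of the probe point address, then exit.
stap -e 'probe kernel.function("vfs_read") { println(symline(addr())); exit() }'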
2.3. Boot Loader Options | 2.3. Boot Loader Options Figure 2.3. Boot Loader Options GRUB is the default boot loader for Red Hat Enterprise Linux. If you do not want to install a boot loader, select Do not install a boot loader . If you choose not to install a boot loader, make sure you create a boot diskette or have another way to boot your system, such as a third-party boot loader. You must choose where to install the boot loader (the Master Boot Record or the first sector of the /boot partition). Install the boot loader on the MBR if you plan to use it as your boot loader. To pass any special parameters to the kernel to be used when the system boots, enter them in the Kernel parameters text field. For example, if you have an IDE CD-ROM Writer, you can tell the kernel to use the SCSI emulation driver that must be loaded before using cdrecord by configuring hdd=ide-scsi as a kernel parameter (where hdd is the CD-ROM device). You can password protect the GRUB boot loader by configuring a GRUB password. Select Use GRUB password , and enter a password in the Password field. Type the same password in the Confirm Password text field. To save the password as an encrypted password in the file, select Encrypt GRUB password . If the encryption option is selected, when the file is saved, the plain text password that you typed is encrypted and written to the kickstart file. If you type an already encrypted password, unselect the option to encrypt it. If Upgrade an existing installation is selected on the Installation Method page, select Upgrade existing boot loader to upgrade the existing boot loader configuration, while preserving the old entries. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/RHKSTOOL-Boot_Loader_Options |
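For reference, the screen described above writes a bootloader directive into the generated kickstart file. A sketch of inspecting that line, assuming the file is named ks.cfg and using the example kernel parameter from the text; the <encrypted_password> value is a placeholder.

grep '^bootloader' ks.cfg
bootloader --location=mbr --append="hdd=ide-scsi" --md5pass=<encrypted_password>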
5.2.25. /proc/pci | 5.2.25. /proc/pci This file contains a full listing of every PCI device on the system. Depending on the number of PCI devices, /proc/pci can be rather long. A sampling of this file from a basic system looks similar to the following: This output shows a list of all PCI devices, sorted in the order of bus, device, and function. Beyond providing the name and version of the device, this list also gives detailed IRQ information so an administrator can quickly look for conflicts. Note To get a more readable version of this information, type: | [
"Bus 0, device 0, function 0: Host bridge: Intel Corporation 440BX/ZX - 82443BX/ZX Host bridge (rev 3). Master Capable. Latency=64. Prefetchable 32 bit memory at 0xe4000000 [0xe7ffffff]. Bus 0, device 1, function 0: PCI bridge: Intel Corporation 440BX/ZX - 82443BX/ZX AGP bridge (rev 3). Master Capable. Latency=64. Min Gnt=128. Bus 0, device 4, function 0: ISA bridge: Intel Corporation 82371AB PIIX4 ISA (rev 2). Bus 0, device 4, function 1: IDE interface: Intel Corporation 82371AB PIIX4 IDE (rev 1). Master Capable. Latency=32. I/O at 0xd800 [0xd80f]. Bus 0, device 4, function 2: USB Controller: Intel Corporation 82371AB PIIX4 USB (rev 1). IRQ 5. Master Capable. Latency=32. I/O at 0xd400 [0xd41f]. Bus 0, device 4, function 3: Bridge: Intel Corporation 82371AB PIIX4 ACPI (rev 2). IRQ 9. Bus 0, device 9, function 0: Ethernet controller: Lite-On Communications Inc LNE100TX (rev 33). IRQ 5. Master Capable. Latency=32. I/O at 0xd000 [0xd0ff]. Non-prefetchable 32 bit memory at 0xe3000000 [0xe30000ff]. Bus 0, device 12, function 0: VGA compatible controller: S3 Inc. ViRGE/DX or /GX (rev 1). IRQ 11. Master Capable. Latency=32. Min Gnt=4.Max Lat=255. Non-prefetchable 32 bit memory at 0xdc000000 [0xdfffffff].",
"/sbin/lspci -vb"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-proc-pci |
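To spot potential IRQ conflicts quickly, you can count how often each IRQ line appears in the file, as in this small sketch:

# IRQs listed more than once are shared between devices.
grep 'IRQ' /proc/pci | sort | uniq -c | sort -rn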
Chapter 11. Using the Service Interconnect Console | Chapter 11. Using the Service Interconnect Console The Service Interconnect Console provides data and visualizations of the traffic flow between Skupper sites. 11.1. Enabling the Service Interconnect Console By default, when you create a Skupper site, a Service Interconnect Console is not available. When enabled, the Service Interconnect Console URL is displayed whenever you check site status using skupper status . Prerequisites A Kubernetes namespace where you plan to create a site Procedure Determine which site in your service network is best to enable the console. Enabling the console also requires that you enable the flow-collector component, which requires resources to process traffic data from all sites. You might locate the console using the following criteria: Does the service network cross a firewall? For example, if you want the console to be available only inside the firewall, you need to locate the flow-collector and console on a site inside the firewall. Is there a site that processes more traffic than other sites? For example, if you have a frontend component that calls a set of services from other sites, it might make sense to locate the flow collector and console on that site to minimize data traffic. Is there a site with more or cheaper resources that you want to use? For example, if you have two sites, A and B, and resources are more expensive on site A, you might want to locate the flow collector and console on site B. Create a site with the flow collector and console enabled: USD skupper init --enable-console --enable-flow-collector 11.2. Accessing the Service Interconnect Console By default, the Service Interconnect Console is protected by credentials available in the skupper-console-users secret. Procedure Determine the Service Interconnect Console URL using the skupper CLI, for example: USD skupper status Skupper is enabled for namespace "west" in interior mode. It is not connected to any other sites. It has no exposed services. The site console url is: https://skupper-west.apps-crc.testing Browse to the Service Interconnect Console URL. The credential prompt depends on how the site was created using skupper init : Using the --console-auth unsecured option, you are not prompted for credentials. Using the --console-auth openshift option, you are prompted to enter OpenShift cluster credentials. Using the default or --console-user <user> --console-password <password> options, you are prompted to enter those credentials. If you created the site using default settings, that is skupper init , a random password is generated for the admin user. To retrieve the password of the admin user for a Kubernetes site: USD kubectl get secret skupper-console-users -o jsonpath={.data.admin} | base64 -d JNZWzMHtyg To retrieve the password of the admin user for a Podman site: USD cat ~/.local/share/containers/storage/volumes/skupper-console-users/_data/admin JNZWzMHtyg 11.3. Exploring the Service Interconnect Console After exposing a service on the service network, you create an address , that is, a service name and port number associated with a site. There might be many replicas associated with an address. These replicas are shown in the Service Interconnect Console as processes . Not all participants on a service network are services. For example, a frontend deployment might call an exposed service named backend , but that frontend is not part of the service network. In the console, both are shown so that you can view the traffic and these are called components .
The Service Interconnect Console provides an overview of the following: Topology Addresses Sites Components Processes The Service Interconnect Console also provides useful networking information about the service network, for example, traffic levels. Check the Sites tab. All your sites should be listed. See the Topology tab to view how the sites are linked. Check that all the services you exposed are visible in the Components tab. Click a component to show the component details and associated processes. Click on a process to display the process traffic. Note The process detail displays the associated image, host, and addresses. You can also view the clients that are calling the process. Click Addresses and choose an address to show the details for that address. This shows the set of servers that are exposed across the service network. Tip To view information about each window, click the ? icon. | [
"skupper init --enable-console --enable-flow-collector",
"skupper status Skupper is enabled for namespace \"west\" in interior mode. It is not connected to any other sites. It has no exposed services. The site console url is: https://skupper-west.apps-crc.testing",
"kubectl get secret skupper-console-users -o jsonpath={.data.admin} | base64 -d JNZWzMHtyg",
"cat ~/.local/share/containers/storage/volumes/skupper-console-users/_data/admin JNZWzMHtyg"
] | https://docs.redhat.com/en/documentation/red_hat_service_interconnect/1.8/html/using_service_interconnect/skupper-console |
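If you prefer explicit credentials over the generated admin password described above, the following sketch creates the site with the console options mentioned in the text and then shows the console URL; the user name and password values are examples.

skupper init --enable-console --enable-flow-collector \
  --console-user admin --console-password <password>
skupper status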
7.71. hal-info | 7.71. hal-info 7.71.1. RHBA-2015:1268 - hal-info bug fix update An updated hal-info package that fixes one bug and adds one enhancement is now available for Red Hat Enterprise Linux 6. The hal-info package contains various device information files (also known as .fdi files) for the hal package. Bug Fix BZ# 841419 Previously, the "Mic Mute" and "Touchpad Toggle" keys did not transmit the correct symbol in Lenovo laptops. With this update, the aforementioned keys are correctly recognized by the X.Org Server, and the XF86AudioMicMute and XF86TouchpadToggle signals are transmitted successfully. Enhancement BZ# 1172669 To support the various "Fn" keys on latest Toshiba laptops, this update changes the hal-info remapping rules for Toshiba laptops from the provided kernel keycode to a keycode compatible with X. Users of hal-info are advised to upgrade to this updated package, which fixes this bug and adds this enhancement. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-hal-info |
Data Grid downloads | Data Grid downloads Access the Data Grid Software Downloads on the Red Hat customer portal. Note You must have a Red Hat account to access and download Data Grid software. | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/hot_rod_node.js_client_guide/rhdg-downloads_datagrid |
Chapter 17. Deploying distributed units manually on single-node OpenShift | Chapter 17. Deploying distributed units manually on single-node OpenShift The procedures in this topic tell you how to manually deploy clusters on a small number of single nodes as a distributed unit (DU) during installation. The procedures do not describe how to install single-node OpenShift. This can be accomplished through many mechanisms. Rather, they are intended to capture the elements that should be configured as part of the installation process: Networking is needed to enable connectivity to the single-node OpenShift DU when the installation is complete. Workload partitioning, which can only be configured during installation. Additional items that help minimize the potential reboots post installation. 17.1. Configuring the distributed units (DUs) This section describes a set of configurations for an OpenShift Container Platform cluster so that it meets the feature and performance requirements necessary for running a distributed unit (DU) application. Some of this content must be applied during installation and other configurations can be applied post-install. After you have installed the single-node OpenShift DU, further configuration is needed to enable the platform to carry a DU workload. The configurations in this section are applied to the cluster after installation in order to configure the cluster for DU workloads. 17.1.1. Enabling workload partitioning A key feature to enable as part of a single-node OpenShift installation is workload partitioning. This limits the cores allowed to run platform services, maximizing the CPU core for application payloads. You must configure workload partitioning at cluster installation time. Note You can enable workload partitioning during cluster installation only. You cannot disable workload partitioning post-installation. However, you can reconfigure workload partitioning by updating the cpu value that you define in the performance profile, and in the related cpuset value in the MachineConfig custom resource (CR). Procedure The base64-encoded content below contains the CPU set that the management workloads are constrained to. This content must be adjusted to match the set specified in the performanceprofile and must be accurate for the number of cores on the cluster. 
apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 02-master-workload-partitioning spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,W2NyaW8ucnVudGltZS53b3JrbG9hZHMubWFuYWdlbWVudF0KYWN0aXZhdGlvbl9hbm5vdGF0aW9uID0gInRhcmdldC53b3JrbG9hZC5vcGVuc2hpZnQuaW8vbWFuYWdlbWVudCIKYW5ub3RhdGlvbl9wcmVmaXggPSAicmVzb3VyY2VzLndvcmtsb2FkLm9wZW5zaGlmdC5pbyIKW2NyaW8ucnVudGltZS53b3JrbG9hZHMubWFuYWdlbWVudC5yZXNvdXJjZXNdCmNwdXNoYXJlcyA9IDAKQ1BVcyA9ICIwLTEsIDUyLTUzIgo= mode: 420 overwrite: true path: /etc/crio/crio.conf.d/01-workload-partitioning user: name: root - contents: source: data:text/plain;charset=utf-8;base64,ewogICJtYW5hZ2VtZW50IjogewogICAgImNwdXNldCI6ICIwLTEsNTItNTMiCiAgfQp9Cg== mode: 420 overwrite: true path: /etc/kubernetes/openshift-workload-pinning user: name: root The contents of /etc/crio/crio.conf.d/01-workload-partitioning should look like this: [crio.runtime.workloads.management] activation_annotation = "target.workload.openshift.io/management" annotation_prefix = "resources.workload.openshift.io" [crio.runtime.workloads.management.resources] cpushares = 0 cpuset = "0-1, 52-53" 1 1 The cpuset value varies based on the installation. If Hyper-Threading is enabled, specify both threads for each core. The cpuset value must match the reserved CPUs that you define in the spec.cpu.reserved field in the performance profile. If Hyper-Threading is enabled, specify both threads of each core. The CPUs value must match the reserved CPU set specified in the performance profile. This content should be base64 encoded and provided in the 01-workload-partitioning-content in the manifest above. The contents of /etc/kubernetes/openshift-workload-pinning should look like this: { "management": { "cpuset": "0-1,52-53" 1 } } 1 The cpuset must match the cpuset value in /etc/crio/crio.conf.d/01-workload-partitioning . 17.1.2. Configuring the container mount namespace To reduce the overall management footprint of the platform, a machine configuration is provided to contain the mount points. No configuration changes are needed. 
Use the provided settings: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: container-mount-namespace-and-kubelet-conf-master spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,IyEvYmluL2Jhc2gKCmRlYnVnKCkgewogIGVjaG8gJEAgPiYyCn0KCnVzYWdlKCkgewogIGVjaG8gVXNhZ2U6ICQoYmFzZW5hbWUgJDApIFVOSVQgW2VudmZpbGUgW3Zhcm5hbWVdXQogIGVjaG8KICBlY2hvIEV4dHJhY3QgdGhlIGNvbnRlbnRzIG9mIHRoZSBmaXJzdCBFeGVjU3RhcnQgc3RhbnphIGZyb20gdGhlIGdpdmVuIHN5c3RlbWQgdW5pdCBhbmQgcmV0dXJuIGl0IHRvIHN0ZG91dAogIGVjaG8KICBlY2hvICJJZiAnZW52ZmlsZScgaXMgcHJvdmlkZWQsIHB1dCBpdCBpbiB0aGVyZSBpbnN0ZWFkLCBhcyBhbiBlbnZpcm9ubWVudCB2YXJpYWJsZSBuYW1lZCAndmFybmFtZSciCiAgZWNobyAiRGVmYXVsdCAndmFybmFtZScgaXMgRVhFQ1NUQVJUIGlmIG5vdCBzcGVjaWZpZWQiCiAgZXhpdCAxCn0KClVOSVQ9JDEKRU5WRklMRT0kMgpWQVJOQU1FPSQzCmlmIFtbIC16ICRVTklUIHx8ICRVTklUID09ICItLWhlbHAiIHx8ICRVTklUID09ICItaCIgXV07IHRoZW4KICB1c2FnZQpmaQpkZWJ1ZyAiRXh0cmFjdGluZyBFeGVjU3RhcnQgZnJvbSAkVU5JVCIKRklMRT0kKHN5c3RlbWN0bCBjYXQgJFVOSVQgfCBoZWFkIC1uIDEpCkZJTEU9JHtGSUxFI1wjIH0KaWYgW1sgISAtZiAkRklMRSBdXTsgdGhlbgogIGRlYnVnICJGYWlsZWQgdG8gZmluZCByb290IGZpbGUgZm9yIHVuaXQgJFVOSVQgKCRGSUxFKSIKICBleGl0CmZpCmRlYnVnICJTZXJ2aWNlIGRlZmluaXRpb24gaXMgaW4gJEZJTEUiCkVYRUNTVEFSVD0kKHNlZCAtbiAtZSAnL15FeGVjU3RhcnQ9LipcXCQvLC9bXlxcXSQvIHsgcy9eRXhlY1N0YXJ0PS8vOyBwIH0nIC1lICcvXkV4ZWNTdGFydD0uKlteXFxdJC8geyBzL15FeGVjU3RhcnQ9Ly87IHAgfScgJEZJTEUpCgppZiBbWyAkRU5WRklMRSBdXTsgdGhlbgogIFZBUk5BTUU9JHtWQVJOQU1FOi1FWEVDU1RBUlR9CiAgZWNobyAiJHtWQVJOQU1FfT0ke0VYRUNTVEFSVH0iID4gJEVOVkZJTEUKZWxzZQogIGVjaG8gJEVYRUNTVEFSVApmaQo= mode: 493 path: /usr/local/bin/extractExecStart - contents: source: data:text/plain;charset=utf-8;base64,IyEvYmluL2Jhc2gKbnNlbnRlciAtLW1vdW50PS9ydW4vY29udGFpbmVyLW1vdW50LW5hbWVzcGFjZS9tbnQgIiRAIgo= mode: 493 path: /usr/local/bin/nsenterCmns systemd: units: - contents: | [Unit] Description=Manages a mount namespace that both kubelet and crio can use to share their container-specific mounts [Service] Type=oneshot RemainAfterExit=yes RuntimeDirectory=container-mount-namespace Environment=RUNTIME_DIRECTORY=%t/container-mount-namespace Environment=BIND_POINT=%t/container-mount-namespace/mnt ExecStartPre=bash -c "findmnt USD{RUNTIME_DIRECTORY} || mount --make-unbindable --bind USD{RUNTIME_DIRECTORY} USD{RUNTIME_DIRECTORY}" ExecStartPre=touch USD{BIND_POINT} ExecStart=unshare --mount=USD{BIND_POINT} --propagation slave mount --make-rshared / ExecStop=umount -R USD{RUNTIME_DIRECTORY} enabled: true name: container-mount-namespace.service - dropins: - contents: | [Unit] Wants=container-mount-namespace.service After=container-mount-namespace.service [Service] ExecStartPre=/usr/local/bin/extractExecStart %n /%t/%N-execstart.env ORIG_EXECSTART EnvironmentFile=-/%t/%N-execstart.env ExecStart= ExecStart=bash -c "nsenter --mount=%t/container-mount-namespace/mnt \ USD{ORIG_EXECSTART}" name: 90-container-mount-namespace.conf name: crio.service - dropins: - contents: | [Unit] Wants=container-mount-namespace.service After=container-mount-namespace.service [Service] ExecStartPre=/usr/local/bin/extractExecStart %n /%t/%N-execstart.env ORIG_EXECSTART EnvironmentFile=-/%t/%N-execstart.env ExecStart= ExecStart=bash -c "nsenter --mount=%t/container-mount-namespace/mnt \ USD{ORIG_EXECSTART} --housekeeping-interval=30s" name: 90-container-mount-namespace.conf - contents: | [Service] Environment="OPENSHIFT_MAX_HOUSEKEEPING_INTERVAL_DURATION=60s" 
Environment="OPENSHIFT_EVICTION_MONITORING_PERIOD_DURATION=30s" name: 30-kubelet-interval-tuning.conf name: kubelet.service 17.1.3. Enabling Stream Control Transmission Protocol (SCTP) SCTP is a key protocol used in RAN applications. This MachineConfig object adds the SCTP kernel module to the node to enable this protocol. Procedure No configuration changes are needed. Use the provided settings: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: load-sctp-module spec: config: ignition: version: 2.2.0 storage: files: - contents: source: data:, verification: {} filesystem: root mode: 420 path: /etc/modprobe.d/sctp-blacklist.conf - contents: source: data:text/plain;charset=utf-8,sctp filesystem: root mode: 420 path: /etc/modules-load.d/sctp-load.conf 17.1.4. Creating OperatorGroups for Operators This configuration is provided to enable addition of the Operators needed to configure the platform post-installation. It adds the Namespace and OperatorGroup objects for the Local Storage Operator, Logging Operator, Performance Addon Operator, PTP Operator, and SRIOV Network Operator. Procedure No configuration changes are needed. Use the provided settings: Local Storage Operator apiVersion: v1 kind: Namespace metadata: annotations: workload.openshift.io/allowed: management name: openshift-local-storage --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-local-storage namespace: openshift-local-storage spec: targetNamespaces: - openshift-local-storage Logging Operator apiVersion: v1 kind: Namespace metadata: annotations: workload.openshift.io/allowed: management name: openshift-logging --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: cluster-logging namespace: openshift-logging spec: targetNamespaces: - openshift-logging Performance Addon Operator apiVersion: v1 kind: Namespace metadata: annotations: workload.openshift.io/allowed: management labels: openshift.io/cluster-monitoring: "true" name: openshift-performance-addon-operator spec: {} --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: performance-addon-operator namespace: openshift-performance-addon-operator PTP Operator apiVersion: v1 kind: Namespace metadata: annotations: workload.openshift.io/allowed: management labels: openshift.io/cluster-monitoring: "true" name: openshift-ptp --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: ptp-operators namespace: openshift-ptp spec: targetNamespaces: - openshift-ptp SRIOV Network Operator apiVersion: v1 kind: Namespace metadata: annotations: workload.openshift.io/allowed: management name: openshift-sriov-network-operator --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: sriov-network-operators namespace: openshift-sriov-network-operator spec: targetNamespaces: - openshift-sriov-network-operator 17.1.5. Subscribing to the Operators The subscription provides the location to download the Operators needed for platform configuration. 
Procedure Use the following example to configure the subscription: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: cluster-logging namespace: openshift-logging spec: channel: "stable" 1 name: cluster-logging source: redhat-operators sourceNamespace: openshift-marketplace installPlanApproval: Manual 2 --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: local-storage-operator namespace: openshift-local-storage spec: channel: "stable" 3 installPlanApproval: Automatic name: local-storage-operator source: redhat-operators sourceNamespace: openshift-marketplace installPlanApproval: Manual --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: performance-addon-operator namespace: openshift-performance-addon-operator spec: channel: "4.10" 4 name: performance-addon-operator source: performance-addon-operator sourceNamespace: openshift-marketplace installPlanApproval: Manual --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: ptp-operator-subscription namespace: openshift-ptp spec: channel: "stable" 5 name: ptp-operator source: redhat-operators sourceNamespace: openshift-marketplace installPlanApproval: Manual --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: sriov-network-operator-subscription namespace: openshift-sriov-network-operator spec: channel: "stable" 6 name: sriov-network-operator source: redhat-operators sourceNamespace: openshift-marketplace installPlanApproval: Manual 1 Specify the channel to get the cluster-logging Operator. 2 Specify Manual or Automatic . In Automatic mode, the Operator automatically updates to the latest versions in the channel as they become available in the registry. In Manual mode, new Operator versions are installed only after they are explicitly approved. 3 Specify the channel to get the local-storage-operator Operator. 4 Specify the channel to get the performance-addon-operator Operator. 5 Specify the channel to get the ptp-operator Operator. 6 Specify the channel to get the sriov-network-operator Operator. 17.1.6. Configuring logging locally and forwarding To be able to debug a single node distributed unit (DU), logs need to be stored for further analysis. Procedure Edit the ClusterLogging custom resource (CR) in the openshift-logging project: apiVersion: logging.openshift.io/v1 kind: ClusterLogging 1 metadata: name: instance namespace: openshift-logging spec: collection: logs: fluentd: {} type: fluentd curation: type: "curator" curator: schedule: "30 3 * * *" managementState: Managed --- apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder 2 metadata: name: instance namespace: openshift-logging spec: inputs: - infrastructure: {} outputs: - name: kafka-open type: kafka url: tcp://10.46.55.190:9092/test 3 pipelines: - inputRefs: - audit name: audit-logs outputRefs: - kafka-open - inputRefs: - infrastructure name: infrastructure-logs outputRefs: - kafka-open 1 Updates the existing instance or creates the instance if it does not exist. 2 Updates the existing instance or creates the instance if it does not exist. 3 Specifies the destination of the kafka server. 17.1.7. Configuring the Performance Addon Operator This is a key configuration for the single node distributed unit (DU). Many of the real-time capabilities and service assurance are configured here. 
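The isolated and reserved CPU lists in the profile below must reflect the real core layout of the node, including Hyper-Threading sibling pairs. Inspecting the topology first can avoid an extra reboot cycle. This is a sketch only: <node_name> is a placeholder, and it assumes you are allowed to run oc debug against the node.
# Show the CPU and NUMA layout of the single-node cluster
oc debug node/<node_name> -- chroot /host lscpu
# List the thread siblings of a core so that Hyper-Threading pairs stay in the same set
oc debug node/<node_name> -- chroot /host cat /sys/devices/system/cpu/cpu0/topology/thread_siblings_list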
Procedure Configure the performance addons using the following example: Recommended performance profile configuration apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: openshift-node-performance-profile 1 spec: additionalKernelArgs: - "idle=poll" - "rcupdate.rcu_normal_after_boot=0" cpu: isolated: 2-51,54-103 2 reserved: 0-1,52-53 3 hugepages: defaultHugepagesSize: 1G pages: - count: 32 4 size: 1G 5 node: 0 6 machineConfigPoolSelector: pools.operator.machineconfiguration.openshift.io/master: "" net: userLevelNetworking: true 7 nodeSelector: node-role.kubernetes.io/master: '' numa: topologyPolicy: "restricted" realTimeKernel: enabled: true 8 1 Ensure that the value for name matches that specified in the spec.profile.data field of TunedPerformancePatch.yaml and the status.configuration.source.name field of validatorCRs/informDuValidator.yaml . 2 Set the isolated CPUs. Ensure all of the Hyper-Threading pairs match. 3 Set the reserved CPUs. When workload partitioning is enabled, system processes, kernel threads, and system container threads are restricted to these CPUs. All CPUs that are not isolated should be reserved. 4 Set the number of huge pages. 5 Set the huge page size. 6 Set node to the NUMA node where the hugepages are allocated. 7 Set userLevelNetworking to true to isolate the CPUs from networking interrupts. 8 Set enabled to true to install the real-time Linux kernel. 17.1.8. Configuring Precision Time Protocol (PTP) In the far edge, the RAN uses PTP to synchronize the systems. Procedure Configure PTP using the following example: apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: du-ptp-slave namespace: openshift-ptp spec: profile: - interface: ens5f0 1 name: slave phc2sysOpts: -a -r -n 24 ptp4lConf: | [global] # # Default Data Set # twoStepFlag 1 slaveOnly 0 priority1 128 priority2 128 domainNumber 24 #utc_offset 37 clockClass 248 clockAccuracy 0xFE offsetScaledLogVariance 0xFFFF free_running 0 freq_est_interval 1 dscp_event 0 dscp_general 0 dataset_comparison ieee1588 G.8275.defaultDS.localPriority 128 # # Port Data Set # logAnnounceInterval -3 logSyncInterval -4 logMinDelayReqInterval -4 logMinPdelayReqInterval -4 announceReceiptTimeout 3 syncReceiptTimeout 0 delayAsymmetry 0 fault_reset_interval 4 neighborPropDelayThresh 20000000 masterOnly 0 G.8275.portDS.localPriority 128 # # Run time options # assume_two_step 0 logging_level 6 path_trace_enabled 0 follow_up_info 0 hybrid_e2e 0 inhibit_multicast_service 0 net_sync_monitor 0 tc_spanning_tree 0 tx_timestamp_timeout 50 unicast_listen 0 unicast_master_table 0 unicast_req_duration 3600 use_syslog 1 verbose 0 summary_interval 0 kernel_leap 1 check_fup_sync 0 # # Servo Options # pi_proportional_const 0.0 pi_integral_const 0.0 pi_proportional_scale 0.0 pi_proportional_exponent -0.3 pi_proportional_norm_max 0.7 pi_integral_scale 0.0 pi_integral_exponent 0.4 pi_integral_norm_max 0.3 step_threshold 2.0 first_step_threshold 0.00002 max_frequency 900000000 clock_servo pi sanity_freq_limit 200000000 ntpshm_segment 0 # # Transport options # transportSpecific 0x0 ptp_dst_mac 01:1B:19:00:00:00 p2p_dst_mac 01:80:C2:00:00:0E udp_ttl 1 udp6_scope 0x0E uds_address /var/run/ptp4l # # Default interface options # clock_type OC network_transport UDPv4 delay_mechanism E2E time_stamping hardware tsproc_mode filter delay_filter moving_median delay_filter_length 10 egressLatency 0 ingressLatency 0 boundary_clock_jbod 0 # # Clock description # productDescription ;; revisionData ;; manufacturerIdentity 00:00:00 
userDescription ; timeSource 0xA0 ptp4lOpts: -2 -s --summary_interval -4 recommend: - match: - nodeLabel: node-role.kubernetes.io/master priority: 4 profile: slave 1 Sets the interface used for PTP. 17.1.9. Disabling Network Time Protocol (NTP) After the system is configured for Precision Time Protocol (PTP), you need to remove NTP to prevent it from impacting the system clock. Procedure No configuration changes are needed. Use the provided settings: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: disable-chronyd spec: config: systemd: units: - contents: | [Unit] Description=NTP client/server Documentation=man:chronyd(8) man:chrony.conf(5) After=ntpdate.service sntp.service ntpd.service Conflicts=ntpd.service systemd-timesyncd.service ConditionCapability=CAP_SYS_TIME [Service] Type=forking PIDFile=/run/chrony/chronyd.pid EnvironmentFile=-/etc/sysconfig/chronyd ExecStart=/usr/sbin/chronyd USDOPTIONS ExecStartPost=/usr/libexec/chrony-helper update-daemon PrivateTmp=yes ProtectHome=yes ProtectSystem=full [Install] WantedBy=multi-user.target enabled: false name: chronyd.service ignition: version: 2.2.0 17.1.10. Configuring single root I/O virtualization (SR-IOV) SR-IOV is commonly used to enable the fronthaul and the midhaul networks. Procedure Use the following configuration to configure SRIOV on a single node distributed unit (DU). Note that the first custom resource (CR) is required. The following CRs are examples. apiVersion: sriovnetwork.openshift.io/v1 kind: SriovOperatorConfig metadata: name: default namespace: openshift-sriov-network-operator spec: configDaemonNodeSelector: node-role.kubernetes.io/master: "" disableDrain: true enableInjector: true enableOperatorWebhook: true --- apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: sriov-nw-du-mh namespace: openshift-sriov-network-operator spec: networkNamespace: openshift-sriov-network-operator resourceName: du_mh vlan: 150 1 --- apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: sriov-nnp-du-mh namespace: openshift-sriov-network-operator spec: deviceType: vfio-pci 2 isRdma: false nicSelector: pfNames: - ens7f0 3 nodeSelector: node-role.kubernetes.io/master: "" numVfs: 8 4 priority: 10 resourceName: du_mh --- apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: sriov-nw-du-fh namespace: openshift-sriov-network-operator spec: networkNamespace: openshift-sriov-network-operator resourceName: du_fh vlan: 140 5 --- apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: sriov-nnp-du-fh namespace: openshift-sriov-network-operator spec: deviceType: netdevice 6 isRdma: true nicSelector: pfNames: - ens5f0 7 nodeSelector: node-role.kubernetes.io/master: "" numVfs: 8 8 priority: 10 resourceName: du_fh 1 Specifies the VLAN for the midhaul network. 2 Select either vfio-pci or netdevice , as needed. 3 Specifies the interface connected to the midhaul network. 4 Specifies the number of VFs for the midhaul network. 5 The VLAN for the fronthaul network. 6 Select either vfio-pci or netdevice , as needed. 7 Specifies the interface connected to the fronthaul network. 8 Specifies the number of VFs for the fronthaul network. 17.1.11. Disabling the console Operator The console-operator installs and maintains the web console on a cluster. When the node is centrally managed the Operator is not needed and makes space for application workloads. 
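Before and after you apply the manifest in the following procedure, you can check how the web console is currently managed. This is an optional sanity check, sketched under the assumption that you have cluster-admin access.
# Inspect the current management state of the web console
oc get consoles.operator.openshift.io cluster -o jsonpath='{.spec.managementState}{"\n"}'
# After the change is applied, no console pods should remain scheduled
oc get pods -n openshift-console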
Procedure You can disable the Operator by using the following configuration file. No configuration changes are needed. Use the provided settings: apiVersion: operator.openshift.io/v1 kind: Console metadata: annotations: include.release.openshift.io/ibm-cloud-managed: "false" include.release.openshift.io/self-managed-high-availability: "false" include.release.openshift.io/single-node-developer: "false" release.openshift.io/create-only: "true" name: cluster spec: logLevel: Normal managementState: Removed operatorLogLevel: Normal 17.2. Applying the distributed unit (DU) configuration to a single-node OpenShift cluster Perform the following tasks to configure a single-node cluster for a DU: Apply the required extra installation manifests at installation time. Apply the post-install configuration custom resources (CRs). 17.2.1. Applying the extra installation manifests To apply the distributed unit (DU) configuration to the single-node cluster, the following extra installation manifests need to be included during installation: Enable workload partitioning. Other MachineConfig objects - There is a set of MachineConfig custom resources (CRs) included by default. You can choose to include these additional MachineConfig CRs that are unique to your environment. It is recommended, but not required, to apply these CRs during installation to minimize the number of reboots that can occur during post-install configuration. 17.2.2. Applying the post-install configuration custom resources (CRs) After OpenShift Container Platform is installed on the cluster, use the following command to apply the CRs you configured for the distributed units (DUs): USD oc apply -f <file_name>.yaml | [
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 02-master-workload-partitioning spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,W2NyaW8ucnVudGltZS53b3JrbG9hZHMubWFuYWdlbWVudF0KYWN0aXZhdGlvbl9hbm5vdGF0aW9uID0gInRhcmdldC53b3JrbG9hZC5vcGVuc2hpZnQuaW8vbWFuYWdlbWVudCIKYW5ub3RhdGlvbl9wcmVmaXggPSAicmVzb3VyY2VzLndvcmtsb2FkLm9wZW5zaGlmdC5pbyIKW2NyaW8ucnVudGltZS53b3JrbG9hZHMubWFuYWdlbWVudC5yZXNvdXJjZXNdCmNwdXNoYXJlcyA9IDAKQ1BVcyA9ICIwLTEsIDUyLTUzIgo= mode: 420 overwrite: true path: /etc/crio/crio.conf.d/01-workload-partitioning user: name: root - contents: source: data:text/plain;charset=utf-8;base64,ewogICJtYW5hZ2VtZW50IjogewogICAgImNwdXNldCI6ICIwLTEsNTItNTMiCiAgfQp9Cg== mode: 420 overwrite: true path: /etc/kubernetes/openshift-workload-pinning user: name: root",
"[crio.runtime.workloads.management] activation_annotation = \"target.workload.openshift.io/management\" annotation_prefix = \"resources.workload.openshift.io\" [crio.runtime.workloads.management.resources] cpushares = 0 cpuset = \"0-1, 52-53\" 1",
"{ \"management\": { \"cpuset\": \"0-1,52-53\" 1 } }",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: container-mount-namespace-and-kubelet-conf-master spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,IyEvYmluL2Jhc2gKCmRlYnVnKCkgewogIGVjaG8gJEAgPiYyCn0KCnVzYWdlKCkgewogIGVjaG8gVXNhZ2U6ICQoYmFzZW5hbWUgJDApIFVOSVQgW2VudmZpbGUgW3Zhcm5hbWVdXQogIGVjaG8KICBlY2hvIEV4dHJhY3QgdGhlIGNvbnRlbnRzIG9mIHRoZSBmaXJzdCBFeGVjU3RhcnQgc3RhbnphIGZyb20gdGhlIGdpdmVuIHN5c3RlbWQgdW5pdCBhbmQgcmV0dXJuIGl0IHRvIHN0ZG91dAogIGVjaG8KICBlY2hvICJJZiAnZW52ZmlsZScgaXMgcHJvdmlkZWQsIHB1dCBpdCBpbiB0aGVyZSBpbnN0ZWFkLCBhcyBhbiBlbnZpcm9ubWVudCB2YXJpYWJsZSBuYW1lZCAndmFybmFtZSciCiAgZWNobyAiRGVmYXVsdCAndmFybmFtZScgaXMgRVhFQ1NUQVJUIGlmIG5vdCBzcGVjaWZpZWQiCiAgZXhpdCAxCn0KClVOSVQ9JDEKRU5WRklMRT0kMgpWQVJOQU1FPSQzCmlmIFtbIC16ICRVTklUIHx8ICRVTklUID09ICItLWhlbHAiIHx8ICRVTklUID09ICItaCIgXV07IHRoZW4KICB1c2FnZQpmaQpkZWJ1ZyAiRXh0cmFjdGluZyBFeGVjU3RhcnQgZnJvbSAkVU5JVCIKRklMRT0kKHN5c3RlbWN0bCBjYXQgJFVOSVQgfCBoZWFkIC1uIDEpCkZJTEU9JHtGSUxFI1wjIH0KaWYgW1sgISAtZiAkRklMRSBdXTsgdGhlbgogIGRlYnVnICJGYWlsZWQgdG8gZmluZCByb290IGZpbGUgZm9yIHVuaXQgJFVOSVQgKCRGSUxFKSIKICBleGl0CmZpCmRlYnVnICJTZXJ2aWNlIGRlZmluaXRpb24gaXMgaW4gJEZJTEUiCkVYRUNTVEFSVD0kKHNlZCAtbiAtZSAnL15FeGVjU3RhcnQ9LipcXCQvLC9bXlxcXSQvIHsgcy9eRXhlY1N0YXJ0PS8vOyBwIH0nIC1lICcvXkV4ZWNTdGFydD0uKlteXFxdJC8geyBzL15FeGVjU3RhcnQ9Ly87IHAgfScgJEZJTEUpCgppZiBbWyAkRU5WRklMRSBdXTsgdGhlbgogIFZBUk5BTUU9JHtWQVJOQU1FOi1FWEVDU1RBUlR9CiAgZWNobyAiJHtWQVJOQU1FfT0ke0VYRUNTVEFSVH0iID4gJEVOVkZJTEUKZWxzZQogIGVjaG8gJEVYRUNTVEFSVApmaQo= mode: 493 path: /usr/local/bin/extractExecStart - contents: source: data:text/plain;charset=utf-8;base64,IyEvYmluL2Jhc2gKbnNlbnRlciAtLW1vdW50PS9ydW4vY29udGFpbmVyLW1vdW50LW5hbWVzcGFjZS9tbnQgIiRAIgo= mode: 493 path: /usr/local/bin/nsenterCmns systemd: units: - contents: | [Unit] Description=Manages a mount namespace that both kubelet and crio can use to share their container-specific mounts [Service] Type=oneshot RemainAfterExit=yes RuntimeDirectory=container-mount-namespace Environment=RUNTIME_DIRECTORY=%t/container-mount-namespace Environment=BIND_POINT=%t/container-mount-namespace/mnt ExecStartPre=bash -c \"findmnt USD{RUNTIME_DIRECTORY} || mount --make-unbindable --bind USD{RUNTIME_DIRECTORY} USD{RUNTIME_DIRECTORY}\" ExecStartPre=touch USD{BIND_POINT} ExecStart=unshare --mount=USD{BIND_POINT} --propagation slave mount --make-rshared / ExecStop=umount -R USD{RUNTIME_DIRECTORY} enabled: true name: container-mount-namespace.service - dropins: - contents: | [Unit] Wants=container-mount-namespace.service After=container-mount-namespace.service [Service] ExecStartPre=/usr/local/bin/extractExecStart %n /%t/%N-execstart.env ORIG_EXECSTART EnvironmentFile=-/%t/%N-execstart.env ExecStart= ExecStart=bash -c \"nsenter --mount=%t/container-mount-namespace/mnt USD{ORIG_EXECSTART}\" name: 90-container-mount-namespace.conf name: crio.service - dropins: - contents: | [Unit] Wants=container-mount-namespace.service After=container-mount-namespace.service [Service] ExecStartPre=/usr/local/bin/extractExecStart %n /%t/%N-execstart.env ORIG_EXECSTART EnvironmentFile=-/%t/%N-execstart.env ExecStart= ExecStart=bash -c \"nsenter --mount=%t/container-mount-namespace/mnt USD{ORIG_EXECSTART} --housekeeping-interval=30s\" name: 90-container-mount-namespace.conf - contents: | [Service] Environment=\"OPENSHIFT_MAX_HOUSEKEEPING_INTERVAL_DURATION=60s\" Environment=\"OPENSHIFT_EVICTION_MONITORING_PERIOD_DURATION=30s\" name: 
30-kubelet-interval-tuning.conf name: kubelet.service",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: load-sctp-module spec: config: ignition: version: 2.2.0 storage: files: - contents: source: data:, verification: {} filesystem: root mode: 420 path: /etc/modprobe.d/sctp-blacklist.conf - contents: source: data:text/plain;charset=utf-8,sctp filesystem: root mode: 420 path: /etc/modules-load.d/sctp-load.conf",
"apiVersion: v1 kind: Namespace metadata: annotations: workload.openshift.io/allowed: management name: openshift-local-storage --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-local-storage namespace: openshift-local-storage spec: targetNamespaces: - openshift-local-storage",
"apiVersion: v1 kind: Namespace metadata: annotations: workload.openshift.io/allowed: management name: openshift-logging --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: cluster-logging namespace: openshift-logging spec: targetNamespaces: - openshift-logging",
"apiVersion: v1 kind: Namespace metadata: annotations: workload.openshift.io/allowed: management labels: openshift.io/cluster-monitoring: \"true\" name: openshift-performance-addon-operator spec: {} --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: performance-addon-operator namespace: openshift-performance-addon-operator",
"apiVersion: v1 kind: Namespace metadata: annotations: workload.openshift.io/allowed: management labels: openshift.io/cluster-monitoring: \"true\" name: openshift-ptp --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: ptp-operators namespace: openshift-ptp spec: targetNamespaces: - openshift-ptp",
"apiVersion: v1 kind: Namespace metadata: annotations: workload.openshift.io/allowed: management name: openshift-sriov-network-operator --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: sriov-network-operators namespace: openshift-sriov-network-operator spec: targetNamespaces: - openshift-sriov-network-operator",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: cluster-logging namespace: openshift-logging spec: channel: \"stable\" 1 name: cluster-logging source: redhat-operators sourceNamespace: openshift-marketplace installPlanApproval: Manual 2 --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: local-storage-operator namespace: openshift-local-storage spec: channel: \"stable\" 3 installPlanApproval: Automatic name: local-storage-operator source: redhat-operators sourceNamespace: openshift-marketplace installPlanApproval: Manual --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: performance-addon-operator namespace: openshift-performance-addon-operator spec: channel: \"4.10\" 4 name: performance-addon-operator source: performance-addon-operator sourceNamespace: openshift-marketplace installPlanApproval: Manual --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: ptp-operator-subscription namespace: openshift-ptp spec: channel: \"stable\" 5 name: ptp-operator source: redhat-operators sourceNamespace: openshift-marketplace installPlanApproval: Manual --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: sriov-network-operator-subscription namespace: openshift-sriov-network-operator spec: channel: \"stable\" 6 name: sriov-network-operator source: redhat-operators sourceNamespace: openshift-marketplace installPlanApproval: Manual",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging 1 metadata: name: instance namespace: openshift-logging spec: collection: logs: fluentd: {} type: fluentd curation: type: \"curator\" curator: schedule: \"30 3 * * *\" managementState: Managed --- apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder 2 metadata: name: instance namespace: openshift-logging spec: inputs: - infrastructure: {} outputs: - name: kafka-open type: kafka url: tcp://10.46.55.190:9092/test 3 pipelines: - inputRefs: - audit name: audit-logs outputRefs: - kafka-open - inputRefs: - infrastructure name: infrastructure-logs outputRefs: - kafka-open",
"apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: openshift-node-performance-profile 1 spec: additionalKernelArgs: - \"idle=poll\" - \"rcupdate.rcu_normal_after_boot=0\" cpu: isolated: 2-51,54-103 2 reserved: 0-1,52-53 3 hugepages: defaultHugepagesSize: 1G pages: - count: 32 4 size: 1G 5 node: 0 6 machineConfigPoolSelector: pools.operator.machineconfiguration.openshift.io/master: \"\" net: userLevelNetworking: true 7 nodeSelector: node-role.kubernetes.io/master: '' numa: topologyPolicy: \"restricted\" realTimeKernel: enabled: true 8",
"apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: du-ptp-slave namespace: openshift-ptp spec: profile: - interface: ens5f0 1 name: slave phc2sysOpts: -a -r -n 24 ptp4lConf: | [global] # # Default Data Set # twoStepFlag 1 slaveOnly 0 priority1 128 priority2 128 domainNumber 24 #utc_offset 37 clockClass 248 clockAccuracy 0xFE offsetScaledLogVariance 0xFFFF free_running 0 freq_est_interval 1 dscp_event 0 dscp_general 0 dataset_comparison ieee1588 G.8275.defaultDS.localPriority 128 # # Port Data Set # logAnnounceInterval -3 logSyncInterval -4 logMinDelayReqInterval -4 logMinPdelayReqInterval -4 announceReceiptTimeout 3 syncReceiptTimeout 0 delayAsymmetry 0 fault_reset_interval 4 neighborPropDelayThresh 20000000 masterOnly 0 G.8275.portDS.localPriority 128 # # Run time options # assume_two_step 0 logging_level 6 path_trace_enabled 0 follow_up_info 0 hybrid_e2e 0 inhibit_multicast_service 0 net_sync_monitor 0 tc_spanning_tree 0 tx_timestamp_timeout 50 unicast_listen 0 unicast_master_table 0 unicast_req_duration 3600 use_syslog 1 verbose 0 summary_interval 0 kernel_leap 1 check_fup_sync 0 # # Servo Options # pi_proportional_const 0.0 pi_integral_const 0.0 pi_proportional_scale 0.0 pi_proportional_exponent -0.3 pi_proportional_norm_max 0.7 pi_integral_scale 0.0 pi_integral_exponent 0.4 pi_integral_norm_max 0.3 step_threshold 2.0 first_step_threshold 0.00002 max_frequency 900000000 clock_servo pi sanity_freq_limit 200000000 ntpshm_segment 0 # # Transport options # transportSpecific 0x0 ptp_dst_mac 01:1B:19:00:00:00 p2p_dst_mac 01:80:C2:00:00:0E udp_ttl 1 udp6_scope 0x0E uds_address /var/run/ptp4l # # Default interface options # clock_type OC network_transport UDPv4 delay_mechanism E2E time_stamping hardware tsproc_mode filter delay_filter moving_median delay_filter_length 10 egressLatency 0 ingressLatency 0 boundary_clock_jbod 0 # # Clock description # productDescription ;; revisionData ;; manufacturerIdentity 00:00:00 userDescription ; timeSource 0xA0 ptp4lOpts: -2 -s --summary_interval -4 recommend: - match: - nodeLabel: node-role.kubernetes.io/master priority: 4 profile: slave",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: disable-chronyd spec: config: systemd: units: - contents: | [Unit] Description=NTP client/server Documentation=man:chronyd(8) man:chrony.conf(5) After=ntpdate.service sntp.service ntpd.service Conflicts=ntpd.service systemd-timesyncd.service ConditionCapability=CAP_SYS_TIME [Service] Type=forking PIDFile=/run/chrony/chronyd.pid EnvironmentFile=-/etc/sysconfig/chronyd ExecStart=/usr/sbin/chronyd USDOPTIONS ExecStartPost=/usr/libexec/chrony-helper update-daemon PrivateTmp=yes ProtectHome=yes ProtectSystem=full [Install] WantedBy=multi-user.target enabled: false name: chronyd.service ignition: version: 2.2.0",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovOperatorConfig metadata: name: default namespace: openshift-sriov-network-operator spec: configDaemonNodeSelector: node-role.kubernetes.io/master: \"\" disableDrain: true enableInjector: true enableOperatorWebhook: true --- apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: sriov-nw-du-mh namespace: openshift-sriov-network-operator spec: networkNamespace: openshift-sriov-network-operator resourceName: du_mh vlan: 150 1 --- apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: sriov-nnp-du-mh namespace: openshift-sriov-network-operator spec: deviceType: vfio-pci 2 isRdma: false nicSelector: pfNames: - ens7f0 3 nodeSelector: node-role.kubernetes.io/master: \"\" numVfs: 8 4 priority: 10 resourceName: du_mh --- apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: sriov-nw-du-fh namespace: openshift-sriov-network-operator spec: networkNamespace: openshift-sriov-network-operator resourceName: du_fh vlan: 140 5 --- apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: sriov-nnp-du-fh namespace: openshift-sriov-network-operator spec: deviceType: netdevice 6 isRdma: true nicSelector: pfNames: - ens5f0 7 nodeSelector: node-role.kubernetes.io/master: \"\" numVfs: 8 8 priority: 10 resourceName: du_fh",
"apiVersion: operator.openshift.io/v1 kind: Console metadata: annotations: include.release.openshift.io/ibm-cloud-managed: \"false\" include.release.openshift.io/self-managed-high-availability: \"false\" include.release.openshift.io/single-node-developer: \"false\" release.openshift.io/create-only: \"true\" name: cluster spec: logLevel: Normal managementState: Removed operatorLogLevel: Normal",
"oc apply -f <file_name>.yaml"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/scalability_and_performance/sno-du-deploying-distributed-units-manually-on-single-node-openshift |
Chapter 3. Migrating from Jenkins to OpenShift Pipelines or Tekton | Chapter 3. Migrating from Jenkins to OpenShift Pipelines or Tekton You can migrate your CI/CD workflows from Jenkins to Red Hat OpenShift Pipelines , a cloud-native CI/CD experience based on the Tekton project. 3.1. Comparison of Jenkins and OpenShift Pipelines concepts You can review and compare the following equivalent terms used in Jenkins and OpenShift Pipelines. 3.1.1. Jenkins terminology Jenkins offers declarative and scripted pipelines that are extensible using shared libraries and plugins. Some basic terms in Jenkins are as follows: Pipeline : Automates the entire process of building, testing, and deploying applications by using Groovy syntax. Node : A machine capable of either orchestrating or executing a scripted pipeline. Stage : A conceptually distinct subset of tasks performed in a pipeline. Plugins or user interfaces often use this block to display the status or progress of tasks. Step : A single task that specifies the exact action to be taken, either by using a command or a script. 3.1.2. OpenShift Pipelines terminology OpenShift Pipelines uses YAML syntax for declarative pipelines and consists of tasks. Some basic terms in OpenShift Pipelines are as follows: Pipeline : A set of tasks in a series, in parallel, or both. Task : A sequence of steps as commands, binaries, or scripts. PipelineRun : Execution of a pipeline with one or more tasks. TaskRun : Execution of a task with one or more steps. Note You can initiate a PipelineRun or a TaskRun with a set of inputs such as parameters and workspaces, and the execution results in a set of outputs and artifacts. Workspace : In OpenShift Pipelines, workspaces are conceptual blocks that serve the following purposes: Storage of inputs, outputs, and build artifacts. Common space to share data among tasks. Mount points for credentials held in secrets, configurations held in config maps, and common tools shared by an organization. Note In Jenkins, there is no direct equivalent of OpenShift Pipelines workspaces. You can think of the control node as a workspace, as it stores the cloned code repository, build history, and artifacts. When a job is assigned to a different node, the cloned code and the generated artifacts are stored in that node, but the control node maintains the build history. 3.1.3. Mapping of concepts The building blocks of Jenkins and OpenShift Pipelines are not equivalent, and a specific comparison does not provide a technically accurate mapping. The following terms and concepts in Jenkins and OpenShift Pipelines correlate in general: Table 3.1. Jenkins and OpenShift Pipelines - basic comparison Jenkins OpenShift Pipelines Pipeline Pipeline and PipelineRun Stage Task Step A step in a task 3.2. Migrating a sample pipeline from Jenkins to OpenShift Pipelines You can use the following equivalent examples to help migrate your build, test, and deploy pipelines from Jenkins to OpenShift Pipelines. 3.2.1. Jenkins pipeline Consider a Jenkins pipeline written in Groovy for building, testing, and deploying: pipeline { agent any stages { stage('Build') { steps { sh 'make' } } stage('Test'){ steps { sh 'make check' junit 'reports/**/*.xml' } } stage('Deploy') { steps { sh 'make publish' } } } } 3.2.2. 
OpenShift Pipelines pipeline To create a pipeline in OpenShift Pipelines that is equivalent to the preceding Jenkins pipeline, you create the following three tasks: Example build task YAML definition file apiVersion: tekton.dev/v1beta1 kind: Task metadata: name: myproject-build spec: workspaces: - name: source steps: - image: my-ci-image command: ["make"] workingDir: USD(workspaces.source.path) Example test task YAML definition file apiVersion: tekton.dev/v1beta1 kind: Task metadata: name: myproject-test spec: workspaces: - name: source steps: - image: my-ci-image command: ["make check"] workingDir: USD(workspaces.source.path) - image: junit-report-image script: | #!/usr/bin/env bash junit-report reports/**/*.xml workingDir: USD(workspaces.source.path) Example deploy task YAML definition file apiVersion: tekton.dev/v1beta1 kind: Task metadata: name: myprojectd-deploy spec: workspaces: - name: source steps: - image: my-deploy-image command: ["make deploy"] workingDir: USD(workspaces.source.path) You can combine the three tasks sequentially to form a pipeline in OpenShift Pipelines: Example: OpenShift Pipelines pipeline for building, testing, and deployment apiVersion: tekton.dev/v1beta1 kind: Pipeline metadata: name: myproject-pipeline spec: workspaces: - name: shared-dir tasks: - name: build taskRef: name: myproject-build workspaces: - name: source workspace: shared-dir - name: test taskRef: name: myproject-test workspaces: - name: source workspace: shared-dir - name: deploy taskRef: name: myproject-deploy workspaces: - name: source workspace: shared-dir 3.3. Migrating from Jenkins plugins to Tekton Hub tasks You can extend the capability of Jenkins by using plugins . To achieve similar extensibility in OpenShift Pipelines, use any of the tasks available from Tekton Hub . For example, consider the git-clone task in Tekton Hub, which corresponds to the git plugin for Jenkins. Example: git-clone task from Tekton Hub apiVersion: tekton.dev/v1beta1 kind: Pipeline metadata: name: demo-pipeline spec: params: - name: repo_url - name: revision workspaces: - name: source tasks: - name: fetch-from-git taskRef: name: git-clone params: - name: url value: USD(params.repo_url) - name: revision value: USD(params.revision) workspaces: - name: output workspace: source 3.4. Extending OpenShift Pipelines capabilities using custom tasks and scripts In OpenShift Pipelines, if you do not find the right task in Tekton Hub, or need greater control over tasks, you can create custom tasks and scripts to extend the capabilities of OpenShift Pipelines. Example: A custom task for running the maven test command apiVersion: tekton.dev/v1beta1 kind: Task metadata: name: maven-test spec: workspaces: - name: source steps: - image: my-maven-image command: ["mvn test"] workingDir: USD(workspaces.source.path) Example: Run a custom shell script by providing its path ... steps: image: ubuntu script: | #!/usr/bin/env bash /workspace/my-script.sh ... Example: Run a custom Python script by writing it in the YAML file ... steps: image: python script: | #!/usr/bin/env python3 print("hello from python!") ... 3.5. Comparison of Jenkins and OpenShift Pipelines execution models Jenkins and OpenShift Pipelines offer similar functions but are different in architecture and execution. Table 3.2. Comparison of execution models in Jenkins and OpenShift Pipelines Jenkins OpenShift Pipelines Jenkins has a controller node. Jenkins runs pipelines and steps centrally, or orchestrates jobs running in other nodes. 
OpenShift Pipelines is serverless and distributed, and there is no central dependency for execution. Containers are launched by the Jenkins controller node through the pipeline. OpenShift Pipelines adopts a 'container-first' approach, where every step runs as a container in a pod (equivalent to nodes in Jenkins). Extensibility is achieved by using plugins. Extensibility is achieved by using tasks in Tekton Hub or by creating custom tasks and scripts. 3.6. Examples of common use cases Both Jenkins and OpenShift Pipelines offer capabilities for common CI/CD use cases, such as: Compiling, building, and deploying images using Apache Maven Extending the core capabilities by using plugins Reusing shareable libraries and custom scripts 3.6.1. Running a Maven pipeline in Jenkins and OpenShift Pipelines You can use Maven in both Jenkins and OpenShift Pipelines workflows for compiling, building, and deploying images. To map your existing Jenkins workflow to OpenShift Pipelines, consider the following examples: Example: Compile and build an image and deploy it to OpenShift using Maven in Jenkins #!/usr/bin/groovy node('maven') { stage 'Checkout' checkout scm stage 'Build' sh 'cd helloworld && mvn clean' sh 'cd helloworld && mvn compile' stage 'Run Unit Tests' sh 'cd helloworld && mvn test' stage 'Package' sh 'cd helloworld && mvn package' stage 'Archive artifact' sh 'mkdir -p artifacts/deployments && cp helloworld/target/*.war artifacts/deployments' archive 'helloworld/target/*.war' stage 'Create Image' sh 'oc login https://kubernetes.default -u admin -p admin --insecure-skip-tls-verify=true' sh 'oc new-project helloworldproject' sh 'oc project helloworldproject' sh 'oc process -f helloworld/jboss-eap70-binary-build.json | oc create -f -' sh 'oc start-build eap-helloworld-app --from-dir=artifacts/' stage 'Deploy' sh 'oc new-app helloworld/jboss-eap70-deploy.json' } Example: Compile and build an image and deploy it to OpenShift using Maven in OpenShift Pipelines. 
apiVersion: tekton.dev/v1beta1 kind: Pipeline metadata: name: maven-pipeline spec: workspaces: - name: shared-workspace - name: maven-settings - name: kubeconfig-dir optional: true params: - name: repo-url - name: revision - name: context-path tasks: - name: fetch-repo taskRef: name: git-clone workspaces: - name: output workspace: shared-workspace params: - name: url value: "USD(params.repo-url)" - name: subdirectory value: "" - name: deleteExisting value: "true" - name: revision value: USD(params.revision) - name: mvn-build taskRef: name: maven runAfter: - fetch-repo workspaces: - name: source workspace: shared-workspace - name: maven-settings workspace: maven-settings params: - name: CONTEXT_DIR value: "USD(params.context-path)" - name: GOALS value: ["-DskipTests", "clean", "compile"] - name: mvn-tests taskRef: name: maven runAfter: - mvn-build workspaces: - name: source workspace: shared-workspace - name: maven-settings workspace: maven-settings params: - name: CONTEXT_DIR value: "USD(params.context-path)" - name: GOALS value: ["test"] - name: mvn-package taskRef: name: maven runAfter: - mvn-tests workspaces: - name: source workspace: shared-workspace - name: maven-settings workspace: maven-settings params: - name: CONTEXT_DIR value: "USD(params.context-path)" - name: GOALS value: ["package"] - name: create-image-and-deploy taskRef: name: openshift-client runAfter: - mvn-package workspaces: - name: manifest-dir workspace: shared-workspace - name: kubeconfig-dir workspace: kubeconfig-dir params: - name: SCRIPT value: | cd "USD(params.context-path)" mkdir -p ./artifacts/deployments && cp ./target/*.war ./artifacts/deployments oc new-project helloworldproject oc project helloworldproject oc process -f jboss-eap70-binary-build.json | oc create -f - oc start-build eap-helloworld-app --from-dir=artifacts/ oc new-app jboss-eap70-deploy.json 3.6.2. Extending the core capabilities of Jenkins and OpenShift Pipelines by using plugins Jenkins has the advantage of a large ecosystem of numerous plugins developed over the years by its extensive user base. You can search and browse the plugins in the Jenkins Plugin Index . OpenShift Pipelines also has many tasks developed and contributed by the community and enterprise users. A publicly available catalog of reusable OpenShift Pipelines tasks are available in the Tekton Hub . In addition, OpenShift Pipelines incorporates many of the plugins of the Jenkins ecosystem within its core capabilities. For example, authorization is a critical function in both Jenkins and OpenShift Pipelines. While Jenkins ensures authorization using the Role-based Authorization Strategy plugin, OpenShift Pipelines uses OpenShift's built-in Role-based Access Control system. 3.6.3. Sharing reusable code in Jenkins and OpenShift Pipelines Jenkins shared libraries provide reusable code for parts of Jenkins pipelines. The libraries are shared between Jenkinsfiles to create highly modular pipelines without code repetition. Although there is no direct equivalent of Jenkins shared libraries in OpenShift Pipelines, you can achieve similar workflows by using tasks from the Tekton Hub in combination with custom tasks and scripts. 3.7. Additional resources Understanding OpenShift Pipelines Role-based Access Control | [
"pipeline { agent any stages { stage('Build') { steps { sh 'make' } } stage('Test'){ steps { sh 'make check' junit 'reports/**/*.xml' } } stage('Deploy') { steps { sh 'make publish' } } } }",
"apiVersion: tekton.dev/v1beta1 kind: Task metadata: name: myproject-build spec: workspaces: - name: source steps: - image: my-ci-image command: [\"make\"] workingDir: USD(workspaces.source.path)",
"apiVersion: tekton.dev/v1beta1 kind: Task metadata: name: myproject-test spec: workspaces: - name: source steps: - image: my-ci-image command: [\"make check\"] workingDir: USD(workspaces.source.path) - image: junit-report-image script: | #!/usr/bin/env bash junit-report reports/**/*.xml workingDir: USD(workspaces.source.path)",
"apiVersion: tekton.dev/v1beta1 kind: Task metadata: name: myprojectd-deploy spec: workspaces: - name: source steps: - image: my-deploy-image command: [\"make deploy\"] workingDir: USD(workspaces.source.path)",
"apiVersion: tekton.dev/v1beta1 kind: Pipeline metadata: name: myproject-pipeline spec: workspaces: - name: shared-dir tasks: - name: build taskRef: name: myproject-build workspaces: - name: source workspace: shared-dir - name: test taskRef: name: myproject-test workspaces: - name: source workspace: shared-dir - name: deploy taskRef: name: myproject-deploy workspaces: - name: source workspace: shared-dir",
"apiVersion: tekton.dev/v1beta1 kind: Pipeline metadata: name: demo-pipeline spec: params: - name: repo_url - name: revision workspaces: - name: source tasks: - name: fetch-from-git taskRef: name: git-clone params: - name: url value: USD(params.repo_url) - name: revision value: USD(params.revision) workspaces: - name: output workspace: source",
"apiVersion: tekton.dev/v1beta1 kind: Task metadata: name: maven-test spec: workspaces: - name: source steps: - image: my-maven-image command: [\"mvn test\"] workingDir: USD(workspaces.source.path)",
"steps: image: ubuntu script: | #!/usr/bin/env bash /workspace/my-script.sh",
"steps: image: python script: | #!/usr/bin/env python3 print(\"hello from python!\")",
"#!/usr/bin/groovy node('maven') { stage 'Checkout' checkout scm stage 'Build' sh 'cd helloworld && mvn clean' sh 'cd helloworld && mvn compile' stage 'Run Unit Tests' sh 'cd helloworld && mvn test' stage 'Package' sh 'cd helloworld && mvn package' stage 'Archive artifact' sh 'mkdir -p artifacts/deployments && cp helloworld/target/*.war artifacts/deployments' archive 'helloworld/target/*.war' stage 'Create Image' sh 'oc login https://kubernetes.default -u admin -p admin --insecure-skip-tls-verify=true' sh 'oc new-project helloworldproject' sh 'oc project helloworldproject' sh 'oc process -f helloworld/jboss-eap70-binary-build.json | oc create -f -' sh 'oc start-build eap-helloworld-app --from-dir=artifacts/' stage 'Deploy' sh 'oc new-app helloworld/jboss-eap70-deploy.json' }",
"apiVersion: tekton.dev/v1beta1 kind: Pipeline metadata: name: maven-pipeline spec: workspaces: - name: shared-workspace - name: maven-settings - name: kubeconfig-dir optional: true params: - name: repo-url - name: revision - name: context-path tasks: - name: fetch-repo taskRef: name: git-clone workspaces: - name: output workspace: shared-workspace params: - name: url value: \"USD(params.repo-url)\" - name: subdirectory value: \"\" - name: deleteExisting value: \"true\" - name: revision value: USD(params.revision) - name: mvn-build taskRef: name: maven runAfter: - fetch-repo workspaces: - name: source workspace: shared-workspace - name: maven-settings workspace: maven-settings params: - name: CONTEXT_DIR value: \"USD(params.context-path)\" - name: GOALS value: [\"-DskipTests\", \"clean\", \"compile\"] - name: mvn-tests taskRef: name: maven runAfter: - mvn-build workspaces: - name: source workspace: shared-workspace - name: maven-settings workspace: maven-settings params: - name: CONTEXT_DIR value: \"USD(params.context-path)\" - name: GOALS value: [\"test\"] - name: mvn-package taskRef: name: maven runAfter: - mvn-tests workspaces: - name: source workspace: shared-workspace - name: maven-settings workspace: maven-settings params: - name: CONTEXT_DIR value: \"USD(params.context-path)\" - name: GOALS value: [\"package\"] - name: create-image-and-deploy taskRef: name: openshift-client runAfter: - mvn-package workspaces: - name: manifest-dir workspace: shared-workspace - name: kubeconfig-dir workspace: kubeconfig-dir params: - name: SCRIPT value: | cd \"USD(params.context-path)\" mkdir -p ./artifacts/deployments && cp ./target/*.war ./artifacts/deployments oc new-project helloworldproject oc project helloworldproject oc process -f jboss-eap70-binary-build.json | oc create -f - oc start-build eap-helloworld-app --from-dir=artifacts/ oc new-app jboss-eap70-deploy.json"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/jenkins/migrating-from-jenkins-to-openshift-pipelines_images-other-jenkins-agent |
Chapter 3. Configuring Red Hat Quay before deployment | Chapter 3. Configuring Red Hat Quay before deployment The Red Hat Quay Operator can manage all of the Red Hat Quay components when deployed on OpenShift Container Platform. This is the default configuration, however, you can manage one or more components externally when you want more control over the set up. Use the following pattern to configure unmanaged Red Hat Quay components. Procedure Create a config.yaml configuration file with the appropriate settings. Use the following reference for a minimal configuration: USD touch config.yaml AUTHENTICATION_TYPE: Database BUILDLOGS_REDIS: host: <quay-server.example.com> password: <strongpassword> port: 6379 ssl: false DATABASE_SECRET_KEY: <0ce4f796-c295-415b-bf9d-b315114704b8> DB_URI: <postgresql://quayuser:[email protected]:5432/quay> DEFAULT_TAG_EXPIRATION: 2w DISTRIBUTED_STORAGE_CONFIG: default: - LocalStorage - storage_path: /datastorage/registry DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: [] DISTRIBUTED_STORAGE_PREFERENCE: - default PREFERRED_URL_SCHEME: http SECRET_KEY: <e8f9fe68-1f84-48a8-a05f-02d72e6eccba> SERVER_HOSTNAME: <quay-server.example.com> SETUP_COMPLETE: true TAG_EXPIRATION_OPTIONS: - 0s - 1d - 1w - 2w - 4w - 3y USER_EVENTS_REDIS: host: <quay-server.example.com> port: 6379 ssl: false Create a Secret using the configuration file by entering the following command: USD oc create secret generic --from-file config.yaml=./config.yaml config-bundle-secret Create a quayregistry.yaml file, identifying the unmanaged components and also referencing the created Secret , for example: Example QuayRegistry YAML file apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: example-registry namespace: quay-enterprise spec: configBundleSecret: <config_bundle_secret> components: - kind: objectstorage managed: false # ... Enter the following command to deploy the registry by using the quayregistry.yaml file: USD oc create -n quay-enterprise -f quayregistry.yaml 3.1. Pre-configuring Red Hat Quay for automation Red Hat Quay supports several configuration options that enable automation. Users can configure these options before deployment to reduce the need for interaction with the user interface. 3.1.1. Allowing the API to create the first user To create the first user, users need to set the FEATURE_USER_INITIALIZE parameter to true and call the /api/v1/user/initialize API. Unlike all other registry API calls that require an OAuth token generated by an OAuth application in an existing organization, the API endpoint does not require authentication. Users can use the API to create a user such as quayadmin after deploying Red Hat Quay, provided no other users have been created. For more information, see Using the API to create the first user . 3.1.2. Enabling general API access Users should set the BROWSER_API_CALLS_XHR_ONLY configuration option to false to allow general access to the Red Hat Quay registry API. 3.1.3. Adding a superuser After deploying Red Hat Quay, users can create a user and give the first user administrator privileges with full permissions. Users can configure full permissions in advance by using the SUPER_USER configuration object. For example: # ... SERVER_HOSTNAME: quay-server.example.com SETUP_COMPLETE: true SUPER_USERS: - quayadmin # ... 3.1.4. Restricting user creation After you have configured a superuser, you can restrict the ability to create new users to the superuser group by setting the FEATURE_USER_CREATION to false . For example: # ... 
FEATURE_USER_INITIALIZE: true BROWSER_API_CALLS_XHR_ONLY: false SUPER_USERS: - quayadmin FEATURE_USER_CREATION: false # ... 3.1.5. Enabling new functionality in Red Hat Quay 3.13 To use new Red Hat Quay 3.13 functions, enable some or all of the following features: # ... FEATURE_UI_V2: true FEATURE_UI_V2_REPO_SETTINGS: true FEATURE_AUTO_PRUNE: true ROBOTS_DISALLOW: false # ... 3.1.6. Suggested configuration for automation The following config.yaml parameters are suggested for automation: # ... FEATURE_USER_INITIALIZE: true BROWSER_API_CALLS_XHR_ONLY: false SUPER_USERS: - quayadmin FEATURE_USER_CREATION: false # ... 3.2. Configuring object storage You need to configure object storage before installing Red Hat Quay, irrespective of whether you are allowing the Red Hat Quay Operator to manage the storage or managing it yourself. If you want the Red Hat Quay Operator to be responsible for managing storage, see the section on Managed storage for information on installing and configuring NooBaa and the Red Hat OpenShift Data Foundations Operator. If you are using a separate storage solution, set objectstorage as unmanaged when configuring the Operator. See the following section. Unmanaged storage , for details of configuring existing storage. 3.2.1. Using unmanaged storage This section provides configuration examples for unmanaged storage for your convenience. Refer to the Red Hat Quay configuration guide for complete instructions on how to set up object storage. 3.2.1.1. AWS S3 storage Use the following example when configuring AWS S3 storage for your Red Hat Quay deployment. DISTRIBUTED_STORAGE_CONFIG: s3Storage: - S3Storage - host: s3.us-east-2.amazonaws.com s3_access_key: ABCDEFGHIJKLMN s3_secret_key: OL3ABCDEFGHIJKLMN s3_bucket: quay_bucket s3_region: <region> storage_path: /datastorage/registry DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: [] DISTRIBUTED_STORAGE_PREFERENCE: - s3Storage 3.2.1.2. Google Cloud storage Use the following example when configuring Google Cloud storage for your Red Hat Quay deployment. DISTRIBUTED_STORAGE_CONFIG: googleCloudStorage: - GoogleCloudStorage - access_key: GOOGQIMFB3ABCDEFGHIJKLMN bucket_name: quay-bucket secret_key: FhDAYe2HeuAKfvZCAGyOioNaaRABCDEFGHIJKLMN storage_path: /datastorage/registry boto_timeout: 120 1 DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: [] DISTRIBUTED_STORAGE_PREFERENCE: - googleCloudStorage 1 Optional. The time, in seconds, until a timeout exception is thrown when attempting to read from a connection. The default is 60 seconds. Also encompasses the time, in seconds, until a timeout exception is thrown when attempting to make a connection. The default is 60 seconds. 3.2.1.3. Microsoft Azure storage Use the following example when configuring Microsoft Azure storage for your Red Hat Quay deployment. DISTRIBUTED_STORAGE_CONFIG: azureStorage: - AzureStorage - azure_account_name: azure_account_name_here azure_container: azure_container_here storage_path: /datastorage/registry azure_account_key: azure_account_key_here sas_token: some/path/ endpoint_url: https://[account-name].blob.core.usgovcloudapi.net 1 DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: [] DISTRIBUTED_STORAGE_PREFERENCE: - azureStorage 1 The endpoint_url parameter for Microsoft Azure storage is optional and can be used with Microsoft Azure Government (MAG) endpoints. If left blank, the endpoint_url will connect to the normal Microsoft Azure region. As of Red Hat Quay 3.7, you must use the Primary endpoint of your MAG Blob service. 
Using the Secondary endpoint of your MAG Blob service will result in the following error: AuthenticationErrorDetail:Cannot find the claimed account when trying to GetProperties for the account whusc8-secondary . 3.2.1.4. Ceph/RadosGW Storage Use the following example when configuring Ceph/RadosGW storage for your Red Hat Quay deployment. DISTRIBUTED_STORAGE_CONFIG: radosGWStorage: #storage config name - RadosGWStorage #actual driver - access_key: access_key_here #parameters secret_key: secret_key_here bucket_name: bucket_name_here hostname: hostname_here is_secure: 'true' port: '443' storage_path: /datastorage/registry DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: [] DISTRIBUTED_STORAGE_PREFERENCE: #must contain name of the storage config - radosGWStorage 3.2.1.5. Swift storage Use the following example when configuring Swift storage for your Red Hat Quay deployment. DISTRIBUTED_STORAGE_CONFIG: swiftStorage: - SwiftStorage - swift_user: swift_user_here swift_password: swift_password_here swift_container: swift_container_here auth_url: https://example.org/swift/v1/quay auth_version: 3 os_options: tenant_id: <osp_tenant_id_here> user_domain_name: <osp_domain_name_here> ca_cert_path: /conf/stack/swift.cert" storage_path: /datastorage/registry DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: [] DISTRIBUTED_STORAGE_PREFERENCE: - swiftStorage 3.2.1.6. NooBaa unmanaged storage Use the following procedure to deploy NooBaa as your unmanaged storage configuration. Procedure Create a NooBaa Object Bucket Claim in the Red Hat Quay console by navigating to Storage Object Bucket Claims . Retrieve the Object Bucket Claim Data details, including the Access Key, Bucket Name, Endpoint (hostname), and Secret Key. Create a config.yaml configuration file that uses the information for the Object Bucket Claim: DISTRIBUTED_STORAGE_CONFIG: default: - RHOCSStorage - access_key: WmrXtSGk8B3nABCDEFGH bucket_name: my-noobaa-bucket-claim-8b844191-dc6c-444e-9ea4-87ece0abcdef hostname: s3.openshift-storage.svc.cluster.local is_secure: true port: "443" secret_key: X9P5SDGJtmSuHFCMSLMbdNCMfUABCDEFGH+C5QD storage_path: /datastorage/registry DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: [] DISTRIBUTED_STORAGE_PREFERENCE: - default For more information about configuring an Object Bucket Claim, see Object Bucket Claim . 3.2.2. Using an unmanaged NooBaa instance Use the following procedure to use an unmanaged NooBaa instance for your Red Hat Quay deployment. Procedure Create a NooBaa Object Bucket Claim in the console at Storage Object Bucket Claims. Retrieve the Object Bucket Claim Data details including the Access Key , Bucket Name , Endpoint (hostname) , and Secret Key . Create a config.yaml configuration file using the information for the Object Bucket Claim. For example: DISTRIBUTED_STORAGE_CONFIG: default: - RHOCSStorage - access_key: WmrXtSGk8B3nABCDEFGH bucket_name: my-noobaa-bucket-claim-8b844191-dc6c-444e-9ea4-87ece0abcdef hostname: s3.openshift-storage.svc.cluster.local is_secure: true port: "443" secret_key: X9P5SDGJtmSuHFCMSLMbdNCMfUABCDEFGH+C5QD storage_path: /datastorage/registry DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: [] DISTRIBUTED_STORAGE_PREFERENCE: - default 3.2.3. Managed storage If you want the Red Hat Quay Operator to manage object storage for Red Hat Quay, your cluster needs to be capable of providing object storage through the ObjectBucketClaim API. 
Using the Red Hat OpenShift Data Foundation Operator, there are two supported options available: A standalone instance of the Multi-Cloud Object Gateway backed by a local Kubernetes PersistentVolume storage Not highly available Included in the Red Hat Quay subscription Does not require a separate subscription for Red Hat OpenShift Data Foundation A production deployment of Red Hat OpenShift Data Foundation with scale-out Object Service and Ceph Highly available Requires a separate subscription for Red Hat OpenShift Data Foundation To use the standalone instance option, continue reading below. For production deployment of Red Hat OpenShift Data Foundation, please refer to the official documentation . Note Object storage disk space is allocated automatically by the Red Hat Quay Operator with 50 GiB. This number represents a usable amount of storage for most small to medium Red Hat Quay installations but might not be sufficient for your use cases. Resizing the Red Hat OpenShift Data Foundation volume is currently not handled by the Red Hat Quay Operator. See the section below about resizing managed storage for more details. 3.2.3.1. Leveraging the Multicloud Object Gateway Component in the Red Hat OpenShift Data Foundation Operator for Red Hat Quay As part of a Red Hat Quay subscription, users are entitled to use the Multicloud Object Gateway component of the Red Hat OpenShift Data Foundation Operator (formerly known as OpenShift Container Storage Operator). This gateway component allows you to provide an S3-compatible object storage interface to Red Hat Quay backed by Kubernetes PersistentVolume -based block storage. The usage is limited to a Red Hat Quay deployment managed by the Operator and to the exact specifications of the multicloud Object Gateway instance as documented below. Since Red Hat Quay does not support local filesystem storage, users can leverage the gateway in combination with Kubernetes PersistentVolume storage instead, to provide a supported deployment. A PersistentVolume is directly mounted on the gateway instance as a backing store for object storage and any block-based StorageClass is supported. By the nature of PersistentVolume , this is not a scale-out, highly available solution and does not replace a scale-out storage system like Red Hat OpenShift Data Foundation. Only a single instance of the gateway is running. If the pod running the gateway becomes unavailable due to rescheduling, updates or unplanned downtime, this will cause temporary degradation of the connected Red Hat Quay instances. Using the following procedures, you will install the Local Storage Operator, Red Hat OpenShift Data Foundation, and create a standalone Multicloud Object Gateway to deploy Red Hat Quay on OpenShift Container Platform. Note The following documentation shares commonality with the official Red Hat OpenShift Data Foundation documentation . 3.2.3.1.1. Installing the Local Storage Operator on OpenShift Container Platform Use the following procedure to install the Local Storage Operator from the OperatorHub before creating Red Hat OpenShift Data Foundation clusters on local storage devices. Log in to the OpenShift Web Console . Click Operators OperatorHub . Type local storage into the search box to find the Local Storage Operator from the list of Operators. Click Local Storage . Click Install . Set the following options on the Install Operator page: For Update channel, select stable . For Installation mode, select A specific namespace on the cluster . 
For Installed Namespace, select Operator recommended namespace openshift-local-storage . For Update approval, select Automatic . Click Install . 3.2.3.1.2. Installing Red Hat OpenShift Data Foundation on OpenShift Container Platform Use the following procedure to install Red Hat OpenShift Data Foundation on OpenShift Container Platform. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and Operator installation permissions. You must have at least three worker nodes in the OpenShift Container Platform cluster. For additional resource requirements, see the Planning your deployment guide. Procedure Log in to the OpenShift Web Console . Click Operators OperatorHub . Type OpenShift Data Foundation in the search box. Click OpenShift Data Foundation . Click Install . Set the following options on the Install Operator page: For Update channel, select the most recent stable version. For Installation mode, select A specific namespace on the cluster . For Installed Namespace, select Operator recommended Namespace: openshift-storage . For Update approval, select Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. For Console plugin, select Enable . Click Install . After the Operator is installed, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. Continue to the following section, "Creating a standalone Multicloud Object Gateway", to leverage the Multicloud Object Gateway Component for Red Hat Quay. 3.2.3.1.3. Creating a standalone Multicloud Object Gateway using the OpenShift Container Platform UI Use the following procedure to create a standalone Multicloud Object Gateway. Prerequisites You have installed the Local Storage Operator. You have installed the Red Hat OpenShift Data Foundation Operator. Procedure In the OpenShift Web Console , click Operators Installed Operators to view all installed Operators. Ensure that the namespace is openshift-storage . Click Create StorageSystem . On the Backing storage page, select the following: Select Multicloud Object Gateway for Deployment type . Select the Create a new StorageClass using the local storage devices option. Click . Note You are prompted to install the Local Storage Operator if it is not already installed. Click Install , and follow the procedure as described in "Installing the Local Storage Operator on OpenShift Container Platform". On the Create local volume set page, provide the following information: Enter a name for the LocalVolumeSet and the StorageClass . By default, the local volume set name appears for the storage class name. You can change the name. Choose one of the following: Disk on all nodes Uses the available disks that match the selected filters on all the nodes. Disk on selected nodes Uses the available disks that match the selected filters only on the selected nodes. From the available list of Disk Type , select SSD/NVMe . Expand the Advanced section and set the following options: Volume Mode Filesystem is selected by default. Always ensure that Filesystem is selected for Volume Mode. 
Device Type Select one or more device types from the dropdown list. Disk Size Set a minimum size of 100GB for the device and the maximum available size of the device that needs to be included. Maximum Disks Limit This indicates the maximum number of PVs that can be created on a node. If this field is left empty, then PVs are created for all the available disks on the matching nodes. Click Next . A pop-up to confirm the creation of the LocalVolumeSet is displayed. Click Yes to continue. In the Capacity and nodes page, configure the following: Available raw capacity is populated with the capacity value based on all the attached disks associated with the storage class. This takes some time to show up. The Selected nodes list shows the nodes based on the storage class. Click Next to continue. Optional. Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption. From the Key Management Service Provider drop-down list, select either Vault or Thales CipherTrust Manager (using KMIP) . If you selected Vault , go to the next step. If you selected Thales CipherTrust Manager (using KMIP) , go to step iii. Select an Authentication Method . Using Token Authentication method Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate , and Client Private Key . Click Save and skip to step iv. Using Kubernetes authentication method Enter a unique Vault Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name. Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to Red Hat OpenShift Data Foundation. Optional: Enter TLS Server Name and Authentication Path if applicable. Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate , and Client Private Key . Click Save and skip to step iv. To use Thales CipherTrust Manager (using KMIP) as the KMS provider, follow the steps below: Enter a unique Connection Name for the Key Management service within the project. In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example: Address : 123.34.3.2 Port : 5696 Upload the Client Certificate , CA certificate , and Client Private Key . If StorageClass encryption is enabled, enter the Unique Identifier to be used for encryption and decryption generated above. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local . Select a Network . Click Next . In the Review and create page, review the configuration details. To modify any configuration settings, click Back . Click Create StorageSystem . 3.2.3.1.4. Creating a standalone Multicloud Object Gateway using the CLI Use the following procedure to install the Red Hat OpenShift Data Foundation (formerly known as OpenShift Container Storage) Operator and configure a single-instance Multicloud Object Gateway service. 
Note The following configuration cannot be run in parallel on a cluster with Red Hat OpenShift Data Foundation installed. Procedure On the OpenShift Web Console , and then select Operators OperatorHub . Search for Red Hat OpenShift Data Foundation , and then select Install . Accept all default options, and then select Install . Confirm that the Operator has installed by viewing the Status column, which should be marked as Succeeded . Warning When the installation of the Red Hat OpenShift Data Foundation Operator is finished, you are prompted to create a storage system. Do not follow this instruction. Instead, create NooBaa object storage as outlined the following steps. On your machine, create a file named noobaa.yaml with the following information: apiVersion: noobaa.io/v1alpha1 kind: NooBaa metadata: name: noobaa namespace: openshift-storage spec: dbResources: requests: cpu: '0.1' memory: 1Gi dbType: postgres coreResources: requests: cpu: '0.1' memory: 1Gi This creates a single instance deployment of the Multi-cloud Object Gateway . Apply the configuration with the following command: USD oc create -n openshift-storage -f noobaa.yaml Example output noobaa.noobaa.io/noobaa created After a few minutes, the Multi-cloud Object Gateway should finish provisioning. You can enter the following command to check its status: USD oc get -n openshift-storage noobaas noobaa -w Example output NAME MGMT-ENDPOINTS S3-ENDPOINTS IMAGE PHASE AGE noobaa [https://10.0.32.3:30318] [https://10.0.32.3:31958] registry.redhat.io/ocs4/mcg-core-rhel8@sha256:56624aa7dd4ca178c1887343c7445a9425a841600b1309f6deace37ce6b8678d Ready 3d18h Configure a backing store for the gateway by creating the following YAML file, named noobaa-pv-backing-store.yaml : apiVersion: noobaa.io/v1alpha1 kind: BackingStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: noobaa-pv-backing-store namespace: openshift-storage spec: pvPool: numVolumes: 1 resources: requests: storage: 50Gi 1 storageClass: STORAGE-CLASS-NAME 2 type: pv-pool 1 The overall capacity of the object storage service. Adjust as needed. 2 The StorageClass to use for the PersistentVolumes requested. Delete this property to use the cluster default. Enter the following command to apply the configuration: USD oc create -f noobaa-pv-backing-store.yaml Example output backingstore.noobaa.io/noobaa-pv-backing-store created This creates the backing store configuration for the gateway. All images in Red Hat Quay will be stored as objects through the gateway in a PersistentVolume created by the above configuration. Run the following command to make the PersistentVolume backing store the default for all ObjectBucketClaims issued by the Red Hat Quay Operator: USD oc patch bucketclass noobaa-default-bucket-class --patch '{"spec":{"placementPolicy":{"tiers":[{"backingStores":["noobaa-pv-backing-store"]}]}}}' --type merge -n openshift-storage | [
"touch config.yaml",
"AUTHENTICATION_TYPE: Database BUILDLOGS_REDIS: host: <quay-server.example.com> password: <strongpassword> port: 6379 ssl: false DATABASE_SECRET_KEY: <0ce4f796-c295-415b-bf9d-b315114704b8> DB_URI: <postgresql://quayuser:[email protected]:5432/quay> DEFAULT_TAG_EXPIRATION: 2w DISTRIBUTED_STORAGE_CONFIG: default: - LocalStorage - storage_path: /datastorage/registry DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: [] DISTRIBUTED_STORAGE_PREFERENCE: - default PREFERRED_URL_SCHEME: http SECRET_KEY: <e8f9fe68-1f84-48a8-a05f-02d72e6eccba> SERVER_HOSTNAME: <quay-server.example.com> SETUP_COMPLETE: true TAG_EXPIRATION_OPTIONS: - 0s - 1d - 1w - 2w - 4w - 3y USER_EVENTS_REDIS: host: <quay-server.example.com> port: 6379 ssl: false",
"oc create secret generic --from-file config.yaml=./config.yaml config-bundle-secret",
"apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: example-registry namespace: quay-enterprise spec: configBundleSecret: <config_bundle_secret> components: - kind: objectstorage managed: false",
"oc create -n quay-enterprise -f quayregistry.yaml",
"SERVER_HOSTNAME: quay-server.example.com SETUP_COMPLETE: true SUPER_USERS: - quayadmin",
"FEATURE_USER_INITIALIZE: true BROWSER_API_CALLS_XHR_ONLY: false SUPER_USERS: - quayadmin FEATURE_USER_CREATION: false",
"FEATURE_UI_V2: true FEATURE_UI_V2_REPO_SETTINGS: true FEATURE_AUTO_PRUNE: true ROBOTS_DISALLOW: false",
"FEATURE_USER_INITIALIZE: true BROWSER_API_CALLS_XHR_ONLY: false SUPER_USERS: - quayadmin FEATURE_USER_CREATION: false",
"DISTRIBUTED_STORAGE_CONFIG: s3Storage: - S3Storage - host: s3.us-east-2.amazonaws.com s3_access_key: ABCDEFGHIJKLMN s3_secret_key: OL3ABCDEFGHIJKLMN s3_bucket: quay_bucket s3_region: <region> storage_path: /datastorage/registry DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: [] DISTRIBUTED_STORAGE_PREFERENCE: - s3Storage",
"DISTRIBUTED_STORAGE_CONFIG: googleCloudStorage: - GoogleCloudStorage - access_key: GOOGQIMFB3ABCDEFGHIJKLMN bucket_name: quay-bucket secret_key: FhDAYe2HeuAKfvZCAGyOioNaaRABCDEFGHIJKLMN storage_path: /datastorage/registry boto_timeout: 120 1 DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: [] DISTRIBUTED_STORAGE_PREFERENCE: - googleCloudStorage",
"DISTRIBUTED_STORAGE_CONFIG: azureStorage: - AzureStorage - azure_account_name: azure_account_name_here azure_container: azure_container_here storage_path: /datastorage/registry azure_account_key: azure_account_key_here sas_token: some/path/ endpoint_url: https://[account-name].blob.core.usgovcloudapi.net 1 DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: [] DISTRIBUTED_STORAGE_PREFERENCE: - azureStorage",
"DISTRIBUTED_STORAGE_CONFIG: radosGWStorage: #storage config name - RadosGWStorage #actual driver - access_key: access_key_here #parameters secret_key: secret_key_here bucket_name: bucket_name_here hostname: hostname_here is_secure: 'true' port: '443' storage_path: /datastorage/registry DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: [] DISTRIBUTED_STORAGE_PREFERENCE: #must contain name of the storage config - radosGWStorage",
"DISTRIBUTED_STORAGE_CONFIG: swiftStorage: - SwiftStorage - swift_user: swift_user_here swift_password: swift_password_here swift_container: swift_container_here auth_url: https://example.org/swift/v1/quay auth_version: 3 os_options: tenant_id: <osp_tenant_id_here> user_domain_name: <osp_domain_name_here> ca_cert_path: /conf/stack/swift.cert\" storage_path: /datastorage/registry DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: [] DISTRIBUTED_STORAGE_PREFERENCE: - swiftStorage",
"DISTRIBUTED_STORAGE_CONFIG: default: - RHOCSStorage - access_key: WmrXtSGk8B3nABCDEFGH bucket_name: my-noobaa-bucket-claim-8b844191-dc6c-444e-9ea4-87ece0abcdef hostname: s3.openshift-storage.svc.cluster.local is_secure: true port: \"443\" secret_key: X9P5SDGJtmSuHFCMSLMbdNCMfUABCDEFGH+C5QD storage_path: /datastorage/registry DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: [] DISTRIBUTED_STORAGE_PREFERENCE: - default",
"DISTRIBUTED_STORAGE_CONFIG: default: - RHOCSStorage - access_key: WmrXtSGk8B3nABCDEFGH bucket_name: my-noobaa-bucket-claim-8b844191-dc6c-444e-9ea4-87ece0abcdef hostname: s3.openshift-storage.svc.cluster.local is_secure: true port: \"443\" secret_key: X9P5SDGJtmSuHFCMSLMbdNCMfUABCDEFGH+C5QD storage_path: /datastorage/registry DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: [] DISTRIBUTED_STORAGE_PREFERENCE: - default",
"apiVersion: noobaa.io/v1alpha1 kind: NooBaa metadata: name: noobaa namespace: openshift-storage spec: dbResources: requests: cpu: '0.1' memory: 1Gi dbType: postgres coreResources: requests: cpu: '0.1' memory: 1Gi",
"oc create -n openshift-storage -f noobaa.yaml",
"noobaa.noobaa.io/noobaa created",
"oc get -n openshift-storage noobaas noobaa -w",
"NAME MGMT-ENDPOINTS S3-ENDPOINTS IMAGE PHASE AGE noobaa [https://10.0.32.3:30318] [https://10.0.32.3:31958] registry.redhat.io/ocs4/mcg-core-rhel8@sha256:56624aa7dd4ca178c1887343c7445a9425a841600b1309f6deace37ce6b8678d Ready 3d18h",
"apiVersion: noobaa.io/v1alpha1 kind: BackingStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: noobaa-pv-backing-store namespace: openshift-storage spec: pvPool: numVolumes: 1 resources: requests: storage: 50Gi 1 storageClass: STORAGE-CLASS-NAME 2 type: pv-pool",
"oc create -f noobaa-pv-backing-store.yaml",
"backingstore.noobaa.io/noobaa-pv-backing-store created",
"oc patch bucketclass noobaa-default-bucket-class --patch '{\"spec\":{\"placementPolicy\":{\"tiers\":[{\"backingStores\":[\"noobaa-pv-backing-store\"]}]}}}' --type merge -n openshift-storage"
] | https://docs.redhat.com/en/documentation/red_hat_quay/3.13/html/deploying_the_red_hat_quay_operator_on_openshift_container_platform/operator-preconfigure |
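The pre-deployment options above can be combined before the config bundle Secret is created: the automation parameters suggested for config.yaml and one of the unmanaged storage examples may sit in the same file. The fragment below is a sketch only, meant to be merged into the minimal config.yaml shown at the start of this chapter; the AWS S3 host, keys, bucket, and region are placeholder values, and any of the other documented storage examples (Google Cloud, Azure, Ceph/RadosGW, Swift, NooBaa) can replace the S3 block.

# Illustrative config.yaml fragment: automation settings plus unmanaged AWS S3 storage.
# All storage credentials and names below are placeholders.
FEATURE_USER_INITIALIZE: true        # allow /api/v1/user/initialize to create the first user
BROWSER_API_CALLS_XHR_ONLY: false    # allow general access to the registry API
SUPER_USERS:
  - quayadmin                        # first user, created through the API after deployment
FEATURE_USER_CREATION: false         # restrict further user creation to superusers
DISTRIBUTED_STORAGE_CONFIG:
  s3Storage:
    - S3Storage
    - host: s3.us-east-2.amazonaws.com
      s3_access_key: <access_key>
      s3_secret_key: <secret_key>
      s3_bucket: <bucket_name>
      s3_region: <region>
      storage_path: /datastorage/registry
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: []
DISTRIBUTED_STORAGE_PREFERENCE:
  - s3Storage

The remaining steps are unchanged: create the config-bundle-secret from this file with oc create secret generic, and reference it from a quayregistry.yaml that lists the objectstorage component with managed: false.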
Chapter 2. Installing the Operator | Chapter 2. Installing the Operator The Red Hat Service Interconnect Operator creates and manages sites in OpenShift. Note The Red Hat Service Interconnect Operator is supported only on OpenShift version 4 as distinct from OpenShift version 3. Installing an Operator requires administrator-level privileges for your cluster. 2.1. Installing the Operator for all namespaces using the CLI The steps in this section show how to use the kubectl command to install and deploy the latest version of the Red Hat Service Interconnect Operator in a given Kubernetes cluster. Installing the operator for all namespaces allows you to create a site in any namespace. Prerequisites Access to a cluster using a cluster-admin account. Operator Lifecycle Manager is installed. This is installed by default on OpenShift clusters. See QuickStart for more information about installation. Procedure Log in as a cluster administrator. Complete the steps described in Red Hat Container Registry Authentication . Create a file named subscription-all.yaml with the following: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: skupper-operator namespace: openshift-operators spec: channel: stable-1 installPlanApproval: Automatic name: skupper-operator source: redhat-operators sourceNamespace: openshift-marketplace startingCSV: skupper-operator.v1.8.1-rh-3 Note If you want to keep updates confined to 1.8.x releases, set the value of channel to stable-1.8 . If you do not specify startingCSV , the subscription defaults to the latest operator version. If you specify installPlanApproval as Manual , sites are not automatically upgraded to the latest version of Service Interconnect. See Chapter 4, Upgrading the Red Hat Service Interconnect Operator and sites for information on manually upgrading sites. Apply the subscription YAML: USD kubectl apply -f subscription-all.yaml Additional information See Using Skupper for instructions about using YAML to create sites. 2.2. Installing the Operator for a single namespace using the CLI The steps in this section show how to use the kubectl command to install and deploy the latest version of the Red Hat Service Interconnect Operator in a given Kubernetes cluster. Installing the operator for a single namespace allows you to create a site in the specified namespace. Prerequisites Access to a cluster using a cluster-admin account. Operator Lifecycle Manager is installed. This is installed by default on OpenShift clusters. See QuickStart for more information about installation. Procedure Log in as a cluster administrator. Complete the steps described in Red Hat Container Registry Authentication . Create an Operator group in the namespace where you want to create a site: Create a file named operator-group.yaml with the following: kind: OperatorGroup apiVersion: operators.coreos.com/v1 metadata: name: skupper-operator namespace: my-namespace spec: targetNamespaces: - my-namespace where my-namespace is the name of the namespace in which you want to create the site. 
Apply the Operator group YAML: USD kubectl apply -f operator-group.yaml Create a file named subscription-myns.yaml with the following: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: skupper-operator namespace: my-namespace spec: channel: stable-1 installPlanApproval: Automatic name: skupper-operator source: redhat-operators sourceNamespace: openshift-marketplace startingCSV: skupper-operator.v1.8.1-rh-3 where my-namespace is the name of the namespace you want to create the site. Note If you want to keep updates confined to 1.8.x releases, set the value of channel to stable-1.8 . If you do not specify startingCSV , the subscription defaults to the latest operator version. If you specify installPlanApproval as Manual , sites are not automatically upgraded to the latest version of Service Interconnect. See Chapter 4, Upgrading the Red Hat Service Interconnect Operator and sites for information on manually upgrading sites. Apply the subscription YAML: USD kubectl apply -f subscription-myns.yaml Additional information See Using Skupper for instructions about using YAML to create sites. 2.3. Installing the Operator using the OpenShift console The procedures in this section show how to use the OperatorHub from the OpenShift console to install and deploy the latest version of the Red Hat Service Interconnect Operator in a given OpenShift namespace. Prerequisites Access to an OpenShift cluster using a cluster-admin account. See Release Notes for supported OpenShift versions. Procedure In the OpenShift web console, navigate to Operators OperatorHub . Choose Red Hat Service Interconnect Operator from the list of available Operators, and then click Install . On the Operator Installation page, two Installation mode options are available: All namespaces on the cluster A specific namespace on the cluster For this example, choose A specific namespace on the cluster . Choose an Update approval option. By default, Automatic approval is selected, and sites will upgrade to the latest version of Service Interconnect. If you choose Manual approval, sites will not be automatically upgraded to the latest version of Service Interconnect. See Chapter 4, Upgrading the Red Hat Service Interconnect Operator and sites for information on manually upgrading sites. Select the namespace into which you want to install the Operator, and then click Install . The Installed Operators page appears displaying the status of the Operator installation. Verify that the Red Hat Service Interconnect Operator is displayed and wait until the Status changes to Succeeded . If the installation is not successful, troubleshoot the error: Click Red Hat Service Interconnect Operator on the Installed Operators page. Select the Subscription tab and view any failures or errors. For more information about installing Operators, see the OpenShift Documentation Additional information See Using Skupper for instructions about using YAML to create sites. | [
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: skupper-operator namespace: openshift-operators spec: channel: stable-1 installPlanApproval: Automatic name: skupper-operator source: redhat-operators sourceNamespace: openshift-marketplace startingCSV: skupper-operator.v1.8.1-rh-3",
"kubectl apply -f subscription-all.yaml",
"kind: OperatorGroup apiVersion: operators.coreos.com/v1 metadata: name: skupper-operator namespace: my-namespace spec: targetNamespaces: - my-namespace",
"kubectl apply -f operator-group.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: skupper-operator namespace: my-namespace spec: channel: stable-1 installPlanApproval: Automatic name: skupper-operator source: redhat-operators sourceNamespace: openshift-marketplace startingCSV: skupper-operator.v1.8.1-rh-3",
"kubectl apply -f subscription-myns.yaml"
] | https://docs.redhat.com/en/documentation/red_hat_service_interconnect/1.8/html/installation/installing-operator |
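Because the single-namespace installation needs both an OperatorGroup and a Subscription in the target namespace, the two manifests from the procedure above can also be kept in one file separated by a YAML document marker and applied with a single kubectl apply. This is a convenience sketch rather than part of the documented procedure; my-namespace and the file name are placeholders.

# skupper-single-namespace.yaml: OperatorGroup and Subscription combined for convenience.
# Replace my-namespace with the namespace in which you want to create the site.
kind: OperatorGroup
apiVersion: operators.coreos.com/v1
metadata:
  name: skupper-operator
  namespace: my-namespace
spec:
  targetNamespaces:
    - my-namespace
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: skupper-operator
  namespace: my-namespace
spec:
  channel: stable-1                 # use stable-1.8 to confine updates to 1.8.x releases
  installPlanApproval: Automatic    # set to Manual to approve upgrades yourself
  name: skupper-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  startingCSV: skupper-operator.v1.8.1-rh-3   # omit to default to the latest operator version

Applying this file with kubectl apply -f skupper-single-namespace.yaml is equivalent to applying operator-group.yaml and subscription-myns.yaml separately.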
Chapter 359. uniVocity CSV DataFormat | Chapter 359. uniVocity CSV DataFormat Available as of Camel version 2.15 This Data Format uses uniVocity-parsers for reading and writing 3 kinds of tabular data text files: CSV (Comma Separated Values), where the values are separated by a symbol (usually a comma) fixed-width, where the values have known sizes TSV (Tabular Separated Values), where the fields are separated by a tabulation Thus there are 3 data formats based on uniVocity-parsers. If you use Maven you can just add the following to your pom.xml, substituting the version number for the latest and greatest release (see the download page for the latest versions ). <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-univocity-parsers</artifactId> <version>x.x.x</version> </dependency> 359.1. Options Most configuration options of the uniVocity-parsers are available in the data formats. If you want more information about a particular option, please refer to their documentation page . The 3 data formats share common options and have dedicated ones, this section presents them all. 359.2. Options The uniVocity CSV dataformat supports 18 options, which are listed below. Name Default Java Type Description quoteAllFields false Boolean Whether or not all values must be quoted when writing them. quote " String The quote symbol. quoteEscape " String The quote escape symbol delimiter , String The delimiter of values nullValue String The string representation of a null value. The default value is null skipEmptyLines true Boolean Whether or not the empty lines must be ignored. The default value is true ignoreTrailingWhitespaces true Boolean Whether or not the trailing white spaces must ignored. The default value is true ignoreLeadingWhitespaces true Boolean Whether or not the leading white spaces must be ignored. The default value is true headersDisabled false Boolean Whether or not the headers are disabled. When defined, this option explicitly sets the headers as null which indicates that there is no header. The default value is false headerExtractionEnabled false Boolean Whether or not the header must be read in the first line of the test document The default value is false numberOfRecordsToRead Integer The maximum number of record to read. emptyValue String The String representation of an empty value lineSeparator String The line separator of the files The default value is to use the JVM platform line separator normalizedLineSeparator String The normalized line separator of the files The default value is a new line character. comment # String The comment symbol. The default value is # lazyLoad false Boolean Whether the unmarshalling should produce an iterator that reads the lines on the fly or if all the lines must be read at one. The default value is false asMap false Boolean Whether the unmarshalling should produce maps for the lines values instead of lists. It requires to have header (either defined or collected). The default value is false contentTypeHeader false Boolean Whether the data format should set the Content-Type header with the type from the data format if the data format is capable of doing so. For example application/xml for data formats marshalling to XML, or application/json for data formats marshalling to JSon etc. 359.3. Spring Boot Auto-Configuration The component supports 19 options, which are listed below. Name Description Default Type camel.dataformat.univocity-csv.as-map Whether the unmarshalling should produce maps for the lines values instead of lists. 
It requires to have header (either defined or collected). The default value is false false Boolean camel.dataformat.univocity-csv.comment The comment symbol. The default value is # # String camel.dataformat.univocity-csv.content-type-header Whether the data format should set the Content-Type header with the type from the data format if the data format is capable of doing so. For example application/xml for data formats marshalling to XML, or application/json for data formats marshalling to JSon etc. false Boolean camel.dataformat.univocity-csv.delimiter The delimiter of values , String camel.dataformat.univocity-csv.empty-value The String representation of an empty value String camel.dataformat.univocity-csv.enabled Enable univocity-csv dataformat true Boolean camel.dataformat.univocity-csv.header-extraction-enabled Whether or not the header must be read in the first line of the test document The default value is false false Boolean camel.dataformat.univocity-csv.headers-disabled Whether or not the headers are disabled. When defined, this option explicitly sets the headers as null which indicates that there is no header. The default value is false false Boolean camel.dataformat.univocity-csv.ignore-leading-whitespaces Whether or not the leading white spaces must be ignored. The default value is true true Boolean camel.dataformat.univocity-csv.ignore-trailing-whitespaces Whether or not the trailing white spaces must ignored. The default value is true true Boolean camel.dataformat.univocity-csv.lazy-load Whether the unmarshalling should produce an iterator that reads the lines on the fly or if all the lines must be read at one. The default value is false false Boolean camel.dataformat.univocity-csv.line-separator The line separator of the files The default value is to use the JVM platform line separator String camel.dataformat.univocity-csv.normalized-line-separator The normalized line separator of the files The default value is a new line character. String camel.dataformat.univocity-csv.null-value The string representation of a null value. The default value is null String camel.dataformat.univocity-csv.number-of-records-to-read The maximum number of record to read. Integer camel.dataformat.univocity-csv.quote The quote symbol. " String camel.dataformat.univocity-csv.quote-all-fields Whether or not all values must be quoted when writing them. false Boolean camel.dataformat.univocity-csv.quote-escape The quote escape symbol " String camel.dataformat.univocity-csv.skip-empty-lines Whether or not the empty lines must be ignored. The default value is true true Boolean 359.4. Marshalling usages The marshalling accepts either: A list of maps (L`ist<Map<String, ?>>`), one for each line A single map ( Map<String, ?> ), for a single line Any other body will throws an exception. 359.4.1. Usage example: marshalling a Map into CSV format <route> <from uri="direct:input"/> <marshal> <univocity-csv/> </marshal> <to uri="mock:result"/> </route> 359.4.2. Usage example: marshalling a Map into fixed-width format <route> <from uri="direct:input"/> <marshal> <univocity-fixed padding="_"> <univocity-header length="5"/> <univocity-header length="5"/> <univocity-header length="5"/> </univocity-fixed> </marshal> <to uri="mock:result"/> </route> 359.4.3. Usage example: marshalling a Map into TSV format <route> <from uri="direct:input"/> <marshal> <univocity-tsv/> </marshal> <to uri="mock:result"/> </route> 359.5. Unmarshalling usages The unmarshalling uses an InputStream in order to read the data. 
Each row produces either: a list with all the values in it ( asMap option with false ); A map with all the values indexed by the headers ( asMap option with true ). All the rows can either: be collected at once into a list ( lazyLoad option with false ); be read on the fly using an iterator ( lazyLoad option with true ). 359.5.1. Usage example: unmarshalling a CSV format into maps with automatic headers <route> <from uri="direct:input"/> <unmarshal> <univocity-csv headerExtractionEnabled="true" asMap="true"/> </unmarshal> <to uri="mock:result"/> </route> 359.5.2. Usage example: unmarshalling a fixed-width format into lists <route> <from uri="direct:input"/> <unmarshal> <univocity-fixed> <univocity-header length="5"/> <univocity-header length="5"/> <univocity-header length="5"/> </univocity-fixed> </unmarshal> <to uri="mock:result"/> </route> | [
"<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-univocity-parsers</artifactId> <version>x.x.x</version> </dependency>",
"<route> <from uri=\"direct:input\"/> <marshal> <univocity-csv/> </marshal> <to uri=\"mock:result\"/> </route>",
"<route> <from uri=\"direct:input\"/> <marshal> <univocity-fixed padding=\"_\"> <univocity-header length=\"5\"/> <univocity-header length=\"5\"/> <univocity-header length=\"5\"/> </univocity-fixed> </marshal> <to uri=\"mock:result\"/> </route>",
"<route> <from uri=\"direct:input\"/> <marshal> <univocity-tsv/> </marshal> <to uri=\"mock:result\"/> </route>",
"<route> <from uri=\"direct:input\"/> <unmarshal> <univocity-csv headerExtractionEnabled=\"true\" asMap=\"true\"/> </unmarshal> <to uri=\"mock:result\"/> </route>",
"<route> <from uri=\"direct:input\"/> <unmarshal> <univocity-fixed> <univocity-header length=\"5\"/> <univocity-header length=\"5\"/> <univocity-header length=\"5\"/> </univocity-fixed> </unmarshal> <to uri=\"mock:result\"/> </route>"
] | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/univocity-csv-dataformat |
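The examples above rely mostly on default settings. Judging from the headerExtractionEnabled and asMap attributes in the unmarshalling example, the other options listed in the tables should be settable in the same way, as attributes on the data format element; the route below is a sketch under that assumption, reusing the direct:input and mock:result endpoints from the documented examples.

<!-- Sketch: marshal a list of maps to semicolon-delimited CSV with every field quoted
     and null values written as N/A. Option names are taken from the options table;
     the attribute form mirrors the headerExtractionEnabled/asMap usage shown above. -->
<route>
    <from uri="direct:input"/>
    <marshal>
        <univocity-csv delimiter=";" quoteAllFields="true" nullValue="N/A"/>
    </marshal>
    <to uri="mock:result"/>
</route>

With these settings, each incoming map (or list of maps) is written as one CSV line whose values are separated by semicolons, wrapped in the quote symbol, and rendered as N/A when null.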
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/managing_secrets_with_the_key_manager_service/making-open-source-more-inclusive |
Chapter 3. Developing and running Camel K integrations | Chapter 3. Developing and running Camel K integrations This chapter explains how to set up your development environment and how to develop and deploy simple Camel K integrations written in Java and YAML. It also shows how to use the kamel command line to manage Camel K integrations at runtime. For example, this includes running, describing, logging, and deleting integrations. Section 3.1, "Setting up your Camel K development environment" Section 3.2, "Developing Camel K integrations in Java" Section 3.3, "Developing Camel K integrations in YAML" Section 3.4, "Running Camel K integrations" Section 3.5, "Running Camel K integrations in development mode" Section 3.6, "Running Camel K integrations using modeline" Section 3.7, "Build" Section 3.8, "Promoting across environments" 3.1. Setting up your Camel K development environment You must set up your environment with the recommended development tooling before you can automatically deploy the Camel K quick start tutorials. This section explains how to install the recommended Visual Studio (VS) Code IDE and the extensions that it provides for Camel K. Note The Camel K VS Code extensions are community features. VS Code is recommended for ease of use and the best developer experience of Camel K. This includes automatic completion of Camel DSL code and Camel K traits. However, you can manually enter your code and tutorial commands using your chosen IDE instead of VS Code. Prerequisites You must have access to an OpenShift cluster on which the Camel K Operator and OpenShift Serverless Operator are installed: Installing Camel K Installing OpenShift Serverless from the OperatorHub Procedure Install VS Code on your development platform. For example, on Red Hat Enterprise Linux: Install the required key and repository: USD sudo rpm --import https://packages.microsoft.com/keys/microsoft.asc USD sudo sh -c 'echo -e "[code]\nname=Visual Studio Code\nbaseurl=https://packages.microsoft.com/yumrepos/vscode\nenabled=1\ngpgcheck=1\ngpgkey=https://packages.microsoft.com/keys/microsoft.asc" > /etc/yum.repos.d/vscode.repo' Update the cache and install the VS Code package: USD yum check-update USD sudo yum install code For details on installing on other platforms, see the VS Code installation documentation . Enter the code command to launch the VS Code editor. For more details, see the VS Code command line documentation . Install the VS Code Camel Extension Pack, which includes the extensions required for Camel K. For example, in VS Code: In the left navigation bar, click Extensions . In the search box, enter Apache Camel . Select the Extension Pack for Apache Camel by Red Hat , and click Install . For more details, see the instructions for the Extension Pack for Apache Camel by Red Hat . Additional resources VS Code Getting Started documentation VS Code Tooling for Apache Camel K by Red Hat extension VS Code Language Support for Apache Camel by Red Hat extension Apache Camel K and VS Code tooling example To upgrade your Camel application from Camel 3.x to 3.y see, Camel 3.x Upgrade Guide . 3.2. Developing Camel K integrations in Java This section shows how to develop a simple Camel K integration in Java DSL. Writing an integration in Java to be deployed using Camel K is the same as defining your routing rules in Camel. However, you do not need to build and package the integration as a JAR when using Camel K. You can use any Camel component directly in your integration routes. 
Camel K automatically handles the dependency management and imports all the required libraries from the Camel catalog using code inspection. Prerequisites Setting up your Camel K development environment Procedure Enter the kamel init command to generate a simple Java integration file. For example: USD kamel init HelloCamelK.java Open the generated integration file in your IDE and edit as appropriate. For example, the HelloCamelK.java integration automatically includes the Camel timer and log components to help you get started: // camel-k: language=java import org.apache.camel.builder.RouteBuilder; public class HelloCamelK extends RouteBuilder { @Override public void configure() throws Exception { // Write your routes here, for example: from("timer:java?period=1s") .routeId("java") .setBody() .simple("Hello Camel K from USD{routeId}") .to("log:info"); } } steps Running Camel K integrations 3.3. Developing Camel K integrations in YAML This section explains how to develop a simple Camel K integration in YAML DSL. Writing an integration in YAML to be deployed using Camel K is the same as defining your routing rules in Camel. You can use any Camel component directly in your integration routes. Camel K automatically handles the dependency management and imports all the required libraries from the Camel catalog using code inspection. Prerequisites Setting up your Camel K development environment Procedure Enter the kamel init command to generate a simple YAML integration file. For example: USD kamel init hello.camelk.yaml Open the generated integration file in your IDE and edit as appropriate. For example, the hello.camelk.yaml integration automatically includes the Camel timer and log components to help you get started: # Write your routes here, for example: - from: uri: "timer:yaml" parameters: period: "1s" steps: - set-body: constant: "Hello Camel K from yaml" - to: "log:info" 3.4. Running Camel K integrations You can run Camel K integrations in the cloud on your OpenShift cluster from the command line using the kamel run command. Prerequisites Setting up your Camel K development environment . You must already have a Camel integration written in Java or YAML DSL. Procedure Log into your OpenShift cluster using the oc client tool, for example: USD oc login --token=my-token --server=https://my-cluster.example.com:6443 Ensure that the Camel K Operator is running, for example: USD oc get pod NAME READY STATUS RESTARTS AGE camel-k-operator-86b8d94b4-pk7d6 1/1 Running 0 6m28s Enter the kamel run command to run your integration in the cloud on OpenShift. For example: Java example USD kamel run HelloCamelK.java integration "hello-camel-k" created YAML example USD kamel run hello.camelk.yaml integration "hello" created Enter the kamel get command to check the status of the integration: USD kamel get NAME PHASE KIT hello Building Kit myproject/kit-bq666mjej725sk8sn12g When the integration runs for the first time, Camel K builds the integration kit for the container image, which downloads all the required Camel modules and adds them to the image classpath. 
Enter kamel get again to verify that the integration is running: USD kamel get NAME PHASE KIT hello Running myproject/kit-bq666mjej725sk8sn12g Enter the kamel log command to print the log to stdout : USD kamel log hello [1] 2021-08-11 17:58:40,573 INFO [org.apa.cam.k.Runtime] (main) Apache Camel K Runtime 1.7.1.fuse-800025-redhat-00001 [1] 2021-08-11 17:58:40,653 INFO [org.apa.cam.qua.cor.CamelBootstrapRecorder] (main) bootstrap runtime: org.apache.camel.quarkus.main.CamelMainRuntime [1] 2021-08-11 17:58:40,844 INFO [org.apa.cam.k.lis.SourcesConfigurer] (main) Loading routes from: SourceDefinition{name='camel-k-embedded-flow', language='yaml', location='file:/etc/camel/sources/camel-k-embedded-flow.yaml', } [1] 2021-08-11 17:58:41,216 INFO [org.apa.cam.imp.eng.AbstractCamelContext] (main) Routes startup summary (total:1 started:1) [1] 2021-08-11 17:58:41,217 INFO [org.apa.cam.imp.eng.AbstractCamelContext] (main) Started route1 (timer://yaml) [1] 2021-08-11 17:58:41,217 INFO [org.apa.cam.imp.eng.AbstractCamelContext] (main) Apache Camel 3.10.0.fuse-800010-redhat-00001 (camel-1) started in 136ms (build:0ms init:100ms start:36ms) [1] 2021-08-11 17:58:41,268 INFO [io.quarkus] (main) camel-k-integration 1.6.6 on JVM (powered by Quarkus 1.11.7.Final-redhat-00009) started in 2.064s. [1] 2021-08-11 17:58:41,269 INFO [io.quarkus] (main) Profile prod activated. [1] 2021-08-11 17:58:41,269 INFO [io.quarkus] (main) Installed features: [camel-bean, camel-core, camel-k-core, camel-k-runtime, camel-log, camel-support-common, camel-timer, camel-yaml-dsl, cdi] [1] 2021-08-11 17:58:42,423 INFO [info] (Camel (camel-1) thread #0 - timer://yaml) Exchange[ExchangePattern: InOnly, BodyType: String, Body: Hello Camel K from yaml] ... Press Ctrl-C to terminate logging in the terminal. Additional resources For more details on the kamel run command, enter kamel run --help For faster deployment turnaround times, see Running Camel K integrations in development mode For details of development tools to run integrations, see VS Code Tooling for Apache Camel K by Red Hat See also Managing Camel K integrations Running An Integration Without CLI You can run an integration without a CLI (Command Line Interface) and create an Integration Custom Resource with the configuration to run your application. For example, execute the following sample route. It returns the expected Integration Custom Resource. Save this custom resource in a yaml file, my-integration.yaml . Now, run the integration that contains the Integration Custom Resource using the oc command line, the UI, or the API to call the OpenShift cluster. In the following example, the oc CLI is used from the command line. The operator runs the Integration. Note Kubernetes supports Structural Schemas for CustomResourceDefinitions. For more details about Camel K traits, see Camel K trait configuration reference . Schema changes on Custom Resources The strongly-typed Trait API imposes changes on the following CustomResourceDefinitions: integrations , integrationkits , and integrationplatforms . Trait properties under spec.traits.<trait-id>.configuration are now defined directly under spec.traits.<trait-id>. Backward compatibility is possible in this implementation. To achieve backward compatibility, the Configuration field with RawMessage type is provided for each trait type, so that the existing integrations and resources are read from the new Red Hat build of Apache Camel K version. 
When the old integrations and resources are read, the legacy configuration in each trait (if any) is migrated to the new Trait API fields. If the values are predefined on the new API fields, they precede the legacy ones. 3.5. Running Camel K integrations in development mode You can run Camel K integrations in development mode on your OpenShift cluster from the command line. Using development mode, you can iterate quickly on integrations in development and get fast feedback on your code. When you specify the kamel run command with the --dev option, this deploys the integration in the cloud immediately and shows the integration logs in the terminal. You can then change the code and see the changes automatically applied instantly to the remote integration Pod on OpenShift. The terminal automatically displays all redeployments of the remote integration in the cloud. Note The artifacts generated by Camel K in development mode are identical to those that you run in production. The purpose of development mode is faster development. Prerequisites Setting up your Camel K development environment . You must already have a Camel integration written in Java or YAML DSL. Procedure Log into your OpenShift cluster using the oc client tool, for example: USD oc login --token=my-token --server=https://my-cluster.example.com:6443 Ensure that the Camel K Operator is running, for example: USD oc get pod NAME READY STATUS RESTARTS AGE camel-k-operator-86b8d94b4-pk7d6 1/1 Running 0 6m28s Enter the kamel run command with --dev to run your integration in development mode on OpenShift in the cloud. The following shows a simple Java example: USD kamel run HelloCamelK.java --dev Condition "IntegrationPlatformAvailable" is "True" for Integration hello-camel-k: test/camel-k Integration hello-camel-k in phase "Initialization" Integration hello-camel-k in phase "Building Kit" Condition "IntegrationKitAvailable" is "True" for Integration hello-camel-k: kit-c49sqn4apkb4qgn55ak0 Integration hello-camel-k in phase "Deploying" Progress: integration "hello-camel-k" in phase Initialization Progress: integration "hello-camel-k" in phase Building Kit Progress: integration "hello-camel-k" in phase Deploying Integration hello-camel-k in phase "Running" Condition "DeploymentAvailable" is "True" for Integration hello-camel-k: deployment name is hello-camel-k Progress: integration "hello-camel-k" in phase Running Condition "CronJobAvailable" is "False" for Integration hello-camel-k: different controller strategy used (deployment) Condition "KnativeServiceAvailable" is "False" for Integration hello-camel-k: different controller strategy used (deployment) Condition "Ready" is "False" for Integration hello-camel-k Condition "Ready" is "True" for Integration hello-camel-k [1] Monitoring pod hello-camel-k-7f85df47b8-js7cb ... ... 
[1] 2021-08-11 18:34:44,069 INFO [org.apa.cam.k.Runtime] (main) Apache Camel K Runtime 1.7.1.fuse-800025-redhat-00001 [1] 2021-08-11 18:34:44,167 INFO [org.apa.cam.qua.cor.CamelBootstrapRecorder] (main) bootstrap runtime: org.apache.camel.quarkus.main.CamelMainRuntime [1] 2021-08-11 18:34:44,362 INFO [org.apa.cam.k.lis.SourcesConfigurer] (main) Loading routes from: SourceDefinition{name='HelloCamelK', language='java', location='file:/etc/camel/sources/HelloCamelK.java', } [1] 2021-08-11 18:34:46,180 INFO [org.apa.cam.imp.eng.AbstractCamelContext] (main) Routes startup summary (total:1 started:1) [1] 2021-08-11 18:34:46,180 INFO [org.apa.cam.imp.eng.AbstractCamelContext] (main) Started java (timer://java) [1] 2021-08-11 18:34:46,180 INFO [org.apa.cam.imp.eng.AbstractCamelContext] (main) Apache Camel 3.10.0.fuse-800010-redhat-00001 (camel-1) started in 243ms (build:0ms init:213ms start:30ms) [1] 2021-08-11 18:34:46,190 INFO [io.quarkus] (main) camel-k-integration 1.6.6 on JVM (powered by Quarkus 1.11.7.Final-redhat-00009) started in 3.457s. [1] 2021-08-11 18:34:46,190 INFO [io.quarkus] (main) Profile prod activated. [1] 2021-08-11 18:34:46,191 INFO [io.quarkus] (main) Installed features: [camel-bean, camel-core, camel-java-joor-dsl, camel-k-core, camel-k-runtime, camel-log, camel-support-common, camel-timer, cdi] [1] 2021-08-11 18:34:47,200 INFO [info] (Camel (camel-1) thread #0 - timer://java) Exchange[ExchangePattern: InOnly, BodyType: String, Body: Hello Camel K from java] [1] 2021-08-11 18:34:48,180 INFO [info] (Camel (camel-1) thread #0 - timer://java) Exchange[ExchangePattern: InOnly, BodyType: String, Body: Hello Camel K from java] [1] 2021-08-11 18:34:49,180 INFO [info] (Camel (camel-1) thread #0 - timer://java) Exchange[ExchangePattern: InOnly, BodyType: String, Body: Hello Camel K from java] ... Edit the content of your integration DSL file, save your changes, and see the changes displayed instantly in the terminal. For example: ... integration "hello-camel-k" updated ... [2] 2021-08-11 18:40:54,173 INFO [org.apa.cam.k.Runtime] (main) Apache Camel K Runtime 1.7.1.fuse-800025-redhat-00001 [2] 2021-08-11 18:40:54,209 INFO [org.apa.cam.qua.cor.CamelBootstrapRecorder] (main) bootstrap runtime: org.apache.camel.quarkus.main.CamelMainRuntime [2] 2021-08-11 18:40:54,301 INFO [org.apa.cam.k.lis.SourcesConfigurer] (main) Loading routes from: SourceDefinition{name='HelloCamelK', language='java', location='file:/etc/camel/sources/HelloCamelK.java', } [2] 2021-08-11 18:40:55,796 INFO [org.apa.cam.imp.eng.AbstractCamelContext] (main) Routes startup summary (total:1 started:1) [2] 2021-08-11 18:40:55,796 INFO [org.apa.cam.imp.eng.AbstractCamelContext] (main) Started java (timer://java) [2] 2021-08-11 18:40:55,797 INFO [org.apa.cam.imp.eng.AbstractCamelContext] (main) Apache Camel 3.10.0.fuse-800010-redhat-00001 (camel-1) started in 174ms (build:0ms init:147ms start:27ms) [2] 2021-08-11 18:40:55,803 INFO [io.quarkus] (main) camel-k-integration 1.6.6 on JVM (powered by Quarkus 1.11.7.Final-redhat-00009) started in 3.025s. [2] 2021-08-11 18:40:55,808 INFO [io.quarkus] (main) Profile prod activated. 
[2] 2021-08-11 18:40:55,809 INFO [io.quarkus] (main) Installed features: [camel-bean, camel-core, camel-java-joor-dsl, camel-k-core, camel-k-runtime, camel-log, camel-support-common, camel-timer, cdi] [2] 2021-08-11 18:40:56,810 INFO [info] (Camel (camel-1) thread #0 - timer://java) Exchange[ExchangePattern: InOnly, BodyType: String, Body: Hello Camel K from java] [2] 2021-08-11 18:40:57,793 INFO [info] (Camel (camel-1) thread #0 - timer://java) Exchange[ExchangePattern: InOnly, BodyType: String, Body: Hello Camel K from java] ... Press Ctrl-C to terminate logging in the terminal. Additional resources For more details on the kamel run command, enter kamel run --help For details of development tools to run integrations, see VS Code Tooling for Apache Camel K by Red Hat Managing Camel K integrations Configuring Camel K integration dependencies 3.6. Running Camel K integrations using modeline You can use the Camel K modeline to specify multiple configuration options in a Camel K integration source file, which are executed at runtime. This creates efficiencies by saving you the time of re-entering multiple command line options and helps to prevent input errors. The following example shows a modeline entry from a Java integration file that enables 3scale and limits the integration container memory. Prerequisites Setting up your Camel K development environment You must already have a Camel integration written in Java or YAML DSL. Procedure Add a Camel K modeline entry to your integration file. For example: ThreeScaleRest.java // camel-k: trait=3scale.enabled=true trait=container.limit-memory=256Mi 1 import org.apache.camel.builder.RouteBuilder; public class ThreeScaleRest extends RouteBuilder { @Override public void configure() throws Exception { rest().get("/") .to("direct:x"); from("direct:x") .setBody().constant("Hello"); } } Enables both the container and 3scale traits, to expose the route through 3scale and to limit the container memory. Run the integration, for example: The kamel run command outputs any modeline options specified in the integration, for example: Modeline options have been loaded from source files Full command: kamel run ThreeScaleRest.java --trait=3scale.enabled=true --trait=container.limit-memory=256Mi Additional resources Camel K modeline options For details of development tools to run modeline integrations, see Introducing IDE support for Apache Camel K Modeline . 3.7. Build A Build resource describes the process of assembling a container image that copes with the requirement of an Integration or IntegrationKit . The result of a build is an IntegrationKit that must be reused for multiple Integrations . type Build struct { Spec BuildSpec 1 Status BuildStatus 2 } type BuildSpec struct { Tasks []Task 3 } 1 The desired state 2 The status of the object at current time 3 The build tasks Note The full go definition can be found here . 3.7.1. Build strategy You can choose from different build strategies. The build strategy defines how a build must be executed and following are the available strategies. buildStrategy: pod (each build is ran in a separate pod, the operator monitors the pod state) buildStrategy: routine (each build is ran as a go routine inside the operator pod) Note Routine is the default strategy. The following description allows you to decide when to use which strategy. Routine : provides slightly faster builds as no additional pod is started, and loaded build dependencies (e.g. Maven dependencies) are cached between builds. 
Good for a normal number of builds, with only a few builds running in parallel. Pod : prevents memory pressure on the operator because the build does not consume CPU and memory from the operator go runtime. Good for a large number of builds and many parallel builds. 3.7.2. Build queues IntegrationKits and their base images must be reused across multiple Integrations to accomplish efficient resource management and to optimize build and startup times for Camel K Integrations. To reuse images, the operator queues builds in sequential order. This way the operator can use efficient image layering for Integrations. Note By default, builds are queued sequentially based on their layout (e.g. native, fast-jar) and the build namespace. However, builds may not run sequentially but in parallel to each other based on certain criteria. For instance, native builds will always run in parallel to other builds. Also, when the build must run with a custom IntegrationPlatform, it may run in parallel to other builds that run with the default operator IntegrationPlatform. In general, when there is no chance to reuse the build's image layers, the build tends to run in parallel to other builds. Therefore, to avoid having many builds running in parallel, the operator uses a maximum number of running builds setting that limits the number of builds running at the same time. You can set this limit in the IntegrationPlatform settings. The default value for this limit depends on the build strategy. buildStrategy: pod (MaxRunningBuilds=10) buildStrategy: routine (MaxRunningBuilds=3) 3.8. Promoting across environments As soon as you have an Integration running in your cluster, you can move that integration to a higher environment. That is, you can test your integration in a development environment and, after verifying the results, move it into a production environment. Camel K achieves this goal by using the kamel promote command. With this command you can move an integration from one namespace to another. Prerequisites Setting up your Camel K development environment You must already have a Camel integration written in Java or YAML DSL. Ensure that both the source operator and the destination operator use the same container registry. The default registry (if the Camel K operator is installed via OperatorHub) is registry.redhat.io. Also ensure that the destination namespace provides the Configmaps, Secrets, or Kamelets required by the integration. Note To use the same container registry, you can use the --registry option during the installation phase or change the IntegrationPlatform accordingly. Code example The following simple integration uses a Configmap to expose a message on an HTTP endpoint. You can create and test the integration in a namespace called development . kubectl create configmap my-cm --from-literal=greeting="hello, I am development!" -n development PromoteServer.java import org.apache.camel.builder.RouteBuilder; public class PromoteServer extends RouteBuilder { @Override public void configure() throws Exception { from("platform-http:/hello?httpMethodRestrict=GET").setBody(simple("resource:classpath:greeting")); } } Now run it. kamel run --dev -n development PromoteServer.java --config configmap:my-cm [-t service.node-port=true] Tweak the service trait as needed, depending on the Kubernetes platform and the level of exposure you want to provide. After that, you can test it. curl http://192.168.49.2:32116/hello hello, I am development!
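Before promoting, you can also confirm that the integration is running in the development namespace, for example with kamel get; this is only a quick verification step, and the integration name shown depends on what you deployed.
kamel get -n development
The integration is ready to promote once its phase is reported as Running.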
After testing your integration, you can move it to a production environment. You must have the destination environment (an OpenShift namespace) ready with an operator (sharing the same operator source container registry) and any configuration, such as the configmap you have used here. To do this, create the configmap in the destination namespace. kubectl create configmap my-cm --from-literal=greeting="hello, I am production!" -n production Note For security reasons, there is a check to ensure that the expected resources such as Configmaps, Secrets, and Kamelets are present on the destination. If any of these resources are missing, the integration does not move. You can now promote your integration. kamel promote promote-server -n development --to production kamel logs promote-server -n production Test the promoted integration. curl http://192.168.49.2:30764/hello hello, I am production! Because the Integration reuses the same container image, the new application starts immediately. Also, the immutability of the Integration is assured because the container image is exactly the same as the one tested in development; only the configuration changes. Note The integration running in the development namespace is not altered in any way and keeps running until you stop it. | [
"sudo rpm --import https://packages.microsoft.com/keys/microsoft.asc sudo sh -c 'echo -e \"[code]\\nname=Visual Studio Code\\nbaseurl=https://packages.microsoft.com/yumrepos/vscode\\nenabled=1\\ngpgcheck=1\\ngpgkey=https://packages.microsoft.com/keys/microsoft.asc\" > /etc/yum.repos.d/vscode.repo'",
"yum check-update sudo yum install code",
"kamel init HelloCamelK.java",
"// camel-k: language=java import org.apache.camel.builder.RouteBuilder; public class HelloCamelK extends RouteBuilder { @Override public void configure() throws Exception { // Write your routes here, for example: from(\"timer:java?period=1s\") .routeId(\"java\") .setBody() .simple(\"Hello Camel K from USD{routeId}\") .to(\"log:info\"); } }",
"kamel init hello.camelk.yaml",
"Write your routes here, for example: - from: uri: \"timer:yaml\" parameters: period: \"1s\" steps: - set-body: constant: \"Hello Camel K from yaml\" - to: \"log:info\"",
"oc login --token=my-token --server=https://my-cluster.example.com:6443",
"oc get pod NAME READY STATUS RESTARTS AGE camel-k-operator-86b8d94b4-pk7d6 1/1 Running 0 6m28s",
"kamel run HelloCamelK.java integration \"hello-camel-k\" created",
"kamel run hello.camelk.yaml integration \"hello\" created",
"kamel get NAME PHASE KIT hello Building Kit myproject/kit-bq666mjej725sk8sn12g",
"kamel get NAME PHASE KIT hello Running myproject/kit-bq666mjej725sk8sn12g",
"kamel log hello [1] 2021-08-11 17:58:40,573 INFO [org.apa.cam.k.Runtime] (main) Apache Camel K Runtime 1.7.1.fuse-800025-redhat-00001 [1] 2021-08-11 17:58:40,653 INFO [org.apa.cam.qua.cor.CamelBootstrapRecorder] (main) bootstrap runtime: org.apache.camel.quarkus.main.CamelMainRuntime [1] 2021-08-11 17:58:40,844 INFO [org.apa.cam.k.lis.SourcesConfigurer] (main) Loading routes from: SourceDefinition{name='camel-k-embedded-flow', language='yaml', location='file:/etc/camel/sources/camel-k-embedded-flow.yaml', } [1] 2021-08-11 17:58:41,216 INFO [org.apa.cam.imp.eng.AbstractCamelContext] (main) Routes startup summary (total:1 started:1) [1] 2021-08-11 17:58:41,217 INFO [org.apa.cam.imp.eng.AbstractCamelContext] (main) Started route1 (timer://yaml) [1] 2021-08-11 17:58:41,217 INFO [org.apa.cam.imp.eng.AbstractCamelContext] (main) Apache Camel 3.10.0.fuse-800010-redhat-00001 (camel-1) started in 136ms (build:0ms init:100ms start:36ms) [1] 2021-08-11 17:58:41,268 INFO [io.quarkus] (main) camel-k-integration 1.6.6 on JVM (powered by Quarkus 1.11.7.Final-redhat-00009) started in 2.064s. [1] 2021-08-11 17:58:41,269 INFO [io.quarkus] (main) Profile prod activated. [1] 2021-08-11 17:58:41,269 INFO [io.quarkus] (main) Installed features: [camel-bean, camel-core, camel-k-core, camel-k-runtime, camel-log, camel-support-common, camel-timer, camel-yaml-dsl, cdi] [1] 2021-08-11 17:58:42,423 INFO [info] (Camel (camel-1) thread #0 - timer://yaml) Exchange[ExchangePattern: InOnly, BodyType: String, Body: Hello Camel K from yaml]",
"kamel run Sample.java -o yaml",
"apiVersion: camel.apache.org/v1 kind: Integration metadata: creationTimestamp: null name: my-integration namespace: default spec: sources: - content: \" import org.apache.camel.builder.RouteBuilder; public class Sample extends RouteBuilder { @Override public void configure() throws Exception { from(\\\"timer:tick\\\") .log(\\\"Hello Integration!\\\"); } }\" name: Sample.java status: {}",
"apply -f my-integration.yaml integration.camel.apache.org/my-integration created",
"traits: container: configuration: enabled: true name: my-integration",
"traits: container: enabled: true name: my-integration",
"type Trait struct { // Can be used to enable or disable a trait. All traits share this common property. Enabled *bool `property:\"enabled\" json:\"enabled,omitempty\"` // Legacy trait configuration parameters. // Deprecated: for backward compatibility. Configuration *Configuration `json:\"configuration,omitempty\"` } // Deprecated: for backward compatibility. type Configuration struct { RawMessage `json:\",inline\"` }",
"oc login --token=my-token --server=https://my-cluster.example.com:6443",
"oc get pod NAME READY STATUS RESTARTS AGE camel-k-operator-86b8d94b4-pk7d6 1/1 Running 0 6m28s",
"kamel run HelloCamelK.java --dev Condition \"IntegrationPlatformAvailable\" is \"True\" for Integration hello-camel-k: test/camel-k Integration hello-camel-k in phase \"Initialization\" Integration hello-camel-k in phase \"Building Kit\" Condition \"IntegrationKitAvailable\" is \"True\" for Integration hello-camel-k: kit-c49sqn4apkb4qgn55ak0 Integration hello-camel-k in phase \"Deploying\" Progress: integration \"hello-camel-k\" in phase Initialization Progress: integration \"hello-camel-k\" in phase Building Kit Progress: integration \"hello-camel-k\" in phase Deploying Integration hello-camel-k in phase \"Running\" Condition \"DeploymentAvailable\" is \"True\" for Integration hello-camel-k: deployment name is hello-camel-k Progress: integration \"hello-camel-k\" in phase Running Condition \"CronJobAvailable\" is \"False\" for Integration hello-camel-k: different controller strategy used (deployment) Condition \"KnativeServiceAvailable\" is \"False\" for Integration hello-camel-k: different controller strategy used (deployment) Condition \"Ready\" is \"False\" for Integration hello-camel-k Condition \"Ready\" is \"True\" for Integration hello-camel-k [1] Monitoring pod hello-camel-k-7f85df47b8-js7cb [1] 2021-08-11 18:34:44,069 INFO [org.apa.cam.k.Runtime] (main) Apache Camel K Runtime 1.7.1.fuse-800025-redhat-00001 [1] 2021-08-11 18:34:44,167 INFO [org.apa.cam.qua.cor.CamelBootstrapRecorder] (main) bootstrap runtime: org.apache.camel.quarkus.main.CamelMainRuntime [1] 2021-08-11 18:34:44,362 INFO [org.apa.cam.k.lis.SourcesConfigurer] (main) Loading routes from: SourceDefinition{name='HelloCamelK', language='java', location='file:/etc/camel/sources/HelloCamelK.java', } [1] 2021-08-11 18:34:46,180 INFO [org.apa.cam.imp.eng.AbstractCamelContext] (main) Routes startup summary (total:1 started:1) [1] 2021-08-11 18:34:46,180 INFO [org.apa.cam.imp.eng.AbstractCamelContext] (main) Started java (timer://java) [1] 2021-08-11 18:34:46,180 INFO [org.apa.cam.imp.eng.AbstractCamelContext] (main) Apache Camel 3.10.0.fuse-800010-redhat-00001 (camel-1) started in 243ms (build:0ms init:213ms start:30ms) [1] 2021-08-11 18:34:46,190 INFO [io.quarkus] (main) camel-k-integration 1.6.6 on JVM (powered by Quarkus 1.11.7.Final-redhat-00009) started in 3.457s. [1] 2021-08-11 18:34:46,190 INFO [io.quarkus] (main) Profile prod activated. [1] 2021-08-11 18:34:46,191 INFO [io.quarkus] (main) Installed features: [camel-bean, camel-core, camel-java-joor-dsl, camel-k-core, camel-k-runtime, camel-log, camel-support-common, camel-timer, cdi] [1] 2021-08-11 18:34:47,200 INFO [info] (Camel (camel-1) thread #0 - timer://java) Exchange[ExchangePattern: InOnly, BodyType: String, Body: Hello Camel K from java] [1] 2021-08-11 18:34:48,180 INFO [info] (Camel (camel-1) thread #0 - timer://java) Exchange[ExchangePattern: InOnly, BodyType: String, Body: Hello Camel K from java] [1] 2021-08-11 18:34:49,180 INFO [info] (Camel (camel-1) thread #0 - timer://java) Exchange[ExchangePattern: InOnly, BodyType: String, Body: Hello Camel K from java]",
"integration \"hello-camel-k\" updated [2] 2021-08-11 18:40:54,173 INFO [org.apa.cam.k.Runtime] (main) Apache Camel K Runtime 1.7.1.fuse-800025-redhat-00001 [2] 2021-08-11 18:40:54,209 INFO [org.apa.cam.qua.cor.CamelBootstrapRecorder] (main) bootstrap runtime: org.apache.camel.quarkus.main.CamelMainRuntime [2] 2021-08-11 18:40:54,301 INFO [org.apa.cam.k.lis.SourcesConfigurer] (main) Loading routes from: SourceDefinition{name='HelloCamelK', language='java', location='file:/etc/camel/sources/HelloCamelK.java', } [2] 2021-08-11 18:40:55,796 INFO [org.apa.cam.imp.eng.AbstractCamelContext] (main) Routes startup summary (total:1 started:1) [2] 2021-08-11 18:40:55,796 INFO [org.apa.cam.imp.eng.AbstractCamelContext] (main) Started java (timer://java) [2] 2021-08-11 18:40:55,797 INFO [org.apa.cam.imp.eng.AbstractCamelContext] (main) Apache Camel 3.10.0.fuse-800010-redhat-00001 (camel-1) started in 174ms (build:0ms init:147ms start:27ms) [2] 2021-08-11 18:40:55,803 INFO [io.quarkus] (main) camel-k-integration 1.6.6 on JVM (powered by Quarkus 1.11.7.Final-redhat-00009) started in 3.025s. [2] 2021-08-11 18:40:55,808 INFO [io.quarkus] (main) Profile prod activated. [2] 2021-08-11 18:40:55,809 INFO [io.quarkus] (main) Installed features: [camel-bean, camel-core, camel-java-joor-dsl, camel-k-core, camel-k-runtime, camel-log, camel-support-common, camel-timer, cdi] [2] 2021-08-11 18:40:56,810 INFO [info] (Camel (camel-1) thread #0 - timer://java) Exchange[ExchangePattern: InOnly, BodyType: String, Body: Hello Camel K from java] [2] 2021-08-11 18:40:57,793 INFO [info] (Camel (camel-1) thread #0 - timer://java) Exchange[ExchangePattern: InOnly, BodyType: String, Body: Hello Camel K from java]",
"// camel-k: trait=3scale.enabled=true trait=container.limit-memory=256Mi 1 import org.apache.camel.builder.RouteBuilder; public class ThreeScaleRest extends RouteBuilder { @Override public void configure() throws Exception { rest().get(\"/\") .to(\"direct:x\"); from(\"direct:x\") .setBody().constant(\"Hello\"); } }",
"kamel run ThreeScaleRest.java",
"Modeline options have been loaded from source files Full command: kamel run ThreeScaleRest.java --trait=3scale.enabled=true --trait=container.limit-memory=256Mi",
"type Build struct { Spec BuildSpec 1 Status BuildStatus 2 } type BuildSpec struct { Tasks []Task 3 }",
"create configmap my-cm --from-literal=greeting=\"hello, I am development!\" -n development",
"import org.apache.camel.builder.RouteBuilder; public class PromoteServer extends RouteBuilder { @Override public void configure() throws Exception { from(\"platform-http:/hello?httpMethodRestrict=GET\").setBody(simple(\"resource:classpath:greeting\")); } }",
"kamel run --dev -n development PromoteServer.java --config configmap:my-cm [-t service.node-port=true]",
"curl http://192.168.49.2:32116/hello hello, I am development!",
"create configmap my-cm --from-literal=greeting=\"hello, I am production!\" -n production",
"kamel promote promote-server -n development --to production kamel logs promote-server -n production",
"curl http://192.168.49.2:30764/hello hello, I am production!"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.5/html/getting_started_with_camel_k/developing-and-running-camel-k-integrations |
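The build strategy and the maximum number of running builds described in Sections 3.7.1 and 3.7.2 are settings on the IntegrationPlatform resource. The following sketch shows one way to apply them with oc patch; the spec.build.buildStrategy and spec.build.maxRunningBuilds field names, the platform name camel-k, and the namespace are assumptions based on recent Camel K operator versions, so verify them against the CRD installed in your cluster before running the command.
# Assumed field names; confirm them first, for example with: oc explain integrationplatform.spec.build
oc patch integrationplatform camel-k -n openshift-operators --type merge \
  -p '{"spec":{"build":{"buildStrategy":"pod","maxRunningBuilds":5}}}'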
Chapter 6. Upgrading Fuse applications on JBoss EAP standalone | Chapter 6. Upgrading Fuse applications on JBoss EAP standalone To upgrade your Fuse applications on JBoss EAP: You should consider Apache Camel updates as described in Section 6.1, "Camel migration considerations" . You must update your Fuse project's Maven dependencies to ensure that you are using the correct version of Fuse. Typically, you use Maven to build Fuse applications. Maven is a free and open source build tool from Apache. Maven configuration is defined in a Fuse application project's pom.xml file. While building a Fuse project, the default behavior is that Maven searches external repositories and downloads the required artifacts. You add a dependency for the Fuse Bill of Materials (BOM) to the pom.xml file so that the Maven build process picks up the correct set of Fuse supported artifacts. The following sections provide information on Maven dependencies and how to update them in your Fuse projects. Section 6.2, "About Maven dependencies" Section 6.3, "Updating your Fuse project's Maven dependencies" You must update your Fuse project's Maven dependencies to ensure that you are using the upgraded versions of the Java EE dependencies as described in Section 6.4, "Upgrading your Java EE dependencies" . 6.1. Camel migration considerations Creating a connection to MongoDB using the MongoClients factory From Fuse 7.12, use com.mongodb.client.MongoClient instead of com.mongodb.MongoClient to create a connection to MongoDB (note the extra .client sub-package in the full path). If any of your existing Fuse applications use the camel-mongodb component, you must: Update your applications to create the connection bean as a com.mongodb.client.MongoClient instance. For example, create a connection to MongoDB as follows: You can then create the MongoClient bean as shown in following example: Evaluate and, if needed, refactor any code related to the methods exposed by the MongoClient class. Camel 2.23 Red Hat Fuse uses Apache Camel 2.23. You should consider the following updates to Camel 2.22 and 2.23 when you upgrade to Fuse 7.8. Camel 2.22 updates Camel has upgraded from Spring Boot v1 to v2 and therefore v1 is no longer supported. Upgraded to Spring Framework 5. Camel should work with Spring 4.3.x as well, but going forward Spring 5.x will be the minimum Spring version in future releases. Upgraded to Karaf 4.2. You may run Camel on Karaf 4.1 but we only officially support Karaf 4.2 in this release. Optimized using toD DSL to reuse endpoints and producers for components where it is possible. For example, HTTP based components will now reuse producer (HTTP clients) with dynamic URIs sending to the same host. The File2 consumer with read-lock idempotent/idempotent-changed can now be configured to delay the release tasks to expand the window when a file is regarded as in-process, which is usable in active/active cluster settings with a shared idempotent repository to ensure other nodes don't too quickly see a processed file as a file they can process (only needed if you have readLockRemoveOnCommit=true). Allow to plugin a custom request/reply correlation id manager implementation on Netty4 producer in request/reply mode. The Twitter component now uses extended mode by default to support tweets greater than 140 characters Rest DSL producer now supports being configured in REST configuration by using endpointProperties. 
The Kafka component now supports HeaderFilterStrategy to plug in custom implementations for controlling header mappings between Camel and Kafka messages. REST DSL now supports client request validation to validate that Content-Type/Accept headers are possible for the REST service. Camel now has a Service Registry SPI which allows you to register routes to a service registry (such as consul, etcd, or zookeeper) by using a Camel implementation or Spring Cloud. The SEDA component now has a default queue size of 1000 instead of unlimited. The following noteworthy issues have been fixed: Fixed a CXF continuation timeout issue with camel-cxf consumer that could cause the consumer to return a response with data instead of triggering a timeout to the calling SOAP client. Fixed camel-cxf consumer doesn't release UoW when using a robust one-way operation. Fixed using AdviceWith and using weave methods on onException etc. not working. Fixed Splitter in parallel processing and streaming mode may block, while iterating message body when the iterator throws an exception in the first invoked next() method call. Fixed Kafka consumer to not auto commit if autoCommitEnable=false. Fixed file consumer was using markerFile as read-lock by default, which should have been none. Fixed using manual commit with Kafka to provide the current record offset and not the previous (and -1 for first). Fixed Content Based Router in Java DSL may not resolve property placeholders in when predicates. Camel 2.23 updates Upgraded to Spring Boot 2.1. Additional component-level options can now be configured by using spring-boot auto-configuration. These options are included in the spring-boot component metadata JSON file descriptor for tooling assistance. Added a documentation section that includes all the Spring Boot auto configuration options for all the components, data-formats, and languages. All the Camel Spring Boot starter JARs now include META-INF/spring-autoconfigure-metadata.properties file in their JARs to optimize Spring Boot auto-configuration. The Throttler now supports correlation groups based on dynamic expression so that you can group messages into different throttled sets. The Hystrix EIP now allows inheritance for Camel's error handler so that you can retry the entire Hystrix EIP block again if you have enabled error handling with redeliveries. SQL and ElSql consumers now support dynamic query parameters in route form. Note that this feature is limited to calling beans by using simple expressions. The swagger-restdsl maven plugin now supports generating DTO model classes from the Swagger specification file. The following noteworthy issues have been fixed: The Aggregator2 has been fixed to not propagate control headers for forcing completion of all groups, so it will not happen again if another aggregator EIP is in use later during routing. Fixed Tracer not working if redelivery was activated in the error handler. The built-in type converter for XML Documents may output parsing errors to stdout, which has now been fixed to output by using the logging API. Fixed SFTP writing files by using the charset option would not work if the message body was streaming-based. Fixed Zipkin root id to not be reused when routing over multiple routes to group them together into a single parent span. Fixed optimized toD when using HTTP endpoints had a bug when the hostname contains an IP address with digits. Fixed issue with RabbitMQ with request/reply over temporary queues and using manual acknowledge mode.
It would not acknowledge the temporary queue (which is needed to make request/reply possible). Fixed various HTTP consumer components that may not return all allowed HTTP verbs in Allow header for OPTIONS requests (such as when using rest-dsl). Fixed the thread-safety issue with FluentProducerTemplate. 6.2. About Maven dependencies The purpose of a Maven Bill of Materials (BOM) file is to provide a curated set of Maven dependency versions that work well together, saving you from having to define versions individually for every Maven artifact. There is a dedicated BOM file for each container in which Fuse runs. Note You can find these BOM files here: https://github.com/jboss-fuse/redhat-fuse . Alternatively, go to the latest Release Notes for information on BOM file updates. The Fuse BOM offers the following advantages: Defines versions for Maven dependencies, so that you do not need to specify the version when you add a dependency to your pom.xml file. Defines a set of curated dependencies that are fully tested and supported for a specific version of Fuse. Simplifies upgrades of Fuse. Important Only the set of dependencies defined by a Fuse BOM are supported by Red Hat. 6.3. Updating your Fuse project's Maven dependencies To upgrade your Fuse application for JBoss EAP, update your project's Maven dependencies. Procedure Open your project's pom.xml file. Add a dependencyManagement element in your project's pom.xml file (or, possibly, in a parent pom.xml file), as shown in the following example: <?xml version="1.0" encoding="UTF-8" standalone="no"?> <project ...> ... <properties> <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding> <!-- configure the versions you want to use here --> <fuse.version>7.13.0.fuse-7_13_0-00012-redhat-00001</fuse.version> </properties> <dependencyManagement> <dependencies> <dependency> <groupId>org.jboss.redhat-fuse</groupId> <artifactId>fuse-eap-bom</artifactId> <version>USD{fuse.version}</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> ... </project> Save your pom.xml file. After you specify the BOM as a dependency in your pom.xml file, it becomes possible to add Maven dependencies to your pom.xml file without specifying the version of the artifact. For example, to add a dependency for the camel-velocity component, you would add the following XML fragment to the dependencies element in your pom.xml file: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-velocity</artifactId> <scope>provided</scope> </dependency> Note how the version element is omitted from this dependency definition. 6.4. Upgrading your Java EE dependencies In Fuse 7.8, some managed dependencies in the BOM file have updated groupId or artifactId properties, therefore you must update your project's pom.xml file accordingly. Procedure Open your project's pom.xml file. 
To change the org.jboss.spec.javax.transaction version from 1.2 to 1.3 and the org.jboss.spec.javax.servlet version from 3.1 to 4.0, update the dependencies as shown in the following example: <dependency> <groupId>org.jboss.spec.javax.transaction</groupId> <artifactId>jboss-transaction-api_1.3_spec</artifactId> </dependency> <dependency> <groupId>org.jboss.spec.javax.servlet</groupId> <artifactId>jboss-servlet-api_4.0_spec</artifactId> </dependency> To migrate from the Java EE API to Jakarta EE, replace javax.* with jakarta.* for each groupId and modify the artifactId for individual dependencies as shown in the following example: <dependency> <groupId>jakarta.validation</groupId> <artifactId>jakarta.validation-api</artifactId> </dependency> <dependency> <groupId>jakarta.enterprise</groupId> <artifactId>jakarta.enterprise.cdi-api</artifactId> </dependency> <dependency> <groupId>jakarta.inject</groupId> <artifactId>jakarta.inject-api</artifactId> </dependency> 6.5. Upgrading an existing Fuse on JBoss EAP installation The following procedure describes how to upgrade an existing Fuse on JBoss EAP installation. Procedure To upgrade from one JBoss EAP minor release to another, you should follow the instructions in the JBoss EAP Patching and Upgrading Guide . To update Fuse, you must run the Fuse on JBoss EAP installer as described in the Installing on JBoss EAP guide. Note You should not need to recompile or redeploy your Fuse application. 6.6. Upgrading Fuse and JBoss EAP simultaneously The following procedure describes how to upgrade a Fuse installation and the JBoss EAP runtime simultaneously, for example, if you are migrating from Fuse 7.7 on JBoss EAP 7.2 to Fuse 7.8 on JBoss EAP 7.3. Warning When upgrading both Fuse and the JBoss EAP runtime, Red Hat recommends that you perform a fresh installation of both Fuse and the JBoss EAP runtime. Procedure To perform a new installation of the JBoss EAP runtime, follow the instructions in the Installing on JBoss EAP guide. To perform a new installation of Fuse, run the Fuse on JBoss EAP installer as described in the Installing on JBoss EAP guide. | [
"import com.mongodb.client.MongoClient;",
"return MongoClients.create(\"mongodb://admin:[email protected]:32553\");",
"<?xml version=\"1.0\" encoding=\"UTF-8\" standalone=\"no\"?> <project ...> <properties> <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding> <!-- configure the versions you want to use here --> <fuse.version>7.13.0.fuse-7_13_0-00012-redhat-00001</fuse.version> </properties> <dependencyManagement> <dependencies> <dependency> <groupId>org.jboss.redhat-fuse</groupId> <artifactId>fuse-eap-bom</artifactId> <version>USD{fuse.version}</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> </project>",
"<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-velocity</artifactId> <scope>provided</scope> </dependency>",
"<dependency> <groupId>org.jboss.spec.javax.transaction</groupId> <artifactId>jboss-transaction-api_1.3_spec</artifactId> </dependency> <dependency> <groupId>org.jboss.spec.javax.servlet</groupId> <artifactId>jboss-servlet-api_4.0_spec</artifactId> </dependency>",
"<dependency> <groupId>jakarta.validation</groupId> <artifactId>jakarta.validation-api</artifactId> </dependency> <dependency> <groupId>jakarta.enterprise</groupId> <artifactId>jakarta.enterprise.cdi-api</artifactId> </dependency> <dependency> <groupId>jakarta.inject</groupId> <artifactId>jakarta.inject-api</artifactId> </dependency>"
] | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/migration_guide/upgrading-fuse-applications-on-jboss-eap-standalone |
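After updating the BOM and the Java EE or Jakarta EE dependencies in your pom.xml file, you can confirm which versions Maven actually resolves. This is a generic Maven check rather than a Fuse-specific tool, and the artifacts used in the filters below are only illustrations.
# Show the resolved versions of the Camel artifacts managed by the BOM
mvn dependency:tree -Dincludes=org.apache.camel
# Or inspect the effective POM to confirm a BOM-managed version, for example camel-velocity
mvn help:effective-pom | grep -A1 camel-velocity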
Chapter 12. Building container images | Chapter 12. Building container images Building container images involves creating a blueprint for a containerized application. Blueprints rely on base images from other public repositories that define how the application should be installed and configured. Note Because blueprints rely on images from other public repositories, they might be subject to rate limiting. Consequently, your build could fail. Quay.io supports the ability to build Docker and Podman container images. This functionality is valuable for developers and organizations who rely on containers and container orchestration. On Quay.io, this feature works the same across both free and paid tier plans. Note Quay.io limits the number of simultaneous builds that a single user can submit at one time. 12.1. Build contexts When building an image with Docker or Podman, a directory is specified to become the build context . This is true for both manual Builds and Build triggers, because the Build that is created by Quay.io is no different from running docker build or podman build on your local machine. Quay.io Build contexts are always specified in the subdirectory from the Build setup, and fall back to the root of the Build source if a directory is not specified. When a build is triggered, Quay.io Build workers clone the Git repository to the worker machine, and then enter the Build context before conducting a Build. For Builds based on .tar archives, Build workers extract the archive and enter the Build context. For example: Extracted Build archive example ├── .git ├── Dockerfile ├── file └── subdir └── Dockerfile Imagine that the Extracted Build archive is the directory structure for a Github repository called example. If no subdirectory is specified in the Build trigger setup, or when manually starting the Build, the Build operates in the example directory. If a subdirectory is specified in the Build trigger setup, for example, subdir , only the Dockerfile within it is visible to the Build. This means that you cannot use the ADD command in the Dockerfile to add file , because it is outside of the Build context. Unlike Docker Hub, the Dockerfile is part of the Build context on Quay.io. As a result, it must not appear in the .dockerignore file. 12.2. Tag naming for build triggers Custom tags are available for use in Quay.io. One option is to include any string of characters assigned as a tag for each built image. Alternatively, you can use the following tag templates on the Configure Tagging section of the build trigger to tag images with information from each commit: ${commit} : Full SHA of the issued commit ${parsed_ref.branch} : Branch information (if available) ${parsed_ref.tag} : Tag information (if available) ${parsed_ref.remote} : The remote name ${commit_info.date} : Date when the commit was issued ${commit_info.author.username} : Username of the author of the commit ${commit_info.short_sha} : First 7 characters of the commit SHA ${committer.properties.username} : Username of the committer This list is not complete, but does contain the most useful options for tagging purposes. You can find the complete tag template schema on this page . For more information, see Set up custom tag templates in build triggers for Red Hat Quay and Quay.io . 12.3. Skipping a source control-triggered build To specify that a commit should be ignored by the Quay.io build system, add the text [skip build] or [build skip] anywhere in your commit message. 12.4.
Starting a new build By default, Quay.io users can start new builds out-of-the-box. Use the following procedure to start a new build by uploading a Dockerfile. For information about creating a build trigger , see "Build triggers". Prerequisites You have navigated to the Builds page of your repository. Procedure On the Builds page, click Start New Build . When prompted, click Upload Dockerfile to upload a Dockerfile or an archive that contains a Dockerfile at the root directory. Click Start Build . Note Currently, users cannot specify the Docker build context when manually starting a build. Currently, BitBucket is unsupported on the Red Hat Quay v2 UI. You are redirected to the build , which can be viewed in real time. Wait for the Dockerfile build to be completed and pushed. Optional: you can click Download Logs to download the logs, or Copy Logs to copy the logs. Click the back button to return to the Repository Builds page, where you can view the build history . 12.5. Build triggers Build triggers are automated mechanisms that start a container image build when specific conditions are met, such as changes to source code, updates to dependencies, or a webhook call . These triggers help automate the image-building process and ensure that the container images are always up-to-date without manual intervention. The following sections cover creating a build trigger, tag naming conventions, how to skip a source control-triggered build, starting a build , and manually triggering a build . 12.5.1. Creating a build trigger The following procedure sets up a custom Git trigger . A custom Git trigger is a generic way for any Git server to act as a build trigger . It relies solely on SSH keys and webhook endpoints. Creating a custom Git trigger is similar to the creation of any other trigger, with the exception of the following: Quay.io cannot automatically detect the proper Robot Account to use with the trigger. This must be done manually during the creation process. These steps can be replicated to create a build trigger using Github, Gitlab, or Bitbucket; however, you must configure the credentials for these services in your config.yaml file. Note If you want to use Github to create a build trigger , you must configure Github to be used with Red Hat Quay by creating an OAuth application. For more information, see "Creating an OAuth application Github". Procedure Log in to your Red Hat Quay registry. In the navigation pane, click Repositories . Click Create Repository . Click the Builds tab. On the Builds page, click Create Build Trigger . Select the desired platform, for example, Github , Bitbucket , Gitlab , or use a custom Git repository. For this example, click Custom Git Repository Push . Enter a custom Git repository name, for example, git@<git_server>:<username>/<repo>.git . Then, click Next . When prompted, configure the tagging options by selecting one of, or both of, the following options: Tag manifest with the branch or tag name . When selecting this option, the built manifest is tagged with the name of the branch or tag for the git commit. Add latest tag if on default branch . When selecting this option, the built manifest is tagged with latest if the build occurred on the default branch for the repository. Optionally, you can add a custom tagging template. There are multiple tag templates that you can enter here, including using short SHA IDs, timestamps, author names, committer, and branch names from the commit as tags.
For more information, see "Tag naming for build triggers". After you have configured tagging, click . When prompted, select the location of the Dockerfile to be built when the trigger is invoked. If the Dockerfile is located at the root of the git repository and named Dockerfile, enter /Dockerfile as the Dockerfile path. Then, click . When prompted, select the context for the Docker build. If the Dockerfile is located at the root of the Git repository, enter / as the build context directory. Then, click . Optional. Choose an optional robot account. This allows you to pull a private base image during the build process. If you know that a private base image is not used, you can skip this step. Click . Check for any verification warnings. If necessary, fix the issues before clicking Finish . You are alerted that the trigger has been successfully activated. Note that using this trigger requires the following actions: You must give the following public key read access to the git repository. You must set your repository to POST to the following URL to trigger a build. Save the SSH Public Key, then click Return to <organization_name>/<repository_name> . You are redirected to the Builds page of your repository. On the Builds page, you now have a build trigger . For example: After you have created a custom Git trigger, additional steps are required. Continue on to "Setting up a custom Git trigger". If you are setting up a build trigger for Github, Gitlab, or Bitbucket, continue on to "Manually triggering a build". 12.5.2. Manually triggering a build Builds can be triggered manually by using the following procedure. Procedure On the Builds page, Start new build . When prompted, select Invoke Build Trigger . Click Run Trigger Now to manually start the process. Enter a commit ID from which to initiate the build, for example, 1c002dd . After the build starts, you can see the build ID on the Repository Builds page. 12.6. Setting up a custom Git trigger After you have created a custom Git trigger , two additional steps are required: You must provide read access to the SSH public key that is generated when creating the trigger. You must setup a webhook that POSTs to the Quay.io endpoint to trigger the build. These steps are only required if you are using a custom Git trigger . 12.6.1. Obtaining build trigger credentials The SSH public key and Webhook Endpoint URL are available on the Red Hat Quay UI. Prerequisites You have created a custom Git trigger . Procedure On the Builds page of your repository, click the menu kebab for your custom Git trigger . Click View Credentials . Save the SSH Public Key and Webhook Endpoint URL. The key and the URL are available by selecting View Credentials from the Settings , or gear icon. View and modify tags from your repository 12.6.1.1. SSH public key access Depending on the Git server configuration, there are multiple ways to install the SSH public key that Quay.io generates for a custom Git trigger. For example, documentation for Getting Git on a Server describes a describes how to set up a Git server on a Linux-based machine with a focus on managing repositories and access control through SSH. In this procedure, a small server is set up to add the keys to the USDHOME/.ssh/authorize_keys folder, which provides access for builders to clone the repository. For any Git repository management software that is not officially supported, there is usually a location to input the key that is often labeled as Deploy Keys . 12.6.1.2. 
Webhook To automatically trigger a build, you must POST a .json payload to the webhook URL using the following format: Note This request requires a Content-Type header containing application/json in order to be valid. Example webhook { "commit": "1c002dd", // required "ref": "refs/heads/master", // required "default_branch": "master", // required "commit_info": { // optional "url": "gitsoftware.com/repository/commits/1234567", // required "message": "initial commit", // required "date": "timestamp", // required "author": { // optional "username": "user", // required "avatar_url": "gravatar.com/user.png", // required "url": "gitsoftware.com/users/user" // required }, "committer": { // optional "username": "user", // required "avatar_url": "gravatar.com/user.png", // required "url": "gitsoftware.com/users/user" // required } } } This can typically be accomplished with a post-receive Git hook , however it does depend on your server setup. | [
"example ├── .git ├── Dockerfile ├── file └── subdir └── Dockerfile",
"{ \"commit\": \"1c002dd\", // required \"ref\": \"refs/heads/master\", // required \"default_branch\": \"master\", // required \"commit_info\": { // optional \"url\": \"gitsoftware.com/repository/commits/1234567\", // required \"message\": \"initial commit\", // required \"date\": \"timestamp\", // required \"author\": { // optional \"username\": \"user\", // required \"avatar_url\": \"gravatar.com/user.png\", // required \"url\": \"gitsoftware.com/users/user\" // required }, \"committer\": { // optional \"username\": \"user\", // required \"avatar_url\": \"gravatar.com/user.png\", // required \"url\": \"gitsoftware.com/users/user\" // required } } }"
] | https://docs.redhat.com/en/documentation/red_hat_quay/3/html/about_quay_io/building-dockerfiles |
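To test a custom Git trigger without pushing a commit, you can POST the webhook payload shown in the Webhook section manually. The following curl sketch uses placeholder values: substitute the Webhook Endpoint URL saved from View Credentials and a real commit SHA from your repository.
curl -X POST \
  -H 'Content-Type: application/json' \
  -d '{"commit": "1c002dd", "ref": "refs/heads/master", "default_branch": "master"}' \
  '<webhook_endpoint_url>'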
A.7. Troubleshooting Replication | A.7. Troubleshooting Replication Test replication on at least two servers (see Section 4.6, "Testing the New Replica" ). If changes made on one IdM server are not replicated to the other server: Make sure you meet the conditions in Section 2.1.5, "Host Name and DNS Configuration" . Make sure that both servers can resolve each other's forward and reverse DNS records: Make sure that the time difference on both servers is 5 minutes at the most. Review the Directory Server error log on both servers: /var/log/dirsrv/slapd- SERVER-EXAMPLE-COM /errors . If you see errors related to Kerberos, make sure that the Directory Server keytab is correct and that you can use it to query the other server ( server2 in this example): Related Information See Section C.2, "Identity Management Log Files and Directories" for descriptions of various Identity Management log files. | [
"dig +short server2.example.com A dig +short server2.example.com AAAA dig +short -x server2_IPv4_or_IPv6_address",
"dig +short server1.example.com A dig +short server1.example.com AAAA dig +short -x server1_IPv4_or_IPv6_address",
"kinit -kt /etc/dirsrv/ds.keytab ldap/ server1.example.com klist ldapsearch -Y GSSAPI -h server1.example.com -b \"\" -s base ldapsearch -Y GSSAPI -h server2_FQDN . -b \"\" -s base"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/trouble-gen-replication |
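A quick way to check that the time difference between the two servers stays within the five-minute limit is to compare their clocks directly. The ssh invocations below are just one convenient way to run the command on both hosts, and chronyc tracking applies only if the servers use chronyd for time synchronization.
ssh server1.example.com date -u
ssh server2.example.com date -u
chronyc tracking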
6.2. Creating and managing nftables tables, chains, and rules | 6.2. Creating and managing nftables tables, chains, and rules This section explains how to display the nftables rule set, and how to manage it. 6.2.1. Displaying the nftables rule set The rule set of nftables contains tables, chains, and rules. This section explains how to display this rule set. To display all the rule set, enter: Note By default, nftables does not pre-create tables. As a consequence, displaying the rule set on a host without any tables, the nft list ruleset command shows no output. 6.2.2. Creating an nftables table A table in nftables is a name space that contains a collection of chains, rules, sets, and other objects. This section explains how to create a table. Each table must have an address family defined. The address family of a table defines what address types the table processes. You can set one of the following address families when you create a table: ip : Matches only IPv4 packets. This is the default if you do not specify an address family. ip6 : Matches only IPv6 packets. inet : Matches both IPv4 and IPv6 packets. arp : Matches IPv4 address resolution protocol (ARP) packets. bridge : Matches packets that traverse a bridge device. netdev : Matches packets from ingress. Procedure 6.4. Creating an nftables table Use the nft add table command to create a new table. For example, to create a table named example_table that processes IPv4 and IPv6 packets: Optionally, list all tables in the rule set: Additional resources For further details about address families, see the Address families section in the nft(8) man page. For details on other actions you can run on tables, see the Tables section in the nft(8) man page. 6.2.3. Creating an nftables chain Chains are containers for rules. The following two rule types exists: Base chain: You can use base chains as an entry point for packets from the networking stack. Regular chain: You can use regular chains as a jump target and to better organize rules. The procedure describes how to add a base chain to an existing table. Prerequisites The table to which you want to add the new chain exists. Procedure 6.5. Creating an nftables chain Use the nft add chain command to create a new chain. For example, to create a chain named example_chain in example_table : Important To avoid that the shell interprets the semicolons as the end of the command, you must escape the semicolons with a backslash. Moreover, some shells interpret the curly braces as well, so quote the curly braces and anything inside them with ticks ( ' ). This chain filters incoming packets. The priority parameter specifies the order in which nftables processes chains with the same hook value. A lower priority value has precedence over higher ones. The policy parameter sets the default action for rules in this chain. Note that if you are logged in to the server remotely and you set the default policy to drop , you are disconnected immediately if no other rule allows the remote access. Optionally, display all chains: Additional resources For further details about address families, see the Address families section in the nft(8) man page. For details on other actions you can run on chains, see the Chains section in the nft(8) man page. 6.2.4. Appending a rule to the end of an nftables chain This section explains how to append a rule to the end of an existing nftables chain. Prerequisites The chain to which you want to add the rule exists. Procedure 6.6. 
Appending a rule to the end of an nftables chain To add a new rule, use the nft add rule command. For example, to add a rule to the example_chain in the example_table that allows TCP traffic on port 22: You can alternatively specify the name of the service instead of the port number. In the example, you could use ssh instead of the port number 22 . Note that a service name is resolved to a port number based on its entry in the /etc/services file. Optionally, display all chains and their rules in example_table : Additional resources For further details about address families, see the Address families section in the nft(8) man page. For details on other actions you can run on chains, see the Rules section in the nft(8) man page. 6.2.5. Inserting a rule at the beginning of an nftables chain This section explains how to insert a rule at the beginning of an existing nftables chain. Prerequisites The chain to which you want to add the rule exists. Procedure 6.7. Inserting a rule at the beginning of an nftables chain To insert a new rule, use the nft insert rule command. For example, to insert a rule to the example_chain in the example_table that allows TCP traffic on port 22 : You can alternatively specify the name of the service instead of the port number. In the example, you could use ssh instead of the port number 22 . Note that a service name is resolved to a port number based on its entry in the /etc/services file. Optionally, display all chains and their rules in example_table : Additional resources For further details about address families, see the Address families section in the nft(8) man page. For details on other actions you can run on chains, see the Rules section in the nft(8) man page. 6.2.6. Inserting a rule at a specific position of an nftables chain This section explains how to insert rules before and after an existing rule in an nftables chain. This way you can place new rules at the right position. Prerequisites The chain to which you want to add the rule exists. Procedure 6.8. Inserting a rule at a specific position of an nftables chain Use the nft -a list ruleset command to display all chains and their rules in the example_table including their handle: Using the -a displays the handles. You require this information to position the new rules in the steps. Insert the new rules to the example_chain chain in the example_table : To insert a rule that allows TCP traffic on port 636 before handle 3 , enter: To add a rule that allows TCP traffic on port 80 after handle 3 , enter: Optionally, display all chains and their rules in example_table : Additional resources For further details about address families, see the Address families section in the nft(8) man page. For details on other actions you can run on chains, see the Rules section in the nft(8) man page. | [
"nft list ruleset table inet example_table { chain example_chain { type filter hook input priority filter; policy accept; tcp dport http accept tcp dport ssh accept } }",
"nft add table inet example_table",
"nft list tables table inet example_table",
"nft add chain inet example_table example_chain '{ type filter hook input priority 0 ; policy accept ; }'",
"nft list chains table inet example_table { chain example_chain { type filter hook input priority filter; policy accept; } }",
"nft add rule inet example_table example_chain tcp dport 22 accept",
"nft list table inet example_table table inet example_table { chain example_chain { type filter hook input priority filter; policy accept; tcp dport ssh accept } }",
"nft insert rule inet example_table example_chain tcp dport 22 accept",
"nft list table inet example_table table inet example_table { chain example_chain { type filter hook input priority filter; policy accept; tcp dport ssh accept } }",
"nft -a list table inet example_table table inet example_table { # handle 1 chain example_chain { # handle 1 type filter hook input priority filter; policy accept; tcp dport 22 accept # handle 2 tcp dport 443 accept # handle 3 tcp dport 389 accept # handle 4 } }",
"nft insert rule inet example_table example_chain position 3 tcp dport 636 accept",
"nft add rule inet example_table example_chain position 3 tcp dport 80 accept",
"nft -a list table inet example_table table inet example_table { # handle 1 chain example_chain { # handle 1 type filter hook input priority filter; policy accept; tcp dport 22 accept # handle 2 tcp dport 636 accept # handle 5 tcp dport 443 accept # handle 3 tcp dport 80 accept # handle 6 tcp dport 389 accept # handle 4 } }"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/security_guide/sec-creating_and_managing_nftables_tables_chains_and_rules |
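The rule handles displayed by the nft -a list command can also be used to delete individual rules. For example, to remove the rule for port 80 that was added at handle 6 in the previous example, and then verify the change:
nft delete rule inet example_table example_chain handle 6
nft -a list table inet example_table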
Chapter 18. Topology Aware Lifecycle Manager for cluster updates | Chapter 18. Topology Aware Lifecycle Manager for cluster updates You can use the Topology Aware Lifecycle Manager (TALM) to manage the software lifecycle of multiple single-node OpenShift clusters. TALM uses Red Hat Advanced Cluster Management (RHACM) policies to perform changes on the target clusters. Important Topology Aware Lifecycle Manager is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 18.1. About the Topology Aware Lifecycle Manager configuration The Topology Aware Lifecycle Manager (TALM) manages the deployment of Red Hat Advanced Cluster Management (RHACM) policies for one or more OpenShift Container Platform clusters. Using TALM in a large network of clusters allows the phased rollout of policies to the clusters in limited batches. This helps to minimize possible service disruptions when updating. With TALM, you can control the following actions: The timing of the update The number of RHACM-managed clusters The subset of managed clusters to apply the policies to The update order of the clusters The set of policies remediated to the cluster The order of policies remediated to the cluster TALM supports the orchestration of the OpenShift Container Platform y-stream and z-stream updates, and day-two operations on y-streams and z-streams. 18.2. About managed policies used with Topology Aware Lifecycle Manager The Topology Aware Lifecycle Manager (TALM) uses RHACM policies for cluster updates. TALM can be used to manage the rollout of any policy CR where the remediationAction field is set to inform . Supported use cases include the following: Manual user creation of policy CRs Automatically generated policies from the PolicyGenTemplate custom resource definition (CRD) For policies that update an Operator subscription with manual approval, TALM provides additional functionality that approves the installation of the updated Operator. For more information about managed policies, see Policy Overview in the RHACM documentation. For more information about the PolicyGenTemplate CRD, see the "About the PolicyGenTemplate CRD" section in "Configuring managed clusters with policies and PolicyGenTemplate resources". 18.3. Installing the Topology Aware Lifecycle Manager by using the web console You can use the OpenShift Container Platform web console to install the Topology Aware Lifecycle Manager. Prerequisites Install the latest version of the RHACM Operator. Set up a hub cluster with disconnected regitry. Log in as a user with cluster-admin privileges. Procedure In the OpenShift Container Platform web console, navigate to Operators OperatorHub . Search for the Topology Aware Lifecycle Manager from the list of available Operators, and then click Install . Keep the default selection of Installation mode ["All namespaces on the cluster (default)"] and Installed Namespace ("openshift-operators") to ensure that the Operator is installed properly. Click Install . 
Verification To confirm that the installation is successful: Navigate to the Operators Installed Operators page. Check that the Operator is installed in the All Namespaces namespace and its status is Succeeded . If the Operator is not installed successfully: Navigate to the Operators Installed Operators page and inspect the Status column for any errors or failures. Navigate to the Workloads Pods page and check the logs in any containers in the cluster-group-upgrades-controller-manager pod that are reporting issues. 18.4. Installing the Topology Aware Lifecycle Manager by using the CLI You can use the OpenShift CLI ( oc ) to install the Topology Aware Lifecycle Manager (TALM). Prerequisites Install the OpenShift CLI ( oc ). Install the latest version of the RHACM Operator. Set up a hub cluster with disconnected registry. Log in as a user with cluster-admin privileges. Procedure Create a Subscription CR: Define the Subscription CR and save the YAML file, for example, talm-subscription.yaml : apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-topology-aware-lifecycle-manager-subscription namespace: openshift-operators spec: channel: "stable" name: topology-aware-lifecycle-manager source: redhat-operators sourceNamespace: openshift-marketplace Create the Subscription CR by running the following command: USD oc create -f talm-subscription.yaml Verification Verify that the installation succeeded by inspecting the CSV resource: USD oc get csv -n openshift-operators Example output NAME DISPLAY VERSION REPLACES PHASE topology-aware-lifecycle-manager.4.11.x Topology Aware Lifecycle Manager 4.11.x Succeeded Verify that the TALM is up and running: USD oc get deploy -n openshift-operators Example output NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE openshift-operators cluster-group-upgrades-controller-manager 1/1 1 1 14s 18.5. About the ClusterGroupUpgrade CR The Topology Aware Lifecycle Manager (TALM) builds the remediation plan from the ClusterGroupUpgrade CR for a group of clusters. You can define the following specifications in a ClusterGroupUpgrade CR: Clusters in the group Blocking ClusterGroupUpgrade CRs Applicable list of managed policies Number of concurrent updates Applicable canary updates Actions to perform before and after the update Update timing As TALM works through remediation of the policies to the specified clusters, the ClusterGroupUpgrade CR can have the following states: UpgradeNotStarted UpgradeCannotStart UpgradeNotComplete UpgradeTimedOut UpgradeCompleted PrecachingRequired Note After TALM completes a cluster update, the cluster does not update again under the control of the same ClusterGroupUpgrade CR. You must create a new ClusterGroupUpgrade CR in the following cases: When you need to update the cluster again When the cluster changes to non-compliant with the inform policy after being updated 18.5.1. The UpgradeNotStarted state The initial state of the ClusterGroupUpgrade CR is UpgradeNotStarted . TALM builds a remediation plan based on the following fields: The clusterSelector field specifies the labels of the clusters that you want to update. The clusters field specifies a list of clusters to update. The canaries field specifies the clusters for canary updates. The maxConcurrency field specifies the number of clusters to update in a batch. You can use the clusters and the clusterSelector fields together to create a combined list of clusters. The remediation plan starts with the clusters listed in the canaries field. 
Each canary cluster forms a single-cluster batch. Note Any failures during the update of a canary cluster stop the update process. The ClusterGroupUpgrade CR transitions to the UpgradeNotCompleted state after the remediation plan is successfully created and after the enable field is set to true . At this point, TALM starts to update the non-compliant clusters with the specified managed policies. Note You can only make changes to the spec fields if the ClusterGroupUpgrade CR is either in the UpgradeNotStarted or the UpgradeCannotStart state. Sample ClusterGroupUpgrade CR in the UpgradeNotStarted state apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-upgrade-complete namespace: default spec: clusters: 1 - spoke1 enable: false managedPolicies: 2 - policy1-common-cluster-version-policy - policy2-common-nto-sub-policy remediationStrategy: 3 canaries: 4 - spoke1 maxConcurrency: 1 5 timeout: 240 status: 6 conditions: - message: The ClusterGroupUpgrade CR is not enabled reason: UpgradeNotStarted status: "False" type: Ready copiedPolicies: - cgu-upgrade-complete-policy1-common-cluster-version-policy - cgu-upgrade-complete-policy2-common-nto-sub-policy managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy2-common-nto-sub-policy namespace: default placementBindings: - cgu-upgrade-complete-policy1-common-cluster-version-policy - cgu-upgrade-complete-policy2-common-nto-sub-policy placementRules: - cgu-upgrade-complete-policy1-common-cluster-version-policy - cgu-upgrade-complete-policy2-common-nto-sub-policy remediationPlan: - - spoke1 1 Defines the list of clusters to update. 2 Lists the user-defined set of policies to remediate. 3 Defines the specifics of the cluster updates. 4 Defines the clusters for canary updates. 5 Defines the maximum number of concurrent updates in a batch. The number of remediation batches is the number of canary clusters plus the number of non-canary clusters divided by the maxConcurrency value. The clusters that are already compliant with all the managed policies are excluded from the remediation plan. 6 Displays information about the status of the updates. 18.5.2. The UpgradeCannotStart state In the UpgradeCannotStart state, the update cannot start because of the following reasons: Blocking CRs are missing from the system Blocking CRs have not yet finished 18.5.3. The UpgradeNotCompleted state In the UpgradeNotCompleted state, TALM enforces the policies following the remediation plan defined in the UpgradeNotStarted state. Enforcing the policies for subsequent batches starts immediately after all the clusters of the current batch are compliant with all the managed policies. If the batch times out, TALM moves on to the next batch. The timeout value of a batch is the spec.timeout field divided by the number of batches in the remediation plan. Note The managed policies apply in the order that they are listed in the managedPolicies field in the ClusterGroupUpgrade CR. One managed policy is applied to the specified clusters at a time. After the specified clusters comply with the current policy, the next managed policy is applied to the clusters that are still non-compliant.
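As a quick cross-check of this division, you can read both the number of batches and the resulting per-batch timeout from an existing CR. The following example is a convenience only and assumes that the jq tool is installed on the workstation; the field paths match the samples shown in this section:
$ oc get cgu cgu-upgrade-complete -n default -ojson | jq '{batches: (.status.remediationPlan | length), batchTimeoutMinutes: (.spec.timeout / (.status.remediationPlan | length))}'
With the sample values in this section ( timeout: 240 and a one-batch remediation plan), this reports a per-batch timeout of 240 minutes.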
Sample ClusterGroupUpgrade CR in the UpgradeNotCompleted state apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-upgrade-complete namespace: default spec: clusters: - spoke1 enable: true 1 managedPolicies: - policy1-common-cluster-version-policy - policy2-common-nto-sub-policy remediationStrategy: maxConcurrency: 1 timeout: 240 status: 2 conditions: - message: The ClusterGroupUpgrade CR has upgrade policies that are still non compliant reason: UpgradeNotCompleted status: "False" type: Ready copiedPolicies: - cgu-upgrade-complete-policy1-common-cluster-version-policy - cgu-upgrade-complete-policy2-common-nto-sub-policy managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy2-common-nto-sub-policy namespace: default placementBindings: - cgu-upgrade-complete-policy1-common-cluster-version-policy - cgu-upgrade-complete-policy2-common-nto-sub-policy placementRules: - cgu-upgrade-complete-policy1-common-cluster-version-policy - cgu-upgrade-complete-policy2-common-nto-sub-policy remediationPlan: - - spoke1 status: currentBatch: 1 remediationPlanForBatch: 3 spoke1: 0 1 The update starts when the value of the spec.enable field is true . 2 The status fields change accordingly when the update begins. 3 Lists the clusters in the batch and the index of the policy that is being currently applied to each cluster. The index of the policies starts with 0 and the index follows the order of the status.managedPoliciesForUpgrade list. 18.5.4. The UpgradeTimedOut state In the UpgradeTimedOut state, TALM checks every hour if all the policies for the ClusterGroupUpgrade CR are compliant. The checks continue until the ClusterGroupUpgrade CR is deleted or the updates are completed. The periodic checks allow the updates to complete if they get prolonged due to network, CPU, or other issues. TALM transitions to the UpgradeTimedOut state in two cases: When the current batch contains canary updates and the cluster in the batch does not comply with all the managed policies within the batch timeout. When the clusters do not comply with the managed policies within the timeout value specified in the remediationStrategy field. If the policies are compliant, TALM transitions to the UpgradeCompleted state. 18.5.5. The UpgradeCompleted state In the UpgradeCompleted state, the cluster updates are complete. Sample ClusterGroupUpgrade CR in the UpgradeCompleted state apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-upgrade-complete namespace: default spec: actions: afterCompletion: deleteObjects: true 1 clusters: - spoke1 enable: true managedPolicies: - policy1-common-cluster-version-policy - policy2-common-nto-sub-policy remediationStrategy: maxConcurrency: 1 timeout: 240 status: 2 conditions: - message: The ClusterGroupUpgrade CR has all clusters compliant with all the managed policies reason: UpgradeCompleted status: "True" type: Ready managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy2-common-nto-sub-policy namespace: default remediationPlan: - - spoke1 status: remediationPlanForBatch: spoke1: -2 3 1 The value of spec.action.afterCompletion.deleteObjects field is true by default. After the update is completed, TALM deletes the underlying RHACM objects that were created during the update. This option is to prevent the RHACM hub from continuously checking for compliance after a successful update. 
2 The status fields show that the updates completed successfully. 3 Displays that all the policies are applied to the cluster. The PrecachingRequired state In the PrecachingRequired state, the clusters need to have images pre-cached before the update can start. For more information about pre-caching, see the "Using the container image pre-cache feature" section. 18.5.6. Blocking ClusterGroupUpgrade CRs You can create multiple ClusterGroupUpgrade CRs and control their order of application. For example, if you create ClusterGroupUpgrade CR C that blocks the start of ClusterGroupUpgrade CR A, then ClusterGroupUpgrade CR A cannot start until the status of ClusterGroupUpgrade CR C becomes UpgradeComplete . One ClusterGroupUpgrade CR can have multiple blocking CRs. In this case, all the blocking CRs must complete before the upgrade for the current CR can start. Prerequisites Install the Topology Aware Lifecycle Manager (TALM). Provision one or more managed clusters. Log in as a user with cluster-admin privileges. Create RHACM policies in the hub cluster. Procedure Save the content of the ClusterGroupUpgrade CRs in the cgu-a.yaml , cgu-b.yaml , and cgu-c.yaml files. apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-a namespace: default spec: blockingCRs: 1 - name: cgu-c namespace: default clusters: - spoke1 - spoke2 - spoke3 enable: false managedPolicies: - policy1-common-cluster-version-policy - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy remediationStrategy: canaries: - spoke1 maxConcurrency: 2 timeout: 240 status: conditions: - message: The ClusterGroupUpgrade CR is not enabled reason: UpgradeNotStarted status: "False" type: Ready copiedPolicies: - cgu-a-policy1-common-cluster-version-policy - cgu-a-policy2-common-pao-sub-policy - cgu-a-policy3-common-ptp-sub-policy managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy2-common-pao-sub-policy namespace: default - name: policy3-common-ptp-sub-policy namespace: default placementBindings: - cgu-a-policy1-common-cluster-version-policy - cgu-a-policy2-common-pao-sub-policy - cgu-a-policy3-common-ptp-sub-policy placementRules: - cgu-a-policy1-common-cluster-version-policy - cgu-a-policy2-common-pao-sub-policy - cgu-a-policy3-common-ptp-sub-policy remediationPlan: - - spoke1 - - spoke2 1 Defines the blocking CRs. The cgu-a update cannot start until cgu-c is complete.
apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-b namespace: default spec: blockingCRs: 1 - name: cgu-a namespace: default clusters: - spoke4 - spoke5 enable: false managedPolicies: - policy1-common-cluster-version-policy - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy - policy4-common-sriov-sub-policy remediationStrategy: maxConcurrency: 1 timeout: 240 status: conditions: - message: The ClusterGroupUpgrade CR is not enabled reason: UpgradeNotStarted status: "False" type: Ready copiedPolicies: - cgu-b-policy1-common-cluster-version-policy - cgu-b-policy2-common-pao-sub-policy - cgu-b-policy3-common-ptp-sub-policy - cgu-b-policy4-common-sriov-sub-policy managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy2-common-pao-sub-policy namespace: default - name: policy3-common-ptp-sub-policy namespace: default - name: policy4-common-sriov-sub-policy namespace: default placementBindings: - cgu-b-policy1-common-cluster-version-policy - cgu-b-policy2-common-pao-sub-policy - cgu-b-policy3-common-ptp-sub-policy - cgu-b-policy4-common-sriov-sub-policy placementRules: - cgu-b-policy1-common-cluster-version-policy - cgu-b-policy2-common-pao-sub-policy - cgu-b-policy3-common-ptp-sub-policy - cgu-b-policy4-common-sriov-sub-policy remediationPlan: - - spoke4 - - spoke5 status: {} 1 The cgu-b update cannot start until cgu-a is complete. apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-c namespace: default spec: 1 clusters: - spoke6 enable: false managedPolicies: - policy1-common-cluster-version-policy - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy - policy4-common-sriov-sub-policy remediationStrategy: maxConcurrency: 1 timeout: 240 status: conditions: - message: The ClusterGroupUpgrade CR is not enabled reason: UpgradeNotStarted status: "False" type: Ready copiedPolicies: - cgu-c-policy1-common-cluster-version-policy - cgu-c-policy4-common-sriov-sub-policy managedPoliciesCompliantBeforeUpgrade: - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy4-common-sriov-sub-policy namespace: default placementBindings: - cgu-c-policy1-common-cluster-version-policy - cgu-c-policy4-common-sriov-sub-policy placementRules: - cgu-c-policy1-common-cluster-version-policy - cgu-c-policy4-common-sriov-sub-policy remediationPlan: - - spoke6 status: {} 1 The cgu-c update does not have any blocking CRs. TALM starts the cgu-c update when the enable field is set to true . 
Create the ClusterGroupUpgrade CRs by running the following command for each relevant CR: USD oc apply -f <name>.yaml Start the update process by running the following command for each relevant CR: USD oc --namespace=default patch clustergroupupgrade.ran.openshift.io/<name> \ --type merge -p '{"spec":{"enable":true}}' The following examples show ClusterGroupUpgrade CRs where the enable field is set to true : Example for cgu-a with blocking CRs apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-a namespace: default spec: blockingCRs: - name: cgu-c namespace: default clusters: - spoke1 - spoke2 - spoke3 enable: true managedPolicies: - policy1-common-cluster-version-policy - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy remediationStrategy: canaries: - spoke1 maxConcurrency: 2 timeout: 240 status: conditions: - message: 'The ClusterGroupUpgrade CR is blocked by other CRs that have not yet completed: [cgu-c]' 1 reason: UpgradeCannotStart status: "False" type: Ready copiedPolicies: - cgu-a-policy1-common-cluster-version-policy - cgu-a-policy2-common-pao-sub-policy - cgu-a-policy3-common-ptp-sub-policy managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy2-common-pao-sub-policy namespace: default - name: policy3-common-ptp-sub-policy namespace: default placementBindings: - cgu-a-policy1-common-cluster-version-policy - cgu-a-policy2-common-pao-sub-policy - cgu-a-policy3-common-ptp-sub-policy placementRules: - cgu-a-policy1-common-cluster-version-policy - cgu-a-policy2-common-pao-sub-policy - cgu-a-policy3-common-ptp-sub-policy remediationPlan: - - spoke1 - - spoke2 status: {} 1 Shows the list of blocking CRs. Example for cgu-b with blocking CRs apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-b namespace: default spec: blockingCRs: - name: cgu-a namespace: default clusters: - spoke4 - spoke5 enable: true managedPolicies: - policy1-common-cluster-version-policy - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy - policy4-common-sriov-sub-policy remediationStrategy: maxConcurrency: 1 timeout: 240 status: conditions: - message: 'The ClusterGroupUpgrade CR is blocked by other CRs that have not yet completed: [cgu-a]' 1 reason: UpgradeCannotStart status: "False" type: Ready copiedPolicies: - cgu-b-policy1-common-cluster-version-policy - cgu-b-policy2-common-pao-sub-policy - cgu-b-policy3-common-ptp-sub-policy - cgu-b-policy4-common-sriov-sub-policy managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy2-common-pao-sub-policy namespace: default - name: policy3-common-ptp-sub-policy namespace: default - name: policy4-common-sriov-sub-policy namespace: default placementBindings: - cgu-b-policy1-common-cluster-version-policy - cgu-b-policy2-common-pao-sub-policy - cgu-b-policy3-common-ptp-sub-policy - cgu-b-policy4-common-sriov-sub-policy placementRules: - cgu-b-policy1-common-cluster-version-policy - cgu-b-policy2-common-pao-sub-policy - cgu-b-policy3-common-ptp-sub-policy - cgu-b-policy4-common-sriov-sub-policy remediationPlan: - - spoke4 - - spoke5 status: {} 1 Shows the list of blocking CRs. 
Example for cgu-c with blocking CRs apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-c namespace: default spec: clusters: - spoke6 enable: true managedPolicies: - policy1-common-cluster-version-policy - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy - policy4-common-sriov-sub-policy remediationStrategy: maxConcurrency: 1 timeout: 240 status: conditions: - message: The ClusterGroupUpgrade CR has upgrade policies that are still non compliant 1 reason: UpgradeNotCompleted status: "False" type: Ready copiedPolicies: - cgu-c-policy1-common-cluster-version-policy - cgu-c-policy4-common-sriov-sub-policy managedPoliciesCompliantBeforeUpgrade: - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy4-common-sriov-sub-policy namespace: default placementBindings: - cgu-c-policy1-common-cluster-version-policy - cgu-c-policy4-common-sriov-sub-policy placementRules: - cgu-c-policy1-common-cluster-version-policy - cgu-c-policy4-common-sriov-sub-policy remediationPlan: - - spoke6 status: currentBatch: 1 remediationPlanForBatch: spoke6: 0 1 The cgu-c update does not have any blocking CRs. 18.6. Update policies on managed clusters The Topology Aware Lifecycle Manager (TALM) remediates a set of inform policies for the clusters specified in the ClusterGroupUpgrade CR. TALM remediates inform policies by making enforce copies of the managed RHACM policies. Each copied policy has its own corresponding RHACM placement rule and RHACM placement binding. One by one, TALM adds each cluster from the current batch to the placement rule that corresponds with the applicable managed policy. If a cluster is already compliant with a policy, TALM skips applying that policy on the compliant cluster. TALM then moves on to applying the policy to the non-compliant clusters. After TALM completes the updates in a batch, all clusters are removed from the placement rules associated with the copied policies. Then, the update of the next batch starts. If a spoke cluster does not report any compliant state to RHACM, the managed policies on the hub cluster can be missing status information that TALM needs. TALM handles these cases in the following ways: If a policy's status.compliant field is missing, TALM ignores the policy and adds a log entry. Then, TALM continues looking at the policy's status.status field. If a policy's status.status is missing, TALM produces an error. If a cluster's compliance status is missing in the policy's status.status field, TALM considers that cluster to be non-compliant with that policy. For more information about RHACM policies, see Policy overview . Additional resources For more information about the PolicyGenTemplate CRD, see About the PolicyGenTemplate CRD . 18.6.1. Applying update policies to managed clusters You can update your managed clusters by applying your policies. Prerequisites Install the Topology Aware Lifecycle Manager (TALM). Provision one or more managed clusters. Log in as a user with cluster-admin privileges. Create RHACM policies in the hub cluster. Procedure Save the contents of the ClusterGroupUpgrade CR in the cgu-1.yaml file.
apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-1 namespace: default spec: managedPolicies: 1 - policy1-common-cluster-version-policy - policy2-common-nto-sub-policy - policy3-common-ptp-sub-policy - policy4-common-sriov-sub-policy enable: false clusters: 2 - spoke1 - spoke2 - spoke5 - spoke6 remediationStrategy: maxConcurrency: 2 3 timeout: 240 4 1 The name of the policies to apply. 2 The list of clusters to update. 3 The maxConcurrency field signifies the number of clusters updated at the same time. 4 The update timeout in minutes. Create the ClusterGroupUpgrade CR by running the following command: USD oc create -f cgu-1.yaml Check if the ClusterGroupUpgrade CR was created in the hub cluster by running the following command: USD oc get cgu --all-namespaces Example output NAMESPACE NAME AGE default cgu-1 8m55s Check the status of the update by running the following command: USD oc get cgu -n default cgu-1 -ojsonpath='{.status}' | jq Example output { "computedMaxConcurrency": 2, "conditions": [ { "lastTransitionTime": "2022-02-25T15:34:07Z", "message": "The ClusterGroupUpgrade CR is not enabled", 1 "reason": "UpgradeNotStarted", "status": "False", "type": "Ready" } ], "copiedPolicies": [ "cgu-policy1-common-cluster-version-policy", "cgu-policy2-common-nto-sub-policy", "cgu-policy3-common-ptp-sub-policy", "cgu-policy4-common-sriov-sub-policy" ], "managedPoliciesContent": { "policy1-common-cluster-version-policy": "null", "policy2-common-nto-sub-policy": "[{\"kind\":\"Subscription\",\"name\":\"node-tuning-operator\",\"namespace\":\"openshift-cluster-node-tuning-operator\"}]", "policy3-common-ptp-sub-policy": "[{\"kind\":\"Subscription\",\"name\":\"ptp-operator-subscription\",\"namespace\":\"openshift-ptp\"}]", "policy4-common-sriov-sub-policy": "[{\"kind\":\"Subscription\",\"name\":\"sriov-network-operator-subscription\",\"namespace\":\"openshift-sriov-network-operator\"}]" }, "managedPoliciesForUpgrade": [ { "name": "policy1-common-cluster-version-policy", "namespace": "default" }, { "name": "policy2-common-nto-sub-policy", "namespace": "default" }, { "name": "policy3-common-ptp-sub-policy", "namespace": "default" }, { "name": "policy4-common-sriov-sub-policy", "namespace": "default" } ], "managedPoliciesNs": { "policy1-common-cluster-version-policy": "default", "policy2-common-nto-sub-policy": "default", "policy3-common-ptp-sub-policy": "default", "policy4-common-sriov-sub-policy": "default" }, "placementBindings": [ "cgu-policy1-common-cluster-version-policy", "cgu-policy2-common-nto-sub-policy", "cgu-policy3-common-ptp-sub-policy", "cgu-policy4-common-sriov-sub-policy" ], "placementRules": [ "cgu-policy1-common-cluster-version-policy", "cgu-policy2-common-nto-sub-policy", "cgu-policy3-common-ptp-sub-policy", "cgu-policy4-common-sriov-sub-policy" ], "precaching": { "spec": {} }, "remediationPlan": [ [ "spoke1", "spoke2" ], [ "spoke5", "spoke6" ] ], "status": {} } 1 The spec.enable field in the ClusterGroupUpgrade CR is set to false . 
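Before you enable the CR, you can optionally extract only the computed remediation plan to confirm how the clusters are batched; the field path follows the sample output above, and the same check also appears in the troubleshooting section later in this chapter:
$ oc get cgu -n default cgu-1 -ojsonpath='{.status.remediationPlan}'
For the sample status above, this returns the two batches [["spoke1", "spoke2"], ["spoke5", "spoke6"]].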
Check the status of the policies by running the following command: USD oc get policies -A Example output NAMESPACE NAME REMEDIATION ACTION COMPLIANCE STATE AGE default cgu-policy1-common-cluster-version-policy enforce 17m 1 default cgu-policy2-common-nto-sub-policy enforce 17m default cgu-policy3-common-ptp-sub-policy enforce 17m default cgu-policy4-common-sriov-sub-policy enforce 17m default policy1-common-cluster-version-policy inform NonCompliant 15h default policy2-common-nto-sub-policy inform NonCompliant 15h default policy3-common-ptp-sub-policy inform NonCompliant 18m default policy4-common-sriov-sub-policy inform NonCompliant 18m 1 The spec.remediationAction field of policies currently applied on the clusters is set to enforce . The managed policies in inform mode from the ClusterGroupUpgrade CR remain in inform mode during the update. Change the value of the spec.enable field to true by running the following command: USD oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-1 \ --patch '{"spec":{"enable":true}}' --type=merge Verification Check the status of the update again by running the following command: USD oc get cgu -n default cgu-1 -ojsonpath='{.status}' | jq Example output { "computedMaxConcurrency": 2, "conditions": [ 1 { "lastTransitionTime": "2022-02-25T15:34:07Z", "message": "The ClusterGroupUpgrade CR has upgrade policies that are still non compliant", "reason": "UpgradeNotCompleted", "status": "False", "type": "Ready" } ], "copiedPolicies": [ "cgu-policy1-common-cluster-version-policy", "cgu-policy2-common-nto-sub-policy", "cgu-policy3-common-ptp-sub-policy", "cgu-policy4-common-sriov-sub-policy" ], "managedPoliciesContent": { "policy1-common-cluster-version-policy": "null", "policy2-common-nto-sub-policy": "[{\"kind\":\"Subscription\",\"name\":\"node-tuning-operator\",\"namespace\":\"openshift-cluster-node-tuning-operator\"}]", "policy3-common-ptp-sub-policy": "[{\"kind\":\"Subscription\",\"name\":\"ptp-operator-subscription\",\"namespace\":\"openshift-ptp\"}]", "policy4-common-sriov-sub-policy": "[{\"kind\":\"Subscription\",\"name\":\"sriov-network-operator-subscription\",\"namespace\":\"openshift-sriov-network-operator\"}]" }, "managedPoliciesForUpgrade": [ { "name": "policy1-common-cluster-version-policy", "namespace": "default" }, { "name": "policy2-common-nto-sub-policy", "namespace": "default" }, { "name": "policy3-common-ptp-sub-policy", "namespace": "default" }, { "name": "policy4-common-sriov-sub-policy", "namespace": "default" } ], "managedPoliciesNs": { "policy1-common-cluster-version-policy": "default", "policy2-common-nto-sub-policy": "default", "policy3-common-ptp-sub-policy": "default", "policy4-common-sriov-sub-policy": "default" }, "placementBindings": [ "cgu-policy1-common-cluster-version-policy", "cgu-policy2-common-nto-sub-policy", "cgu-policy3-common-ptp-sub-policy", "cgu-policy4-common-sriov-sub-policy" ], "placementRules": [ "cgu-policy1-common-cluster-version-policy", "cgu-policy2-common-nto-sub-policy", "cgu-policy3-common-ptp-sub-policy", "cgu-policy4-common-sriov-sub-policy" ], "precaching": { "spec": {} }, "remediationPlan": [ [ "spoke1", "spoke2" ], [ "spoke5", "spoke6" ] ], "status": { "currentBatch": 1, "currentBatchStartedAt": "2022-02-25T15:54:16Z", "remediationPlanForBatch": { "spoke1": 0, "spoke2": 1 }, "startedAt": "2022-02-25T15:54:16Z" } } 1 Reflects the update progress of the current batch. Run this command again to receive updated information about the progress. 
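To follow only the per-batch progress that the callout above refers to, you can read the nested status fields directly; the field paths follow the sample output above:
$ oc get cgu -n default cgu-1 -ojsonpath='{.status.status.currentBatch}{"\n"}{.status.status.remediationPlanForBatch}{"\n"}'
This is a convenience only; re-running the full status query shown above returns the same information together with the conditions.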
If the policies include Operator subscriptions, you can check the installation progress directly on the single-node cluster. Export the KUBECONFIG file of the single-node cluster you want to check the installation progress for by running the following command: USD export KUBECONFIG=<cluster_kubeconfig_absolute_path> Check all the subscriptions present on the single-node cluster and look for the one in the policy you are trying to install through the ClusterGroupUpgrade CR by running the following command: USD oc get subs -A | grep -i <subscription_name> Example output for cluster-logging policy NAMESPACE NAME PACKAGE SOURCE CHANNEL openshift-logging cluster-logging cluster-logging redhat-operators stable If one of the managed policies includes a ClusterVersion CR, check the status of platform updates in the current batch by running the following command against the spoke cluster: USD oc get clusterversion Example output NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.9.5 True True 43s Working towards 4.9.7: 71 of 735 done (9% complete) Check the Operator subscription by running the following command: USD oc get subs -n <operator-namespace> <operator-subscription> -ojsonpath="{.status}" Check the install plans present on the single-node cluster that is associated with the desired subscription by running the following command: USD oc get installplan -n <subscription_namespace> Example output for cluster-logging Operator NAMESPACE NAME CSV APPROVAL APPROVED openshift-logging install-6khtw cluster-logging.5.3.3-4 Manual true 1 1 The install plans have their Approval field set to Manual and their Approved field changes from false to true after TALM approves the install plan. Note When TALM is remediating a policy containing a subscription, it automatically approves any install plans attached to that subscription. Where multiple install plans are needed to get the operator to the latest known version, TALM might approve multiple install plans, upgrading through one or more intermediate versions to get to the final version. Check if the cluster service version for the Operator of the policy that the ClusterGroupUpgrade is installing reached the Succeeded phase by running the following command: USD oc get csv -n <operator_namespace> Example output for OpenShift Logging Operator NAME DISPLAY VERSION REPLACES PHASE cluster-logging.5.4.2 Red Hat OpenShift Logging 5.4.2 Succeeded 18.7. Creating a backup of cluster resources before upgrade For single-node OpenShift, the Topology Aware Lifecycle Manager (TALM) can create a backup of a deployment before an upgrade. If the upgrade fails, you can recover the version and restore a cluster to a working state without requiring a reprovision of applications. The container image backup starts when the backup field is set to true in the ClusterGroupUpgrade CR. The backup process can be in the following statuses: BackupStatePreparingToStart The first reconciliation pass is in progress. The TALM deletes any spoke backup namespace and hub view resources that have been created in a failed upgrade attempt. BackupStateStarting The backup prerequisites and backup job are being created. BackupStateActive The backup is in progress. BackupStateSucceeded The backup has succeeded. BackupStateTimeout Artifact backup has been partially done. BackupStateError The backup has ended with a non-zero exit code. Note If the backup fails and enters the BackupStateTimeout or BackupStateError state, the cluster upgrade does not proceed. 18.7.1. 
Creating a ClusterGroupUpgrade CR with backup For single-node OpenShift, you can create a backup of a deployment before an upgrade. If the upgrade fails you can use the upgrade-recovery.sh script generated by Topology Aware Lifecycle Manager (TALM) to return the system to its preupgrade state. The backup consists of the following items: Cluster backup A snapshot of etcd and static pod manifests. Content backup Backups of folders, for example, /etc , /usr/local , /var/lib/kubelet . Changed files backup Any files managed by machine-config that have been changed. Deployment A pinned ostree deployment. Images (Optional) Any container images that are in use. Prerequisites Install the Topology Aware Lifecycle Manager (TALM). Provision one or more managed clusters. Log in as a user with cluster-admin privileges. Install Red Hat Advanced Cluster Management (RHACM). Note It is highly recommended that you create a recovery partition. The following is an example SiteConfig custom resource (CR) for a recovery partition of 50 GB: nodes: - hostName: "snonode.sno-worker-0.e2e.bos.redhat.com" role: "master" rootDeviceHints: hctl: "0:2:0:0" deviceName: /dev/sda ........ ........ #Disk /dev/sda: 893.3 GiB, 959119884288 bytes, 1873281024 sectors diskPartition: - device: /dev/sda partitions: - mount_point: /var/recovery size: 51200 start: 800000 Procedure Save the contents of the ClusterGroupUpgrade CR with the backup field set to true in the clustergroupupgrades-group-du.yaml file: apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: du-upgrade-4918 namespace: ztp-group-du-sno spec: preCaching: true backup: true clusters: - cnfdb1 - cnfdb2 enable: false managedPolicies: - du-upgrade-platform-upgrade remediationStrategy: maxConcurrency: 2 timeout: 240 To start the update, apply the ClusterGroupUpgrade CR by running the following command: USD oc apply -f clustergroupupgrades-group-du.yaml Verification Check the status of the upgrade in the hub cluster by running the following command: USD oc get cgu -n ztp-group-du-sno du-upgrade-4918 -o jsonpath='{.status}' Example output { "backup": { "clusters": [ "cnfdb2", "cnfdb1" ], "status": { "cnfdb1": "Succeeded", "cnfdb2": "Succeeded" } }, "computedMaxConcurrency": 1, "conditions": [ { "lastTransitionTime": "2022-04-05T10:37:19Z", "message": "Backup is completed", "reason": "BackupCompleted", "status": "True", "type": "BackupDone" } ], "precaching": { "spec": {} }, "status": {} 18.7.2. Recovering a cluster after a failed upgrade If an upgrade of a cluster fails, you can manually log in to the cluster and use the backup to return the cluster to its preupgrade state. There are two stages: Rollback If the attempted upgrade included a change to the platform OS deployment, you must roll back to the version before running the recovery script. Important A rollback is only applicable to upgrades from TALM and single-node OpenShift. This process does not apply to rollbacks from any other upgrade type. Recovery The recovery shuts down containers and uses files from the backup partition to relaunch containers and restore clusters. Prerequisites Install the Topology Aware Lifecycle Manager (TALM). Provision one or more managed clusters. Install Red Hat Advanced Cluster Management (RHACM). Log in as a user with cluster-admin privileges. Run an upgrade that is configured for backup. 
Procedure Delete the previously created ClusterGroupUpgrade custom resource (CR) by running the following command: USD oc delete cgu/du-upgrade-4918 -n ztp-group-du-sno Log in to the cluster that you want to recover. Check the status of the platform OS deployment by running the following command: USD ostree admin status Example outputs [root@lab-test-spoke2-node-0 core]# ostree admin status * rhcos c038a8f08458bbed83a77ece033ad3c55597e3f64edad66ea12fda18cbdceaf9.0 Version: 49.84.202202230006-0 Pinned: yes 1 origin refspec: c038a8f08458bbed83a77ece033ad3c55597e3f64edad66ea12fda18cbdceaf9 1 The current deployment is pinned. A platform OS deployment rollback is not necessary. [root@lab-test-spoke2-node-0 core]# ostree admin status * rhcos f750ff26f2d5550930ccbe17af61af47daafc8018cd9944f2a3a6269af26b0fa.0 Version: 410.84.202204050541-0 origin refspec: f750ff26f2d5550930ccbe17af61af47daafc8018cd9944f2a3a6269af26b0fa rhcos ad8f159f9dc4ea7e773fd9604c9a16be0fe9b266ae800ac8470f63abc39b52ca.0 (rollback) 1 Version: 410.84.202203290245-0 Pinned: yes 2 origin refspec: ad8f159f9dc4ea7e773fd9604c9a16be0fe9b266ae800ac8470f63abc39b52ca 1 This platform OS deployment is marked for rollback. 2 The deployment is pinned and can be rolled back. To trigger a rollback of the platform OS deployment, run the following command: USD rpm-ostree rollback -r The first phase of the recovery shuts down containers and restores files from the backup partition to the targeted directories. To begin the recovery, run the following command: USD /var/recovery/upgrade-recovery.sh When prompted, reboot the cluster by running the following command: USD systemctl reboot After the reboot, restart the recovery by running the following command: USD /var/recovery/upgrade-recovery.sh --resume Note If the recovery utility fails, you can retry with the --restart option: USD /var/recovery/upgrade-recovery.sh --restart Verification To check the status of the recovery, run the following command: USD oc get clusterversion,nodes,clusteroperator Example output NAME VERSION AVAILABLE PROGRESSING SINCE STATUS clusterversion.config.openshift.io/version 4.9.23 True False 86d Cluster version is 4.9.23 1 NAME STATUS ROLES AGE VERSION node/lab-test-spoke1-node-0 Ready master,worker 86d v1.22.3+b93fd35 2 NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE clusteroperator.config.openshift.io/authentication 4.9.23 True False False 2d7h 3 clusteroperator.config.openshift.io/baremetal 4.9.23 True False False 86d .............. 1 The cluster version is available and has the correct version. 2 The node status is Ready . 3 The ClusterOperator object's availability is True .
In this state, TALM deletes any pre-caching namespace and hub view resources of spoke clusters that remain from incomplete updates. TALM then creates a new ManagedClusterView resource for the spoke pre-caching namespace to verify its deletion in the PrecachePreparing state. PrecachePreparing Cleaning up any remaining resources from incomplete updates is in progress. PrecacheStarting Pre-caching job prerequisites and the job are created. PrecacheActive The job is in "Active" state. PrecacheSucceeded The pre-cache job has succeeded. PrecacheTimeout The artifact pre-caching has been partially done. PrecacheUnrecoverableError The job ends with a non-zero exit code. 18.8.1. Creating a ClusterGroupUpgrade CR with pre-caching The pre-cache feature allows the required container images to be present on the spoke cluster before the update starts. Prerequisites Install the Topology Aware Lifecycle Manager (TALM). Provision one or more managed clusters. Log in as a user with cluster-admin privileges. Procedure Save the contents of the ClusterGroupUpgrade CR with the preCaching field set to true in the clustergroupupgrades-group-du.yaml file: apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: du-upgrade-4918 namespace: ztp-group-du-sno spec: preCaching: true 1 clusters: - cnfdb1 - cnfdb2 enable: false managedPolicies: - du-upgrade-platform-upgrade remediationStrategy: maxConcurrency: 2 timeout: 240 1 The preCaching field is set to true , which enables TALM to pull the container images before starting the update. When you want to start the update, apply the ClusterGroupUpgrade CR by running the following command: USD oc apply -f clustergroupupgrades-group-du.yaml Verification Check if the ClusterGroupUpgrade CR exists in the hub cluster by running the following command: USD oc get cgu -A Example output NAMESPACE NAME AGE ztp-group-du-sno du-upgrade-4918 10s 1 1 The CR is created. Check the status of the pre-caching task by running the following command: USD oc get cgu -n ztp-group-du-sno du-upgrade-4918 -o jsonpath='{.status}' Example output { "conditions": [ { "lastTransitionTime": "2022-01-27T19:07:24Z", "message": "Precaching is not completed (required)", 1 "reason": "PrecachingRequired", "status": "False", "type": "Ready" }, { "lastTransitionTime": "2022-01-27T19:07:24Z", "message": "Precaching is required and not done", "reason": "PrecachingNotDone", "status": "False", "type": "PrecachingDone" }, { "lastTransitionTime": "2022-01-27T19:07:34Z", "message": "Pre-caching spec is valid and consistent", "reason": "PrecacheSpecIsWellFormed", "status": "True", "type": "PrecacheSpecValid" } ], "precaching": { "clusters": [ "cnfdb1" 2 ], "spec": { "platformImage": "image.example.io"}, "status": { "cnfdb1": "Active"} } } 1 Displays that the update is in progress. 2 Displays the list of identified clusters. 
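If you want only the per-cluster pre-caching state from the hub cluster, you can extract the nested precaching status map; the field path follows the sample output above:
$ oc get cgu -n ztp-group-du-sno du-upgrade-4918 -ojsonpath='{.status.precaching.status}'
For the sample output above, this returns the map that shows cnfdb1 in the Active state.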
Check the status of the pre-caching job by running the following command on the spoke cluster: USD oc get jobs,pods -n openshift-talm-pre-cache Example output NAME COMPLETIONS DURATION AGE job.batch/pre-cache 0/1 3m10s 3m10s NAME READY STATUS RESTARTS AGE pod/pre-cache--1-9bmlr 1/1 Running 0 3m10s Check the status of the ClusterGroupUpgrade CR by running the following command: USD oc get cgu -n ztp-group-du-sno du-upgrade-4918 -o jsonpath='{.status}' Example output "conditions": [ { "lastTransitionTime": "2022-01-27T19:30:41Z", "message": "The ClusterGroupUpgrade CR has all clusters compliant with all the managed policies", "reason": "UpgradeCompleted", "status": "True", "type": "Ready" }, { "lastTransitionTime": "2022-01-27T19:28:57Z", "message": "Precaching is completed", "reason": "PrecachingCompleted", "status": "True", "type": "PrecachingDone" 1 } 1 The pre-cache tasks are done. 18.9. Troubleshooting the Topology Aware Lifecycle Manager The Topology Aware Lifecycle Manager (TALM) is an OpenShift Container Platform Operator that remediates RHACM policies. When issues occur, use the oc adm must-gather command to gather details and logs and to take steps in debugging the issues. For more information about related topics, see the following documentation: Red Hat Advanced Cluster Management for Kubernetes 2.4 Support Matrix Red Hat Advanced Cluster Management Troubleshooting The "Troubleshooting Operator issues" section 18.9.1. General troubleshooting You can determine the cause of the problem by reviewing the following questions: Is the configuration that you are applying supported? Are the RHACM and the OpenShift Container Platform versions compatible? Are the TALM and RHACM versions compatible? Which of the following components is causing the problem? Section 18.9.3, "Managed policies" Section 18.9.4, "Clusters" Section 18.9.5, "Remediation Strategy" Section 18.9.6, "Topology Aware Lifecycle Manager" To ensure that the ClusterGroupUpgrade configuration is functional, you can do the following: Create the ClusterGroupUpgrade CR with the spec.enable field set to false . Wait for the status to be updated and go through the troubleshooting questions. If everything looks as expected, set the spec.enable field to true in the ClusterGroupUpgrade CR. Warning After you set the spec.enable field to true in the ClusterUpgradeGroup CR, the update procedure starts and you cannot edit the CR's spec fields anymore. 18.9.2. Cannot modify the ClusterUpgradeGroup CR Issue You cannot edit the ClusterUpgradeGroup CR after enabling the update. Resolution Restart the procedure by performing the following steps: Remove the old ClusterGroupUpgrade CR by running the following command: USD oc delete cgu -n <ClusterGroupUpgradeCR_namespace> <ClusterGroupUpgradeCR_name> Check and fix the existing issues with the managed clusters and policies. Ensure that all the clusters are managed clusters and available. Ensure that all the policies exist and have the spec.remediationAction field set to inform . Create a new ClusterGroupUpgrade CR with the correct configurations. USD oc apply -f <ClusterGroupUpgradeCR_YAML> 18.9.3. Managed policies Checking managed policies on the system Issue You want to check if you have the correct managed policies on the system. 
Resolution Run the following command: USD oc get cgu lab-upgrade -ojsonpath='{.spec.managedPolicies}' Example output ["group-du-sno-validator-du-validator-policy", "policy2-common-nto-sub-policy", "policy3-common-ptp-sub-policy"] Checking remediationAction mode Issue You want to check if the remediationAction field is set to inform in the spec of the managed policies. Resolution Run the following command: USD oc get policies --all-namespaces Example output NAMESPACE NAME REMEDIATION ACTION COMPLIANCE STATE AGE default policy1-common-cluster-version-policy inform NonCompliant 5d21h default policy2-common-nto-sub-policy inform Compliant 5d21h default policy3-common-ptp-sub-policy inform NonCompliant 5d21h default policy4-common-sriov-sub-policy inform NonCompliant 5d21h Checking policy compliance state Issue You want to check the compliance state of policies. Resolution Run the following command: USD oc get policies --all-namespaces Example output NAMESPACE NAME REMEDIATION ACTION COMPLIANCE STATE AGE default policy1-common-cluster-version-policy inform NonCompliant 5d21h default policy2-common-nto-sub-policy inform Compliant 5d21h default policy3-common-ptp-sub-policy inform NonCompliant 5d21h default policy4-common-sriov-sub-policy inform NonCompliant 5d21h 18.9.4. Clusters Checking if managed clusters are present Issue You want to check if the clusters in the ClusterGroupUpgrade CR are managed clusters. Resolution Run the following command: USD oc get managedclusters Example output NAME HUB ACCEPTED MANAGED CLUSTER URLS JOINED AVAILABLE AGE local-cluster true https://api.hub.example.com:6443 True Unknown 13d spoke1 true https://api.spoke1.example.com:6443 True True 13d spoke3 true https://api.spoke3.example.com:6443 True True 27h Alternatively, check the TALM manager logs: Get the name of the TALM manager by running the following command: USD oc get pod -n openshift-operators Example output NAME READY STATUS RESTARTS AGE cluster-group-upgrades-controller-manager-75bcc7484d-8k8xp 2/2 Running 0 45m Check the TALM manager logs by running the following command: USD oc logs -n openshift-operators \ cluster-group-upgrades-controller-manager-75bcc7484d-8k8xp -c manager Example output ERROR controller-runtime.manager.controller.clustergroupupgrade Reconciler error {"reconciler group": "ran.openshift.io", "reconciler kind": "ClusterGroupUpgrade", "name": "lab-upgrade", "namespace": "default", "error": "Cluster spoke5555 is not a ManagedCluster"} 1 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem 1 The error message shows that the cluster is not a managed cluster. Checking if managed clusters are available Issue You want to check if the managed clusters specified in the ClusterGroupUpgrade CR are available. Resolution Run the following command: USD oc get managedclusters Example output NAME HUB ACCEPTED MANAGED CLUSTER URLS JOINED AVAILABLE AGE local-cluster true https://api.hub.testlab.com:6443 True Unknown 13d spoke1 true https://api.spoke1.testlab.com:6443 True True 13d 1 spoke3 true https://api.spoke3.testlab.com:6443 True True 27h 2 1 2 The value of the AVAILABLE field is True for the managed clusters. Checking clusterSelector Issue You want to check if the clusterSelector field is specified in the ClusterGroupUpgrade CR in at least one of the managed clusters. Resolution Run the following command: USD oc get managedcluster --selector=upgrade=true 1 1 The label for the clusters you want to update is upgrade:true . 
Example output NAME HUB ACCEPTED MANAGED CLUSTER URLS JOINED AVAILABLE AGE spoke1 true https://api.spoke1.testlab.com:6443 True True 13d spoke3 true https://api.spoke3.testlab.com:6443 True True 27h Checking if canary clusters are present Issue You want to check if the canary clusters are present in the list of clusters. Example ClusterGroupUpgrade CR spec: clusters: - spoke1 - spoke3 clusterSelector: - upgrade2=true remediationStrategy: canaries: - spoke3 maxConcurrency: 2 timeout: 240 Resolution Run the following commands: USD oc get cgu lab-upgrade -ojsonpath='{.spec.clusters}' Example output ["spoke1", "spoke3"] Check if the canary clusters are present in the list of clusters that match clusterSelector labels by running the following command: USD oc get managedcluster --selector=upgrade=true Example output NAME HUB ACCEPTED MANAGED CLUSTER URLS JOINED AVAILABLE AGE spoke1 true https://api.spoke1.testlab.com:6443 True True 13d spoke3 true https://api.spoke3.testlab.com:6443 True True 27h Note A cluster can be present in spec.clusters and also be matched by the spec.clusterSelecter label. Checking the pre-caching status on spoke clusters Check the status of pre-caching by running the following command on the spoke cluster: USD oc get jobs,pods -n openshift-talo-pre-cache 18.9.5. Remediation Strategy Checking if remediationStrategy is present in the ClusterGroupUpgrade CR Issue You want to check if the remediationStrategy is present in the ClusterGroupUpgrade CR. Resolution Run the following command: USD oc get cgu lab-upgrade -ojsonpath='{.spec.remediationStrategy}' Example output {"maxConcurrency":2, "timeout":240} Checking if maxConcurrency is specified in the ClusterGroupUpgrade CR Issue You want to check if the maxConcurrency is specified in the ClusterGroupUpgrade CR. Resolution Run the following command: USD oc get cgu lab-upgrade -ojsonpath='{.spec.remediationStrategy.maxConcurrency}' Example output 2 18.9.6. Topology Aware Lifecycle Manager Checking condition message and status in the ClusterGroupUpgrade CR Issue You want to check the value of the status.conditions field in the ClusterGroupUpgrade CR. Resolution Run the following command: USD oc get cgu lab-upgrade -ojsonpath='{.status.conditions}' Example output {"lastTransitionTime":"2022-02-17T22:25:28Z", "message":"The ClusterGroupUpgrade CR has managed policies that are missing:[policyThatDoesntExist]", "reason":"UpgradeCannotStart", "status":"False", "type":"Ready"} Checking corresponding copied policies Issue You want to check if every policy from status.managedPoliciesForUpgrade has a corresponding policy in status.copiedPolicies . Resolution Run the following command: USD oc get cgu lab-upgrade -oyaml Example output status: ... copiedPolicies: - lab-upgrade-policy3-common-ptp-sub-policy managedPoliciesForUpgrade: - name: policy3-common-ptp-sub-policy namespace: default Checking if status.remediationPlan was computed Issue You want to check if status.remediationPlan is computed. Resolution Run the following command: USD oc get cgu lab-upgrade -ojsonpath='{.status.remediationPlan}' Example output [["spoke2", "spoke3"]] Errors in the TALM manager container Issue You want to check the logs of the manager container of TALM. 
Resolution Run the following command: USD oc logs -n openshift-operators \ cluster-group-upgrades-controller-manager-75bcc7484d-8k8xp -c manager Example output ERROR controller-runtime.manager.controller.clustergroupupgrade Reconciler error {"reconciler group": "ran.openshift.io", "reconciler kind": "ClusterGroupUpgrade", "name": "lab-upgrade", "namespace": "default", "error": "Cluster spoke5555 is not a ManagedCluster"} 1 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem 1 Displays the error. Additional resources For information about troubleshooting, see OpenShift Container Platform Troubleshooting Operator Issues . For more information about using Topology Aware Lifecycle Manager in the ZTP workflow, see Updating managed policies with Topology Aware Lifecycle Manager . For more information about the PolicyGenTemplate CRD, see About the PolicyGenTemplate CRD .
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-topology-aware-lifecycle-manager-subscription namespace: openshift-operators spec: channel: \"stable\" name: topology-aware-lifecycle-manager source: redhat-operators sourceNamespace: openshift-marketplace",
"oc create -f talm-subscription.yaml",
"oc get csv -n openshift-operators",
"NAME DISPLAY VERSION REPLACES PHASE topology-aware-lifecycle-manager.4.11.x Topology Aware Lifecycle Manager 4.11.x Succeeded",
"oc get deploy -n openshift-operators",
"NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE openshift-operators cluster-group-upgrades-controller-manager 1/1 1 1 14s",
"apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-upgrade-complete namespace: default spec: clusters: 1 - spoke1 enable: false managedPolicies: 2 - policy1-common-cluster-version-policy - policy2-common-nto-sub-policy remediationStrategy: 3 canaries: 4 - spoke1 maxConcurrency: 1 5 timeout: 240 status: 6 conditions: - message: The ClusterGroupUpgrade CR is not enabled reason: UpgradeNotStarted status: \"False\" type: Ready copiedPolicies: - cgu-upgrade-complete-policy1-common-cluster-version-policy - cgu-upgrade-complete-policy2-common-nto-sub-policy managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy2-common-nto-sub-policy namespace: default placementBindings: - cgu-upgrade-complete-policy1-common-cluster-version-policy - cgu-upgrade-complete-policy2-common-nto-sub-policy placementRules: - cgu-upgrade-complete-policy1-common-cluster-version-policy - cgu-upgrade-complete-policy2-common-nto-sub-policy remediationPlan: - - spoke1",
"apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-upgrade-complete namespace: default spec: clusters: - spoke1 enable: true 1 managedPolicies: - policy1-common-cluster-version-policy - policy2-common-nto-sub-policy remediationStrategy: maxConcurrency: 1 timeout: 240 status: 2 conditions: - message: The ClusterGroupUpgrade CR has upgrade policies that are still non compliant reason: UpgradeNotCompleted status: \"False\" type: Ready copiedPolicies: - cgu-upgrade-complete-policy1-common-cluster-version-policy - cgu-upgrade-complete-policy2-common-nto-sub-policy managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy2-common-nto-sub-policy namespace: default placementBindings: - cgu-upgrade-complete-policy1-common-cluster-version-policy - cgu-upgrade-complete-policy2-common-nto-sub-policy placementRules: - cgu-upgrade-complete-policy1-common-cluster-version-policy - cgu-upgrade-complete-policy2-common-nto-sub-policy remediationPlan: - - spoke1 status: currentBatch: 1 remediationPlanForBatch: 3 spoke1: 0",
"apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-upgrade-complete namespace: default spec: actions: afterCompletion: deleteObjects: true 1 clusters: - spoke1 enable: true managedPolicies: - policy1-common-cluster-version-policy - policy2-common-nto-sub-policy remediationStrategy: maxConcurrency: 1 timeout: 240 status: 2 conditions: - message: The ClusterGroupUpgrade CR has all clusters compliant with all the managed policies reason: UpgradeCompleted status: \"True\" type: Ready managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy2-common-nto-sub-policy namespace: default remediationPlan: - - spoke1 status: remediationPlanForBatch: spoke1: -2 3",
"apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-a namespace: default spec: blockingCRs: 1 - name: cgu-c namespace: default clusters: - spoke1 - spoke2 - spoke3 enable: false managedPolicies: - policy1-common-cluster-version-policy - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy remediationStrategy: canaries: - spoke1 maxConcurrency: 2 timeout: 240 status: conditions: - message: The ClusterGroupUpgrade CR is not enabled reason: UpgradeNotStarted status: \"False\" type: Ready copiedPolicies: - cgu-a-policy1-common-cluster-version-policy - cgu-a-policy2-common-pao-sub-policy - cgu-a-policy3-common-ptp-sub-policy managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy2-common-pao-sub-policy namespace: default - name: policy3-common-ptp-sub-policy namespace: default placementBindings: - cgu-a-policy1-common-cluster-version-policy - cgu-a-policy2-common-pao-sub-policy - cgu-a-policy3-common-ptp-sub-policy placementRules: - cgu-a-policy1-common-cluster-version-policy - cgu-a-policy2-common-pao-sub-policy - cgu-a-policy3-common-ptp-sub-policy remediationPlan: - - spoke1 - - spoke2",
"apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-b namespace: default spec: blockingCRs: 1 - name: cgu-a namespace: default clusters: - spoke4 - spoke5 enable: false managedPolicies: - policy1-common-cluster-version-policy - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy - policy4-common-sriov-sub-policy remediationStrategy: maxConcurrency: 1 timeout: 240 status: conditions: - message: The ClusterGroupUpgrade CR is not enabled reason: UpgradeNotStarted status: \"False\" type: Ready copiedPolicies: - cgu-b-policy1-common-cluster-version-policy - cgu-b-policy2-common-pao-sub-policy - cgu-b-policy3-common-ptp-sub-policy - cgu-b-policy4-common-sriov-sub-policy managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy2-common-pao-sub-policy namespace: default - name: policy3-common-ptp-sub-policy namespace: default - name: policy4-common-sriov-sub-policy namespace: default placementBindings: - cgu-b-policy1-common-cluster-version-policy - cgu-b-policy2-common-pao-sub-policy - cgu-b-policy3-common-ptp-sub-policy - cgu-b-policy4-common-sriov-sub-policy placementRules: - cgu-b-policy1-common-cluster-version-policy - cgu-b-policy2-common-pao-sub-policy - cgu-b-policy3-common-ptp-sub-policy - cgu-b-policy4-common-sriov-sub-policy remediationPlan: - - spoke4 - - spoke5 status: {}",
"apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-c namespace: default spec: 1 clusters: - spoke6 enable: false managedPolicies: - policy1-common-cluster-version-policy - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy - policy4-common-sriov-sub-policy remediationStrategy: maxConcurrency: 1 timeout: 240 status: conditions: - message: The ClusterGroupUpgrade CR is not enabled reason: UpgradeNotStarted status: \"False\" type: Ready copiedPolicies: - cgu-c-policy1-common-cluster-version-policy - cgu-c-policy4-common-sriov-sub-policy managedPoliciesCompliantBeforeUpgrade: - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy4-common-sriov-sub-policy namespace: default placementBindings: - cgu-c-policy1-common-cluster-version-policy - cgu-c-policy4-common-sriov-sub-policy placementRules: - cgu-c-policy1-common-cluster-version-policy - cgu-c-policy4-common-sriov-sub-policy remediationPlan: - - spoke6 status: {}",
"oc apply -f <name>.yaml",
"oc --namespace=default patch clustergroupupgrade.ran.openshift.io/<name> --type merge -p '{\"spec\":{\"enable\":true}}'",
"apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-a namespace: default spec: blockingCRs: - name: cgu-c namespace: default clusters: - spoke1 - spoke2 - spoke3 enable: true managedPolicies: - policy1-common-cluster-version-policy - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy remediationStrategy: canaries: - spoke1 maxConcurrency: 2 timeout: 240 status: conditions: - message: 'The ClusterGroupUpgrade CR is blocked by other CRs that have not yet completed: [cgu-c]' 1 reason: UpgradeCannotStart status: \"False\" type: Ready copiedPolicies: - cgu-a-policy1-common-cluster-version-policy - cgu-a-policy2-common-pao-sub-policy - cgu-a-policy3-common-ptp-sub-policy managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy2-common-pao-sub-policy namespace: default - name: policy3-common-ptp-sub-policy namespace: default placementBindings: - cgu-a-policy1-common-cluster-version-policy - cgu-a-policy2-common-pao-sub-policy - cgu-a-policy3-common-ptp-sub-policy placementRules: - cgu-a-policy1-common-cluster-version-policy - cgu-a-policy2-common-pao-sub-policy - cgu-a-policy3-common-ptp-sub-policy remediationPlan: - - spoke1 - - spoke2 status: {}",
"apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-b namespace: default spec: blockingCRs: - name: cgu-a namespace: default clusters: - spoke4 - spoke5 enable: true managedPolicies: - policy1-common-cluster-version-policy - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy - policy4-common-sriov-sub-policy remediationStrategy: maxConcurrency: 1 timeout: 240 status: conditions: - message: 'The ClusterGroupUpgrade CR is blocked by other CRs that have not yet completed: [cgu-a]' 1 reason: UpgradeCannotStart status: \"False\" type: Ready copiedPolicies: - cgu-b-policy1-common-cluster-version-policy - cgu-b-policy2-common-pao-sub-policy - cgu-b-policy3-common-ptp-sub-policy - cgu-b-policy4-common-sriov-sub-policy managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy2-common-pao-sub-policy namespace: default - name: policy3-common-ptp-sub-policy namespace: default - name: policy4-common-sriov-sub-policy namespace: default placementBindings: - cgu-b-policy1-common-cluster-version-policy - cgu-b-policy2-common-pao-sub-policy - cgu-b-policy3-common-ptp-sub-policy - cgu-b-policy4-common-sriov-sub-policy placementRules: - cgu-b-policy1-common-cluster-version-policy - cgu-b-policy2-common-pao-sub-policy - cgu-b-policy3-common-ptp-sub-policy - cgu-b-policy4-common-sriov-sub-policy remediationPlan: - - spoke4 - - spoke5 status: {}",
"apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-c namespace: default spec: clusters: - spoke6 enable: true managedPolicies: - policy1-common-cluster-version-policy - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy - policy4-common-sriov-sub-policy remediationStrategy: maxConcurrency: 1 timeout: 240 status: conditions: - message: The ClusterGroupUpgrade CR has upgrade policies that are still non compliant 1 reason: UpgradeNotCompleted status: \"False\" type: Ready copiedPolicies: - cgu-c-policy1-common-cluster-version-policy - cgu-c-policy4-common-sriov-sub-policy managedPoliciesCompliantBeforeUpgrade: - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy4-common-sriov-sub-policy namespace: default placementBindings: - cgu-c-policy1-common-cluster-version-policy - cgu-c-policy4-common-sriov-sub-policy placementRules: - cgu-c-policy1-common-cluster-version-policy - cgu-c-policy4-common-sriov-sub-policy remediationPlan: - - spoke6 status: currentBatch: 1 remediationPlanForBatch: spoke6: 0",
"apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-1 namespace: default spec: managedPolicies: 1 - policy1-common-cluster-version-policy - policy2-common-nto-sub-policy - policy3-common-ptp-sub-policy - policy4-common-sriov-sub-policy enable: false clusters: 2 - spoke1 - spoke2 - spoke5 - spoke6 remediationStrategy: maxConcurrency: 2 3 timeout: 240 4",
"oc create -f cgu-1.yaml",
"oc get cgu --all-namespaces",
"NAMESPACE NAME AGE default cgu-1 8m55s",
"oc get cgu -n default cgu-1 -ojsonpath='{.status}' | jq",
"{ \"computedMaxConcurrency\": 2, \"conditions\": [ { \"lastTransitionTime\": \"2022-02-25T15:34:07Z\", \"message\": \"The ClusterGroupUpgrade CR is not enabled\", 1 \"reason\": \"UpgradeNotStarted\", \"status\": \"False\", \"type\": \"Ready\" } ], \"copiedPolicies\": [ \"cgu-policy1-common-cluster-version-policy\", \"cgu-policy2-common-nto-sub-policy\", \"cgu-policy3-common-ptp-sub-policy\", \"cgu-policy4-common-sriov-sub-policy\" ], \"managedPoliciesContent\": { \"policy1-common-cluster-version-policy\": \"null\", \"policy2-common-nto-sub-policy\": \"[{\\\"kind\\\":\\\"Subscription\\\",\\\"name\\\":\\\"node-tuning-operator\\\",\\\"namespace\\\":\\\"openshift-cluster-node-tuning-operator\\\"}]\", \"policy3-common-ptp-sub-policy\": \"[{\\\"kind\\\":\\\"Subscription\\\",\\\"name\\\":\\\"ptp-operator-subscription\\\",\\\"namespace\\\":\\\"openshift-ptp\\\"}]\", \"policy4-common-sriov-sub-policy\": \"[{\\\"kind\\\":\\\"Subscription\\\",\\\"name\\\":\\\"sriov-network-operator-subscription\\\",\\\"namespace\\\":\\\"openshift-sriov-network-operator\\\"}]\" }, \"managedPoliciesForUpgrade\": [ { \"name\": \"policy1-common-cluster-version-policy\", \"namespace\": \"default\" }, { \"name\": \"policy2-common-nto-sub-policy\", \"namespace\": \"default\" }, { \"name\": \"policy3-common-ptp-sub-policy\", \"namespace\": \"default\" }, { \"name\": \"policy4-common-sriov-sub-policy\", \"namespace\": \"default\" } ], \"managedPoliciesNs\": { \"policy1-common-cluster-version-policy\": \"default\", \"policy2-common-nto-sub-policy\": \"default\", \"policy3-common-ptp-sub-policy\": \"default\", \"policy4-common-sriov-sub-policy\": \"default\" }, \"placementBindings\": [ \"cgu-policy1-common-cluster-version-policy\", \"cgu-policy2-common-nto-sub-policy\", \"cgu-policy3-common-ptp-sub-policy\", \"cgu-policy4-common-sriov-sub-policy\" ], \"placementRules\": [ \"cgu-policy1-common-cluster-version-policy\", \"cgu-policy2-common-nto-sub-policy\", \"cgu-policy3-common-ptp-sub-policy\", \"cgu-policy4-common-sriov-sub-policy\" ], \"precaching\": { \"spec\": {} }, \"remediationPlan\": [ [ \"spoke1\", \"spoke2\" ], [ \"spoke5\", \"spoke6\" ] ], \"status\": {} }",
"oc get policies -A",
"NAMESPACE NAME REMEDIATION ACTION COMPLIANCE STATE AGE default cgu-policy1-common-cluster-version-policy enforce 17m 1 default cgu-policy2-common-nto-sub-policy enforce 17m default cgu-policy3-common-ptp-sub-policy enforce 17m default cgu-policy4-common-sriov-sub-policy enforce 17m default policy1-common-cluster-version-policy inform NonCompliant 15h default policy2-common-nto-sub-policy inform NonCompliant 15h default policy3-common-ptp-sub-policy inform NonCompliant 18m default policy4-common-sriov-sub-policy inform NonCompliant 18m",
"oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-1 --patch '{\"spec\":{\"enable\":true}}' --type=merge",
"oc get cgu -n default cgu-1 -ojsonpath='{.status}' | jq",
"{ \"computedMaxConcurrency\": 2, \"conditions\": [ 1 { \"lastTransitionTime\": \"2022-02-25T15:34:07Z\", \"message\": \"The ClusterGroupUpgrade CR has upgrade policies that are still non compliant\", \"reason\": \"UpgradeNotCompleted\", \"status\": \"False\", \"type\": \"Ready\" } ], \"copiedPolicies\": [ \"cgu-policy1-common-cluster-version-policy\", \"cgu-policy2-common-nto-sub-policy\", \"cgu-policy3-common-ptp-sub-policy\", \"cgu-policy4-common-sriov-sub-policy\" ], \"managedPoliciesContent\": { \"policy1-common-cluster-version-policy\": \"null\", \"policy2-common-nto-sub-policy\": \"[{\\\"kind\\\":\\\"Subscription\\\",\\\"name\\\":\\\"node-tuning-operator\\\",\\\"namespace\\\":\\\"openshift-cluster-node-tuning-operator\\\"}]\", \"policy3-common-ptp-sub-policy\": \"[{\\\"kind\\\":\\\"Subscription\\\",\\\"name\\\":\\\"ptp-operator-subscription\\\",\\\"namespace\\\":\\\"openshift-ptp\\\"}]\", \"policy4-common-sriov-sub-policy\": \"[{\\\"kind\\\":\\\"Subscription\\\",\\\"name\\\":\\\"sriov-network-operator-subscription\\\",\\\"namespace\\\":\\\"openshift-sriov-network-operator\\\"}]\" }, \"managedPoliciesForUpgrade\": [ { \"name\": \"policy1-common-cluster-version-policy\", \"namespace\": \"default\" }, { \"name\": \"policy2-common-nto-sub-policy\", \"namespace\": \"default\" }, { \"name\": \"policy3-common-ptp-sub-policy\", \"namespace\": \"default\" }, { \"name\": \"policy4-common-sriov-sub-policy\", \"namespace\": \"default\" } ], \"managedPoliciesNs\": { \"policy1-common-cluster-version-policy\": \"default\", \"policy2-common-nto-sub-policy\": \"default\", \"policy3-common-ptp-sub-policy\": \"default\", \"policy4-common-sriov-sub-policy\": \"default\" }, \"placementBindings\": [ \"cgu-policy1-common-cluster-version-policy\", \"cgu-policy2-common-nto-sub-policy\", \"cgu-policy3-common-ptp-sub-policy\", \"cgu-policy4-common-sriov-sub-policy\" ], \"placementRules\": [ \"cgu-policy1-common-cluster-version-policy\", \"cgu-policy2-common-nto-sub-policy\", \"cgu-policy3-common-ptp-sub-policy\", \"cgu-policy4-common-sriov-sub-policy\" ], \"precaching\": { \"spec\": {} }, \"remediationPlan\": [ [ \"spoke1\", \"spoke2\" ], [ \"spoke5\", \"spoke6\" ] ], \"status\": { \"currentBatch\": 1, \"currentBatchStartedAt\": \"2022-02-25T15:54:16Z\", \"remediationPlanForBatch\": { \"spoke1\": 0, \"spoke2\": 1 }, \"startedAt\": \"2022-02-25T15:54:16Z\" } }",
"export KUBECONFIG=<cluster_kubeconfig_absolute_path>",
"oc get subs -A | grep -i <subscription_name>",
"NAMESPACE NAME PACKAGE SOURCE CHANNEL openshift-logging cluster-logging cluster-logging redhat-operators stable",
"oc get clusterversion",
"NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.9.5 True True 43s Working towards 4.9.7: 71 of 735 done (9% complete)",
"oc get subs -n <operator-namespace> <operator-subscription> -ojsonpath=\"{.status}\"",
"oc get installplan -n <subscription_namespace>",
"NAMESPACE NAME CSV APPROVAL APPROVED openshift-logging install-6khtw cluster-logging.5.3.3-4 Manual true 1",
"oc get csv -n <operator_namespace>",
"NAME DISPLAY VERSION REPLACES PHASE cluster-logging.5.4.2 Red Hat OpenShift Logging 5.4.2 Succeeded",
"nodes: - hostName: \"snonode.sno-worker-0.e2e.bos.redhat.com\" role: \"master\" rootDeviceHints: hctl: \"0:2:0:0\" deviceName: /dev/sda ..... ..... #Disk /dev/sda: 893.3 GiB, 959119884288 bytes, 1873281024 sectors diskPartition: - device: /dev/sda partitions: - mount_point: /var/recovery size: 51200 start: 800000",
"apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: du-upgrade-4918 namespace: ztp-group-du-sno spec: preCaching: true backup: true clusters: - cnfdb1 - cnfdb2 enable: false managedPolicies: - du-upgrade-platform-upgrade remediationStrategy: maxConcurrency: 2 timeout: 240",
"oc apply -f clustergroupupgrades-group-du.yaml",
"oc get cgu -n ztp-group-du-sno du-upgrade-4918 -o jsonpath='{.status}'",
"{ \"backup\": { \"clusters\": [ \"cnfdb2\", \"cnfdb1\" ], \"status\": { \"cnfdb1\": \"Succeeded\", \"cnfdb2\": \"Succeeded\" } }, \"computedMaxConcurrency\": 1, \"conditions\": [ { \"lastTransitionTime\": \"2022-04-05T10:37:19Z\", \"message\": \"Backup is completed\", \"reason\": \"BackupCompleted\", \"status\": \"True\", \"type\": \"BackupDone\" } ], \"precaching\": { \"spec\": {} }, \"status\": {}",
"oc delete cgu/du-upgrade-4918 -n ztp-group-du-sno",
"oc ostree admin status",
"ostree admin status * rhcos c038a8f08458bbed83a77ece033ad3c55597e3f64edad66ea12fda18cbdceaf9.0 Version: 49.84.202202230006-0 Pinned: yes 1 origin refspec: c038a8f08458bbed83a77ece033ad3c55597e3f64edad66ea12fda18cbdceaf9",
"ostree admin status * rhcos f750ff26f2d5550930ccbe17af61af47daafc8018cd9944f2a3a6269af26b0fa.0 Version: 410.84.202204050541-0 origin refspec: f750ff26f2d5550930ccbe17af61af47daafc8018cd9944f2a3a6269af26b0fa rhcos ad8f159f9dc4ea7e773fd9604c9a16be0fe9b266ae800ac8470f63abc39b52ca.0 (rollback) 1 Version: 410.84.202203290245-0 Pinned: yes 2 origin refspec: ad8f159f9dc4ea7e773fd9604c9a16be0fe9b266ae800ac8470f63abc39b52ca",
"rpm-ostree rollback -r",
"/var/recovery/upgrade-recovery.sh",
"systemctl reboot",
"/var/recovery/upgrade-recovery.sh --resume",
"/var/recovery/upgrade-recovery.sh --restart",
"oc get clusterversion,nodes,clusteroperator",
"NAME VERSION AVAILABLE PROGRESSING SINCE STATUS clusterversion.config.openshift.io/version 4.9.23 True False 86d Cluster version is 4.9.23 1 NAME STATUS ROLES AGE VERSION node/lab-test-spoke1-node-0 Ready master,worker 86d v1.22.3+b93fd35 2 NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE clusteroperator.config.openshift.io/authentication 4.9.23 True False False 2d7h 3 clusteroperator.config.openshift.io/baremetal 4.9.23 True False False 86d ...........",
"apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: du-upgrade-4918 namespace: ztp-group-du-sno spec: preCaching: true 1 clusters: - cnfdb1 - cnfdb2 enable: false managedPolicies: - du-upgrade-platform-upgrade remediationStrategy: maxConcurrency: 2 timeout: 240",
"oc apply -f clustergroupupgrades-group-du.yaml",
"oc get cgu -A",
"NAMESPACE NAME AGE ztp-group-du-sno du-upgrade-4918 10s 1",
"oc get cgu -n ztp-group-du-sno du-upgrade-4918 -o jsonpath='{.status}'",
"{ \"conditions\": [ { \"lastTransitionTime\": \"2022-01-27T19:07:24Z\", \"message\": \"Precaching is not completed (required)\", 1 \"reason\": \"PrecachingRequired\", \"status\": \"False\", \"type\": \"Ready\" }, { \"lastTransitionTime\": \"2022-01-27T19:07:24Z\", \"message\": \"Precaching is required and not done\", \"reason\": \"PrecachingNotDone\", \"status\": \"False\", \"type\": \"PrecachingDone\" }, { \"lastTransitionTime\": \"2022-01-27T19:07:34Z\", \"message\": \"Pre-caching spec is valid and consistent\", \"reason\": \"PrecacheSpecIsWellFormed\", \"status\": \"True\", \"type\": \"PrecacheSpecValid\" } ], \"precaching\": { \"clusters\": [ \"cnfdb1\" 2 ], \"spec\": { \"platformImage\": \"image.example.io\"}, \"status\": { \"cnfdb1\": \"Active\"} } }",
"oc get jobs,pods -n openshift-talm-pre-cache",
"NAME COMPLETIONS DURATION AGE job.batch/pre-cache 0/1 3m10s 3m10s NAME READY STATUS RESTARTS AGE pod/pre-cache--1-9bmlr 1/1 Running 0 3m10s",
"oc get cgu -n ztp-group-du-sno du-upgrade-4918 -o jsonpath='{.status}'",
"\"conditions\": [ { \"lastTransitionTime\": \"2022-01-27T19:30:41Z\", \"message\": \"The ClusterGroupUpgrade CR has all clusters compliant with all the managed policies\", \"reason\": \"UpgradeCompleted\", \"status\": \"True\", \"type\": \"Ready\" }, { \"lastTransitionTime\": \"2022-01-27T19:28:57Z\", \"message\": \"Precaching is completed\", \"reason\": \"PrecachingCompleted\", \"status\": \"True\", \"type\": \"PrecachingDone\" 1 }",
"oc delete cgu -n <ClusterGroupUpgradeCR_namespace> <ClusterGroupUpgradeCR_name>",
"oc apply -f <ClusterGroupUpgradeCR_YAML>",
"oc get cgu lab-upgrade -ojsonpath='{.spec.managedPolicies}'",
"[\"group-du-sno-validator-du-validator-policy\", \"policy2-common-nto-sub-policy\", \"policy3-common-ptp-sub-policy\"]",
"oc get policies --all-namespaces",
"NAMESPACE NAME REMEDIATION ACTION COMPLIANCE STATE AGE default policy1-common-cluster-version-policy inform NonCompliant 5d21h default policy2-common-nto-sub-policy inform Compliant 5d21h default policy3-common-ptp-sub-policy inform NonCompliant 5d21h default policy4-common-sriov-sub-policy inform NonCompliant 5d21h",
"oc get policies --all-namespaces",
"NAMESPACE NAME REMEDIATION ACTION COMPLIANCE STATE AGE default policy1-common-cluster-version-policy inform NonCompliant 5d21h default policy2-common-nto-sub-policy inform Compliant 5d21h default policy3-common-ptp-sub-policy inform NonCompliant 5d21h default policy4-common-sriov-sub-policy inform NonCompliant 5d21h",
"oc get managedclusters",
"NAME HUB ACCEPTED MANAGED CLUSTER URLS JOINED AVAILABLE AGE local-cluster true https://api.hub.example.com:6443 True Unknown 13d spoke1 true https://api.spoke1.example.com:6443 True True 13d spoke3 true https://api.spoke3.example.com:6443 True True 27h",
"oc get pod -n openshift-operators",
"NAME READY STATUS RESTARTS AGE cluster-group-upgrades-controller-manager-75bcc7484d-8k8xp 2/2 Running 0 45m",
"oc logs -n openshift-operators cluster-group-upgrades-controller-manager-75bcc7484d-8k8xp -c manager",
"ERROR controller-runtime.manager.controller.clustergroupupgrade Reconciler error {\"reconciler group\": \"ran.openshift.io\", \"reconciler kind\": \"ClusterGroupUpgrade\", \"name\": \"lab-upgrade\", \"namespace\": \"default\", \"error\": \"Cluster spoke5555 is not a ManagedCluster\"} 1 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem",
"oc get managedclusters",
"NAME HUB ACCEPTED MANAGED CLUSTER URLS JOINED AVAILABLE AGE local-cluster true https://api.hub.testlab.com:6443 True Unknown 13d spoke1 true https://api.spoke1.testlab.com:6443 True True 13d 1 spoke3 true https://api.spoke3.testlab.com:6443 True True 27h 2",
"oc get managedcluster --selector=upgrade=true 1",
"NAME HUB ACCEPTED MANAGED CLUSTER URLS JOINED AVAILABLE AGE spoke1 true https://api.spoke1.testlab.com:6443 True True 13d spoke3 true https://api.spoke3.testlab.com:6443 True True 27h",
"spec: clusters: - spoke1 - spoke3 clusterSelector: - upgrade2=true remediationStrategy: canaries: - spoke3 maxConcurrency: 2 timeout: 240",
"oc get cgu lab-upgrade -ojsonpath='{.spec.clusters}'",
"[\"spoke1\", \"spoke3\"]",
"oc get managedcluster --selector=upgrade=true",
"NAME HUB ACCEPTED MANAGED CLUSTER URLS JOINED AVAILABLE AGE spoke1 true https://api.spoke1.testlab.com:6443 True True 13d spoke3 true https://api.spoke3.testlab.com:6443 True True 27h",
"oc get jobs,pods -n openshift-talo-pre-cache",
"oc get cgu lab-upgrade -ojsonpath='{.spec.remediationStrategy}'",
"{\"maxConcurrency\":2, \"timeout\":240}",
"oc get cgu lab-upgrade -ojsonpath='{.spec.remediationStrategy.maxConcurrency}'",
"2",
"oc get cgu lab-upgrade -ojsonpath='{.status.conditions}'",
"{\"lastTransitionTime\":\"2022-02-17T22:25:28Z\", \"message\":\"The ClusterGroupUpgrade CR has managed policies that are missing:[policyThatDoesntExist]\", \"reason\":\"UpgradeCannotStart\", \"status\":\"False\", \"type\":\"Ready\"}",
"oc get cgu lab-upgrade -oyaml",
"status: ... copiedPolicies: - lab-upgrade-policy3-common-ptp-sub-policy managedPoliciesForUpgrade: - name: policy3-common-ptp-sub-policy namespace: default",
"oc get cgu lab-upgrade -ojsonpath='{.status.remediationPlan}'",
"[[\"spoke2\", \"spoke3\"]]",
"oc logs -n openshift-operators cluster-group-upgrades-controller-manager-75bcc7484d-8k8xp -c manager",
"ERROR controller-runtime.manager.controller.clustergroupupgrade Reconciler error {\"reconciler group\": \"ran.openshift.io\", \"reconciler kind\": \"ClusterGroupUpgrade\", \"name\": \"lab-upgrade\", \"namespace\": \"default\", \"error\": \"Cluster spoke5555 is not a ManagedCluster\"} 1 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/scalability_and_performance/cnf-talm-for-cluster-updates |
Chapter 1. Overview of Insights for Red Hat Enterprise Linux advisor service reporting | Chapter 1. Overview of Insights for Red Hat Enterprise Linux advisor service reporting The advisor service enables the following ways to share the status of your Red Hat Enterprise Linux (RHEL) infrastructure: Export and download a report (in CSV, JSON, or YAML file format) that shows recommendations for your impacted RHEL systems, and share the information with strategic stakeholders. Subscribe to the advisor Weekly Report email to receive a brief summary of the health of your RHEL environment. Download an executive report to share a high level overview of your infrastructure with an executive audience. These methods provide a quick and easily accessible way for you or other stakeholders to assess the health of your infrastructure and plan or act accordingly. | null | https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/generating_advisor_service_reports_with_fedramp/insights-report-overview |
Chapter 3. Enabling NVIDIA GPUs | Chapter 3. Enabling NVIDIA GPUs Before you can use NVIDIA GPUs in OpenShift AI, you must install the NVIDIA GPU Operator. Prerequisites You have logged in to your OpenShift cluster. You have the cluster-admin role in your OpenShift cluster. You have installed an NVIDIA GPU and confirmed that it is detected in your environment. Procedure To enable GPU support on an OpenShift cluster in a disconnected or airgapped environment, follow the instructions here: Deploy GPU Operators in a disconnected or airgapped environment in the NVIDIA documentation. Important After you install the Node Feature Discovery (NFD) Operator, you must create an instance of NodeFeatureDiscovery. In addition, after you install the NVIDIA GPU Operator, you must create a ClusterPolicy and populate it with default values. Delete the migration-gpu-status ConfigMap. In the OpenShift web console, switch to the Administrator perspective. Set the Project to All Projects or redhat-ods-applications to ensure you can see the appropriate ConfigMap. Search for the migration-gpu-status ConfigMap. Click the action menu (...) and select Delete ConfigMap from the list. The Delete ConfigMap dialog appears. Inspect the dialog and confirm that you are deleting the correct ConfigMap. Click Delete . Restart the dashboard replicaset. In the OpenShift web console, switch to the Administrator perspective. Click Workloads Deployments . Set the Project to All Projects or redhat-ods-applications to ensure you can see the appropriate deployment. Search for the rhods-dashboard deployment. Click the action menu (...) and select Restart Rollout from the list. Wait until the Status column indicates that all pods in the rollout have fully restarted. Verification The reset migration-gpu-status instance is present on the Instances tab on the AcceleratorProfile custom resource definition (CRD) details page. From the Administrator perspective, go to the Operators Installed Operators page. Confirm that the following Operators appear: NVIDIA GPU Node Feature Discovery (NFD) Kernel Module Management (KMM) The GPU is correctly detected a few minutes after full installation of the Node Feature Discovery (NFD) and NVIDIA GPU Operators. The OpenShift command line interface (CLI) displays the appropriate output for the GPU worker node. For example: Note In OpenShift AI 2.18, Red Hat supports the use of accelerators within the same cluster only. Red Hat does not support remote direct memory access (RDMA) between accelerators, or the use of accelerators across a network, for example, by using technology such as NVIDIA GPUDirect or NVLink. After installing the NVIDIA GPU Operator, create an accelerator profile as described in Working with accelerator profiles . | [
"Expected output when the GPU is detected properly describe node <node name> Capacity: cpu: 4 ephemeral-storage: 313981932Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 16076568Ki nvidia.com/gpu: 1 pods: 250 Allocatable: cpu: 3920m ephemeral-storage: 288292006229 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 12828440Ki nvidia.com/gpu: 1 pods: 250"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_ai_self-managed/2.18/html/working_with_accelerators/enabling-nvidia-gpus_accelerators |
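The console steps above for removing the migration-gpu-status ConfigMap and restarting the dashboard can also be performed from the CLI. The following is a minimal sketch rather than a documented procedure: it assumes the ConfigMap and the rhods-dashboard deployment both live in the redhat-ods-applications namespace, so adjust the namespace if your installation differs.
# Remove the stale ConfigMap (namespace is an assumption; verify with 'oc get configmap -A | grep migration-gpu-status')
oc delete configmap migration-gpu-status -n redhat-ods-applications
# Restart the dashboard rollout and wait for the new pods to report Ready
oc rollout restart deployment/rhods-dashboard -n redhat-ods-applications
oc rollout status deployment/rhods-dashboard -n redhat-ods-applications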
8.215. sg3_utils | 8.215. sg3_utils 8.215.1. RHBA-2014:1601 - sg3_utils bug fix update Updated sg3_utils packages that fix one bug are now available for Red Hat Enterprise Linux 6. The sg3_utils packages contain a collection of tools for SCSI devices that use the Linux SCSI generic (sg) interface. This collection includes utilities for database copying based on "dd" syntax and semantics (the "sg_dd", "sgp_dd" and "sgm_dd" commands), INQUIRY data checking and associated pages ("sg_inq"), mode and log page checking ("sg_modes" and "sg_logs"), disk spinning ("sg_start") and self-tests ("sg_senddiag"), as well as other utilities. It also contains the rescan-scsi-bus.sh script. Bug Fix BZ# 857200 When a Logical Unit Number (LUN) was resized on a target side, the rescan-scsi-bus.sh script failed to resize SCSI devices on the host side. This update applies a patch to fix this bug and when an LUN is resized on the target side, the change is propagated to the host side as expected. Users of sg3_utils are advised to upgrade to these updated packages, which fix this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/sg3_utils |
Chapter 2. Onboarding certification partners | Chapter 2. Onboarding certification partners Use the Red Hat Customer Portal to create a new account if you are a new partner, or use your existing Red Hat account if you are a current partner to onboard with Red Hat for certifying your products. 2.1. Onboarding existing certification partners As an existing partner you could be: A member of the one-to-many EPM program who has some degree of representation on the EPM team, but does not have any assistance with OpenStack certification. OR A member fully managed by the EPM team in the traditional manner with a dedicated EPM team member who is assigned to manage the partner, including questions about OpenStack certification requests. Prerequisites You have an existing Red Hat account. Procedure Access Red Hat Customer Portal and click Log in . Enter your Red Hat login or email address and click . Then, use either of the following options: Log in with company single sign-on Log in with Red Hat account From the menu bar on the header, click your avatar to view the account details. If an account number is associated with your account, then contact the certification team to proceed with the certification process. If an account number is not associated with your account, then first contact the Red Hat global customer service team to raise a request for creating a new account number. After you get an account number, contact the certification team to proceed with the certification process. 2.2. Onboarding new certification partners Creating a new Red Hat account is the first step in onboarding new certification partners. Access Red Hat Customer Portal and click Register . Enter the following details to create a new Red Hat account: Select Corporate in the Account Type field. If you have created a Corporate type account and require an account number, contact the Red Hat global customer service team . Note Ensure that you create a company account and not a personal account. The account created during this step is also used to sign in to the Red Hat Ecosystem Catalog when working with certification requests. Choose a Red Hat login and password. Important If your login ID is associated with multiple accounts, then do not use your contact email as the login ID as this can cause issues during login. Also, you cannot change your login ID once created. Enter your Personal information and Company information . Click Create My Account . A new Red Hat account is created. Contact your Ecosystem Partner Management (EPM) representative, if available. Else contact the certification team to proceed with the certification process. 2.3. Exploring the Partner landing page After logging in to Red Hat Partner Connect , the partner landing page opens. This page serves as a centralized hub, offering access to various partner services and capabilities that enable you to start working on opportunities. The Partner landing page offers the following services: Certified technology portal Deal registrations Red Hat Partner Training Portal Access to our library of marketing, sales & technical content Help and support Email preference center Partner subscriptions User account As part of the Red Hat partnership, partners receive access to various Red Hat systems and services that enable them to create shared value with Red Hat for our joint customers. Select the Certified technology portal tile to begin your product certification journey. The personalized Certified Technology partner dashboard opens. 
| null | https://docs.redhat.com/en/documentation/red_hat_software_certification/2025/html/red_hat_openstack_certification_workflow_guide/onboarding-certification-partners_rhosp-wf-cert-introduction |
Providing feedback on Red Hat build of OpenJDK documentation | Providing feedback on Red Hat build of OpenJDK documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create a ticket . Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Create creates and routes the issue to the appropriate documentation team. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/release_notes_for_red_hat_build_of_openjdk_8.0.432/providing-direct-documentation-feedback_openjdk |
2.6.2. TCP Wrappers Configuration Files | 2.6.2. TCP Wrappers Configuration Files To determine if a client is allowed to connect to a service, TCP Wrappers reference the following two files, which are commonly referred to as hosts access files: /etc/hosts.allow /etc/hosts.deny When a TCP-wrapped service receives a client request, it performs the following steps: It references /etc/hosts.allow - The TCP-wrapped service sequentially parses the /etc/hosts.allow file and applies the first rule specified for that service. If it finds a matching rule, it allows the connection. If not, it moves on to the step. It references /etc/hosts.deny - The TCP-wrapped service sequentially parses the /etc/hosts.deny file. If it finds a matching rule, it denies the connection. If not, it grants access to the service. The following are important points to consider when using TCP Wrappers to protect network services: Because access rules in hosts.allow are applied first, they take precedence over rules specified in hosts.deny . Therefore, if access to a service is allowed in hosts.allow , a rule denying access to that same service in hosts.deny is ignored. The rules in each file are read from the top down and the first matching rule for a given service is the only one applied. The order of the rules is extremely important. If no rules for the service are found in either file, or if neither file exists, access to the service is granted. TCP-wrapped services do not cache the rules from the hosts access files, so any changes to hosts.allow or hosts.deny take effect immediately, without restarting network services. Warning If the last line of a hosts access file is not a newline character (created by pressing the Enter key), the last rule in the file fails and an error is logged to either /var/log/messages or /var/log/secure . This is also the case for a rule that spans multiple lines without using the backslash character. The following example illustrates the relevant portion of a log message for a rule failure due to either of these circumstances: 2.6.2.1. Formatting Access Rules The format for both /etc/hosts.allow and /etc/hosts.deny is identical. Each rule must be on its own line. Blank lines or lines that start with a hash (#) are ignored. Each rule uses the following basic format to control access to network services: <daemon list> : <client list> [ : <option> : <option> : ... ] <daemon list> - A comma-separated list of process names ( not service names) or the ALL wildcard. The daemon list also accepts operators (refer to Section 2.6.2.1.4, "Operators" ) to allow greater flexibility. <client list> - A comma-separated list of hostnames, host IP addresses, special patterns, or wildcards which identify the hosts affected by the rule. The client list also accepts operators listed in Section 2.6.2.1.4, "Operators" to allow greater flexibility. <option> - An optional action or colon-separated list of actions performed when the rule is triggered. Option fields support expansions, launch shell commands, allow or deny access, and alter logging behavior. Note More information on some of the terms above can be found elsewhere in this guide: Section 2.6.2.1.1, "Wildcards" Section 2.6.2.1.2, "Patterns" Section 2.6.2.2.4, "Expansions" Section 2.6.2.2, "Option Fields" The following is a basic sample hosts access rule: This rule instructs TCP Wrappers to watch for connections to the FTP daemon ( vsftpd ) from any host in the example.com domain. If this rule appears in hosts.allow , the connection is accepted. 
If this rule appears in hosts.deny , the connection is rejected. The sample hosts access rule is more complex and uses two option fields: Note that each option field is preceded by the backslash (\). Use of the backslash prevents failure of the rule due to length. This sample rule states that if a connection to the SSH daemon ( sshd ) is attempted from a host in the example.com domain, execute the echo command to append the attempt to a special log file, and deny the connection. Because the optional deny directive is used, this line denies access even if it appears in the hosts.allow file. Refer to Section 2.6.2.2, "Option Fields" for a more detailed look at available options. 2.6.2.1.1. Wildcards Wildcards allow TCP Wrappers to more easily match groups of daemons or hosts. They are used most frequently in the client list field of access rules. The following wildcards are available: ALL - Matches everything. It can be used for both the daemon list and the client list. LOCAL - Matches any host that does not contain a period (.), such as localhost. KNOWN - Matches any host where the hostname and host address are known or where the user is known. UNKNOWN - Matches any host where the hostname or host address are unknown or where the user is unknown. PARANOID - A reverse DNS lookup is done on the source IP address to obtain the host name. Then a DNS lookup is performed to resolve the IP address. If the two IP addresses do not match the connection is dropped and the logs are updated Important The KNOWN , UNKNOWN , and PARANOID wildcards should be used with care, because they rely on a functioning DNS server for correct operation. Any disruption to name resolution may prevent legitimate users from gaining access to a service. 2.6.2.1.2. Patterns Patterns can be used in the client field of access rules to more precisely specify groups of client hosts. The following is a list of common patterns for entries in the client field: Hostname beginning with a period (.) - Placing a period at the beginning of a hostname matches all hosts sharing the listed components of the name. The following example applies to any host within the example.com domain: IP address ending with a period (.) - Placing a period at the end of an IP address matches all hosts sharing the initial numeric groups of an IP address. The following example applies to any host within the 192.168.x.x network: IP address/netmask pair - Netmask expressions can also be used as a pattern to control access to a particular group of IP addresses. The following example applies to any host with an address range of 192.168.0.0 through 192.168.1.255 : Important When working in the IPv4 address space, the address/prefix length ( prefixlen ) pair declarations ( CIDR notation) are not supported. Only IPv6 rules can use this format. [IPv6 address]/prefixlen pair - [net]/prefixlen pairs can also be used as a pattern to control access to a particular group of IPv6 addresses. The following example would apply to any host with an address range of 3ffe:505:2:1:: through 3ffe:505:2:1:ffff:ffff:ffff:ffff : The asterisk (*) - Asterisks can be used to match entire groups of hostnames or IP addresses, as long as they are not mixed in a client list containing other types of patterns. The following example would apply to any host within the example.com domain: The slash (/) - If a client list begins with a slash, it is treated as a file name. This is useful if rules specifying large numbers of hosts are necessary. 
The following example refers TCP Wrappers to the /etc/telnet.hosts file for all Telnet connections: Other, less used patterns are also accepted by TCP Wrappers. Refer to the hosts_access man 5 page for more information. Warning Be very careful when using hostnames and domain names. Attackers can use a variety of tricks to circumvent accurate name resolution. In addition, disruption to DNS service prevents even authorized users from using network services. It is, therefore, best to use IP addresses whenever possible. 2.6.2.1.3. Portmap and TCP Wrappers Portmap 's implementation of TCP Wrappers does not support host look-ups, which means portmap can not use hostnames to identify hosts. Consequently, access control rules for portmap in hosts.allow or hosts.deny must use IP addresses, or the keyword ALL , for specifying hosts. Changes to portmap access control rules may not take effect immediately. You may need to restart the portmap service. Widely used services, such as NIS and NFS, depend on portmap to operate, so be aware of these limitations. 2.6.2.1.4. Operators At present, access control rules accept one operator, EXCEPT . It can be used in both the daemon list and the client list of a rule. The EXCEPT operator allows specific exceptions to broader matches within the same rule. In the following example from a hosts.allow file, all example.com hosts are allowed to connect to all services except attacker.example.com : In another example from a hosts.allow file, clients from the 192.168.0. x network can use all services except for FTP: Note Organizationally, it is often easier to avoid using EXCEPT operators. This allows other administrators to quickly scan the appropriate files to see what hosts are allowed or denied access to services, without having to sort through EXCEPT operators. | [
"warning: /etc/hosts.allow, line 20: missing newline or line too long",
"vsftpd : .example.com",
"sshd : .example.com : spawn /bin/echo `/bin/date` access denied>>/var/log/sshd.log : deny",
"ALL : .example.com",
"ALL : 192.168.",
"ALL : 192.168.0.0/255.255.254.0",
"ALL : [3ffe:505:2:1::]/64",
"ALL : *.example.com",
"in.telnetd : /etc/telnet.hosts",
"ALL : .example.com EXCEPT attacker.example.com",
"ALL EXCEPT vsftpd : 192.168.0."
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security_guide/sect-security_guide-tcp_wrappers_and_xinetd-tcp_wrappers_configuration_files |
Chapter 1. Ceph RESTful API | Chapter 1. Ceph RESTful API As a storage administrator, you can use the Ceph RESTful API, or simply the Ceph API, provided by the Red Hat Ceph Storage Dashboard to interact with the Red Hat Ceph Storage cluster. You can display information about the Ceph Monitors and OSDs, along with their respective configuration options. You can even create or edit Ceph pools. The Ceph API uses the following standards: HTTP 1.1 JSON MIME and HTTP Content Negotiation JWT These standards are OpenAPI 3.0 compliant, regulating the API syntax, semantics, content encoding, versioning, authentication, and authorization. Prerequisites A healthy running Red Hat Ceph Storage cluster. Access to the node running the Ceph Manager. 1.1. Versioning for the Ceph API A main goal for the Ceph RESTful API, is to provide a stable interface. To achieve a stable interface, the Ceph API is built on the following principles: A mandatory explicit default version for all endpoints to avoid implicit defaults. Fine-grain change control per-endpoint. The expected version from a specific endpoint is stated in the HTTP header. Syntax Example If the current Ceph API server is not able to address that specific version, a 415 - Unsupported Media Type response will be returned. Using semantic versioning. Major changes are backwards incompatible. Changes might result in non-additive changes to the request, and to the response formats for a specific endpoint. Minor changes are backwards and forwards compatible. Changes consist of additive changes to the request or response formats for a specific endpoint. 1.2. Authentication and authorization for the Ceph API Access to the Ceph RESTful API goes through two checkpoints. The first is authenticating that the request is done on the behalf of a valid, and existing user. Secondly, is authorizing the previously authenticated user can do a specific action, such as creating, reading, updating, or deleting, on the target end point. Before users start using the Ceph API, they need a valid JSON Web Token (JWT). The /api/auth endpoint allows you to retrieve this token. Example This token must be used together with every API request by placing it within the Authorization HTTP header. Syntax Additional Resources See the Ceph user management chapter in the Red Hat Ceph Storage Administration Guide for more details. 1.3. Enabling and Securing the Ceph API module The Red Hat Ceph Storage Dashboard module offers the RESTful API access to the storage cluster over an SSL-secured connection. Important If disabling SSL, then user names and passwords are sent unencrypted to the Red Hat Ceph Storage Dashboard. Prerequisites Root-level access to a Ceph Monitor node. Ensure that you have at least one ceph-mgr daemon active. If you use a firewall, ensure that TCP port 8443 , for SSL, and TCP port 8080 , without SSL, are open on the node with the active ceph-mgr daemon. Procedure Log into the Cephadm shell: Example Enable the RESTful plug-in: Configure an SSL certificate. If your organization's certificate authority (CA) provides a certificate, then set using the certificate files: Syntax Example If you want to set unique node-based certificates, then add a HOST_NAME to the commands: Example Alternatively, you can generate a self-signed certificate. 
However, using a self-signed certificate does not provide full security benefits of the HTTPS protocol: Warning Most modern web browsers will complain about self-signed certificates, which require you to confirm before establishing a secure connection. Create a user, set the password, and set the role: Syntax Example This example creates a user named user1 with the administrator role. Connect to the RESTful plug-in web page. Open a web browser and enter the following URL: Syntax Example If you used a self-signed certificate, confirm a security exception. Additional Resources The ceph dashboard --help command. The https:// HOST_NAME :8443/doc page, where HOST_NAME is the IP address or name of the node with the running ceph-mgr instance. For more information, see the Security Hardening guide within the Product Documentation for Red Hat Enterprise Linux for your OS version, on the Red Hat Customer Portal. 1.4. Questions and Answers 1.4.1. Getting information This section describes how to use the Ceph API to view information about the storage cluster, Ceph Monitors, OSDs, pools, and hosts. 1.4.1.1. How Can I View All Cluster Configuration Options? This section describes how to use the RESTful plug-in to view cluster configuration options and their values. The curl Command On the command line, use: Replace: USER with the user name CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance CEPH_MANAGER_PORT with the TCP port number. The default TCP port number is 8443. Enter the user's password when prompted. If you used a self-signed certificate, use the --insecure option: Python In the Python interpreter, enter: Replace: CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance USER with the user name PASSWORD with the user's password If you used a self-signed certificate, use the verify=False option: Web Browser In the web browser, enter: Replace: CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance Enter the user name and password when prompted. Additional Resources The Configuration Guide for Red Hat Ceph Storage 6 1.4.1.2. How Can I View a Particular Cluster Configuration Option? This section describes how to view a particular cluster option and its value. The curl Command On the command line, use: Replace: USER with the user name CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance ARGUMENT with the configuration option you want to view Enter the user's password when prompted. If you used a self-signed certificate, use the --insecure option: Python In the Python interpreter, enter: Replace: CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance ARGUMENT with the configuration option you want to view USER with the user name PASSWORD with the user's password If you used a self-signed certificate, use the verify=False option: Web Browser In the web browser, enter: Replace: CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance ARGUMENT with the configuration option you want to view Enter the user name and password when prompted. Additional Resources The Configuration Guide for Red Hat Ceph Storage 6 1.4.1.3. How Can I View All Configuration Options for OSDs? This section describes how to view all configuration options and their values for OSDs. 
The curl Command On the command line, use: Replace: USER with the user name CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance Enter the user's password when prompted. If you used a self-signed certificate, use the --insecure option: Python In the Python interpreter, enter: Replace: CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance USER with the user name PASSWORD with the user's password If you used a self-signed certificate, use the verify=False option: Web Browser In the web browser, enter: Replace: CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance Enter the user name and password when prompted. Additional Resources The Configuration Guide for Red Hat Ceph Storage 6 1.4.1.4. How Can I View CRUSH Rules? This section describes how to view CRUSH rules. The curl Command On the command line, use: Replace: USER with the user name CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance Enter the user's password when prompted. If you used a self-signed certificate, use the --insecure option: Python In the Python interpreter, enter: Replace: CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance USER with the user name PASSWORD with the user's password If you used a self-signed certificate, use the verify=False option: Web Browser In the web browser, enter: Replace: CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance Enter the user name and password when prompted. Additional Resources The CRUSH Rules section in the Administration Guide for Red Hat Ceph Storage 6. 1.4.1.5. How Can I View Information about Monitors? This section describes how to view information about a particular Monitor, such as: IP address Name Quorum status The curl Command On the command line, use: Replace: USER with the user name CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance Enter the user's password when prompted. If you used a self-signed certificate, use the --insecure option: Python In the Python interpreter, enter: Replace: CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance USER with the user name PASSWORD with the user's password If you used a self-signed certificate, use the verify=False option: Web Browser In the web browser, enter: Replace: CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance Enter the user name and password when prompted. 1.4.1.6. How Can I View Information About a Particular Monitor? This section describes how to view information about a particular Monitor, such as: IP address Name Quorum status The curl Command On the command line, use: Replace: USER with the user name CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance NAME with the short host name of the Monitor Enter the user's password when prompted. 
If you used a self-signed certificate, use the --insecure option: Python In the Python interpreter, enter: Replace: CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance NAME with the short host name of the Monitor USER with the user name PASSWORD with the user's password If you used a self-signed certificate, use the verify=False option: Web Browser In the web browser, enter: Replace: CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance NAME with the short host name of the Monitor Enter the user name and password when prompted. 1.4.1.7. How Can I View Information about OSDs? This section describes how to view information about OSDs, such as: IP address Its pools Affinity Weight The curl Command On the command line, use: Replace: USER with the user name CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance Enter the user's password when prompted. If you used a self-signed certificate, use the --insecure option: Python In the Python interpreter, enter: Replace: CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance USER with the user name PASSWORD with the user's password If you used a self-signed certificate, use the verify=False option: Web Browser In the web browser, enter: Replace: CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance Enter the user name and password when prompted. 1.4.1.8. How Can I View Information about a Particular OSD? This section describes how to view information about a particular OSD, such as: IP address Its pools Affinity Weight The curl Command On the command line, use: Replace: USER with the user name CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance ID with the ID of the OSD listed in the osd field Enter the user's password when prompted. If you used a self-signed certificate, use the --insecure option: Python In the Python interpreter, enter: Replace: CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance ID with the ID of the OSD listed in the osd field USER with the user name PASSWORD with the user's password If you used a self-signed certificate, use the verify=False option: Web Browser In the web browser, enter: Replace: CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance ID with the ID of the OSD listed in the osd field Enter the user name and password when prompted. 1.4.1.9. How Can I Determine What Processes Can Be Scheduled on an OSD? This section describes how to use the RESTful plug-in to view what processes, such as scrubbing or deep scrubbing, can be scheduled on an OSD. The curl Command On the command line, use: Replace: USER with the user name CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance ID with the ID of the OSD listed in the osd field Enter the user's password when prompted. 
If you used a self-signed certificate, use the --insecure option: Python In the Python interpreter, enter: Replace: CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance ID with the ID of the OSD listed in the osd field USER with the user name PASSWORD with the user's password If you used a self-signed certificate, use the verify=False option: Web Browser In the web browser, enter: Replace: CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance ID with the ID of the OSD listed in the osd field Enter the user name and password when prompted. 1.4.1.10. How Can I View Information About Pools? This section describes how to view information about pools, such as: Flags Size Number of placement groups The curl Command On the command line, use: Replace: USER with the user name CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance Enter the user's password when prompted. If you used a self-signed certificate, use the --insecure option: Python In the Python interpreter, enter: Replace: CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance USER with the user name PASSWORD with the user's password If you used a self-signed certificate, use the verify=False option: Web Browser In the web browser, enter: Replace: CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance Enter the user name and password when prompted. 1.4.1.11. How Can I View Information About a Particular Pool? This section describes how to view information about a particular pool, such as: Flags Size Number of placement groups The curl Command On the command line, use: Replace: USER with the user name CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance ID with the ID of the pool listed in the pool field Enter the user's password when prompted. If you used a self-signed certificate, use the --insecure option: Python In the Python interpreter, enter: Replace: CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance ID with the ID of the pool listed in the pool field USER with the user name PASSWORD with the user's password If you used a self-signed certificate, use the verify=False option: Web Browser In the web browser, enter: Replace: CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance ID with the ID of the pool listed in the pool field Enter the user name and password when prompted. 1.4.1.12. How Can I View Information About Hosts? This section describes how to view information about hosts, such as: Host names Ceph daemons and their IDs Ceph version The curl Command On the command line, use: Replace: USER with the user name CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance Enter the user's password when prompted. If you used a self-signed certificate, use the --insecure option: Python In the Python interpreter, enter: Replace: CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance USER with the user name PASSWORD with the user's password If you used a self-signed certificate, use the verify=False option: Web Browser In the web browser, enter: Replace: CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance Enter the user name and password when prompted. 1.4.1.13. 
How Can I View Information About a Particular Host? This section describes how to view information about a particular host, such as: Host names Ceph daemons and their IDs Ceph version The curl Command On the command line, use: Replace: USER with the user name CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance HOST_NAME with the host name of the host listed in the hostname field Enter the user's password when prompted. If you used a self-signed certificate, use the --insecure option: Python In the Python interpreter, enter: Replace: CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance HOST_NAME with the host name of the host listed in the hostname field USER with the user name PASSWORD with the user's password If you used a self-signed certificate, use the verify=False option: Web Browser In the web browser, enter: Replace: CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance HOST_NAME with the host name of the host listed in the hostname field Enter the user name and password when prompted. 1.4.2. Changing Configuration This section describes how to use the Ceph API to change OSD configuration options, the state of an OSD, and information about pools. 1.4.2.1. How Can I Change OSD Configuration Options? This section describes how to use the RESTful plug-in to change OSD configuration options. The curl Command On the command line, use: Replace: OPTION with the option to modify; pause , noup , nodown , noout , noin , nobackfill , norecover , noscrub , nodeep-scrub VALUE with true or false USER with the user name CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance Enter the user's password when prompted. If you used a self-signed certificate, use the --insecure option: Python In the Python interpreter, enter: Replace: CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance OPTION with the option to modify; pause , noup , nodown , noout , noin , nobackfill , norecover , noscrub , nodeep-scrub VALUE with True or False USER with the user name PASSWORD with the user's password If you used a self-signed certificate, use the verify=False option: 1.4.2.2. How Can I Change the OSD State? This section describes how to use the RESTful plug-in to change the state of an OSD. The curl Command On the command line, use: Replace: STATE with the state to change ( in or up ) VALUE with true or false USER with the user name CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance ID with the ID of the OSD listed in the osd field Enter the user's password when prompted. If you used a self-signed certificate, use the --insecure option: Python In the Python interpreter, enter: Replace: CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance ID with the ID of the OSD listed in the osd field STATE with the state to change ( in or up ) VALUE with True or False USER with the user name PASSWORD with the user's password If you used a self-signed certificate, use the verify=False option: 1.4.2.3. How Can I Reweight an OSD? This section describes how to change the weight of an OSD. 
The curl Command On the command line, use: Replace: VALUE with the new weight USER with the user name CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance ID with the ID of the OSD listed in the osd field Enter the user's password when prompted. If you used a self-signed certificate, use the --insecure option: Python In the Python interpreter, enter: Replace: CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance ID with the ID of the OSD listed in the osd field VALUE with the new weight USER with the user name PASSWORD with the user's password If you used a self-signed certificate, use the verify=False option: 1.4.2.4. How Can I Change Information for a Pool? This section describes how to use the RESTful plug-in to change information for a particular pool. The curl Command On the command line, use: Replace: OPTION with the option to modify VALUE with the new value of the option USER with the user name CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance ID with the ID of the pool listed in the pool field Enter the user's password when prompted. If you used a self-signed certificate, use the --insecure option: Python In the Python interpreter, enter: Replace: CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance ID with the ID of the pool listed in the pool field OPTION with the option to modify VALUE with the new value of the option USER with the user name PASSWORD with the user's password If you used a self-signed certificate, use the verify=False option: 1.4.3. Administering the Cluster This section describes how to use the Ceph API to initialize scrubbing or deep scrubbing on an OSD, create a pool or remove data from a pool, remove requests, or create a request. 1.4.3.1. How Can I Run a Scheduled Process on an OSD? This section describes how to use the RESTful API to run scheduled processes, such as scrubbing or deep scrubbing, on an OSD. The curl Command On the command line, use: Replace: COMMAND with the process ( scrub , deep-scrub , or repair ) you want to start. Verify that the process is supported on the OSD. See Section 1.4.1.9, "How Can I Determine What Processes Can Be Scheduled on an OSD?" for details. USER with the user name CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance ID with the ID of the OSD listed in the osd field Enter the user's password when prompted. If you used a self-signed certificate, use the --insecure option: Python In the Python interpreter, enter: Replace: CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance ID with the ID of the OSD listed in the osd field COMMAND with the process ( scrub , deep-scrub , or repair ) you want to start. Verify that the process is supported on the OSD. See Section 1.4.1.9, "How Can I Determine What Processes Can Be Scheduled on an OSD?" for details. USER with the user name PASSWORD with the user's password If you used a self-signed certificate, use the verify=False option: 1.4.3.2. How Can I Create a New Pool? This section describes how to use the RESTful plug-in to create a new pool.
The curl Command On the command line, use: Replace: NAME with the name of the new pool NUMBER with the number of the placement groups USER with the user name CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance Enter the user's password when prompted. If you used a self-signed certificate, use the --insecure option: Python In the Python interpreter, enter: Replace: CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance NAME with the name of the new pool NUMBER with the number of the placement groups USER with the user name PASSWORD with the user's password If you used a self-signed certificate, use the verify=False option: 1.4.3.3. How Can I Remove Pools? This section describes how to use the RESTful plug-in to remove a pool. This request is by default forbidden. To allow it, add the following parameter to the Ceph configuration file: The curl Command On the command line, use: Replace: USER with the user name CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance ID with the ID of the pool listed in the pool field Enter the user's password when prompted. If you used a self-signed certificate, use the --insecure option: Python In the Python interpreter, enter: Replace: CEPH_MANAGER with the IP address or short host name of the node with the active ceph-mgr instance ID with the ID of the pool listed in the pool field USER with the user name PASSWORD with the user's password If you used a self-signed certificate, use the verify=False option: | [
"Accept: application/vnd.ceph.api.v MAJOR . MINOR +json",
"Accept: application/vnd.ceph.api.v1.0+json",
"curl -X POST \"https://example.com:8443/api/auth\" -H \"Accept: application/vnd.ceph.api.v1.0+json\" -H \"Content-Type: application/json\" -d '{\"username\": user1, \"password\": password1}'",
"curl -H \"Authorization: Bearer TOKEN \"",
"root@host01 ~]# cephadm shell",
"ceph mgr module enable dashboard",
"ceph dashboard set-ssl-certificate HOST_NAME -i CERT_FILE ceph dashboard set-ssl-certificate-key HOST_NAME -i KEY_FILE",
"ceph dashboard set-ssl-certificate -i dashboard.crt ceph dashboard set-ssl-certificate-key -i dashboard.key",
"ceph dashboard set-ssl-certificate host01 -i dashboard.crt ceph dashboard set-ssl-certificate-key host01 -i dashboard.key",
"ceph dashboard create-self-signed-cert",
"echo -n \" PASSWORD \" > PATH_TO_FILE / PASSWORD_FILE ceph dashboard ac-user-create USER_NAME -i PASSWORD_FILE ROLE",
"echo -n \"p@ssw0rd\" > /root/dash-password.txt ceph dashboard ac-user-create user1 -i /root/dash-password.txt administrator",
"https:// HOST_NAME :8443",
"https://host01:8443",
"curl --silent --user USER 'https:// CEPH_MANAGER : CEPH_MANAGER_PORT /api/cluster_conf'",
"curl --silent --insecure --user USER 'https:// CEPH_MANAGER :8080/api/cluster_conf'",
"python >> import requests >> result = requests.get('https:// CEPH_MANAGER :8080/api/cluster_conf', auth=(\" USER \", \" PASSWORD \")) >> print result.json()",
"python >> import requests >> result = requests.get('https:// CEPH_MANAGER :8080/api/cluster_conf', auth=(\" USER \", \" PASSWORD \"), verify=False) >> print result.json()",
"https:// CEPH_MANAGER :8080/api/cluster_conf",
"curl --silent --user USER 'https:// CEPH_MANAGER :8080/api/cluster_conf/ ARGUMENT '",
"curl --silent --insecure --user USER 'https:// CEPH_MANAGER :8080/api/cluster_conf/ ARGUMENT '",
"python >> import requests >> result = requests.get('https:// CEPH_MANAGER :8080/api/cluster_conf/ ARGUMENT ', auth=(\" USER \", \" PASSWORD \")) >> print result.json()",
"python >> import requests >> result = requests.get('https:// CEPH_MANAGER :8080/api/cluster_conf/ ARGUMENT ', auth=(\" USER \", \" PASSWORD \"), verify=False) >> print result.json()",
"https:// CEPH_MANAGER :8080/api/cluster_conf/ ARGUMENT",
"curl --silent --user USER 'https:// CEPH_MANAGER :8080/api/osd/flags'",
"curl --silent --insecure --user USER 'https:// CEPH_MANAGER :8080/api/osd/flags'",
"python >> import requests >> result = requests.get('https:// CEPH_MANAGER :8080/api/osd/flags', auth=(\" USER \", \" PASSWORD \")) >> print result.json()",
"python >> import requests >> result = requests.get('https:// CEPH_MANAGER :8080/api/osd/flags', auth=(\" USER \", \" PASSWORD \"), verify=False) >> print result.json()",
"https:// CEPH_MANAGER :8080/api/osd/flags",
"curl --silent --user USER 'https:// CEPH_MANAGER :8080/api/crush_rule'",
"curl --silent --insecure --user USER 'https:// CEPH_MANAGER :8080/api/crush_rule'",
"python >> import requests >> result = requests.get('https:// CEPH_MANAGER :8080/api/crush_rule', auth=(\" USER \", \" PASSWORD \")) >> print result.json()",
"python >> import requests >> result = requests.get('https:// CEPH_MANAGER :8080/api/crush_rule', auth=(\" USER \", \" PASSWORD \"), verify=False) >> print result.json()",
"https:// CEPH_MANAGER :8080/api/crush_rule",
"curl --silent --user USER 'https:// CEPH_MANAGER :8080/api/monitor'",
"curl --silent --insecure --user USER 'https:// CEPH_MANAGER :8080/api/monitor'",
"python >> import requests >> result = requests.get('https:// CEPH_MANAGER :8080/api/monitor', auth=(\" USER \", \" PASSWORD \")) >> print result.json()",
"python >> import requests >> result = requests.get('https:// CEPH_MANAGER :8080/api/monitor', auth=(\" USER \", \" PASSWORD \"), verify=False) >> print result.json()",
"https:// CEPH_MANAGER :8080/api/monitor",
"curl --silent --user USER 'https:// CEPH_MANAGER :8080/api/monitor/ NAME '",
"curl --silent --insecure --user USER 'https:// CEPH_MANAGER :8080/api/monitor/ NAME '",
"python >> import requests >> result = requests.get('https:// CEPH_MANAGER :8080/api/monitor/ NAME ', auth=(\" USER \", \" PASSWORD \")) >> print result.json()",
"python >> import requests >> result = requests.get('https:// CEPH_MANAGER :8080/api/monitor/ NAME ', auth=(\" USER \", \" PASSWORD \"), verify=False) >> print result.json()",
"https:// CEPH_MANAGER :8080/api/monitor/ NAME",
"curl --silent --user USER 'https:// CEPH_MANAGER :8080/api/osd'",
"curl --silent --insecure --user USER 'https:// CEPH_MANAGER :8080/api/osd'",
"python >> import requests >> result = requests.get('https:// CEPH_MANAGER :8080/api/osd/', auth=(\" USER \", \" PASSWORD \")) >> print result.json()",
"python >> import requests >> result = requests.get('https:// CEPH_MANAGER :8080/api/osd/', auth=(\" USER \", \" PASSWORD \"), verify=False) >> print result.json()",
"https:// CEPH_MANAGER :8080/api/osd",
"curl --silent --user USER 'https:// CEPH_MANAGER :8080/api/osd/ ID '",
"curl --silent --insecure --user USER 'https:// CEPH_MANAGER :8080/api/osd/ ID '",
"python >> import requests >> result = requests.get('https:// CEPH_MANAGER :8080/api/osd/ ID ', auth=(\" USER \", \" PASSWORD \")) >> print result.json()",
"python >> import requests >> result = requests.get('https:// CEPH_MANAGER :8080/api/osd/ ID ', auth=(\" USER \", \" PASSWORD \"), verify=False) >> print result.json()",
"https:// CEPH_MANAGER :8080/api/osd/ ID",
"curl --silent --user USER 'https:// CEPH_MANAGER :8080/api/osd/ ID /command'",
"curl --silent --insecure --user USER 'https:// CEPH_MANAGER :8080/api/osd/ ID /command'",
"python >> import requests >> result = requests.get('https:// CEPH_MANAGER :8080/api/osd/ ID /command', auth=(\" USER \", \" PASSWORD \")) >> print result.json()",
"python >> import requests >> result = requests.get('https:// CEPH_MANAGER :8080/api/osd/ ID /command', auth=(\" USER \", \" PASSWORD \"), verify=False) >> print result.json()",
"https:// CEPH_MANAGER :8080/api/osd/ ID /command",
"curl --silent --user USER 'https:// CEPH_MANAGER :8080/api/pool'",
"curl --silent --insecure --user USER 'https:// CEPH_MANAGER :8080/api/pool'",
"python >> import requests >> result = requests.get('https:// CEPH_MANAGER :8080/api/pool', auth=(\" USER \", \" PASSWORD \")) >> print result.json()",
"python >> import requests >> result = requests.get('https:// CEPH_MANAGER :8080/api/pool', auth=(\" USER \", \" PASSWORD \"), verify=False) >> print result.json()",
"https:// CEPH_MANAGER :8080/api/pool",
"curl --silent --user USER 'https:// CEPH_MANAGER :8080/api/pool/ ID '",
"curl --silent --insecure --user USER 'https:// CEPH_MANAGER :8080/api/pool/ ID '",
"python >> import requests >> result = requests.get('https:// CEPH_MANAGER :8080/api/pool/ ID ', auth=(\" USER \", \" PASSWORD \")) >> print result.json()",
"python >> import requests >> result = requests.get('https:// CEPH_MANAGER :8080/api/pool/ ID ', auth=(\" USER \", \" PASSWORD \"), verify=False) >> print result.json()",
"https:// CEPH_MANAGER :8080/api/pool/ ID",
"curl --silent --user USER 'https:// CEPH_MANAGER :8080/api/host'",
"curl --silent --insecure --user USER 'https:// CEPH_MANAGER :8080/api/host'",
"python >> import requests >> result = requests.get('https:// CEPH_MANAGER :8080/api/host', auth=(\" USER \", \" PASSWORD \")) >> print result.json()",
"python >> import requests >> result = requests.get('https:// CEPH_MANAGER :8080/api/host', auth=(\" USER \", \" PASSWORD \"), verify=False) >> print result.json()",
"https:// CEPH_MANAGER :8080/api/host",
"curl --silent --user USER 'https:// CEPH_MANAGER :8080/api/host/ HOST_NAME '",
"curl --silent --insecure --user USER 'https:// CEPH_MANAGER :8080/api/host/ HOST_NAME '",
"python >> import requests >> result = requests.get('https:// CEPH_MANAGER :8080/api/host/ HOST_NAME ', auth=(\" USER \", \" PASSWORD \")) >> print result.json()",
"python >> import requests >> result = requests.get('https:// CEPH_MANAGER :8080/api/host/ HOST_NAME ', auth=(\" USER \", \" PASSWORD \"), verify=False) >> print result.json()",
"https:// CEPH_MANAGER :8080/api/host/ HOST_NAME",
"echo -En '{\" OPTION \": VALUE }' | curl --request PATCH --data @- --silent --user USER 'https:// CEPH_MANAGER :8080/api/osd/flags'",
"echo -En '{\" OPTION \": VALUE }' | curl --request PATCH --data @- --silent --insecure --user USER 'https:// CEPH_MANAGER :8080/api/osd/flags'",
"python >> import requests >> result = requests.patch('https:// CEPH_MANAGER :8080/api/osd/flags', json={\" OPTION \": VALUE }, auth=(\" USER \", \" PASSWORD \")) >> print result.json()",
"python >> import requests >> result = requests.patch('https:// CEPH_MANAGER :8080/api/osd/flags', json={\" OPTION \": VALUE }, auth=(\" USER \", \" PASSWORD \"), verify=False) >> print result.json()",
"echo -En '{\" STATE \": VALUE }' | curl --request PATCH --data @- --silent --user USER 'https:// CEPH_MANAGER :8080/api/osd/ ID '",
"echo -En '{\" STATE \": VALUE }' | curl --request PATCH --data @- --silent --insecure --user USER 'https:// CEPH_MANAGER :8080/api/osd/ ID '",
"python >> import requests >> result = requests.patch('https:// CEPH_MANAGER :8080/api/osd/ ID ', json={\" STATE \": VALUE }, auth=(\" USER \", \" PASSWORD \")) >> print result.json()",
"python >> import requests >> result = requests.patch('https:// CEPH_MANAGER :8080/api/osd/ ID ', json={\" STATE \": VALUE }, auth=(\" USER \", \" PASSWORD \"), verify=False) >> print result.json()",
"echo -En '{\"reweight\": VALUE }' | curl --request PATCH --data @- --silent --user USER 'https:// CEPH_MANAGER :8080/api/osd/ ID '",
"echo -En '{\"reweight\": VALUE }' | curl --request PATCH --data @- --silent --insecure --user USER 'https:// CEPH_MANAGER :8080/api/osd/ ID '",
"python >> import requests >> result = requests.patch('https:// CEPH_MANAGER :8080/osd/ ID ', json={\"reweight\": VALUE }, auth=(\" USER \", \" PASSWORD \")) >> print result.json()",
"python >> import requests >> result = requests.patch('https:// CEPH_MANAGER :8080/api/osd/ ID ', json={\"reweight\": VALUE }, auth=(\" USER \", \" PASSWORD \"), verify=False) >> print result.json()",
"echo -En '{\" OPTION \": VALUE }' | curl --request PATCH --data @- --silent --user USER 'https:// CEPH_MANAGER :8080/api/pool/ ID '",
"echo -En '{\" OPTION \": VALUE }' | curl --request PATCH --data @- --silent --insecure --user USER 'https:// CEPH_MANAGER :8080/api/pool/ ID '",
"python >> import requests >> result = requests.patch('https:// CEPH_MANAGER :8080/api/pool/ ID ', json={\" OPTION \": VALUE }, auth=(\" USER , \" PASSWORD \")) >> print result.json()",
"python >> import requests >> result = requests.patch('https:// CEPH_MANAGER :8080/api/pool/ ID ', json={\" OPTION \": VALUE }, auth=(\" USER , \" PASSWORD \"), verify=False) >> print result.json()",
"echo -En '{\"command\": \" COMMAND \"}' | curl --request POST --data @- --silent --user USER 'https:// CEPH_MANAGER :8080/api/osd/ ID /command'",
"echo -En '{\"command\": \" COMMAND \"}' | curl --request POST --data @- --silent --insecure --user USER 'https:// CEPH_MANAGER :8080/api/osd/ ID /command'",
"python >> import requests >> result = requests.post('https:// CEPH_MANAGER :8080/api/osd/ ID /command', json={\"command\": \" COMMAND \"}, auth=(\" USER \", \" PASSWORD \")) >> print result.json()",
"python >> import requests >> result = requests.post('https:// CEPH_MANAGER :8080/api/osd/ ID /command', json={\"command\": \" COMMAND \"}, auth=(\" USER \", \" PASSWORD \"), verify=False) >> print result.json()",
"echo -En '{\"name\": \" NAME \", \"pg_num\": NUMBER }' | curl --request POST --data @- --silent --user USER 'https:// CEPH_MANAGER :8080/api/pool'",
"echo -En '{\"name\": \" NAME \", \"pg_num\": NUMBER }' | curl --request POST --data @- --silent --insecure --user USER 'https:// CEPH_MANAGER :8080/api/pool'",
"python >> import requests >> result = requests.post('https:// CEPH_MANAGER :8080/api/pool', json={\"name\": \" NAME \", \"pg_num\": NUMBER }, auth=(\" USER \", \" PASSWORD \")) >> print result.json()",
"python >> import requests >> result = requests.post('https:// CEPH_MANAGER :8080/api/pool', json={\"name\": \" NAME \", \"pg_num\": NUMBER }, auth=(\" USER \", \" PASSWORD \"), verify=False) >> print result.json()",
"mon_allow_pool_delete = true",
"curl --request DELETE --silent --user USER 'https:// CEPH_MANAGER :8080/api/pool/ ID '",
"curl --request DELETE --silent --insecure --user USER 'https:// CEPH_MANAGER :8080/api/pool/ ID '",
"python >> import requests >> result = requests.delete('https:// CEPH_MANAGER :8080/api/pool/ ID ', auth=(\" USER \", \" PASSWORD \")) >> print result.json()",
"python >> import requests >> result = requests.delete('https:// CEPH_MANAGER :8080/api/pool/ ID ', auth=(\" USER \", \" PASSWORD \"), verify=False) >> print result.json()"
] | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/6/html/developer_guide/ceph-restful-api |
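All of the curl, Python, and web-browser variants above follow the same pattern: an HTTP verb sent to a path under /api on the node with the active ceph-mgr instance, HTTP basic authentication, and an optional switch ( --insecure or verify=False ) when the manager uses a self-signed certificate. The following Python sketch consolidates that pattern into a few reusable helpers. It is illustrative only: the manager host name and credentials are placeholders of my own, and the OSD ID, flag name, and command in the commented calls reuse values from the examples above but still need to match your cluster.

# Minimal sketch of a helper around the Ceph RESTful API calls shown above.
# CEPH_MANAGER, USER, and PASSWORD are placeholders; substitute the values
# for your own cluster. Set VERIFY_TLS to False only if the manager uses a
# self-signed certificate (the equivalent of --insecure / verify=False).
import requests

CEPH_MANAGER = "ceph-mgr.example.com"   # node running the active ceph-mgr
PORT = 8080
USER = "api-user"
PASSWORD = "api-password"
VERIFY_TLS = True

BASE = "https://{0}:{1}/api".format(CEPH_MANAGER, PORT)
AUTH = (USER, PASSWORD)


def api_get(path):
    """GET a read-only endpoint, for example 'osd', 'pool/1', or 'host'."""
    response = requests.get("{0}/{1}".format(BASE, path),
                            auth=AUTH, verify=VERIFY_TLS)
    response.raise_for_status()
    return response.json()


def api_patch(path, payload):
    """PATCH an endpoint to change settings, for example OSD flags or a pool."""
    response = requests.patch("{0}/{1}".format(BASE, path),
                              json=payload, auth=AUTH, verify=VERIFY_TLS)
    response.raise_for_status()
    return response.json()


def api_post(path, payload):
    """POST an endpoint, for example to schedule a scrub or create a pool."""
    response = requests.post("{0}/{1}".format(BASE, path),
                             json=payload, auth=AUTH, verify=VERIFY_TLS)
    response.raise_for_status()
    return response.json()


# Example calls mirroring the sections above (IDs and values are placeholders):
# print(api_get("osd/0"))                                 # view one OSD
# print(api_patch("osd/flags", {"noout": True}))          # set an OSD flag
# print(api_post("osd/0/command", {"command": "scrub"}))  # schedule a scrub

Because the helpers only wrap the documented endpoints, the same three functions cover the read-only queries, the PATCH-based configuration changes, and the POST requests used for administering the cluster.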
probe::nfs.proc.lookup | probe::nfs.proc.lookup Name probe::nfs.proc.lookup - NFS client opens/searches a file on server Synopsis nfs.proc.lookup Values bitmask1 V4 bitmask representing the set of attributes supported on this filesystem bitmask0 V4 bitmask representing the set of attributes supported on this filesystem filename the name of file which client opens/searches on server server_ip IP address of server prot transfer protocol name_len the length of file name version NFS version | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-nfs-proc-lookup |
Chapter 7. Monitoring your brokers | Chapter 7. Monitoring your brokers 7.1. Viewing brokers in Fuse Console You can configure an Operator-based broker deployment to use Fuse Console for OpenShift instead of the AMQ Management Console. When you have configured your broker deployment appropriately, Fuse Console discovers the brokers and displays them on a dedicated Artemis tab. You can view the same broker runtime data that you do in the AMQ Management Console. You can also perform the same basic management operations, such as creating addresses and queues. The following procedure describes how to configure the Custom Resource (CR) instance for a broker deployment to enable Fuse Console for OpenShift to discover and display brokers in the deployment. Prerequisites Fuse Console for OpenShift must be deployed to an OCP cluster, or to a specific namespace on that cluster. If you have deployed the console to a specific namespace, your broker deployment must be in the same namespace, to enable the console to discover the brokers. Otherwise, it is sufficient for Fuse Console and the brokers to be deployed on the same OCP cluster. For more information on installing Fuse Online on OCP, see Installing and Operating Fuse Online on OpenShift Container Platform . You must have already created a broker deployment. For example, to learn how to use a Custom Resource (CR) instance to create a basic Operator-based deployment, see Section 3.4.1, "Deploying a basic broker instance" . Procedure Open the CR instance that you used for your broker deployment. For example, the CR for a basic deployment might resemble the following: apiVersion: broker.amq.io/v1beta1 kind: ActiveMQArtemis metadata: name: ex-aao application: ex-aao-app spec: deploymentPlan: size: 4 image: registry.redhat.io/amq7/amq-broker-rhel8:7.10 ... In the deploymentPlan section, add the jolokiaAgentEnabled and managementRBACEnabled properties and specify values, as shown below. apiVersion: broker.amq.io/v1beta1 kind: ActiveMQArtemis metadata: name: ex-aao application: ex-aao-app spec: deploymentPlan: size: 4 image: registry.redhat.io/amq7/amq-broker-rhel8:7.10 ... jolokiaAgentEnabled: true managementRBACEnabled: false jolokiaAgentEnabled Specifies whether Fuse Console can discover and display runtime data for the brokers in the deployment. To use Fuse Console, set the value to true . managementRBACEnabled Specifies whether role-based access control (RBAC) is enabled for the brokers in the deployment. You must set the value to false to use Fuse Console because Fuse Console uses its own role-based access control. Important If you set the value of managementRBACEnabled to false to enable use of Fuse Console, management MBeans for the brokers no longer require authorization. You should not use the AMQ management console while managementRBACEnabled is set to false because this potentially exposes all management operations on the brokers to unauthorized use. Save the CR instance. Switch to the project in which you previously created your broker deployment. At the command line, apply the change. USD oc apply -f <path/to/custom_resource_instance> .yaml In Fuse Console, to view Fuse applications, click the Online tab. To view running brokers, in the left navigation menu, click Artemis . Additional resources For more information about using Fuse Console for OpenShift, see Monitoring and managing Red Hat Fuse applications on OpenShift . 
To learn about using AMQ Management Console to view and manage brokers in the same way that you can in Fuse Console, see Managing brokers using AMQ Management Console . 7.2. Monitoring broker runtime metrics using Prometheus The sections that follow describe how to configure the Prometheus metrics plugin for AMQ Broker on OpenShift Container Platform. You can use the plugin to monitor and store broker runtime metrics. You might also use a graphical tool such as Grafana to configure more advanced visualizations and dashboards of the data that the Prometheus plugin collects. Note The Prometheus metrics plugin enables you to collect and export broker metrics in Prometheus format . However, Red Hat does not provide support for installation or configuration of Prometheus itself, nor of visualization tools such as Grafana. If you require support with installing, configuring, or running Prometheus or Grafana, visit the product websites for resources such as community support and documentation. 7.2.1. Metrics overview To monitor the health and performance of your broker instances, you can use the Prometheus plugin for AMQ Broker to monitor and store broker runtime metrics. The AMQ Broker Prometheus plugin exports the broker runtime metrics to Prometheus format, enabling you to use Prometheus itself to visualize and run queries on the data. You can also use a graphical tool, such as Grafana, to configure more advanced visualizations and dashboards for the metrics that the Prometheus plugin collects. The metrics that the plugin exports to Prometheus format are described below. Broker metrics artemis_address_memory_usage Number of bytes used by all addresses on this broker for in-memory messages. artemis_address_memory_usage_percentage Memory used by all the addresses on this broker as a percentage of the global-max-size parameter. artemis_connection_count Number of clients connected to this broker. artemis_total_connection_count Number of clients that have connected to this broker since it was started. Address metrics artemis_routed_message_count Number of messages routed to one or more queue bindings. artemis_unrouted_message_count Number of messages not routed to any queue bindings. Queue metrics artemis_consumer_count Number of clients consuming messages from a given queue. artemis_delivering_durable_message_count Number of durable messages that a given queue is currently delivering to consumers. artemis_delivering_durable_persistent_size Persistent size of durable messages that a given queue is currently delivering to consumers. artemis_delivering_message_count Number of messages that a given queue is currently delivering to consumers. artemis_delivering_persistent_size Persistent size of messages that a given queue is currently delivering to consumers. artemis_durable_message_count Number of durable messages currently in a given queue. This includes scheduled, paged, and in-delivery messages. artemis_durable_persistent_size Persistent size of durable messages currently in a given queue. This includes scheduled, paged, and in-delivery messages. artemis_messages_acknowledged Number of messages acknowledged from a given queue since the queue was created. artemis_messages_added Number of messages added to a given queue since the queue was created. artemis_message_count Number of messages currently in a given queue. This includes scheduled, paged, and in-delivery messages. artemis_messages_killed Number of messages removed from a given queue since the queue was created. 
The broker kills a message when the message exceeds the configured maximum number of delivery attempts. artemis_messages_expired Number of messages expired from a given queue since the queue was created. artemis_persistent_size Persistent size of all messages (both durable and non-durable) currently in a given queue. This includes scheduled, paged, and in-delivery messages. artemis_scheduled_durable_message_count Number of durable, scheduled messages in a given queue. artemis_scheduled_durable_persistent_size Persistent size of durable, scheduled messages in a given queue. artemis_scheduled_message_count Number of scheduled messages in a given queue. artemis_scheduled_persistent_size Persistent size of scheduled messages in a given queue. For higher-level broker metrics that are not listed above, you can calculate these by aggregating lower-level metrics. For example, to calculate total message count, you can aggregate the artemis_message_count metrics from all queues in your broker deployment. For an on-premise deployment of AMQ Broker, metrics for the Java Virtual Machine (JVM) hosting the broker are also exported to Prometheus format. This does not apply to a deployment of AMQ Broker on OpenShift Container Platform. 7.2.2. Enabling the Prometheus plugin using a CR When you install AMQ Broker, a Prometheus metrics plugin is included in your installation. When enabled, the plugin collects runtime metrics for the broker and exports these to Prometheus format. The following procedure shows how to enable the Prometheus plugin for AMQ Broker using a CR. This procedure supports new and existing deployments of AMQ Broker 7.9 or later. See Section 7.2.3, "Enabling the Prometheus plugin for a running broker deployment using an environment variable" for an alternative procedure with running brokers. Procedure Open the CR instance that you use for your broker deployment. For example, the CR for a basic deployment might resemble the following: apiVersion: broker.amq.io/v1beta1 kind: ActiveMQArtemis metadata: name: ex-aao application: ex-aao-app spec: deploymentPlan: size: 4 image: registry.redhat.io/amq7/amq-broker-rhel8:7.10 ... In the deploymentPlan section, add the enableMetricsPlugin property and set the value to true , as shown below. apiVersion: broker.amq.io/v1beta1 kind: ActiveMQArtemis metadata: name: ex-aao application: ex-aao-app spec: deploymentPlan: size: 4 image: registry.redhat.io/amq7/amq-broker-rhel8:7.10 ... enableMetricsPlugin: true enableMetricsPlugin Specifies whether the Prometheus plugin is enabled for the brokers in the deployment. Save the CR instance. Switch to the project in which you previously created your broker deployment. At the command line, apply the change. USD oc apply -f <path/to/custom_resource_instance> .yaml The metrics plugin starts to gather broker runtime metrics in Prometheus format. Additional resources For information about updating a running broker, see Section 3.4.3, "Applying Custom Resource changes to running broker deployments" . 7.2.3. Enabling the Prometheus plugin for a running broker deployment using an environment variable The following procedure shows how to enable the Prometheus plugin for AMQ Broker using an environment variable. See Section 7.2.2, "Enabling the Prometheus plugin using a CR" for an alternative procedure. Prerequisites You can enable the Prometheus plugin for a broker Pod created with the AMQ Broker Operator. However, your deployed broker must use the broker container image for AMQ Broker 7.7 or later. 
Procedure Log in to the OpenShift Container Platform web console with administrator privileges for the project that contains your broker deployment. In the web console, click Home Projects . Choose the project that contains your broker deployment. To see the StatefulSets or DeploymentConfigs in your project, click Workloads StatefulSets or Workloads DeploymentConfigs . Click the StatefulSet or DeploymentConfig that corresponds to your broker deployment. To access the environment variables for your broker deployment, click the Environment tab. Add a new environment variable, AMQ_ENABLE_METRICS_PLUGIN . Set the value of the variable to true . When you set the AMQ_ENABLE_METRICS_PLUGIN environment variable, OpenShift restarts each broker Pod in the StatefulSet or DeploymentConfig. When there are multiple Pods in the deployment, OpenShift restarts each Pod in turn. When each broker Pod restarts, the Prometheus plugin for that broker starts to gather broker runtime metrics. 7.2.4. Accessing Prometheus metrics for a running broker Pod This procedure shows how to access Prometheus metrics for a running broker Pod. Prerequisites You must have already enabled the Prometheus plugin for your broker Pod. See Section 7.2.3, "Enabling the Prometheus plugin for a running broker deployment using an environment variable" . Procedure For the broker Pod whose metrics you want to access, you need to identify the Route you previously created to connect the Pod to the AMQ Broker management console. The Route name forms part of the URL needed to access the metrics. Click Networking Routes . For your chosen broker Pod, identify the Route created to connect the Pod to the AMQ Broker management console. Under Hostname , note the complete URL that is shown. For example: To access Prometheus metrics, in a web browser, enter the previously noted Route name appended with "/metrics" . For example: Note If your console configuration does not use SSL, specify http in the URL. In this case, DNS resolution of the host name directs traffic to port 80 of the OpenShift router. If your console configuration uses SSL, specify https in the URL. In this case, your browser defaults to port 443 of the OpenShift router. This enables a successful connection to the console if the OpenShift router also uses port 443 for SSL traffic, which the router does by default. 7.3. Monitoring broker runtime data using JMX This example shows how to monitor a broker using the Jolokia REST interface to JMX. Prerequisites Completion of Deploying a basic broker is recommended. Procedure Get the list of running pods: Run the oc logs command: Run your query to monitor your broker for MaxConsumers : | [
"apiVersion: broker.amq.io/v1beta1 kind: ActiveMQArtemis metadata: name: ex-aao application: ex-aao-app spec: deploymentPlan: size: 4 image: registry.redhat.io/amq7/amq-broker-rhel8:7.10",
"apiVersion: broker.amq.io/v1beta1 kind: ActiveMQArtemis metadata: name: ex-aao application: ex-aao-app spec: deploymentPlan: size: 4 image: registry.redhat.io/amq7/amq-broker-rhel8:7.10 jolokiaAgentEnabled: true managementRBACEnabled: false",
"oc project <project_name>",
"oc apply -f <path/to/custom_resource_instance> .yaml",
"apiVersion: broker.amq.io/v1beta1 kind: ActiveMQArtemis metadata: name: ex-aao application: ex-aao-app spec: deploymentPlan: size: 4 image: registry.redhat.io/amq7/amq-broker-rhel8:7.10",
"apiVersion: broker.amq.io/v1beta1 kind: ActiveMQArtemis metadata: name: ex-aao application: ex-aao-app spec: deploymentPlan: size: 4 image: registry.redhat.io/amq7/amq-broker-rhel8:7.10 enableMetricsPlugin: true",
"oc project <project_name>",
"oc apply -f <path/to/custom_resource_instance> .yaml",
"http://rte-console-access-pod1.openshiftdomain",
"http://rte-console-access-pod1.openshiftdomain/metrics",
"oc get pods NAME READY STATUS RESTARTS AGE ex-aao-ss-1 1/1 Running 0 14d",
"oc logs -f ex-aao-ss-1 Running Broker in /home/jboss/amq-broker 2021-09-17 09:35:10,813 INFO [org.apache.activemq.artemis.integration.bootstrap] AMQ101000: Starting ActiveMQ Artemis Server 2021-09-17 09:35:10,882 INFO [org.apache.activemq.artemis.core.server] AMQ221000: live Message Broker is starting with configuration Broker Configuration (clustered=true,journalDirectory=data/journal,bindingsDirectory=data/bindings,largeMessagesDirectory=data/large-messages,pagingDirectory=data/paging) 2021-09-17 09:35:10,971 INFO [org.apache.activemq.artemis.core.server] AMQ221013: Using NIO Journal 2021-09-17 09:35:11,114 INFO [org.apache.activemq.artemis.core.server] AMQ221057: Global Max Size is being adjusted to 1/2 of the JVM max size (-Xmx). being defined as 2,566,914,048 2021-09-17 09:35:11,369 WARNING [org.jgroups.stack.Configurator] JGRP000014: BasicTCP.use_send_queues has been deprecated: will be removed in 4.0 2021-09-17 09:35:11,385 WARNING [org.jgroups.stack.Configurator] JGRP000014: Discovery.timeout has been deprecated: GMS.join_timeout should be used instead 2021-09-17 09:35:11,480 INFO [org.jgroups.protocols.openshift.DNS_PING] serviceName [ex-aao-ping-svc] set; clustering enabled 2021-09-17 09:35:24,540 INFO [org.openshift.ping.common.Utils] 3 attempt(s) with a 1000ms sleep to execute [GetServicePort] failed. Last failure was [javax.naming.CommunicationException: DNS error] 2021-09-17 09:35:25,044 INFO [org.apache.activemq.artemis.core.server] AMQ221034: Waiting indefinitely to obtain live lock 2021-09-17 09:35:25,045 INFO [org.apache.activemq.artemis.core.server] AMQ221035: Live Server Obtained live lock 2021-09-17 09:35:25,206 INFO [org.apache.activemq.artemis.core.server] AMQ221080: Deploying address DLQ supporting [ANYCAST] 2021-09-17 09:35:25,240 INFO [org.apache.activemq.artemis.core.server] AMQ221003: Deploying ANYCAST queue DLQ on address DLQ 2021-09-17 09:35:25,360 INFO [org.apache.activemq.artemis.core.server] AMQ221080: Deploying address ExpiryQueue supporting [ANYCAST] 2021-09-17 09:35:25,362 INFO [org.apache.activemq.artemis.core.server] AMQ221003: Deploying ANYCAST queue ExpiryQueue on address ExpiryQueue 2021-09-17 09:35:25,656 INFO [org.apache.activemq.artemis.core.server] AMQ221020: Started EPOLL Acceptor at ex-aao-ss-1.ex-aao-hdls-svc.broker.svc.cluster.local:61616 for protocols [CORE] 2021-09-17 09:35:25,660 INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live 2021-09-17 09:35:25,660 INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version 2.16.0.redhat-00022 [amq-broker, nodeID=8d886031-179a-11ec-9e02-0a580ad9008b] 2021-09-17 09:35:26,470 INFO [org.apache.amq.hawtio.branding.PluginContextListener] Initialized amq-broker-redhat-branding plugin 2021-09-17 09:35:26,656 INFO [org.apache.activemq.hawtio.plugin.PluginContextListener] Initialized artemis-plugin plugin",
"curl -k -u admin:admin http://console-broker.amq-demo.apps.example.com/console/jolokia/read/org.apache.activemq.artemis:broker=%22broker%22,component=addresses,address=%22TESTQUEUE%22,subcomponent=queues,routing-type=%22anycast%22,queue=%22TESTQUEUE%22/MaxConsumers {\"request\":{\"mbean\":\"org.apache.activemq.artemis:address=\\\"TESTQUEUE\\\",broker=\\\"broker\\\",component=addresses,queue=\\\"TESTQUEUE\\\",routing-type=\\\"anycast\\\",subcomponent=queues\",\"attribute\":\"MaxConsumers\",\"type\":\"read\"},\"value\":-1,\"timestamp\":1528297825,\"status\":200}"
] | https://docs.redhat.com/en/documentation/red_hat_amq_broker/7.10/html/deploying_amq_broker_on_openshift/assembly_br-broker-monitoring_broker-ocp |
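The Prometheus metrics route and the Jolokia endpoint shown above can both be polled by any HTTP client, which makes simple scripted health checks easy. The following Python sketch is illustrative only: it reuses the example route host names, the admin/admin credentials, and the TESTQUEUE address from the examples above, all of which must be replaced with values from your own deployment, and it assumes the routes are reachable over plain HTTP as in those examples.

# Minimal monitoring sketch, assuming the Prometheus plugin is enabled and
# Routes to the broker exist as described above. Host names, credentials,
# and the queue name are the placeholder values from this chapter.
import requests

METRICS_URL = "http://rte-console-access-pod1.openshiftdomain/metrics"
JOLOKIA_URL = ("http://console-broker.amq-demo.apps.example.com/console/jolokia/read/"
               "org.apache.activemq.artemis:broker=%22broker%22,component=addresses,"
               "address=%22TESTQUEUE%22,subcomponent=queues,routing-type=%22anycast%22,"
               "queue=%22TESTQUEUE%22/MaxConsumers")
AUTH = ("admin", "admin")


def broker_metrics():
    """Return the artemis_* lines exposed by the Prometheus metrics plugin."""
    response = requests.get(METRICS_URL)
    response.raise_for_status()
    return [line for line in response.text.splitlines()
            if line.startswith("artemis_")]


def max_consumers():
    """Read a single MBean attribute through the Jolokia REST interface to JMX."""
    response = requests.get(JOLOKIA_URL, auth=AUTH)
    response.raise_for_status()
    return response.json()["value"]


if __name__ == "__main__":
    for line in broker_metrics():
        print(line)
    print("TESTQUEUE MaxConsumers:", max_consumers())

Filtering on the artemis_ prefix keeps only the broker metrics described in Section 7.2.1; aggregating a metric such as artemis_message_count across queues then gives a broker-wide total.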
Chapter 13. The Hot Rod Interface | Chapter 13. The Hot Rod Interface 13.1. About Hot Rod Hot Rod is a binary TCP client-server protocol used in Red Hat JBoss Data Grid. It was created to overcome deficiencies in other client/server protocols, such as Memcached. Hot Rod fails over when a server cluster undergoes a topology change. Hot Rod achieves this by providing regular updates to clients about the cluster topology. Hot Rod enables clients to do smart routing of requests in partitioned or distributed JBoss Data Grid server clusters. To do this, Hot Rod allows clients to determine the partition that houses a key and then communicate directly with the server that has the key. This functionality relies on Hot Rod updating the cluster topology with clients, and on the clients using the same consistent hash algorithm as the servers. JBoss Data Grid contains a server module that implements the Hot Rod protocol. The Hot Rod protocol facilitates faster client and server interactions in comparison to other text-based protocols and allows clients to make decisions about load balancing, failover and data location operations. | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/chap-the_hot_rod_interface
Red Hat Data Grid | Red Hat Data Grid Data Grid is a high-performance, distributed in-memory data store. Schemaless data structure Flexibility to store different objects as key-value pairs. Grid-based data storage Designed to distribute and replicate data across clusters. Elastic scaling Dynamically adjust the number of nodes to meet demand without service disruption. Data interoperability Store, retrieve, and query data in the grid from different endpoints. | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/data_grid_operator_guide/red-hat-data-grid |
8.9. akonadi | 8.9. akonadi 8.9.1. RHBA-2014:0539 - akonadi bug fix update Updated akonadi packages that fix one bug are now available for Red Hat Enterprise Linux 6. Akonadi is a storage service for personal information management (PIM) data and metadata. The service provides unique desktop-wide object identification and retrieval, and functions as an extensible data storage for all PIM applications. Bug Fix BZ# 1073939 Previously, the Akonadi service used the hard-coded ~/.local/share/akonadi socket directory. As a consequence, the Akonadi server did not start if the home directory was located on Andrew File System (AFS), which did not support the creation of UNIX sockets. With this update, the directory that holds the sockets has been changed to '/tmp/[username]-akonadi.[random]'. As a result, Akonadi starts on systems with the home directory on AFS as expected. Users of akonadi are advised to upgrade to these updated packages, which fix this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/akonadi |
Chapter 22. Control Bus | Chapter 22. Control Bus Only producer is supported The Control Bus from the EIP patterns allows for the integration system to be monitored and managed from within the framework. Use a Control Bus to manage an enterprise integration system. The Control Bus uses the same messaging mechanism used by the application data, but uses separate channels to transmit data that is relevant to the management of components involved in the message flow. In Camel you can manage and monitor using JMX, or by using a Java API from the CamelContext , or from the org.apache.camel.api.management package, or use the event notifier which has an example here. The ControlBus component provides easy management of Camel applications based on the Control Bus EIP pattern. For example, by sending a message to an Endpoint you can control the lifecycle of routes, or gather performance statistics. Where command can be any string to identify which type of command to use. 22.1. Commands Command Description route To control routes using the routeId and action parameter. language Allows you to specify a language to use for evaluating the message body. If there is any result from the evaluation, then the result is put in the message body. 22.2. Dependencies When using controlbus with Red Hat build of Camel Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-controlbus-starter</artifactId> </dependency> 22.3. Configuring Options Camel components are configured on two levels: Component level Endpoint level 22.3.1. Component Level Options The component level is the highest level. The configurations you define at this level are inherited by all the endpoints. For example, a component can have security settings, credentials for authentication, urls for network connection, and so on. Since components typically have pre-configured defaults for the most common cases, you may need to only configure a few component options, or maybe none at all. You can configure components with Component DSL in a configuration file (application.properties|yaml), or directly with Java code. 22.3.2. Endpoint Level Options At the Endpoint level you have many options, which you can use to configure what you want the endpoint to do. The options are categorized according to whether the endpoint is used as a consumer (from) or as a producer (to) or used for both. You can configure endpoints directly in the endpoint URI as path and query parameters. You can also use Endpoint DSL and DataFormat DSL as type safe ways of configuring endpoints and data formats in Java. When configuring options, use Property Placeholders for urls, port numbers, sensitive information, and other settings. Placeholders allow you to externalize the configuration from your code, giving you more flexible and reusable code. 22.4. Component Options The Control Bus component supports 2 options, which are listed below. Name Description Default Type lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers.
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean 22.5. Endpoint Options The Control Bus endpoint is configured using URI syntax: with the following path and query parameters: 22.5.1. Path Parameters (2 parameters) Name Description Default Type command (producer) Required Command can be either route or language. Enum values: route language String language (producer) Allows you to specify the name of a Language to use for evaluating the message body. If there is any result from the evaluation, then the result is put in the message body. Enum values: bean constant el exchangeProperty file groovy header jsonpath mvel ognl ref simple spel sql terser tokenize xpath xquery xtokenize Language 22.5.1.1. Query Parameters (6 parameters) Name Description Default Type action (producer) To denote an action that can be either: start, stop, or status. To either start or stop a route, or to get the status of the route as output in the message body. You can use suspend and resume from Camel 2.11.1 onwards to either suspend or resume a route. And from Camel 2.11.1 onwards you can use stats to get performance statics returned in XML format; the routeId option can be used to define which route to get the performance stats for, if routeId is not defined, then you get statistics for the entire CamelContext. The restart action will restart the route. Enum values: start stop suspend resume restart status stats String async (producer) Whether to execute the control bus task asynchronously. Important: If this option is enabled, then any result from the task is not set on the Exchange. This is only possible if executing tasks synchronously. false boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean loggingLevel (producer) Logging level used for logging when task is done, or if any exceptions occurred during processing the task. Enum values: TRACE DEBUG INFO WARN ERROR OFF INFO LoggingLevel restartDelay (producer) The delay in millis to use when restarting a route. 1000 int routeId (producer) To specify a route by its id. The special keyword current indicates the current route. String 22.6. 
Using route command The route command allows you to do common tasks on a given route very easily, for example to start a route, you can send an empty message to this endpoint: template.sendBody("controlbus:route?routeId=foo&action=start", null); To get the status of the route, you can do: String status = template.requestBody("controlbus:route?routeId=foo&action=status", null, String.class); 22.7. Getting performance statistics This requires JMX to be enabled (it is enabled by default); you can then get the performance statistics per route, or for the CamelContext. For example to get the statistics for a route named foo, we can do: String xml = template.requestBody("controlbus:route?routeId=foo&action=stats", null, String.class); The returned statistics are in XML format. It is the same data you can get from JMX with the dumpRouteStatsAsXml operation on the ManagedRouteMBean . To get statistics for the entire CamelContext you just omit the routeId parameter as shown below: String xml = template.requestBody("controlbus:route?action=stats", null, String.class); 22.8. Using Simple language You can use the Simple language with the control bus, for example to stop a specific route, you can send a message to the "controlbus:language:simple" endpoint containing the following message: template.sendBody("controlbus:language:simple", "${camelContext.getRouteController().stopRoute('myRoute')}"); As this is a void operation, no result is returned. However, if you want the route status you can do: String status = template.requestBody("controlbus:language:simple", "${camelContext.getRouteStatus('myRoute')}", String.class); It is easier to use the route command to control the lifecycle of routes. The language command allows you to execute a language script that has stronger powers such as Groovy or to some extent the Simple language. For example to shutdown Camel itself you can do: template.sendBody("controlbus:language:simple?async=true", "${camelContext.stop()}"); We use async=true to stop Camel asynchronously as otherwise we would be trying to stop Camel while it was in-flight processing the message we sent to the control bus component. Note You can also use other languages such as Groovy , etc. 22.9. Spring Boot Auto-Configuration The component supports 3 options, which are listed below. Name Description Default Type camel.component.controlbus.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.controlbus.enabled Whether to enable auto configuration of the controlbus component. This is enabled by default. Boolean camel.component.controlbus.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean | [
"controlbus:command[?options]",
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-controlbus-starter</artifactId> </dependency>",
"controlbus:command:language",
"template.sendBody(\"controlbus:route?routeId=foo&action=start\", null);",
"String status = template.requestBody(\"controlbus:route?routeId=foo&action=status\", null, String.class);",
"String xml = template.requestBody(\"controlbus:route?routeId=foo&action=stats\", null, String.class);",
"String xml = template.requestBody(\"controlbus:route?action=stats\", null, String.class);",
"template.sendBody(\"controlbus:language:simple\", \"USD{camelContext.getRouteController().stopRoute('myRoute')}\");",
"String status = template.requestBody(\"controlbus:language:simple\", \"USD{camelContext.getRouteStatus('myRoute')}\", String.class);",
"template.sendBody(\"controlbus:language:simple?async=true\", \"USD{camelContext.stop()}\");"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-control-bus-component-starter |
Chapter 15. Socket Tapset | Chapter 15. Socket Tapset This family of probe points is used to probe socket activities. It contains the following probe points: | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/socket-dot-stp |
Chapter 7. Configuring Compute nodes for performance | Chapter 7. Configuring Compute nodes for performance You can configure the scheduling and placement of instances for optimal performance by creating customized flavors to target specialized workloads, including NFV and High Performance Computing (HPC). Use the following features to tune your instances for optimal performance: CPU pinning: Pin virtual CPUs to physical CPUs. Emulator threads: Pin emulator threads associated with the instance to physical CPUs. Huge pages: Tune instance memory allocation policies both for normal memory (4k pages) and huge pages (2 MB or 1 GB pages). Note Configuring any of these features creates an implicit NUMA topology on the instance if there is no NUMA topology already present. 7.1. Configuring CPU pinning on the Compute node You can configure instances to run on dedicated host CPUs. Enabling CPU pinning implicitly configures a guest NUMA topology. Each NUMA node of this NUMA topology maps to a separate host NUMA node. For more information about NUMA, see CPUs and NUMA nodes in the Network Functions Virtualization Product Guide . Configure CPU pinning on your Compute node based on the NUMA topology of your host system. Reserve some CPU cores across all the NUMA nodes for the host processes for efficiency. Assign the remaining CPU cores to managing your instances. The following example illustrates eight CPU cores spread across two NUMA nodes. Table 7.1. Example of NUMA Topology NUMA Node 0 NUMA Node 1 Core 0 Core 1 Core 2 Core 3 Core 4 Core 5 Core 6 Core 7 You can schedule dedicated (pinned) and shared (unpinned) instances on the same Compute node. The following procedure reserves cores 0 and 4 for host processes, cores 1, 3, 5 and 7 for instances that require CPU pinning, and cores 2 and 6 for floating instances that do not require CPU pinning. Note If the host supports simultaneous multithreading (SMT), group thread siblings together in either the dedicated or the shared set. Thread siblings share some common hardware which means it is possible for a process running on one thread sibling to impact the performance of the other thread sibling. For example, the host identifies four CPUs in a dual core CPU with SMT: 0, 1, 2, and 3. Of these four, there are two pairs of thread siblings: Thread sibling 1: CPUs 0 and 2 Thread sibling 2: CPUs 1 and 3 In this scenario, you should not assign CPUs 0 and 1 as dedicated and 2 and 3 as shared. Instead, you should assign 0 and 2 as dedicated and 1 and 3 as shared. Prerequisite You know the NUMA topology of your Compute node. For more information, see Discovering your NUMA node topology in the Network Functions Virtualization Planning and Configuration Guide . Procedure Reserve physical CPU cores for the dedicated instances by setting the NovaComputeCpuDedicatedSet configuration in the Compute environment file for each Compute node: Reserve physical CPU cores for the shared instances by setting the NovaComputeCpuSharedSet configuration in the Compute environment file for each Compute node: Set the NovaReservedHostMemory option in the same files to the amount of RAM to reserve for host processes. For example, if you want to reserve 512 MB, use: To ensure that host processes do not run on the CPU cores reserved for instances, set the parameter IsolCpusList in each Compute environment file to the CPU cores you have reserved for instances. Specify the value of the IsolCpusList parameter using a list, or ranges, of CPU indices separated by a whitespace. 
To filter out hosts based on its NUMA topology, add NUMATopologyFilter to the NovaSchedulerDefaultFilters parameter in each Compute environment file. To apply this configuration, add the environment file(s) to your deployment command and deploy the overcloud: 7.1.1. Upgrading CPU pinning configuration From Red Hat OpenStack Platform (RHOSP) 16+ it is not necessary to use host aggregates to ensure dedicated (pinned) and shared (unpinned) instance types run on separate hosts. Also, the [DEFAULT] reserved_host_cpus config option is no longer necessary and can be unset. To upgrade your CPU pinning configuration from earlier versions of RHOSP: Migrate the value of NovaVcpuPinSet to NovaComputeCpuDedicatedSet for hosts that were previously used for pinned instances. Migrate the value of NovaVcpuPinSet to NovaComputeCpuSharedSet for hosts that were previously used for unpinned instances. If there is no value set for NovaVcpuPinSet , then all host cores should be assigned to either NovaComputeCpuDedicatedSet or NovaComputeCpuSharedSet , depending on the type of instance running there. Once the upgrade is complete, it is possible to start setting both options on the same host. However, to do this, all the instances should be migrated from the host, as the Compute service cannot start when cores for an unpinned instance are not listed in NovaComputeCpuSharedSet , or when cores for a pinned instance are not listed in NovaComputeCpuDedicatedSet . 7.1.2. Launching an instance with CPU pinning You can launch an instance that uses CPU pinning by specifying a flavor with a dedicated CPU policy. Prerequisites Simultaneous multithreading (SMT) is enabled on the host. The Compute node is configured to allow CPU pinning. For more information, see Configuring CPU pinning on the Compute node . Procedure Create a flavor for instances that require CPU pinning: To request pinned CPUs, set the hw:cpu_policy property of the flavor to dedicated : To place each vCPU on thread siblings, set the hw:cpu_thread_policy property of the flavor to require : Note If the host does not have an SMT architecture or enough CPU cores with available thread siblings, scheduling will fail. To prevent this, set hw:cpu_thread_policy to prefer instead of require . The (default) prefer policy ensures that thread siblings are used when available. If you use cpu_thread_policy=isolate , you must have SMT disabled or use a platform that does not support SMT. Create an instance using the new flavor: To verify correct placement of the new instance, run the following command and check for OS-EXT-SRV-ATTR:hypervisor_hostname in the output: 7.1.3. Launching a floating instance You can launch an instance that is placed on a floating CPU by specifying a flavor with a shared CPU policy. Prerequisites The Compute node is configured to reserve physical CPU cores for the floating instances. For more information, see Configuring CPU pinning on the Compute node . Procedure Create a flavor for instances that do not require CPU pinning: To request floating CPUs, set the hw:cpu_policy property of the flavor to shared : Create an instance using the new flavor: To verify correct placement of the new instance, run the following command and check for OS-EXT-SRV-ATTR:hypervisor_hostname in the output: 7.2. Configuring huge pages on the Compute node Configure the Compute node to enable instances to request huge pages. 
Procedure Configure the amount of huge page memory to reserve on each NUMA node for processes that are not instances: Where: Attribute Description size The size of the allocated huge page. Valid values: * 2048 (for 2MB) * 1GB count The number of huge pages used by OVS per NUMA node. For example, for 4096 of socket memory used by Open vSwitch, set this to 2. (Optional) To allow instances to allocate 1GB huge pages, configure the CPU feature flags, cpu_model_extra_flags , to include "pdpe1gb": Note CPU feature flags do not need to be configured to allow instances to only request 2 MB huge pages. You can only allocate 1G huge pages to an instance if the host supports 1G huge page allocation. You only need to set cpu_model_extra_flags to pdpe1gb when cpu_mode is set to host-model or custom . If the host supports pdpe1gb , and host-passthrough is used as the cpu_mode , then you do not need to set pdpe1gb as a cpu_model_extra_flags . The pdpe1gb flag is only included in Opteron_G4 and Opteron_G5 CPU models, it is not included in any of the Intel CPU models supported by QEMU. To mitigate for CPU hardware issues, such as Microarchitectural Data Sampling (MDS), you might need to configure other CPU flags. For more information, see RHOS Mitigation for MDS ("Microarchitectural Data Sampling") Security Flaws . To avoid loss of performance after applying Meltdown protection, configure the CPU feature flags, cpu_model_extra_flags , to include "+pcid": Tip For more information, see Reducing the performance impact of Meltdown CVE fixes for OpenStack guests with "PCID" CPU feature flag . Add NUMATopologyFilter to the NovaSchedulerDefaultFilters parameter in each Compute environment file, if not already present. Apply this huge page configuration by adding the environment file(s) to your deployment command and deploying the overcloud: 7.2.1. Allocating huge pages to instances Create a flavor with the hw:mem_page_size extra specification key to specify that the instance should use huge pages. Prerequisites The Compute node is configured for huge pages. For more information, see Configuring huge pages on the Compute node . Procedure Create a flavor for instances that require huge pages: Set the flavor for huge pages: Valid values for hw:mem_page_size : large - Selects the largest page size supported on the host, which may be 2 MB or 1 GB on x86_64 systems. small - (Default) Selects the smallest page size supported on the host. On x86_64 systems this is 4 kB (normal pages). any - Selects the largest available huge page size, as determined by the libvirt driver. <pagesize>: (string) Set an explicit page size if the workload has specific requirements. Use an integer value for the page size in KB, or any standard suffix. For example: 4KB, 2MB, 2048, 1GB. Create an instance using the new flavor: Validation The scheduler identifies a host with enough free huge pages of the required size to back the memory of the instance. If the scheduler is unable to find a host and NUMA node with enough pages, then the request will fail with a NoValidHost error. | [
"NovaComputeCpuDedicatedSet: 1,3,5,7",
"NovaComputeCpuSharedSet: 2,6",
"NovaReservedHostMemory: 512",
"IsolCpusList: 1 2 3 5 6 7",
"(undercloud) USD openstack overcloud deploy --templates -e [your environment files] -e /home/stack/templates/<compute_environment_file>.yaml",
"(overcloud) USD openstack flavor create --ram <size-mb> --disk <size-gb> --vcpus <no_reserved_vcpus> pinned_cpus",
"(overcloud) USD openstack flavor set --property hw:cpu_policy=dedicated pinned_cpus",
"(overcloud) USD openstack flavor set --property hw:cpu_thread_policy=require pinned_cpus",
"(overcloud) USD openstack server create --flavor pinned_cpus --image <image> pinned_cpu_instance",
"(overcloud) USD openstack server show pinned_cpu_instance",
"(overcloud) USD openstack flavor create --ram <size-mb> --disk <size-gb> --vcpus <no_reserved_vcpus> floating_cpus",
"(overcloud) USD openstack flavor set --property hw:cpu_policy=shared floating_cpus",
"(overcloud) USD openstack server create --flavor floating_cpus --image <image> floating_cpu_instance",
"(overcloud) USD openstack server show floating_cpu_instance",
"parameter_defaults: NovaReservedHugePages: [\"node:0,size:2048,count:64\",\"node:1,size:1GB,count:1\"]",
"parameter_defaults: ComputeExtraConfig: nova::compute::libvirt::libvirt_cpu_mode: 'custom' nova::compute::libvirt::libvirt_cpu_model: 'Haswell-noTSX' nova::compute::libvirt::libvirt_cpu_model_extra_flags: 'vmx, pdpe1gb'",
"parameter_defaults: ComputeExtraConfig: nova::compute::libvirt::libvirt_cpu_mode: 'custom' nova::compute::libvirt::libvirt_cpu_model: 'Haswell-noTSX' nova::compute::libvirt::libvirt_cpu_model_extra_flags: 'vmx, pdpe1gb, +pcid'",
"(undercloud) USD openstack overcloud deploy --templates -e [your environment files] -e /home/stack/templates/<compute_environment_file>.yaml",
"openstack flavor create --ram <size-mb> --disk <size-gb> --vcpus <no_reserved_vcpus> huge_pages",
"openstack flavor set huge_pages --property hw:mem_page_size=1GB",
"openstack server create --flavor huge_pages --image <image> huge_pages_instance"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/instances_and_images_guide/ch-compute-performance |
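A minimal consolidated sketch of a Compute environment file that brings together the CPU pinning and huge page parameters described in the chapter above. The file name custom-compute-tuning.yaml is an assumption, and the core numbers simply reuse the worked example (cores 1, 3, 5 and 7 dedicated, cores 2 and 6 shared, cores 0 and 4 left to the host); adjust them to your own NUMA topology rather than treating this as a definitive template.

# custom-compute-tuning.yaml (hypothetical file name)
parameter_defaults:
  NovaComputeCpuDedicatedSet: 1,3,5,7        # cores for pinned instances
  NovaComputeCpuSharedSet: 2,6               # cores for floating (unpinned) instances
  NovaReservedHostMemory: 512                # MB of RAM reserved for host processes
  IsolCpusList: 1 2 3 5 6 7                  # keep host processes off the instance cores
  NovaReservedHugePages:                     # huge pages reserved per NUMA node for non-instance processes
    - node:0,size:2048,count:64
    - node:1,size:1GB,count:1
  # NovaSchedulerDefaultFilters must also include NUMATopologyFilter; append it to your existing filter list.

Passing this file with -e to the openstack overcloud deploy command, as in the deployment commands listed above, applies both sets of tuning in one run; whether you keep CPU pinning and huge page settings in one file or several is a deployment choice.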
5.151. libhbalinux | 5.151. libhbalinux 5.151.1. RHEA-2012:0848 - libhbalinux enhancement update An updated libhbalinux package that fixes multiple bugs and adds various enhancements is now available for Red Hat Enterprise Linux 6. The libhbalinux package contains the Host Bus Adapter API (HBAAPI) vendor library, which uses standard kernel interfaces to obtain information about Fibre Channel Host Buses (FC HBA) in the system. The libhbalinux packages have been upgraded to upstream version 1.0.13, which provides a number of bug fixes and enhancements over the previous version. (BZ#719584) All users of libhbalinux are advised to upgrade to this updated libhbalinux package, which fixes these bugs and adds these enhancements. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/libhbalinux
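To see the kind of information this library exposes, you can read the standard fc_host attributes in sysfs directly. This is only an illustrative sketch and not part of the erratum: it assumes an FC HBA is present and that the fc_host class is populated on your kernel.

# List basic attributes for every Fibre Channel host adapter known to the kernel
for h in /sys/class/fc_host/host*; do
    echo "== ${h##*/} =="
    for attr in port_name node_name port_state speed; do
        printf '%s: %s\n' "$attr" "$(cat "$h/$attr" 2>/dev/null)"
    done
done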
2.9.3. Troubleshooting GFS2 Performance with the GFS2 Lock Dump | 2.9.3. Troubleshooting GFS2 Performance with the GFS2 Lock Dump If your cluster performance is suffering because of inefficient use of GFS2 caching, you may see large and increasing I/O wait times. You can make use of GFS2's lock dump information to determine the cause of the problem. This section provides an overview of the GFS2 lock dump. For a more complete description of the GFS2 lock dump, see Appendix C, GFS2 tracepoints and the debugfs glocks File . The GFS2 lock dump information can be gathered from the debugfs file which can be found at the following path name, assuming that debugfs is mounted on /sys/kernel/debug/ : The content of the file is a series of lines. Each line starting with G: represents one glock, and the following lines, indented by a single space, represent an item of information relating to the glock immediately before them in the file. The best way to use the debugfs file is to use the cat command to take a copy of the complete content of the file (it might take a long time if you have a large amount of RAM and a lot of cached inodes) while the application is experiencing problems, and then looking through the resulting data at a later date. Note It can be useful to make two copies of the debugfs file, one a few seconds or even a minute or two after the other. By comparing the holder information in the two traces relating to the same glock number, you can tell whether the workload is making progress (that is, it is just slow) or whether it has become stuck (which is always a bug and should be reported to Red Hat support immediately). Lines in the debugfs file starting with H: (holders) represent lock requests either granted or waiting to be granted. The flags field on the holders line f: shows which: The 'W' flag refers to a waiting request, the 'H' flag refers to a granted request. The glocks which have large numbers of waiting requests are likely to be those which are experiencing particular contention. Table 2.1, "Glock flags" shows the meanings of the different glock flags and Table 2.2, "Glock holder flags" shows the meanings of the different glock holder flags in the order that they appear in the glock dumps. Table 2.1. Glock flags Flag Name Meaning b Blocking Valid when the locked flag is set, and indicates that the operation that has been requested from the DLM may block. This flag is cleared for demotion operations and for "try" locks. The purpose of this flag is to allow gathering of stats of the DLM response time independent from the time taken by other nodes to demote locks. d Pending demote A deferred (remote) demote request D Demote A demote request (local or remote) f Log flush The log needs to be committed before releasing this glock F Frozen Replies from remote nodes ignored - recovery is in progress. This flag is not related to file system freeze, which uses a different mechanism, but is used only in recovery. i Invalidate in progress In the process of invalidating pages under this glock I Initial Set when DLM lock is associated with this glock l Locked The glock is in the process of changing state L LRU Set when the glock is on the LRU list o Object Set when the glock is associated with an object (that is, an inode for type 2 glocks, and a resource group for type 3 glocks) p Demote in progress The glock is in the process of responding to a demote request q Queued Set when a holder is queued to a glock, and cleared when the glock is held, but there are no remaining holders. 
Used as part of the algorithm that calculates the minimum hold time for a glock. r Reply pending Reply received from remote node is awaiting processing y Dirty Data needs flushing to disk before releasing this glock Table 2.2. Glock holder flags Flag Name Meaning a Async Do not wait for glock result (will poll for result later) A Any Any compatible lock mode is acceptable c No cache When unlocked, demote DLM lock immediately e No expire Ignore subsequent lock cancel requests E exact Must have exact lock mode F First Set when holder is the first to be granted for this lock H Holder Indicates that requested lock is granted p Priority Enqueue holder at the head of the queue t Try A "try" lock T Try 1CB A "try" lock that sends a callback W Wait Set while waiting for request to complete Having identified a glock which is causing a problem, the next step is to find out which inode it relates to. The glock number (n: on the G: line) indicates this. It is of the form type / number and if type is 2, then the glock is an inode glock and the number is an inode number. To track down the inode, you can then run find -inum number where number is the inode number converted from the hex format in the glocks file into decimal. Note If you run the find on a file system when it is experiencing lock contention, you are likely to make the problem worse. It is a good idea to stop the application before running the find when you are looking for contended inodes. Table 2.3, "Glock types" shows the meanings of the different glock types. Table 2.3. Glock types Type number Lock type Use 1 Trans Transaction lock 2 Inode Inode metadata and data 3 Rgrp Resource group metadata 4 Meta The superblock 5 Iopen Inode last closer detection 6 Flock flock (2) syscall 8 Quota Quota operations 9 Journal Journal mutex If the glock that was identified was of a different type, then it is most likely to be of type 3 (resource group). If you see significant numbers of processes waiting for other types of glock under normal loads, then report this to Red Hat support. If you do see a number of waiting requests queued on a resource group lock, there may be a number of reasons for this. One is that there are a large number of nodes compared to the number of resource groups in the file system. Another is that the file system may be very nearly full (requiring, on average, longer searches for free blocks). The situation in both cases can be improved by adding more storage and using the gfs2_grow command to expand the file system. | [
"/sys/kernel/debug/gfs2/ fsname /glocks"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/global_file_system_2/gfs2_performance_troubleshoot |
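A minimal sketch of the two-snapshot workflow described above, assuming the file system appears as myfs under /sys/kernel/debug/gfs2/ and is mounted at /mnt/myfs; the sleep interval, the grep pattern, and the example glock number are illustrative and not part of the original procedure.

fs=myfs
cat /sys/kernel/debug/gfs2/$fs/glocks > glocks.1
sleep 60
cat /sys/kernel/debug/gfs2/$fs/glocks > glocks.2
# Rough contention signal: count holder (H:) lines whose flags field contains W (waiting)
grep -c '^ H:.*f:[^ ]*W' glocks.1 glocks.2
# For an inode glock (type 2/<hex number>), convert the hex number to decimal and locate the file.
# Stop the application first: running find during heavy contention can make the problem worse.
find /mnt/$fs -inum "$(printf '%d' 0x75209)"   # 0x75209 is a made-up example number

Comparing the waiting-holder counts, and the glock numbers they belong to, across the two snapshots is the quickest way to tell a slow-but-progressing workload from a stuck one, as the section above explains.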
Object Gateway Guide | Object Gateway Guide Red Hat Ceph Storage 8 Deploying, configuring, and administering a Ceph Object Gateway Red Hat Ceph Storage Documentation Team | [
"ceph orch apply mon --placement=\"host1 host2 host3\"",
"service_type: mon placement: hosts: - host01 - host02 - host03",
"ceph orch apply -i mon.yml",
"ceph orch apply rgw example --placement=\"6 host1 host2 host3\"",
"service_type: rgw service_id: example placement: count: 6 hosts: - host01 - host02 - host03",
"ceph orch apply -i rgw.yml",
"mon_pg_warn_max_per_osd = n",
"ceph osd pool create .us-west.rgw.buckets.non-ec 64 64 replicated rgw-service",
"## SAS-SSD ROOT DECLARATION ## root sas-ssd { id -1 # do not change unnecessarily # weight 0.000 alg straw hash 0 # rjenkins1 item data2-sas-ssd weight 4.000 item data1-sas-ssd weight 4.000 item data0-sas-ssd weight 4.000 }",
"## INDEX ROOT DECLARATION ## root index { id -2 # do not change unnecessarily # weight 0.000 alg straw hash 0 # rjenkins1 item data2-index weight 1.000 item data1-index weight 1.000 item data0-index weight 1.000 }",
"host data2-sas-ssd { id -11 # do not change unnecessarily # weight 0.000 alg straw hash 0 # rjenkins1 item osd.0 weight 1.000 item osd.1 weight 1.000 item osd.2 weight 1.000 item osd.3 weight 1.000 }",
"host data2-index { id -21 # do not change unnecessarily # weight 0.000 alg straw hash 0 # rjenkins1 item osd.4 weight 1.000 }",
"osd_crush_update_on_start = false",
"[osd.0] osd crush location = \"host=data2-sas-ssd\" [osd.1] osd crush location = \"host=data2-sas-ssd\" [osd.2] osd crush location = \"host=data2-sas-ssd\" [osd.3] osd crush location = \"host=data2-sas-ssd\" [osd.4] osd crush location = \"host=data2-index\"",
"## SERVICE RULE DECLARATION ## rule rgw-service { type replicated min_size 1 max_size 10 step take sas-ssd step chooseleaf firstn 0 type rack step emit }",
"## THROUGHPUT RULE DECLARATION ## rule rgw-throughput { type replicated min_size 1 max_size 10 step take sas-ssd step chooseleaf firstn 0 type host step emit }",
"## INDEX RULE DECLARATION ## rule rgw-index { type replicated min_size 1 max_size 10 step take index step chooseleaf firstn 0 type rack step emit }",
"rule ecpool-86 { step take default class hdd step choose indep 4 type host step choose indep 4 type osd step emit }",
"rule ecpool-86 { type msr_indep step take default class hdd step choosemsr 4 type host step choosemsr 4 type osd step emit }",
"rule ecpool-86 { step take default class hdd step choose indep 4 type host step choose indep 4 type osd step emit }",
"rule ecpool-86 { type msr_indep step take default class hdd step choosemsr 4 type host step choosemsr 4 type osd step emit }",
"[osd] osd_max_backfills = 1 osd_recovery_max_active = 1 osd_recovery_op_priority = 1",
"ceph config set global osd_map_message_max 10 ceph config set osd osd_map_cache_size 20 ceph config set osd osd_map_share_max_epochs 10 ceph config set osd osd_pg_epoch_persisted_max_stale 10",
"[osd] osd_scrub_begin_hour = 23 #23:01H, or 10:01PM. osd_scrub_end_hour = 6 #06:01H or 6:01AM.",
"[osd] osd_scrub_load_threshold = 0.25",
"objecter_inflight_ops = 24576",
"rgw_thread_pool_size = 512",
"ceph soft nofile unlimited",
"USER_NAME soft nproc unlimited",
"cephadm shell",
"radosgw-admin realm create --rgw-realm= REALM_NAME --default",
"radosgw-admin realm create --rgw-realm=test_realm --default",
"radosgw-admin zonegroup create --rgw-zonegroup= ZONE_GROUP_NAME --master --default",
"radosgw-admin zonegroup create --rgw-zonegroup=default --master --default",
"radosgw-admin zone create --rgw-zonegroup= ZONE_GROUP_NAME --rgw-zone= ZONE_NAME --master --default",
"radosgw-admin zone create --rgw-zonegroup=default --rgw-zone=test_zone --master --default",
"radosgw-admin period update --rgw-realm= REALM_NAME --commit",
"radosgw-admin period update --rgw-realm=test_realm --commit",
"ceph orch apply rgw NAME [--realm= REALM_NAME ] [--zone= ZONE_NAME ] [--zonegroup= ZONE_GROUP_NAME ] --placement=\" NUMBER_OF_DAEMONS [ HOST_NAME_1 HOST_NAME_2 ]\"",
"ceph orch apply rgw test --realm=test_realm --zone=test_zone --zonegroup=default --placement=\"2 host01 host02\"",
"ceph orch apply rgw SERVICE_NAME",
"ceph orch apply rgw foo",
"ceph orch host label add HOST_NAME_1 LABEL_NAME ceph orch host label add HOSTNAME_2 LABEL_NAME ceph orch apply rgw SERVICE_NAME --placement=\"label: LABEL_NAME count-per-host: NUMBER_OF_DAEMONS \" --port=8000",
"ceph orch host label add host01 rgw # the 'rgw' label can be anything ceph orch host label add host02 rgw ceph orch apply rgw foo --placement=\"label:rgw count-per-host:2\" --port=8000",
"ceph orch ls",
"ceph orch ps --daemon_type= DAEMON_NAME",
"ceph orch ps --daemon_type=rgw",
"cephadm shell",
"cat nfs-conf.yml service_type: nfs service_id: nfs-rgw-service placement: hosts: ['host1'] spec: port: 2049",
"ceph orch apply -i nfs-conf.yml",
"ceph orch ls --service_name nfs.nfs-rgw-service --service_type nfs",
"touch radosgw.yml",
"service_type: rgw service_id: REALM_NAME . ZONE_NAME placement: hosts: - HOST_NAME_1 - HOST_NAME_2 count_per_host: NUMBER_OF_DAEMONS spec: rgw_realm: REALM_NAME rgw_zone: ZONE_NAME rgw_zonegroup: ZONE_GROUP_NAME rgw_frontend_port: FRONT_END_PORT networks: - NETWORK_CIDR # Ceph Object Gateway service binds to a specific network",
"service_type: rgw service_id: default placement: hosts: - host01 - host02 - host03 count_per_host: 2 spec: rgw_realm: default rgw_zone: default rgw_zonegroup: default rgw_frontend_port: 1234 networks: - 192.169.142.0/24",
"radosgw-admin realm create --rgw-realm=test_realm --default radosgw-admin zonegroup create --rgw-zonegroup=test_zonegroup --default radosgw-admin zone create --rgw-zonegroup=test_zonegroup --rgw-zone=test_zone --default radosgw-admin period update --rgw-realm=test_realm --commit",
"service_type: rgw service_id: test_realm.test_zone placement: hosts: - host01 - host02 - host03 count_per_host: 2 spec: rgw_realm: test_realm rgw_zone: test_zone rgw_zonegroup: test_zonegroup rgw_frontend_port: 1234 networks: - 192.169.142.0/24",
"cephadm shell --mount radosgw.yml:/var/lib/ceph/radosgw/radosgw.yml",
"ceph orch apply -i FILE_NAME .yml",
"ceph orch apply -i /var/lib/ceph/radosgw/radosgw.yml",
"ceph orch ls",
"ceph orch ps --daemon_type= DAEMON_NAME",
"ceph orch ps --daemon_type=rgw",
"radosgw-admin realm create --rgw-realm= REALM_NAME --default",
"radosgw-admin realm create --rgw-realm=test_realm --default",
"radosgw-admin zonegroup create --rgw-zonegroup= ZONE_GROUP_NAME --endpoints=http:// RGW_PRIMARY_HOSTNAME : RGW_PRIMARY_PORT_NUMBER_1 --master --default",
"radosgw-admin zonegroup create --rgw-zonegroup=us --endpoints=http://rgw1:80 --master --default",
"radosgw-admin zone create --rgw-zonegroup= PRIMARY_ZONE_GROUP_NAME --rgw-zone= PRIMARY_ZONE_NAME --endpoints=http:// RGW_PRIMARY_HOSTNAME : RGW_PRIMARY_PORT_NUMBER_1 --access-key= SYSTEM_ACCESS_KEY --secret= SYSTEM_SECRET_KEY",
"radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=us-east-1 --endpoints=http://rgw1:80 --access-key=LIPEYZJLTWXRKXS9LPJC --secret-key=IsAje0AVDNXNw48LjMAimpCpI7VaxJYSnfD0FFKQ",
"radosgw-admin zonegroup delete --rgw-zonegroup=default ceph osd pool rm default.rgw.log default.rgw.log --yes-i-really-really-mean-it ceph osd pool rm default.rgw.meta default.rgw.meta --yes-i-really-really-mean-it ceph osd pool rm default.rgw.control default.rgw.control --yes-i-really-really-mean-it ceph osd pool rm default.rgw.data.root default.rgw.data.root --yes-i-really-really-mean-it ceph osd pool rm default.rgw.gc default.rgw.gc --yes-i-really-really-mean-it",
"radosgw-admin user create --uid= USER_NAME --display-name=\" USER_NAME \" --access-key= SYSTEM_ACCESS_KEY --secret= SYSTEM_SECRET_KEY --system",
"radosgw-admin user create --uid=zone.user --display-name=\"Zone user\" --system",
"radosgw-admin zone modify --rgw-zone= PRIMARY_ZONE_NAME --access-key= ACCESS_KEY --secret= SECRET_KEY",
"radosgw-admin zone modify --rgw-zone=us-east-1 --access-key=NE48APYCAODEPLKBCZVQ--secret=u24GHQWRE3yxxNBnFBzjM4jn14mFIckQ4EKL6LoW",
"radosgw-admin period update --commit",
"radosgw-admin period update --commit",
"systemctl list-units | grep ceph",
"systemctl start ceph- FSID @ DAEMON_NAME systemctl enable ceph- FSID @ DAEMON_NAME",
"systemctl start [email protected]_realm.us-east-1.host01.ahdtsw.service systemctl enable [email protected]_realm.us-east-1.host01.ahdtsw.service",
"radosgw-admin realm pull --rgw-realm= PRIMARY_REALM --url= URL_TO_PRIMARY_ZONE_GATEWAY --access-key= ACCESS_KEY --secret-key= SECRET_KEY --default",
"radosgw-admin realm pull --rgw-realm=test_realm --url=http://10.74.249.26:80 --access-key=LIPEYZJLTWXRKXS9LPJC --secret-key=IsAje0AVDNXNw48LjMAimpCpI7VaxJYSnfD0FFKQ --default",
"radosgw-admin period pull --url= URL_TO_PRIMARY_ZONE_GATEWAY --access-key= ACCESS_KEY --secret-key= SECRET_KEY",
"radosgw-admin period pull --url=http://10.74.249.26:80 --access-key=LIPEYZJLTWXRKXS9LPJC --secret-key=IsAje0AVDNXNw48LjMAimpCpI7VaxJYSnfD0FFKQ",
"radosgw-admin zone create --rgw-zonegroup= ZONE_GROUP_NAME --rgw-zone= SECONDARY_ZONE_NAME --endpoints=http:// RGW_SECONDARY_HOSTNAME : RGW_PRIMARY_PORT_NUMBER_1 --access-key= SYSTEM_ACCESS_KEY --secret= SYSTEM_SECRET_KEY --endpoints=http:// FQDN :80 [--read-only]",
"radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=us-east-2 --endpoints=http://rgw2:80 --access-key=LIPEYZJLTWXRKXS9LPJC --secret-key=IsAje0AVDNXNw48LjMAimpCpI7VaxJYSnfD0FFKQ",
"radosgw-admin zone rm --rgw-zone=default ceph osd pool rm default.rgw.log default.rgw.log --yes-i-really-really-mean-it ceph osd pool rm default.rgw.meta default.rgw.meta --yes-i-really-really-mean-it ceph osd pool rm default.rgw.control default.rgw.control --yes-i-really-really-mean-it ceph osd pool rm default.rgw.data.root default.rgw.data.root --yes-i-really-really-mean-it ceph osd pool rm default.rgw.gc default.rgw.gc --yes-i-really-really-mean-it",
"ceph config set SERVICE_NAME rgw_zone SECONDARY_ZONE_NAME",
"ceph config set rgw rgw_zone us-east-2",
"radosgw-admin period update --commit",
"radosgw-admin period update --commit",
"systemctl list-units | grep ceph",
"systemctl start ceph- FSID @ DAEMON_NAME systemctl enable ceph- FSID @ DAEMON_NAME",
"systemctl start [email protected]_realm.us-east-2.host04.ahdtsw.service systemctl enable [email protected]_realm.us-east-2.host04.ahdtsw.service",
"ceph orch apply rgw NAME --realm= REALM_NAME --zone= PRIMARY_ZONE_NAME --placement=\" NUMBER_OF_DAEMONS HOST_NAME_1 HOST_NAME_2 \"",
"ceph orch apply rgw east --realm=test_realm --zone=us-east-1 --placement=\"2 host01 host02\"",
"radosgw-admin sync status",
"cephadm shell",
"ceph orch ls",
"ceph orch rm SERVICE_NAME",
"ceph orch rm rgw.test_realm.test_zone_bb",
"ceph orch ps",
"ceph orch ps",
"cephadm shell",
"ceph mgr module enable rgw",
"ceph rgw realm bootstrap [--realm name REALM_NAME ] [--zonegroup-name ZONEGROUP_NAME ] [--zone-name ZONE_NAME ] [--port PORT_NUMBER ] [--placement HOSTNAME ] [--start-radosgw]",
"ceph rgw realm bootstrap --realm-name myrealm --zonegroup-name myzonegroup --zone-name myzone --port 5500 --placement=\"host01 host02\" --start-radosgw Realm(s) created correctly. Please, use 'ceph rgw realm tokens' to get the token.",
"rgw_realm: REALM_NAME rgw_zonegroup: ZONEGROUP_NAME rgw_zone: ZONE_NAME placement: hosts: - _HOSTNAME_1_ - _HOSTNAME_2_",
"cat rgw.yaml rgw_realm: myrealm rgw_zonegroup: myzonegroup rgw_zone: myzone placement: hosts: - host01 - host02",
"service_type: rgw placement: hosts: - _host1_ - _host2_ spec: rgw_realm: my_realm rgw_zonegroup: my_zonegroup rgw_zone: my_zone zonegroup_hostnames: - _hostname1_ - _hostname2_",
"service_type: rgw placement: hosts: - _host1_ - _host2_ spec: rgw_realm: my_realm rgw_zonegroup: my_zonegroup rgw_zone: my_zone zonegroup_hostnames: - foo - bar",
"cephadm shell --mount rgw.yaml:/var/lib/ceph/rgw/rgw.yaml",
"ceph rgw realm bootstrap -i /var/lib/ceph/rgw/rgw.yaml",
"ceph rgw realm tokens | jq [ { \"realm\": \"myrealm\", \"token\": \"ewogICAgInJlYWxtX25hbWUiOiAibXlyZWFsbSIsCiAgICAicmVhbG1faWQiOiAiZDA3YzAwZWYtOTA0MS00ZjZlLTg4MDQtN2Q0MDI0MDU1NmFlIiwKICAgICJlbmRwb2ludCI6ICJodHRwOi8vdm0tMDA6NDMyMSIsCiAgICAiYWNjZXNzX2tleSI6ICI5NTY1VFZSMVFWTExFRzdVNFIxRCIsCiAgICAic2VjcmV0IjogImQ3b0FJQXZrNEdYeXpyd3Q2QVZ6bEZNQmNnRG53RVdMMHFDenE3cjUiCn1=\" } ]",
"ceph orch list --daemon-type=rgw NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID rgw.myrealm.myzonegroup.ceph-saya-6-osd-host01.eburst ceph-saya-6-osd-host01 *:80 running (111m) 9m ago 111m 82.3M - 17.2.6-22.el9cp 2d5b080de0b0 2f3eaca7e88e",
"radosgw-admin zonegroup get --rgw-zonegroup _zone_group_name_",
"radosgw-admin zonegroup get --rgw-zonegroup my_zonegroup { \"id\": \"02a175e2-7f23-4882-8651-6fbb15d25046\", \"name\": \"my_zonegroup_ck\", \"api_name\": \"my_zonegroup_ck\", \"is_master\": true, \"endpoints\": [ \"http://vm-00:80\" ], \"hostnames\": [ \"foo\" \"bar\" ], \"hostnames_s3website\": [], \"master_zone\": \"f42fea84-a89e-4995-996e-61b7223fb0b0\", \"zones\": [ { \"id\": \"f42fea84-a89e-4995-996e-61b7223fb0b0\", \"name\": \"my_zone_ck\", \"endpoints\": [ \"http://vm-00:80\" ], \"log_meta\": false, \"log_data\": false, \"bucket_index_max_shards\": 11, \"read_only\": false, \"tier_type\": \"\", \"sync_from_all\": true, \"sync_from\": [], \"redirect_zone\": \"\", \"supported_features\": [ \"compress-encrypted\", \"resharding\" ] } ], \"placement_targets\": [ { \"name\": \"default-placement\", \"tags\": [], \"storage_classes\": [ \"STANDARD\" ] } ], \"default_placement\": \"default-placement\", \"realm_id\": \"439e9c37-4ddc-43a3-99e9-ea1f3825bb51\", \"sync_policy\": { \"groups\": [] }, \"enabled_features\": [ \"resharding\" ] }",
"cephadm shell",
"ceph mgr module enable rgw",
"ceph rgw realm bootstrap [--realm name REALM_NAME ] [--zonegroup-name ZONEGROUP_NAME ] [--zone-name ZONE_NAME ] [--port PORT_NUMBER ] [--placement HOSTNAME ] [--start-radosgw]",
"ceph rgw realm bootstrap --realm-name myrealm --zonegroup-name myzonegroup --zone-name myzone --port 5500 --placement=\"host01 host02\" --start-radosgw Realm(s) created correctly. Please, use 'ceph rgw realm tokens' to get the token.",
"rgw_realm: REALM_NAME rgw_zonegroup: ZONEGROUP_NAME rgw_zone: ZONE_NAME placement: hosts: - HOSTNAME_1 - HOSTNAME_2 spec: rgw_frontend_port: PORT_NUMBER zone_endpoints: http:// RGW_HOSTNAME_1 : RGW_PORT_NUMBER_1 , http:// RGW_HOSTNAME_2 : RGW_PORT_NUMBER_2",
"cat rgw.yaml rgw_realm: myrealm rgw_zonegroup: myzonegroup rgw_zone: myzone placement: hosts: - host01 - host02 spec: rgw_frontend_port: 5500 zone_endpoints: http://<rgw_host1>:<rgw_port1>, http://<rgw_host2>:<rgw_port2>",
"cephadm shell --mount rgw.yaml:/var/lib/ceph/rgw/rgw.yaml",
"ceph rgw realm bootstrap -i /var/lib/ceph/rgw/rgw.yaml",
"ceph rgw realm tokens | jq [ { \"realm\": \"myrealm\", \"token\": \"ewogICAgInJlYWxtX25hbWUiOiAibXlyZWFsbSIsCiAgICAicmVhbG1faWQiOiAiZDA3YzAwZWYtOTA0MS00ZjZlLTg4MDQtN2Q0MDI0MDU1NmFlIiwKICAgICJlbmRwb2ludCI6ICJodHRwOi8vdm0tMDA6NDMyMSIsCiAgICAiYWNjZXNzX2tleSI6ICI5NTY1VFZSMVFWTExFRzdVNFIxRCIsCiAgICAic2VjcmV0IjogImQ3b0FJQXZrNEdYeXpyd3Q2QVZ6bEZNQmNnRG53RVdMMHFDenE3cjUiCn1=\" } ]",
"cat zone-spec.yaml rgw_zone: my-secondary-zone rgw_realm_token: <token> placement: hosts: - ceph-node-1 - ceph-node-2 spec: rgw_frontend_port: 5500",
"cephadm shell --mount zone-spec.yaml:/var/lib/ceph/radosgw/zone-spec.yaml",
"ceph mgr module enable rgw",
"ceph rgw zone create -i /var/lib/ceph/radosgw/zone-spec.yaml",
"radosgw-admin realm list { \"default_info\": \"d07c00ef-9041-4f6e-8804-7d40240556ae\", \"realms\": [ \"myrealm\" ] }",
"bucket-name.domain-name.com",
"address=/. HOSTNAME_OR_FQDN / HOST_IP_ADDRESS",
"address=/.gateway-host01/192.168.122.75",
"USDTTL 604800 @ IN SOA gateway-host01. root.gateway-host01. ( 2 ; Serial 604800 ; Refresh 86400 ; Retry 2419200 ; Expire 604800 ) ; Negative Cache TTL ; @ IN NS gateway-host01. @ IN A 192.168.122.113 * IN CNAME @",
"ping mybucket. HOSTNAME",
"ping mybucket.gateway-host01",
"radosgw-admin zonegroup get --rgw-zonegroup= ZONEGROUP_NAME > zonegroup.json",
"radosgw-admin zonegroup get --rgw-zonegroup=us > zonegroup.json",
"cp zonegroup.json zonegroup.backup.json",
"cat zonegroup.json { \"id\": \"d523b624-2fa5-4412-92d5-a739245f0451\", \"name\": \"asia\", \"api_name\": \"asia\", \"is_master\": \"true\", \"endpoints\": [], \"hostnames\": [], \"hostnames_s3website\": [], \"master_zone\": \"d2a3b90f-f4f3-4d38-ac1f-6463a2b93c32\", \"zones\": [ { \"id\": \"d2a3b90f-f4f3-4d38-ac1f-6463a2b93c32\", \"name\": \"india\", \"endpoints\": [], \"log_meta\": \"false\", \"log_data\": \"false\", \"bucket_index_max_shards\": 11, \"read_only\": \"false\", \"tier_type\": \"\", \"sync_from_all\": \"true\", \"sync_from\": [], \"redirect_zone\": \"\" } ], \"placement_targets\": [ { \"name\": \"default-placement\", \"tags\": [], \"storage_classes\": [ \"STANDARD\" ] } ], \"default_placement\": \"default-placement\", \"realm_id\": \"d7e2ad25-1630-4aee-9627-84f24e13017f\", \"sync_policy\": { \"groups\": [] } }",
"\"hostnames\": [\"host01\", \"host02\",\"host03\"],",
"radosgw-admin zonegroup set --rgw-zonegroup= ZONEGROUP_NAME --infile=zonegroup.json",
"radosgw-admin zonegroup set --rgw-zonegroup=us --infile=zonegroup.json",
"radosgw-admin period update --commit",
"[client.rgw.node1] rgw frontends = beast ssl_endpoint=192.168.0.100:443 ssl_certificate=<path to SSL certificate>",
"touch rgw.yml",
"service_type: rgw service_id: SERVICE_ID service_name: SERVICE_NAME placement: hosts: - HOST_NAME spec: ssl: true rgw_frontend_ssl_certificate: CERT_HASH",
"service_type: rgw service_id: foo service_name: rgw.foo placement: hosts: - host01 spec: ssl: true rgw_frontend_ssl_certificate: | -----BEGIN RSA PRIVATE KEY----- MIIEpAIBAAKCAQEA+Cf4l9OagD6x67HhdCy4Asqw89Zz9ZuGbH50/7ltIMQpJJU0 gu9ObNtIoC0zabJ7n1jujueYgIpOqGnhRSvsGJiEkgN81NLQ9rqAVaGpadjrNLcM bpgqJCZj0vzzmtFBCtenpb5l/EccMFcAydGtGeLP33SaWiZ4Rne56GBInk6SATI/ JSKweGD1y5GiAWipBR4C74HiAW9q6hCOuSdp/2WQxWT3T1j2sjlqxkHdtInUtwOm j5Ism276IndeQ9hR3reFR8PJnKIPx73oTBQ7p9CMR1J4ucq9Ny0J12wQYT00fmJp -----END RSA PRIVATE KEY----- -----BEGIN CERTIFICATE----- MIIEBTCCAu2gAwIBAgIUGfYFsj8HyA9Zv2l600hxzT8+gG4wDQYJKoZIhvcNAQEL BQAwgYkxCzAJBgNVBAYTAklOMQwwCgYDVQQIDANLQVIxDDAKBgNVBAcMA0JMUjEM MAoGA1UECgwDUkhUMQswCQYDVQQLDAJCVTEkMCIGA1UEAwwbY2VwaC1zc2wtcmhj czUtOGRjeHY2LW5vZGU1MR0wGwYJKoZIhvcNAQkBFg5hYmNAcmVkaGF0LmNvbTAe -----END CERTIFICATE-----",
"ceph orch apply -i rgw.yml",
"mkfs.ext4 nvme-drive-path",
"mkfs.ext4 /dev/nvme0n1 mount /dev/nvme0n1 /mnt/nvme0n1/",
"mkdir <nvme-mount-path>/cache-directory-name",
"mkdir /mnt/nvme0n1/rgw_datacache",
"chmod a+rwx nvme-mount-path ; chmod a+rwx rgw_d3n_l1_datacache_persistent_path",
"chmod a+rwx /mnt/nvme0n1 ; chmod a+rwx /mnt/nvme0n1/rgw_datacache/",
"\"extra_container_args: \"-v\" \"rgw_d3n_l1_datacache_persistent_path:rgw_d3n_l1_datacache_persistent_path\" \"",
"cat rgw-spec.yml service_type: rgw service_id: rgw.test placement: hosts: host1 host2 extra_container_args: \"-v\" \"/mnt/nvme0n1/rgw_datacache/:/mnt/nvme0n1/rgw_datacache/\"",
"\"extra_container_args: \"-v\" \"/mnt/nvme0n1/rgw_datacache/rgw1/:/mnt/nvme0n1/rgw_datacache/rgw1/\" \"-v\" \"/mnt/nvme0n1/rgw_datacache/rgw2/:/mnt/nvme0n1/rgw_datacache/rgw2/\" \"",
"cat rgw-spec.yml service_type: rgw service_id: rgw.test placement: hosts: host1 host2 count_per_host: 2 extra_container_args: \"-v\" \"/mnt/nvme0n1/rgw_datacache/rgw1/:/mnt/nvme0n1/rgw_datacache/rgw1/\" \"-v\" \"/mnt/nvme0n1/rgw_datacache/rgw2/:/mnt/nvme0n1/rgw_datacache/rgw2/\"",
"ceph orch apply -i rgw-spec.yml",
"ceph config set <client.rgw> <CONF-OPTION> <VALUE>",
"rgw_d3n_l1_datacache_persistent_path=/mnt/nvme/rgw_datacache/",
"rgw_d3n_l1_datacache_size=10737418240",
"fallocate -l 1G ./1G.dat s3cmd mb s3://bkt s3cmd put ./1G.dat s3://bkt",
"s3cmd get s3://bkt/1G.dat /dev/shm/1G_get.dat download: 's3://bkt/1G.dat' -> './1G_get.dat' [1 of 1] 1073741824 of 1073741824 100% in 13s 73.94 MB/s done",
"ls -lh /mnt/nvme/rgw_datacache rw-rr. 1 ceph ceph 1.0M Jun 2 06:18 cc7f967c-0021-43b2-9fdf-23858e868663.615391.1_shadow.ZCiCtMWeu_19wb100JIEZ-o4tv2IyA_1",
"s3cmd get s3://bkt/1G.dat /dev/shm/1G_get.dat download: 's3://bkt/1G.dat' -> './1G_get.dat' [1 of 1] 1073741824 of 1073741824 100% in 6s 155.07 MB/s done",
"ceph config set client.rgw debug_rgw VALUE",
"ceph config set client.rgw debug_rgw 20",
"ceph --admin-daemon /var/run/ceph/ceph-client.rgw. NAME .asok config set debug_rgw VALUE",
"ceph --admin-daemon /var/run/ceph/ceph-client.rgw.rgw.asok config set debug_rgw 20",
"ceph config set global log_to_file true ceph config set global mon_cluster_log_to_file true",
"ceph config set client.rgw OPTION VALUE",
"ceph config set client.rgw rgw_enable_static_website true ceph config set client.rgw rgw_enable_apis s3,s3website ceph config set client.rgw rgw_dns_name objects-zonegroup.example.com ceph config set client.rgw rgw_dns_s3website_name objects-website-zonegroup.example.com ceph config set client.rgw rgw_resolve_cname true",
"objects-zonegroup.domain.com. IN A 192.0.2.10 objects-zonegroup.domain.com. IN AAAA 2001:DB8::192:0:2:10 *.objects-zonegroup.domain.com. IN CNAME objects-zonegroup.domain.com. objects-website-zonegroup.domain.com. IN A 192.0.2.20 objects-website-zonegroup.domain.com. IN AAAA 2001:DB8::192:0:2:20",
"*.objects-website-zonegroup.domain.com. IN CNAME objects-website-zonegroup.domain.com.",
"http://bucket1.objects-website-zonegroup.domain.com",
"www.example.com. IN CNAME bucket2.objects-website-zonegroup.domain.com.",
"http://www.example.com",
"www.example.com. IN CNAME www.example.com.objects-website-zonegroup.domain.com.",
"http://www.example.com",
"www.example.com. IN A 192.0.2.20 www.example.com. IN AAAA 2001:DB8::192:0:2:20",
"http://www.example.com",
"[root@host01 ~] touch ingress.yaml",
"service_type: ingress 1 service_id: SERVICE_ID 2 placement: 3 hosts: - HOST1 - HOST2 - HOST3 spec: backend_service: SERVICE_ID virtual_ip: IP_ADDRESS / CIDR 4 frontend_port: INTEGER 5 monitor_port: INTEGER 6 virtual_interface_networks: 7 - IP_ADDRESS / CIDR ssl_cert: | 8",
"service_type: ingress service_id: rgw.foo placement: hosts: - host01.example.com - host02.example.com - host03.example.com spec: backend_service: rgw.foo virtual_ip: 192.168.1.2/24 frontend_port: 8080 monitor_port: 1967 virtual_interface_networks: - 10.10.0.0/16 ssl_cert: | -----BEGIN CERTIFICATE----- MIIEpAIBAAKCAQEA+Cf4l9OagD6x67HhdCy4Asqw89Zz9ZuGbH50/7ltIMQpJJU0 gu9ObNtIoC0zabJ7n1jujueYgIpOqGnhRSvsGJiEkgN81NLQ9rqAVaGpadjrNLcM bpgqJCZj0vzzmtFBCtenpb5l/EccMFcAydGtGeLP33SaWiZ4Rne56GBInk6SATI/ JSKweGD1y5GiAWipBR4C74HiAW9q6hCOuSdp/2WQxWT3T1j2sjlqxkHdtInUtwOm j5Ism276IndeQ9hR3reFR8PJnKIPx73oTBQ7p9CMR1J4ucq9Ny0J12wQYT00fmJp -----END CERTIFICATE----- -----BEGIN PRIVATE KEY----- MIIEBTCCAu2gAwIBAgIUGfYFsj8HyA9Zv2l600hxzT8+gG4wDQYJKoZIhvcNAQEL BQAwgYkxCzAJBgNVBAYTAklOMQwwCgYDVQQIDANLQVIxDDAKBgNVBAcMA0JMUjEM MAoGA1UECgwDUkhUMQswCQYDVQQLDAJCVTEkMCIGA1UEAwwbY2VwaC1zc2wtcmhj czUtOGRjeHY2LW5vZGU1MR0wGwYJKoZIhvcNAQkBFg5hYmNAcmVkaGF0LmNvbTAe -----END PRIVATE KEY-----",
"service_type: ingress service_id: rgw.ssl # adjust to match your existing RGW service placement: hosts: - hostname1 - hostname2 spec: backend_service: rgw.rgw.ssl.ceph13 # adjust to match your existing RGW service virtual_ip: IP_ADDRESS/CIDR # ex: 192.168.20.1/24 frontend_port: INTEGER # ex: 443 monitor_port: INTEGER # ex: 1969 use_tcp_mode_over_rgw: True",
"cephadm shell --mount ingress.yaml:/var/lib/ceph/radosgw/ingress.yaml",
"ceph config set mgr mgr/cephadm/container_image_haproxy HAPROXY_IMAGE_ID ceph config set mgr mgr/cephadm/container_image_keepalived KEEPALIVED_IMAGE_ID",
"ceph config set mgr mgr/cephadm/container_image_haproxy registry.redhat.io/rhceph/rhceph-haproxy-rhel9:latest ceph config set mgr mgr/cephadm/container_image_keepalived registry.redhat.io/rhceph/keepalived-rhel9:latest",
"ceph orch apply -i /var/lib/ceph/radosgw/ingress.yaml",
"ip addr show",
"wget HOST_NAME",
"wget host01.example.com",
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <ListAllMyBucketsResult xmlns=\"http://s3.amazonaws.com/doc/2006-03-01/\"> <Owner> <ID>anonymous</ID> <DisplayName></DisplayName> </Owner> <Buckets> </Buckets> </ListAllMyBucketsResult>",
"cephadm shell",
"ceph nfs export create rgw --cluster-id NFS_CLUSTER_NAME --pseudo-path PATH_FROM_ROOT --user-id USER_ID",
"ceph nfs export create rgw --cluster-id cluster1 --pseudo-path root/testnfs1/ --user-id nfsuser",
"mount -t nfs IP_ADDRESS:PATH_FROM_ROOT -osync MOUNT_POINT",
"mount -t nfs 10.0.209.0:/root/testnfs1 -osync /mnt/mount1",
"cat ./haproxy.cfg global log 127.0.0.1 local2 chroot /var/lib/haproxy pidfile /var/run/haproxy.pid maxconn 7000 user haproxy group haproxy daemon stats socket /var/lib/haproxy/stats defaults mode http log global option httplog option dontlognull option http-server-close option forwardfor except 127.0.0.0/8 option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 30s timeout server 30s timeout http-keep-alive 10s timeout check 10s timeout client-fin 1s timeout server-fin 1s maxconn 6000 listen stats bind 0.0.0.0:1936 mode http log global maxconn 256 clitimeout 10m srvtimeout 10m contimeout 10m timeout queue 10m JTH start stats enable stats hide-version stats refresh 30s stats show-node ## stats auth admin:password stats uri /haproxy?stats stats admin if TRUE frontend main bind *:5000 acl url_static path_beg -i /static /images /javascript /stylesheets acl url_static path_end -i .jpg .gif .png .css .js use_backend static if url_static default_backend app maxconn 6000 backend static balance roundrobin fullconn 6000 server app8 host01:8080 check maxconn 2000 server app9 host02:8080 check maxconn 2000 server app10 host03:8080 check maxconn 2000 backend app balance roundrobin fullconn 6000 server app8 host01:8080 check maxconn 2000 server app9 host02:8080 check maxconn 2000 server app10 host03:8080 check maxconn 2000",
"ceph config set osd osd_pool_default_pg_num 50 ceph config set osd osd_pool_default_pgp_num 50",
"radosgw-admin realm create --rgw-realm REALM_NAME --default",
"radosgw-admin zonegroup rename --rgw-zonegroup default --zonegroup-new-name NEW_ZONE_GROUP_NAME radosgw-admin zone rename --rgw-zone default --zone-new-name NEW_ZONE_NAME --rgw-zonegroup NEW_ZONE_GROUP_NAME",
"radosgw-admin zonegroup modify --api-name NEW_ZONE_GROUP_NAME --rgw-zonegroup NEW_ZONE_GROUP_NAME",
"radosgw-admin zonegroup modify --rgw-realm REALM_NAME --rgw-zonegroup NEW_ZONE_GROUP_NAME --endpoints http://ENDPOINT --master --default",
"radosgw-admin zone modify --rgw-realm REALM_NAME --rgw-zonegroup NEW_ZONE_GROUP_NAME --rgw-zone NEW_ZONE_NAME --endpoints http://ENDPOINT --master --default",
"radosgw-admin user create --uid USER_ID --display-name DISPLAY_NAME --access-key ACCESS_KEY --secret SECRET_KEY --system",
"radosgw-admin period update --commit",
"ceph orch ls | grep rgw",
"ceph config set client.rgw.SERVICE_NAME rgw_realm REALM_NAME ceph config set client.rgw.SERVICE_NAME rgw_zonegroup ZONE_GROUP_NAME ceph config set client.rgw.SERVICE_NAME rgw_zone PRIMARY_ZONE_NAME",
"ceph config set client.rgw.rgwsvcid.mons-1.jwgwwp rgw_realm test_realm ceph config set client.rgw.rgwsvcid.mons-1.jwgwwp rgw_zonegroup us ceph config set client.rgw.rgwsvcid.mons-1.jwgwwp rgw_zone us-east-1",
"systemctl restart ceph-radosgw@rgw.`hostname -s`",
"ceph orch restart _RGW_SERVICE_NAME_",
"ceph orch restart rgw.rgwsvcid.mons-1.jwgwwp",
"cephadm shell",
"radosgw-admin realm pull --url= URL_TO_PRIMARY_ZONE_GATEWAY --access-key= ACCESS_KEY --secret-key= SECRET_KEY",
"radosgw-admin realm pull --url=http://10.74.249.26:80 --access-key=LIPEYZJLTWXRKXS9LPJC --secret-key=IsAje0AVDNXNw48LjMAimpCpI7VaxJYSnfD0FFKQ",
"radosgw-admin period pull --url= URL_TO_PRIMARY_ZONE_GATEWAY --access-key= ACCESS_KEY --secret-key= SECRET_KEY",
"radosgw-admin period pull --url=http://10.74.249.26:80 --access-key=LIPEYZJLTWXRKXS9LPJC --secret-key=IsAje0AVDNXNw48LjMAimpCpI7VaxJYSnfD0FFKQ",
"radosgw-admin zone create --rgw-zonegroup=_ZONE_GROUP_NAME_ --rgw-zone=_SECONDARY_ZONE_NAME_ --endpoints=http://_RGW_SECONDARY_HOSTNAME_:_RGW_PRIMARY_PORT_NUMBER_1_ --access-key=_SYSTEM_ACCESS_KEY_ --secret=_SYSTEM_SECRET_KEY_ [--read-only]",
"radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=us-east-2 --endpoints=http://rgw2:80 --access-key=LIPEYZJLTWXRKXS9LPJC --secret-key=IsAje0AVDNXNw48LjMAimpCpI7VaxJYSnfD0FFKQ",
"radosgw-admin zone rm --rgw-zone=default ceph osd pool rm default.rgw.log default.rgw.log --yes-i-really-really-mean-it ceph osd pool rm default.rgw.meta default.rgw.meta --yes-i-really-really-mean-it ceph osd pool rm default.rgw.control default.rgw.control --yes-i-really-really-mean-it ceph osd pool rm default.rgw.data.root default.rgw.data.root --yes-i-really-really-mean-it ceph osd pool rm default.rgw.gc default.rgw.gc --yes-i-really-really-mean-it",
"ceph config set client.rgw. SERVICE_NAME rgw_realm REALM_NAME ceph config set client.rgw. SERVICE_NAME rgw_zonegroup ZONE_GROUP_NAME ceph config set client.rgw. SERVICE_NAME rgw_zone SECONDARY_ZONE_NAME",
"ceph config set client.rgw.rgwsvcid.mons-1.jwgwwp rgw_realm test_realm ceph config set client.rgw.rgwsvcid.mons-1.jwgwwp rgw_zonegroup us ceph config set client.rgw.rgwsvcid.mons-1.jwgwwp rgw_zone us-east-2",
"radosgw-admin period update --commit",
"radosgw-admin period update --commit",
"systemctl list-units | grep ceph",
"systemctl start ceph- FSID @ DAEMON_NAME systemctl enable ceph- FSID @ DAEMON_NAME",
"systemctl start [email protected]_realm.us-east-2.host04.ahdtsw.service systemctl enable [email protected]_realm.us-east-2.host04.ahdtsw.service",
"radosgw-admin zone create --rgw-zonegroup={ ZONE_GROUP_NAME } --rgw-zone={ ZONE_NAME } --endpoints={http:// FQDN : PORT },{http:// FQDN : PORT } --tier-type=archive",
"radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=us-east --endpoints={http://example.com:8080} --tier-type=archive",
"radosgw-admin zone modify --rgw-zone archive --sync_from primary --sync_from_all false --sync-from-rm secondary radosgw-admin period update --commit",
"ceph config set client.rgw rgw_max_objs_per_shard 50000",
"<?xml version=\"1.0\" ?> <LifecycleConfiguration xmlns=\"http://s3.amazonaws.com/doc/2006-03-01/\"> <Rule> <ID>delete-1-days-az</ID> <Filter> <Prefix></Prefix> <ArchiveZone /> 1 </Filter> <Status>Enabled</Status> <Expiration> <Days>1</Days> </Expiration> </Rule> </LifecycleConfiguration>",
"radosgw-admin lc get --bucket BUCKET_NAME",
"radosgw-admin lc get --bucket test-bkt { \"prefix_map\": { \"\": { \"status\": true, \"dm_expiration\": true, \"expiration\": 0, \"noncur_expiration\": 2, \"mp_expiration\": 0, \"transitions\": {}, \"noncur_transitions\": {} } }, \"rule_map\": [ { \"id\": \"Rule 1\", \"rule\": { \"id\": \"Rule 1\", \"prefix\": \"\", \"status\": \"Enabled\", \"expiration\": { \"days\": \"\", \"date\": \"\" }, \"noncur_expiration\": { \"days\": \"2\", \"date\": \"\" }, \"mp_expiration\": { \"days\": \"\", \"date\": \"\" }, \"filter\": { \"prefix\": \"\", \"obj_tags\": { \"tagset\": {} }, \"archivezone\": \"\" 1 }, \"transitions\": {}, \"noncur_transitions\": {}, \"dm_expiration\": true } } ] }",
"radosgw-admin bucket link --uid NEW_USER_ID --bucket BUCKET_NAME --yes-i-really-mean-it",
"radosgw-admin bucket link --uid arcuser1 --bucket arc1-deleted-da473fbbaded232dc5d1e434675c1068 --yes-i-really-mean-it",
"radosgw-admin zone modify --rgw-zone= ZONE_NAME --master --default",
"radosgw-admin zone modify --rgw-zone= ZONE_NAME --master --default --read-only=false",
"radosgw-admin period update --commit",
"systemctl restart ceph- CLUSTER_ID @ SERVICE_TYPE . ID .service",
"systemctl restart ceph-c4b34c6f-8365-11ba-dc31-529020a7702d@rgw.realm.zone.host01.gwasto.service",
"ceph orch restart SERVICE_TYPE",
"ceph orch restart rgw",
"radosgw-admin realm pull --url= URL_TO_PRIMARY_ZONE_GATEWAY --access-key= ACCESS_KEY --secret= SECRET_KEY",
"radosgw-admin zone modify --rgw-zone= ZONE_NAME --master --default",
"radosgw-admin period update --commit",
"ceph orch restart SERVICE_TYPE",
"ceph orch restart rgw",
"radosgw-admin zone modify --rgw-zone= ZONE_NAME --read-only radosgw-admin zone modify --rgw-zone= ZONE_NAME --read-only",
"radosgw-admin period update --commit",
"ceph orch restart SERVICE_TYPE",
"ceph orch restart rgw",
"radosgw-admin realm create --rgw-realm= REALM_NAME --default",
"radosgw-admin realm create --rgw-realm=ldc1 --default",
"radosgw-admin zonegroup create --rgw-zonegroup= ZONE_GROUP_NAME --endpoints=http:// RGW_NODE_NAME :80 --rgw-realm= REALM_NAME --master --default",
"radosgw-admin zonegroup create --rgw-zonegroup=ldc1zg --endpoints=http://rgw1:80 --rgw-realm=ldc1 --master --default",
"radosgw-admin zone create --rgw-zonegroup= ZONE_GROUP_NAME --rgw-zone= ZONE_NAME --master --default --endpoints= HTTP_FQDN [, HTTP_FQDN ]",
"radosgw-admin zone create --rgw-zonegroup=ldc1zg --rgw-zone=ldc1z --master --default --endpoints=http://rgw.example.com",
"radosgw-admin period update --commit",
"ceph orch apply rgw SERVICE_NAME --realm= REALM_NAME --zone= ZONE_NAME --placement=\" NUMBER_OF_DAEMONS HOST_NAME_1 HOST_NAME_2 \"",
"ceph orch apply rgw rgw --realm=ldc1 --zone=ldc1z --placement=\"1 host01\"",
"ceph config set client.rgw. SERVICE_NAME rgw_realm REALM_NAME ceph config set client.rgw. SERVICE_NAME rgw_zonegroup ZONE_GROUP_NAME ceph config set client.rgw. SERVICE_NAME rgw_zone ZONE_NAME",
"ceph config set client.rgw.rgwsvcid.mons-1.jwgwwp rgw_realm ldc1 ceph config set client.rgw.rgwsvcid.mons-1.jwgwwp rgw_zonegroup ldc1zg ceph config set client.rgw.rgwsvcid.mons-1.jwgwwp rgw_zone ldc1z",
"systemctl restart ceph- CLUSTER_ID @ SERVICE_TYPE . ID .service",
"systemctl restart ceph-c4b34c6f-8365-11ba-dc31-529020a7702d@rgw.realm.zone.host01.gwasto.service",
"ceph orch restart SERVICE_TYPE",
"ceph orch restart rgw",
"radosgw-admin realm create --rgw-realm= REALM_NAME --default",
"radosgw-admin realm create --rgw-realm=ldc2 --default",
"radosgw-admin zonegroup create --rgw-zonegroup= ZONE_GROUP_NAME --endpoints=http:// RGW_NODE_NAME :80 --rgw-realm= REALM_NAME --master --default",
"radosgw-admin zonegroup create --rgw-zonegroup=ldc2zg --endpoints=http://rgw2:80 --rgw-realm=ldc2 --master --default",
"radosgw-admin zone create --rgw-zonegroup= ZONE_GROUP_NAME --rgw-zone= ZONE_NAME --master --default --endpoints= HTTP_FQDN [, HTTP_FQDN ]",
"radosgw-admin zone create --rgw-zonegroup=ldc2zg --rgw-zone=ldc2z --master --default --endpoints=http://rgw.example.com",
"radosgw-admin period update --commit",
"ceph orch apply rgw SERVICE_NAME --realm= REALM_NAME --zone= ZONE_NAME --placement=\" NUMBER_OF_DAEMONS HOST_NAME_1 HOST_NAME_2 \"",
"ceph orch apply rgw rgw --realm=ldc2 --zone=ldc2z --placement=\"1 host01\"",
"ceph config set client.rgw. SERVICE_NAME rgw_realm REALM_NAME ceph config set client.rgw. SERVICE_NAME rgw_zonegroup ZONE_GROUP_NAME ceph config set client.rgw. SERVICE_NAME rgw_zone ZONE_NAME",
"ceph config set client.rgw.rgwsvcid.mons-1.jwgwwp rgw_realm ldc2 ceph config set client.rgw.rgwsvcid.mons-1.jwgwwp rgw_zonegroup ldc2zg ceph config set client.rgw.rgwsvcid.mons-1.jwgwwp rgw_zone ldc2z",
"systemctl restart ceph- CLUSTER_ID @ SERVICE_TYPE . ID .service",
"systemctl restart ceph-c4b34c6f-8365-11ba-dc31-529020a7702d@rgw.realm.zone.host01.gwasto.service",
"ceph orch restart SERVICE_TYPE",
"ceph orch restart rgw",
"radosgw-admin realm create --rgw-realm= REPLICATED_REALM_1 --default",
"radosgw-admin realm create --rgw-realm=rdc1 --default",
"radosgw-admin zonegroup create --rgw-zonegroup= RGW_ZONE_GROUP --endpoints=http://_RGW_NODE_NAME :80 --rgw-realm=_RGW_REALM_NAME --master --default",
"radosgw-admin zonegroup create --rgw-zonegroup=rdc1zg --endpoints=http://rgw1:80 --rgw-realm=rdc1 --master --default",
"radosgw-admin zone create --rgw-zonegroup= RGW_ZONE_GROUP --rgw-zone=_MASTER_RGW_NODE_NAME --master --default --endpoints= HTTP_FQDN [, HTTP_FQDN ]",
"radosgw-admin zone create --rgw-zonegroup=rdc1zg --rgw-zone=rdc1z --master --default --endpoints=http://rgw.example.com",
"radosgw-admin user create --uid=\" SYNCHRONIZATION_USER \" --display-name=\"Synchronization User\" --system radosgw-admin zone modify --rgw-zone= RGW_ZONE --access-key= ACCESS_KEY --secret= SECRET_KEY",
"radosgw-admin user create --uid=\"synchronization-user\" --display-name=\"Synchronization User\" --system radosgw-admin zone modify --rgw-zone=rdc1zg --access-key=3QV0D6ZMMCJZMSCXJ2QJ --secret=VpvQWcsfI9OPzUCpR4kynDLAbqa1OIKqRB6WEnH8",
"radosgw-admin period update --commit",
"ceph orch apply rgw SERVICE_NAME --realm= REALM_NAME --zone= ZONE_NAME --placement=\" NUMBER_OF_DAEMONS HOST_NAME_1 HOST_NAME_2 \"",
"ceph orch apply rgw rgw --realm=rdc1 --zone=rdc1z --placement=\"1 host01\"",
"ceph config set client.rgw. SERVICE_NAME rgw_realm REALM_NAME ceph config set client.rgw. SERVICE_NAME rgw_zonegroup ZONE_GROUP_NAME ceph config set client.rgw. SERVICE_NAME rgw_zone ZONE_NAME",
"ceph config set client.rgw.rgwsvcid.mons-1.jwgwwp rgw_realm rdc1 ceph config set client.rgw.rgwsvcid.mons-1.jwgwwp rgw_zonegroup rdc1zg ceph config set client.rgw.rgwsvcid.mons-1.jwgwwp rgw_zone rdc1z",
"systemctl restart ceph- CLUSTER_ID @ SERVICE_TYPE . ID .service",
"systemctl restart ceph-c4b34c6f-8365-11ba-dc31-529020a7702d@rgw.realm.zone.host01.gwasto.service",
"ceph orch restart SERVICE_TYPE",
"ceph orch restart rgw",
"radosgw-admin realm pull --url=https://tower-osd1.cephtips.com --access-key= ACCESS_KEY --secret-key= SECRET_KEY",
"radosgw-admin realm pull --url=https://tower-osd1.cephtips.com --access-key=3QV0D6ZMMCJZMSCXJ2QJ --secret-key=VpvQWcsfI9OPzUCpR4kynDLAbqa1OIKqRB6WEnH8",
"radosgw-admin period pull --url=https://tower-osd1.cephtips.com --access-key= ACCESS_KEY --secret-key= SECRET_KEY",
"radosgw-admin period pull --url=https://tower-osd1.cephtips.com --access-key=3QV0D6ZMMCJZMSCXJ2QJ --secret-key=VpvQWcsfI9OPzUCpR4kynDLAbqa1OIKqRB6WEnH8",
"radosgw-admin zone create --rgw-zone= RGW_ZONE --rgw-zonegroup= RGW_ZONE_GROUP --endpoints=https://tower-osd4.cephtips.com --access-key=_ACCESS_KEY --secret-key= SECRET_KEY",
"radosgw-admin zone create --rgw-zone=rdc2z --rgw-zonegroup=rdc1zg --endpoints=https://tower-osd4.cephtips.com --access-key=3QV0D6ZMMCJZMSCXJ2QJ --secret-key=VpvQWcsfI9OPzUCpR4kynDLAbqa1OIKqRB6WEnH8",
"radosgw-admin period update --commit",
"ceph orch apply rgw SERVICE_NAME --realm= REALM_NAME --zone= ZONE_NAME --placement=\" NUMBER_OF_DAEMONS HOST_NAME_1 HOST_NAME_2 \"",
"ceph orch apply rgw rgw --realm=rdc1 --zone=rdc2z --placement=\"1 host04\"",
"ceph config set client.rgw. SERVICE_NAME rgw_realm REALM_NAME ceph config set client.rgw. SERVICE_NAME rgw_zonegroup ZONE_GROUP_NAME ceph config set client.rgw. SERVICE_NAME rgw_zone ZONE_NAME",
"ceph config set client.rgw.rgwsvcid.mons-1.jwgwwp rgw_realm rdc1 ceph config set client.rgw.rgwsvcid.mons-1.jwgwwp rgw_zonegroup rdc1zg ceph config set client.rgw.rgwsvcid.mons-1.jwgwwp rgw_zone rdc2z",
"systemctl restart ceph- CLUSTER_ID @ SERVICE_TYPE . ID .service",
"systemctl restart ceph-c4b34c6f-8365-11ba-dc31-529020a7702d@rgw.realm.zone.host01.gwasto.service",
"ceph orch restart SERVICE_TYPE",
"ceph orch restart rgw",
"radosgw-admin sync status",
"radosgw-admin sync status realm 59762f08-470c-46de-b2b1-d92c50986e67 (ldc2) zonegroup 7cf8daf8-d279-4d5c-b73e-c7fd2af65197 (ldc2zg) zone 034ae8d3-ae0c-4e35-8760-134782cb4196 (ldc2z) metadata sync no sync (zone is master)",
"radosgw-admin sync status --rgw-realm RGW_REALM_NAME",
"radosgw-admin sync status --rgw-realm rdc1 realm 73c7b801-3736-4a89-aaf8-e23c96e6e29d (rdc1) zonegroup d67cc9c9-690a-4076-89b8-e8127d868398 (rdc1zg) zone 67584789-375b-4d61-8f12-d1cf71998b38 (rdc2z) metadata sync syncing full sync: 0/64 shards incremental sync: 64/64 shards metadata is caught up with master data sync source: 705ff9b0-68d5-4475-9017-452107cec9a0 (rdc1z) syncing full sync: 0/128 shards incremental sync: 128/128 shards data is caught up with source realm 73c7b801-3736-4a89-aaf8-e23c96e6e29d (rdc1) zonegroup d67cc9c9-690a-4076-89b8-e8127d868398 (rdc1zg) zone 67584789-375b-4d61-8f12-d1cf71998b38 (rdc2z) metadata sync syncing full sync: 0/64 shards incremental sync: 64/64 shards metadata is caught up with master data sync source: 705ff9b0-68d5-4475-9017-452107cec9a0 (rdc1z) syncing full sync: 0/128 shards incremental sync: 128/128 shards data is caught up with source",
"radosgw-admin user create --uid=\" LOCAL_USER\" --display-name=\"Local user\" --rgw-realm=_REALM_NAME --rgw-zonegroup= ZONE_GROUP_NAME --rgw-zone= ZONE_NAME",
"radosgw-admin user create --uid=\"local-user\" --display-name=\"Local user\" --rgw-realm=ldc1 --rgw-zonegroup=ldc1zg --rgw-zone=ldc1z",
"radosgw-admin sync info --bucket=buck { \"sources\": [ { \"id\": \"pipe1\", \"source\": { \"zone\": \"us-east\", \"bucket\": \"buck:115b12b3-....4409.1\" }, \"dest\": { \"zone\": \"us-west\", \"bucket\": \"buck:115b12b3-....4409.1\" }, } ], \"dests\": [ { \"id\": \"pipe1\", \"source\": { \"zone\": \"us-west\", \"bucket\": \"buck:115b12b3-....4409.1\" }, \"dest\": { \"zone\": \"us-east\", \"bucket\": \"buck:115b12b3-....4409.1\" }, }, { \"id\": \"pipe1\", \"source\": { \"zone\": \"us-west\", \"bucket\": \"buck:115b12b3-....4409.1\" }, \"dest\": { \"zone\": \"us-west-2\", \"bucket\": \"buck:115b12b3-....4409.1\" }, } ], }",
"radosgw-admin sync policy get --bucket= BUCKET_NAME",
"radosgw-admin sync policy get --bucket=mybucket",
"radosgw-admin sync group create --bucket= BUCKET_NAME --group-id= GROUP_ID --status=enabled | allowed | forbidden",
"radosgw-admin sync group create --group-id=mygroup1 --status=enabled",
"radosgw-admin bucket sync run",
"radosgw-admin bucket sync run",
"radosgw-admin sync group modify --bucket= BUCKET_NAME --group-id= GROUP_ID --status=enabled | allowed | forbidden",
"radosgw-admin sync group modify --group-id=mygroup1 --status=forbidden",
"radosgw-admin bucket sync run",
"radosgw-admin bucket sync run",
"radosgw-admin sync group get --bucket= BUCKET_NAME --group-id= GROUP_ID",
"radosgw-admin sync group get --group-id=mygroup",
"radosgw-admin sync group remove --bucket= BUCKET_NAME --group-id= GROUP_ID",
"radosgw-admin sync group remove --group-id=mygroup",
"radosgw-admin sync group flow create --bucket= BUCKET_NAME --group-id= GROUP_ID --flow-id= FLOW_ID --flow-type=directional --source-zone= SOURCE_ZONE --dest-zone= DESTINATION_ZONE",
"radosgw-admin sync group flow create --bucket= BUCKET_NAME --group-id= GROUP_ID --flow-id= FLOW_ID --flow-type=symmetrical --zones= ZONE_NAME1 , ZONE_NAME2",
"radosgw-admin sync group flow remove --bucket= BUCKET_NAME --group-id= GROUP_ID --flow-id= FLOW_ID --flow-type=directional --source-zone= SOURCE_ZONE --dest-zone= DESTINATION_ZONE",
"radosgw-admin sync group flow remove --bucket= BUCKET_NAME --group-id= GROUP_ID --flow-id= FLOW_ID --flow-type=symmetrical --zones= ZONE_NAME1 , ZONE_NAME2",
"radosgw-admin sync group flow remove --group-id= GROUP_ID --flow-id= FLOW_ID --flow-type=symmetrical --zones= ZONE_NAME1 , ZONE_NAME2",
"radosgw-admin sync group pipe create --bucket= BUCKET_NAME --group-id= GROUP_ID --pipe-id= PIPE_ID --source-zones=' ZONE_NAME ',' ZONE_NAME2 '... --source-bucket= SOURCE_BUCKET --source-bucket-id= SOURCE_BUCKET_ID --dest-zones=' ZONE_NAME ',' ZONE_NAME2 '... --dest-bucket= DESTINATION_BUCKET --dest-bucket-id= DESTINATION_BUCKET_ID --prefix= SOURCE_PREFIX --prefix-rm --tags-add= KEY1=VALUE1 , KEY2=VALUE2 ,.. --tags-rm= KEY1=VALUE1 , KEY2=VALUE2 , ... --dest-owner= OWNER_ID --storage-class= STORAGE_CLASS --mode= USER --uid= USER_ID",
"radosgw-admin sync group pipe modify --bucket= BUCKET_NAME --group-id= GROUP_ID --pipe-id= PIPE_ID --source-zones=' ZONE_NAME ',' ZONE_NAME2 '... --source-bucket= SOURCE_BUCKET1 --source-bucket-id= SOURCE_BUCKET_ID --dest-zones=' ZONE_NAME ',' ZONE_NAME2 '... --dest-bucket= DESTINATION_BUCKET1 --dest-bucket-id=_DESTINATION_BUCKET-ID",
"radosgw-admin sync group pipe modify --group-id=zonegroup --pipe-id=pipe --dest-zones='primary','secondary','tertiary' --source-zones='primary','secondary','tertiary' --source-bucket=pri-bkt-1 --dest-bucket=pri-bkt-1",
"radosgw-admin sync group pipe remove --bucket= BUCKET_NAME --group-id= GROUP_ID --pipe-id= PIPE_ID --source-zones=' ZONE_NAME ',' ZONE_NAME2 '... --source-bucket= SOURCE_BUCKET , --source-bucket-id= SOURCE_BUCKET_ID --dest-zones=' ZONE_NAME ',' ZONE_NAME2 '... --dest-bucket= DESTINATION_BUCKET --dest-bucket-id= DESTINATION_BUCKET-ID",
"radosgw-admin sync group pipe remove --group-id=zonegroup --pipe-id=pipe --dest-zones='primary','secondary','tertiary' --source-zones='primary','secondary','tertiary' --source-bucket=pri-bkt-1 --dest-bucket=pri-bkt-1",
"radosgw-admin sync group pipe remove --bucket= BUCKET_NAME --group-id= GROUP_ID --pipe-id= PIPE_ID",
"radosgw-admin sync group pipe remove -bucket-name=mybuck --group-id=zonegroup --pipe-id=pipe",
"radosgw-admin sync info --bucket= BUCKET_NAME --effective-zone-name= ZONE_NAME",
"radosgw-admin sync info",
"radosgw-admin sync group create --group-id=group1 --status=allowed",
"radosgw-admin sync group flow create --group-id=group1 --flow-id=flow-mirror --flow-type=symmetrical --zones=us-east,us-west",
"radosgw-admin sync group pipe create --group-id=group1 --pipe-id=pipe1 --source-zones='*' --source-bucket='*' --dest-zones='*' --dest-bucket='*'",
"radosgw-admin sync group modify --group-id=group1 --status=enabled",
"radosgw-admin period update --commit",
"radosgw-admin sync info -bucket buck { \"sources\": [ { \"id\": \"pipe1\", \"source\": { \"zone\": \"us-east\", \"bucket\": \"buck:115b12b3-....4409.1\" }, \"dest\": { \"zone\": \"us-west\", \"bucket\": \"buck:115b12b3-....4409.1\" }, } ], \"dests\": [ { \"id\": \"pipe1\", \"source\": { \"zone\": \"us-west\", \"bucket\": \"buck:115b12b3-....4409.1\" }, \"dest\": { \"zone\": \"us-east\", \"bucket\": \"buck:115b12b3-....4409.1\" }, } ], }",
"radosgw-admin sync group create --group-id= GROUP_ID --status=allowed",
"radosgw-admin sync group create --group-id=group1 --status=allowed",
"radosgw-admin sync group flow create --group-id= GROUP_ID --flow-id= FLOW_ID --flow-type=directional --source-zone= SOURCE_ZONE_NAME --dest-zone= DESTINATION_ZONE_NAME",
"radosgw-admin sync group flow create --group-id=group1 --flow-id=us-west-backup --flow-type=directional --source-zone=us-west --dest-zone=us-west-2",
"radosgw-admin sync group pipe create --group-id= GROUP_ID --pipe-id= PIPE_ID --source-zones=' SOURCE_ZONE_NAME ' --dest-zones=' DESTINATION_ZONE_NAME '",
"radosgw-admin sync group pipe create --group-id=group1 --pipe-id=pipe1 --source-zones='us-west' --dest-zones='us-west-2'",
"radosgw-admin period update --commit",
"radosgw-admin sync info",
"radosgw-admin sync group create --group-id= GROUP_ID --status=allowed --bucket= BUCKET_NAME",
"radosgw-admin sync group create --group-id=group1 --status=allowed --bucket=buck",
"radosgw-admin sync group flow create --bucket-name= BUCKET_NAME --group-id= GROUP_ID --flow-id= FLOW_ID --flow-type=directional --source-zone= SOURCE_ZONE_NAME --dest-zone= DESTINATION_ZONE_NAME",
"radosgw-admin sync group flow create --bucket-name=buck --group-id=group1 --flow-id=us-west-backup --flow-type=directional --source-zone=us-west --dest-zone=us-west-2",
"radosgw-admin sync group pipe create --group-id= GROUP_ID --bucket-name= BUCKET_NAME --pipe-id= PIPE_ID --source-zones=' SOURCE_ZONE_NAME ' --dest-zones=' DESTINATION_ZONE_NAME '",
"radosgw-admin sync group pipe create --group-id=group1 --bucket-name=buck --pipe-id=pipe1 --source-zones='us-west' --dest-zones='us-west-2'",
"radosgw-admin sync info --bucket-name= BUCKET_NAME",
"radosgw-admin sync group modify --group-id=group1 --status=allowed",
"radosgw-admin period update --commit",
"radosgw-admin sync group create --bucket=buck --group-id=buck-default --status=enabled",
"radosgw-admin sync group pipe create --bucket=buck --group-id=buck-default --pipe-id=pipe1 --source-zones='*' --dest-zones='*'",
"radosgw-admin bucket sync info --bucket buck realm 33157555-f387-44fc-b4b4-3f9c0b32cd66 (india) zonegroup 594f1f63-de6f-4e1e-90b6-105114d7ad55 (shared) zone ffaa5ba4-c1bd-4c17-b176-2fe34004b4c5 (primary) bucket :buck[ffaa5ba4-c1bd-4c17-b176-2fe34004b4c5.16191.1] source zone e0e75beb-4e28-45ff-8d48-9710de06dcd0 bucket :buck[ffaa5ba4-c1bd-4c17-b176-2fe34004b4c5.16191.1]",
"radosgw-admin sync info --bucket buck { \"id\": \"pipe1\", \"source\": { \"zone\": \"secondary\", \"bucket\": \"buck:ffaa5ba4-c1bd-4c17-b176-2fe34004b4c5.16191.1\" }, \"dest\": { \"zone\": \"primary\", \"bucket\": \"buck:ffaa5ba4-c1bd-4c17-b176-2fe34004b4c5.16191.1\" }, \"params\": { \"source\": { \"filter\": { \"tags\": [] } }, \"dest\": {}, \"priority\": 0, \"mode\": \"system\", \"user\": \"\" } }, { \"id\": \"pipe1\", \"source\": { \"zone\": \"primary\", \"bucket\": \"buck:ffaa5ba4-c1bd-4c17-b176-2fe34004b4c5.16191.1\" }, \"dest\": { \"zone\": \"secondary\", \"bucket\": \"buck:ffaa5ba4-c1bd-4c17-b176-2fe34004b4c5.16191.1\" }, \"params\": { \"source\": { \"filter\": { \"tags\": [] } }, \"dest\": {}, \"priority\": 0, \"mode\": \"system\", \"user\": \"\" } }",
"radosgw-admin sync group create --bucket= BUCKET_NAME --group-id= GROUP_ID --status=enabled",
"radosgw-admin sync group create --bucket=buck4 --group-id=buck4-default --status=enabled",
"radosgw-admin sync group pipe create --bucket-name= BUCKET_NAME --group-id= GROUP_ID --pipe-id= PIPE_ID --source-zones= SOURCE_ZONE_NAME --source-bucket= SOURCE_BUCKET_NAME --dest-zones= DESTINATION_ZONE_NAME",
"radosgw-admin sync group pipe create --bucket=buck4 --group-id=buck4-default --pipe-id=pipe1 --source-zones='*' --source-bucket=buck5 --dest-zones='*'",
"radosgw-admin sync group pipe modify --bucket=buck4 --group-id=buck4-default --pipe-id=pipe1 --source-zones=us-west --source-bucket=buck5 --dest-zones='*'",
"radosgw-admin sync info --bucket-name= BUCKET_NAME",
"radosgw-admin sync info --bucket=buck4 { \"sources\": [], \"dests\": [], \"hints\": { \"sources\": [], \"dests\": [ \"buck4:115b12b3-....14433.2\" ] }, \"resolved-hints-1\": { \"sources\": [], \"dests\": [ { \"id\": \"pipe1\", \"source\": { \"zone\": \"us-west\", \"bucket\": \"buck5\" }, \"dest\": { \"zone\": \"us-east\", \"bucket\": \"buck4:115b12b3-....14433.2\" }, }, { \"id\": \"pipe1\", \"source\": { \"zone\": \"us-west\", \"bucket\": \"buck5\" }, \"dest\": { \"zone\": \"us-west-2\", \"bucket\": \"buck4:115b12b3-....14433.2\" }, } ] }, \"resolved-hints\": { \"sources\": [], \"dests\": [] }",
"radosgw-admin sync group create --bucket= BUCKET_NAME --group-id= GROUP_ID --status=enabled",
"radosgw-admin sync group create --bucket=buck6 --group-id=buck6-default --status=enabled",
"radosgw-admin sync group pipe create --bucket-name= BUCKET_NAME --group-id= GROUP_ID --pipe-id= PIPE_ID --source-zones= SOURCE_ZONE_NAME --dest-zones= DESTINATION_ZONE_NAME --dest-bucket= DESTINATION_BUCKET_NAME",
"radosgw-admin sync group pipe create --bucket=buck6 --group-id=buck6-default --pipe-id=pipe1 --source-zones='*' --dest-zones='*' --dest-bucket=buck5",
"radosgw-admin sync group pipe modify --bucket=buck6 --group-id=buck6-default --pipe-id=pipe1 --source-zones='*' --dest-zones='us-west' --dest-bucket=buck5",
"radosgw-admin sync info --bucket-name= BUCKET_NAME",
"radosgw-admin sync info --bucket buck5 { \"sources\": [], \"dests\": [ { \"id\": \"pipe1\", \"source\": { \"zone\": \"us-west\", \"bucket\": \"buck6:c7887c5b-f6ff-4d5f-9736-aa5cdb4a15e8.20493.4\" }, \"dest\": { \"zone\": \"us-east\", \"bucket\": \"buck5\" }, \"params\": { \"source\": { \"filter\": { \"tags\": [] } }, \"dest\": {}, \"priority\": 0, \"mode\": \"system\", \"user\": \"s3cmd\" } }, ], \"hints\": { \"sources\": [], \"dests\": [ \"buck5\" ] }, \"resolved-hints-1\": { \"sources\": [], \"dests\": [] }, \"resolved-hints\": { \"sources\": [], \"dests\": [] } }",
"radosgw-admin sync group create --bucket= BUCKET_NAME --group-id= GROUP_ID --status=enabled",
"radosgw-admin sync group create --bucket=buck1 --group-id=buck8-default --status=enabled",
"radosgw-admin sync group pipe create --bucket= BUCKET_NAME --group-id= GROUP_ID --pipe-id= PIPE_ID --tags-add= KEY1 = VALUE1 , KEY2 = VALUE2 --source-zones=' ZONE_NAME1 ',' ZONE_NAME2 ' --dest-zones=' ZONE_NAME1 ',' ZONE_NAME2 '",
"radosgw-admin sync group pipe create --bucket=buck1 --group-id=buck1-default --pipe-id=pipe-tags --tags-add=color=blue,color=red --source-zones='*' --dest-zones='*'",
"radosgw-admin sync group pipe create --bucket= BUCKET_NAME --group-id= GROUP_ID --pipe-id= PIPE_ID --prefix= PREFIX --source-zones=' ZONE_NAME1 ',' ZONE_NAME2 ' --dest-zones=' ZONE_NAME1 ',' ZONE_NAME2 '",
"radosgw-admin sync group pipe create --bucket=buck1 --group-id=buck1-default --pipe-id=pipe-prefix --prefix=foo/ --source-zones='*' --dest-zones='*' \\",
"radosgw-admin sync info --bucket= BUCKET_NAME",
"radosgw-admin sync info --bucket=buck1",
"radosgw-admin sync group modify --group-id buck-default --status forbidden --bucket buck { \"groups\": [ { \"id\": \"buck-default\", \"data_flow\": {}, \"pipes\": [ { \"id\": \"pipe1\", \"source\": { \"bucket\": \"*\", \"zones\": [ \"*\" ] }, \"dest\": { \"bucket\": \"*\", \"zones\": [ \"*\" ] }, \"params\": { \"source\": { \"filter\": { \"tags\": [] } }, \"dest\": {}, \"priority\": 0, \"mode\": \"system\", } } ], \"status\": \"forbidden\" } ] }",
"radosgw-admin sync info --bucket buck { \"sources\": [], \"dests\": [], \"hints\": { \"sources\": [], \"dests\": [] }, \"resolved-hints-1\": { \"sources\": [], \"dests\": [] }, \"resolved-hints\": { \"sources\": [], \"dests\": [] } }",
"radosgw-admin realm create --rgw-realm= REALM_NAME",
"radosgw-admin realm create --rgw-realm=test_realm",
"radosgw-admin realm default --rgw-realm= REALM_NAME",
"radosgw-admin realm default --rgw-realm=test_realm1",
"radosgw-admin realm default --rgw-realm=test_realm",
"radosgw-admin realm delete --rgw-realm= REALM_NAME",
"radosgw-admin realm delete --rgw-realm=test_realm",
"radosgw-admin realm get --rgw-realm= REALM_NAME",
"radosgw-admin realm get --rgw-realm=test_realm >filename.json",
"{ \"id\": \"0a68d52e-a19c-4e8e-b012-a8f831cb3ebc\", \"name\": \"test_realm\", \"current_period\": \"b0c5bbef-4337-4edd-8184-5aeab2ec413b\", \"epoch\": 1 }",
"radosgw-admin realm set --rgw-realm= REALM_NAME --infile= IN_FILENAME",
"radosgw-admin realm set --rgw-realm=test_realm --infile=filename.json",
"radosgw-admin realm list",
"radosgw-admin realm list-periods",
"radosgw-admin realm pull --url= URL_TO_MASTER_ZONE_GATEWAY --access-key= ACCESS_KEY --secret= SECRET_KEY",
"radosgw-admin realm rename --rgw-realm= REALM_NAME --realm-new-name= NEW_REALM_NAME",
"radosgw-admin realm rename --rgw-realm=test_realm --realm-new-name=test_realm2",
"radosgw-admin period update --commit",
"radosgw-admin period update --commit",
"radosgw-admin zonegroup create --rgw-zonegroup= ZONE_GROUP_NAME [--rgw-realm= REALM_NAME ] [--master]",
"radosgw-admin zonegroup create --rgw-zonegroup=zonegroup1 --rgw-realm=test_realm --default",
"zonegroup modify --rgw-zonegroup= ZONE_GROUP_NAME",
"radosgw-admin zonegroup modify --rgw-zonegroup=zonegroup1",
"radosgw-admin zonegroup default --rgw-zonegroup= ZONE_GROUP_NAME",
"radosgw-admin zonegroup default --rgw-zonegroup=zonegroup2",
"radosgw-admin period update --commit",
"radosgw-admin period update --commit",
"radosgw-admin zonegroup default --rgw-zonegroup=us",
"radosgw-admin period update --commit",
"radosgw-admin zonegroup add --rgw-zonegroup= ZONE_GROUP_NAME --rgw-zone= ZONE_NAME",
"radosgw-admin period update --commit",
"radosgw-admin zonegroup remove --rgw-zonegroup= ZONE_GROUP_NAME --rgw-zone= ZONE_NAME",
"radosgw-admin period update --commit",
"radosgw-admin zonegroup rename --rgw-zonegroup= ZONE_GROUP_NAME --zonegroup-new-name= NEW_ZONE_GROUP_NAME",
"radosgw-admin period update --commit",
"radosgw-admin zonegroup delete --rgw-zonegroup= ZONE_GROUP_NAME",
"radosgw-admin period update --commit",
"radosgw-admin zonegroup list",
"{ \"default_info\": \"90b28698-e7c3-462c-a42d-4aa780d24eda\", \"zonegroups\": [ \"us\" ] }",
"radosgw-admin zonegroup get [--rgw-zonegroup= ZONE_GROUP_NAME ]",
"{ \"id\": \"90b28698-e7c3-462c-a42d-4aa780d24eda\", \"name\": \"us\", \"api_name\": \"us\", \"is_master\": \"true\", \"endpoints\": [ \"http:\\/\\/rgw1:80\" ], \"hostnames\": [], \"hostnames_s3website\": [], \"master_zone\": \"9248cab2-afe7-43d8-a661-a40bf316665e\", \"zones\": [ { \"id\": \"9248cab2-afe7-43d8-a661-a40bf316665e\", \"name\": \"us-east\", \"endpoints\": [ \"http:\\/\\/rgw1\" ], \"log_meta\": \"true\", \"log_data\": \"true\", \"bucket_index_max_shards\": 11, \"read_only\": \"false\" }, { \"id\": \"d1024e59-7d28-49d1-8222-af101965a939\", \"name\": \"us-west\", \"endpoints\": [ \"http:\\/\\/rgw2:80\" ], \"log_meta\": \"false\", \"log_data\": \"true\", \"bucket_index_max_shards\": 11, \"read_only\": \"false\" } ], \"placement_targets\": [ { \"name\": \"default-placement\", \"tags\": [] } ], \"default_placement\": \"default-placement\", \"realm_id\": \"ae031368-8715-4e27-9a99-0c9468852cfe\" }",
"radosgw-admin zonegroup set --infile zonegroup.json",
"radosgw-admin period update --commit",
"{ \"zonegroups\": [ { \"key\": \"90b28698-e7c3-462c-a42d-4aa780d24eda\", \"val\": { \"id\": \"90b28698-e7c3-462c-a42d-4aa780d24eda\", \"name\": \"us\", \"api_name\": \"us\", \"is_master\": \"true\", \"endpoints\": [ \"http:\\/\\/rgw1:80\" ], \"hostnames\": [], \"hostnames_s3website\": [], \"master_zone\": \"9248cab2-afe7-43d8-a661-a40bf316665e\", \"zones\": [ { \"id\": \"9248cab2-afe7-43d8-a661-a40bf316665e\", \"name\": \"us-east\", \"endpoints\": [ \"http:\\/\\/rgw1\" ], \"log_meta\": \"true\", \"log_data\": \"true\", \"bucket_index_max_shards\": 11, \"read_only\": \"false\" }, { \"id\": \"d1024e59-7d28-49d1-8222-af101965a939\", \"name\": \"us-west\", \"endpoints\": [ \"http:\\/\\/rgw2:80\" ], \"log_meta\": \"false\", \"log_data\": \"true\", \"bucket_index_max_shards\": 11, \"read_only\": \"false\" } ], \"placement_targets\": [ { \"name\": \"default-placement\", \"tags\": [] } ], \"default_placement\": \"default-placement\", \"realm_id\": \"ae031368-8715-4e27-9a99-0c9468852cfe\" } } ], \"master_zonegroup\": \"90b28698-e7c3-462c-a42d-4aa780d24eda\", \"bucket_quota\": { \"enabled\": false, \"max_size_kb\": -1, \"max_objects\": -1 }, \"user_quota\": { \"enabled\": false, \"max_size_kb\": -1, \"max_objects\": -1 } }",
"radosgw-admin zonegroup-map set --infile zonegroupmap.json",
"radosgw-admin period update --commit",
"radosgw-admin zone create --rgw-zone= ZONE_NAME [--zonegroup= ZONE_GROUP_NAME ] [--endpoints= ENDPOINT_PORT [,<endpoint:port>] [--master] [--default] --access-key ACCESS_KEY --secret SECRET_KEY",
"radosgw-admin period update --commit",
"radosgw-admin period update --commit",
"radosgw-admin zonegroup remove --rgw-zonegroup= ZONE_GROUP_NAME --rgw-zone= ZONE_NAME",
"radosgw-admin period update --commit",
"radosgw-admin zone delete --rgw-zone= ZONE_NAME",
"radosgw-admin period update --commit",
"ceph osd pool delete DELETED_ZONE_NAME .rgw.control DELETED_ZONE_NAME .rgw.control --yes-i-really-really-mean-it ceph osd pool delete DELETED_ZONE_NAME .rgw.data.root DELETED_ZONE_NAME .rgw.data.root --yes-i-really-really-mean-it ceph osd pool delete DELETED_ZONE_NAME .rgw.log DELETED_ZONE_NAME .rgw.log --yes-i-really-really-mean-it ceph osd pool delete DELETED_ZONE_NAME .rgw.users.uid DELETED_ZONE_NAME .rgw.users.uid --yes-i-really-really-mean-it",
"radosgw-admin zone modify [options] --access-key=<key> --secret/--secret-key=<key> --master --default --endpoints=<list>",
"radosgw-admin period update --commit",
"radosgw-admin zone list",
"radosgw-admin zone get [--rgw-zone= ZONE_NAME ]",
"{ \"domain_root\": \".rgw\", \"control_pool\": \".rgw.control\", \"gc_pool\": \".rgw.gc\", \"log_pool\": \".log\", \"intent_log_pool\": \".intent-log\", \"usage_log_pool\": \".usage\", \"user_keys_pool\": \".users\", \"user_email_pool\": \".users.email\", \"user_swift_pool\": \".users.swift\", \"user_uid_pool\": \".users.uid\", \"system_key\": { \"access_key\": \"\", \"secret_key\": \"\"}, \"placement_pools\": [ { \"key\": \"default-placement\", \"val\": { \"index_pool\": \".rgw.buckets.index\", \"data_pool\": \".rgw.buckets\"} } ] }",
"radosgw-admin zone set --rgw-zone=test-zone --infile zone.json",
"radosgw-admin period update --commit",
"radosgw-admin zone rename --rgw-zone= ZONE_NAME --zone-new-name= NEW_ZONE_NAME",
"radosgw-admin period update --commit",
"firewall-cmd --zone=public --add-port=636/tcp firewall-cmd --zone=public --add-port=636/tcp --permanent",
"certutil -d /etc/openldap/certs -A -t \"TC,,\" -n \"msad-frog-MSAD-FROG-CA\" -i /path/to/ldap.pem",
"setsebool -P httpd_can_network_connect on",
"chmod 644 /etc/openldap/certs/*",
"ldapwhoami -H ldaps://rh-directory-server.example.com -d 9",
"radosgw-admin metadata list user",
"ldapsearch -x -D \"uid=ceph,ou=People,dc=example,dc=com\" -W -H ldaps://example.com -b \"ou=People,dc=example,dc=com\" -s sub 'uid=ceph'",
"ceph config set client.rgw OPTION VALUE",
"ceph config set client.rgw rgw_ldap_secret /etc/bindpass",
"service_type: rgw service_id: rgw.1 service_name: rgw.rgw.1 placement: label: rgw extra_container_args: - -v - /etc/bindpass:/etc/bindpass",
"ceph config set client.rgw OPTION VALUE",
"ceph config set client.rgw rgw_ldap_uri ldaps://:636 ceph config set client.rgw rgw_ldap_binddn \"ou=poc,dc=example,dc=local\" ceph config set client.rgw rgw_ldap_searchdn \"ou=poc,dc=example,dc=local\" ceph config set client.rgw rgw_ldap_dnattr \"uid\" ceph config set client.rgw rgw_s3_auth_use_ldap true",
"systemctl restart ceph- CLUSTER_ID @ SERVICE_TYPE . ID .service",
"systemctl restart ceph-c4b34c6f-8365-11ba-dc31-529020a7702d@rgw.realm.zone.host01.gwasto.service",
"ceph orch restart SERVICE_TYPE",
"ceph orch restart rgw",
"\"objectclass=inetorgperson\"",
"\"(&(uid=joe)(objectclass=inetorgperson))\"",
"\"(&(uid=@USERNAME@)(memberOf=cn=ceph-users,ou=groups,dc=mycompany,dc=com))\"",
"export RGW_ACCESS_KEY_ID=\" USERNAME \"",
"export RGW_SECRET_ACCESS_KEY=\" PASSWORD \"",
"radosgw-token --encode --ttype=ldap",
"radosgw-token --encode --ttype=ad",
"export RGW_ACCESS_KEY_ID=\"ewogICAgIlJHV19UT0tFTiI6IHsKICAgICAgICAidmVyc2lvbiI6IDEsCiAgICAgICAgInR5cGUiOiAibGRhcCIsCiAgICAgICAgImlkIjogImNlcGgiLAogICAgICAgICJrZXkiOiAiODAwI0dvcmlsbGEiCiAgICB9Cn0K\"",
"cat .aws/credentials [default] aws_access_key_id = ewogICaGbnjlwe9UT0tFTiI6IHsKICAgICAgICAidmVyc2lvbiI6IDEsCiAgICAgICAgInR5cGUiOiAiYWQiLAogICAgICAgICJpZCI6ICJjZXBoIiwKICAgICAgICAia2V5IjogInBhc3M0Q2VwaCIKICAgIH0KfQo= aws_secret_access_key =",
"aws s3 ls --endpoint http://host03 2023-12-11 17:08:50 mybucket 2023-12-24 14:55:44 mybucket2",
"radosgw-admin user info --uid dir1 { \"user_id\": \"dir1\", \"display_name\": \"dir1\", \"email\": \"\", \"suspended\": 0, \"max_buckets\": 1000, \"subusers\": [], \"keys\": [], \"swift_keys\": [], \"caps\": [], \"op_mask\": \"read, write, delete\", \"default_placement\": \"\", \"default_storage_class\": \"\", \"placement_tags\": [], \"bucket_quota\": { \"enabled\": false, \"check_on_raw\": false, \"max_size\": -1, \"max_size_kb\": 0, \"max_objects\": -1 }, \"user_quota\": { \"enabled\": false, \"check_on_raw\": false, \"max_size\": -1, \"max_size_kb\": 0, \"max_objects\": -1 }, \"temp_url_keys\": [], \"type\": \"ldap\", \"mfa_ids\": [] }",
"radosgw-admin metadata list user",
"ldapsearch -x -D \"uid=ceph,ou=People,dc=example,dc=com\" -W -H ldaps://example.com -b \"ou=People,dc=example,dc=com\" -s sub 'uid=ceph'",
"ceph config set client.rgw OPTION VALUE",
"ceph config set client.rgw rgw_ldap_secret /etc/bindpass",
"service_type: rgw service_id: rgw.1 service_name: rgw.rgw.1 placement: label: rgw extra_container_args: - -v - /etc/bindpass:/etc/bindpass",
"ceph config set client.rgw OPTION VALUE",
"ceph config set client.rgw rgw_ldap_uri ldaps://_FQDN_:636 ceph config set client.rgw rgw_ldap_binddn \"_BINDDN_\" ceph config set client.rgw rgw_ldap_searchdn \"_SEARCHDN_\" ceph config set client.rgw rgw_ldap_dnattr \"cn\" ceph config set client.rgw rgw_s3_auth_use_ldap true",
"rgw_ldap_binddn \"uid=ceph,cn=users,cn=accounts,dc=example,dc=com\"",
"rgw_ldap_searchdn \"cn=users,cn=accounts,dc=example,dc=com\"",
"systemctl restart ceph- CLUSTER_ID @ SERVICE_TYPE . ID .service",
"systemctl restart ceph-c4b34c6f-8365-11ba-dc31-529020a7702d@rgw.realm.zone.host01.gwasto.service",
"ceph orch restart SERVICE_TYPE",
"ceph orch restart rgw",
"export RGW_ACCESS_KEY_ID=\" USERNAME \"",
"export RGW_SECRET_ACCESS_KEY=\" PASSWORD \"",
"radosgw-token --encode --ttype=ldap",
"radosgw-token --encode --ttype=ad",
"export RGW_ACCESS_KEY_ID=\"ewogICAgIlJHV19UT0tFTiI6IHsKICAgICAgICAidmVyc2lvbiI6IDEsCiAgICAgICAgInR5cGUiOiAibGRhcCIsCiAgICAgICAgImlkIjogImNlcGgiLAogICAgICAgICJrZXkiOiAiODAwI0dvcmlsbGEiCiAgICB9Cn0K\"",
"cat .aws/credentials [default] aws_access_key_id = ewogICaGbnjlwe9UT0tFTiI6IHsKICAgICAgICAidmVyc2lvbiI6IDEsCiAgICAgICAgInR5cGUiOiAiYWQiLAogICAgICAgICJpZCI6ICJjZXBoIiwKICAgICAgICAia2V5IjogInBhc3M0Q2VwaCIKICAgIH0KfQo= aws_secret_access_key =",
"aws s3 ls --endpoint http://host03 2023-12-11 17:08:50 mybucket 2023-12-24 14:55:44 mybucket2",
"radosgw-admin user info --uid dir1 { \"user_id\": \"dir1\", \"display_name\": \"dir1\", \"email\": \"\", \"suspended\": 0, \"max_buckets\": 1000, \"subusers\": [], \"keys\": [], \"swift_keys\": [], \"caps\": [], \"op_mask\": \"read, write, delete\", \"default_placement\": \"\", \"default_storage_class\": \"\", \"placement_tags\": [], \"bucket_quota\": { \"enabled\": false, \"check_on_raw\": false, \"max_size\": -1, \"max_size_kb\": 0, \"max_objects\": -1 }, \"user_quota\": { \"enabled\": false, \"check_on_raw\": false, \"max_size\": -1, \"max_size_kb\": 0, \"max_objects\": -1 }, \"temp_url_keys\": [], \"type\": \"ldap\", \"mfa_ids\": [] }",
"openstack service create --name=swift --description=\"Swift Service\" object-store",
"openstack endpoint create --region REGION_NAME swift admin \" URL \" openstack endpoint create --region REGION_NAME swift public \" URL \" openstack endpoint create --region REGION_NAME swift internal \" URL \"",
"openstack endpoint create --region us-west swift admin \"http://radosgw.example.com:8080/swift/v1\" openstack endpoint create --region us-west swift public \"http://radosgw.example.com:8080/swift/v1\" openstack endpoint create --region us-west swift internal \"http://radosgw.example.com:8080/swift/v1\"",
"openstack endpoint list --service=swift",
"openstack endpoint show ENDPOINT_ID",
"mkdir /var/ceph/nss openssl x509 -in /etc/keystone/ssl/certs/ca.pem -pubkey | certutil -d /var/ceph/nss -A -n ca -t \"TCu,Cu,Tuw\" openssl x509 -in /etc/keystone/ssl/certs/signing_cert.pem -pubkey | certutil -A -d /var/ceph/nss -n signing_cert -t \"P,P,P\"",
"ceph config set client.rgw nss_db_path \"/var/lib/ceph/radosgw/ceph-rgw.rgw01/nss\"",
"ceph config set client.rgw rgw_keystone_verify_ssl TRUE / FALSE ceph config set client.rgw rgw_s3_auth_use_keystone TRUE / FALSE ceph config set client.rgw rgw_keystone_api_version API_VERSION ceph config set client.rgw rgw_keystone_url KEYSTONE_URL : ADMIN_PORT ceph config set client.rgw rgw_keystone_accepted_roles ACCEPTED_ROLES_ ceph config set client.rgw rgw_keystone_accepted_admin_roles ACCEPTED_ADMIN_ROLES ceph config set client.rgw rgw_keystone_admin_domain default ceph config set client.rgw rgw_keystone_admin_project SERVICE_NAME ceph config set client.rgw rgw_keystone_admin_user KEYSTONE_TENANT_USER_NAME ceph config set client.rgw rgw_keystone_admin_password KEYSTONE_TENANT_USER_PASSWORD ceph config set client.rgw rgw_keystone_implicit_tenants KEYSTONE_IMPLICIT_TENANT_NAME ceph config set client.rgw rgw_swift_versioning_enabled TRUE / FALSE ceph config set client.rgw rgw_swift_enforce_content_length TRUE / FALSE ceph config set client.rgw rgw_swift_account_in_url TRUE / FALSE ceph config set client.rgw rgw_trust_forwarded_https TRUE / FALSE ceph config set client.rgw rgw_max_attr_name_len MAXIMUM_LENGTH_OF_METADATA_NAMES ceph config set client.rgw rgw_max_attrs_num_in_req MAXIMUM_NUMBER_OF_METADATA_ITEMS ceph config set client.rgw rgw_max_attr_size MAXIMUM_LENGTH_OF_METADATA_VALUE ceph config set client.rgw rgw_keystone_accepted_reader_roles SwiftSystemReader",
"ceph config set client.rgw rgw_keystone_verify_ssl false ceph config set client.rgw rgw_s3_auth_use_keystone true ceph config set client.rgw rgw_keystone_api_version 3 ceph config set client.rgw rgw_keystone_url http://<public Keystone endpoint>:5000/ ceph config set client.rgw rgw_keystone_accepted_roles 'member, Member, admin' ceph config set client.rgw rgw_keystone_accepted_admin_roles 'ResellerAdmin, swiftoperator' ceph config set client.rgw rgw_keystone_admin_domain default ceph config set client.rgw rgw_keystone_admin_project service ceph config set client.rgw rgw_keystone_admin_user swift ceph config set client.rgw rgw_keystone_admin_password password ceph config set client.rgw rgw_keystone_implicit_tenants true ceph config set client.rgw rgw_swift_versioning_enabled true ceph config set client.rgw rgw_swift_enforce_content_length true ceph config set client.rgw rgw_swift_account_in_url true ceph config set client.rgw rgw_trust_forwarded_https true ceph config set client.rgw rgw_max_attr_name_len 128 ceph config set client.rgw rgw_max_attrs_num_in_req 90 ceph config set client.rgw rgw_max_attr_size 1024 ceph config set client.rgw rgw_keystone_accepted_reader_roles SwiftSystemReader",
"systemctl restart ceph- CLUSTER_ID @ SERVICE_TYPE . ID .service",
"systemctl restart ceph-c4b34c6f-8365-11ba-dc31-529020a7702d@rgw.realm.zone.host01.gwasto.service",
"ceph orch restart SERVICE_TYPE",
"ceph orch restart rgw",
"grubby --update-kernel=ALL --args=\"intel_iommu=on\"",
"dnf install -y qatlib-service qatlib qatzip qatengine",
"usermod -aG qat root",
"cat /etc/sysconfig/qat ServicesEnabled=asym POLICY=8",
"cat /etc/sysconfig/qat ServicesEnabled=dc POLICY=8",
"cat /etc/sysconfig/qat ServicesEnabled=asym,dc POLICY=8",
"sudo vim /etc/security/limits.conf root - memlock 500000 ceph - memlock 500000",
"sudo su -l USDUSER",
"systemctl enable qat",
"systemctl reboot",
"service_type: rgw service_id: rgw_qat placement: label: rgw extra_container_args: - \"-v /etc/group:/etc/group:ro\" - \"--group-add=keep-groups\" - \"--cap-add=SYS_ADMIN\" - \"--cap-add=SYS_PTRACE\" - \"--cap-add=IPC_LOCK\" - \"--security-opt seccomp=unconfined\" - \"--ulimit memlock=209715200:209715200\" - \"--device=/dev/qat_adf_ctl:/dev/qat_adf_ctl\" - \"--device=/dev/vfio/vfio:/dev/vfio/vfio\" - \"--device=/dev/vfio/333:/dev/vfio/333\" - \"--device=/dev/vfio/334:/dev/vfio/334\" - \"--device=/dev/vfio/335:/dev/vfio/335\" - \"--device=/dev/vfio/336:/dev/vfio/336\" - \"--device=/dev/vfio/337:/dev/vfio/337\" - \"--device=/dev/vfio/338:/dev/vfio/338\" - \"--device=/dev/vfio/339:/dev/vfio/339\" - \"--device=/dev/vfio/340:/dev/vfio/340\" - \"--device=/dev/vfio/341:/dev/vfio/341\" - \"--device=/dev/vfio/342:/dev/vfio/342\" - \"--device=/dev/vfio/343:/dev/vfio/343\" - \"--device=/dev/vfio/344:/dev/vfio/344\" - \"--device=/dev/vfio/345:/dev/vfio/345\" - \"--device=/dev/vfio/346:/dev/vfio/346\" - \"--device=/dev/vfio/347:/dev/vfio/347\" - \"--device=/dev/vfio/348:/dev/vfio/348\" - \"--device=/dev/vfio/349:/dev/vfio/349\" - \"--device=/dev/vfio/350:/dev/vfio/350\" - \"--device=/dev/vfio/351:/dev/vfio/351\" - \"--device=/dev/vfio/352:/dev/vfio/352\" - \"--device=/dev/vfio/353:/dev/vfio/353\" - \"--device=/dev/vfio/354:/dev/vfio/354\" - \"--device=/dev/vfio/355:/dev/vfio/355\" - \"--device=/dev/vfio/356:/dev/vfio/356\" - \"--device=/dev/vfio/357:/dev/vfio/357\" - \"--device=/dev/vfio/358:/dev/vfio/358\" - \"--device=/dev/vfio/359:/dev/vfio/359\" - \"--device=/dev/vfio/360:/dev/vfio/360\" - \"--device=/dev/vfio/361:/dev/vfio/361\" - \"--device=/dev/vfio/362:/dev/vfio/362\" - \"--device=/dev/vfio/363:/dev/vfio/363\" - \"--device=/dev/vfio/364:/dev/vfio/364\" - \"--device=/dev/vfio/365:/dev/vfio/365\" - \"--device=/dev/vfio/366:/dev/vfio/366\" - \"--device=/dev/vfio/367:/dev/vfio/367\" - \"--device=/dev/vfio/368:/dev/vfio/368\" - \"--device=/dev/vfio/369:/dev/vfio/369\" - \"--device=/dev/vfio/370:/dev/vfio/370\" - \"--device=/dev/vfio/371:/dev/vfio/371\" - \"--device=/dev/vfio/372:/dev/vfio/372\" - \"--device=/dev/vfio/373:/dev/vfio/373\" - \"--device=/dev/vfio/374:/dev/vfio/374\" - \"--device=/dev/vfio/375:/dev/vfio/375\" - \"--device=/dev/vfio/376:/dev/vfio/376\" - \"--device=/dev/vfio/377:/dev/vfio/377\" - \"--device=/dev/vfio/378:/dev/vfio/378\" - \"--device=/dev/vfio/379:/dev/vfio/379\" - \"--device=/dev/vfio/380:/dev/vfio/380\" - \"--device=/dev/vfio/381:/dev/vfio/381\" - \"--device=/dev/vfio/382:/dev/vfio/382\" - \"--device=/dev/vfio/383:/dev/vfio/383\" - \"--device=/dev/vfio/384:/dev/vfio/384\" - \"--device=/dev/vfio/385:/dev/vfio/385\" - \"--device=/dev/vfio/386:/dev/vfio/386\" - \"--device=/dev/vfio/387:/dev/vfio/387\" - \"--device=/dev/vfio/388:/dev/vfio/388\" - \"--device=/dev/vfio/389:/dev/vfio/389\" - \"--device=/dev/vfio/390:/dev/vfio/390\" - \"--device=/dev/vfio/391:/dev/vfio/391\" - \"--device=/dev/vfio/392:/dev/vfio/392\" - \"--device=/dev/vfio/393:/dev/vfio/393\" - \"--device=/dev/vfio/394:/dev/vfio/394\" - \"--device=/dev/vfio/395:/dev/vfio/395\" - \"--device=/dev/vfio/396:/dev/vfio/396\" - \"--device=/dev/vfio/devices/vfio0:/dev/vfio/devices/vfio0\" - \"--device=/dev/vfio/devices/vfio1:/dev/vfio/devices/vfio1\" - \"--device=/dev/vfio/devices/vfio2:/dev/vfio/devices/vfio2\" - \"--device=/dev/vfio/devices/vfio3:/dev/vfio/devices/vfio3\" - \"--device=/dev/vfio/devices/vfio4:/dev/vfio/devices/vfio4\" - \"--device=/dev/vfio/devices/vfio5:/dev/vfio/devices/vfio5\" - 
\"--device=/dev/vfio/devices/vfio6:/dev/vfio/devices/vfio6\" - \"--device=/dev/vfio/devices/vfio7:/dev/vfio/devices/vfio7\" - \"--device=/dev/vfio/devices/vfio8:/dev/vfio/devices/vfio8\" - \"--device=/dev/vfio/devices/vfio9:/dev/vfio/devices/vfio9\" - \"--device=/dev/vfio/devices/vfio10:/dev/vfio/devices/vfio10\" - \"--device=/dev/vfio/devices/vfio11:/dev/vfio/devices/vfio11\" - \"--device=/dev/vfio/devices/vfio12:/dev/vfio/devices/vfio12\" - \"--device=/dev/vfio/devices/vfio13:/dev/vfio/devices/vfio13\" - \"--device=/dev/vfio/devices/vfio14:/dev/vfio/devices/vfio14\" - \"--device=/dev/vfio/devices/vfio15:/dev/vfio/devices/vfio15\" - \"--device=/dev/vfio/devices/vfio16:/dev/vfio/devices/vfio16\" - \"--device=/dev/vfio/devices/vfio17:/dev/vfio/devices/vfio17\" - \"--device=/dev/vfio/devices/vfio18:/dev/vfio/devices/vfio18\" - \"--device=/dev/vfio/devices/vfio19:/dev/vfio/devices/vfio19\" - \"--device=/dev/vfio/devices/vfio20:/dev/vfio/devices/vfio20\" - \"--device=/dev/vfio/devices/vfio21:/dev/vfio/devices/vfio21\" - \"--device=/dev/vfio/devices/vfio22:/dev/vfio/devices/vfio22\" - \"--device=/dev/vfio/devices/vfio23:/dev/vfio/devices/vfio23\" - \"--device=/dev/vfio/devices/vfio24:/dev/vfio/devices/vfio24\" - \"--device=/dev/vfio/devices/vfio25:/dev/vfio/devices/vfio25\" - \"--device=/dev/vfio/devices/vfio26:/dev/vfio/devices/vfio26\" - \"--device=/dev/vfio/devices/vfio27:/dev/vfio/devices/vfio27\" - \"--device=/dev/vfio/devices/vfio28:/dev/vfio/devices/vfio28\" - \"--device=/dev/vfio/devices/vfio29:/dev/vfio/devices/vfio29\" - \"--device=/dev/vfio/devices/vfio30:/dev/vfio/devices/vfio30\" - \"--device=/dev/vfio/devices/vfio31:/dev/vfio/devices/vfio31\" - \"--device=/dev/vfio/devices/vfio32:/dev/vfio/devices/vfio32\" - \"--device=/dev/vfio/devices/vfio33:/dev/vfio/devices/vfio33\" - \"--device=/dev/vfio/devices/vfio34:/dev/vfio/devices/vfio34\" - \"--device=/dev/vfio/devices/vfio35:/dev/vfio/devices/vfio35\" - \"--device=/dev/vfio/devices/vfio36:/dev/vfio/devices/vfio36\" - \"--device=/dev/vfio/devices/vfio37:/dev/vfio/devices/vfio37\" - \"--device=/dev/vfio/devices/vfio38:/dev/vfio/devices/vfio38\" - \"--device=/dev/vfio/devices/vfio39:/dev/vfio/devices/vfio39\" - \"--device=/dev/vfio/devices/vfio40:/dev/vfio/devices/vfio40\" - \"--device=/dev/vfio/devices/vfio41:/dev/vfio/devices/vfio41\" - \"--device=/dev/vfio/devices/vfio42:/dev/vfio/devices/vfio42\" - \"--device=/dev/vfio/devices/vfio43:/dev/vfio/devices/vfio43\" - \"--device=/dev/vfio/devices/vfio44:/dev/vfio/devices/vfio44\" - \"--device=/dev/vfio/devices/vfio45:/dev/vfio/devices/vfio45\" - \"--device=/dev/vfio/devices/vfio46:/dev/vfio/devices/vfio46\" - \"--device=/dev/vfio/devices/vfio47:/dev/vfio/devices/vfio47\" - \"--device=/dev/vfio/devices/vfio48:/dev/vfio/devices/vfio48\" - \"--device=/dev/vfio/devices/vfio49:/dev/vfio/devices/vfio49\" - \"--device=/dev/vfio/devices/vfio50:/dev/vfio/devices/vfio50\" - \"--device=/dev/vfio/devices/vfio51:/dev/vfio/devices/vfio51\" - \"--device=/dev/vfio/devices/vfio52:/dev/vfio/devices/vfio52\" - \"--device=/dev/vfio/devices/vfio53:/dev/vfio/devices/vfio53\" - \"--device=/dev/vfio/devices/vfio54:/dev/vfio/devices/vfio54\" - \"--device=/dev/vfio/devices/vfio55:/dev/vfio/devices/vfio55\" - \"--device=/dev/vfio/devices/vfio56:/dev/vfio/devices/vfio56\" - \"--device=/dev/vfio/devices/vfio57:/dev/vfio/devices/vfio57\" - \"--device=/dev/vfio/devices/vfio58:/dev/vfio/devices/vfio58\" - \"--device=/dev/vfio/devices/vfio59:/dev/vfio/devices/vfio59\" - 
\"--device=/dev/vfio/devices/vfio60:/dev/vfio/devices/vfio60\" - \"--device=/dev/vfio/devices/vfio61:/dev/vfio/devices/vfio61\" - \"--device=/dev/vfio/devices/vfio62:/dev/vfio/devices/vfio62\" - \"--device=/dev/vfio/devices/vfio63:/dev/vfio/devices/vfio63\" networks: - 172.17.8.0/24 spec: rgw_frontend_port: 8000",
"plugin crypto accelerator = crypto_qat",
"qat compressor enabled=true",
"[user@client ~]USD vi bucket-encryption.json",
"{ \"Rules\": [ { \"ApplyServerSideEncryptionByDefault\": { \"SSEAlgorithm\": \"AES256\" } } ] }",
"aws --endpoint-url=pass:q[_RADOSGW_ENDPOINT_URL_]:pass:q[_PORT_] s3api put-bucket-encryption --bucket pass:q[_BUCKET_NAME_] --server-side-encryption-configuration pass:q[_file://PATH_TO_BUCKET_ENCRYPTION_CONFIGURATION_FILE/BUCKET_ENCRYPTION_CONFIGURATION_FILE.json_]",
"[user@client ~]USD aws --endpoint-url=http://host01:80 s3api put-bucket-encryption --bucket testbucket --server-side-encryption-configuration file://bucket-encryption.json",
"aws --endpoint-url=pass:q[_RADOSGW_ENDPOINT_URL_]:pass:q[_PORT_] s3api get-bucket-encryption --bucket BUCKET_NAME",
"[user@client ~]USD aws --profile ceph --endpoint=http://host01:80 s3api get-bucket-encryption --bucket testbucket { \"ServerSideEncryptionConfiguration\": { \"Rules\": [ { \"ApplyServerSideEncryptionByDefault\": { \"SSEAlgorithm\": \"AES256\" } } ] } }",
"aws --endpoint-url= RADOSGW_ENDPOINT_URL : PORT s3api delete-bucket-encryption --bucket BUCKET_NAME",
"[user@client ~]USD aws --endpoint-url=http://host01:80 s3api delete-bucket-encryption --bucket testbucket",
"aws --endpoint-url= RADOSGW_ENDPOINT_URL : PORT s3api get-bucket-encryption --bucket BUCKET_NAME",
"[user@client ~]USD aws --endpoint=http://host01:80 s3api get-bucket-encryption --bucket testbucket An error occurred (ServerSideEncryptionConfigurationNotFoundError) when calling the GetBucketEncryption operation: The server side encryption configuration was not found",
"frontend http_web *:80 mode http default_backend rgw frontend rgw\\u00ad-https bind *:443 ssl crt /etc/ssl/private/example.com.pem default_backend rgw backend rgw balance roundrobin mode http server rgw1 10.0.0.71:8080 check server rgw2 10.0.0.80:8080 check",
"frontend http_web *:80 mode http default_backend rgw frontend rgw\\u00ad-https bind *:443 ssl crt /etc/ssl/private/example.com.pem http-request set-header X-Forwarded-Proto https if { ssl_fc } http-request set-header X-Forwarded-Proto https here we set the incoming HTTPS port on the load balancer (eg : 443) http-request set-header X-Forwarded-Port 443 default_backend rgw backend rgw balance roundrobin mode http server rgw1 10.0.0.71:8080 check server rgw2 10.0.0.80:8080 check",
"ceph config set client.rgw rgw_trust_forwarded_https true",
"systemctl enable haproxy systemctl start haproxy",
"ceph config set client.rgw rgw_crypt_vault_secret_engine transit compat=0",
"ceph config set client.rgw rgw_crypt_vault_secret_engine transit compat=1",
"ceph config set client.rgw rgw_crypt_vault_secret_engine transit compat=2",
"vault policy write rgw-kv-policy -<<EOF path \"secret/data/*\" { capabilities = [\"read\"] } EOF",
"vault policy write rgw-transit-policy -<<EOF path \"transit/keys/*\" { capabilities = [ \"create\", \"update\" ] denied_parameters = {\"exportable\" = [], \"allow_plaintext_backup\" = [] } } path \"transit/keys/*\" { capabilities = [\"read\", \"delete\"] } path \"transit/keys/\" { capabilities = [\"list\"] } path \"transit/keys/+/rotate\" { capabilities = [ \"update\" ] } path \"transit/*\" { capabilities = [ \"update\" ] } EOF",
"vault policy write old-rgw-transit-policy -<<EOF path \"transit/export/encryption-key/*\" { capabilities = [\"read\"] } EOF",
"ceph config set client.rgw rgw_crypt_s3_kms_backend vault",
"ceph config set client.rgw rgw_crypt_vault_auth agent ceph config set client.rgw rgw_crypt_vault_addr http:// VAULT_SERVER :8100",
"vault read auth/approle/role/rgw-ap/role-id -format=json | \\ jq -r .data.role_id > PATH_TO_FILE",
"vault read auth/approle/role/rgw-ap/role-id -format=json | \\ jq -r .data.secret_id > PATH_TO_FILE",
"pid_file = \"/run/kv-vault-agent-pid\" auto_auth { method \"AppRole\" { mount_path = \"auth/approle\" config = { role_id_file_path =\"/root/vault_configs/kv-agent-role-id\" secret_id_file_path =\"/root/vault_configs/kv-agent-secret-id\" remove_secret_id_file_after_reading =\"false\" } } } cache { use_auto_auth_token = true } listener \"tcp\" { address = \"127.0.0.1:8100\" tls_disable = true } vault { address = \"http://10.8.128.9:8200\" }",
"/usr/local/bin/vault agent -config=/usr/local/etc/vault/rgw-agent.hcl",
"ceph config set client.rgw rgw_crypt_vault_secret_engine kv",
"ceph config set client.rgw rgw_crypt_vault_secret_engine transit",
"ceph config set client.rgw rgw_crypt_vault_namespace testnamespace1",
"ceph config set client.rgw rgw_crypt_vault_prefix /v1/secret/data",
"ceph config set client.rgw rgw_crypt_vault_prefix /v1/transit/export/encryption-key",
"http://vault-server:8200/v1/transit/export/encryption-key",
"systemctl restart ceph- CLUSTER_ID@SERVICE_TYPE . ID .service",
"systemctl restart ceph-c4b34c6f-8365-11ba-dc31-529020a7702d@rgw.realm.zone.host01.gwasto.service",
"ceph orch restart SERVICE_TYPE",
"ceph orch restart rgw",
"cephadm shell",
"ceph config set client.rgw rgw_crypt_sse_s3_backend vault",
"ceph config set client.rgw rgw_crypt_sse_s3_vault_auth agent ceph config set client.rgw rgw_crypt_sse_s3_vault_addr http:// VAULT_AGENT : VAULT_AGENT_PORT",
"ceph config set client.rgw rgw_crypt_sse_s3_vault_auth agent ceph config set client.rgw rgw_crypt_sse_s3_vault_addr http://vaultagent:8100",
"vault read auth/approle/role/rgw-ap/role-id -format=json | \\ jq -r .rgw-ap-role-id > PATH_TO_FILE",
"vault read auth/approle/role/rgw-ap/role-id -format=json | \\ jq -r .rgw-ap-secret-id > PATH_TO_FILE",
"pid_file = \"/run/rgw-vault-agent-pid\" auto_auth { method \"AppRole\" { mount_path = \"auth/approle\" config = { role_id_file_path =\"/usr/local/etc/vault/.rgw-ap-role-id\" secret_id_file_path =\"/usr/local/etc/vault/.rgw-ap-secret-id\" remove_secret_id_file_after_reading =\"false\" } } } cache { use_auto_auth_token = true } listener \"tcp\" { address = \"127.0.0.1:8100\" tls_disable = true } vault { address = \"https://vaultserver:8200\" }",
"/usr/local/bin/vault agent -config=/usr/local/etc/vault/rgw-agent.hcl",
"ceph config set client.rgw rgw_crypt_sse_s3_vault_secret_engine kv",
"ceph config set client.rgw rgw_crypt_sse_s3_vault_secret_engine transit",
"ceph config set client.rgw rgw_crypt_sse_s3_vault_namespace company/testnamespace1",
"ceph config set client.rgw rgw_crypt_sse_s3_vault_prefix /v1/secret/data",
"ceph config set client.rgw rgw_crypt_sse_s3_vault_prefix /v1/transit",
"http://vaultserver:8200/v1/transit",
"ceph config set client.rgw rgw_crypt_sse_s3_vault_verify_ssl true ceph config set client.rgw rgw_crypt_sse_s3_vault_ssl_cacert PATH_TO_CA_CERTIFICATE ceph config set client.rgw rgw_crypt_sse_s3_vault_ssl_clientcert PATH_TO_CLIENT_CERTIFICATE ceph config set client.rgw rgw_crypt_sse_s3_vault_ssl_clientkey PATH_TO_PRIVATE_KEY",
"ceph config set client.rgw rgw_crypt_sse_s3_vault_verify_ssl true ceph config set client.rgw rgw_crypt_sse_s3_vault_ssl_cacert /etc/ceph/vault.ca ceph config set client.rgw rgw_crypt_sse_s3_vault_ssl_clientcert /etc/ceph/vault.crt ceph config set client.rgw rgw_crypt_sse_s3_vault_ssl_clientkey /etc/ceph/vault.key",
"systemctl restart ceph- CLUSTER_ID@SERVICE_TYPE . ID .service",
"systemctl restart ceph-c4b34c6f-8365-11ba-dc31-529020a7702d@rgw.realm.zone.host01.gwasto.service",
"ceph orch restart SERVICE_TYPE",
"ceph orch restart rgw",
"vault secrets enable -path secret kv-v2",
"vault kv put secret/ PROJECT_NAME / BUCKET_NAME key=USD(openssl rand -base64 32)",
"vault kv put secret/myproject/mybucketkey key=USD(openssl rand -base64 32) ====== Metadata ====== Key Value --- ---- created_time 2020-02-21T17:01:09.095824999Z deletion_time n/a destroyed false version 1",
"vault secrets enable transit",
"vault write -f transit/keys/ BUCKET_NAME exportable=true",
"vault write -f transit/keys/mybucketkey exportable=true",
"vault read transit/export/encryption-key/ BUCKET_NAME / VERSION_NUMBER",
"vault read transit/export/encryption-key/mybucketkey/1 Key Value --- ----- keys map[1:-gbTI9lNpqv/V/2lDcmH2Nq1xKn6FPDWarCmFM2aNsQ=] name mybucketkey type aes256-gcm96",
"[user@client ~]USD aws --endpoint=http://radosgw:8000 s3 cp plaintext.txt s3://mybucket/encrypted.txt --sse=aws:kms --sse-kms-key-id myproject/mybucketkey",
"[user@client ~]USD aws s3api --endpoint http://rgw_host:8080 put-object --bucket my-bucket --key obj1 --body test_file_to_upload --server-side-encryption AES256",
"[user@client ~]USD aws --endpoint=http://radosgw:8000 s3 cp plaintext.txt s3://mybucket/encrypted.txt --sse=aws:kms --sse-kms-key-id mybucketkey",
"[user@client ~]USD aws s3api --endpoint http://rgw_host:8080 put-object --bucket my-bucket --key obj1 --body test_file_to_upload --server-side-encryption AES256",
"[user@host01 ~]USD SEED=USD(head -10 /dev/urandom | sha512sum | cut -b 1-30)",
"[user@host01 ~]USD echo USDSEED 492dedb20cf51d1405ef6a1316017e",
"radosgw-admin mfa create --uid= USERID --totp-serial= SERIAL --totp-seed= SEED --totp-seed-type= SEED_TYPE --totp-seconds= TOTP_SECONDS --totp-window= TOTP_WINDOW",
"radosgw-admin mfa create --uid=johndoe --totp-serial=MFAtest --totp-seed=492dedb20cf51d1405ef6a1316017e",
"radosgw-admin mfa check --uid= USERID --totp-serial= SERIAL --totp-pin= PIN",
"radosgw-admin mfa check --uid=johndoe --totp-serial=MFAtest --totp-pin=870305 ok",
"radosgw-admin mfa resync --uid= USERID --totp-serial= SERIAL --totp-pin= PREVIOUS_PIN --totp=pin= CURRENT_PIN",
"radosgw-admin mfa resync --uid=johndoe --totp-serial=MFAtest --totp-pin=802021 --totp-pin=439996",
"radosgw-admin mfa check --uid= USERID --totp-serial= SERIAL --totp-pin= PIN",
"radosgw-admin mfa check --uid=johndoe --totp-serial=MFAtest --totp-pin=870305 ok",
"radosgw-admin mfa list --uid= USERID",
"radosgw-admin mfa list --uid=johndoe { \"entries\": [ { \"type\": 2, \"id\": \"MFAtest\", \"seed\": \"492dedb20cf51d1405ef6a1316017e\", \"seed_type\": \"hex\", \"time_ofs\": 0, \"step_size\": 30, \"window\": 2 } ] }",
"radosgw-admin mfa get --uid= USERID --totp-serial= SERIAL",
"radosgw-admin mfa remove --uid= USERID --totp-serial= SERIAL",
"radosgw-admin mfa remove --uid=johndoe --totp-serial=MFAtest",
"radosgw-admin mfa get --uid= USERID --totp-serial= SERIAL",
"radosgw-admin mfa get --uid=johndoe --totp-serial=MFAtest MFA serial id not found",
"radosgw-admin zonegroup --rgw-zonegroup= ZONE_GROUP_NAME get > FILE_NAME .json",
"radosgw-admin zonegroup --rgw-zonegroup=default get > zonegroup.json",
"{ \"name\": \"default\", \"api_name\": \"\", \"is_master\": \"true\", \"endpoints\": [], \"hostnames\": [], \"master_zone\": \"\", \"zones\": [{ \"name\": \"default\", \"endpoints\": [], \"log_meta\": \"false\", \"log_data\": \"false\", \"bucket_index_max_shards\": 5 }], \"placement_targets\": [{ \"name\": \"default-placement\", \"tags\": [] }, { \"name\": \"special-placement\", \"tags\": [] }], \"default_placement\": \"default-placement\" }",
"radosgw-admin zonegroup set < zonegroup.json",
"radosgw-admin zone get > zone.json",
"{ \"domain_root\": \".rgw\", \"control_pool\": \".rgw.control\", \"gc_pool\": \".rgw.gc\", \"log_pool\": \".log\", \"intent_log_pool\": \".intent-log\", \"usage_log_pool\": \".usage\", \"user_keys_pool\": \".users\", \"user_email_pool\": \".users.email\", \"user_swift_pool\": \".users.swift\", \"user_uid_pool\": \".users.uid\", \"system_key\": { \"access_key\": \"\", \"secret_key\": \"\" }, \"placement_pools\": [{ \"key\": \"default-placement\", \"val\": { \"index_pool\": \".rgw.buckets.index\", \"data_pool\": \".rgw.buckets\", \"data_extra_pool\": \".rgw.buckets.extra\" } }, { \"key\": \"special-placement\", \"val\": { \"index_pool\": \".rgw.buckets.index\", \"data_pool\": \".rgw.buckets.special\", \"data_extra_pool\": \".rgw.buckets.extra\" } }] }",
"radosgw-admin zone set < zone.json",
"radosgw-admin period update --commit",
"curl -i http://10.0.0.1/swift/v1/TestContainer/file.txt -X PUT -H \"X-Storage-Policy: special-placement\" -H \"X-Auth-Token: AUTH_rgwtxxxxxx\"",
"radosgw-admin zonegroup placement add --rgw-zonegroup=\"default\" --placement-id=\"indexless-placement\"",
"radosgw-admin zone placement add --rgw-zone=\"default\" --placement-id=\"indexless-placement\" --data-pool=\"default.rgw.buckets.data\" --index-pool=\"default.rgw.buckets.index\" --data_extra_pool=\"default.rgw.buckets.non-ec\" --placement-index-type=\"indexless\"",
"radosgw-admin zonegroup placement default --placement-id \"indexless-placement\"",
"radosgw-admin period update --commit",
"ceph orch restart SERVICE_TYPE",
"ceph orch restart rgw",
"ln: failed to access '/tmp/rgwrbi-object-list.4053207': No such file or directory",
"/usr/bin/rgw-restore-bucket-index -b bucket-large-1 -p local-zone.rgw.buckets.data marker is d8a347a4-99b6-4312-a5c1-75b83904b3d4.41610.2 bucket_id is d8a347a4-99b6-4312-a5c1-75b83904b3d4.41610.2 number of bucket index shards is 5 data pool is local-zone.rgw.buckets.data NOTICE: This tool is currently considered EXPERIMENTAL. The list of objects that we will attempt to restore can be found in \"/tmp/rgwrbi-object-list.49946\". Please review the object names in that file (either below or in another window/terminal) before proceeding. Type \"proceed!\" to proceed, \"view\" to view object list, or \"q\" to quit: view Viewing Type \"proceed!\" to proceed, \"view\" to view object list, or \"q\" to quit: proceed! Proceeding NOTICE: Bucket stats are currently incorrect. They can be restored with the following command after 2 minutes: radosgw-admin bucket list --bucket=bucket-large-1 --allow-unordered --max-entries=1073741824 Would you like to take the time to recalculate bucket stats now? [yes/no] yes Done real 2m16.530s user 0m1.082s sys 0m0.870s",
"time rgw-restore-bucket-index --proceed serp-bu-ver-1 default.rgw.buckets.data NOTICE: This tool is currently considered EXPERIMENTAL. marker is e871fb65-b87f-4c16-a7c3-064b66feb1c4.25076.5 bucket_id is e871fb65-b87f-4c16-a7c3-064b66feb1c4.25076.5 Error: this bucket appears to be versioned, and this tool cannot work with versioned buckets.",
"Bucket _BUCKET_NAME_ already has too many log generations (4) from previous reshards that peer zones haven't finished syncing. Resharding is not recommended until the old generations sync, but you can force a reshard with `--yes-i-really-mean-it`.",
"number of objects expected in a bucket / 100,000",
"ceph config set client.rgw rgw_override_bucket_index_max_shards VALUE",
"ceph config set client.rgw rgw_override_bucket_index_max_shards 12",
"ceph orch restart SERVICE_TYPE",
"ceph orch restart rgw",
"number of objects expected in a bucket / 100,000",
"radosgw-admin zonegroup get > zonegroup.json",
"bucket_index_max_shards = VALUE",
"bucket_index_max_shards = 12",
"radosgw-admin zonegroup set < zonegroup.json",
"radosgw-admin period update --commit",
"radosgw-admin reshard status --bucket BUCKET_NAME",
"radosgw-admin reshard status --bucket data",
"radosgw-admin sync status",
"radosgw-admin period get",
"ceph config set client.rgw OPTION VALUE",
"ceph config set client.rgw rgw_reshard_num_logs 23",
"radosgw-admin reshard add --bucket BUCKET --num-shards NUMBER",
"radosgw-admin reshard add --bucket data --num-shards 10",
"radosgw-admin reshard list",
"radosgw-admin bucket layout --bucket data { \"layout\": { \"resharding\": \"None\", \"current_index\": { \"gen\": 1, \"layout\": { \"type\": \"Normal\", \"normal\": { \"num_shards\": 23, \"hash_type\": \"Mod\" } } }, \"logs\": [ { \"gen\": 0, \"layout\": { \"type\": \"InIndex\", \"in_index\": { \"gen\": 0, \"layout\": { \"num_shards\": 11, \"hash_type\": \"Mod\" } } } }, { \"gen\": 1, \"layout\": { \"type\": \"InIndex\", \"in_index\": { \"gen\": 1, \"layout\": { \"num_shards\": 23, \"hash_type\": \"Mod\" } } } } ] } }",
"radosgw-admin reshard status --bucket BUCKET",
"radosgw-admin reshard status --bucket data",
"radosgw-admin reshard process",
"radosgw-admin reshard cancel --bucket BUCKET",
"radosgw-admin reshard cancel --bucket data",
"radosgw-admin reshard status --bucket BUCKET",
"radosgw-admin reshard status --bucket data",
"radosgw-admin sync status",
"radosgw-admin zonegroup modify --rgw-zonegroup= ZONEGROUP_NAME --enable-feature=resharding",
"radosgw-admin zonegroup modify --rgw-zonegroup=us --enable-feature=resharding",
"radosgw-admin period update --commit",
"radosgw-admin zone modify --rgw-zone= ZONE_NAME --enable-feature=resharding",
"radosgw-admin zone modify --rgw-zone=us-east --enable-feature=resharding",
"radosgw-admin period update --commit",
"radosgw-admin period get \"zones\": [ { \"id\": \"505b48db-6de0-45d5-8208-8c98f7b1278d\", \"name\": \"us_east\", \"endpoints\": [ \"http://10.0.208.11:8080\" ], \"log_meta\": \"false\", \"log_data\": \"true\", \"bucket_index_max_shards\": 11, \"read_only\": \"false\", \"tier_type\": \"\", \"sync_from_all\": \"true\", \"sync_from\": [], \"redirect_zone\": \"\", \"supported_features\": [ \"resharding\" ] \"default_placement\": \"default-placement\", \"realm_id\": \"26cf6f23-c3a0-4d57-aae4-9b0010ee55cc\", \"sync_policy\": { \"groups\": [] }, \"enabled_features\": [ \"resharding\" ]",
"radosgw-admin sync status realm 26cf6f23-c3a0-4d57-aae4-9b0010ee55cc (usa) zonegroup 33a17718-6c77-493e-99fe-048d3110a06e (us) zone 505b48db-6de0-45d5-8208-8c98f7b1278d (us_east) zonegroup features enabled: resharding",
"radosgw-admin zonegroup modify --rgw-zonegroup= ZONEGROUP_NAME --disable-feature=resharding",
"radosgw-admin zonegroup modify --rgw-zonegroup=us --disable-feature=resharding",
"radosgw-admin period update --commit",
"radosgw-admin bi list --bucket= BUCKET > BUCKET .list.backup",
"radosgw-admin bi list --bucket=data > data.list.backup",
"radosgw-admin bucket reshard --bucket= BUCKET --num-shards= NUMBER",
"radosgw-admin bucket reshard --bucket=data --num-shards=100",
"radosgw-admin reshard status --bucket bucket",
"radosgw-admin reshard status --bucket data",
"radosgw-admin reshard stale-instances list",
"radosgw-admin reshard stale-instances rm",
"radosgw-admin reshard status --bucket BUCKET",
"radosgw-admin reshard status --bucket data",
"[root@host01 ~] radosgw-admin zone placement modify --rgw-zone=default --placement-id=default-placement --compression=zlib { \"placement_pools\": [ { \"key\": \"default-placement\", \"val\": { \"index_pool\": \"default.rgw.buckets.index\", \"data_pool\": \"default.rgw.buckets.data\", \"data_extra_pool\": \"default.rgw.buckets.non-ec\", \"index_type\": 0, \"compression\": \"zlib\" } } ], }",
"radosgw-admin bucket stats --bucket= BUCKET_NAME { \"usage\": { \"rgw.main\": { \"size\": 1075028, \"size_actual\": 1331200, \"size_utilized\": 592035, \"size_kb\": 1050, \"size_kb_actual\": 1300, \"size_kb_utilized\": 579, \"num_objects\": 104 } }, }",
"radosgw-admin user <create|modify|info|rm|suspend|enable|check|stats> <--uid= USER_ID |--subuser= SUB_USER_NAME > [other-options]",
"radosgw-admin --tenant testx --uid tester --display-name \"Test User\" --access_key TESTER --secret test123 user create",
"radosgw-admin --tenant testx --uid tester --display-name \"Test User\" --subuser tester:swift --key-type swift --access full subuser create radosgw-admin key create --subuser 'testxUSDtester:swift' --key-type swift --secret test123",
"radosgw-admin user create --uid= USER_ID [--key-type= KEY_TYPE ] [--gen-access-key|--access-key= ACCESS_KEY ] [--gen-secret | --secret= SECRET_KEY ] [--email= EMAIL ] --display-name= DISPLAY_NAME",
"radosgw-admin user create --uid=janedoe --access-key=11BS02LGFB6AL6H1ADMW --secret=vzCEkuryfn060dfee4fgQPqFrncKEIkh3ZcdOANY [email protected] --display-name=Jane Doe",
"{ \"user_id\": \"janedoe\", \"display_name\": \"Jane Doe\", \"email\": \"[email protected]\", \"suspended\": 0, \"max_buckets\": 1000, \"auid\": 0, \"subusers\": [], \"keys\": [ { \"user\": \"janedoe\", \"access_key\": \"11BS02LGFB6AL6H1ADMW\", \"secret_key\": \"vzCEkuryfn060dfee4fgQPqFrncKEIkh3ZcdOANY\"}], \"swift_keys\": [], \"caps\": [], \"op_mask\": \"read, write, delete\", \"default_placement\": \"\", \"placement_tags\": [], \"bucket_quota\": { \"enabled\": false, \"max_size_kb\": -1, \"max_objects\": -1}, \"user_quota\": { \"enabled\": false, \"max_size_kb\": -1, \"max_objects\": -1}, \"temp_url_keys\": []}",
"radosgw-admin subuser create --uid= USER_ID --subuser= SUB_USER_ID --access=[ read | write | readwrite | full ]",
"radosgw-admin subuser create --uid=janedoe --subuser=janedoe:swift --access=full { \"user_id\": \"janedoe\", \"display_name\": \"Jane Doe\", \"email\": \"[email protected]\", \"suspended\": 0, \"max_buckets\": 1000, \"auid\": 0, \"subusers\": [ { \"id\": \"janedoe:swift\", \"permissions\": \"full-control\"}], \"keys\": [ { \"user\": \"janedoe\", \"access_key\": \"11BS02LGFB6AL6H1ADMW\", \"secret_key\": \"vzCEkuryfn060dfee4fgQPqFrncKEIkh3ZcdOANY\"}], \"swift_keys\": [], \"caps\": [], \"op_mask\": \"read, write, delete\", \"default_placement\": \"\", \"placement_tags\": [], \"bucket_quota\": { \"enabled\": false, \"max_size_kb\": -1, \"max_objects\": -1}, \"user_quota\": { \"enabled\": false, \"max_size_kb\": -1, \"max_objects\": -1}, \"temp_url_keys\": []}",
"radosgw-admin user info --uid=janedoe",
"radosgw-admin user info --uid=janedoe --tenant=test",
"radosgw-admin user modify --uid=janedoe --display-name=\"Jane E. Doe\"",
"radosgw-admin subuser modify --subuser=janedoe:swift --access=full",
"radosgw-admin user suspend --uid=johndoe",
"radosgw-admin user enable --uid=johndoe",
"radosgw-admin user rm --uid= USER_ID [--purge-keys] [--purge-data]",
"radosgw-admin user rm --uid=johndoe --purge-data",
"radosgw-admin subuser rm --subuser=johndoe:swift --purge-keys",
"radosgw-admin subuser rm --subuser= SUB_USER_ID",
"radosgw-admin subuser rm --subuser=johndoe:swift",
"radosgw-admin user rename --uid= CURRENT_USER_NAME --new-uid= NEW_USER_NAME",
"radosgw-admin user rename --uid=user1 --new-uid=user2 { \"user_id\": \"user2\", \"display_name\": \"user 2\", \"email\": \"\", \"suspended\": 0, \"max_buckets\": 1000, \"auid\": 0, \"subusers\": [], \"keys\": [ { \"user\": \"user2\", \"access_key\": \"59EKHI6AI9F8WOW8JQZJ\", \"secret_key\": \"XH0uY3rKCUcuL73X0ftjXbZqUbk0cavD11rD8MsA\" } ], \"swift_keys\": [], \"caps\": [], \"op_mask\": \"read, write, delete\", \"default_placement\": \"\", \"placement_tags\": [], \"bucket_quota\": { \"enabled\": false, \"check_on_raw\": false, \"max_size\": -1, \"max_size_kb\": 0, \"max_objects\": -1 }, \"user_quota\": { \"enabled\": false, \"check_on_raw\": false, \"max_size\": -1, \"max_size_kb\": 0, \"max_objects\": -1 }, \"temp_url_keys\": [], \"type\": \"rgw\" }",
"radosgw-admin user rename --uid USER_NAME --new-uid NEW_USER_NAME --tenant TENANT",
"radosgw-admin user rename --uid=testUSDuser1 --new-uid=testUSDuser2 --tenant test 1000 objects processed in tvtester1. Next marker 80_tVtester1_99 2000 objects processed in tvtester1. Next marker 64_tVtester1_44 3000 objects processed in tvtester1. Next marker 48_tVtester1_28 4000 objects processed in tvtester1. Next marker 2_tVtester1_74 5000 objects processed in tvtester1. Next marker 14_tVtester1_53 6000 objects processed in tvtester1. Next marker 87_tVtester1_61 7000 objects processed in tvtester1. Next marker 6_tVtester1_57 8000 objects processed in tvtester1. Next marker 52_tVtester1_91 9000 objects processed in tvtester1. Next marker 34_tVtester1_74 9900 objects processed in tvtester1. Next marker 9_tVtester1_95 1000 objects processed in tvtester2. Next marker 82_tVtester2_93 2000 objects processed in tvtester2. Next marker 64_tVtester2_9 3000 objects processed in tvtester2. Next marker 48_tVtester2_22 4000 objects processed in tvtester2. Next marker 32_tVtester2_42 5000 objects processed in tvtester2. Next marker 16_tVtester2_36 6000 objects processed in tvtester2. Next marker 89_tVtester2_46 7000 objects processed in tvtester2. Next marker 70_tVtester2_78 8000 objects processed in tvtester2. Next marker 51_tVtester2_41 9000 objects processed in tvtester2. Next marker 33_tVtester2_32 9900 objects processed in tvtester2. Next marker 9_tVtester2_83 { \"user_id\": \"testUSDuser2\", \"display_name\": \"User 2\", \"email\": \"\", \"suspended\": 0, \"max_buckets\": 1000, \"auid\": 0, \"subusers\": [], \"keys\": [ { \"user\": \"testUSDuser2\", \"access_key\": \"user2\", \"secret_key\": \"123456789\" } ], \"swift_keys\": [], \"caps\": [], \"op_mask\": \"read, write, delete\", \"default_placement\": \"\", \"placement_tags\": [], \"bucket_quota\": { \"enabled\": false, \"check_on_raw\": false, \"max_size\": -1, \"max_size_kb\": 0, \"max_objects\": -1 }, \"user_quota\": { \"enabled\": false, \"check_on_raw\": false, \"max_size\": -1, \"max_size_kb\": 0, \"max_objects\": -1 }, \"temp_url_keys\": [], \"type\": \"rgw\" }",
"radosgw-admin user info --uid= NEW_USER_NAME",
"radosgw-admin user info --uid=user2",
"radosgw-admin user info --uid= TENANT USD USER_NAME",
"radosgw-admin user info --uid=testUSDuser2",
"radosgw-admin key create --subuser=johndoe:swift --key-type=swift --gen-secret { \"user_id\": \"johndoe\", \"rados_uid\": 0, \"display_name\": \"John Doe\", \"email\": \"[email protected]\", \"suspended\": 0, \"subusers\": [ { \"id\": \"johndoe:swift\", \"permissions\": \"full-control\"}], \"keys\": [ { \"user\": \"johndoe\", \"access_key\": \"QFAMEDSJP5DEKJO0DDXY\", \"secret_key\": \"iaSFLDVvDdQt6lkNzHyW4fPLZugBAI1g17LO0+87\"}], \"swift_keys\": [ { \"user\": \"johndoe:swift\", \"secret_key\": \"E9T2rUZNu2gxUjcwUBO8n\\/Ev4KX6\\/GprEuH4qhu1\"}]}",
"radosgw-admin key create --uid=johndoe --key-type=s3 --gen-access-key --gen-secret",
"radosgw-admin user info --uid=johndoe",
"radosgw-admin user info --uid=johndoe { \"user_id\": \"johndoe\", \"keys\": [ { \"user\": \"johndoe\", \"access_key\": \"0555b35654ad1656d804\", \"secret_key\": \"h7GhxuBLTrlhVUyxSPUKUV8r/2EI4ngqJxD7iBdBYLhwluN30JaT3Q==\" } ], }",
"radosgw-admin key rm --uid= USER_ID --access-key ACCESS_KEY",
"radosgw-admin key rm --uid=johndoe --access-key 0555b35654ad1656d804",
"radosgw-admin caps add --uid= USER_ID --caps= CAPS",
"--caps=\"[users|buckets|metadata|usage|zone]=[*|read|write|read, write]\"",
"radosgw-admin caps add --uid=johndoe --caps=\"users=*\"",
"radosgw-admin caps remove --uid=johndoe --caps={caps}",
"radosgw-admin role create --role-name= ROLE_NAME [--path==\" PATH_TO_FILE \"] [--assume-role-policy-doc= TRUST_RELATIONSHIP_POLICY_DOCUMENT ]",
"radosgw-admin role create --role-name=S3Access1 --path=/application_abc/component_xyz/ --assume-role-policy-doc=\\{\\\"Version\\\":\\\"2012-10-17\\\",\\\"Statement\\\":\\[\\{\\\"Effect\\\":\\\"Allow\\\",\\\"Principal\\\":\\{\\\"AWS\\\":\\[\\\"arn:aws:iam:::user/TESTER\\\"\\]\\},\\\"Action\\\":\\[\\\"sts:AssumeRole\\\"\\]\\}\\]\\} { \"RoleId\": \"ca43045c-082c-491a-8af1-2eebca13deec\", \"RoleName\": \"S3Access1\", \"Path\": \"/application_abc/component_xyz/\", \"Arn\": \"arn:aws:iam:::role/application_abc/component_xyz/S3Access1\", \"CreateDate\": \"2022-06-17T10:18:29.116Z\", \"MaxSessionDuration\": 3600, \"AssumeRolePolicyDocument\": \"{\\\"Version\\\":\\\"2012-10-17\\\",\\\"Statement\\\":[{\\\"Effect\\\":\\\"Allow\\\",\\\"Principal\\\":{\\\"AWS\\\":[\\\"arn:aws:iam:::user/TESTER\\\"]},\\\"Action\\\":[\\\"sts:AssumeRole\\\"]}]}\" }",
"radosgw-admin role get --role-name= ROLE_NAME",
"radosgw-admin role get --role-name=S3Access1 { \"RoleId\": \"ca43045c-082c-491a-8af1-2eebca13deec\", \"RoleName\": \"S3Access1\", \"Path\": \"/application_abc/component_xyz/\", \"Arn\": \"arn:aws:iam:::role/application_abc/component_xyz/S3Access1\", \"CreateDate\": \"2022-06-17T10:18:29.116Z\", \"MaxSessionDuration\": 3600, \"AssumeRolePolicyDocument\": \"{\\\"Version\\\":\\\"2012-10-17\\\",\\\"Statement\\\":[{\\\"Effect\\\":\\\"Allow\\\",\\\"Principal\\\":{\\\"AWS\\\":[\\\"arn:aws:iam:::user/TESTER\\\"]},\\\"Action\\\":[\\\"sts:AssumeRole\\\"]}]}\" }",
"radosgw-admin role list",
"radosgw-admin role list [ { \"RoleId\": \"85fb46dd-a88a-4233-96f5-4fb54f4353f7\", \"RoleName\": \"kvm-sts\", \"Path\": \"/application_abc/component_xyz/\", \"Arn\": \"arn:aws:iam:::role/application_abc/component_xyz/kvm-sts\", \"CreateDate\": \"2022-09-13T11:55:09.39Z\", \"MaxSessionDuration\": 7200, \"AssumeRolePolicyDocument\": \"{\\\"Version\\\":\\\"2012-10-17\\\",\\\"Statement\\\":[{\\\"Effect\\\":\\\"Allow\\\",\\\"Principal\\\":{\\\"AWS\\\":[\\\"arn:aws:iam:::user/kvm\\\"]},\\\"Action\\\":[\\\"sts:AssumeRole\\\"]}]}\" }, { \"RoleId\": \"9116218d-4e85-4413-b28d-cdfafba24794\", \"RoleName\": \"kvm-sts-1\", \"Path\": \"/application_abc/component_xyz/\", \"Arn\": \"arn:aws:iam:::role/application_abc/component_xyz/kvm-sts-1\", \"CreateDate\": \"2022-09-16T00:05:57.483Z\", \"MaxSessionDuration\": 3600, \"AssumeRolePolicyDocument\": \"{\\\"Version\\\":\\\"2012-10-17\\\",\\\"Statement\\\":[{\\\"Effect\\\":\\\"Allow\\\",\\\"Principal\\\":{\\\"AWS\\\":[\\\"arn:aws:iam:::user/kvm\\\"]},\\\"Action\\\":[\\\"sts:AssumeRole\\\"]}]}\" } ]",
"radosgw-admin role-trust-policy modify --role-name= ROLE_NAME --assume-role-policy-doc= TRUST_RELATIONSHIP_POLICY_DOCUMENT",
"radosgw-admin role-trust-policy modify --role-name=S3Access1 --assume-role-policy-doc=\\{\\\"Version\\\":\\\"2012-10-17\\\",\\\"Statement\\\":\\[\\{\\\"Effect\\\":\\\"Allow\\\",\\\"Principal\\\":\\{\\\"AWS\\\":\\[\\\"arn:aws:iam:::user/TESTER\\\"\\]\\},\\\"Action\\\":\\[\\\"sts:AssumeRole\\\"\\]\\}\\]\\} { \"RoleId\": \"ca43045c-082c-491a-8af1-2eebca13deec\", \"RoleName\": \"S3Access1\", \"Path\": \"/application_abc/component_xyz/\", \"Arn\": \"arn:aws:iam:::role/application_abc/component_xyz/S3Access1\", \"CreateDate\": \"2022-06-17T10:18:29.116Z\", \"MaxSessionDuration\": 3600, \"AssumeRolePolicyDocument\": \"{\\\"Version\\\":\\\"2012-10-17\\\",\\\"Statement\\\":[{\\\"Effect\\\":\\\"Allow\\\",\\\"Principal\\\":{\\\"AWS\\\":[\\\"arn:aws:iam:::user/TESTER\\\"]},\\\"Action\\\":[\\\"sts:AssumeRole\\\"]}]}\" }",
"radosgw-admin role-policy get --role-name= ROLE_NAME --policy-name= POLICY_NAME",
"radosgw-admin role-policy get --role-name=S3Access1 --policy-name=Policy1 { \"Permission policy\": \"{\\\"Version\\\":\\\"2012-10-17\\\",\\\"Statement\\\":[{\\\"Effect\\\":\\\"Allow\\\",\\\"Action\\\":[\\\"s3:*\\\"],\\\"Resource\\\":\\\"arn:aws:s3:::example_bucket\\\"}]}\" }",
"radosgw-admin role policy delete --role-name= ROLE_NAME --policy-name= POLICY_NAME",
"radosgw-admin role policy delete --role-name=S3Access1 --policy-name=Policy1",
"radosgw-admin role delete --role-name= ROLE_NAME",
"radosgw-admin role delete --role-name=S3Access1",
"radosgw-admin role-policy put --role-name= ROLE_NAME --policy-name= POLICY_NAME --policy-doc= PERMISSION_POLICY_DOCUMENT",
"radosgw-admin role-policy put --role-name=S3Access1 --policy-name=Policy1 --policy-doc=\\{\\\"Version\\\":\\\"2012-10-17\\\",\\\"Statement\\\":\\[\\{\\\"Effect\\\":\\\"Allow\\\",\\\"Action\\\":\\[\\\"s3:*\\\"\\],\\\"Resource\\\":\\\"arn:aws:s3:::example_bucket\\\"\\}\\]\\}",
"radosgw-admin role-policy list --role-name= ROLE_NAME",
"radosgw-admin role-policy list --role-name=S3Access1 [ \"Policy1\" ]",
"radosgw-admin role policy delete --role-name= ROLE_NAME --policy-name= POLICY_NAME",
"radosgw-admin role policy delete --role-name=S3Access1 --policy-name=Policy1",
"radosgw-admin role update --role-name= ROLE_NAME --max-session-duration=7200",
"radosgw-admin role update --role-name=test-sts-role --max-session-duration=7200",
"radosgw-admin role list [ { \"RoleId\": \"d4caf33f-caba-42f3-8bd4-48c84b4ea4d3\", \"RoleName\": \"test-sts-role\", \"Path\": \"/\", \"Arn\": \"arn:aws:iam:::role/test-role\", \"CreateDate\": \"2022-09-07T20:01:15.563Z\", \"MaxSessionDuration\": 7200, <<<<<< \"AssumeRolePolicyDocument\": \"{\\\"Version\\\":\\\"2012-10-17\\\",\\\"Statement\\\":[{\\\"Effect\\\":\\\"Allow\\\",\\\"Principal\\\":{\\\"AWS\\\":[\\\"arn:aws:iam:::user/kvm\\\"]},\\\"Action\\\":[\\\"sts:AssumeRole\\\"]}]}\" } ]",
"radosgw-admin quota set --quota-scope=user --uid= USER_ID [--max-objects= NUMBER_OF_OBJECTS ] [--max-size= MAXIMUM_SIZE_IN_BYTES ]",
"radosgw-admin quota set --quota-scope=user --uid=johndoe --max-objects=1024 --max-size=1024",
"radosgw-admin quota enable --quota-scope=user --uid= USER_ID",
"radosgw-admin quota disable --quota-scope=user --uid= USER_ID",
"radosgw-admin quota set --uid= USER_ID --quota-scope=bucket --bucket= BUCKET_NAME [--max-objects= NUMBER_OF_OBJECTS ] [--max-size= MAXIMUM_SIZE_IN_BYTES ]",
"radosgw-admin quota enable --quota-scope=bucket --uid= USER_ID",
"radosgw-admin quota disable --quota-scope=bucket --uid= USER_ID",
"radosgw-admin user info --uid= USER_ID",
"radosgw-admin user info --uid= USER_ID --tenant= TENANT",
"radosgw-admin user stats --uid= USER_ID --sync-stats",
"radosgw-admin user stats --uid= USER_ID",
"radosgw-admin global quota get",
"radosgw-admin global quota set --quota-scope bucket --max-objects 1024 radosgw-admin global quota enable --quota-scope bucket",
"radosgw-admin bucket list [ \"34150b2e9174475db8e191c188e920f6/swcontainer\", \"s3bucket1\", \"34150b2e9174475db8e191c188e920f6/swimpfalse\", \"c278edd68cfb4705bb3e07837c7ad1a8/ec2container\", \"c278edd68cfb4705bb3e07837c7ad1a8/demoten1\", \"c278edd68cfb4705bb3e07837c7ad1a8/demo-ct\", \"c278edd68cfb4705bb3e07837c7ad1a8/demopostup\", \"34150b2e9174475db8e191c188e920f6/postimpfalse\", \"c278edd68cfb4705bb3e07837c7ad1a8/demoten2\", \"c278edd68cfb4705bb3e07837c7ad1a8/postupsw\" ]",
"radosgw-admin bucket link --bucket= ORIGINAL_NAME --bucket-new-name= NEW_NAME --uid= USER_ID",
"radosgw-admin bucket link --bucket=s3bucket1 --bucket-new-name=s3newb --uid=testuser",
"radosgw-admin bucket link --bucket= tenant / ORIGINAL_NAME --bucket-new-name= NEW_NAME --uid= TENANT USD USER_ID",
"radosgw-admin bucket link --bucket=test/s3bucket1 --bucket-new-name=s3newb --uid=testUSDtestuser",
"radosgw-admin bucket list [ \"34150b2e9174475db8e191c188e920f6/swcontainer\", \"34150b2e9174475db8e191c188e920f6/swimpfalse\", \"c278edd68cfb4705bb3e07837c7ad1a8/ec2container\", \"s3newb\", \"c278edd68cfb4705bb3e07837c7ad1a8/demoten1\", \"c278edd68cfb4705bb3e07837c7ad1a8/demo-ct\", \"c278edd68cfb4705bb3e07837c7ad1a8/demopostup\", \"34150b2e9174475db8e191c188e920f6/postimpfalse\", \"c278edd68cfb4705bb3e07837c7ad1a8/demoten2\", \"c278edd68cfb4705bb3e07837c7ad1a8/postupsw\" ]",
"radosgw-admin bucket list [ \"34150b2e9174475db8e191c188e920f6/swcontainer\", \"s3bucket1\", \"34150b2e9174475db8e191c188e920f6/swimpfalse\", \"c278edd68cfb4705bb3e07837c7ad1a8/ec2container\", \"c278edd68cfb4705bb3e07837c7ad1a8/demoten1\", \"c278edd68cfb4705bb3e07837c7ad1a8/demo-ct\", \"c278edd68cfb4705bb3e07837c7ad1a8/demopostup\", \"34150b2e9174475db8e191c188e920f6/postimpfalse\", \"c278edd68cfb4705bb3e07837c7ad1a8/demoten2\", \"c278edd68cfb4705bb3e07837c7ad1a8/postupsw\" ]",
"radosgw-admin bucket rm --bucket= BUCKET_NAME",
"radosgw-admin bucket rm --bucket=s3bucket1",
"radosgw-admin bucket rm --bucket= BUCKET --purge-objects --bypass-gc",
"radosgw-admin bucket rm --bucket=s3bucket1 --purge-objects --bypass-gc",
"radosgw-admin bucket list [ \"34150b2e9174475db8e191c188e920f6/swcontainer\", \"34150b2e9174475db8e191c188e920f6/swimpfalse\", \"c278edd68cfb4705bb3e07837c7ad1a8/ec2container\", \"c278edd68cfb4705bb3e07837c7ad1a8/demoten1\", \"c278edd68cfb4705bb3e07837c7ad1a8/demo-ct\", \"c278edd68cfb4705bb3e07837c7ad1a8/demopostup\", \"34150b2e9174475db8e191c188e920f6/postimpfalse\", \"c278edd68cfb4705bb3e07837c7ad1a8/demoten2\", \"c278edd68cfb4705bb3e07837c7ad1a8/postupsw\" ]",
"radosgw-admin bucket link --uid= USER --bucket= BUCKET",
"radosgw-admin bucket link --uid=user2 --bucket=data",
"radosgw-admin bucket list --uid=user2 [ \"data\" ]",
"radosgw-admin bucket chown --uid= user --bucket= bucket",
"radosgw-admin bucket chown --uid=user2 --bucket=data",
"radosgw-admin bucket list --bucket=data",
"radosgw-admin bucket link --bucket= CURRENT_TENANT / BUCKET --uid= NEW_TENANT USD USER",
"radosgw-admin bucket link --bucket=test/data --uid=test2USDuser2",
"radosgw-admin bucket list --uid=testUSDuser2 [ \"data\" ]",
"radosgw-admin bucket chown --bucket= NEW_TENANT / BUCKET --uid= NEW_TENANT USD USER",
"radosgw-admin bucket chown --bucket='test2/data' --uid='testUSDtuser2'",
"radosgw-admin bucket list --bucket=test2/data",
"ceph config set client.rgw rgw_keystone_implicit_tenants true",
"swift list",
"s3cmd ls",
"radosgw-admin bucket link --bucket=/ BUCKET --uid=' TENANT USD USER '",
"radosgw-admin bucket link --bucket=/data --uid='testUSDtenanted-user'",
"radosgw-admin bucket list --uid='testUSDtenanted-user' [ \"data\" ]",
"radosgw-admin bucket chown --bucket=' tenant / bucket name ' --uid=' tenant USD user '",
"radosgw-admin bucket chown --bucket='test/data' --uid='testUSDtenanted-user'",
"radosgw-admin bucket list --bucket=test/data",
"radosgw-admin bucket radoslist --bucket BUCKET_NAME",
"radosgw-admin bucket radoslist --bucket mybucket",
"head /usr/bin/rgw-orphan-list",
"mkdir orphans",
"cd orphans",
"rgw-orphan-list",
"Available pools: .rgw.root default.rgw.control default.rgw.meta default.rgw.log default.rgw.buckets.index default.rgw.buckets.data rbd default.rgw.buckets.non-ec ma.rgw.control ma.rgw.meta ma.rgw.log ma.rgw.buckets.index ma.rgw.buckets.data ma.rgw.buckets.non-ec Which pool do you want to search for orphans?",
"rgw-orphan-list -h rgw-orphan-list POOL_NAME / DIRECTORY",
"rgw-orphan-list default.rgw.buckets.data /orphans 2023-09-12 08:41:14 ceph-host01 Computing delta 2023-09-12 08:41:14 ceph-host01 Computing results 10 potential orphans found out of a possible 2412 (0%). <<<<<<< orphans detected The results can be found in './orphan-list-20230912124113.out'. Intermediate files are './rados-20230912124113.intermediate' and './radosgw-admin-20230912124113.intermediate'. *** *** WARNING: This is EXPERIMENTAL code and the results should be used *** only with CAUTION! *** Done at 2023-09-12 08:41:14.",
"ls -l -rw-r--r--. 1 root root 770 Sep 12 03:59 orphan-list-20230912075939.out -rw-r--r--. 1 root root 0 Sep 12 03:59 rados-20230912075939.error -rw-r--r--. 1 root root 248508 Sep 12 03:59 rados-20230912075939.intermediate -rw-r--r--. 1 root root 0 Sep 12 03:59 rados-20230912075939.issues -rw-r--r--. 1 root root 0 Sep 12 03:59 radosgw-admin-20230912075939.error -rw-r--r--. 1 root root 247738 Sep 12 03:59 radosgw-admin-20230912075939.intermediate",
"cat ./orphan-list-20230912124113.out a9c042bc-be24-412c-9052-dda6b2f01f55.16749.1_key1.cherylf.433-bucky-4865-0.0 a9c042bc-be24-412c-9052-dda6b2f01f55.16749.1_key1.cherylf.433-bucky-4865-0.1 a9c042bc-be24-412c-9052-dda6b2f01f55.16749.1_key1.cherylf.433-bucky-4865-0.2 a9c042bc-be24-412c-9052-dda6b2f01f55.16749.1_key1.cherylf.433-bucky-4865-0.3 a9c042bc-be24-412c-9052-dda6b2f01f55.16749.1_key1.cherylf.433-bucky-4865-0.4 a9c042bc-be24-412c-9052-dda6b2f01f55.16749.1_key1.cherylf.433-bucky-4865-0.5 a9c042bc-be24-412c-9052-dda6b2f01f55.16749.1_key1.cherylf.433-bucky-4865-0.6 a9c042bc-be24-412c-9052-dda6b2f01f55.16749.1_key1.cherylf.433-bucky-4865-0.7 a9c042bc-be24-412c-9052-dda6b2f01f55.16749.1_key1.cherylf.433-bucky-4865-0.8 a9c042bc-be24-412c-9052-dda6b2f01f55.16749.1_key1.cherylf.433-bucky-4865-0.9",
"rados -p POOL_NAME rm OBJECT_NAME",
"rados -p default.rgw.buckets.data rm myobject",
"radosgw-admin bucket check --bucket= BUCKET_NAME",
"radosgw-admin bucket check --bucket=mybucket",
"radosgw-admin bucket check --fix --bucket= BUCKET_NAME",
"radosgw-admin bucket check --fix --bucket=mybucket",
"radosgw-admin topic list",
"radosgw-admin topic get --topic=topic1",
"radosgw-admin topic rm --topic=topic1",
"client.put_bucket_notification_configuration( Bucket=bucket_name, NotificationConfiguration={ 'TopicConfigurations': [ { 'Id': notification_name, 'TopicArn': topic_arn, 'Events': ['s3:ObjectCreated:*', 's3:ObjectRemoved:*', 's3:ObjectLifecycle:Expiration:*'] }]})",
"{ \"Role\": \"arn:aws:iam::account-id:role/role-name\", \"Rules\": [ { \"ID\": \"String\", \"Status\": \"Enabled\", \"Priority\": 1, \"DeleteMarkerReplication\": { \"Status\": \"Enabled\"|\"Disabled\" }, \"Destination\": { \"Bucket\": \"BUCKET_NAME\" } } ] }",
"cat replication.json { \"Role\": \"arn:aws:iam::account-id:role/role-name\", \"Rules\": [ { \"ID\": \"pipe-bkt\", \"Status\": \"Enabled\", \"Priority\": 1, \"DeleteMarkerReplication\": { \"Status\": \"Disabled\" }, \"Destination\": { \"Bucket\": \"testbucket\" } } ] }",
"aws --endpoint-url=RADOSGW_ENDPOINT_URL s3api put-bucket-replication --bucket BUCKET_NAME --replication-configuration file://REPLICATION_CONFIIRATION_FILE.json",
"aws --endpoint-url=http://host01:80 s3api put-bucket-replication --bucket testbucket --replication-configuration file://replication.json",
"radosgw-admin sync policy get --bucket BUCKET_NAME",
"radosgw-admin sync policy get --bucket testbucket { \"groups\": [ { \"id\": \"s3-bucket-replication:disabled\", \"data_flow\": {}, \"pipes\": [], \"status\": \"allowed\" }, { \"id\": \"s3-bucket-replication:enabled\", \"data_flow\": {}, \"pipes\": [ { \"id\": \"\", \"source\": { \"bucket\": \"*\", \"zones\": [ \"*\" ] }, \"dest\": { \"bucket\": \"testbucket\", \"zones\": [ \"*\" ] }, \"params\": { \"source\": {}, \"dest\": {}, \"priority\": 1, \"mode\": \"user\", \"user\": \"s3cmd\" } } ], \"status\": \"enabled\" } ] }",
"aws s3api get-bucket-replication --bucket BUCKET_NAME --endpoint-url=RADOSGW_ENDPOINT_URL",
"aws s3api get-bucket-replication --bucket testbucket --endpoint-url=http://host01:80 { \"ReplicationConfiguration\": { \"Role\": \"\", \"Rules\": [ { \"ID\": \"pipe-bkt\", \"Status\": \"Enabled\", \"Priority\": 1, \"Destination\": { Bucket\": \"testbucket\" } } ] } }",
"aws s3api delete-bucket-replication --bucket BUCKET_NAME --endpoint-url=RADOSGW_ENDPOINT_URL",
"aws s3api delete-bucket-replication --bucket testbucket --endpoint-url=http://host01:80",
"radosgw-admin sync policy get --bucket=BUCKET_NAME",
"radosgw-admin sync policy get --bucket=testbucket",
"cat user_policy.json { \"Version\":\"2012-10-17\", \"Statement\": { \"Effect\":\"Deny\", \"Action\": [ \"s3:PutReplicationConfiguration\", \"s3:GetReplicationConfiguration\", \"s3:DeleteReplicationConfiguration\" ], \"Resource\": \"arn:aws:s3:::*\", } }",
"aws --endpoint-url=ENDPOINT_URL iam put-user-policy --user-name USER_NAME --policy-name USER_POLICY_NAME --policy-document POLICY_DOCUMENT_PATH",
"aws --endpoint-url=http://host01:80 iam put-user-policy --user-name newuser1 --policy-name userpolicy --policy-document file://user_policy.json",
"aws --endpoint-url=ENDPOINT_URL iam get-user-policy --user-name USER_NAME --policy-name USER_POLICY_NAME --region us",
"aws --endpoint-url=http://host01:80 iam get-user-policy --user-name newuser1 --policy-name userpolicy --region us",
"[user@client ~]USD vi lifecycle.json",
"{ \"Rules\": [ { \"Filter\": { \"Prefix\": \"images/\" }, \"Status\": \"Enabled\", \"Expiration\": { \"Days\": 1 }, \"ID\": \"ImageExpiration\" } ] }",
"aws --endpoint-url= RADOSGW_ENDPOINT_URL : PORT s3api put-bucket-lifecycle-configuration --bucket BUCKET_NAME --lifecycle-configuration file:// PATH_TO_LIFECYCLE_CONFIGURATION_FILE / LIFECYCLE_CONFIGURATION_FILE .json",
"[user@client ~]USD aws --endpoint-url=http://host01:80 s3api put-bucket-lifecycle-configuration --bucket testbucket --lifecycle-configuration file://lifecycle.json",
"aws --endpoint-url= RADOSGW_ENDPOINT_URL : PORT s3api get-bucket-lifecycle-configuration --bucket BUCKET_NAME",
"[user@client ~]USD aws --endpoint-url=http://host01:80 s3api get-bucket-lifecycle-configuration --bucket testbucket { \"Rules\": [ { \"Expiration\": { \"Days\": 1 }, \"ID\": \"ImageExpiration\", \"Filter\": { \"Prefix\": \"images/\" }, \"Status\": \"Enabled\" } ] }",
"radosgw-admin lc get --bucket= BUCKET_NAME",
"radosgw-admin lc get --bucket=testbucket { \"prefix_map\": { \"images/\": { \"status\": true, \"dm_expiration\": false, \"expiration\": 1, \"noncur_expiration\": 0, \"mp_expiration\": 0, \"transitions\": {}, \"noncur_transitions\": {} } }, \"rule_map\": [ { \"id\": \"ImageExpiration\", \"rule\": { \"id\": \"ImageExpiration\", \"prefix\": \"\", \"status\": \"Enabled\", \"expiration\": { \"days\": \"1\", \"date\": \"\" }, \"mp_expiration\": { \"days\": \"\", \"date\": \"\" }, \"filter\": { \"prefix\": \"images/\", \"obj_tags\": { \"tagset\": {} } }, \"transitions\": {}, \"noncur_transitions\": {}, \"dm_expiration\": false } } ] }",
"aws --endpoint-url= RADOSGW_ENDPOINT_URL : PORT s3api delete-bucket-lifecycle --bucket BUCKET_NAME",
"[user@client ~]USD aws --endpoint-url=http://host01:80 s3api delete-bucket-lifecycle --bucket testbucket",
"aws --endpoint-url= RADOSGW_ENDPOINT_URL : PORT s3api get-bucket-lifecycle-configuration --bucket BUCKET_NAME",
"aws --endpoint-url=http://host01:80 s3api get-bucket-lifecycle-configuration --bucket testbucket",
"radosgw-admin lc get --bucket= BUCKET_NAME",
"radosgw-admin lc get --bucket=testbucket",
"[user@client ~]USD vi lifecycle.json",
"{ \"Rules\": [ { \"Filter\": { \"Prefix\": \"images/\" }, \"Status\": \"Enabled\", \"Expiration\": { \"Days\": 1 }, \"ID\": \"ImageExpiration\" }, { \"Filter\": { \"Prefix\": \"docs/\" }, \"Status\": \"Enabled\", \"Expiration\": { \"Days\": 30 }, \"ID\": \"DocsExpiration\" } ] }",
"aws --endpoint-url= RADOSGW_ENDPOINT_URL : PORT s3api put-bucket-lifecycle-configuration --bucket BUCKET_NAME --lifecycle-configuration file:// PATH_TO_LIFECYCLE_CONFIGURATION_FILE / LIFECYCLE_CONFIGURATION_FILE .json",
"[user@client ~]USD aws --endpoint-url=http://host01:80 s3api put-bucket-lifecycle-configuration --bucket testbucket --lifecycle-configuration file://lifecycle.json",
"aws --endpointurl= RADOSGW_ENDPOINT_URL : PORT s3api get-bucket-lifecycle-configuration --bucket BUCKET_NAME",
"[user@client ~]USD aws -endpoint-url=http://host01:80 s3api get-bucket-lifecycle-configuration --bucket testbucket { \"Rules\": [ { \"Expiration\": { \"Days\": 30 }, \"ID\": \"DocsExpiration\", \"Filter\": { \"Prefix\": \"docs/\" }, \"Status\": \"Enabled\" }, { \"Expiration\": { \"Days\": 1 }, \"ID\": \"ImageExpiration\", \"Filter\": { \"Prefix\": \"images/\" }, \"Status\": \"Enabled\" } ] }",
"radosgw-admin lc get --bucket= BUCKET_NAME",
"radosgw-admin lc get --bucket=testbucket { \"prefix_map\": { \"docs/\": { \"status\": true, \"dm_expiration\": false, \"expiration\": 1, \"noncur_expiration\": 0, \"mp_expiration\": 0, \"transitions\": {}, \"noncur_transitions\": {} }, \"images/\": { \"status\": true, \"dm_expiration\": false, \"expiration\": 1, \"noncur_expiration\": 0, \"mp_expiration\": 0, \"transitions\": {}, \"noncur_transitions\": {} } }, \"rule_map\": [ { \"id\": \"DocsExpiration\", \"rule\": { \"id\": \"DocsExpiration\", \"prefix\": \"\", \"status\": \"Enabled\", \"expiration\": { \"days\": \"30\", \"date\": \"\" }, \"noncur_expiration\": { \"days\": \"\", \"date\": \"\" }, \"mp_expiration\": { \"days\": \"\", \"date\": \"\" }, \"filter\": { \"prefix\": \"docs/\", \"obj_tags\": { \"tagset\": {} } }, \"transitions\": {}, \"noncur_transitions\": {}, \"dm_expiration\": false } }, { \"id\": \"ImageExpiration\", \"rule\": { \"id\": \"ImageExpiration\", \"prefix\": \"\", \"status\": \"Enabled\", \"expiration\": { \"days\": \"1\", \"date\": \"\" }, \"mp_expiration\": { \"days\": \"\", \"date\": \"\" }, \"filter\": { \"prefix\": \"images/\", \"obj_tags\": { \"tagset\": {} } }, \"transitions\": {}, \"noncur_transitions\": {}, \"dm_expiration\": false } } ] }",
"cephadm shell",
"radosgw-admin lc list [ { \"bucket\": \":testbucket:8b63d584-9ea1-4cf3-8443-a6a15beca943.54187.1\", \"started\": \"Thu, 01 Jan 1970 00:00:00 GMT\", \"status\" : \"UNINITIAL\" }, { \"bucket\": \":testbucket1:8b635499-9e41-4cf3-8443-a6a15345943.54187.2\", \"started\": \"Thu, 01 Jan 1970 00:00:00 GMT\", \"status\" : \"UNINITIAL\" } ]",
"radosgw-admin lc process --bucket= BUCKET_NAME",
"radosgw-admin lc process --bucket=testbucket1",
"radosgw-admin lc process",
"radosgw-admin lc list [ { \"bucket\": \":testbucket:8b63d584-9ea1-4cf3-8443-a6a15beca943.54187.1\", \"started\": \"Thu, 17 Mar 2022 21:48:50 GMT\", \"status\" : \"COMPLETE\" } { \"bucket\": \":testbucket1:8b635499-9e41-4cf3-8443-a6a15345943.54187.2\", \"started\": \"Thu, 17 Mar 2022 20:38:50 GMT\", \"status\" : \"COMPLETE\" } ]",
"cephadm shell",
"ceph config set client.rgw rgw_lifecycle_work_time %D:%D-%D:%D",
"ceph config set client.rgw rgw_lifecycle_work_time 06:00-08:00",
"ceph config get client.rgw rgw_lifecycle_work_time 06:00-08:00",
"ceph osd pool create POOL_NAME",
"ceph osd pool create test.hot.data",
"radosgw-admin zonegroup placement add --rgw-zonegroup default --placement-id PLACEMENT_TARGET --storage-class STORAGE_CLASS",
"radosgw-admin zonegroup placement add --rgw-zonegroup default --placement-id default-placement --storage-class hot.test { \"key\": \"default-placement\", \"val\": { \"name\": \"default-placement\", \"tags\": [], \"storage_classes\": [ \"STANDARD\", \"hot.test\" ] } }",
"radosgw-admin zone placement add --rgw-zone default --placement-id PLACEMENT_TARGET --storage-class STORAGE_CLASS --data-pool DATA_POOL",
"radosgw-admin zone placement add --rgw-zone default --placement-id default-placement --storage-class hot.test --data-pool test.hot.data { \"key\": \"default-placement\", \"val\": { \"index_pool\": \"test_zone.rgw.buckets.index\", \"storage_classes\": { \"STANDARD\": { \"data_pool\": \"test.hot.data\" }, \"hot.test\": { \"data_pool\": \"test.hot.data\", } }, \"data_extra_pool\": \"\", \"index_type\": 0 }",
"ceph osd pool application enable POOL_NAME rgw",
"ceph osd pool application enable test.hot.data rgw enabled application 'rgw' on pool 'test.hot.data'",
"aws s3api create-bucket --bucket testbucket10 --create-bucket-configuration LocationConstraint=default:default-placement --endpoint-url http://10.0.0.80:8080",
"aws --endpoint=http://10.0.0.80:8080 s3api put-object --bucket testbucket10 --key compliance-upload --body /root/test2.txt",
"ceph osd pool create POOL_NAME",
"ceph osd pool create test.cold.data",
"radosgw-admin zonegroup placement add --rgw-zonegroup default --placement-id PLACEMENT_TARGET --storage-class STORAGE_CLASS",
"radosgw-admin zonegroup placement add --rgw-zonegroup default --placement-id default-placement --storage-class cold.test { \"key\": \"default-placement\", \"val\": { \"name\": \"default-placement\", \"tags\": [], \"storage_classes\": [ \"STANDARD\", \"cold.test\" ] } }",
"radosgw-admin zone placement add --rgw-zone default --placement-id PLACEMENT_TARGET --storage-class STORAGE_CLASS --data-pool DATA_POOL",
"radosgw-admin zone placement add --rgw-zone default --placement-id default-placement --storage-class cold.test --data-pool test.cold.data",
"ceph osd pool application enable POOL_NAME rgw",
"ceph osd pool application enable test.cold.data rgw enabled application 'rgw' on pool 'test.cold.data'",
"radosgw-admin zonegroup get { \"id\": \"3019de59-ddde-4c5c-b532-7cdd29de09a1\", \"name\": \"default\", \"api_name\": \"default\", \"is_master\": \"true\", \"endpoints\": [], \"hostnames\": [], \"hostnames_s3website\": [], \"master_zone\": \"adacbe1b-02b4-41b8-b11d-0d505b442ed4\", \"zones\": [ { \"id\": \"adacbe1b-02b4-41b8-b11d-0d505b442ed4\", \"name\": \"default\", \"endpoints\": [], \"log_meta\": \"false\", \"log_data\": \"false\", \"bucket_index_max_shards\": 11, \"read_only\": \"false\", \"tier_type\": \"\", \"sync_from_all\": \"true\", \"sync_from\": [], \"redirect_zone\": \"\" } ], \"placement_targets\": [ { \"name\": \"default-placement\", \"tags\": [], \"storage_classes\": [ \"hot.test\", \"cold.test\", \"STANDARD\" ] } ], \"default_placement\": \"default-placement\", \"realm_id\": \"\", \"sync_policy\": { \"groups\": [] } }",
"radosgw-admin zone get { \"id\": \"adacbe1b-02b4-41b8-b11d-0d505b442ed4\", \"name\": \"default\", \"domain_root\": \"default.rgw.meta:root\", \"control_pool\": \"default.rgw.control\", \"gc_pool\": \"default.rgw.log:gc\", \"lc_pool\": \"default.rgw.log:lc\", \"log_pool\": \"default.rgw.log\", \"intent_log_pool\": \"default.rgw.log:intent\", \"usage_log_pool\": \"default.rgw.log:usage\", \"roles_pool\": \"default.rgw.meta:roles\", \"reshard_pool\": \"default.rgw.log:reshard\", \"user_keys_pool\": \"default.rgw.meta:users.keys\", \"user_email_pool\": \"default.rgw.meta:users.email\", \"user_swift_pool\": \"default.rgw.meta:users.swift\", \"user_uid_pool\": \"default.rgw.meta:users.uid\", \"otp_pool\": \"default.rgw.otp\", \"system_key\": { \"access_key\": \"\", \"secret_key\": \"\" }, \"placement_pools\": [ { \"key\": \"default-placement\", \"val\": { \"index_pool\": \"default.rgw.buckets.index\", \"storage_classes\": { \"cold.test\": { \"data_pool\": \"test.cold.data\" }, \"hot.test\": { \"data_pool\": \"test.hot.data\" }, \"STANDARD\": { \"data_pool\": \"default.rgw.buckets.data\" } }, \"data_extra_pool\": \"default.rgw.buckets.non-ec\", \"index_type\": 0 } } ], \"realm_id\": \"\", \"notif_pool\": \"default.rgw.log:notif\" }",
"aws s3api create-bucket --bucket testbucket10 --create-bucket-configuration LocationConstraint=default:default-placement --endpoint-url http://10.0.0.80:8080",
"radosgw-admin bucket list --bucket testbucket10 { \"ETag\": \"\\\"211599863395c832a3dfcba92c6a3b90\\\"\", \"Size\": 540, \"StorageClass\": \"STANDARD\", \"Key\": \"obj1\", \"VersionId\": \"W95teRsXPSJI4YWJwwSG30KxSCzSgk-\", \"IsLatest\": true, \"LastModified\": \"2023-11-23T10:38:07.214Z\", \"Owner\": { \"DisplayName\": \"test-user\", \"ID\": \"test-user\" } }",
"vi lifecycle.json",
"{ \"Rules\": [ { \"Filter\": { \"Prefix\": \"\" }, \"Status\": \"Enabled\", \"Transitions\": [ { \"Days\": 5, \"StorageClass\": \"hot.test\" }, { \"Days\": 20, \"StorageClass\": \"cold.test\" } ], \"Expiration\": { \"Days\": 365 }, \"ID\": \"double transition and expiration\" } ] }",
"aws s3api put-bucket-lifecycle-configuration --bucket testbucket10 --lifecycle-configuration file://lifecycle.json",
"aws s3api get-bucket-lifecycle-configuration --bucket testbucke10 { \"Rules\": [ { \"Expiration\": { \"Days\": 365 }, \"ID\": \"double transition and expiration\", \"Prefix\": \"\", \"Status\": \"Enabled\", \"Transitions\": [ { \"Days\": 20, \"StorageClass\": \"cold.test\" }, { \"Days\": 5, \"StorageClass\": \"hot.test\" } ] } ] }",
"radosgw-admin bucket list --bucket testbucket10 { \"ETag\": \"\\\"211599863395c832a3dfcba92c6a3b90\\\"\", \"Size\": 540, \"StorageClass\": \"cold.test\", \"Key\": \"obj1\", \"VersionId\": \"W95teRsXPSJI4YWJwwSG30KxSCzSgk-\", \"IsLatest\": true, \"LastModified\": \"2023-11-23T10:38:07.214Z\", \"Owner\": { \"DisplayName\": \"test-user\", \"ID\": \"test-user\" } }",
"aws --endpoint=http:// RGW_PORT :8080 s3api create-bucket --bucket BUCKET_NAME --object-lock-enabled-for-bucket",
"aws --endpoint=http://rgw.ceph.com:8080 s3api create-bucket --bucket worm-bucket --object-lock-enabled-for-bucket",
"aws --endpoint=http:// RGW_PORT :8080 s3api put-object-lock-configuration --bucket BUCKET_NAME --object-lock-configuration '{ \"ObjectLockEnabled\": \"Enabled\", \"Rule\": { \"DefaultRetention\": { \"Mode\": \" RETENTION_MODE \", \"Days\": NUMBER_OF_DAYS }}}'",
"aws --endpoint=http://rgw.ceph.com:8080 s3api put-object-lock-configuration --bucket worm-bucket --object-lock-configuration '{ \"ObjectLockEnabled\": \"Enabled\", \"Rule\": { \"DefaultRetention\": { \"Mode\": \"COMPLIANCE\", \"Days\": 10 }}}'",
"aws --endpoint=http:// RGW_PORT :8080 s3api put-object --bucket BUCKET_NAME --object-lock-mode RETENTION_MODE --object-lock-retain-until-date \" DATE \" --key compliance-upload --body TEST_FILE",
"aws --endpoint=http://rgw.ceph.com:8080 s3api put-object --bucket worm-bucket --object-lock-mode COMPLIANCE --object-lock-retain-until-date \"2022-05-31\" --key compliance-upload --body test.dd { \"ETag\": \"\\\"d560ea5652951637ba9c594d8e6ea8c1\\\"\", \"VersionId\": \"Nhhk5kRS6Yp6dZXVWpZZdRcpSpBKToD\" }",
"aws --endpoint=http:// RGW_PORT :8080 s3api put-object --bucket BUCKET_NAME --object-lock-mode RETENTION_MODE --object-lock-retain-until-date \" DATE \" --key compliance-upload --body PATH",
"aws --endpoint=http://rgw.ceph.com:8080 s3api put-object --bucket worm-bucket --object-lock-mode COMPLIANCE --object-lock-retain-until-date \"2022-05-31\" --key compliance-upload --body /etc/fstab { \"ETag\": \"\\\"d560ea5652951637ba9c594d8e6ea8c1\\\"\", \"VersionId\": \"Nhhk5kRS6Yp6dZXVWpZZdRcpSpBKToD\" }",
"aws --endpoint=http://rgw.ceph.com:8080 s3api put-object-legal-hold --bucket worm-bucket --key compliance-upload --legal-hold Status=ON",
"aws --endpoint=http://rgw.ceph.com:8080 s3api list-objects --bucket worm-bucket",
"aws --endpoint=http://rgw.ceph.com:8080 s3api list-objects --bucket worm-bucket { \"Versions\": [ { \"ETag\": \"\\\"d560ea5652951637ba9c594d8e6ea8c1\\\"\", \"Size\": 288, \"StorageClass\": \"STANDARD\", \"Key\": \"hosts\", \"VersionId\": \"Nhhk5kRS6Yp6dZXVWpZZdRcpSpBKToD\", \"IsLatest\": true, \"LastModified\": \"2022-06-17T08:51:17.392000+00:00\", \"Owner\": { \"DisplayName\": \"Test User in Tenant test\", \"ID\": \"testUSDtest.user\" } } } ] }",
"aws --endpoint=http://rgw.ceph.com:8080 s3api get-object --bucket worm-bucket --key compliance-upload --version-id 'IGOU.vdIs3SPduZglrB-RBaK.sfXpcd' download.1 { \"AcceptRanges\": \"bytes\", \"LastModified\": \"2022-06-17T08:51:17+00:00\", \"ContentLength\": 288, \"ETag\": \"\\\"d560ea5652951637ba9c594d8e6ea8c1\\\"\", \"VersionId\": \"Nhhk5kRS6Yp6dZXVWpZZdRcpSpBKToD\", \"ContentType\": \"binary/octet-stream\", \"Metadata\": {}, \"ObjectLockMode\": \"COMPLIANCE\", \"ObjectLockRetainUntilDate\": \"2023-06-17T08:51:17+00:00\" }",
"radosgw-admin usage show --uid=johndoe --start-date=2022-06-01 --end-date=2022-07-01",
"radosgw-admin usage show --show-log-entries=false",
"radosgw-admin usage trim --start-date=2022-06-01 --end-date=2022-07-31 radosgw-admin usage trim --uid=johndoe radosgw-admin usage trim --uid=johndoe --end-date=2021-04-31",
"radosgw-admin metadata get bucket: BUCKET_NAME radosgw-admin metadata get bucket.instance: BUCKET : BUCKET_ID radosgw-admin metadata get user: USER radosgw-admin metadata set user: USER",
"radosgw-admin metadata list radosgw-admin metadata list bucket radosgw-admin metadata list bucket.instance radosgw-admin metadata list user",
".bucket.meta.prodtx:test%25star:default.84099.6 .bucket.meta.testcont:default.4126.1 .bucket.meta.prodtx:testcont:default.84099.4 prodtx/testcont prodtx/test%25star testcont",
"prodtxUSDprodt test2.buckets prodtxUSDprodt.buckets test2",
"radosgw-admin ratelimit set --ratelimit-scope=user --uid= USER_ID [--max-read-ops= NUMBER_OF_OPERATIONS ] [--max-read-bytes= NUMBER_OF_BYTES ] [--max-write-ops= NUMBER_OF_OPERATIONS ] [--max-write-bytes= NUMBER_OF_BYTES ]",
"radosgw-admin ratelimit set --ratelimit-scope=user --uid=testing --max-read-ops=1024 --max-write-bytes=10240",
"radosgw-admin ratelimit get --ratelimit-scope=user --uid= USER_ID",
"radosgw-admin ratelimit get --ratelimit-scope=user --uid=testing { \"user_ratelimit\": { \"max_read_ops\": 1024, \"max_write_ops\": 0, \"max_read_bytes\": 0, \"max_write_bytes\": 10240, \"enabled\": false } }",
"radosgw-admin ratelimit enable --ratelimit-scope=user --uid= USER_ID",
"radosgw-admin ratelimit enable --ratelimit-scope=user --uid=testing { \"user_ratelimit\": { \"max_read_ops\": 1024, \"max_write_ops\": 0, \"max_read_bytes\": 0, \"max_write_bytes\": 10240, \"enabled\": true } }",
"radosgw-admin ratelimit disable --ratelimit-scope=user --uid= USER_ID",
"radosgw-admin ratelimit disable --ratelimit-scope=user --uid=testing",
"radosgw-admin ratelimit set --ratelimit-scope=bucket --bucket= BUCKET_NAME [--max-read-ops= NUMBER_OF_OPERATIONS ] [--max-read-bytes= NUMBER_OF_BYTES ] [--max-write-ops= NUMBER_OF_OPERATIONS ] [--max-write-bytes= NUMBER_OF_BYTES ]",
"radosgw-admin ratelimit set --ratelimit-scope=bucket --bucket=mybucket --max-read-ops=1024 --max-write-bytes=10240",
"radosgw-admin ratelimit get --ratelimit-scope=bucket --bucket= BUCKET_NAME",
"radosgw-admin ratelimit get --ratelimit-scope=bucket --bucket=mybucket { \"bucket_ratelimit\": { \"max_read_ops\": 1024, \"max_write_ops\": 0, \"max_read_bytes\": 0, \"max_write_bytes\": 10240, \"enabled\": false } }",
"radosgw-admin ratelimit enable --ratelimit-scope=bucket --bucket= BUCKET_NAME",
"radosgw-admin ratelimit enable --ratelimit-scope=bucket --bucket=mybucket { \"bucket_ratelimit\": { \"max_read_ops\": 1024, \"max_write_ops\": 0, \"max_read_bytes\": 0, \"max_write_bytes\": 10240, \"enabled\": true } }",
"radosgw-admin ratelimit disable --ratelimit-scope=bucket --bucket= BUCKET_NAME",
"radosgw-admin ratelimit disable --ratelimit-scope=bucket --bucket=mybucket",
"radosgw-admin global ratelimit get",
"radosgw-admin global ratelimit get { \"bucket_ratelimit\": { \"max_read_ops\": 1024, \"max_write_ops\": 0, \"max_read_bytes\": 0, \"max_write_bytes\": 0, \"enabled\": false }, \"user_ratelimit\": { \"max_read_ops\": 0, \"max_write_ops\": 0, \"max_read_bytes\": 0, \"max_write_bytes\": 0, \"enabled\": false }, \"anonymous_ratelimit\": { \"max_read_ops\": 0, \"max_write_ops\": 0, \"max_read_bytes\": 0, \"max_write_bytes\": 0, \"enabled\": false } }",
"radosgw-admin global ratelimit set --ratelimit-scope=bucket [--max-read-ops= NUMBER_OF_OPERATIONS ] [--max-read-bytes= NUMBER_OF_BYTES ] [--max-write-ops= NUMBER_OF_OPERATIONS ] [--max-write-bytes= NUMBER_OF_BYTES ]",
"radosgw-admin global ratelimit set --ratelimit-scope bucket --max-read-ops=1024",
"radosgw-admin global ratelimit enable --ratelimit-scope=bucket",
"radosgw-admin global ratelimit enable --ratelimit-scope bucket",
"radosgw-admin global ratelimit set --ratelimit-scope=user [--max-read-ops= NUMBER_OF_OPERATIONS ] [--max-read-bytes= NUMBER_OF_BYTES ] [--max-write-ops= NUMBER_OF_OPERATIONS ] [--max-write-bytes= NUMBER_OF_BYTES ]",
"radosgw-admin global ratelimit set --ratelimit-scope=user --max-read-ops=1024",
"radosgw-admin global ratelimit enable --ratelimit-scope=user",
"radosgw-admin global ratelimit enable --ratelimit-scope=user",
"radosgw-admin global ratelimit set --ratelimit-scope=anonymous [--max-read-ops= NUMBER_OF_OPERATIONS ] [--max-read-bytes= NUMBER_OF_BYTES ] [--max-write-ops= NUMBER_OF_OPERATIONS ] [--max-write-bytes= NUMBER_OF_BYTES ]",
"radosgw-admin global ratelimit set --ratelimit-scope=anonymous --max-read-ops=1024",
"radosgw-admin global ratelimit enable --ratelimit-scope=anonymous",
"radosgw-admin global ratelimit enable --ratelimit-scope=anonymous",
"radosgw-admin gc list",
"radosgw-admin gc list",
"ceph config set client.rgw rgw_gc_max_concurrent_io 20 ceph config set client.rgw rgw_gc_max_trim_chunk 64",
"ceph config set client.rgw rgw_lc_max_worker 7",
"ceph config set client.rgw rgw_lc_max_wp_worker 7",
"radosgw-admin user create --uid= USER_NAME --display-name=\" DISPLAY_NAME \" [--access-key ACCESS_KEY --secret-key SECRET_KEY ]",
"radosgw-admin user create --uid=test-user --display-name=\"test-user\" --access-key a21e86bce636c3aa1 --secret-key cf764951f1fdde5e { \"user_id\": \"test-user\", \"display_name\": \"test-user\", \"email\": \"\", \"suspended\": 0, \"max_buckets\": 1000, \"subusers\": [], \"keys\": [ { \"user\": \"test-user\", \"access_key\": \"a21e86bce636c3aa1\", \"secret_key\": \"cf764951f1fdde5e\" } ], \"swift_keys\": [], \"caps\": [], \"op_mask\": \"read, write, delete\", \"default_placement\": \"\", \"default_storage_class\": \"\", \"placement_tags\": [], \"bucket_quota\": { \"enabled\": false, \"check_on_raw\": false, \"max_size\": -1, \"max_size_kb\": 0, \"max_objects\": -1 }, \"user_quota\": { \"enabled\": false, \"check_on_raw\": false, \"max_size\": -1, \"max_size_kb\": 0, \"max_objects\": -1 }, \"temp_url_keys\": [], \"type\": \"rgw\", \"mfa_ids\": [] }",
"radosgw-admin zonegroup placement add --rgw-zonegroup = ZONE_GROUP_NAME --placement-id= PLACEMENT_ID --storage-class = STORAGE_CLASS_NAME --tier-type=cloud-s3",
"radosgw-admin zonegroup placement add --rgw-zonegroup=default --placement-id=default-placement --storage-class=CLOUDTIER --tier-type=cloud-s3 [ { \"key\": \"default-placement\", \"val\": { \"name\": \"default-placement\", \"tags\": [], \"storage_classes\": [ \"CLOUDTIER\", \"STANDARD\" ], \"tier_targets\": [ { \"key\": \"CLOUDTIER\", \"val\": { \"tier_type\": \"cloud-s3\", \"storage_class\": \"CLOUDTIER\", \"retain_head_object\": \"false\", \"s3\": { \"endpoint\": \"\", \"access_key\": \"\", \"secret\": \"\", \"host_style\": \"path\", \"target_storage_class\": \"\", \"target_path\": \"\", \"acl_mappings\": [], \"multipart_sync_threshold\": 33554432, \"multipart_min_part_size\": 33554432 } } } ] } } ]",
"radosgw-admin zonegroup placement modify --rgw-zonegroup ZONE_GROUP_NAME --placement-id PLACEMENT_ID --storage-class STORAGE_CLASS_NAME --tier-config=endpoint= AWS_ENDPOINT_URL , access_key= AWS_ACCESS_KEY ,secret= AWS_SECRET_KEY , target_path=\" TARGET_BUCKET_ON_AWS \", multipart_sync_threshold=44432, multipart_min_part_size=44432, retain_head_object=true region= REGION_NAME",
"radosgw-admin zonegroup placement modify --rgw-zonegroup default --placement-id default-placement --storage-class CLOUDTIER --tier-config=endpoint=http://10.0.210.010:8080, access_key=a21e86bce636c3aa2,secret=cf764951f1fdde5f, target_path=\"dfqe-bucket-01\", multipart_sync_threshold=44432, multipart_min_part_size=44432, retain_head_object=true region=us-east-1 [ { \"key\": \"default-placement\", \"val\": { \"name\": \"default-placement\", \"tags\": [], \"storage_classes\": [ \"CLOUDTIER\", \"STANDARD\", \"cold.test\", \"hot.test\" ], \"tier_targets\": [ { \"key\": \"CLOUDTIER\", \"val\": { \"tier_type\": \"cloud-s3\", \"storage_class\": \"CLOUDTIER\", \"retain_head_object\": \"true\", \"s3\": { \"endpoint\": \"http://10.0.210.010:8080\", \"access_key\": \"a21e86bce636c3aa2\", \"secret\": \"cf764951f1fdde5f\", \"region\": \"\", \"host_style\": \"path\", \"target_storage_class\": \"\", \"target_path\": \"dfqe-bucket-01\", \"acl_mappings\": [], \"multipart_sync_threshold\": 44432, \"multipart_min_part_size\": 44432 } } } ] } } ] ]",
"ceph orch restart CEPH_OBJECT_GATEWAY_SERVICE_NAME",
"ceph orch restart rgw.rgw.1 Scheduled to restart rgw.rgw.1.host03.vkfldf on host 'host03'",
"s3cmd --configure Enter new values or accept defaults in brackets with Enter. Refer to user manual for detailed description of all options. Access key and Secret key are your identifiers for Amazon S3. Leave them empty for using the env variables. Access Key: a21e86bce636c3aa2 Secret Key: cf764951f1fdde5f Default Region [US]: Use \"s3.amazonaws.com\" for S3 Endpoint and not modify it to the target Amazon S3. S3 Endpoint [s3.amazonaws.com]: 10.0.210.78:80 Use \"%(bucket)s.s3.amazonaws.com\" to the target Amazon S3. \"%(bucket)s\" and \"%(location)s\" vars can be used if the target S3 system supports dns based buckets. DNS-style bucket+hostname:port template for accessing a bucket [%(bucket)s.s3.amazonaws.com]: 10.0.210.78:80 Encryption password is used to protect your files from reading by unauthorized persons while in transfer to S3 Encryption password: Path to GPG program [/usr/bin/gpg]: When using secure HTTPS protocol all communication with Amazon S3 servers is protected from 3rd party eavesdropping. This method is slower than plain HTTP, and can only be proxied with Python 2.7 or newer Use HTTPS protocol [Yes]: No On some networks all internet access must go through a HTTP proxy. Try setting it here if you can't connect to S3 directly HTTP Proxy server name: New settings: Access Key: a21e86bce636c3aa2 Secret Key: cf764951f1fdde5f Default Region: US S3 Endpoint: 10.0.210.78:80 DNS-style bucket+hostname:port template for accessing a bucket: 10.0.210.78:80 Encryption password: Path to GPG program: /usr/bin/gpg Use HTTPS protocol: False HTTP Proxy server name: HTTP Proxy server port: 0 Test access with supplied credentials? [Y/n] Y Please wait, attempting to list all buckets Success. Your access key and secret key worked fine :-) Now verifying that encryption works Not configured. Never mind. Save settings? [y/N] y Configuration saved to '/root/.s3cfg'",
"s3cmd mb s3:// NAME_OF_THE_BUCKET_FOR_S3",
"s3cmd mb s3://awstestbucket Bucket 's3://awstestbucket/' created",
"s3cmd put FILE_NAME s3:// NAME_OF_THE_BUCKET_ON_S3",
"s3cmd put test.txt s3://awstestbucket upload: 'test.txt' -> 's3://awstestbucket/test.txt' [1 of 1] 21 of 21 100% in 1s 16.75 B/s done",
"<LifecycleConfiguration> <Rule> <ID> RULE_NAME </ID> <Filter> <Prefix></Prefix> </Filter> <Status>Enabled</Status> <Transition> <Days> DAYS </Days> <StorageClass> STORAGE_CLASS_NAME </StorageClass> </Transition> </Rule> </LifecycleConfiguration>",
"cat lc_cloud.xml <LifecycleConfiguration> <Rule> <ID>Archive all objects</ID> <Filter> <Prefix></Prefix> </Filter> <Status>Enabled</Status> <Transition> <Days>2</Days> <StorageClass>CLOUDTIER</StorageClass> </Transition> </Rule> </LifecycleConfiguration>",
"s3cmd setlifecycle FILE_NAME s3:// NAME_OF_THE_BUCKET_FOR_S3",
"s3cmd setlifecycle lc_config.xml s3://awstestbucket s3://awstestbucket/: Lifecycle Policy updated",
"cephadm shell",
"ceph orch restart CEPH_OBJECT_GATEWAY_SERVICE_NAME",
"ceph orch restart rgw.rgw.1 Scheduled to restart rgw.rgw.1.host03.vkfldf on host 'host03'",
"radosgw-admin lc list [ { \"bucket\": \":awstestbucket:552a3adb-39e0-40f6-8c84-00590ed70097.54639.1\", \"started\": \"Mon, 26 Sep 2022 18:32:07 GMT\", \"status\": \"COMPLETE\" } ]",
"[root@client ~]USD radosgw-admin bucket list [ \"awstestbucket\" ]",
"[root@host01 ~]USD aws s3api list-objects --bucket awstestbucket --endpoint=http://10.0.209.002:8080 { \"Contents\": [ { \"Key\": \"awstestbucket/test\", \"LastModified\": \"2022-08-25T16:14:23.118Z\", \"ETag\": \"\\\"378c905939cc4459d249662dfae9fd6f\\\"\", \"Size\": 29, \"StorageClass\": \"STANDARD\", \"Owner\": { \"DisplayName\": \"test-user\", \"ID\": \"test-user\" } } ] }",
"s3cmd ls s3://awstestbucket 2022-08-25 09:57 0 s3://awstestbucket/test.txt",
"s3cmd info s3://awstestbucket/test.txt s3://awstestbucket/test.txt (object): File size: 0 Last mod: Mon, 03 Aug 2022 09:57:49 GMT MIME type: text/plain Storage: CLOUDTIER MD5 sum: 991d2528bb41bb839d1a9ed74b710794 SSE: none Policy: none CORS: none ACL: test-user: FULL_CONTROL x-amz-meta-s3cmd-attrs: atime:1664790668/ctime:1664790668/gid:0/gname:root/md5:991d2528bb41bb839d1a9ed74b710794/mode:33188/mtime:1664790668/uid:0/uname:root",
"[client@client01 ~]USD aws configure AWS Access Key ID [****************6VVP]: AWS Secret Access Key [****************pXqy]: Default region name [us-east-1]: Default output format [json]:",
"[client@client01 ~]USD aws s3 ls s3://dfqe-bucket-01/awstest PRE awstestbucket/",
"[client@client01 ~]USD aws s3 cp s3://dfqe-bucket-01/awstestbucket/test.txt . download: s3://dfqe-bucket-01/awstestbucket/test.txt to ./test.txt",
"radosgw-admin user create --uid= USER_NAME --display-name=\" DISPLAY_NAME \" [--access-key ACCESS_KEY --secret-key SECRET_KEY ]",
"radosgw-admin user create --uid=test-user --display-name=\"test-user\" --access-key a21e86bce636c3aa1 --secret-key cf764951f1fdde5e { \"user_id\": \"test-user\", \"display_name\": \"test-user\", \"email\": \"\", \"suspended\": 0, \"max_buckets\": 1000, \"subusers\": [], \"keys\": [ { \"user\": \"test-user\", \"access_key\": \"a21e86bce636c3aa1\", \"secret_key\": \"cf764951f1fdde5e\" } ], \"swift_keys\": [], \"caps\": [], \"op_mask\": \"read, write, delete\", \"default_placement\": \"\", \"default_storage_class\": \"\", \"placement_tags\": [], \"bucket_quota\": { \"enabled\": false, \"check_on_raw\": false, \"max_size\": -1, \"max_size_kb\": 0, \"max_objects\": -1 }, \"user_quota\": { \"enabled\": false, \"check_on_raw\": false, \"max_size\": -1, \"max_size_kb\": 0, \"max_objects\": -1 }, \"temp_url_keys\": [], \"type\": \"rgw\", \"mfa_ids\": [] }",
"aws s3 --ca-bundle CA_PERMISSION --profile rgw --endpoint ENDPOINT_URL --region default mb s3:// BUCKET_NAME",
"[root@host01 ~]USD aws s3 --ca-bundle /etc/pki/ca-trust/source/anchors/myCA.pem --profile rgw --endpoint https://host02.example.com:8043 --region default mb s3://transition",
"radosgw-admin bucket stats --bucket transition { \"bucket\": \"transition\", \"num_shards\": 11, \"tenant\": \"\", \"zonegroup\": \"b29b0e50-1301-4330-99fc-5cdcfc349acf\", \"placement_rule\": \"default-placement\", \"explicit_placement\": { \"data_pool\": \"\", \"data_extra_pool\": \"\", \"index_pool\": \"\" },",
"[root@host01 ~]USD oc project openshift-storage [root@host01 ~]USD oc get clusterversion NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.11.6 True False 4d1h Cluster version is 4.11.6 [root@host01 ~]USD oc get storagecluster NAME AGE PHASE EXTERNAL CREATED AT VERSION ocs-storagecluster 4d Ready 2023-06-27T15:23:01Z 4.11.0",
"noobaa namespacestore create azure-blob az --account-key=' ACCOUNT_KEY ' --account-name=' ACCOUNT_NAME' --target-blob-container='_AZURE_CONTAINER_NAME '",
"[root@host01 ~]USD noobaa namespacestore create azure-blob az --account-key='iq3+6hRtt9bQ46QfHKQ0nSm2aP+tyMzdn8dBSRW4XWrFhY+1nwfqEj4hk2q66nmD85E/o5OrrUqo+AStkKwm9w==' --account-name='transitionrgw' --target-blob-container='mcgnamespace'",
"[root@host01 ~]USD noobaa bucketclass create namespace-bucketclass single aznamespace-bucket-class --resource az -n openshift-storage",
"noobaa obc create OBC_NAME --bucketclass aznamespace-bucket-class -n openshift-storage",
"[root@host01 ~]USD noobaa obc create rgwobc --bucketclass aznamespace-bucket-class -n openshift-storage",
"radosgw-admin zonegroup placement add --rgw-zonegroup = ZONE_GROUP_NAME --placement-id= PLACEMENT_ID --storage-class = STORAGE_CLASS_NAME --tier-type=cloud-s3",
"radosgw-admin zonegroup placement add --rgw-zonegroup=default --placement-id=default-placement --storage-class=AZURE --tier-type=cloud-s3 [ { \"key\": \"default-placement\", \"val\": { \"name\": \"default-placement\", \"tags\": [], \"storage_classes\": [ \"AZURE\", \"STANDARD\" ], \"tier_targets\": [ { \"key\": \"AZURE\", \"val\": { \"tier_type\": \"cloud-s3\", \"storage_class\": \"AZURE\", \"retain_head_object\": \"false\", \"s3\": { \"endpoint\": \"\", \"access_key\": \"\", \"secret\": \"\", \"host_style\": \"path\", \"target_storage_class\": \"\", \"target_path\": \"\", \"acl_mappings\": [], \"multipart_sync_threshold\": 33554432, \"multipart_min_part_size\": 33554432 } } } ] } } ]",
"radosgw-admin zonegroup placement modify --rgw-zonegroup ZONE_GROUP_NAME --placement-id PLACEMENT_ID --storage-class STORAGE_CLASS_NAME --tier-config=endpoint= ENDPOINT_URL , access_key= ACCESS_KEY ,secret= SECRET_KEY , target_path=\" TARGET_BUCKET_ON \", multipart_sync_threshold=44432, multipart_min_part_size=44432, retain_head_object=true region= REGION_NAME",
"radosgw-admin zonegroup placement modify --rgw-zonegroup default --placement-id default-placement --storage-class AZURE --tier-config=endpoint=\"https://s3-openshift-storage.apps.ocp410.0e73azopenshift.com\", access_key=a21e86bce636c3aa2,secret=cf764951f1fdde5f, target_path=\"dfqe-bucket-01\", multipart_sync_threshold=44432, multipart_min_part_size=44432, retain_head_object=true region=us-east-1 [ { \"key\": \"default-placement\", \"val\": { \"name\": \"default-placement\", \"tags\": [], \"storage_classes\": [ \"AZURE\", \"STANDARD\", \"cold.test\", \"hot.test\" ], \"tier_targets\": [ { \"key\": \"AZURE\", \"val\": { \"tier_type\": \"cloud-s3\", \"storage_class\": \"AZURE\", \"retain_head_object\": \"true\", \"s3\": { \"endpoint\": \"https://s3-openshift-storage.apps.ocp410.0e73azopenshift.com\", \"access_key\": \"a21e86bce636c3aa2\", \"secret\": \"cf764951f1fdde5f\", \"region\": \"\", \"host_style\": \"path\", \"target_storage_class\": \"\", \"target_path\": \"dfqe-bucket-01\", \"acl_mappings\": [], \"multipart_sync_threshold\": 44432, \"multipart_min_part_size\": 44432 } } } ] } } ] ]",
"ceph orch restart CEPH_OBJECT_GATEWAY_SERVICE_NAME",
"ceph orch restart client.rgw.objectgwhttps.host02.udyllp Scheduled to restart client.rgw.objectgwhttps.host02.udyllp on host 'host02",
"cat transition.json { \"Rules\": [ { \"Filter\": { \"Prefix\": \"\" }, \"Status\": \"Enabled\", \"Transitions\": [ { \"Days\": 30, \"StorageClass\": \" STORAGE_CLASS \" } ], \"ID\": \" TRANSITION_ID \" } ] }",
"[root@host01 ~]USD cat transition.json { \"Rules\": [ { \"Filter\": { \"Prefix\": \"\" }, \"Status\": \"Enabled\", \"Transitions\": [ { \"Days\": 30, \"StorageClass\": \"AZURE\" } ], \"ID\": \"Transition Objects in bucket to AZURE Blob after 30 days\" } ] }",
"aws s3api --ca-bundle CA_PERMISSION --profile rgw --endpoint ENDPOINT_URL --region default put-bucket-lifecycle-configuration --lifecycle-configuration file:// BUCKET .json --bucket BUCKET_NAME",
"[root@host01 ~]USD aws s3api --ca-bundle /etc/pki/ca-trust/source/anchors/myCA.pem --profile rgw --endpoint https://host02.example.com:8043 --region default put-bucket-lifecycle-configuration --lifecycle-configuration file://transition.json --bucket transition",
"aws s3api --ca-bundle CA_PERMISSION --profile rgw --endpoint ENDPOINT_URL --region default get-bucket-lifecycle-configuration --lifecycle-configuration file:// BUCKET .json --bucket BUCKET_NAME",
"[root@host01 ~]USD aws s3api --ca-bundle /etc/pki/ca-trust/source/anchors/myCA.pem --profile rgw --endpoint https://host02.example.com:8043 --region default get-bucket-lifecycle-configuration --bucket transition { \"Rules\": [ { \"ID\": \"Transition Objects in bucket to AZURE Blob after 30 days\", \"Prefix\": \"\", \"Status\": \"Enabled\", \"Transitions\": [ { \"Days\": 30, \"StorageClass\": \"AZURE\" } ] } ] }",
"radosgw-admin lc list [ { \"bucket\": \":transition:d9c4f708-5598-4c44-9d36-849552a08c4d.169377.1\", \"started\": \"Thu, 01 Jan 1970 00:00:00 GMT\", \"status\": \"UNINITIAL\" } ]",
"cephadm shell",
"ceph orch daemon CEPH_OBJECT_GATEWAY_DAEMON_NAME",
"ceph orch daemon restart rgw.objectgwhttps.host02.udyllp ceph orch daemon restart rgw.objectgw.host02.afwvyq ceph orch daemon restart rgw.objectgw.host05.ucpsrr",
"for i in 1 2 3 4 5 do aws s3 --ca-bundle /etc/pki/ca-trust/source/anchors/myCA.pem --profile rgw --endpoint https://host02.example.com:8043 --region default cp /etc/hosts s3://transition/transitionUSDi done",
"aws s3 --ca-bundle /etc/pki/ca-trust/source/anchors/myCA.pem --profile rgw --endpoint https://host02.example.com:8043 --region default ls s3://transition 2023-06-30 10:24:01 3847 transition1 2023-06-30 10:24:04 3847 transition2 2023-06-30 10:24:07 3847 transition3 2023-06-30 10:24:09 3847 transition4 2023-06-30 10:24:13 3847 transition5",
"rados ls -p default.rgw.buckets.data | grep transition d9c4f708-5598-4c44-9d36-849552a08c4d.169377.1_transition1 d9c4f708-5598-4c44-9d36-849552a08c4d.169377.1_transition4 d9c4f708-5598-4c44-9d36-849552a08c4d.169377.1_transition2 d9c4f708-5598-4c44-9d36-849552a08c4d.169377.1_transition3 d9c4f708-5598-4c44-9d36-849552a08c4d.169377.1_transition5",
"radosgw-admin lc process",
"radosgw-admin lc list [ { \"bucket\": \":transition:d9c4f708-5598-4c44-9d36-849552a08c4d.170017.5\", \"started\": \"Mon, 30 Jun 2023-06-30 16:52:56 GMT\", \"status\": \"COMPLETE\" } ]",
"[root@host01 ~]USD aws s3api list-objects --bucket awstestbucket --endpoint=http://10.0.209.002:8080 { \"Contents\": [ { \"Key\": \"awstestbucket/test\", \"LastModified\": \"2023-06-25T16:14:23.118Z\", \"ETag\": \"\\\"378c905939cc4459d249662dfae9fd6f\\\"\", \"Size\": 29, \"StorageClass\": \"STANDARD\", \"Owner\": { \"DisplayName\": \"test-user\", \"ID\": \"test-user\" } } ] }",
"[root@host01 ~]USD aws s3 --ca-bundle /etc/pki/ca-trust/source/anchors/myCA.pem --profile rgw --endpoint https://host02.example.com:8043 --region default ls s3://transition 2023-06-30 17:52:56 0 transition1 2023-06-30 17:51:59 0 transition2 2023-06-30 17:51:59 0 transition3 2023-06-30 17:51:58 0 transition4 2023-06-30 17:51:59 0 transition5",
"[root@host01 ~]USD aws s3api --ca-bundle /etc/pki/ca-trust/source/anchors/myCA.pem --profile rgw --endpoint https://host02.example.com:8043 --region default head-object --key transition1 --bucket transition { \"AcceptRanges\": \"bytes\", \"LastModified\": \"2023-06-31T16:52:56+00:00\", \"ContentLength\": 0, \"ETag\": \"\\\"46ecb42fd0def0e42f85922d62d06766\\\"\", \"ContentType\": \"binary/octet-stream\", \"Metadata\": {}, \"StorageClass\": \"CLOUDTIER\" }",
"radosgw-admin account create [--account-name={name}] [--account-id={id}] [--email={email}]",
"radosgw-admin account create --account-name=user1 --account-id=12345 [email protected]",
"radosgw-admin user create --uid={userid} --display-name={name} --account-id={accountid} --account-root --gen-secret --gen-access-key",
"radosgw-admin user create --uid=rootuser1 --display-name=\"Root User One\" --account-id=account123 --account-root --gen-secret --gen-access-key",
"radosgw-admin account rm --account-id={accountid}",
"radosgw-admin account rm --account-id=account123",
"radosgw-admin account stats --account-id={accountid} --sync-stats",
"{ \"account\": \"account123\", \"data_size\": 3145728000, # Total size in bytes (3 GB) \"num_objects\": 12000, # Total number of objects \"num_buckets\": 5, # Total number of buckets \"usage\": { \"total_size\": 3145728000, # Total size in bytes (3 GB) \"num_objects\": 12000 } }",
"radosgw-admin quota set --quota-scope=account --account-id={accountid} --max-size=10G radosgw-admin quota enable --quota-scope=account --account-id={accountid}",
"{ \"status\": \"OK\", \"message\": \"Quota enabled for account account123\" }",
"radosgw-admin quota set --quota-scope=bucket --account-id={accountid} --max-objects=1000000 radosgw-admin quota enable --quota-scope=bucket --account-id={accountid}",
"{ \"status\": \"OK\", \"message\": \"Quota enabled for bucket in account account123\" }",
"radosgw-admin quota set --quota-scope=account --account-id RGW12345678901234568 --max-buckets 10000 { \"id\": \"RGW12345678901234568\", \"tenant\": \"tenant1\", \"name\": \"account1\", \"email\": \"tenataccount1\", \"quota\": { \"enabled\": true, \"check_on_raw\": false, \"max_size\": 10737418240, \"max_size_kb\": 10485760, \"max_objects\": 100 }, \"bucket_quota\": { \"enabled\": false, \"check_on_raw\": false, \"max_size\": -1, \"max_size_kb\": 0, \"max_objects\": -1 }, \"max_users\": 1000, \"max_roles\": 1000, \"max_groups\": 1000, \"max_buckets\": 1000, \"max_access_keys\": 4 } radosgw-admin quota enable --quota-scope=account --account-id RGW12345678901234568 { \"id\": \"RGW12345678901234568\", \"tenant\": \"tenant1\", \"name\": \"account1\", \"email\": \"tenataccount1\", \"quota\": { \"enabled\": true, \"check_on_raw\": false, \"max_size\": 10737418240, \"max_size_kb\": 10485760, \"max_objects\": 100 }, \"bucket_quota\": { \"enabled\": false, \"check_on_raw\": false, \"max_size\": -1, \"max_size_kb\": 0, \"max_objects\": -1 }, \"max_users\": 1000, \"max_roles\": 1000, \"max_groups\": 1000, \"max_buckets\": 1000, \"max_access_keys\": 4 } radosgw-admin account get --account-id RGW12345678901234568 { \"id\": \"RGW12345678901234568\", \"tenant\": \"tenant1\", \"name\": \"account1\", \"email\": \"tenataccount1\", \"quota\": { \"enabled\": true, \"check_on_raw\": false, \"max_size\": 10737418240, \"max_size_kb\": 10485760, \"max_objects\": 100 }, \"bucket_quota\": { \"enabled\": false, \"check_on_raw\": false, \"max_size\": -1, \"max_size_kb\": 0, \"max_objects\": -1 }, \"max_users\": 1000, \"max_roles\": 1000, \"max_groups\": 1000, \"max_buckets\": 1000, \"max_access_keys\": 4 } ceph versions { \"mon\": { \"ceph version 19.1.1-63.el9cp (8fa7b56d5e9f208c4233b0a8273665087bded8ae) squid (rc)\": 3 }, \"mgr\": { \"ceph version 19.1.1-63.el9cp (8fa7b56d5e9f208c4233b0a8273665087bded8ae) squid (rc)\": 3 }, \"osd\": { \"ceph version 19.1.1-63.el9cp (8fa7b56d5e9f208c4233b0a8273665087bded8ae) squid (rc)\": 9 }, \"rgw\": { \"ceph version 19.1.1-63.el9cp (8fa7b56d5e9f208c4233b0a8273665087bded8ae) squid (rc)\": 3 }, \"overall\": { \"ceph version 19.1.1-63.el9cp (8fa7b56d5e9f208c4233b0a8273665087bded8ae) squid (rc)\": 18 } }",
"radosgw-admin user modify --uid={userid} --account-id={accountid}",
"{\"TopicConfigurations\": [{ \"Id\": \"ID1\", \"TopicArn\": \"arn:aws:sns:default::topic1\", \"Events\": [\"s3:ObjectCreated:*\"]}]}",
"{\"TopicConfigurations\": [{ \"Id\": \"ID1\", \"TopicArn\": \"arn:aws:sns:default:RGW00000000000000001:topic1\", \"Events\": [\"s3:ObjectCreated:*\"]}]}",
"radosgw-admin topic rm --topic topic1",
"radosgw-admin user modify --uid <user_ID> --account-id <Account_ID> --account-root",
"radosgw-admin user policy attach --uid <user_ID> --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess",
"radosgw-admin user modify --uid <user_ID> --account-root=0",
"radosgw-admin user create --uid= name --display-name=\" USER_NAME \"",
"radosgw-admin user create --uid=\"testuser\" --display-name=\"Jane Doe\" { \"user_id\": \"testuser\", \"display_name\": \"Jane Doe\", \"email\": \"\", \"suspended\": 0, \"max_buckets\": 1000, \"auid\": 0, \"subusers\": [], \"keys\": [ { \"user\": \"testuser\", \"access_key\": \"CEP28KDIQXBKU4M15PDC\", \"secret_key\": \"MARoio8HFc8JxhEilES3dKFVj8tV3NOOYymihTLO\" } ], \"swift_keys\": [], \"caps\": [], \"op_mask\": \"read, write, delete\", \"default_placement\": \"\", \"placement_tags\": [], \"bucket_quota\": { \"enabled\": false, \"check_on_raw\": false, \"max_size\": -1, \"max_size_kb\": 0, \"max_objects\": -1 }, \"user_quota\": { \"enabled\": false, \"check_on_raw\": false, \"max_size\": -1, \"max_size_kb\": 0, \"max_objects\": -1 }, \"temp_url_keys\": [], \"type\": \"rgw\" }",
"radosgw-admin subuser create --uid= NAME --subuser= NAME :swift --access=full",
"radosgw-admin subuser create --uid=testuser --subuser=testuser:swift --access=full { \"user_id\": \"testuser\", \"display_name\": \"First User\", \"email\": \"\", \"suspended\": 0, \"max_buckets\": 1000, \"auid\": 0, \"subusers\": [ { \"id\": \"testuser:swift\", \"permissions\": \"full-control\" } ], \"keys\": [ { \"user\": \"testuser\", \"access_key\": \"O8JDE41XMI74O185EHKD\", \"secret_key\": \"i4Au2yxG5wtr1JK01mI8kjJPM93HNAoVWOSTdJd6\" } ], \"swift_keys\": [ { \"user\": \"testuser:swift\", \"secret_key\": \"13TLtdEW7bCqgttQgPzxFxziu0AgabtOc6vM8DLA\" } ], \"caps\": [], \"op_mask\": \"read, write, delete\", \"default_placement\": \"\", \"placement_tags\": [], \"bucket_quota\": { \"enabled\": false, \"check_on_raw\": false, \"max_size\": -1, \"max_size_kb\": 0, \"max_objects\": -1 }, \"user_quota\": { \"enabled\": false, \"check_on_raw\": false, \"max_size\": -1, \"max_size_kb\": 0, \"max_objects\": -1 }, \"temp_url_keys\": [], \"type\": \"rgw\" }",
"radosgw-admin key create --subuser= NAME :swift --key-type=swift --gen-secret",
"radosgw-admin key create --subuser=testuser:swift --key-type=swift --gen-secret { \"user_id\": \"testuser\", \"display_name\": \"First User\", \"email\": \"\", \"suspended\": 0, \"max_buckets\": 1000, \"auid\": 0, \"subusers\": [ { \"id\": \"testuser:swift\", \"permissions\": \"full-control\" } ], \"keys\": [ { \"user\": \"testuser\", \"access_key\": \"O8JDE41XMI74O185EHKD\", \"secret_key\": \"i4Au2yxG5wtr1JK01mI8kjJPM93HNAoVWOSTdJd6\" } ], \"swift_keys\": [ { \"user\": \"testuser:swift\", \"secret_key\": \"a4ioT4jEP653CDcdU8p4OuhruwABBRZmyNUbnSSt\" } ], \"caps\": [], \"op_mask\": \"read, write, delete\", \"default_placement\": \"\", \"placement_tags\": [], \"bucket_quota\": { \"enabled\": false, \"check_on_raw\": false, \"max_size\": -1, \"max_size_kb\": 0, \"max_objects\": -1 }, \"user_quota\": { \"enabled\": false, \"check_on_raw\": false, \"max_size\": -1, \"max_size_kb\": 0, \"max_objects\": -1 }, \"temp_url_keys\": [], \"type\": \"rgw\" }",
"subscription-manager repos --enable=rhel-9-for-x86_64-highavailability-rpms",
"dnf install python3-boto3",
"vi s3test.py",
"import boto3 endpoint = \"\" # enter the endpoint URL along with the port \"http:// URL : PORT \" access_key = ' ACCESS ' secret_key = ' SECRET ' s3 = boto3.client( 's3', endpoint_url=endpoint, aws_access_key_id=access_key, aws_secret_access_key=secret_key ) s3.create_bucket(Bucket='my-new-bucket') response = s3.list_buckets() for bucket in response['Buckets']: print(\"{name}\\t{created}\".format( name = bucket['Name'], created = bucket['CreationDate'] ))",
"python3 s3test.py",
"my-new-bucket 2022-05-31T17:09:10.000Z",
"sudo yum install python-setuptools sudo easy_install pip sudo pip install --upgrade setuptools sudo pip install --upgrade python-swiftclient",
"swift -A http:// IP_ADDRESS : PORT /auth/1.0 -U testuser:swift -K ' SWIFT_SECRET_KEY ' list",
"swift -A http://10.10.143.116:80/auth/1.0 -U testuser:swift -K '244+fz2gSqoHwR3lYtSbIyomyPHf3i7rgSJrF/IA' list",
"my-new-bucket"
] | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html-single/object_gateway_guide/create-an-s3-user-rgw |
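The radosgw-admin entries above print the generated credentials as JSON on stdout. Purely as an illustration of chaining those steps together — a hedged sketch, not part of the documented procedure — the following Python snippet creates a user, reads the access and secret keys from the command's JSON output, and then verifies them with boto3 in the same way as the s3test.py example. The uid, display name, and endpoint values are hypothetical placeholders, and the sketch assumes radosgw-admin is available on an admin node.

```python
import json
import subprocess

import boto3

# Hypothetical placeholder values; substitute your own uid, display name, and endpoint.
UID = "testuser2"
DISPLAY_NAME = "Test User Two"
ENDPOINT = "http://rgw-host:80"  # assumed RGW endpoint; adjust to your deployment

# Create the user and capture the JSON that radosgw-admin prints to stdout.
out = subprocess.run(
    ["radosgw-admin", "user", "create",
     f"--uid={UID}", f"--display-name={DISPLAY_NAME}"],
    check=True, capture_output=True, text=True,
).stdout
user = json.loads(out)

# The first entry under "keys" holds the generated S3 credentials.
access_key = user["keys"][0]["access_key"]
secret_key = user["keys"][0]["secret_key"]

# Verify the credentials the same way the s3test.py example does.
s3 = boto3.client(
    "s3",
    endpoint_url=ENDPOINT,
    aws_access_key_id=access_key,
    aws_secret_access_key=secret_key,
)
s3.create_bucket(Bucket="my-new-bucket")
for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"], bucket["CreationDate"])
```

Parsing the JSON output rather than scraping plain text keeps such a script stable across minor formatting changes in radosgw-admin.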
10.2. SELinux and journald | 10.2. SELinux and journald In systemd, the journald daemon (also known as systemd-journal) is an alternative to the syslog utility, a system service that collects and stores logging data. It creates and maintains structured, indexed journals based on logging information received from the kernel, from user processes using the libc syslog() function, from the standard and error output of system services, or through its native API. It implicitly collects numerous metadata fields for each log message in a secure way. The systemd-journal service can be used with SELinux to increase security. SELinux controls processes by allowing them to do only what they were designed to do; sometimes even less, depending on the security goals of the policy writer. For example, SELinux prevents a compromised ntpd process from doing anything other than handle Network Time. However, because the ntpd process sends syslog messages, SELinux allows the compromised process to continue sending them. The compromised ntpd could format syslog messages to match other daemons and potentially mislead an administrator, or even worse, trick a utility that reads the syslog file into compromising the whole system. The systemd-journal daemon verifies all log messages and, among other things, adds SELinux labels to them. It is then easy to detect inconsistencies in log messages and prevent an attack of this type before it occurs. You can use the journalctl utility to query the systemd journal logs. If no command-line arguments are specified, running this utility lists the full content of the journal, starting from the oldest entries. To see all logs generated on the system, including logs for system components, run journalctl as root. If you run it as a non-root user, the output is limited to logs related to the currently logged-in user. Example 10.2. Listing Logs with journalctl It is possible to use journalctl to list all logs related to a particular SELinux label. For example, the following command lists all logs recorded under the system_u:system_r:policykit_t:s0 label: For more information about journalctl, see the journalctl(1) manual page. (A scripted version of this SELinux-label query is sketched after this entry.) | [
"~]# journalctl _SELINUX_CONTEXT=system_u:system_r:policykit_t:s0 Oct 21 10:22:42 localhost.localdomain polkitd[647]: Started polkitd version 0.112 Oct 21 10:22:44 localhost.localdomain polkitd[647]: Loading rules from directory /etc/polkit-1/rules.d Oct 21 10:22:44 localhost.localdomain polkitd[647]: Loading rules from directory /usr/share/polkit-1/rules.d Oct 21 10:22:44 localhost.localdomain polkitd[647]: Finished loading, compiling and executing 5 rules Oct 21 10:22:44 localhost.localdomain polkitd[647]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Oct 21 10:23:10 localhost polkitd[647]: Registered Authentication Agent for unix-session:c1 (system bus name :1.49, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus) Oct 21 10:23:35 localhost polkitd[647]: Unregistered Authentication Agent for unix-session:c1 (system bus name :1.80 [/usr/bin/gnome-shell --mode=classic], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.utf8)"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/selinux_users_and_administrators_guide/sec-systemd_access_control-journald |
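As a complement to Example 10.2, the following is a minimal sketch of running the same SELinux-label query from Python by invoking journalctl with JSON output. It assumes journalctl is available and that the caller has permission to read the relevant journal entries; the context string is the one used in the example above.

```python
import json
import subprocess

# SELinux context taken from Example 10.2; replace it with the label you are interested in.
CONTEXT = "system_u:system_r:policykit_t:s0"

# journalctl accepts FIELD=VALUE matches and can emit one JSON object per line.
proc = subprocess.run(
    ["journalctl", f"_SELINUX_CONTEXT={CONTEXT}", "-o", "json", "--no-pager"],
    check=True, capture_output=True, text=True,
)

for line in proc.stdout.splitlines():
    entry = json.loads(line)
    # Each entry carries the message plus metadata such as the systemd unit and PID.
    print(entry.get("_SYSTEMD_UNIT", "-"), entry.get("MESSAGE", ""))
```

Running journalctl with -o json yields one JSON object per line, which is easier to post-process than the default human-readable output.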